Re: [PATCH 15/25] genirq: Allow per-cpu interrupt sharing for non-overlapping affinities

On Mon, 08 Sep 2025 17:31:17 +0100,
Marc Zyngier <maz@xxxxxxxxxx> wrote:
> 
> Interrupt sharing for percpu-devid interrupts is forbidden, and
> for good reasons. These are interrupts generated *from* a CPU and
> handled by itself (timer, for example). Nobody in their right mind
> would put two devices on the same pin (and if they have, they get to
> keep the pieces...).
> 
> But this also prevents more benign cases, where devices are connected
> to groups of CPUs, and for which the affinities are not overlapping.
> Effectively, the only thing they share is the interrupt number, and
> nothing else.
> 
> Let's tweak the definition of IRQF_SHARED applied to percpu_devid
> interrupts to allow this particular case. This results in extra
> validation at the point of the interrupt being setup and freed,
> as well as a tiny bit of extra complexity for interrupts at handling
> time (to pick the correct irqaction).
> 
> Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> ---
>  kernel/irq/chip.c   |  8 ++++--
>  kernel/irq/manage.c | 67 +++++++++++++++++++++++++++++++++++++--------
>  2 files changed, 61 insertions(+), 14 deletions(-)
> 
> diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
> index 0d0276378c707..af90dd440d5ee 100644
> --- a/kernel/irq/chip.c
> +++ b/kernel/irq/chip.c
> @@ -897,8 +897,9 @@ void handle_percpu_irq(struct irq_desc *desc)
>  void handle_percpu_devid_irq(struct irq_desc *desc)
>  {
>  	struct irq_chip *chip = irq_desc_get_chip(desc);
> -	struct irqaction *action = desc->action;
>  	unsigned int irq = irq_desc_get_irq(desc);
> +	unsigned int cpu = smp_processor_id();
> +	struct irqaction *action;
>  	irqreturn_t res;
>  
>  	/*
> @@ -910,12 +911,15 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
>  	if (chip->irq_ack)
>  		chip->irq_ack(&desc->irq_data);
>  
> +	for (action = desc->action; action; action = action->next)
> +		if (cpumask_test_cpu(cpu, action->affinity))
> +			break;
> +
>  	if (likely(action)) {
>  		trace_irq_handler_entry(irq, action);
>  		res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
>  		trace_irq_handler_exit(irq, action, res);
>  	} else {
> -		unsigned int cpu = smp_processor_id();
>  		bool enabled = cpumask_test_cpu(cpu, desc->percpu_enabled);
>  
>  		if (enabled)

As Will points out off-list, the above lacks similar handling for
percpu_devid NMIs, leading to NMIs only being handled for the first
affinity group.

It's easy enough to move the above to common code and share it with
handle_percpu_devid_fasteoi_nmi(), but at this point there is hardly
any difference with handle_percpu_devid_irq().

Any objection to simply killing the NMI version?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
