On Tue, 09 Sep 2025 10:45:36 +0100,
Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> 
> On Mon, Sep 08 2025 at 17:31, Marc Zyngier wrote:
> > @@ -2564,13 +2566,14 @@ int request_percpu_nmi(unsigned int irq, irq_handler_t handler,
> >  	    !irq_supports_nmi(desc))
> >  		return -EINVAL;
> >  
> > -	/* The line cannot already be NMI */
> > -	if (irq_is_nmi(desc))
> > +	/* The line cannot be NMI already if the new request covers all CPUs */
> > +	if (irq_is_nmi(desc) &&
> > +	    (!affinity || cpumask_equal(affinity, cpu_possible_mask)))
> >  		return -EINVAL;
> 
> This check looks odd. What makes sure that the affinities do not
> overlap?

The following patch in the series does, at the point of adding the new
irqaction to the list. This is modelled after the current handling of
shared interrupts.

	M.

-- 
Without deviation from the norm, progress is not possible.
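[Editor's illustration] The two-part scheme discussed above (reject a full-coverage request on a line that is already NMI, and check for affinity overlap when the new irqaction is added) can be sketched in plain C. This is a hypothetical userspace model, not the kernel code: affinities are modelled as 64-bit masks rather than struct cpumask, `ALL_CPUS` stands in for cpu_possible_mask, and `request_nmi()`/`struct nmi_line` are invented names.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model: one bit per CPU; ALL_CPUS plays the role of
 * cpu_possible_mask in the quoted patch. */
#define ALL_CPUS UINT64_MAX

struct nmi_line {
	bool is_nmi;       /* line already carries an NMI action */
	uint64_t claimed;  /* union of affinities already requested */
};

/* Rough equivalent of the check in the quoted hunk: a request that
 * covers every CPU (no affinity, or affinity == all possible CPUs)
 * cannot share a line that is already NMI. */
static bool full_coverage_conflict(const struct nmi_line *line,
				   uint64_t affinity)
{
	return line->is_nmi && (affinity == 0 || affinity == ALL_CPUS);
}

/* The overlap check the follow-up patch is said to perform when the
 * new irqaction is added to the list: affinities must be disjoint. */
static bool affinities_overlap(const struct nmi_line *line,
			       uint64_t affinity)
{
	return (line->claimed & affinity) != 0;
}

static int request_nmi(struct nmi_line *line, uint64_t affinity)
{
	if (full_coverage_conflict(line, affinity))
		return -1;
	if (line->is_nmi && affinities_overlap(line, affinity))
		return -1;
	line->is_nmi = true;
	line->claimed |= (affinity == 0) ? ALL_CPUS : affinity;
	return 0;
}
```

This mirrors the handling of shared interrupts in the sense that the conflict is only detectable against the actions already on the line, so it has to be checked at list-insertion time rather than up front.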