On Thu, Jul 03, 2025 at 08:43:01AM +0200, Hannes Reinecke wrote:
> >  drivers/scsi/fnic/fnic_isr.c              | 7 +++++--
> >  drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    | 1 +
> >  drivers/scsi/megaraid/megaraid_sas_base.c | 5 ++++-
> >  drivers/scsi/mpi3mr/mpi3mr_fw.c           | 6 +++++-
> >  drivers/scsi/mpt3sas/mpt3sas_base.c       | 5 ++++-
> >  drivers/scsi/pm8001/pm8001_init.c         | 1 +
> >  drivers/scsi/qla2xxx/qla_isr.c            | 1 +
> >  drivers/scsi/smartpqi/smartpqi_init.c     | 7 +++++--
> >  8 files changed, 26 insertions(+), 7 deletions(-)
> >
> All of these drivers are not aware of CPU hotplug, and as such
> will not be notified when the number of CPUs changes.
> But you use 'blk_mq_online_queue_affinity()' for all of these
> drivers.
> Wouldn't 'blk_mq_possible_queue_affinity()' be a better choice here
> to insulate against CPU hotplug effects?
>
> Also some drivers which are using irq affinity (eg aacraid, lpfc) are
> missing from these conversions. Why?

I've updated both drivers to use pci_alloc_irq_vectors_affinity() with
the PCI_IRQ_AFFINITY flag. But then I saw this:

  dafeaf2c03e7 ("scsi: aacraid: Stop using PCI_IRQ_AFFINITY")

So we need to be careful here.

In the case of lpfc (and qla2xxx), the nvme-fabrics core needs to be
updated too (it gets out of sync with the number of allocated queues).
I already have patches for this, but I'd say we finish this series
first before moving on to the next set of patches.

Thus I decided to drop all the driver updates except for the drivers
which are already using pci_alloc_irq_vectors_affinity() with
PCI_IRQ_AFFINITY. Those drivers support managed IRQs and should
therefore be ready for this feature. For the rest of the drivers, I'd
rather update them one by one so we don't introduce regressions
(e.g. in aacraid).
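
For reference, the conversion pattern in question looks roughly like
the sketch below. This is only illustrative: the "my_hba" structure,
field names, and vector counts are made up for the example; only
pci_alloc_irq_vectors_affinity(), struct irq_affinity, and the
PCI_IRQ_MSIX/PCI_IRQ_AFFINITY flags are the real kernel API.

```c
/*
 * Illustrative sketch: allocate MSI-X vectors with automatic affinity
 * spreading, the pattern used by drivers relying on PCI_IRQ_AFFINITY.
 * "struct my_hba" and its fields are hypothetical.
 */
static int my_hba_setup_irqs(struct pci_dev *pdev, struct my_hba *hba)
{
	struct irq_affinity desc = {
		.pre_vectors = 1,	/* e.g. one vector reserved for admin events */
	};
	int nvec;

	/* Core spreads the post-pre_vectors interrupts across the CPUs */
	nvec = pci_alloc_irq_vectors_affinity(pdev, 2, hba->max_queues + 1,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &desc);
	if (nvec < 0)
		return nvec;

	hba->nr_queues = nvec - desc.pre_vectors;
	return 0;
}
```

With managed IRQs set up this way, the block layer can derive the
queue mapping from the vectors' affinity masks instead of the driver
maintaining its own CPU-to-queue table.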