Trimming down the cc list (and oh, what a cc list it was!!!) to x86
folks.

On 5/2/25 08:20, Peter Zijlstra wrote:
> So where IPI is:
>
>  - IPI all CPUs
>  - local invalidate
>  - wait for completion

To drill down on this a bit, the IPI is actually something like:

	for_each_cpu(IPI_cpumask)
		per_cpu_ptr(cpu)->csd = 1;
	send_ipi(IPI_cpumask)
	// local invalidate
	// wait for completion

... and the send_ipi() can be a for loop too if it's in clustered
mode. So there is at least _a_ for loop in this case in practice,
because each CPU has a per-cpu structure to tell it what to do in
the IPI.

> This then becomes:
>
>  for ()
>   - RAR some CPUs
>   - wait for completion

Were you thinking that the "some CPUs" was limited to 64 because of
the size of the payload table and action vectors?

Maybe I was thinking of arranging the data structures differently. I
was figuring that we could use one entry in the payload table per IPI
operation, *not* one per CPU. Something like:

	e = alloc_payload_entry();
	payload_table[e] = payload;
	for_each_cpu(RAR_cpumask)
		per_cpu_ptr(cpu)->action_vector[e] = RAR_PENDING;
	send_ipi(RAR_cpumask)
	// local invalidate
	// wait for completion
	free_table_entry(e);

In that silly scheme, the allocation can fail. But in that case it's
easy to just fall back to IPIs. I _think_ that works, but it's all in
my head and maybe I'm missing something silly.

I think the mechanism you were thinking of was something like this
(diff'd from what I had above):

	- e = alloc_payload_entry();
	+ e = smp_processor_id();
	  payload_table[e] = payload;
	  for_each_cpu(RAR_cpumask)
		  per_cpu_ptr(cpu)->action_vector[e] = RAR_PENDING;
	  send_ipi(RAR_cpumask)
	  // local invalidate
	  // wait for completion
	- free_table_entry(e);

That beats my scheme because it doesn't have any allocation, free, or
locking overhead and can't fail to allocate. But it would be limited
to <=64 CPUs because the payload table and action vector are only 64
entries long.