I forgot to CC Tejun, so doing it now.

On Mon, May 12, 2025 at 05:56:09PM +0200, Vlastimil Babka wrote:
> On 5/10/25 01:28, Shakeel Butt wrote:
> > BPF programs can trigger memcg charged kernel allocations in nmi
> > context. However memcg charging infra for kernel memory is not
> > equipped to handle nmi context. This series adds support for kernel
> > memory charging for nmi context.
> >
> > The initial prototype tried to make memcg charging infra for kernel
> > memory re-entrant against irq and nmi. However upon realizing that
> > this_cpu_* operations are not safe on all architectures (Tejun), this
>
> I assume it was an off-list discussion?
> Could we avoid this for the architectures where these are safe, which
> should be the major ones I hope?

Yes, it was an off-list discussion. It was more about this_cpu_* ops vs
atomic_* ops: on x86 this_cpu_* ops do not need the lock prefix, so I
should prefer this_cpu_* over atomic_* for my series on objcg charging
without disabling irqs. Tejun pointed out that this_cpu_* ops are not
nmi safe on some archs and that it would be better to handle the nmi
context separately. So I am not that worried about optimizing for the
nmi context, but your next comment on the generic_atomic64_* ops is
giving me a headache.

> > series took a different approach targeting only nmi context. Since
> > the number of stats that are updated in the kernel memory charging
> > path is 3, this series added special handling of those stats in nmi
> > context rather than making all >100 memcg stats nmi safe.
>
> Hmm so from patches 2 and 3 I see this relies on atomic64_add().
> But AFAIU lib/atomic64.c has the generic fallback implementation for
> architectures that don't know better, and that would be using the
> "void generic_atomic64_##op" macro, which AFAICS is doing:
>
> 	local_irq_save(flags);					\
> 	arch_spin_lock(lock);					\
> 	v->counter c_op a;					\
> 	arch_spin_unlock(lock);					\
> 	local_irq_restore(flags);				\
>
> so in case of a nmi hitting after the spin_lock this can still
> deadlock?
>
> Hm or is there some assumption that we only use these paths when
> already in_nmi() and then another nmi can't come in that context?
>
> But even then, flush_nmi_stats() in patch 1 isn't done in_nmi() and
> uses atomic64_xchg() which in the generic_atomic64_xchg()
> implementation also has the irq_save+spin_lock. So can't we deadlock
> there?

I was actually assuming that atomic_* ops are safe against nmis on all
archs. I looked at the atomic_* ops in include/asm-generic/atomic.h:
they use arch_cmpxchg() for CONFIG_SMP, so archs with a working cmpxchg
should be fine against nmi. I am not sure why the atomic64_* fallbacks
are not using arch_cmpxchg() instead. I will dig more.

I also have the followup series on objcg charging without disabling
irqs almost ready. I will send it out as rfc soon.

Thanks a lot for the awesome and insightful comments.

Shakeel
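
To make the deadlock scenario above concrete, here is a minimal sketch
assuming the generic lib/atomic64.c fallback (i.e. an arch without
native 64-bit atomics); demo_stat and both function names are made up
for illustration, not code from the series:

	#include <linux/atomic.h>

	static atomic64_t demo_stat;

	void task_context_update(void)
	{
		/*
		 * On the generic fallback, atomic64_add() takes a
		 * hashed arch spinlock with irqs disabled. Disabling
		 * irqs does not mask nmis, so an nmi can still fire
		 * while the lock is held.
		 */
		atomic64_add(1, &demo_stat);
	}

	void nmi_context_update(void)
	{
		/*
		 * If this nmi interrupted task_context_update()
		 * between arch_spin_lock() and arch_spin_unlock(), it
		 * spins on the same lock forever: deadlock.
		 */
		atomic64_add(1, &demo_stat);
	}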
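
For contrast, here is roughly why the cmpxchg-based 32-bit generic
atomics are nmi safe: there is no lock to deadlock on. This is a
simplified (not verbatim) rendering of the ATOMIC_OP() macro in
include/asm-generic/atomic.h for the add case; demo_atomic_add is a
made-up name:

	static inline void demo_atomic_add(int i, atomic_t *v)
	{
		int c, old;

		c = v->counter;
		/*
		 * Lock-free retry loop: if an nmi (or another cpu)
		 * modifies v->counter between the read and the
		 * cmpxchg, the cmpxchg fails and we retry with the
		 * fresh value. Nothing is held across the race
		 * window, so an nmi can at worst cost a retry, never
		 * a deadlock.
		 */
		while ((old = arch_cmpxchg(&v->counter, c, c + i)) != c)
			c = old;
	}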