On Mon, Jun 09, 2025 at 04:44:10PM -0700, Andrew Morton wrote:
> On Mon, 9 Jun 2025 15:56:08 -0700 Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
>
> > BPF programs can run in nmi context and may trigger memcg charged memory
> > allocation in such context. Recently Linux added support for nmi safe
> > page allocation along with memcg charging of such allocations. However
> > the kmalloc/slab support and corresponding memcg charging is still
> > lacking.
> >
> > To provide nmi safe memcg charging for kmalloc/slab allocations, we
> > need nmi safe memcg stats, and for that we need an nmi safe
> > css_rstat_updated(), which adds the given cgroup state whose stats are
> > updated into the per-cpu per-ss update tree. This series aims to make
> > css_rstat_updated() nmi safe.
> >
> > This series makes css_rstat_updated() nmi safe by using per-cpu
> > lockless lists whose node is embedded in individual struct
> > cgroup_subsys_state and whose per-cpu head is placed in struct
> > cgroup_subsys. For rstat users without a cgroup_subsys, a global
> > per-cpu lockless list head is created. The main challenge in using
> > lockless lists in this scenario was the potential for multiple
> > inserters to use the same lockless node of a cgroup_subsys_state,
> > which differs from traditional users of lockless lists.
> >
> > The race between multiple inserters using potentially the same
> > lockless node is resolved by letting one of them succeed in resetting
> > the lockless node; the winner then gets to insert the lockless node
> > into the corresponding lockless list.
>
> And what happens with the losers?

The losers can continue their normal work without worrying about this
specific insertion; we only need one successful insertion. In addition,
this is contention between process context, softirq, hardirq and nmi on
the same cpu for the same cgroup, which should be very unlikely.