Re: [PATCH 4/4] memcg: make objcg charging nmi safe

On Tue, May 13, 2025 at 03:25:31PM -0700, Alexei Starovoitov wrote:
> On Fri, May 9, 2025 at 4:29 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> >
> > To enable memcg charged kernel memory allocations from nmi context,
> > consume_obj_stock() and refill_obj_stock() need to be nmi safe. With
> > a simple in_nmi() check, take the slow path of objcg charging,
> > which handles the charging and memcg stats updates correctly for
> > the nmi context.
> >
> > Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> > ---
> >  mm/memcontrol.c | 14 +++++++++++++-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index bba549c1f18c..6cfa3550f300 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2965,6 +2965,9 @@ static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
> >         unsigned long flags;
> >         bool ret = false;
> >
> > +       if (unlikely(in_nmi()))
> > +               return ret;
> > +
> >         local_lock_irqsave(&obj_stock.lock, flags);
> >
> >         stock = this_cpu_ptr(&obj_stock);
> > @@ -3068,6 +3071,15 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
> >         unsigned long flags;
> >         unsigned int nr_pages = 0;
> >
> > +       if (unlikely(in_nmi())) {
> > +               if (pgdat)
> > +                       __mod_objcg_mlstate(objcg, pgdat, idx, nr_bytes);
> > +               nr_pages = nr_bytes >> PAGE_SHIFT;
> > +               nr_bytes = nr_bytes & (PAGE_SIZE - 1);
> > +               atomic_add(nr_bytes, &objcg->nr_charged_bytes);
> > +               goto out;
> > +       }
> 
> 
> Now I see what I did incorrectly in my series, and how this patch 4
> combined with patch 3 does the accounting properly.
> 
> The only issue, here and in the other patches, is that in_nmi() is
> an incomplete condition to check for.
> Reentrancy is also possible through a kprobe or tracepoint.
> On PREEMPT_RT we will be fully preemptible, but
> obj_stock.lock may already be taken by the current task.
> To fix it, you need to use local_lock_is_locked(&obj_stock.lock)
> instead of in_nmi(), or use local_trylock_irqsave(&obj_stock.lock).
> 
> local_trylock_irqsave() is cleaner and works today,
> while local_lock_is_locked() hasn't landed yet, but if we go the
> is_locked route we can decouple the reentrant obj_stock operation
> from the normal one.
> For example, the if (!local_lock_is_locked(&obj_stock.lock)) check
> can be done much higher up the stack, in
> __memcg_slab_post_alloc_hook(), the way I did in my series,
> and if the lock is held it can do atomic_add()-style charging.
> That way refill_obj_stock() and friends won't need to change.

For now I am going with the trylock path (rough sketch below) and
will later check whether your suggested is_locked() approach makes
things better.
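
Concretely, the trylock conversion of consume_obj_stock() would look
roughly like the sketch below. This assumes obj_stock.lock is
converted to local_trylock_t (which local_trylock_irqsave()
requires) and elides the unchanged body; refill_obj_stock() would
get the same treatment, with the trylock failure replacing the
in_nmi() check as the trigger for the slow path.

static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
			      struct pglist_data *pgdat, enum node_stat_item idx)
{
	unsigned long flags;
	bool ret = false;

	/*
	 * A failed trylock covers nmi context as well as kprobe and
	 * tracepoint reentrancy, and the PREEMPT_RT case where the
	 * current task already holds the lock.
	 */
	if (!local_trylock_irqsave(&obj_stock.lock, flags))
		return ret;	/* caller falls back to the slow path */

	/* ... consume nr_bytes from the per-cpu stock and update the
	 * stats via pgdat/idx, exactly as before ... */

	local_unlock_irqrestore(&obj_stock.lock, flags);
	return ret;
}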



