Re: [PATCH v2 3/6] locking/local_lock: Introduce local_lock_lockdep_start/end()

On Mon, Jul 14, 2025 at 11:33 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 7/14/25 19:52, Alexei Starovoitov wrote:
> > On Mon, Jul 14, 2025 at 4:06 AM Sebastian Andrzej Siewior
> > <bigeasy@xxxxxxxxxxxxx> wrote:
> >>
> >> On 2025-07-11 19:19:26 [-0700], Alexei Starovoitov wrote:
> >> > > If there is no parent check then we could do "normal lock" on both
> >> > > sides.
> >> >
> >> > How would ___slab_alloc() know whether there was a parent check or not?
> >> >
> >> > imo keeping local_lock_irqsave() as-is is cleaner,
> >> > since if there is no parent check lockdep will rightfully complain.
> >>
> >> what about this:
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 7e2ffe1d46c6c..3520d1c25c205 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -3693,6 +3693,34 @@ static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
> >>         return freelist;
> >>  }
> >>
> >> +static void local_lock_cpu_slab(struct kmem_cache *s, const gfp_t gfp_flags,
> >> +                               unsigned long *flags)
> >> +{
> >> +       bool allow_spin = gfpflags_allow_spinning(gfp_flags);
> >> +
> >> +       /*
> >> +        * ___slab_alloc()'s caller is supposed to check if kmem_cache::kmem_cache_cpu::lock
> >> +        * can be acquired without a deadlock before invoking the function.
> >> +        *
> >> +        * On PREEMPT_RT an invocation is not possible from IRQ-off or
> >> +        * preempt-disabled context. The lock will always be acquired and,
> >> +        * if needed, it will block and sleep until the lock is available.
> >> +        *
> >> +        * On !PREEMPT_RT allocations from any context but NMI are safe.
> >> +        * The lock is always acquired with disabled interrupts, meaning it
> >> +        * is always possible to acquire it.
> >> +        * In NMI context we need to check whether the lock is already
> >> +        * acquired; if it is not, it is safe to take it. The trylock
> >> +        * semantics tell lockdep that we don't spin. The BUG_ON() will not
> >> +        * trigger as long as it is safe to acquire the lock.
> >> +        */
> >> +       if (!IS_ENABLED(CONFIG_PREEMPT_RT) && !allow_spin)
> >> +               BUG_ON(!local_trylock_irqsave(&s->cpu_slab->lock, *flags));
> >> +       else
> >> +               local_lock_irqsave(&s->cpu_slab->lock, *flags);
> >> +}
> >
> > the patch misses these two:
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 36779519b02c..2f30b85fbf68 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3260,7 +3260,7 @@ static void put_cpu_partial(struct kmem_cache
> > *s, struct slab *slab, int drain)
> >         unsigned long flags;
> >         int slabs = 0;
> >
> > -       local_lock_irqsave(&s->cpu_slab->lock, flags);
> > +       local_lock_cpu_slab(s, 0, &flags);
> >
> >         oldslab = this_cpu_read(s->cpu_slab->partial);
> >
> > @@ -4889,8 +4889,9 @@ static __always_inline void do_slab_free(struct
> > kmem_cache *s,
> >                         goto redo;
> >                 }
> >         } else {
> > +               unsigned long flags;
> >                 /* Update the free list under the local lock */
> > -               local_lock(&s->cpu_slab->lock);
> > +               local_lock_cpu_slab(s, 0, &flags);
> >                 c = this_cpu_ptr(s->cpu_slab);
> >                 if (unlikely(slab != c->slab)) {
> >                         local_unlock(&s->cpu_slab->lock);
> >
> > I realized that the latter one was missing local_lock_lockdep_start/end()
> > in my patch as well, but that's secondary.
> >
> > So with the above it works on !RT,
> > but on RT lockdep complains, as I explained earlier.
> >
> > With yours and above hunks applied here is full lockdep splat:
> >
> > [   39.819636] ============================================
> > [   39.819638] WARNING: possible recursive locking detected
> > [   39.819641] 6.16.0-rc5-00342-gc8aca7837440-dirty #54 Tainted: G           O
> > [   39.819645] --------------------------------------------
> > [   39.819646] page_alloc_kthr/2306 is trying to acquire lock:
> > [   39.819650] ff110001f5cbea88 ((&c->lock)){+.+.}-{3:3}, at:
> > ___slab_alloc+0xb7/0xec0
> > [   39.819667]
> > [   39.819667] but task is already holding lock:
> > [   39.819668] ff110001f5cbfe88 ((&c->lock)){+.+.}-{3:3}, at:
> > ___slab_alloc+0xb7/0xec0
> > [   39.819677]
> > [   39.819677] other info that might help us debug this:
> > [   39.819678]  Possible unsafe locking scenario:
> > [   39.819678]
> > [   39.819679]        CPU0
> > [   39.819680]        ----
> > [   39.819681]   lock((&c->lock));
> > [   39.819684]   lock((&c->lock));
> > [   39.819687]
> > [   39.819687]  *** DEADLOCK ***
> > [   39.819687]
> > [   39.819687]  May be due to missing lock nesting notation
> > [   39.819687]
> > [   39.819689] 2 locks held by page_alloc_kthr/2306:
> > [   39.819691]  #0: ff110001f5cbfe88 ((&c->lock)){+.+.}-{3:3}, at:
> > ___slab_alloc+0xb7/0xec0
> > [   39.819700]  #1: ffffffff8588f3a0 (rcu_read_lock){....}-{1:3}, at:
> > rt_spin_lock+0x197/0x250
> > [   39.819710]
> > [   39.819710] stack backtrace:
> > [   39.819714] CPU: 1 UID: 0 PID: 2306 Comm: page_alloc_kthr Tainted:
> > G           O        6.16.0-rc5-00342-gc8aca7837440-dirty #54
> > PREEMPT_RT
> > [   39.819721] Tainted: [O]=OOT_MODULE
> > [   39.819723] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
> > BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> > [   39.819726] Call Trace:
> > [   39.819729]  <TASK>
> > [   39.819734]  dump_stack_lvl+0x5b/0x80
> > [   39.819740]  print_deadlock_bug.cold+0xbd/0xca
> > [   39.819747]  __lock_acquire+0x12ad/0x2590
> > [   39.819753]  ? __lock_acquire+0x42b/0x2590
> > [   39.819758]  lock_acquire+0x133/0x2d0
> > [   39.819763]  ? ___slab_alloc+0xb7/0xec0
> > [   39.819769]  ? try_to_take_rt_mutex+0x624/0xfc0
> > [   39.819773]  ? __lock_acquire+0x42b/0x2590
> > [   39.819778]  rt_spin_lock+0x6f/0x250
>
> But why are we here in ___slab_alloc, trying to take the lock...
>
> > [   39.819783]  ? ___slab_alloc+0xb7/0xec0
> > [   39.819788]  ? rtlock_slowlock_locked+0x5c60/0x5c60
> > [   39.819792]  ? rtlock_slowlock_locked+0xc3/0x5c60
> > [   39.819798]  ___slab_alloc+0xb7/0xec0
> > [   39.819803]  ? __lock_acquire+0x42b/0x2590
> > [   39.819809]  ? my_debug_callback+0x20e/0x390 [bpf_testmod]
> > [   39.819826]  ? __lock_acquire+0x42b/0x2590
> > [   39.819830]  ? rt_read_unlock+0x2f0/0x2f0
> > [   39.819835]  ? my_debug_callback+0x20e/0x390 [bpf_testmod]
> > [   39.819844]  ? kmalloc_nolock_noprof+0x15a/0x430
> > [   39.819849]  kmalloc_nolock_noprof+0x15a/0x430
>
> Whereas with patch 6/6 applied, __slab_alloc() should have bailed out via
>
>         if (unlikely(!gfpflags_allow_spinning(gfpflags))) {
> +               if (local_lock_is_locked(&s->cpu_slab->lock)) {
> +                       /*
> +                        * EBUSY is an internal signal to kmalloc_nolock() to
> +                        * retry a different bucket. It's not propagated
> +                        * to the caller.
> +                        */
> +                       p = ERR_PTR(-EBUSY);
> +                       goto out;
> +               }
>
> So it doesn't seem to me like a lack of lockdep trickery, but rather that
> we reached something we should not have, because the avoidance based on
> local_lock_is_locked() above didn't work properly? At least if I read the
> splat and backtrace correctly, it doesn't suggest a theoretical scenario
> but that we really tried to lock something we already had locked.

It's not theoretical. Such slab re-entrance can happen with
a tracepoint:
slab -> some tracepoint -> bpf -> slab

I simulate it with a stress test:
+extern void (*debug_callback)(void);
+#define local_unlock_irqrestore(lock, flags)                   \
+       do {                    \
+               if (debug_callback) debug_callback(); \
+               __local_unlock_irqrestore(lock, flags); \
+       } while (0)

and debug_callback() calls kmalloc_nolock(random_size) without any bpf
involved, to simplify testing.
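
The callback is roughly this (a minimal sketch, not the exact test module
code; the kmalloc_nolock()/kfree_nolock() prototypes are assumed from this
series, and get_random_u32_below() stands in for "random_size"):

	static void my_debug_callback(void)
	{
		/* random size -> random kmalloc bucket */
		size_t size = 16 << get_random_u32_below(6);
		/* must never spin or sleep, whatever context we're in */
		void *p = kmalloc_nolock(size, 0, NUMA_NO_NODE);

		if (p)
			kfree_nolock(p);
	}

Since the hook runs before __local_unlock_irqrestore(), the current bucket's
lock is still held at that point, which is exactly what exercises the
local_lock_is_locked() bail-out and the retry of a different bucket.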

> > [   39.819857]  my_debug_callback+0x20e/0x390 [bpf_testmod]
>
> What exactly did you instrument here?
>
> > [   39.819867]  ? page_alloc_kthread+0x320/0x320 [bpf_testmod]
> > [   39.819875]  ? lock_is_held_type+0x85/0xe0
> > [   39.819881]  ___slab_alloc+0x256/0xec0
>
> And here we took the lock originally?

Yes, but they are truly different local_locks of different
kmalloc buckets, and local_lock_is_locked() is working correctly.

See in the splat:

> > [   39.819646] page_alloc_kthr/2306 is trying to acquire lock:
> > [   39.819650] ff110001f5cbea88 ((&c->lock)){+.+.}-{3:3}, at:
> > ___slab_alloc+0xb7/0xec0
> > [   39.819667]
> > [   39.819667] but task is already holding lock:
> > [   39.819668] ff110001f5cbfe88 ((&c->lock)){+.+.}-{3:3}, at:
> > ___slab_alloc+0xb7/0xec0

The addresses of the locks are different and they belong to different
kmalloc buckets, but lockdep keys its recursion check on the lock class
rather than the lock instance, so it cannot tell them apart without an
explicit local_lock_lockdep_start().
That is the same thing I'm trying to explain in the commit log.
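
To illustrate, the nesting that trips lockdep looks roughly like this (a
sketch only; s1/s2/flags are illustrative, and the exact bracketing follows
my reading of patch 3/6, not verbatim kernel code):

	/* outer allocation, e.g. kmalloc-64: instance ff...fe88 */
	local_lock_irqsave(&s1->cpu_slab->lock, flags);
	...
		/* re-entered via tracepoint -> bpf: kmalloc-96, ff...ea88 */
		if (local_lock_is_locked(&s2->cpu_slab->lock))
			return ERR_PTR(-EBUSY); /* retry another bucket */
		/*
		 * Different instance of the same class: silence the false
		 * "possible recursive locking" report.
		 */
		local_lock_lockdep_start(&s2->cpu_slab->lock);
		local_lock_irqsave(&s2->cpu_slab->lock, flags2);
		local_lock_lockdep_end(&s2->cpu_slab->lock);

Every kmem_cache_cpu::lock shares the single class "(&c->lock)", so without
the explicit annotation lockdep sees two acquisitions of one class; hence the
"missing lock nesting notation" hint in the splat.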