Re: [PATCH v2 3/6] locking/local_lock: Introduce local_lock_lockdep_start/end()

On Tue, Jul 15, 2025 at 07:48:39PM +0200, Vlastimil Babka wrote:
> On 7/15/25 19:29, Alexei Starovoitov wrote:
> > On Tue, Jul 15, 2025 at 08:56:21AM +0200, Vlastimil Babka wrote:
> >> > the addresses of the locks are different and they're different
> >> > kmalloc buckets, but lockdep cannot understand this without
> >> > explicit local_lock_lockdep_start().
> >> > The same thing I'm trying to explain in the commit log.
> >> 
> >> Thanks for the explanation and sorry for being so dense.
> >> Maybe lockdep's lock classes can be used here somehow instead of having to
> >> teach lockdep completely new tricks, but I don't know enough about those to
> >> know for sure.
> > 
> > I tried that with a separate lock_key for each local_trylock_t
> > and it sort-of kinda works for 16 cpus, but doesn't scale
> > when the number of cpus is large.
> > There is no good way to pick LOCKDEP_BITS.
> > 
> > It can be made optional on PREEMPT_RT only
> > and used only for the kmalloc buckets that kmalloc_nolock() is using,
> > but it still feels less clean than
> > local_lock_lockdep_start/end()
> > since it makes lockdep work harder.
> > 
> > Better ideas?
> 
> I was thinking of something like a class for normal context and a different
> class for kmalloc_nolock() context (determined by allow_spinning) but it's
> quite vague as I don't understand lockdep well enough.

lockdep is not context sensitive. Any lock can either be 'lock'ed or 'trylock'ed.
One cannot tell lockdep to use a different class when a lock is 'lock'ed
from a different context (like special gfp flags),
but good news... it turns out lockdep understands LD_LOCK_PERCPU, which was
added specifically for local_lock.
So there is no need to register a lock_key for each cpu.
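For reference (and to be clear, this is not the diff mentioned below),
this is roughly what __local_lock_init() in include/linux/local_lock_internal.h
already does today; quoted from memory, so details may differ between
kernel versions:

#define __local_lock_init(lock)						\
do {									\
	/*								\
	 * One static class key per lock definition, shared by all	\
	 * cpus.  LD_LOCK_PERCPU tells lockdep the lock is per-cpu,	\
	 * so a separate lock_key per cpu is not needed.		\
	 */								\
	static struct lock_class_key __key;				\
									\
	debug_check_no_locks_freed((void *)lock, sizeof(*lock));	\
	lockdep_init_map_type(&(lock)->dep_map, #lock, &__key, 0,	\
			      LD_WAIT_CONFIG, LD_WAIT_INV,		\
			      LD_LOCK_PERCPU);				\
	local_lock_debug_init(lock);					\
} while (0)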
The following diff addresses the false positive on PREEMPT_RT.
This is definitely cleaner than the local_lock_lockdep_start/end() trickery.
I'm thinking of wrapping it in ifdef CONFIG_PREEMPT_RT and adding it to the respin.
As long as we stick to local_trylock_irqsave() for !PREEMPT_RT
and local_lock_irqsave() for PREEMPT_RT, all cases should be covered.

For !PREEMPT_RT there will be no issue with the
"inconsistent {INITIAL USE} -> {IN-NMI} usage"
case, since we will be doing local_trylock_irqsave().
And for PREEMPT_RT there is no NMI or hardirq in this path.

Meaning that what you were suggesting earlier:
> if (unlikely(!local_trylock_irqsave(&s->cpu_slab->lock, *flags)))
>        local_lock_irqsave(&s->cpu_slab->lock, *flags);
isn't an option.
For !RT we can only use local_trylock_irqsave() without a fallback,
otherwise we will be back to square one re: lockdep false positives.

So I'll be going with Sebastian's approach:
+       if (!IS_ENABLED(CONFIG_PREEMPT_RT) && !allow_spin)
+               BUG_ON(!local_trylock_irqsave(&s->cpu_slab->lock, *flags));
+       else
+               local_lock_irqsave(&s->cpu_slab->lock, *flags);
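FWIW, wrapped into a helper it would look roughly like below. This is just
a sketch to show the shape of it; the helper name and the way allow_spin
and flags get passed in are made up here, not taken from the actual patch:

static inline void slub_local_lock_cpu_slab(struct kmem_cache *s,
					    bool allow_spin,
					    unsigned long *flags)
{
	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && !allow_spin) {
		/*
		 * !RT and a caller that cannot spin (kmalloc_nolock()):
		 * use trylock so lockdep doesn't flag IN-NMI usage.
		 * The trylock is expected to succeed here, so treat
		 * failure as a bug.
		 */
		BUG_ON(!local_trylock_irqsave(&s->cpu_slab->lock, *flags));
	} else {
		/* PREEMPT_RT, or a caller that is allowed to spin. */
		local_lock_irqsave(&s->cpu_slab->lock, *flags);
	}
}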

