On 2025-05-12 17:01:55 [+0200], Vlastimil Babka wrote:
> On 5/12/25 16:56, Sebastian Andrzej Siewior wrote:
> > On 2025-04-30 20:27:15 [-0700], Alexei Starovoitov wrote:
> >> --- a/include/linux/local_lock_internal.h
> >> +++ b/include/linux/local_lock_internal.h
> >> @@ -285,4 +288,9 @@ do { \
> >>  		__local_trylock(lock); \
> >>  	})
> >>
> >> +/* migration must be disabled before calling __local_lock_is_locked */
> >> +#include "../../kernel/locking/rtmutex_common.h"
> >> +#define __local_lock_is_locked(__lock) \
> >> +	(rt_mutex_owner(&this_cpu_ptr(__lock)->lock) == current)
> >
> > So I've been looking into whether we really need rt_mutex_owner() or
> > whether rt_mutex_base_is_locked() could do the job. Judging from the
> > slub free case, rt_mutex_base_is_locked() would be just fine. The
> > alloc case, on the other hand, probably not so much. Then again, since
> > we don't accept allocations from hardirq or NMI, the "caller == owner"
> > case should never be observed. Unless something is buggy, in which
> > case it should also be caught by lockdep. Right?
>
> AFAIU my same line of thought was debunked by Alexei here:
>
> https://lore.kernel.org/all/CAADnVQLO9YX2_0wEZshHbwXoJY2-wv3OgVGvN-hgf6mK0_ipxw@xxxxxxxxxxxxxx/
>
> e.g. you could hold the lock and then, due to a kprobe or tracing in
> the slab allocator code, re-enter it.

Okay. So I assumed that re-entrance is not a thing here on PREEMPT_RT,
but thank you for correcting me. There is a difference between checking
whether the lock is locked by anyone, so that you can try a different
one just to avoid the contention (and fall back to the contended path
if there is no other way), and checking whether the lock is already
held by the current task in order to avoid recursive locking.

> > If there is another case where recursion can be observed and needs to
> > be addressed, I would prefer to move the function (+define) to
> > include/linux/rtmutex.h instead of doing this "../../" include.
> >
> >> #endif /* CONFIG_PREEMPT_RT */

Sebastian
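
P.S. To make the two cases concrete, a minimal sketch (not from the
patch; trylock_other_cpu() and lock_contended() are made-up helpers,
and lock is assumed to be a per-cpu local_trylock_t as in the patch):

	/*
	 * Case 1, contention avoidance: "is it locked by anyone?" is
	 * enough. Blocking on the lock would be correct, just slower,
	 * so rt_mutex_base_is_locked() does the job.
	 */
	if (rt_mutex_base_is_locked(&this_cpu_ptr(lock)->lock)) {
		if (!trylock_other_cpu())
			lock_contended();
	}

	/*
	 * Case 2, recursion avoidance: "is it locked by *us*?" is
	 * required. A kprobe/tracing re-entry into the allocator while
	 * current already holds the lock must not block on it, or the
	 * task deadlocks on itself. Only the owner check can tell this
	 * apart from ordinary contention by another task.
	 */
	if (rt_mutex_owner(&this_cpu_ptr(lock)->lock) == current)
		return NULL;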