On Fri, May 9, 2025 at 9:54 AM Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> From: Steven Rostedt <rostedt@xxxxxxxxxxx>
>
> Instead of using the callback_mutex to protect the linked list of callbacks
> in unwind_deferred_task_work(), use SRCU instead. This gets called every
> time a task exits that has to record a stack trace that was requested.
> This can happen for many tasks on several CPUs at the same time. A mutex
> is a bottleneck and can cause a bit of contention and slow down performance.
>
> As the callbacks themselves are allowed to sleep, regular RCU can not be
> used to protect the list. Instead use SRCU, as that still allows the
> callbacks to sleep and the list can be read without needing to hold the
> callback_mutex.
>
> Link: https://lore.kernel.org/all/ca9bd83a-6c80-4ee0-a83c-224b9d60b755@xxxxxxxxxxxx/
>
> Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> Signed-off-by: Steven Rostedt (Google) <rostedt@xxxxxxxxxxx>
> ---
>  kernel/unwind/deferred.c | 33 +++++++++++++++++++++++++--------
>  1 file changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
> index 7ae0bec5b36a..5d6976ee648f 100644
> --- a/kernel/unwind/deferred.c
> +++ b/kernel/unwind/deferred.c
> @@ -13,10 +13,11 @@
>
>  #define UNWIND_MAX_ENTRIES 512
>
> -/* Guards adding to and reading the list of callbacks */
> +/* Guards adding to or removing from the list of callbacks */
>  static DEFINE_MUTEX(callback_mutex);
>  static LIST_HEAD(callbacks);
>  static unsigned long unwind_mask;
> +DEFINE_STATIC_SRCU(unwind_srcu);
>
>  /*
>   * Read the task context timestamp, if this is the first caller then
> @@ -108,6 +109,7 @@ static void unwind_deferred_task_work(struct callback_head *head)
>  	struct unwind_work *work;
>  	u64 timestamp;
>  	struct task_struct *task = current;
> +	int idx;
>
>  	if (WARN_ON_ONCE(!info->pending))
>  		return;
> @@ -133,13 +135,15 @@ static void unwind_deferred_task_work(struct callback_head *head)
>
>  	timestamp = info->timestamp;
>
> -	guard(mutex)(&callback_mutex);
> -	list_for_each_entry(work, &callbacks, list) {
> +	idx = srcu_read_lock(&unwind_srcu);

nit: you could have used guard(srcu)(&unwind_srcu) here?

> +	list_for_each_entry_srcu(work, &callbacks, list,
> +				 srcu_read_lock_held(&unwind_srcu)) {
>  		if (task->unwind_mask & (1UL << work->bit)) {
>  			work->func(work, &trace, timestamp);
>  			clear_bit(work->bit, &current->unwind_mask);
>  		}
>  	}
> +	srcu_read_unlock(&unwind_srcu, idx);
>  }
>
>  static int unwind_deferred_request_nmi(struct unwind_work *work, u64 *timestamp)

[...]
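
To expand on the guard(srcu) nit above, a rough and untested sketch of how
the tail of unwind_deferred_task_work() could look, assuming the srcu guard
defined in <linux/srcu.h> applies here the same way it does elsewhere:

	/*
	 * Rough sketch (untested): the scope-based guard removes the
	 * explicit idx bookkeeping; srcu_read_unlock() is issued
	 * automatically when the guard goes out of scope at function
	 * return.
	 */
	guard(srcu)(&unwind_srcu);

	list_for_each_entry_srcu(work, &callbacks, list,
				 srcu_read_lock_held(&unwind_srcu)) {
		if (task->unwind_mask & (1UL << work->bit)) {
			work->func(work, &trace, timestamp);
			clear_bit(work->bit, &current->unwind_mask);
		}
	}

Not a big deal either way, it just drops the local idx variable and the
explicit unlock at the end of the function.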