On Wed, Apr 30, 2025 at 01:42:47PM +0200, Vlastimil Babka wrote:
> On 4/30/25 01:04, Shakeel Butt wrote:
> > The per-cpu stock_lock protects the cached memcg and cached objcg and
> > their respective fields. However, there is no dependency between these
> > fields, and it is better to have fine-grained separate locks for the
> > cached memcg and cached objcg. This decoupling of locks allows us to
> > make the memcg charge cache and the objcg charge cache nmi safe
> > independently.
> >
> > At the moment, the memcg charge cache is already nmi safe, and this
> > decoupling will allow the memcg charge cache to work without disabling
> > irqs.
> >
> > Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> > ---
> >  mm/memcontrol.c | 52 +++++++++++++++++++++++++++----------------------
> >  1 file changed, 29 insertions(+), 23 deletions(-)
> >
> > @@ -1883,19 +1885,22 @@ static void drain_local_stock(struct work_struct *dummy)
> >  	struct memcg_stock_pcp *stock;
> >  	unsigned long flags;
> >
> > -	/*
> > -	 * The only protection from cpu hotplug (memcg_hotplug_cpu_dead) vs.
> > -	 * drain_stock races is that we always operate on local CPU stock
> > -	 * here with IRQ disabled
> > -	 */
> > -	local_lock_irqsave(&memcg_stock.stock_lock, flags);
> > +	if (WARN_ONCE(!in_task(), "drain in non-task context"))
> > +		return;
> >
> > +	preempt_disable();
> >  	stock = this_cpu_ptr(&memcg_stock);
> > +
> > +	local_lock_irqsave(&memcg_stock.obj_lock, flags);
> >  	drain_obj_stock(stock);
> > +	local_unlock_irqrestore(&memcg_stock.obj_lock, flags);
> > +
> > +	local_lock_irqsave(&memcg_stock.memcg_lock, flags);
> >  	drain_stock_fully(stock);
> > -	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
> > +	local_unlock_irqrestore(&memcg_stock.memcg_lock, flags);
> >
> > -	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
> > +	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
> > +	preempt_enable();
>
> This usage of preempt_disable() looks rather weird and makes RT unhappy,
> as the local lock is a mutex, so it gives you this:
>
> BUG: sleeping function called from invalid context at
> kernel/locking/spinlock_rt.c:48
>
> I know the next patch removes it again, but for bisectability purposes it
> should be avoided. Instead of preempt_disable() we can extend the local
> lock scope here?
>

Indeed, and thanks for the suggestion; will fix in v2.
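
Something along these lines, I think (untested sketch of the idea, not
what v2 will necessarily look like): drop preempt_disable() entirely,
re-read the per-cpu stock pointer after taking each lock since we can
in principle migrate between the two critical sections, and move
clear_bit() under memcg_lock so the whole function runs with a local
lock held:

static void drain_local_stock(struct work_struct *dummy)
{
	struct memcg_stock_pcp *stock;
	unsigned long flags;

	if (WARN_ONCE(!in_task(), "drain in non-task context"))
		return;

	/* Drain the objcg cache under its own lock. */
	local_lock_irqsave(&memcg_stock.obj_lock, flags);
	stock = this_cpu_ptr(&memcg_stock);
	drain_obj_stock(stock);
	local_unlock_irqrestore(&memcg_stock.obj_lock, flags);

	/*
	 * Re-read the stock pointer: without preempt_disable() we may
	 * have migrated to another CPU between the two lock sections.
	 */
	local_lock_irqsave(&memcg_stock.memcg_lock, flags);
	stock = this_cpu_ptr(&memcg_stock);
	drain_stock_fully(stock);
	/* Clear the flag while still pinned to this CPU's stock. */
	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
	local_unlock_irqrestore(&memcg_stock.memcg_lock, flags);
}

That way RT takes the local lock as a proper sleeping lock, and on !RT
each drain still runs with irqs disabled on the local CPU, same as
before.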