On Tue, Apr 29, 2025 at 04:04:25PM -0700, Shakeel Butt wrote:
> The consume_stock() does not need to check gfp_mask for spinning and can
> simply trylock the local lock to decide to proceed or fail. No need to
> spin at all for local lock.
>
> Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> ---
>  mm/memcontrol.c | 20 +++++++-------------
>  1 file changed, 7 insertions(+), 13 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 650fe4314c39..40d0838d88bc 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1804,16 +1804,14 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
>   * consume_stock: Try to consume stocked charge on this cpu.
>   * @memcg: memcg to consume from.
>   * @nr_pages: how many pages to charge.
> - * @gfp_mask: allocation mask.
>   *
> - * The charges will only happen if @memcg matches the current cpu's memcg
> - * stock, and at least @nr_pages are available in that stock. Failure to
> - * service an allocation will refill the stock.
> + * Consume the cached charge if enough nr_pages are present otherwise return
> + * failure. Also return failure for charge request larger than
> + * MEMCG_CHARGE_BATCH or if the local lock is already taken.
>   *
>   * returns true if successful, false otherwise.
>   */
> -static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
> -			  gfp_t gfp_mask)
> +static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>  {
>  	struct memcg_stock_pcp *stock;
>  	uint8_t stock_pages;
> @@ -1821,12 +1819,8 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
>  	bool ret = false;
>  	int i;
>
> -	if (nr_pages > MEMCG_CHARGE_BATCH)
> -		return ret;
> -
> -	if (gfpflags_allow_spinning(gfp_mask))
> -		local_lock_irqsave(&memcg_stock.stock_lock, flags);
> -	else if (!local_trylock_irqsave(&memcg_stock.stock_lock, flags))
> +	if (nr_pages > MEMCG_CHARGE_BATCH ||
> +	    !local_trylock_irqsave(&memcg_stock.stock_lock, flags))

I don't think it's a good idea. spin_trylock() will fail often enough on
PREEMPT_RT. Even during a normal boot I see preemption between tasks, and
they contend on the same CPU for the same local_lock == spin_lock. Making
them take the slow path is a significant behavior change that needs to be
carefully considered.

Also please cc bpf@vger in the future for this kind of change.
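
To make the behavioral delta concrete, here is a small userspace sketch, not
kernel code: a pthread mutex stands in for the PREEMPT_RT local_lock (which
is a per-CPU sleeping lock there), and the "holder" thread plays the part of
a task preempted while holding it. With the pre-patch gfpflags_allow_spinning()
branch, a spinning-allowed caller waits and still takes the fast path; with
the trylock-only version, the same contention punts it to the slow path. The
thread names and timing are illustrative only.

/*
 * Build with: cc -pthread trylock_demo.c -o trylock_demo
 * Userspace analogy of blocking lock vs. trylock under contention.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t stock_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for a task preempted while holding the per-CPU lock. */
static void *holder(void *arg)
{
	pthread_mutex_lock(&stock_lock);
	sleep(1);
	pthread_mutex_unlock(&stock_lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, holder, NULL);
	usleep(100 * 1000);	/* give the holder time to grab the lock */

	/* Patched behavior: trylock fails, caller falls back to the slow path. */
	if (pthread_mutex_trylock(&stock_lock) != 0) {
		printf("trylock failed -> would take the slow path\n");
	} else {
		printf("trylock succeeded -> fast path\n");
		pthread_mutex_unlock(&stock_lock);
	}

	/* Pre-patch behavior for spinning-allowed contexts: wait, then fast path. */
	pthread_mutex_lock(&stock_lock);
	printf("blocking lock acquired -> fast path after waiting\n");
	pthread_mutex_unlock(&stock_lock);

	pthread_join(t, NULL);
	return 0;
}

The point of the sketch is only the divergence under contention: the trylock
caller gives up immediately, while the blocking caller is delayed but still
ends up on the fast path.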