On Fri, Jul 11, 2025 at 12:30:19PM +0200, Vlastimil Babka wrote:
>
> and
>
> static DEFINE_PER_CPU(struct llist_head, defer_deactivate_slabs);
>
> Should work. Also deactivate_slab() should be the correct operation for both
> a slab from partial list and a newly allocated one.
> But oops, where do we store all the parameters for deactivate_slab()? We can
> probably reuse the union with "struct list_head slab_list" for queueing.
> The kmem_cache pointer can be simply taken from struct slab, it's already there.
> But the separate flush_freelist pointer? Maybe take advantage of list_head
> being two pointers and struct llist_node just one pointer, so what we need
> will still fit?
>
> Otherwise we could do the first two phases of deactivate_slab() immediately
> and only defer the third phase where the freelists are already merged and
> there's no freelist pointer to handle anymore. But if it's not necessary,
> let's not complicate.
>
> Also should the kmem_cache_destroy() path now get a barrier to flush all
> pending irq_work? Does it exist?

Thanks a lot everyone for the great feedback.
Here is what I have so far that addresses the comments.

The only thing I still struggle with is how to properly test the
"if (unlikely(c->slab))" condition in retry_load_slab.
I couldn't trigger it no matter what I tried, so I manually unit-tested
the defer_deactivate_slab() bits with hacks.

Will fold and respin next week.

--
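
For reference, a rough sketch of the quoted slab_list union idea (the llnode
field name and the helper body are illustrative only, not the actual diff):
the two-pointer "struct list_head slab_list" slot holds a one-pointer
llist_node plus the flush_freelist pointer, and the kmem_cache comes from
slab->slab_cache.

struct slab {
        /* ... */
        struct kmem_cache *slab_cache;          /* already there */
        union {
                struct list_head slab_list;     /* two pointers */
                struct {
                        struct llist_node llnode;   /* one pointer */
                        void *flush_freelist;       /* fits in the second */
                };
        };
        /* ... rest of struct slab ... */
};

static DEFINE_PER_CPU(struct llist_head, defer_deactivate_slabs);

/* Caller is expected to have irqs disabled, so the per-cpu access is safe. */
static void defer_deactivate_slab(struct slab *slab, void *flush_freelist)
{
        slab->flush_freelist = flush_freelist;
        llist_add(&slab->llnode, this_cpu_ptr(&defer_deactivate_slabs));
        /*
         * Then kick a per-cpu irq_work whose handler drains the llist and
         * calls deactivate_slab(slab->slab_cache, slab, slab->flush_freelist)
         * for each entry.
         */
}

On the kmem_cache_destroy() question: irq_work_sync() does exist and waits for
a given irq_work to finish, so the destroy path could loop over the per-cpu
works with it (and drain whatever is already queued on the llists) before
tearing the cache down.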