Re: [PATCH v2 6/6] slab: Introduce kmalloc_nolock() and kfree_nolock().

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Thu, Jul 10, 2025 at 2:36 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
> > +     if (unlikely(!allow_spin)) {
> > +             folio = (struct folio *)alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,
> > +                                                               node, order);
> > +     } else if (node == NUMA_NO_NODE)
> >               folio = (struct folio *)alloc_frozen_pages(flags, order);
> >       else
> >               folio = (struct folio *)__alloc_frozen_pages(flags, order, node, NULL);
>
> Nit: should use { } either for everything or nothing (seems your new branch
> would work without them)

leftover from v1. will fix.

> >                       stat(s, ALLOC_NODE_MISMATCH);
> > @@ -3730,7 +3762,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> >        * PFMEMALLOC but right now, we are losing the pfmemalloc
> >        * information when the page leaves the per-cpu allocator
> >        */
> > -     if (unlikely(!pfmemalloc_match(slab, gfpflags)))
> > +     if (unlikely(!pfmemalloc_match(slab, gfpflags) && allow_spin))
> >               goto deactivate_slab;
> >
> >       /* must check again c->slab in case we got preempted and it changed */
> > @@ -3803,7 +3835,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> >               slub_set_percpu_partial(c, slab);
> >
> >               if (likely(node_match(slab, node) &&
> > -                        pfmemalloc_match(slab, gfpflags))) {
> > +                        pfmemalloc_match(slab, gfpflags)) ||
> > +                 /*
> > +                  * Reentrant slub cannot take locks necessary
> > +                  * for __put_partials(), hence downgrade to any node
> > +                  */
> > +                 !allow_spin) {
>
> Uh this seems rather ugly, I'd move the comment above everything. Also it's
> not "downgrade" as when you assign NUMA_NO_NODE earlier, I'd say "ignore the
> preference".
> Note that it would be bad to ignore with __GFP_THISNODE but then it's not
> allowed for kmalloc_nolock() so that's fine.

Yes. All correct. Will reword.

> > @@ -3911,6 +3953,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> >               void *flush_freelist = c->freelist;
> >               struct slab *flush_slab = c->slab;
> >
> > +             if (unlikely(!allow_spin))
> > +                     /*
> > +                      * Reentrant slub cannot take locks
> > +                      * necessary for deactivate_slab()
> > +                      */
> > +                     return NULL;
>
> Hm but this is leaking the slab we allocated and have in the "slab"
> variable, we need to free it back in that case.
>
> >               c->slab = NULL;
> >               c->freelist = NULL;
> >               c->tid = next_tid(c->tid);
>
> > @@ -4593,10 +4792,31 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
> >       barrier();
> >
> >       if (unlikely(slab != c->slab)) {
> > -             __slab_free(s, slab, head, tail, cnt, addr);
> > +             /* cnt == 0 signals that it's called from kfree_nolock() */
> > +             if (unlikely(!cnt)) {
> > +                     /*
> > +                      * __slab_free() can locklessly cmpxchg16 into a slab,
> > +                      * but then it might need to take spin_lock or local_lock
> > +                      * in put_cpu_partial() for further processing.
> > +                      * Avoid the complexity and simply add to a deferred list.
> > +                      */
> > +                     defer_free(head);
> > +             } else {
> > +                     __slab_free(s, slab, head, tail, cnt, addr);
> > +             }
> >               return;
> >       }
> >
> > +     if (unlikely(!cnt)) {
> > +             if ((in_nmi() || !USE_LOCKLESS_FAST_PATH()) &&
> > +                 local_lock_is_locked(&s->cpu_slab->lock)) {
> > +                     defer_free(head);
> > +                     return;
> > +             }
> > +             cnt = 1;
>
> Hmm we might end up doing a "goto redo" later and then do the wrong thing above?

Great catch. Will fix. That's two serious bugs.
That's my penalty for reviewing other people code 99% of the time
and little time to code myself.





[Index of Archives]     [Linux Samsung SoC]     [Linux Rockchip SoC]     [Linux Actions SoC]     [Linux for Synopsys ARC Processors]     [Linux NFS]     [Linux NILFS]     [Linux USB Devel]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]


  Powered by Linux