Re: [PATCH v2 6/6] slab: Introduce kmalloc_nolock() and kfree_nolock().

On 7/11/25 09:36, Harry Yoo wrote:
> On Tue, Jul 08, 2025 at 06:53:03PM -0700, Alexei Starovoitov wrote:
> 
> By adding it to the lockless list, it's overwriting freed objects,
> and that's not always safe.
> 
> Looking at calculate_sizes():
> 
>         if (((flags & SLAB_TYPESAFE_BY_RCU) && !args->use_freeptr_offset) ||
>             (flags & SLAB_POISON) || s->ctor ||
>             ((flags & SLAB_RED_ZONE) &&
>              (s->object_size < sizeof(void *) || slub_debug_orig_size(s)))) {
>                 /*
>                  * Relocate free pointer after the object if it is not
>                  * permitted to overwrite the first word of the object on
>                  * kmem_cache_free.
>                  *
>                  * This is the case if we do RCU, have a constructor or
>                  * destructor, are poisoning the objects, or are
>                  * redzoning an object smaller than sizeof(void *) or are
>                  * redzoning an object with slub_debug_orig_size() enabled,
>                  * in which case the right redzone may be extended.
>                  *
>                  * The assumption that s->offset >= s->inuse means free
>                  * pointer is outside of the object is used in the
>                  * freeptr_outside_object() function. If that is no
>                  * longer true, the function needs to be modified.
>                  */
>                 s->offset = size;
>                 size += sizeof(void *);
> 
> Only the sizeof(void *) bytes at object + s->offset are always safe to overwrite.

Great point! Agreed.
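
FWIW, the word that is safe to overwrite is exactly the one the freelist
accessors use. Roughly what mm/slub.c does today (simplified, leaving out
the CONFIG_SLAB_FREELIST_HARDENED encoding and KASAN tag handling), just
to illustrate the invariant:

        static inline void set_freepointer(struct kmem_cache *s, void *object,
                                           void *fp)
        {
                /* the one word that is guaranteed reusable on free */
                unsigned long freeptr_addr = (unsigned long)object + s->offset;

                *(void **)freeptr_addr = fp;
        }

So a deferred-free list may only borrow that word, never unconditionally
the first word of the object.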

> So either 1) teach defer_free() that it needs to use s->offset for each
> object, instead of zero (and that the list can have objects from
> different caches), or 2) introduce per-cache per-CPU lockless lists?

1) should be feasible. s->offset is available when queueing the object,
and free_deferred_objects() already obtains "s" as well, so it's trivial
to subtract s->offset back. Thus 2) would be unnecessary overhead.
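
Something like this completely untested sketch, with the per-cpu list
name made up and the final free path glossed over:

        static DEFINE_PER_CPU(struct llist_head, defer_free_objects);

        static void defer_free(struct kmem_cache *s, void *object)
        {
                /*
                 * object + s->offset is the only word that is always safe
                 * to overwrite, and struct llist_node is a single pointer,
                 * so it fits there.
                 */
                struct llist_node *node = object + s->offset;

                llist_add(node, this_cpu_ptr(&defer_free_objects));
        }

        static void free_deferred_objects(void)
        {
                struct llist_head *head = this_cpu_ptr(&defer_free_objects);
                struct llist_node *pos, *t;

                llist_for_each_safe(pos, t, llist_del_all(head)) {
                        struct slab *slab = virt_to_slab((void *)pos);
                        struct kmem_cache *s = slab->slab_cache;
                        /* undo the offset to recover the object pointer */
                        void *object = (void *)pos - s->offset;

                        /* or whatever free path is appropriate here */
                        __slab_free(s, slab, object, object, 1, _THIS_IP_);
                }
        }

This also copes with objects from different caches on the same list,
since "s" is recovered per object via virt_to_slab().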




