On Wed, Jul 09, 2025 at 07:24:41PM +0200, Vitaly Wool wrote:
> Reimplement k[v]realloc_node() to be able to set node and
> alignment should a user need to do so. In order to do that while
> retaining the maximal backward compatibility, add
> k[v]realloc_node_align() functions and redefine the rest of API
> using these new ones.
>
> While doing that, we also keep the number of _noprof variants to a
> minimum, which implies some changes to the existing users of older
> _noprof functions, that basically being bcachefs.
>
> With that change we also provide the ability for the Rust part of
> the kernel to set node and alignment in its K[v]xxx
> [re]allocations.
>
> Signed-off-by: Vitaly Wool <vitaly.wool@xxxxxxxxxxx>
> ---
>  fs/bcachefs/darray.c   |  2 +-
>  fs/bcachefs/util.h     |  2 +-
>  include/linux/bpfptr.h |  2 +-
>  include/linux/slab.h   | 38 +++++++++++++++----------
>  lib/rhashtable.c       |  4 +--
>  mm/slub.c              | 64 +++++++++++++++++++++++++++++-------------
>  6 files changed, 72 insertions(+), 40 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index c4b64821e680..6fad4cdea6c4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4845,7 +4845,7 @@ void kfree(const void *object)
>  EXPORT_SYMBOL(kfree);
>
>  static __always_inline __realloc_size(2) void *
> -__do_krealloc(const void *p, size_t new_size, gfp_t flags)
> +__do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags, int nid)
>  {
>         void *ret;
>         size_t ks = 0;
> @@ -4859,6 +4859,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>         if (!kasan_check_byte(p))
>                 return NULL;
>
> +       /* refuse to proceed if alignment is bigger than what kmalloc() provides */
> +       if (!IS_ALIGNED((unsigned long)p, align) || new_size < align)
> +               return NULL;

Hmm, but what happens if `p` is aligned to `align` but the new object is not?
For example, what will happen if we allocate an object with size=64, align=64
and then do krealloc with size=96, align=64... Or am I missing something?
(A sketch of what I mean is at the bottom of this mail.)

> +       /*
> +        * If reallocation is not necessary (e. g. the new size is less
> +        * than the current allocated size), the current allocation will be
> +        * preserved unless __GFP_THISNODE is set. In the latter case a new
> +        * allocation on the requested node will be attempted.
> +        */
> +       if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
> +           nid != page_to_nid(virt_to_page(p)))
> +               goto alloc_new;
> +
>         if (is_kfence_address(p)) {
>                 ks = orig_size = kfence_ksize(p);
>         } else {

--
Cheers,
Harry / Hyeonggon
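
To make that scenario concrete, here is a rough, untested sketch. The
krealloc_node_align() name comes from the cover letter, and its
(p, new_size, align, flags, nid) argument order is only my guess based on
__do_krealloc() in the hunk above, so adjust to the actual prototype:

#include <linux/slab.h>
#include <linux/numa.h>

/* hypothetical demo, not part of the patch */
static void krealloc_align_demo(void)
{
	/* old object comes from kmalloc-64, so it is 64-byte aligned */
	void *p = kmalloc(64, GFP_KERNEL);
	void *q;

	if (!p)
		return;

	/*
	 * IS_ALIGNED(p, 64) holds and new_size (96) >= align (64), so the
	 * new check passes. But 96 bytes no longer fit in the old object,
	 * so a fresh allocation has to be made, presumably from a
	 * kmalloc-96 slab, and that object is not guaranteed to be
	 * 64-byte aligned.
	 */
	q = krealloc_node_align(p, 96, 64, GFP_KERNEL, NUMA_NO_NODE);

	/* on failure krealloc keeps the original object, so free that */
	kfree(q ? q : p);
}

If I'm reading the hunk right, it seems the alignment would also have to be
checked (or enforced) for the newly allocated object, not only for the
incoming pointer.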