Re: [DISCUSSION] Revisiting Slab Movable Objects

On Wed, Apr 30, 2025 at 3:11 PM Harry Yoo <harry.yoo@xxxxxxxxxx> wrote:
> On Mon, Apr 28, 2025 at 05:31:35PM +0200, Jann Horn wrote:
> > On Fri, Apr 25, 2025 at 1:09 PM Harry Yoo <harry.yoo@xxxxxxxxxx> wrote:
> > > On Tue, Apr 22, 2025 at 07:54:08AM +1000, Dave Chinner wrote:
> > > > On Mon, Apr 21, 2025 at 10:47:39PM +0900, Harry Yoo wrote:
> > > > > Hi folks,
> > > > >
> > > > > As a long term project, I'm starting to look into resurrecting
> > > > > Slab Movable Objects. The goal is to make certain types of slab memory
> > > > > movable and thus enable targeted reclamation, migration, and
> > > > > defragmentation.
> > > > >
> > > > > The main purpose of this posting is to briefly review what's been tried
> > > > > in the past, ask people why prior efforts have stalled (due to lack of
> > > > > time or insufficient justification for additional complexity?),
> > > > > and discuss what's feasible today.
> > > > >
> > > > > Please add anyone I may have missed to Cc. :)
> > > >
> > > > Adding -fsdevel because dentry/inode cache discussion needs to be
> > > > visible to all the fs/VFS developers.
> > > >
> > > > I'm going to cut straight to the chase here, but I'll leave the rest
> > > > of the original email quoted below for -fsdevel readers.
> > > >
> > > > > Previous Work on Slab Movable Objects
> > > > > =====================================
> > > >
> > > > <snip>
> > > >
> > > > Without including any sort of viable proposal for dentry/inode
> > > > relocation (i.e. the showstopper for past attempts), what is the
> > > > point of trying to resurrect this?
> > >
> > > Migrating slabs still makes sense for other objects such as xarray / maple
> > > tree nodes, and VMAs.
> >
> > Do we have examples of how much memory is actually wasted on
> > sparsely-used slabs, and which slabs this happens in, from some real
> > workloads?
>
> Workloads that use a large amount of reclaimable slab memory (inode,
> dentry, etc.) and trigger reclamation can observe this problem.
>
> On my laptop, I can reproduce the problem by running the 'updatedb'
> command, which touches many files, and then triggering reclamation by
> running programs that consume a large amount of memory. As slab memory
> is reclaimed, it becomes sparsely populated (because slab memory is not
> reclaimed folio by folio).
>
> During reclamation, the total slab memory utilization drops from 95% to 50%.
> For very sparsely populated caches, the cache utilization is between
> 12% and 33%. (ext4_inode_cache, radix_tree_node, dentry, trace_event_file,
> and some kmalloc caches on my machine).
>
> At the time the OOM killer is invoked, about 50% of slab memory is
> wasted due to sparsely populated slabs, which is about 236 MiB on my laptop.
> I would say it's a sufficiently big problem to solve.
>
> I wonder how much worse this problem would be on large file servers,
> but I don't run such servers :-)
>
> > If sparsely-used slabs are a sufficiently big problem, maybe another
> > big hammer we have is to use smaller slab pages, or something along
> > those lines? Though of course a straightforward implementation of that
> > would probably have negative effects on the performance of SLUB
> > fastpaths, and depending on object size it might waste more memory on
> > padding.
>
> So it'd be something like preferring lower orders in calculate_order()
> while keeping the fractional waste reasonable.
>
> One problem could be making n->list_lock contention much worse
> on larger machines as you need to grab more slabs from the list?

Maybe. I imagine using batched operations could help, such that the
amount of managed memory that is transferred per locking operation
stays the same...
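
(For anyone who wants to eyeball the waste side of that trade-off: here
is a rough userspace model, assuming 4 KiB base pages and a few made-up
object sizes. It only mirrors the kind of arithmetic calculate_order()
weighs and ignores per-object metadata/alignment; it is not the actual
kernel code.)

# Rough model of per-order fractional waste for a given object size,
# assuming 4 KiB base pages. Illustrative only; real SLUB sizes include
# metadata and alignment, and calculate_order() weighs more factors.
PAGE_SIZE = 4096

def fractional_waste(object_size, order):
    slab_bytes = PAGE_SIZE << order
    objects = slab_bytes // object_size
    leftover = slab_bytes - objects * object_size
    return objects, leftover / slab_bytes

for size in (96, 192, 512, 704):       # made-up example object sizes
    for order in range(4):
        objs, waste = fractional_waste(size, order)
        print(f"size={size:4d} order={order} objects={objs:3d} waste={waste:5.1%}")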

> > (An adventurous idea would be to try to align kmem_cache::size such
> > that objects start at some subpage boundaries of SLUB folios, and then
> > figure out a way to shatter SLUB folios into smaller folios at runtime
> > while they contain objects... but getting the SLUB locking right for
> > that without slowing down the fastpath for freeing an object would
> > probably be a large pain.)
>
> You can't make virt_to_slab() work if you shatter a slab folio
> into smaller ones?

Yeah, I think that would be hard. We could maybe avoid the
virt_to_slab() on the active-slab fastpath, and maybe there is some
kind of RCU-transition scheme that could be used on the path for
non-active slabs (a bit similarly to how percpu refcounts transition
to atomic mode, with a transition period where objects are allowed to
still go on the freelist of the former head page)...

> A more general question: will either shattering or allocating
> smaller slabs help free more memory anyway? It likely depends on
> the spatial pattern of how the objects are reclaimed and remain
> populated within a slab?

Probably, yeah.

As a crude thought experiment, if you (somewhat pessimistically?)
assume that the spatial pattern is "we first allocate a lot of
objects, then for each object we roll a random number and free it with
a 90% probability", and you have something like a kmalloc-512 slab
(normal order 2, which fits 32 objects), then the probability that an
entire order-2 page will be empty would be
pow(0.9, 32) ~= 3.4%
while the probability that an individual order-0 page is empty would be
pow(0.9, 8) ~= 43%
There could be patterns that are worse, like "we preserve exactly
every fourth object"; though SLUB's freelist randomization (if
CONFIG_SLAB_FREELIST_RANDOM is enabled) would probably transform that
into a different pattern, so that it's not actually a sequential
pattern where every fourth object is allocated.
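
(For completeness, the two probabilities above, computed from the toy
model as stated: each object is freed independently with probability
0.9, and a page can only be returned once every object on it is free.)

# Toy model for the thought experiment above.
p_freed = 0.9            # each object is freed with 90% probability
objs_order2 = 32         # kmalloc-512 objects in an order-2 slab
objs_order0 = 8          # kmalloc-512 objects in a single 4 KiB page

print(f"P(whole order-2 slab empty)  = {p_freed ** objs_order2:.1%}")   # ~3.4%
print(f"P(single order-0 page empty) = {p_freed ** objs_order0:.1%}")   # ~43%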

In case you want to do more detailed experiments with this: FYI, I
have a branch "slub-binary-snapshot" at https://github.com/thejh/linux
with a draft patch that provides a debugfs API for getting a binary
dump of SLUB allocations (I wrote that patch for another project):
https://github.com/thejh/linux/commit/685944dc69fd21e92bf110713b491d5c050328af
- maybe with some changes it would be useful for analyzing SLUB
fragmentation from userspace.

But IDK if that's a good way to experiment with this, or if it'd be
easier to directly analyze fragmentation in debugfs code in SLUB.
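
(A cruder alternative that needs no patch: overall per-cache utilization,
though not the per-folio spatial pattern, can be estimated from
/proc/slabinfo. A quick sketch, assuming the usual slabinfo 2.1 format
and root access; with SLUB the reported counts are approximate, but good
enough for a rough picture.)

# Per-cache object utilization from /proc/slabinfo (run as root).
# Shows how full each cache is overall, not how objects are spread
# across individual slab folios.
with open("/proc/slabinfo") as f:
    lines = f.readlines()[2:]          # skip version and header lines

for line in lines:
    fields = line.split()
    name, active, total, objsize = fields[0], int(fields[1]), int(fields[2]), int(fields[3])
    if total == 0:
        continue
    util = active / total
    if util < 0.5:                     # arbitrary cut-off: show sparse caches only
        wasted_kib = (total - active) * objsize / 1024
        print(f"{name:30s} {util:6.1%} used, ~{wasted_kib:.0f} KiB held by free slots")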




