[PATCH 0/2] Add mru cache for inode to zone allocation mapping

These patches clean up the xfs mru code a bit and add a cache for
keeping track of which zone an inode last allocated data to. Placing
file data in the same zone helps reduce garbage collection overhead,
and with this patch we add support for per-file co-location of random
writes.

While I was initially concerned about adding overhead to the allocation
path, the cache actually reduces it, as we avoid going through the
zone allocation algorithm for every random write.

When I run a fio workload with 16 writers to different files in
parallel, bs=8k, iodepth=4, size=1G, I get these throughputs:

baseline	with_cache
774 MB/s	858 MB/s (+11%)

(averaged over three runs each on a nullblk device)

I see similar figures when benchmarking on a ZNS NVMe drive (+17%).

No code updates since the RFC:
https://www.spinics.net/lists/linux-xfs/msg98889.html

Christoph Hellwig (1):
  xfs: free the item in xfs_mru_cache_insert on failure

Hans Holmberg (1):
  xfs: add inode to zone caching for data placement

 fs/xfs/xfs_filestream.c |  15 ++----
 fs/xfs/xfs_mount.h      |   1 +
 fs/xfs/xfs_mru_cache.c  |  15 ++++--
 fs/xfs/xfs_zone_alloc.c | 109 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 126 insertions(+), 14 deletions(-)

-- 
2.34.1




