On Sun, Apr 20, 2025 at 02:47:02AM -0700, Guenter Roeck wrote:
> On Tue, Mar 25, 2025 at 09:10:49AM +0000, Hans Holmberg wrote:
> > Presently we start garbage collection late - when we start running
> > out of free zones to backfill max_open_zones. This is a reasonable
> > default as it minimizes write amplification. The longer we wait,
> > the more blocks are invalidated, and reclaim costs less in terms
> > of blocks to relocate.
> >
> > Starting this late, however, introduces a risk of GC being
> > outcompeted by user writes. If GC can't keep up, user writes will be
> > forced to wait for free zones, with high tail latencies as a result.
> >
> > This is not a problem under normal circumstances, but if
> > fragmentation is bad and user write pressure is high (multiple
> > full-throttle writers), we will "bottom out" of free zones.
> >
> > To mitigate this, introduce a zonegc_low_space tunable that lets the
> > user specify a percentage of the unused space that GC should keep
> > available for writing. A high value will reclaim more of the space
> > occupied by unused blocks, creating a larger buffer against write
> > bursts.
> >
> > This comes at the cost of increased write amplification. To
> > illustrate this using a sample workload, setting zonegc_low_space to
> > 60% avoids high (500ms) max latencies while increasing write
> > amplification by 15%.
> >
> ...
> >  bool
> >  xfs_zoned_need_gc(
> >  	struct xfs_mount	*mp)
> >  {
> > +	s64			available, free;
> > +
...
> > +
> > +	free = xfs_estimate_freecounter(mp, XC_FREE_RTEXTENTS);
> > +	if (available < mult_frac(free, mp->m_zonegc_low_space, 100))
> > +		return true;
> > +
>
> With some 32-bit builds (parisc, openrisc so far):
>
> Error log:
> ERROR: modpost: "__divdi3" [fs/xfs/xfs.ko] undefined!
> ERROR: modpost: "__umoddi3" [fs/xfs/xfs.ko] undefined!
> ERROR: modpost: "__moddi3" [fs/xfs/xfs.ko] undefined!
> ERROR: modpost: "__udivdi3" [fs/xfs/xfs.ko] undefined!

I opened a discussion about this:
https://lore.kernel.org/lkml/20250419115157.567249-1-cem@xxxxxxxxxx/
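
FWIW, the undefined symbols come from mult_frac() itself: it expands to
open-coded 64-bit '/' and '%' on its s64 arguments, which 32-bit targets
lower to the libgcc __divdi3/__moddi3 family that the kernel does not
link against. As a sketch only (not necessarily what is proposed in the
discussion linked above), an equivalent check could be routed through
the <linux/math64.h> helpers; mult_frac_s64() below is a hypothetical
local helper, not an existing kernel API:

	#include <linux/math64.h>

	/*
	 * Overflow-safe x * n / d, mirroring mult_frac()'s
	 * (x/d)*n + ((x%d)*n)/d expansion, but going through
	 * div_s64_rem()/div_s64() so that 32-bit builds use the
	 * kernel's div64 helpers instead of libgcc's __divdi3/__moddi3.
	 */
	static inline s64 mult_frac_s64(s64 x, s32 n, s32 d)
	{
		s32 rem;
		s64 quot = div_s64_rem(x, d, &rem);	/* x / d and x % d */

		return quot * n + div_s64((s64)rem * n, d);
	}

with the call site then becoming:

	if (available < mult_frac_s64(free, mp->m_zonegc_low_space, 100))
		return true;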