Re: [PATCH v2 00/12] mm/iomap: add granular dirty and writeback accounting

Hello!

On Fri 29-08-25 16:39:30, Joanne Koong wrote:
> This patchset adds granular dirty and writeback stats accounting for large
> folios.
> 
> The dirty page balancing logic uses these stats to determine things like
> whether the ratelimit has been exceeded, the frequency with which pages need
> to be written back, if dirtying should be throttled, etc. Currently for large
> folios, if any byte in the folio is dirtied or written back, all the bytes in
> the folio are accounted as such.
> 
> In particular, there are four places where dirty and writeback stats get
> incremented and decremented as pages get dirtied and written back:
> a) folio dirtying (filemap_dirty_folio() -> ... -> folio_account_dirtied())
>    - increments NR_FILE_DIRTY, NR_ZONE_WRITE_PENDING, WB_RECLAIMABLE,
>      current->nr_dirtied
> 
> b) writing back a mapping (writeback_iter() -> ... ->
> folio_clear_dirty_for_io())
>    - decrements NR_FILE_DIRTY, NR_ZONE_WRITE_PENDING, WB_RECLAIMABLE
> 
> c) starting writeback on a folio (folio_start_writeback())
>    - increments WB_WRITEBACK, NR_WRITEBACK, NR_ZONE_WRITE_PENDING
> 
> d) ending writeback on a folio (folio_end_writeback())
>    - decrements WB_WRITEBACK, NR_WRITEBACK, NR_ZONE_WRITE_PENDING

I was looking through the patch set. One general concern I have is that it
all looks somewhat fragile. If you, say, start writeback on a folio with a
granular function and happen to end writeback with a non-granular one,
everything will run fine, but a permanent error will be introduced into the
counters. Similarly with a dirtying / starting-writeback mismatch. The
practicality of this issue is demonstrated by the fact that you didn't
convert e.g. folio_redirty_for_writepage(), so anybody using it together
with fine-grained accounting will just silently mess up the counters.
Another issue of a similar kind is that __folio_migrate_mapping() does not
support fine-grained accounting (and doesn't even have a way to figure out
the proper amount to account), so again any page migration may introduce
permanent errors into the counters. One way to deal with this fragility
would be to have a flag in the mapping that determines whether the dirty
accounting is done by the MM or by the filesystem (iomap code in your
case), instead of determining it at the call site.
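
Just to illustrate what I mean, a rough sketch (all names made up; the
real flag would need a free bit in the AS_* enum in pagemap.h):

#define AS_FS_DIRTY_ACCT	12	/* made-up flag bit */

static inline void mapping_set_fs_dirty_acct(struct address_space *mapping)
{
	set_bit(AS_FS_DIRTY_ACCT, &mapping->flags);
}

static inline bool mapping_fs_dirty_acct(struct address_space *mapping)
{
	return test_bit(AS_FS_DIRTY_ACCT, &mapping->flags);
}

/*
 * Generic paths (folio_account_dirtied(), folio_end_writeback(), page
 * migration, ...) could then derive the accounting granularity from the
 * mapping instead of relying on each caller to pick the matching helper:
 */
static long dirty_acct_nr_pages(struct folio *folio, long fs_nr_pages)
{
	struct address_space *mapping = folio_mapping(folio);

	if (mapping && mapping_fs_dirty_acct(mapping))
		return fs_nr_pages;	/* count provided by the filesystem */
	return folio_nr_pages(folio);	/* default: whole folio */
}

With something along these lines a granular-start / non-granular-end
mismatch cannot happen because the decision is made in one place.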

Another concern I have is the limitation to blocksize >= PAGE_SIZE you
mention below. That is kind of annoying for filesystems because they
generally also have to deal with the blocksize < PAGE_SIZE case, and having
two ways of accounting in one codebase is a big maintenance burden. But
this was discussed elsewhere in this series and I think you have settled on
supporting blocksize < PAGE_SIZE as well?

Finally, there is one general issue for which I'd like to hear the opinions
of MM guys: Dirty throttling is a mechanism to avoid a situation where the
dirty page cache consumes so much memory that page reclaim becomes hard and
the machine thrashes as a result or goes OOM. Now if you dirty a 2MB folio,
it really makes all those 2MB hard to reclaim (neither direct reclaim nor
kswapd will be able to reclaim such a folio) even though only 1KB in that
folio needs actual writeback. In this sense it is actually correct to
account the whole big folio as dirty in the counters - if you accounted
only 1KB or even 4KB (a page), a user could with some effort make all page
cache memory dirty and hard to reclaim without crossing the dirty limits.
On the other hand, if only 1KB in a folio truly needs writeback, the
writeback will generally be significantly faster than with 2MB needing
writeback. So in this sense it is correct to account the amount of data
that truly needs writeback.

I don't know what the right answer to this "conflict of interests" is. We
could keep accounting full folios in the global / memcg counters (to
protect memory reclaim) and do per-page (or even finer) accounting in the
bdi_writeback counters, which are there to avoid excessive accumulation of
dirty data (and thus long writeback times) against one device. This should
still help your case with FUSE and strictlimit (which is generally
constrained by the bdi_writeback counters). One just needs to have a closer
look at how hard it would be to adapt the writeback throttling logic to the
different granularity of the global counters and the writeback counters...
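
To make the second idea a bit more concrete, something like this in
folio_account_dirtied() (just a sketch, not the real code; 'nr_dirty'
would be the fine-grained count passed down from the filesystem, wb
attachment and locking omitted):

static void folio_account_dirtied_split(struct folio *folio,
					struct bdi_writeback *wb,
					long nr_dirty)
{
	long nr_folio = folio_nr_pages(folio);

	/* reclaim-oriented counters: dirtiness pins the whole folio */
	__lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, nr_folio);
	__zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, nr_folio);

	/* per-device counters: only what actually needs writeback */
	wb_stat_mod(wb, WB_RECLAIMABLE, nr_dirty);

	/* ratelimiting could go either way; using the finer count here */
	current->nr_dirtied += nr_dirty;
}

Whether balance_dirty_pages() copes well with the global and per-wb
counters diverging like this is exactly the part that needs a closer look.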

								Honza

> Patches 1 to 9 add support for the 4 cases above to take in the number of
> pages to be accounted, instead of accounting for the entire folio.
> 
> Patch 12 adds the iomap changes that use these new APIs. This relies on the
> iomap folio state bitmap to track which pages are dirty (so that we avoid
> any double-counting). As such, we can only do granular accounting if the
> block size >= PAGE_SIZE.
> 
> This patchset was run through xfstests using fuse passthrough hp (with an
> out-of-tree kernel patch enabling fuse large folios).
> 
> This is on top of commit 4f702205 ("Merge branch 'vfs-6.18.rust' into
> vfs.all") in Christian's vfs tree, and on top of the patchset that removes
> BDI_CAP_WRITEBACK_ACCT [1].
> 
> Local benchmarks were run on xfs by doing the following:
> 
> setting up xfs (per the xfstests README):
> # xfs_io -f -c "falloc 0 10g" test.img
> # xfs_io -f -c "falloc 0 10g" scratch.img
> # mkfs.xfs test.img
> # losetup /dev/loop0 ./test.img
> # losetup /dev/loop1 ./scratch.img
> # mkdir -p /mnt/test && mount /dev/loop0 /mnt/test
> 
> # sudo sysctl -w vm.dirty_bytes=$((3276 * 1024 * 1024)) # roughly 20% of 16GB
> # sudo sysctl -w vm.dirty_background_bytes=$((1638*1024*1024)) # roughly 10% of 16GB
> 
> running this test program (AI-generated) [2], which essentially writes out
> 2 GB of data 256 MB at a time and then spins up 15 threads to do 50k
> 50-byte writes.
> 
> On my VM, I saw the writes take around 3 seconds (with some variability,
> sometimes ranging from 0.3 seconds to 5 seconds) in the base version vs a
> pretty consistent 0.14 seconds with this patchset. It'd be much appreciated
> if someone could also run it on their local system to verify they see
> similar numbers.
> 
> Thanks,
> Joanne
> 
> [1] https://lore.kernel.org/linux-fsdevel/20250707234606.2300149-1-joannelkoong@xxxxxxxxx/
> [2] https://pastebin.com/CbcwTXjq
> 
> Changelog
> v1: https://lore.kernel.org/linux-fsdevel/20250801002131.255068-1-joannelkoong@xxxxxxxxx/
> v1 -> v2:
> * Add documentation specifying caller expectations for the
>   filemap_dirty_folio_pages() -> __folio_mark_dirty() callpath (Jan)
> * Add requested iomap bitmap iteration refactoring (Christoph)
> * Fix long lines (Christoph)
> 
> Joanne Koong (12):
>   mm: pass number of pages to __folio_start_writeback()
>   mm: pass number of pages to __folio_end_writeback()
>   mm: add folio_end_writeback_pages() helper
>   mm: pass number of pages dirtied to __folio_mark_dirty()
>   mm: add filemap_dirty_folio_pages() helper
>   mm: add __folio_clear_dirty_for_io() helper
>   mm: add no_stats_accounting bitfield to wbc
>   mm: refactor clearing dirty stats into helper function
>   mm: add clear_dirty_for_io_stats() helper
>   iomap: refactor dirty bitmap iteration
>   iomap: refactor uptodate bitmap iteration
>   iomap: add granular dirty and writeback accounting
> 
>  fs/btrfs/subpage.c         |   2 +-
>  fs/buffer.c                |   6 +-
>  fs/ext4/page-io.c          |   2 +-
>  fs/iomap/buffered-io.c     | 281 ++++++++++++++++++++++++++++++-------
>  include/linux/page-flags.h |   4 +-
>  include/linux/pagemap.h    |   4 +-
>  include/linux/writeback.h  |  10 ++
>  mm/filemap.c               |  12 +-
>  mm/internal.h              |   2 +-
>  mm/page-writeback.c        | 115 +++++++++++----
>  10 files changed, 346 insertions(+), 92 deletions(-)
> 
> -- 
> 2.47.3
> 
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR



