Re: [PATCH v2 12/12] iomap: add granular dirty and writeback accounting


On Fri, Sep 5, 2025 at 5:43 AM Jan Kara <jack@xxxxxxx> wrote:
>
> On Fri 05-09-25 07:19:05, Brian Foster wrote:
> > On Thu, Sep 04, 2025 at 05:14:21PM -0700, Joanne Koong wrote:
> > > On Thu, Sep 4, 2025 at 1:07 PM Darrick J. Wong <djwong@xxxxxxxxxx> wrote:
> > > > On Thu, Sep 04, 2025 at 07:47:11AM -0400, Brian Foster wrote:
> > > > > On Wed, Sep 03, 2025 at 05:35:51PM -0700, Joanne Koong wrote:
> > > > > > On Wed, Sep 3, 2025 at 11:44 AM Brian Foster <bfoster@xxxxxxxxxx> wrote:
> > > > > > > On Tue, Sep 02, 2025 at 04:46:04PM -0700, Darrick J. Wong wrote:
> > > > > > > > On Fri, Aug 29, 2025 at 04:39:42PM -0700, Joanne Koong wrote:
> > > > > > > > > Add granular dirty and writeback accounting for large folios. These
> > > > > > > > > stats are used by the mm layer for dirty balancing and throttling.
> > > > > > > > > Having granular dirty and writeback accounting helps prevent
> > > > > > > > > over-aggressive balancing and throttling.
> > > > > > > > >
> > > > > > > > > There are 4 places in iomap this commit affects:
> > > > > > > > > a) filemap dirtying, which now calls filemap_dirty_folio_pages()
> > > > > > > > > b) writeback_iter, which now sets the wbc->no_stats_accounting bit and
> > > > > > > > > calls clear_dirty_for_io_stats()
> > > > > > > > > c) starting writeback, which now calls __folio_start_writeback()
> > > > > > > > > d) ending writeback, which now calls folio_end_writeback_pages()
> > > > > > > > >
> > > > > > > > > This relies on using the ifs->state dirty bitmap to track dirty pages in
> > > > > > > > > the folio. As such, this can only be utilized on filesystems where the
> > > > > > > > > block size >= PAGE_SIZE.
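
For reference, the dirty bits this relies on live in the second half of
the existing ifs->state bitmap in fs/iomap/buffered-io.c. Approximately,
the in-tree per-block check looks like the following (shown here only to
make the discussion below easier to follow):

/*
 * ifs->state layout (existing iomap convention):
 *   bits [0, blks_per_folio)                  - uptodate bits
 *   bits [blks_per_folio, 2 * blks_per_folio) - dirty bits
 */
static bool ifs_block_is_dirty(struct folio *folio,
			       struct iomap_folio_state *ifs, int block)
{
	struct inode *inode = folio->mapping->host;
	unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);

	return test_bit(block + blks_per_folio, ifs->state);
}
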
> > > > > > > >
> > > > > > > > Er... is this statement correct?  I thought that you wanted the granular
> > > > > > > > dirty page accounting when it's possible that individual sub-pages of a
> > > > > > > > folio could be dirty.
> > > > > > > >
> > > > > > > > If i_blocksize >= PAGE_SIZE, then we'll have set the min folio order and
> > > > > > > > there will be exactly one (large) folio for a single fsblock.  Writeback
> > > > > >
> > > > > > Oh interesting, this is the part I'm confused about. With i_blocksize >=
> > > > > > PAGE_SIZE, isn't there still the situation where the folio itself
> > > > > > could be a lot larger, like 1MB? That's what I've been seeing on fuse
> > > > > > where "blocksize" == PAGE_SIZE == 4096. I see that xfs sets the min
> > > > > > folio order through mapping_set_folio_min_order() but I'm not seeing
> > > > > > how that ensures "there will be exactly one large folio for a single
> > > > > > fsblock"? My understanding is that that only ensures the folio is at
> > > > > > least the size of the fsblock but that the folio size can be larger
> > > > > > than that too. Am I understanding this incorrectly?
> > > > > >
> > > > > > > > must happen in units of fsblocks, so there's no point in doing the extra
> > > > > > > > accounting calculations if there's only one fsblock.
> > > > > > > >
> > > > > > > > Waitaminute, I think the logic to decide if you're going to use the
> > > > > > > > granular accounting is:
> > > > > > > >
> > > > > > > >       (folio_size > PAGE_SIZE && folio_size > i_blocksize)
> > > > > > > >
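
A minimal sketch of that predicate, with a hypothetical wrapper name
(folio_size() and i_blocksize() are the existing helpers; the wrapper
itself is not part of the patchset):

/*
 * Sketch only: granular accounting only buys anything when the folio
 * spans more than one base page and more than one fsblock.
 */
static bool iomap_use_granular_accounting(struct inode *inode,
					  struct folio *folio)
{
	return folio_size(folio) > PAGE_SIZE &&
	       folio_size(folio) > i_blocksize(inode);
}
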
> > > > > >
> > > > > > Yeah, you're right about this - I had used "ifs && i_blocksize >=
> > > > > > PAGE_SIZE" as the check, which translates to "i_blocks_per_folio > 1
> > > > > > && i_block_size >= PAGE_SIZE", which in effect does the same thing as
> > > > > > what you wrote but has the additional (and now I'm realizing,
> > > > > > unnecessary) stipulation that block_size can't be less than PAGE_SIZE.
> > > > > >
> > > > > > > > Hrm?
> > > > > > > >
> > > > > > >
> > > > > > > I'm also a little confused why this needs to be restricted to blocksize
> > > > > > > gte PAGE_SIZE. The lower level helpers all seem to be managing block
> > > > > > > ranges, and then apparently just want to be able to use that directly as
> > > > > > > a page count (for accounting purposes).
> > > > > > >
> > > > > > > Is there any reason the lower level functions couldn't return block
> > > > > > > units, then the higher level code can use a blocks_per_page or some such
> > > > > > > to translate that to a base page count..? As Darrick points out I assume
> > > > > > > you'd want to shortcut the folio_nr_pages() == 1 case to use a min page
> > > > > > > count of 1, but otherwise ISTM that would allow this to work with
> > > > > > > configs like 64k pagesize and 4k blocks as well. Am I missing something?
> > > > > > >
> > > > > >
> > > > > > No, I don't think you're missing anything, it should have been done
> > > > > > like this in the first place.
> > > > > >
> > > > >
> > > > > Ok. Something that came to mind after thinking about this some more is
> > > > > whether there is a risk of the accounting getting wonky. For example,
> > > > > consider 4k blocks, 64k pages, and then a large folio on top of that. If
> > > > > a couple or so blocks are dirtied at one time, you'd presumably want to
> > > > > account that as the minimum of 1 dirty page. Then if a couple more
> > > > > blocks are dirtied in the same large folio, how do you determine whether
> > > > > those blocks are a newly dirtied page or part of the already accounted
> > > > > dirty page? I wonder if perhaps this is the value of the no sub-page
> > > > > sized blocks restriction, because you can imply that newly dirtied
> > > > > blocks mean newly dirtied pages?
> > > > >
> > > > > I suppose if that is an issue it might still be manageable. Perhaps we'd
> > > > > have to scan the bitmap in blocks-per-page windows and use that to
> > > > > determine how many base pages are accounted for at any time. So for
> > > > > example, 3 dirty 4k blocks all within the same 64k page-size window
> > > > > still account as 1 dirty page, whereas dirty blocks in multiple
> > > > > page-size windows might mean multiple dirty pages, etc. That way
> > > > > writeback accounting remains consistent with dirty accounting. Hm?
> > > >
> > > > Yes, I think that's correct -- one has to track which basepages /were/
> > > > dirty, and then which ones become dirty after updating the ifs dirty
> > > > bitmap.
> > > >
> > > > For example, if you have a 1k fsblock filesystem, 4k base pages, and a
> > > > 64k folio, you could write a single byte at offset 0, then come back and
> > > > write to a byte at offset 1024.  The first write will result in a charge
> > > > of one basepage, but so will the second, I think.  That results
> > > > in charges for two dirty pages, when you've really only dirtied a single
> > > > basepage.
> > >
> > > Does it matter though which blocks map to which pages? AFAIU, the
> > > "block size" is the granularity for disk io and is not really related
> > > to pages (eg for writing out to disk, only the block gets written, not
> > > the whole page). The stats (as I understand it) are used to throttle
> > > how much data gets written back to disk, and the primary thing they
> > > care about is how many bytes that is, not how many pages; it's just
> > > that it's in PAGE_SIZE granularity because prior to iomap there was no
> > > dirty tracking of individual blocks within a page/folio. It seems like
> > > it suffices then to just keep track of the total # of dirty blocks,
> > > multiply that by blocksize, round-up divide that by PAGE_SIZE, and
> > > pass that to the stats.
> > >
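
In other words, something along these lines (sketch only; the helper
name here is made up):

/*
 * Sketch only: convert a count of newly dirtied blocks into the
 * page-granular units the existing stats expect.
 */
static unsigned long iomap_dirty_blocks_to_pages(struct inode *inode,
						 unsigned int nr_dirty_blocks)
{
	unsigned long nr_bytes = (unsigned long)nr_dirty_blocks << inode->i_blkbits;

	return DIV_ROUND_UP(nr_bytes, PAGE_SIZE);
}
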
> >
> > I suppose it may not matter in terms of the purpose of the mechanism
> > itself. In fact if the whole thing could just be converted to track
> > bytes, at least internally, then maybe that would eliminate some of the
> > confusion in dealing with different granularity of units..? I have no
> > idea how practical or appropriate that is, though. :)
> >
> > The concern Darrick and I were discussing is more about maintaining
> > accounting consistency in the event that we do continue translating
> > blocks to pages and ultimately add support for the block size < page
> > size case.
> >
> > In that case the implication is that we'd still need to account
> > something when we dirty a single block out of a page (i.e. use
> > Darrick's example where we dirty a 1k fs block out of a 4k page). If we
> > round that partial page case up to 1 dirty page and repeat as each 1k
> > block is dirtied, then we have to make sure accounting remains
> > consistent in the case where we dirty account each sub-block of a page
> > through separate writes, but then clear dirty accounting for the entire
> > folio once at writeback time.

Agreed. In the case where we do need to care about which blocks map to
which pages, we could parse the bitmap in PAGE_SIZE chunks, where if any
bit in that range is marked dirty then the whole page is accounted for
as dirty. I don't think this would add too much overhead given that we
already need to iterate over bitmap ranges. Looking at this patchset
again, I think we can even get rid of ifs_count_dirty_pages() entirely
and just do the counting dynamically as blocks get dirtied. I'm not sure
if there was some reason I didn't do it this way earlier, but I think
that works.
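
Roughly, such a window scan could look like the sketch below, keeping
the current ifs_count_dirty_pages() shape just for illustration
(ifs_block_is_dirty() and i_blocks_per_folio() are the existing helpers;
this assumes block size <= PAGE_SIZE):

/*
 * Sketch only: count how many PAGE_SIZE windows of the folio contain at
 * least one dirty block in the ifs dirty bitmap.
 * Assumes i_blocksize(inode) <= PAGE_SIZE, i.e. blks_per_page >= 1.
 */
static unsigned int ifs_count_dirty_pages(struct inode *inode,
					  struct folio *folio,
					  struct iomap_folio_state *ifs)
{
	unsigned int blks_per_page = PAGE_SIZE >> inode->i_blkbits;
	unsigned int nblks = i_blocks_per_folio(inode, folio);
	unsigned int blk, i, nr_pages = 0;

	for (blk = 0; blk < nblks; blk += blks_per_page) {
		unsigned int end = min(blk + blks_per_page, nblks);

		/* A window counts as one dirty page if any block in it is dirty. */
		for (i = blk; i < end; i++) {
			if (ifs_block_is_dirty(folio, ifs, i)) {
				nr_pages++;
				break;
			}
		}
	}
	return nr_pages;
}

If we instead count dynamically as blocks get dirtied, the same window
check would only need to run over the range being dirtied rather than
over the whole bitmap.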

> >
> > But I suppose we are projecting the implementation a bit so it might not
> > be worth getting this far into the weeds until you determine what
> > direction you want to go with this and have more code to review. All in
> > all, I do agree with Jan's general concern that I'd rather not have to
> > deal with multiple variants of sub-page state tracking in iomap. It's

I agree, I think we should try to keep the iomap stats accounting as
simple as possible. I like Jan's idea of having iomap's accounting go
toward the bdi_writeback counters and leaving the zone / memcg stats
untouched.

> > already a challenge to support multiple different filesystems. This does
> > seem like a useful enhancement to me however, so IMO it would be fine to
> > just try and make it more generic (short of something more generic on
> > the mm side or whatever) than it is currently.
> >
> > > But, as Jan pointed out to me in his comment, the stats are also used
> > > for monitoring the health of reclaim, so maybe it does matter then how
> > > the blocks translate to pages.
> > >
> >
> > Random thought, but would having an additional/optional stat to track
> > bytes (alongside the existing page granularity counts) help at all? For
> > example, if throttling could use optional byte granular dirty/writeback
> > counters when they are enabled instead of the traditional page-granular ones,
> > would that solve your problem and be less disruptive to other things
> > that actually prefer the page count?
>
> FWIW my current thinking is that the best might be to do byte granularity
> tracking for wb_stat_ counters and leave current coarse-grained accounting
> for the zone / memcg stats. That way mm counters could be fully managed
> within mm code and iomap wouldn't have to care and writeback counters
> (which care about amount of IO, not amount of pinned memory) would be
> maintained by filesystems / iomap. We'd just need to come up with sensible
> rules for where the writeback counters should be updated when mm doesn't do it.
>

I like your idea a lot.

Thanks,
Joanne
>                                                                 Honza
> --
> Jan Kara <jack@xxxxxxxx>
> SUSE Labs, CR
