On Thu, Sep 4, 2025 at 1:53 AM Jan Kara <jack@xxxxxxx> wrote:
>
> Hello!
>
> On Fri 29-08-25 16:39:30, Joanne Koong wrote:
> > This patchset adds granular dirty and writeback stats accounting for
> > large folios.
> >
> > The dirty page balancing logic uses these stats to determine things like
> > whether the ratelimit has been exceeded, the frequency with which pages
> > need to be written back, if dirtying should be throttled, etc. Currently
> > for large folios, if any byte in the folio is dirtied or written back,
> > all the bytes in the folio are accounted as such.
> >
> > In particular, there are four places where dirty and writeback stats get
> > incremented and decremented as pages get dirtied and written back:
> > a) folio dirtying (filemap_dirty_folio() -> ... -> folio_account_dirtied())
> >    - increments NR_FILE_DIRTY, NR_ZONE_WRITE_PENDING, WB_RECLAIMABLE,
> >      current->nr_dirtied
> >
> > b) writing back a mapping (writeback_iter() -> ... ->
> >    folio_clear_dirty_for_io())
> >    - decrements NR_FILE_DIRTY, NR_ZONE_WRITE_PENDING, WB_RECLAIMABLE
> >
> > c) starting writeback on a folio (folio_start_writeback())
> >    - increments WB_WRITEBACK, NR_WRITEBACK, NR_ZONE_WRITE_PENDING
> >
> > d) ending writeback on a folio (folio_end_writeback())
> >    - decrements WB_WRITEBACK, NR_WRITEBACK, NR_ZONE_WRITE_PENDING
>
> I was looking through the patch set. One general concern I have is that it
> all looks somewhat fragile. If you, say, start writeback on a folio with a
> granular function and happen to end writeback with a non-granular one,
> everything will run fine, but a permanent error in the counters will be
> introduced. Similarly with a dirtying / starting writeback mismatch. The
> practicality of this issue is demonstrated by the fact that you didn't
> convert e.g. folio_redirty_for_writepage(), so anybody using it together
> with fine-grained accounting will just silently mess up the counters.
>
> Another issue of a similar kind is that __folio_migrate_mapping() does not
> support fine-grained accounting (and doesn't even have a way to figure out
> the proper amount to account), so again any page migration may introduce
> permanent errors into the counters. One way to deal with this fragility
> would be to have a flag in the mapping that determines whether the dirty
> accounting is done by MM or by the filesystem (iomap code in your case)
> instead of determining it at the call site.
>
> Another concern I have is the limitation to blocksize >= PAGE_SIZE you
> mention below. That is kind of annoying for filesystems because generally
> they also have to deal with cases of blocksize < PAGE_SIZE, and having two
> ways of accounting in one codebase is a big maintenance burden. But this
> was discussed elsewhere in this series, and I think you have settled on
> supporting blocksize < PAGE_SIZE as well?
>
> Finally, there is one general issue for which I'd like to hear the
> opinions of the MM folks: dirty throttling is a mechanism to avoid a
> situation where the dirty page cache consumes so much memory that page
> reclaim becomes hard and the machine thrashes as a result or goes OOM. Now
> if you dirty a 2MB folio, it really makes all those 2MB hard to reclaim
> (neither direct reclaim nor kswapd will be able to reclaim such a folio)
> even though only 1KB in that folio needs actual writeback. In this sense
> it is actually correct to account the whole big folio as dirty in the
> counters - if you accounted only 1KB or even 4KB (one page), a user could
> with some effort make all page cache memory dirty and hard to reclaim
> without crossing the dirty limits. On the other hand, if only 1KB in a
> folio truly needs writeback, the writeback will generally be significantly
> faster than with 2MB needing writeback. So in this sense it is correct to
> account the amount of data that truly needs writeback.
>
> I don't know what the right answer to this "conflict of interests" is.
> We could keep accounting full folios in the global / memcg counters (to
> protect memory reclaim) and do per-page (or even finer) accounting in the
> bdi_writeback, which is there to avoid excessive accumulation of dirty
> data (and thus long writeback times) against one device. This should
> still help your case with FUSE and strictlimit (which is generally
> constrained by the bdi_writeback counters). One just needs to have a
> closer look at how hard it would be to adapt the writeback throttling
> logic to the different granularity of the global counters and the
> writeback counters...
>
>								Honza

Hi Honza,

Thanks for sharing your thoughts on this. Those are good points, especially
the last one about reclaim. I'm curious to hear what the mm people think,
too. If it turns out this patchset is not actually that useful, I'm happy
to drop it.

Thanks,
Joanne