On Tue, Aug 12, 2025 at 6:20 PM Darrick J. Wong <djwong@xxxxxxxxxx> wrote:
>
> On Tue, Aug 12, 2025 at 04:02:12PM -0700, Joanne Koong wrote:
> > On Tue, Aug 12, 2025 at 12:38 PM Darrick J. Wong <djwong@xxxxxxxxxx> wrote:
> > >
> > My understanding of strictlimit is that it's a way of preventing
> > non-trusted filesystems from dirtying too many pages too quickly and
> > thus taking up too much bandwidth. It imposes stricter / more
>
> Oh, BDI_CAP_STRICTLIMIT.
>
> /me digs
>
> "Then wb_thresh is 1% of 20% of 16GB. This amounts to ~8K pages."
>
> Oh wow.
>
> > conservative limits on how many pages a filesystem can dirty before it
> > gets forcibly throttled (the bulk of the logic happens in
> > balance_dirty_pages()). This is needed for fuse because fuse servers
> > may be unprivileged and malicious or buggy. The feature was introduced
> > in commit 5a53748568f7 ("mm/page-writeback.c: add strictlimit
> > feature"). The reason we now run into this is because with large
> > folios, the dirtying happens so much faster now (e.g. a 1MB folio is
> > dirtied and copied at once instead of page by page), and as a result
> > the fuse server gets throttled more while doing large writes, which
> > ends up making the write overall slower.
>
> <nod> and hence your patchset gives the number of dirty blocks (pages?)
> within the large folio to the writeback throttling code so that you
> don't get charged for 2M of dirty data if you've really only touched a
> single byte of a 2M folio, right?

Yeah, exactly!

> Will go have a look at that tomorrow.

Thanks!

> --D
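To make the two numbers in the thread concrete, here is a rough back-of-the-envelope sketch (not kernel code; function names and the 1%/20% knob values are illustrative, taken from the quote above) of the strictlimit threshold arithmetic and of whole-folio vs. per-page dirty charging:

```python
PAGE_SIZE = 4096

def strictlimit_wb_thresh(total_mem_bytes, dirty_ratio=0.20, bdi_fraction=0.01):
    """Rough wb_thresh in pages: 1% of 20% of total memory."""
    return int(total_mem_bytes * dirty_ratio * bdi_fraction) // PAGE_SIZE

def pages_charged(write_start, write_len, folio_size, per_page_accounting):
    """Pages charged to dirty throttling for one write inside a folio.

    With whole-folio accounting, touching one byte of a 2 MiB folio
    charges all 512 pages; with per-page accounting, only the pages
    actually covered by the write are charged.
    """
    if per_page_accounting:
        first = write_start // PAGE_SIZE
        last = (write_start + write_len - 1) // PAGE_SIZE
        return last - first + 1
    return folio_size // PAGE_SIZE

# 16 GiB machine: wb_thresh comes out to ~8K pages, matching the quote.
print(strictlimit_wb_thresh(16 * 2**30))                          # 8388

# One-byte write into a 2 MiB folio:
print(pages_charged(0, 1, 2 * 2**20, per_page_accounting=False))  # 512
print(pages_charged(0, 1, 2 * 2**20, per_page_accounting=True))   # 1
```

The 512-vs-1 gap is the point of the patchset as described above: against a budget of only ~8K pages, charging whole large folios exhausts the strictlimit threshold far faster than the data actually dirtied would justify.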