On Tue, 1 Jul 2025 11:47:37 -0700, djwong@xxxxxxxxxx wrote:
> On Tue, Jul 03, 2025 at 10:48:47PM +0800, alexjlzheng@xxxxxxxxx wrote:
> > From: Jinliang Zheng <alexjlzheng@xxxxxxxxxxx>
> >
> > In the buffered write path, iomap_set_range_uptodate() is called every
> > time iomap_write_end() is called. But if folio_test_uptodate() holds,
> > we know that all blocks in this folio are already uptodate, so there
> > is no need to enter the state_lock critical section to execute
> > bitmap_set().
> >
> > Although state_lock is unlikely to see significant contention, since
> > the folio lock is already held, this patch at least reduces the number
> > of instructions executed.
> >
> > Signed-off-by: Jinliang Zheng <alexjlzheng@xxxxxxxxxxx>
> > ---
> >  fs/iomap/buffered-io.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 3729391a18f3..fb4519158f3a 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -71,6 +71,9 @@ static void iomap_set_range_uptodate(struct folio *folio, size_t off,
> >  	unsigned long flags;
> >  	bool uptodate = true;
> >
> > +	if (folio_test_uptodate(folio))
> > +		return;
>
> Looks fine, but how exhaustively have you tested this with heavy IO
> workloads? I /think/ it's the case that folios always creep towards
> ifs_is_fully_uptodate() == true state and once they've gotten there
> never go back. But folio state bugs are tricky to detect once they've
> crept in.

I tested fio, ltp, and xfstests combined for about 30 hours. The command
used for the fio test was:

fio --name=4k-rw \
    --filename=/data2/testfile \
    --size=1G \
    --bs=4096 \
    --ioengine=libaio \
    --iodepth=32 \
    --rw=randrw \
    --direct=0 \
    --buffered=1 \
    --numjobs=16 \
    --runtime=60 \
    --time_based \
    --group_reporting

ltp and xfstests showed no regressions attributable to this patch.

thanks,
Jinliang Zheng :)

> --D
>
> > +
> >  	if (ifs) {
> >  		spin_lock_irqsave(&ifs->state_lock, flags);
> >  		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
> > --
> > 2.49.0
> >
> >
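
For readers following along, the whole function with this patch applied
would look roughly like this (a sketch assembled from the hunk quoted
above; the ifs declaration and the unlock/folio_mark_uptodate() tail sit
outside the quoted diff context, so those lines are my assumption about
the surrounding code):

static void iomap_set_range_uptodate(struct folio *folio, size_t off,
		size_t len)
{
	/* Assumed declaration: not visible in the quoted hunk. */
	struct iomap_folio_state *ifs = folio->private;
	unsigned long flags;
	bool uptodate = true;

	/*
	 * New fast path: a folio that is already fully uptodate stays
	 * that way while the folio lock is held, so the per-block
	 * bitmap_set() under state_lock would be a no-op.
	 */
	if (folio_test_uptodate(folio))
		return;

	if (ifs) {
		spin_lock_irqsave(&ifs->state_lock, flags);
		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
		spin_unlock_irqrestore(&ifs->state_lock, flags);
	}

	/*
	 * Assumed tail: once every block in the folio is marked, the
	 * folio-level flag is set, which is the one-way "creep" toward
	 * ifs_is_fully_uptodate() that Darrick describes above.
	 */
	if (uptodate)
		folio_mark_uptodate(folio);
}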