On Tue, Jul 01, 2025 at 10:48:47PM +0800, alexjlzheng@xxxxxxxxx wrote:
> From: Jinliang Zheng <alexjlzheng@xxxxxxxxxxx>
> 
> In the buffer write path, iomap_set_range_uptodate() is called every
> time iomap_end_write() is called. But if folio_test_uptodate() holds, we
> know that all blocks in this folio are already in the uptodate state, so
> there is no need to go deep into the critical section of state_lock to
> execute bitmap_set().
> 
> Although state_lock may not have significant lock contention due to
> folio lock, this patch at least reduces the number of instructions.
> 
> Signed-off-by: Jinliang Zheng <alexjlzheng@xxxxxxxxxxx>
> ---
>  fs/iomap/buffered-io.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 3729391a18f3..fb4519158f3a 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -71,6 +71,9 @@ static void iomap_set_range_uptodate(struct folio *folio, size_t off,
>  	unsigned long flags;
>  	bool uptodate = true;
>  
> +	if (folio_test_uptodate(folio))
> +		return;

Looks fine, but how exhaustively have you tested this with heavy IO
workloads?  I /think/ it's the case that folios always creep towards the
ifs_is_fully_uptodate() == true state and once they've gotten there they
never go back.  But folio state bugs are tricky to detect once they've
crept in.

--D

> +
>  	if (ifs) {
>  		spin_lock_irqsave(&ifs->state_lock, flags);
>  		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
> -- 
> 2.49.0
> 
> 