On Thu, Jun 05, 2025 at 01:33:52PM -0400, Brian Foster wrote:
> Add a new filemap_get_folios_dirty() helper to look up existing dirty
> folios in a range and add them to a folio_batch. This is to support
> optimization of certain iomap operations that only care about dirty
> folios in a target range. For example, zero range only zeroes the subset
> of dirty pages over unwritten mappings, seek hole/data may use similar
> logic in the future, etc.
>
> Note that the helper is intended for use under internal fs locks.
> Therefore it trylocks folios in order to filter out clean folios.
> This loosely follows the logic from filemap_range_has_writeback().
>
> Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>

You might want to cc willy directly on this one...

> ---
>  include/linux/pagemap.h |  2 ++
>  mm/filemap.c            | 42 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 44 insertions(+)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index e63fbfbd5b0f..fb83ddf26621 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -941,6 +941,8 @@ unsigned filemap_get_folios_contig(struct address_space *mapping,
>  		pgoff_t *start, pgoff_t end, struct folio_batch *fbatch);
>  unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
>  		pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch);
> +unsigned filemap_get_folios_dirty(struct address_space *mapping,
> +		pgoff_t *start, pgoff_t end, struct folio_batch *fbatch);
>
>  /*
>   * Returns locked page at given index in given cache, creating it if needed.
> diff --git a/mm/filemap.c b/mm/filemap.c
> index bada249b9fb7..d28e984cdfd4 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2334,6 +2334,48 @@ unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
>  }
>  EXPORT_SYMBOL(filemap_get_folios_tag);
>
> +unsigned filemap_get_folios_dirty(struct address_space *mapping, pgoff_t *start,
> +		pgoff_t end, struct folio_batch *fbatch)

This ought to have a comment explaining what the function does.  It
identifies every folio starting at @*start and ending before @end that
is dirty and tries to assign them to @fbatch, right?

The code looks reasonable to me; hopefully there aren't any subtleties
that I'm missing here :P

> +{
> +	XA_STATE(xas, &mapping->i_pages, *start);
> +	struct folio *folio;
> +
> +	rcu_read_lock();
> +	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
> +		if (xa_is_value(folio))
> +			continue;
> +		if (folio_trylock(folio)) {
> +			bool clean = !folio_test_dirty(folio) &&
> +				     !folio_test_writeback(folio);
> +			folio_unlock(folio);
> +			if (clean) {
> +				folio_put(folio);
> +				continue;
> +			}
> +		}
> +		if (!folio_batch_add(fbatch, folio)) {
> +			unsigned long nr = folio_nr_pages(folio);
> +			*start = folio->index + nr;
> +			goto out;
> +		}
> +	}
> +	/*
> +	 * We come here when there is no page beyond @end. We take care to not

...no folio beyond @end?

--D

> +	 * overflow the index @start as it confuses some of the callers. This
> +	 * breaks the iteration when there is a page at index -1 but that is
> +	 * already broke anyway.
> +	 */
> +	if (end == (pgoff_t)-1)
> +		*start = (pgoff_t)-1;
> +	else
> +		*start = end + 1;
> +out:
> +	rcu_read_unlock();
> +
> +	return folio_batch_count(fbatch);
> +}
> +EXPORT_SYMBOL(filemap_get_folios_dirty);
> +
> /*
>  * CD/DVDs are error prone. When a medium error occurs, the driver may fail
>  * a _large_ part of the i/o request. Imagine the worst scenario:
> --
> 2.49.0
>
>
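[Editor's note: a kerneldoc block along the lines the review asks for
might read as follows. This is only a sketch patterned after the
existing filemap_get_folios_tag() comment; the wording is not from the
patch, and the @end-inclusive, @start-updating semantics are inferred
from the code above.]

```c
/**
 * filemap_get_folios_dirty - Get a batch of dirty folios.
 * @mapping: The address_space to search.
 * @start:   The starting folio index.
 * @end:     The final folio index (inclusive).
 * @fbatch:  The batch to fill.
 *
 * Search @mapping for present folios in the range [@start, @end] and
 * add them to @fbatch with an elevated refcount. Folios whose lock can
 * be taken and that are found neither dirty nor under writeback are
 * skipped; folios that cannot be trylocked are returned conservatively.
 * Intended for callers holding internal fs locks that stabilize the
 * dirty state over the range.
 *
 * Return: The number of folios found, with @start updated to the index
 * at which to continue the traversal.
 */
```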