> -static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
> -		size_t poff, size_t plen, const struct iomap *iomap)
> +static int iomap_read_folio_sync(const struct iomap_iter *iter, loff_t block_start,
> +		struct folio *folio, size_t poff, size_t plen)
>  {
> -	return iomap_bio_read_folio_sync(block_start, folio, poff, plen, iomap);
> +	const struct iomap_folio_ops *folio_ops = iter->iomap.folio_ops;
> +	const struct iomap *srcmap = iomap_iter_srcmap(iter);
> +
> +	if (folio_ops && folio_ops->read_folio_sync)
> +		return folio_ops->read_folio_sync(block_start, folio,
> +				poff, plen, srcmap,
> +				iter->private);
> +
> +	/* IOMAP_IN_MEM iomaps must always handle ->read_folio_sync() */
> +	WARN_ON_ONCE(iter->iomap.type == IOMAP_IN_MEM);
> +
> +	return iomap_bio_read_folio_sync(block_start, folio, poff, plen, srcmap);

I just ran into this for another project and I hated my plumbing for it.
I hate yours very slightly less, but I still don't like it.  This is
really more of a VM level concept, so I wonder if we should instead:

 - add a new read_folio_sync method to the address space operations that
   reads a folio without unlocking it (rough sketch below).
 - figure out whether reading just the head/tail really is as much of an
   optimization; if it is, pass arguments to the method to read just the
   head/tail, and if not, skip it.
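
Roughly something like the sketch below -- to be clear, all of it is
hypothetical: neither a ->read_folio_sync() address space operation nor
a filemap_read_folio_sync() helper exists today, and the off/len
arguments are just one possible shape for the head/tail case:

/*
 * Hypothetical new method in struct address_space_operations:
 *
 *	int (*read_folio_sync)(struct file *file, struct folio *folio,
 *			       size_t off, size_t len);
 *
 * Like ->read_folio(), but reads the byte range [off, off + len) of a
 * locked folio synchronously and returns with the folio still locked.
 */
static int filemap_read_folio_sync(struct file *file, struct folio *folio,
		size_t off, size_t len)
{
	const struct address_space_operations *aops =
		folio->mapping->a_ops;

	/*
	 * If reading only the head/tail turns out not to be a real win,
	 * callers just pass (0, folio_size(folio)) and the range
	 * arguments can be dropped again.
	 */
	if (aops->read_folio_sync)
		return aops->read_folio_sync(file, folio, off, len);
	return -EOPNOTSUPP;
}

iomap could then call this through the mapping instead of plumbing a
per-iomap-type callback through iomap_folio_ops.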