On Thu, Jun 05, 2025 at 01:33:55PM -0400, Brian Foster wrote:
> Use the iomap folio batch mechanism to select folios to zero on zero
> range of unwritten mappings. Trim the resulting mapping if the batch
> is filled (unlikely for current use cases) to distinguish between a
> range to skip and one that requires another iteration due to a full
> batch.
>
> Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_iomap.c | 23 +++++++++++++++++++++++
>  1 file changed, 23 insertions(+)
>
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index b5cf5bc6308d..63054f7ead0e 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -1691,6 +1691,8 @@ xfs_buffered_write_iomap_begin(
>  	struct iomap		*iomap,
>  	struct iomap		*srcmap)
>  {
> +	struct iomap_iter	*iter = container_of(iomap, struct iomap_iter,
> +						     iomap);

/me has been wondering more and more if we should just pass the iter
directly to iomap_begin rather than make them play these container_of
tricks...

OTOH I think the whole point of this:

	ret = ops->iomap_begin(iter->inode, iter->pos, iter->len,
			iter->flags, &iter->iomap, &iter->srcmap);

is to "avoid" allowing the iomap users to mess with the internals of
the iomap iter...

>  	struct xfs_inode	*ip = XFS_I(inode);
>  	struct xfs_mount	*mp = ip->i_mount;
>  	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
> @@ -1762,6 +1764,7 @@ xfs_buffered_write_iomap_begin(
>  	 */
>  	if (flags & IOMAP_ZERO) {
>  		xfs_fileoff_t	eof_fsb = XFS_B_TO_FSB(mp, XFS_ISIZE(ip));
> +		u64		end;
>
>  		if (isnullstartblock(imap.br_startblock) &&
>  		    offset_fsb >= eof_fsb)
> @@ -1769,6 +1772,26 @@ xfs_buffered_write_iomap_begin(
>  		if (offset_fsb < eof_fsb && end_fsb > eof_fsb)
>  			end_fsb = eof_fsb;
>
> +		/*
> +		 * Look up dirty folios for unwritten mappings within EOF.
> +		 * Providing this bypasses the flush iomap uses to trigger
> +		 * extent conversion when unwritten mappings have dirty
> +		 * pagecache in need of zeroing.
> +		 *
> +		 * Trim the mapping to the end pos of the lookup, which in turn
> +		 * was trimmed to the end of the batch if it became full before
> +		 * the end of the mapping.
> +		 */
> +		if (imap.br_state == XFS_EXT_UNWRITTEN &&
> +		    offset_fsb < eof_fsb) {
> +			loff_t	len = min(count,
> +					  XFS_FSB_TO_B(mp, imap.br_blockcount));
> +
> +			end = iomap_fill_dirty_folios(iter, offset, len);

...though I wonder, does this need to happen in
xfs_buffered_write_iomap_begin?  Is it required to hold the ILOCK while
we go look for folios in the mapping?  Or could this become a part of
iomap_write_begin?

--D

> +			end_fsb = min_t(xfs_fileoff_t, end_fsb,
> +					XFS_B_TO_FSB(mp, end));
> +		}
> +
>  		xfs_trim_extent(&imap, offset_fsb, end_fsb - offset_fsb);
>  	}
>
> --
> 2.49.0
>
>