Re: [PATCH v3 6/7] iomap: remove old partial eof zeroing optimization

On Mon, Jul 14, 2025 at 04:41:21PM -0400, Brian Foster wrote:
> iomap_zero_range() optimizes the partial eof block zeroing use case
> by force zeroing if the mapping is dirty. This avoids frequent
> flushing on file-extending workloads, which hurts performance.
> 
> Now that the folio batch mechanism provides a more generic solution
> and is used by the only real zero range user (XFS), this isolated
> optimization is no longer needed. Remove the unnecessary code and
> let callers use the folio batch or fall back to flushing by default.
> 
> Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> Reviewed-by: Christoph Hellwig <hch@xxxxxx>

Heh, I was staring at this last Friday chasing fuse+iomap bugs in
fallocate zerorange and straining to remember what this does.
Is this chunk still needed if the ->iomap_begin implementation doesn't
(or forgets to) grab the folio batch for iomap?

My bug turned out to be a flaw in my fuse+iomap design -- with the way
iomap_zero_range() does things, you have to flush+unmap, punch the
range, and only then zero the range.  If you punch and realloc the
range and *then* try to zero it, the new unwritten extents cause iomap
to miss dirty pages that fuse should've unmapped.  Oops.
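
For the archives, the ordering that ended up working looks roughly
like this.  It's only a sketch: fuse_zero_range_sketch() and
fuse_iomap_ops are made-up names, not the real fuse code, and I'm
assuming the iomap_zero_range() signature from this series (with the
trailing private argument):

	/*
	 * Hypothetical zero range helper.  Flush and unmap the
	 * pagecache *before* changing the extent map, so that
	 * iomap_zero_range() still sees the dirty folios sitting over
	 * the old written extents.
	 */
	static int fuse_zero_range_sketch(struct inode *inode, loff_t pos,
					  loff_t len, bool *did_zero)
	{
		struct address_space *mapping = inode->i_mapping;
		int ret;

		/* 1. Flush dirty pagecache and unmap the range. */
		ret = filemap_write_and_wait_range(mapping, pos,
						   pos + len - 1);
		if (ret)
			return ret;
		truncate_pagecache_range(inode, pos, pos + len - 1);

		/* 2. Punch (and maybe realloc) the extents; fs-specific. */

		/* 3. Only now zero the edges through iomap. */
		return iomap_zero_range(inode, pos, len, did_zero,
					&fuse_iomap_ops, NULL);
	}

The broken version did step 2 before step 1, which is exactly how the
dirty folios over the freshly allocated unwritten extents got skipped.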

--D

> ---
>  fs/iomap/buffered-io.c | 24 ------------------------
>  1 file changed, 24 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 194e3cc0857f..d2bbed692c06 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1484,33 +1484,9 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
>  		.private	= private,
>  	};
>  	struct address_space *mapping = inode->i_mapping;
> -	unsigned int blocksize = i_blocksize(inode);
> -	unsigned int off = pos & (blocksize - 1);
> -	loff_t plen = min_t(loff_t, len, blocksize - off);
>  	int ret;
>  	bool range_dirty;
>  
> -	/*
> -	 * Zero range can skip mappings that are zero on disk so long as
> -	 * pagecache is clean. If pagecache was dirty prior to zero range, the
> -	 * mapping converts on writeback completion and so must be zeroed.
> -	 *
> -	 * The simplest way to deal with this across a range is to flush
> -	 * pagecache and process the updated mappings. To avoid excessive
> -	 * flushing on partial eof zeroing, special case it to zero the
> -	 * unaligned start portion if already dirty in pagecache.
> -	 */
> -	if (!iter.fbatch && off &&
> -	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
> -		iter.len = plen;
> -		while ((ret = iomap_iter(&iter, ops)) > 0)
> -			iter.status = iomap_zero_iter(&iter, did_zero);
> -
> -		iter.len = len - (iter.pos - pos);
> -		if (ret || !iter.len)
> -			return ret;
> -	}
> -
>  	/*
>  	 * To avoid an unconditional flush, check pagecache state and only flush
>  	 * if dirty and the fs returns a mapping that might convert on
> -- 
> 2.50.0
> 
> 



