Re: [PATCH v1 0/8] fuse: use iomap for buffered writes + writeback

On Fri, Jun 06, 2025 at 04:37:55PM -0700, Joanne Koong wrote:
> This series adds fuse iomap support for buffered writes and dirty folio
> writeback. This is needed so that granular dirty tracking can be used in
> fuse when large folios are enabled: if only a few bytes in a large folio
> are dirty, only that smaller portion is written out instead of the entire
> folio.
> 
> In order to do so, a new iomap type, IOMAP_IN_MEM, is added that is more
> generic and does not depend on the block layer. The parts of iomap buffered io
> that depend on bios and CONFIG_BLOCK are moved to a separate file,
> buffered-io-bio.c, so that filesystems that do not have CONFIG_BLOCK
> set can use IOMAP_IN_MEM buffered io.
> 
> This series was run through fstests with large folios enabled and through
> some quick sanity checks on passthrough_hp with a) writing 1 GB in 1 MB chunks
> and then going back and dirtying a few bytes in each chunk, and b) writing 50 MB
> in 1 MB chunks and then dirtying the entire chunk, over several runs.
> a) showed about a 40% speedup with iomap support added, and b) showed
> roughly the same performance.
> 
> This patchset does not enable large folios yet. That will be sent out in a
> separate future patchset.
> 
> 
> Thanks,
> Joanne
> 
> Joanne Koong (8):
>   iomap: move buffered io bio logic into separate file
>   iomap: add IOMAP_IN_MEM iomap type
>   iomap: add buffered write support for IOMAP_IN_MEM iomaps
>   iomap: add writepages support for IOMAP_IN_MEM iomaps

AFAICT, this is just adding synchronous "read folio" and "write
folio" hooks into iomap that bypass the existing "map and pack"
bio-based infrastructure. i.e. there is no actual "iomapping" being
done; it's adding special-case IO hooks into the IO back end in
place of the iomap bio interfaces.
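
To check my understanding, here's roughly the shape of interface I'm
reading into IOMAP_IN_MEM (hypothetical names, not the code in the
series): the filesystem supplies synchronous per-folio callbacks and
the iomap buffered-io code calls them instead of building and
submitting bios:

/*
 * Hypothetical sketch only -- not the actual patch. The point is that
 * the callbacks are synchronous and take a sub-folio byte range, which
 * is what makes granular dirty tracking pay off at writeback time.
 */
#include <linux/fs.h>
#include <linux/pagemap.h>

struct in_mem_folio_ops {
	/* fill the byte range @off/@len of @folio from the backing store */
	int (*read_folio_range)(struct inode *inode, struct folio *folio,
				size_t off, size_t len);

	/* write back the dirty byte range @off/@len of @folio */
	int (*write_folio_range)(struct inode *inode, struct folio *folio,
				 size_t off, size_t len);
};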

Is that a fair summary of what this is doing?

If so, given that FUSE is actually a request/response protocol,
why wasn't netfs chosen as the back end infrastructure to support
large folios in the FUSE pagecache?

It's specifically designed for request/response IO interfaces that
are not block IO based, and it has infrastructure such as local file
caching built into it for optimising performance on high latency/low
bandwidth network based filesystems.
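
For comparison, this is roughly how the existing netfs users (afs,
9p, ceph) wire up the pagecache -- a from-memory sketch, so the exact
ops and helper names vary between kernel versions:

/*
 * Illustrative only: the netfs hooks and signatures have churned
 * across releases, so treat the names below as approximate.
 */
#include <linux/netfs.h>

static void example_issue_read(struct netfs_io_subrequest *subreq)
{
	/*
	 * Send a read request for subreq->start / subreq->len to the
	 * server and complete the subrequest via the netfs completion
	 * helper once the reply arrives.
	 */
}

static const struct netfs_request_ops example_netfs_ops = {
	.issue_read		= example_issue_read,
};

static const struct address_space_operations example_aops = {
	.read_folio		= netfs_read_folio,
	.readahead		= netfs_readahead,
	.writepages		= netfs_writepages,
	.dirty_folio		= netfs_dirty_folio,
	.release_folio		= netfs_release_folio,
	.invalidate_folio	= netfs_invalidate_folio,
};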

Hence it seems like this patchset is trying to duplicate
functionality that netfs already provides for request/response
protocol-based filesystems, but with much less generality than
netfs offers....

Hence I'm not seeing why this IO path was chosen for FUSE. Was
netfs considered as a candidate infrastructure for large folio
support in FUSE? If so, why was iomap chosen over netfs? If not,
would FUSE be better suited to netfs integration than hacking
fuse-specific "no block mapping" IO paths into infrastructure
specifically optimised for block-based filesystems?

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx



