Re: [PATCH v1 13/16] iomap: add a private arg for read and readahead

On 2025/9/5 07:29, Joanne Koong wrote:
On Tue, Sep 2, 2025 at 6:55 PM Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx> wrote:


...



 int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops,
-		const struct iomap_read_ops *read_ops)
+		const struct iomap_read_ops *read_ops, void *private)
 {
 	struct iomap_iter iter = {
 		.inode		= folio->mapping->host,
 		.pos		= folio_pos(folio),
 		.len		= folio_size(folio),
+		.private	= private,
 	};

Will this whole series land in v6.18?

If not, may I ask if this patch could be moved earlier in the
patchset so that it can be applied separately? (I tried, but no luck.)

That's because I need a similar approach for the EROFS iomap page
cache sharing feature: EROFS uncompressed I/Os go through iomap, and
extra information needs a proper way to be passed down to
iomap_{begin,end}, also via the extra `.private` pointer.
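
For illustration, a minimal sketch of what such a caller could look
like with the signature in the quoted hunk (struct xxxfs_read_ctx,
xxxfs_iomap_ops and passing a NULL read_ops are hypothetical, not
part of this series):

/*
 * Hypothetical caller: hand a per-request context to iomap through
 * the new `private` argument so that it ends up in iter.private.
 */
struct xxxfs_read_ctx {
	struct inode	*backing_inode;	/* e.g. inode owning the shared page cache */
};

static int xxxfs_read_folio(struct file *file, struct folio *folio)
{
	struct xxxfs_read_ctx ctx = {
		.backing_inode = folio->mapping->host,
	};

	/* &ctx is stashed in iter.private for the duration of this read */
	return iomap_read_folio(folio, &xxxfs_iomap_ops, NULL, &ctx);
}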

Hi Gao,

I'm not sure whether this will land in v6.18, but I'm happy to move
this patch to the beginning of the patchset so that it can be applied
separately.

Yeah, thanks.  At least this common patch could then be applied
easily (e.g. forming a common commit ID for both features if really
needed), since the other iomap/FUSE patches are not dependencies of
our new feature and shouldn't be coupled with our development branch
later.


Hi Gao,

I'll be dropping this patch in v2 since all the iomap read stuff is
going to go through a struct ctx arg instead of through iter->private.
Sorry this won't help your use case, but looking forward to seeing your patches.

Hi Joanne,

Thanks for the reminder.  Okay, I will check your v2 to see how you
change it.

Also, one thing I really think would be helpful for our use cases is
converting .iomap_begin() to take a struct iomap_iter * directly
rather than (inode, pos, len, flags, iomap, srcmap), since:
  - .iomap_{begin,end}() were introduced before iomap_iter() and
    struct iomap_iter, but those callbacks are now basically just
    passed individual fields of `struct iomap_iter` anyway;

  - struct iomap_iter->private can then carry a per-request context
    that .iomap_begin() can leverage too;

  - .iomap_begin() already takes too many arguments; passing down
    struct iomap_iter directly would avoid adding yet another
    `private` argument.

Therefore, I do wonder whether this change (passing struct
iomap_iter * to .iomap_begin()) sounds like a good idea to the iomap
folks, in addition to letting filesystems specify `struct
iomap_iter->private` as in this patch.  Since this change is
necessary to make our page cache sharing feature efficient, I will
continue working on it soon.
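
As a rough sketch, the callback shape could become something like
the following.  This is a proposal only, not the current kAPI, and
the xxxfs_* names (including xxxfs_map_blocks(), a stand-in for
whatever extent lookup a filesystem already has) are illustrative:

struct iomap_ops {
	int (*iomap_begin)(struct iomap_iter *iter);
	int (*iomap_end)(struct iomap_iter *iter, ssize_t written);
};

static int xxxfs_iomap_begin(struct iomap_iter *iter)
{
	/* per-request context set up by the caller of iomap_iter() */
	struct xxxfs_read_ctx *ctx = iter->private;

	/* inode/pos/len/flags all come from the iterator itself */
	return xxxfs_map_blocks(iter->inode, iter->pos, iter->len,
				iter->flags, &iter->iomap, ctx);
}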


Another thing I want to discuss (though it's less important for our
recent features) is the whole callback-hook model of iomap.

Basically, the current model means that if a filesystem doesn't fit
the standard iomap flow, it has to add a customized callback hook
somewhere to modify the code flow (or introduce a new special flag
and move its specific logic into iomap/ itself, even though other
filesystems may not need it).  But the hook approach increases
indirect calls: we currently have `struct iomap_ops`,
`struct iomap_writeback_ops` and `struct iomap_dio_ops`, and if yet
another filesystem (when converting buffered I/O, for example, or
adding {pre,post}-processing) has its own specific timing, it needs
to add new hooks.

I do wonder if it's possible to convert iomap away from the
indirect-call model by just providing helper kAPIs instead; take
.read_folio / .fiemap for example:

   xxxfs_read_folio():
      loop iomap_iter
         xxxfs_iomap_begin();
         iomap_readpage_bio_advance(); [ or, if a fs is non-bio
             based, splitting out more low-level helpers for it ]
         xxxfs_iomap_end();

   xxxfs_fiemap():
      iomap_fiemap_begin
      loop iomap_iter
         xxxfs_iomap_begin();
         iomap_fiemap_advance();
         xxxfs_iomap_end();
      iomap_fiemap_end
So that each filesystem could use those helpers flexibly instead of
digging into adding various new indirect-call hooks or moving
customized logic into iomap/ itself.
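
In (strawman) C, the ->read_folio case might then be open-coded
roughly as below.  None of the iomap_* helpers here exist today, and
the advance semantics (updating iter.pos / iter.len) are assumed;
this only illustrates the direct-call model:

static int xxxfs_read_folio(struct file *file, struct folio *folio)
{
	struct iomap_iter iter = {
		.inode	= folio->mapping->host,
		.pos	= folio_pos(folio),
		.len	= folio_size(folio),
	};
	int ret;

	do {
		ret = xxxfs_iomap_begin(&iter);	/* direct call, no hook */
		if (ret)
			break;
		/* strawman helper: build/submit bios for the current
		 * extent, then advance iter.pos and shrink iter.len */
		ret = iomap_readpage_bio_advance(&iter, folio);
		xxxfs_iomap_end(&iter, ret);	/* direct call, no hook */
	} while (ret > 0 && iter.len);

	return ret < 0 ? ret : 0;
}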

I don't have a concrete example because we currently don't hit any
direct issue with the standard iomap flow on our uncompressed path,
but after a quick glance at other potential users trying to convert
their buffered I/O to iomap, this impression has been in my head for
a while.

Thanks,
Gao Xiang


