Re: [PATCH v3] fuse: new workqueue to periodically invalidate expired dentries

On Wed, Jul 02 2025, Luis Henriques wrote:

> On Wed, Jul 02 2025, Miklos Szeredi wrote:
>
>> On Tue, 20 May 2025 at 17:42, Luis Henriques <luis@xxxxxxxxxx> wrote:
>>>
>>> This patch adds a new module parameter 'inval_wq' which is used to start a
>>> workqueue to periodically invalidate expired dentries.  The value of this
>>> new parameter is the period, in seconds, of the workqueue.  When it is set,
>>> every new dentry will be added to an rbtree, sorted by the dentry's expiry
>>> time.
>>>
>>> When the workqueue is executed, it will check the dentries in this tree and
>>> invalidate them if:
>>>
>>>   - The dentry has timed out, or
>>>   - The connection epoch has been incremented.
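
(For context, the check described above boils down to something like the
sketch below.  The field names fd->time, fd->epoch and fc->epoch are just
illustrative, not necessarily what the code ends up using.)

/*
 * Rough sketch of the invalidation test described above; illustrative
 * names only.  fd->time is the expiry timestamp in 64-bit jiffies, and
 * fd->epoch is the connection epoch sampled when the dentry was added.
 */
static bool fuse_dentry_expired(struct fuse_conn *fc, struct fuse_dentry *fd)
{
	/* the dentry has timed out ... */
	if (time_after64(get_jiffies_64(), fd->time))
		return true;
	/* ... or the connection epoch has been incremented since then */
	return fd->epoch != atomic_read(&fc->epoch);
}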
>>
>> I wonder, why not make the whole infrastructure global?  There's no
>> reason to have separate rb-trees and workqueues for each fuse
>> instance.
>
> Hmm... true.  My initial approach was to use a mount parameter to enable
> it per connection.  When you suggested replacing that with a module
> parameter, I should have made the infrastructure global too.

While starting to work on this, I realised there's an additional
complication with this approach.  Having a dentry tree per connection
allows the workqueue to stop walking through the tree as soon as it finds
a non-expired dentry, i.e. one with a valid timestamp *and* an epoch equal
to the connection epoch.
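
In other words, each run of the per-connection walk can look roughly like
the sketch below.  The fc->inval_tree, fc->inval_lock and inval_work
fields are my own illustrative names, fuse_dentry_expired() is the helper
sketched above, and it also folds in the need_resched() check you
suggested:

/*
 * Illustrative per-connection walk.  Because the tree is ordered by
 * expiry time, and an epoch bump invalidates *all* of this connection's
 * dentries at once, the first non-expired entry means everything after
 * it is still valid too.
 */
static void fuse_inval_work(struct work_struct *work)
{
	struct fuse_conn *fc = container_of(work, struct fuse_conn,
					    inval_work.work);
	struct rb_node *n;

	spin_lock(&fc->inval_lock);
	while ((n = rb_first(&fc->inval_tree)) != NULL) {
		struct fuse_dentry *fd = rb_entry(n, struct fuse_dentry, node);

		if (!fuse_dentry_expired(fc, fd))
			break;	/* the rest of the tree is still valid */

		rb_erase(n, &fc->inval_tree);
		/* the actual dentry invalidation is elided here */

		if (need_resched())
			break;	/* bound each run, continue on the next one */
	}
	spin_unlock(&fc->inval_lock);
	/* re-arm the delayed work here if the tree isn't empty */
}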

Moving to a global tree, I'll need to _always_ walk through all the
dentries, because any given connection's epoch may have been incremented
since the last run.

So, I can see two options to solve this:

1) keep the design as is (i.e. a tree/workqueue per connection), or

2) add a global flag indicating whether there has been an epoch increment
   in any connection, and only keep walking through all the dentries in
   that case (see the sketch below).
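
For option 2, I'm thinking of something along these lines.  The
fuse_epoch_bumped flag, the global fuse_inval_tree/fuse_inval_lock and
the fd->fc back-pointer are all hypothetical, just to illustrate the
idea:

/* Hypothetical global state; set whenever any connection's epoch moves. */
static atomic_t fuse_epoch_bumped = ATOMIC_INIT(0);
static DEFINE_SPINLOCK(fuse_inval_lock);
static struct rb_root fuse_inval_tree = RB_ROOT;

static void fuse_global_inval_work(struct work_struct *work)
{
	/* consume the flag; a bump after this point is seen by the next run */
	bool epoch_bumped = atomic_xchg(&fuse_epoch_bumped, 0) != 0;
	struct rb_node *n, *next;

	spin_lock(&fuse_inval_lock);
	for (n = rb_first(&fuse_inval_tree); n; n = next) {
		/* fd->fc is an assumed back-pointer to the dentry's connection */
		struct fuse_dentry *fd = rb_entry(n, struct fuse_dentry, node);

		next = rb_next(n);

		if (!fuse_dentry_expired(fd->fc, fd)) {
			/*
			 * Time-ordered tree: nothing past this point has
			 * timed out.  Without an epoch bump we can stop
			 * here; with one, keep walking to catch dentries
			 * whose connection epoch has changed.
			 */
			if (!epoch_bumped)
				break;
			continue;
		}

		rb_erase(n, &fuse_inval_tree);
		/* actual invalidation elided, as above */
	}
	spin_unlock(&fuse_inval_lock);
}

This would keep the common case (no epoch increments anywhere) as cheap
as the per-connection version.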

A third option could be to rewrite the dentries' timestamps and re-order
the tree whenever there's an epoch increment.  But I believe that would
be messy and rather hacky.

Any thoughts?

Cheers,
-- 
Luís


>> Contention on the lock would be worse, but it's bad as it is, so we
>> need some solution, e.g. a hashed lock, which is better done with a
>> single instance.
>
> Right, I'll think how to fix it (or at least reduce contention).
>
>>> The workqueue will run for, at most, 5 seconds each time.  It will
>>> reschedule itself if the dentry tree isn't empty.
>>
>> It should check need_resched() instead.
>
> OK.
>
>>> diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
>>> index 1fb0b15a6088..257ca2b36b94 100644
>>> --- a/fs/fuse/dir.c
>>> +++ b/fs/fuse/dir.c
>>> @@ -34,33 +34,153 @@ static void fuse_advise_use_readdirplus(struct inode *dir)
>>>         set_bit(FUSE_I_ADVISE_RDPLUS, &fi->state);
>>>  }
>>>
>>> -#if BITS_PER_LONG >= 64
>>> -static inline void __fuse_dentry_settime(struct dentry *entry, u64 time)
>>> +struct fuse_dentry {
>>> +       u64 time;
>>> +       struct rcu_head rcu;
>>> +       struct rb_node node;
>>> +       struct dentry *dentry;
>>> +};
>>> +
>>
>> You lost the union with rcu_head.   Any other field is okay, none of
>> them matter in rcu protected code.  E.g.
>>
>> struct fuse_dentry {
>>         u64 time;
>>         union {
>>                 struct rcu_head rcu;
>>                 struct rb_node node;
>>         };
>>         struct dentry *dentry;
>> };
>
> Oops.  I'll fix that.
>
> Thanks a lot for your feedback, Miklos.  Much appreciated.  I'll re-work
> this patch and send a new revision shortly.
>
> Cheers,
> -- 
> Luís
