Re: Resetting pending fanotify events

On Sat 19-04-25 00:37:44, Amir Goldstein wrote:
> On Tue, Apr 15, 2025 at 5:51 PM Jan Kara <jack@xxxxxxx> wrote:
> >
> > On Wed 09-04-25 14:36:16, Amir Goldstein wrote:
> > > On Tue, Apr 8, 2025 at 8:55 PM Ibrahim Jirdeh <ibrahimjirdeh@xxxxxxxx> wrote:
> > > >
> > > > > 1. Start a new server instance
> > > > > 2. Set default response in case of new instance crash
> > > > > 3. Hand over a ref of the existing group fd to the new instance if the
> > > > > old instance is running
> > > > > 4. Start handling events in new instance (*)
> > > > > 5. Stop handling new events in old instance, but complete pending events
> > > > > 6. Shutdown old instance
> > > >
> > > > I think this should work for our case; we will only need to reconstruct
> > > > the group/interested mask in case of a crash. I can help add the feature for
> > > > setting different default responses.
> > > >
> > >
> > > Please go ahead.
> > >
> > > We did not yet get any feedback from Jan on this idea,
> > > but ain't nothing like a patch to solicit feedback.
> >
> > I'm sorry for the delay but I wanted to find time to give this some deeper
> > thought.
> >
> 
> Same here. I had to think hard.
> 
> > > > > I might have had some patches similar to this floating around.
> > > > > If you are interested in this feature, I could write and test a proper patch.
> > > >
> > > > That would be appreciated if it's not too much trouble; the approach outlined
> > > > in the sketch should be enough for our use-case (pending the sb vs mount
> > > > monitoring point you've raised).
> > > >
> > >
> > > Well, the only problem is when I can get to it, which does not appear to be
> > > anytime soon. If this is an urgent issue for you, I could give you more pointers
> > > to try and do it yourself.
> > >
> > > There is one design decision that we would need to make before
> > > getting to the implementation.
> > > Assuming that this API is acceptable:
> > >
> > > fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_FILESYSTEM | FAN_MARK_DEFAULT, ...
> > >
> > > What happens when fd is closed?
> > > Can the sbinfo->default_mask outlive the group fd?
> >
> > So I think there are two options for how to consistently handle this and we
> > need to decide which one to pick. Do we want to:
> >
> > a) tie the concept to a particular notification group - i.e., if a particular
> > notification group no longer exists, we want events on particular
> > object(s) auto-rejected.
> >
> > or
> >
> > b) tie the concept to the object itself - i.e., if there's no notification
> > group handling events for the object, auto-reject the events.
> >
> > Both have their advantages and disadvantages. With a) we can easily have
> > multiple handlers cooperate on one filesystem (e.g. an HSM and an antivirus
> > solution), the notification group can just register itself as mandatory for
> > all events on the superblock object, and we don't have to care about details
> > of how the notification group watches for events. But what gets complex
> > with this variant is how to hand over control from the old to the new
> > version of the service, or even worse how to recover from a crashed service -
> > you need to register the new group as mandatory and somehow "unregister"
> > the crashed one.
> >
> 
> I prefer this option, but with a variant -
> The group has two fds:
> one control-only fd (RDONLY) to keep it alive and add marks
> and one queue-fd (RDWR) to handle events.
> 
> The control fd can be placed in the fd store.
> When the service crashes, the queue fd is closed, so the group
> cannot handle events and the default response is returned.
> 
> When the service starts, it finds the control fd in the fd store and
> issues an ioctl or something to get the queue fd.

Yes, this sounds elegant. I like it.
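
For concreteness, a minimal sketch of how that could be wired up with the
systemd fd store. The sd_* calls are the real systemd API from
<systemd/sd-daemon.h> (the unit needs FileDescriptorStoreMax= set); the
FAN_IOC_GET_QUEUE_FD ioctl is just a made-up name for the "ioctl or
something" above and exists in no kernel, and error handling is omitted:

#include <string.h>
#include <fcntl.h>
#include <sys/fanotify.h>
#include <sys/ioctl.h>
#include <systemd/sd-daemon.h>

/* Hypothetical ioctl number so the sketch compiles; not a real ABI. */
#define FAN_IOC_GET_QUEUE_FD	_IO('F', 0xff)

static void get_group_fds(int *control_fd, int *queue_fd)
{
	char **names = NULL;
	int i, n = sd_listen_fds_with_names(0, &names);

	for (i = 0; i < n; i++) {
		if (strcmp(names[i], "fanotify-control") == 0) {
			/* Handover / crash recovery: reuse the surviving group. */
			*control_fd = SD_LISTEN_FDS_START + i;
			*queue_fd = ioctl(*control_fd, FAN_IOC_GET_QUEUE_FD);
			return;
		}
	}

	/* First start: create the group (assumed to hand back the control-only
	 * fd under the proposed two-fd design) and park it in the fd store. */
	*control_fd = fanotify_init(FAN_CLASS_CONTENT, O_RDONLY);
	*queue_fd = ioctl(*control_fd, FAN_IOC_GET_QUEUE_FD);
	sd_pid_notify_with_fds(0, 0, "FDSTORE=1\nFDNAME=fanotify-control",
			       control_fd, 1);
}

With that, the handover steps from earlier in the thread reduce to the fd
store lookup, and the default response applies exactly while no queue fd
is open.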

> > For b) hand-over or crash recovery is simple. As soon as someone places a
> > mark over a given object, it is implicitly the new handler for the object and
> > auto-reject does not trigger. But if you require that e.g. all events on the
> > superblock need to be handled, then you really have to set up watches so
> > that the notification system understands each event really got handled (which
> > potentially conflicts with the effort to avoid handling uninteresting
> > events). Also any coexistence of two services using this facility is going
> > to be "interesting".
> >
> > > I think that closing this group should remove the default mask
> > > and then the default mask is at least visible in the fdinfo of this fd.
> >
> > Once we decide the above dilemma, we can decide on the best way to
> > set these and also about visibility (I agree that is very important as
> > well).
> 
> With the control fd design, this problem is also solved - the marks
> are still visible on the control fd, and so will be the default response
> and the state of the queue fd.

Yep, sounds good.
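
For the record, current kernels already list a group's flags and marks in
/proc/<pid>/fdinfo/<fd>, so showing the default mask/response there on the
control fd would be a natural extension. A trivial sketch for eyeballing it
(the default-response line is of course not there today):

#include <stdio.h>

/* Dump the fdinfo of a fanotify group fd: the kernel prints the group's
 * flags and one line per mark there. */
static void dump_fanotify_fdinfo(int group_fd)
{
	char path[64], line[256];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", group_fd);
	f = fopen(path, "r");
	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
}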

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR



