Re: [PATCH] fanotify: support custom default close response

On Thu, Jul 3, 2025 at 4:43 PM Jan Kara <jack@xxxxxxx> wrote:
>
> On Thu 03-07-25 10:27:17, Amir Goldstein wrote:
> > On Thu, Jul 3, 2025 at 9:10 AM Ibrahim Jirdeh <ibrahimjirdeh@xxxxxxxx> wrote:
> > >
> > > > On Wed, Jul 2, 2025 at 6:15 PM Jan Kara <jack@xxxxxxx> wrote:
> > > > > Eventually the new service starts and we are in the situation I describe 3
> > > > > paragraphs above about handling pending events.
> > > > >
> > > > > So if we'd implement resending of pending events after group closure, I
> > > > > don't see how default response (at least in its current form) would be
> > > > > useful for anything.
> > > > >
> > > > > Why I like the proposal of resending pending events:
> > > > > a) No spurious FAN_DENY errors in case of service crash
> > > > > b) No need for new concept (and API) for default response, just a feature
> > > > >    flag.
> > > > > c) With additional ioctl to trigger resending pending events without group
> > > > >    closure, the newly started service can simply reuse the
> > > > >    same notification group (even in case of old service crash) thus
> > > > >    inheriting all placed marks (which is something Ibrahim would like to
> > > > >    have).
> > > >
> > >
> > > I'm also a fan of the approach of resending pending events. As
> > > mentioned, exposing this behavior as an ioctl, and thereby removing the need
> > > to recreate the fanotify group, makes the usage a fair bit simpler for our case.
> > >
> > > One basic question I have (mainly for understanding): if the FAN_RETRY flag is
> > > set in the proposed patch, and there is one existing group being
> > > closed (i.e. no handover setup), what would be the behavior for pending events?
> > > Is it the same as now, that events are allowed, except that they get resent once?
> >
> > Yes, same as now.
> > Instead of replying FAN_ALLOW, the syscall is restarted
> > to check whether a new watcher was added since this watcher took the event.
>
> Yes, except it isn't the whole syscall that's restarted, only the
> fsnotify() call.
>

Right. I missed that.
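To make the intended semantics concrete, here is a toy userspace sketch
(plain C; fsnotify_pass(), resolve_pending() and the struct are made-up
illustrations, not kernel code) of what would happen to an event still
pending in a closing group:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum response { RESP_NONE, RESP_ALLOW, RESP_DENY };

/* Toy stand-in for a fanotify group; "retry" plays the role of the
 * proposed FAN_RETRY flag. All names here are illustrative. */
struct group {
	bool alive;              /* group still has a listener */
	bool retry;
	enum response answer;    /* what the listener would reply */
};

/* One fsnotify()-style pass over the mark list: the first live group
 * with an opinion wins; if nobody objects, the access is allowed. */
static enum response fsnotify_pass(const struct group *g, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (g[i].alive && g[i].answer != RESP_NONE)
			return g[i].answer;
	}
	return RESP_ALLOW;
}

/* An event is still pending in a group that closes.  Without the retry
 * flag the kernel answers FAN_ALLOW on the group's behalf (today's
 * behavior).  With it, the fsnotify() pass is simply re-run, so a
 * daemon restarted in the meantime gets to see the event. */
static enum response resolve_pending(const struct group *dead,
				     const struct group *list, size_t n)
{
	if (!dead->retry)
		return RESP_ALLOW;
	return fsnotify_pass(list, n);
}
```

So with the flag set, the "retry" is just one more fsnotify() pass, and
with no new watcher present the outcome is the same FAN_ALLOW we have today.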

> > Wondering out loud:
> > Currently we order the marks on the mark obj_list,
> > within the same priority group, first-subscribed-last-handled.
> >
> > I never stopped to think whether this order made sense or not.
> > It seems it was just the easiest way to implement insertion in priority order.
> >
> > But if we ordered the marks first-subscribed-first-handled within the same
> > priority group, we wouldn't need to restart the syscall to restart mark
> > list iteration.
> >
> > The new group of the newly started daemon will be next in the mark list
> > after the stopped group returns FAN_ALLOW.
> > Isn't that a tad less intrusive a handover than restarting the syscall
> > and causing a bunch of unrelated subsystems to observe the restart?
> >
> > And I think the first-subscribed-first-handled order makes a bit more
> > sense logically.
> > I mean, if the admin really cares about putting some super important security
> > group first, the admin can make sure it's a priority service that
> > starts very early.
> > The admin cannot make sure that the important group starts last...
>
> So this idea also briefly crossed my mind yesterday, but I didn't look into
> it in detail. Looking at how we currently do mark iteration in fsnotify,
> this won't be very easy to implement. iter_info has an array of marks; some
> of those are marks we are currently reporting to, and some may be from
> the next group to report to. Some may even be NULL because for that mark
> type there were no more marks to report to. So it's difficult to make sure
> iter_info will properly pick up a freshly added group and its marks without
> completely restarting the mark iteration. I'm not saying it cannot be done,
> but I'm not sure it's worth the hassle.
>

Yes, I see what you mean.
Restarting fsnotify() seems more sensible.
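For the record, the ordering difference I was musing about above is easy
to demonstrate with a toy list (illustrative userspace C; prio/id and
insert_mark() are made up, this is not the real mark obj_list code):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of mark-list insertion.  Higher priority is handled first;
 * the question is only what happens among marks of equal priority. */
struct mark { int prio; int id; };

/* before_equal = 1: insert in front of equal-priority marks, i.e. the
 * current first-subscribed-last-handled order.  before_equal = 0:
 * insert after them, i.e. first-subscribed-first-handled. */
static void insert_mark(struct mark *list, size_t *n, struct mark m,
			int before_equal)
{
	size_t pos = 0;

	/* Find the insertion point: skip higher-priority marks, and,
	 * for the alternative order, equal-priority marks as well. */
	while (pos < *n &&
	       (list[pos].prio > m.prio ||
		(!before_equal && list[pos].prio == m.prio)))
		pos++;
	for (size_t i = *n; i > pos; i--)
		list[i] = list[i - 1];
	list[pos] = m;
	(*n)++;
}
```

With before_equal = 1, a freshly added group of the same priority is
iterated before the older one; with before_equal = 0 it is iterated
after, which is what would have let the old group's FAN_ALLOW fall
through to the new group without a restart.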

Thanks,
Amir.




