On Fri, Jul 18, 2025 at 07:57:15PM +0200, Bernd Schubert wrote:
> 
> 
> On 7/18/25 19:50, Joanne Koong wrote:
> > On Fri, Jul 18, 2025 at 9:37 AM Bernd Schubert <bernd@xxxxxxxxxxx> wrote:
> >>
> >> On 7/18/25 01:26, Darrick J. Wong wrote:
> >>> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> >>>
> >>> +/*
> >>> + * Flush all pending requests and wait for them. Only call this function when
> >>> + * it is no longer possible for other threads to add requests.
> >>> + */
> >>> +void fuse_flush_requests(struct fuse_conn *fc, unsigned long timeout)
> >>
> >> I wonder if this should have "abort" in its name. Because it is not a
> >> simple flush attempt, but also sets fc->blocked and fc->max_background.

I don't want to abort the connection here, because later I'll use this
same function to flush pending commands before sending a syncfs to the
fuse server and waiting for that as well:

https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/commit/?h=fuse-iomap-attrs&id=d4936bb06a81886de844f089ded85f1461b41e59

The server is still alive and accepting requests, so what I want is to
push all the FUSE_RELEASE requests to the server so that it will delete
all the "unlinked" open files (aka the .fusehiddenXXXX files) in the
filesystem and wait for that to complete.

> >>> +{
> >>> +	unsigned long deadline;
> >>> +
> >>> +	spin_lock(&fc->lock);
> >>> +	if (!fc->connected) {
> >>> +		spin_unlock(&fc->lock);
> >>> +		return;
> >>> +	}
> >>> +
> >>> +	/* Push all the background requests to the queue. */
> >>> +	spin_lock(&fc->bg_lock);
> >>> +	fc->blocked = 0;
> >>> +	fc->max_background = UINT_MAX;

Yeah, I was a little confused about this -- it looked like these two
lines will push all the pending background commands into the queue and
turn off max_background throttling. That might not be optimal for
what's otherwise still a live fuse server.

All I need here is for fc->bg_queue to be empty when flush_bg_queue
returns. I suppose I could wait in a loop, too:

	while (!list_empty(&fc->bg_queue)) {
		flush_bg_queue(fc);
		wait_event_timeout(..., fc->active_background > 0, HZ);
	}

But that's more complicated. ;)

> >>> +	flush_bg_queue(fc);
> >>> +	spin_unlock(&fc->bg_lock);
> >>> +	spin_unlock(&fc->lock);
> >>> +
> >>> +	/*
> >>> +	 * Wait 30s for all the events to complete or abort. Touch the
> >>> +	 * watchdog once per second so that we don't trip the hangcheck timer
> >>> +	 * while waiting for the fuse server.
> >>> +	 */
> >>> +	deadline = jiffies + timeout;
> >>> +	smp_mb();
> >>> +	while (fc->connected &&
> >>> +	       (!timeout || time_before(jiffies, deadline)) &&
> >>> +	       wait_event_timeout(fc->blocked_waitq,
> >>> +			!fc->connected || atomic_read(&fc->num_waiting) == 0,
> >>> +			HZ) == 0)
> >>> +		touch_softlockup_watchdog();
> >>> +}
> >>> +
> >>>  /*
> >>>   * Abort all requests.
> >>>   *
> >>> diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
> >>> index 9572bdef49eecc..1734c263da3a77 100644
> >>> --- a/fs/fuse/inode.c
> >>> +++ b/fs/fuse/inode.c
> >>> @@ -2047,6 +2047,7 @@ void fuse_conn_destroy(struct fuse_mount *fm)
> >>>  {
> >>>  	struct fuse_conn *fc = fm->fc;
> >>>  
> >>> +	fuse_flush_requests(fc, 30 * HZ);
> >>
> >> I think fc->connected should be set to 0, to avoid that new requests can
> >> be allocated.
> > 
> > fuse_abort_conn() logic is gated on "if (fc->connected)" so I think
> > fc->connected can only get set to 0 within fuse_abort_conn()

Keep in mind that the function says that it should not be used when
other threads can add new requests.
All current callers are in the unmount call stack so the only thread
that could add a new request is the current one.

> Hmm yeah, I wonder if we should allow multiple values in there. Like
> fuse_abort_conn sets UINT64_MAX and checks that and other functions
> could set values in between? We could add another variable, but given
> that it is used on every request allocation might be better to avoid too
> many conditions.

<shrug> It /would/ be nifty if fuse requests were associated with an
epoch and one could wait for an epoch to complete. But for something
that only gets called during unmount I didn't think it was worth the
extra surgery and object bloat.

--D

> 
> 
> Thanks,
> Bernd
> 
> 
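To make the syncfs use case mentioned near the top a bit more concrete,
a call site could look roughly like the sketch below. This is not the
code from the linked fuse-iomap-attrs branch: the function name
fuse_iomap_sync_fs() is a placeholder and the FUSE_SYNCFS plumbing is
elided. Only fuse_flush_requests() (from the patch above) and the usual
get_fuse_mount_super() helper from fs/fuse/fuse_i.h are assumed to
exist.

	/*
	 * Sketch only: flush the pending requests (e.g. the FUSE_RELEASE
	 * of unlinked .fusehiddenXXXX files) and wait for them, then ask
	 * the still-live fuse server to sync.
	 */
	static int fuse_iomap_sync_fs(struct super_block *sb, int wait)
	{
		struct fuse_mount *fm = get_fuse_mount_super(sb);

		/* push queued background requests and wait for completion */
		fuse_flush_requests(fm->fc, 30 * HZ);

		/* ...then send FUSE_SYNCFS to the fuse server as usual... */
		return 0;
	}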