On Tue, 2025-03-25 at 13:59 -0400, Jeff Layton wrote:
> On Tue, 2025-03-25 at 12:17 -0400, trondmy@xxxxxxxxxx wrote:
> > From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
> > 
> > If someone calls nfs_mark_client_ready(clp, status) with a negative
> > value for status, then that should signal that the nfs_client is no
> > longer valid.
> > 
> > Signed-off-by: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
> > ---
> >  fs/nfs/nfs4state.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> > index 542cdf71229f..738eb2789266 100644
> > --- a/fs/nfs/nfs4state.c
> > +++ b/fs/nfs/nfs4state.c
> > @@ -1198,7 +1198,7 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
> >  	struct rpc_clnt *clnt = clp->cl_rpcclient;
> >  	bool swapon = false;
> >  
> > -	if (clnt->cl_shutdown)
> > +	if (clnt->cl_shutdown || clp->cl_cons_state < 0)
> 
> Would it be simpler to just set cl_shutdown when this occurs instead of
> having to check cl_cons_state as well?

Do we need the check for clnt->cl_shutdown at all here? I'd expect any
caller of this function to already hold a reference to the client, which
means that the RPC client should still be up. I'm a little suspicious of
the check in nfs41_sequence_call_done() too.

> >  		return;
> >  
> >  	set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
> > @@ -1403,7 +1403,7 @@ int nfs4_schedule_stateid_recovery(const struct nfs_server *server, struct nfs4_
> >  	dprintk("%s: scheduling stateid recovery for server %s\n", __func__,
> >  			clp->cl_hostname);
> >  	nfs4_schedule_state_manager(clp);
> > -	return 0;
> > +	return clp->cl_cons_state < 0 ? clp->cl_cons_state : 0;
> >  }
> >  EXPORT_SYMBOL_GPL(nfs4_schedule_stateid_recovery);
> > 

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx
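
P.S. For anyone skimming the thread, the control flow the patch changes can
be modelled in a few lines of userspace C. This is only a sketch with toy
types and hypothetical names (struct nfs_client here has just the fields
needed, and schedule_state_manager()/schedule_stateid_recovery() stand in
for the real kernel functions): a negative cl_cons_state makes the state
manager bail out, and the recovery path propagates that error instead of
unconditionally returning 0.

```c
#include <assert.h>

/* Toy model -- not the real kernel structures. A negative
 * cl_cons_state means the nfs_client is no longer valid. */
struct nfs_client {
	int cl_cons_state;	/* negative errno => client init failed */
	int manager_runs;	/* counts state-manager schedulings */
};

/* Models nfs4_schedule_state_manager() after the patch:
 * refuse to schedule work for an invalid client. */
static void schedule_state_manager(struct nfs_client *clp)
{
	if (clp->cl_cons_state < 0)
		return;
	clp->manager_runs++;
}

/* Models nfs4_schedule_stateid_recovery() after the patch:
 * surface the stored error rather than always returning 0. */
static int schedule_stateid_recovery(struct nfs_client *clp)
{
	schedule_state_manager(clp);
	return clp->cl_cons_state < 0 ? clp->cl_cons_state : 0;
}
```

With a healthy client the manager runs and the call returns 0; with
cl_cons_state set to a negative errno, the manager never runs and the
caller sees the error.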