When file access conflicts occur between clients, the server recalls
delegations. If the client holding a delegation fails to return it after
a recall, nfs4_laundromat adds the delegation to the cl_revoked list.
Subsequent SEQUENCE operations then set the
SEQ4_STATUS_RECALLABLE_STATE_REVOKED flag, forcing the client to test
all of its delegations and return the revoked one.

However, if the client's return races with the laundromat like this:

nfs4_laundromat                  nfsd4_delegreturn
  unhash_delegation_locked
  list_add
  // add dp to reaplist
  // by dl_recall_lru
                                   list_del_init
                                   // delete dp from reaplist
                                   destroy_delegation
                                     unhash_delegation_locked
                                     // do nothing but return false
  revoke_delegation
    list_add
    // add dp to cl_revoked
    // by dl_recall_lru

then the delegation remains on the server's cl_revoked list even though
the client has already treated it as returned, so the client will not
find it when it reacts to SEQ4_STATUS_RECALLABLE_STATE_REVOKED. This
leads to a livelock: the server persistently sets
SEQ4_STATUS_RECALLABLE_STATE_REVOKED, and the client repeatedly tests
all of its delegations, severely impacting performance when numerous
delegations exist.

Since such abnormal delegations are removed from flc_lease via
nfs4_laundromat --> revoke_delegation --> destroy_unhashed_deleg -->
nfs4_unlock_deleg_lease --> kernel_setlease, they do not block new open
requests indefinitely, so retaining such a delegation on the server is
unnecessary.

Reported-by: Zhang Jian <zhangjian496@xxxxxxxxxx>
Fixes: 3bd64a5ba171 ("nfsd4: implement SEQ4_STATUS_RECALLABLE_STATE_REVOKED")
Closes: https://lore.kernel.org/all/ff8debe9-6877-4cf7-ba29-fc98eae0ffa0@xxxxxxxxxx/
Signed-off-by: Li Lingfeng <lilingfeng3@xxxxxxxxxx>
---
Changes in v2:
1) Set SC_STATUS_CLOSED unconditionally in destroy_delegation();
2) Determine whether to remove the delegation based on SC_STATUS_CLOSED,
   rather than by timeout;
3) Modify the commit message.
 fs/nfsd/nfs4state.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 88c347957da5..bb9e1df4e41f 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1336,6 +1336,11 @@ static void destroy_delegation(struct nfs4_delegation *dp)
 
 	spin_lock(&state_lock);
 	unhashed = unhash_delegation_locked(dp, SC_STATUS_CLOSED);
+	/*
+	 * Unconditionally set SC_STATUS_CLOSED, regardless of whether the
+	 * delegation is hashed, to mark the current delegation as invalid.
+	 */
+	dp->dl_stid.sc_status |= SC_STATUS_CLOSED;
 	spin_unlock(&state_lock);
 	if (unhashed)
 		destroy_unhashed_deleg(dp);
@@ -4326,6 +4331,8 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	int buflen;
 	struct net *net = SVC_NET(rqstp);
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+	struct list_head *pos, *next;
+	struct nfs4_delegation *dp;
 
 	if (resp->opcnt != 1)
 		return nfserr_sequence_pos;
@@ -4470,6 +4477,19 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	default:
 		seq->status_flags = 0;
 	}
+	if (!list_empty(&clp->cl_revoked)) {
+		spin_lock(&clp->cl_lock);
+		list_for_each_safe(pos, next, &clp->cl_revoked) {
+			dp = list_entry(pos, struct nfs4_delegation, dl_recall_lru);
+			if (dp->dl_stid.sc_status & SC_STATUS_CLOSED) {
+				list_del_init(&dp->dl_recall_lru);
+				spin_unlock(&clp->cl_lock);
+				nfs4_put_stid(&dp->dl_stid);
+				spin_lock(&clp->cl_lock);
+			}
+		}
+		spin_unlock(&clp->cl_lock);
+	}
 	if (!list_empty(&clp->cl_revoked))
 		seq->status_flags |= SEQ4_STATUS_RECALLABLE_STATE_REVOKED;
 	if (atomic_read(&clp->cl_admin_revoked))
-- 
2.46.1