On Fri, Sep 12, 2025 at 11:11 AM Trond Myklebust <trondmy@xxxxxxxxxx> wrote:
>
> On Fri, 2025-09-12 at 10:41 -0400, Olga Kornievskaia wrote:
> > On Fri, Sep 12, 2025 at 10:29 AM Trond Myklebust <trondmy@xxxxxxxxxx> wrote:
> > >
> > > On Fri, 2025-09-12 at 10:21 -0400, Olga Kornievskaia wrote:
> > > > Any comments on or objections to this patch? It does lead to
> > > > possible data corruption.
> > > >
> > >
> > > Sorry, I think I was travelling when you originally sent this patch.
> > >
> > > > On Mon, Aug 11, 2025 at 2:25 PM Olga Kornievskaia
> > > > <okorniev@xxxxxxxxxx> wrote:
> > > > >
> > > > > RFC7530 states that clients should be prepared for the return of
> > > > > NFS4ERR_GRACE errors for non-reclaim lock and I/O requests.
> > > > >
> > > > > Signed-off-by: Olga Kornievskaia <okorniev@xxxxxxxxxx>
> > > > > ---
> > > > >  fs/nfs/nfs4proc.c | 4 ++--
> > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> > > > > index 341740fa293d..fa9b81300604 100644
> > > > > --- a/fs/nfs/nfs4proc.c
> > > > > +++ b/fs/nfs/nfs4proc.c
> > > > > @@ -7867,10 +7867,10 @@ int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state,
> > > > >                 return err;
> > > > >         do {
> > > > >                 err = _nfs4_do_setlk(state, F_SETLK, fl, NFS_LOCK_NEW);
> > > > > -               if (err != -NFS4ERR_DELAY)
> > > > > +               if (err != -NFS4ERR_DELAY && err != -NFS4ERR_GRACE)
> > > > >                         break;
> > > > >                 ssleep(1);
> > > > > -       } while (err == -NFS4ERR_DELAY);
> > > > > +       } while (err == -NFS4ERR_DELAY || err == -NFSERR_GRACE);
> > > > >         return nfs4_handle_delegation_recall_error(server, state,
> > > > >                                                    stateid, fl, err);
> > > > >  }
> > > > >
> > > > > --
> > > > > 2.47.1
> > > > >
> > >
> > > Should the server be sending NFS4ERR_GRACE in this case, though? The
> > > client already holds a delegation, so it is clear that other clients
> > > cannot reclaim any locks that would conflict.
> > >
> > > ..or is the issue that this could happen before the client has a
> > > chance to reclaim the delegation after a reboot?
>
> To answer my own question here: in that case the server would return
> NFS4ERR_BAD_STATEID, and not NFS4ERR_GRACE.
>
> > The scenario was: a v4 client had an open file and a lock, and upon
> > server reboot (during grace) it sends the reclaim open, to which the
> > server replies with a delegation. Now a v3 client comes in and
> > requests the same lock. The Linux server at this point sends a
> > delegation recall to the v4 client; the client sends its local lock
> > request and gets ERR_GRACE.
> >
> > And the spec explicitly notes, as I mention in the commit comment,
> > that the client is supposed to handle ERR_GRACE for non-reclaim
> > locks. Thus this patch.
> >
>
> Sure, however the same spec also says (Section 9.6.2):
>
>    If the server can reliably determine that granting a non-reclaim
>    request will not conflict with reclamation of locks by other clients,
>    the NFS4ERR_GRACE error does not have to be returned and the
>    non-reclaim client request can be serviced.
>
> The server can definitely reliably determine that is the case here,
> since it already granted the delegation to the client.

I'll take your word for it, as I'm not that well versed in the server
code. But that is an optimization, and it's hard to argue that a server
must implement it, so shouldn't the client handle the case that actually
happens in practice today?
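For reference, here is the retried path as the patch intends it, written
out consistently (a sketch, not the committed hunk). Note that the quoted
diff tests -NFS4ERR_GRACE in the if but -NFSERR_GRACE in the while; both
names exist in the kernel headers with the same value (10013), so it
should compile either way, but the NFSv4 name is presumably what was
meant:

        do {
                err = _nfs4_do_setlk(state, F_SETLK, fl, NFS_LOCK_NEW);
                /* Retry while the server is in grace or asks us to back off. */
                if (err != -NFS4ERR_DELAY && err != -NFS4ERR_GRACE)
                        break;
                ssleep(1);
        } while (err == -NFS4ERR_DELAY || err == -NFS4ERR_GRACE);
        return nfs4_handle_delegation_recall_error(server, state,
                                                   stateid, fl, err);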
> I'm not saying that the client shouldn't also handle NFS4ERR_GRACE, but
> I am stating that the server shouldn't really be putting us in this
> situation in the first place.
> I'm also saying that if we're going to handle NFS4ERR_GRACE, then we
> also need to handle all the other possible errors under a reboot
> scenario.

I don't see how the "if" and the "then" are connected. If there are
other errors we don't handle during reclaim, then we should handle
them, but I don't see why doing so is conditional on handling the
ERR_GRACE error.

> --
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trondmy@xxxxxxxxxx, trond.myklebust@xxxxxxxxxxxxxxx
>
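Purely as an illustration of the broader point about reboot errors, and
not something proposed in the thread: one hypothetical shape this could
take is to keep the retry loop narrow and let the existing recall error
handler deal with the remaining errors. The error constants below are
real NFSv4 errors, but whether and where each should be handled is
exactly what is under discussion:

        for (;;) {
                err = _nfs4_do_setlk(state, F_SETLK, fl, NFS_LOCK_NEW);
                if (err != -NFS4ERR_DELAY && err != -NFS4ERR_GRACE)
                        break;  /* anything else is not a transient condition */
                ssleep(1);      /* server busy or still in grace: back off, retry */
        }
        /*
         * Reboot-related errors such as -NFS4ERR_BAD_STATEID,
         * -NFS4ERR_STALE_STATEID or -NFS4ERR_EXPIRED fall through to
         * this handler, which would need to map them to state recovery
         * rather than returning them to the caller.
         */
        return nfs4_handle_delegation_recall_error(server, state,
                                                   stateid, fl, err);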