Re: [RFC] a possible way of reducing the PITA of ->d_name audits

On Sun, Sep 14, 2025 at 02:37:30AM +0100, Al Viro wrote:

> AFAICS, it can happen if you are there from nfs4_file_open(), hit
> _nfs4_opendata_to_nfs4_state(opendata), find ->rpc_done to be true
> in there, hit nfs4_opendata_find_nfs4_state(), have it call
> nfs4_opendata_get_inode() and run into a server without
> NFS_CAP_ATOMIC_OPEN_V1.  Then you get ->o_arg.claim set to
> NFS4_OPEN_CLAIM_NULL and hit this:
>                 inode = nfs_fhget(data->dir->d_sb, &data->o_res.fh,
>                                 &data->f_attr);
> possibly finding an inode other than the one your dentry has
> attached to it.
> 
> So the test might end up not being true, at least from my reading of
> that code.
> 
> What I don't understand is the reasons for not failing immediately
> with EOPENSTALE in that case.
> 
> TBH, I would be a lot more comfortable if the "attach inode to dentry"
> logics in there had been taken several levels up the call chains - analysis
> would be much easier that way...
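
For reference, the switch in question in nfs4_opendata_get_inode(),
paraphrased from my reading of the current tree (not verbatim -
double-check against whatever you are actually running):

	switch (data->o_arg.claim) {
	case NFS4_OPEN_CLAIM_NULL:
	case NFS4_OPEN_CLAIM_DELEGATE_CUR:
	case NFS4_OPEN_CLAIM_DELEGATE_PREV:
		if (!(data->f_attr.valid & NFS_ATTR_FATTR))
			return ERR_PTR(-EAGAIN);
		/* lookup by the filehandle the server returned;
		 * nothing ties the result to d_inode(data->dentry) */
		inode = nfs_fhget(data->dir->d_sb, &data->o_res.fh,
				&data->f_attr);
		break;
	default:
		/* claim was relative to the dentry's inode - reuse it */
		inode = d_inode(data->dentry);
		ihold(inode);
	}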
 
BTW, that's one place where your scheme of locking the dentry might cause
latency problems - two opens on the same cached dentry could be sent
in parallel, but if you hold the dentry locked against renames, etc., you
might end up with those two roundtrips serialized against each other...
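
To illustrate the concern (a hypothetical per-dentry lock held across
the OPEN roundtrip; the timeline below is an invented illustration,
not existing code):

	task A: lock(dentry) -- OPEN sent ... reply arrives -- unlock
	task B: lock(dentry) blocks until A unlocks -----------^ only
	        then sends its own OPEN, paying a second full roundtrip

Without the lock held across the wire call, both OPENs could be in
flight at the same time.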



