Re: [RFC PATCH 1/2] NFSD: fix misaligned DIO READ to not use a start_extra_page, exposes rpcrdma bug?

On 9/2/25 5:06 PM, Mike Snitzer wrote:
> On Tue, Sep 02, 2025 at 01:59:12PM -0400, Chuck Lever wrote:
>> On 9/2/25 11:56 AM, Chuck Lever wrote:
>>> On 8/30/25 1:38 PM, Mike Snitzer wrote:
>>
>>>> dt (j:1 t:1): File System Information:
>>>> dt (j:1 t:1):            Mounted from device: 192.168.0.105:/hs_test
>>>> dt (j:1 t:1):           Mounted on directory: /mnt/hs_test
>>>> dt (j:1 t:1):                Filesystem type: nfs4
>>>> dt (j:1 t:1):             Filesystem options: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,nconnect=16,port=20491,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.106,local_lock=none,addr=192.168.0.105
>>>
>>> I haven't been able to reproduce a similar failure in my lab with
>>> NFSv4.2 over RDMA with FDR InfiniBand. I've run dt 6-7 times, all
>>> successful. Also, for shits and giggles, I tried the fsx-based subtests in
>>> fstests, no new failures there either. The export is xfs on an NVMe
>>> add-on card; server uses direct I/O for READ and page cache for WRITE.
>>>
>>> Notice the mount options for your test run: "proto=tcp" and
>>> "nconnect=16". Even if your network fabric is RoCE, "proto=tcp" will
>>> not use RDMA at all; it will use bog-standard TCP/IP on your
>>> ultra-fast Ethernet network.
>>>
>>> What should I try next? I can apply 2/2 or add "nconnect" or move the
>>> testing to my RoCE fabric after lunch and keep poking at it.
> 
> Hmm, I'll have to check with the Hammerspace performance team to
> understand how RDMA is used if the client mount has proto=tcp.
> 
> Certainly surprising, thanks for noticing/reporting this aspect.
> 
> I also cannot reproduce on a normal tcp mount and testbed.  This
> frankenbeast of a fast "RDMA" network that is misconfigured to use
> proto=tcp is the only testbed where I've seen this dt data mismatch.
> 
>>> Or, I could switch to TCP. Suggestions welcome.
>>
>> The client is not sending any READ procedures/operations to the server.
>> The following is NFSv3 for clarity, but NFSv4.x results are similar:
>>
>>             nfsd-1669  [003]  1466.634816: svc_process:
>> addr=192.168.2.67 xid=0x7b2a6274 service=nfsd vers=3 proc=NULL
>>             nfsd-1669  [003]  1466.635389: svc_process:
>> addr=192.168.2.67 xid=0x7d2a6274 service=nfsd vers=3 proc=FSINFO
>>             nfsd-1669  [003]  1466.635420: svc_process:
>> addr=192.168.2.67 xid=0x7e2a6274 service=nfsd vers=3 proc=PATHCONF
>>             nfsd-1669  [003]  1466.635451: svc_process:
>> addr=192.168.2.67 xid=0x7f2a6274 service=nfsd vers=3 proc=GETATTR
>>             nfsd-1669  [003]  1466.635486: svc_process:
>> addr=192.168.2.67 xid=0x802a6274 service=nfsacl vers=3 proc=NULL
>>             nfsd-1669  [003]  1466.635558: svc_process:
>> addr=192.168.2.67 xid=0x812a6274 service=nfsd vers=3 proc=FSINFO
>>             nfsd-1669  [003]  1466.635585: svc_process:
>> addr=192.168.2.67 xid=0x822a6274 service=nfsd vers=3 proc=GETATTR
>>             nfsd-1669  [003]  1470.029208: svc_process:
>> addr=192.168.2.67 xid=0x832a6274 service=nfsd vers=3 proc=ACCESS
>>             nfsd-1669  [003]  1470.029255: svc_process:
>> addr=192.168.2.67 xid=0x842a6274 service=nfsd vers=3 proc=LOOKUP
>>             nfsd-1669  [003]  1470.029296: svc_process:
>> addr=192.168.2.67 xid=0x852a6274 service=nfsd vers=3 proc=FSSTAT
>>             nfsd-1669  [003]  1470.039715: svc_process:
>> addr=192.168.2.67 xid=0x862a6274 service=nfsacl vers=3 proc=GETACL
>>             nfsd-1669  [003]  1470.039758: svc_process:
>> addr=192.168.2.67 xid=0x872a6274 service=nfsd vers=3 proc=CREATE
>>             nfsd-1669  [003]  1470.040091: svc_process:
>> addr=192.168.2.67 xid=0x882a6274 service=nfsd vers=3 proc=WRITE
>>             nfsd-1669  [003]  1470.040469: svc_process:
>> addr=192.168.2.67 xid=0x892a6274 service=nfsd vers=3 proc=GETATTR
>>             nfsd-1669  [003]  1470.040503: svc_process:
>> addr=192.168.2.67 xid=0x8a2a6274 service=nfsd vers=3 proc=ACCESS
>>             nfsd-1669  [003]  1470.041867: svc_process:
>> addr=192.168.2.67 xid=0x8b2a6274 service=nfsd vers=3 proc=FSSTAT
>>             nfsd-1669  [003]  1470.042109: svc_process:
>> addr=192.168.2.67 xid=0x8c2a6274 service=nfsd vers=3 proc=REMOVE
>>
>> So I'm probably missing some setting on the reproducer/client.
>>
>> /mnt from klimt.ib.1015granger.net:/export/fast
>>  Flags:	rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,
>>   fatal_neterrors=none,proto=rdma,port=20049,timeo=600,retrans=2,
>>   sec=sys,mountaddr=192.168.2.55,mountvers=3,mountproto=tcp,
>>   local_lock=none,addr=192.168.2.55
>>
>> Linux morisot.1015granger.net 6.15.10-100.fc41.x86_64 #1 SMP
>>  PREEMPT_DYNAMIC Fri Aug 15 14:55:12 UTC 2025 x86_64 GNU/Linux
> 
> If you're using LOCALIO (client on server), that would explain why
> you're not seeing any READs coming over the wire to NFSD.
> 
> I've made sure to disable LOCALIO on my client, with:
> echo N > /sys/module/nfs/parameters/localio_enabled

I am testing with a physically separate client and server, so I believe
LOCALIO is not in play. I do see WRITEs, and other workloads (in
particular "fsx -Z <fname>") do show READ traffic; the new trace point
fires quite a bit and reports misaligned READ requests. So whatever is
happening here has something to do with dt.
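
For clarity, here is roughly what I mean by "misaligned" above. This is
a small self-contained sketch, not the code in your patches; the helper
name and the types are mine, and dio_align is assumed to be a power of
two:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /*
     * Illustration only: a direct-I/O READ is "misaligned" when its
     * offset or count is not a multiple of the DIO alignment, so the
     * payload cannot land page-aligned in rq_pages without a leading
     * pad.
     */
    bool dio_read_is_misaligned(uint64_t offset, size_t count,
                                uint32_t dio_align)
    {
        return ((offset | count) & ((uint64_t)dio_align - 1)) != 0;
    }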

If I understand your two patches correctly, they still pull a page from
the end of rq_pages to use as the initial pad page. That, I think, is a
working implementation, not the failing one.
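
To make sure we are describing the same mechanism, here is the
arithmetic as I understand it, written as a standalone sketch. The names
and the 4KiB page size are mine, purely for illustration; this is not
the patch itself:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE ((uint64_t)4096)  /* illustrative x86-64 page size */

    /*
     * Illustration only: round a misaligned READ offset down to a page
     * boundary so the direct I/O can be issued page-aligned. The leading
     * "pad" bytes land in a borrowed page (as I read the patches, a page
     * pulled from the end of rq_pages) and are not returned to the
     * client. Tail alignment is not shown.
     */
    static void dio_read_bounds(uint64_t offset, size_t count,
                                uint64_t *dio_start, size_t *pad_len,
                                size_t *dio_len)
    {
        *dio_start = offset & ~(PAGE_SIZE - 1); /* aligned start of the DIO */
        *pad_len   = offset - *dio_start;       /* leading bytes to skip    */
        *dio_len   = *pad_len + count;          /* bytes the DIO must cover */
    }

    int main(void)
    {
        uint64_t start;
        size_t pad, len;

        /* e.g. a READ of 8192 bytes at offset 1536 */
        dio_read_bounds(1536, 8192, &start, &pad, &len);
        printf("dio_start=%llu pad=%zu dio_len=%zu\n",
               (unsigned long long)start, pad, len); /* 0 1536 9728 */
        return 0;
    }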

EOD -- will continue tomorrow.


-- 
Chuck Lever