Re: NFS client read bandwidth is not scaling with nconnect values higher than 4

Update -

I thought that was a TCP limitation and switched to RDMA, but I got the
same performance, even slightly worse than with TCP.
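
For reference, the mounts were roughly like this (server:/export is a
placeholder; the RDMA variant assumes the rpcrdma module is loaded and
a kernel with nconnect support over RDMA):

# TCP mount with multiple connections
mount -t nfs -o vers=4.2,nconnect=8 server:/export /mnt

# RDMA mount with the same connection count (20049 is the conventional
# NFS/RDMA port)
mount -t nfs -o vers=4.2,proto=rdma,port=20049,nconnect=8 server:/export /mnt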

So I can't get more than 16 GiB/s per mount point by increasing nconnect beyond 4.
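
To rule out the traffic collapsing onto a single connection, the
per-transport counters can be checked; assuming the mount point is /mnt
as in the fio run quoted below, something like this should print one
xprt: line per connection, with roughly equal counters if the load is
being spread:

# List the per-connection RPC transport stats for the /mnt mount; with
# nconnect in effect there is one "xprt:" line per connection.
awk '/^device /{m = /mounted on \/mnt /} m && /xprt:/' /proc/self/mountstats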

Anton

Sat, 5 Apr 2025 at 19:01, Anton Gavriliuk <antosha20xx@xxxxxxxxx>:
>
> There is a file I can read locally in one thread at ~55 GB/s:
>
> [root@localhost anton]# fio --name=test --rw=read --bs=128k
> --filename=/mnt/testfile --direct=1 --numjobs=1 --iodepth=64 --exitall
> --group_reporting --ioengine=io_uring --runtime=30 --time_based
> test: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T)
> 128KiB-128KiB, ioengine=io_uring, iodepth=64
> fio-3.39-31-gc283
> Starting 1 process
> Jobs: 1 (f=1): [R(1)][100.0%][r=51.6GiB/s][r=422k IOPS][eta 00m:00s]
> test: (groupid=0, jobs=1): err= 0: pid=19114: Sat Apr  5 18:52:55 2025
>   read: IOPS=423k, BW=51.7GiB/s (55.5GB/s)(1551GiB/30001msec)
>
> I exported the file to an NFS client (hardware identical to the NFS
> server), directly connected with a 200 Gbps ConnectX-7 NIC and a 2 m
> DAC cable.
>
> Without the nconnect option, reading in exactly the same way as locally, I got 4 GB/s.
> With nconnect=2, 8 GB/s
> With nconnect=4, 15 GB/s
> With nconnect=6, 15.5 GB/s
> With nconnect=8, 15.8 GB/s
>
> Why doesn't it scale up to 23-24 GB/s as nconnect increases beyond 4?
