Re: [PATCH 0/6] NFSD: add enable-dontcache and initially use it to add DIO support

On 6/12/25 3:08 PM, Mike Snitzer wrote:
> On Thu, Jun 12, 2025 at 09:46:12AM -0400, Chuck Lever wrote:
>> But, can we get more insight into specifically where the CPU
>> utilization reduction comes from? Is it lock contention? Is it
>> inefficient data structure traversal? Any improvement here benefits
>> everyone, so that should be a focus of some study.
> 
> Buffered IO just commands more resources than O_DIRECT for workloads
> with a working set that exceeds system memory.

No doubt. However, using direct I/O has some consequences that we might
be able to avoid if we understand better how to manage the server's
cache rather than not caching at all.
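
To make the distinction concrete: O_DIRECT bypasses the page cache
entirely, while RWF_DONTCACHE is still an ordinary buffered write
whose pages are dropped once writeback completes, so a streaming
working set does not evict everything else. A minimal userspace
sketch (not NFSD code; it assumes Linux 6.14+ headers that define
RWF_DONTCACHE, the file names are invented, and error handling is
elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
        void *buf;
        struct iovec iov;

        if (posix_memalign(&buf, 4096, 1 << 20))
                return 1;
        memset(buf, 0, 1 << 20);

        /* O_DIRECT: no page cache at all; buffer, offset, and length
         * must be aligned to the device's logical block size. */
        int dfd = open("/mnt/test/direct.dat",
                       O_WRONLY | O_CREAT | O_DIRECT, 0644);
        pwrite(dfd, buf, 1 << 20, 0);

        /* RWF_DONTCACHE: a buffered write whose pages are reclaimed
         * from the page cache once writeback finishes. */
        int bfd = open("/mnt/test/dontcache.dat",
                       O_WRONLY | O_CREAT, 0644);
        iov.iov_base = buf;
        iov.iov_len  = 1 << 20;
        pwritev2(bfd, &iov, 1, 0, RWF_DONTCACHE);

        close(dfd);
        close(bfd);
        free(buf);
        return 0;
}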


> Each of the 6 servers has 1TiB of memory.
> 
> So for the above 6 client 128 PPN IOT "easy" run, each client thread
> is writing and then reading 266 GiB.  That creates an aggregate
> working set of 199.50 TiB.
> 
> The 199.50 TiB working set dwarfs the servers' aggregate 6 TiB of
> memory.  Being able to drive each of the 8 NVMe in each server as
> efficiently as possible is critical.
> 
> As you can see from the NVMe performance above, O_DIRECT is best.
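
(Spelling out the arithmetic behind that working-set figure, as I
read it:

    6 clients * 128 PPN          =  768 writer threads
    768 threads * 266 GiB each   =  204,288 GiB  ~=  199.5 TiB

which is roughly 33x the servers' combined 6 TiB of DRAM.)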

Well, I see that direct I/O is the better of the two choices (full
caching vs. direct I/O) when the backing storage is nearly as fast as
memory. The sticking point for me is what will happen with slower
backing storage.


> "The nfs client largely aligns all of the page caceh based IO, so I'd
> think that O_DIRECT on the server side would be much more performant
> than RWF_DONTCACHE. Especially as XFS will do concurrent O_DIRECT
> writes all the way down to the storage....."
> 
> (Dave would be correct about NFSD's page alignment if RDMA is used,
> but that is obviously not the case with TCP, because SUNRPC receives
> the TCP WRITE payload into misaligned pages).

RDMA gives us the opportunity to align the sink buffer pages on the NFS
server, yes. However, I'm not sure whether NFSD currently goes to the trouble
of actually doing that alignment before starting RDMA Reads. There
always seems to be one or more data copies needed when going through
nfsd_vfs_write().

If the application has aligned the WRITE payload already, we might not
notice that deficiency for many common workloads. For example, if most
unaligned writes come from small payloads, server-side re-alignment
might not matter -- there could be intrinsic RMW cycles that erase the
benefits of buffer alignment. Big payloads are usually aligned to
memory and file pages already.
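
To make that concrete, below is a rough kernel-style sketch of the
kind of alignment test I have in mind. It is not actual NFSD code;
the function name, the bio_vec view of the payload, and the idea of
gating a direct submission on such a check are all assumptions for
illustration.

#include <linux/align.h>        /* IS_ALIGNED() */
#include <linux/bvec.h>         /* struct bio_vec */
#include <linux/types.h>

/*
 * Return true only if the WRITE could go down a direct I/O path
 * without forcing a read-modify-write: the file offset and length,
 * and every fragment of the payload buffer, must be aligned to the
 * filesystem block size.
 */
static bool payload_is_dio_aligned(loff_t offset, size_t len,
                                   const struct bio_vec *bvec,
                                   unsigned int nr_frags,
                                   unsigned int blocksize)
{
        unsigned int i;

        if (!IS_ALIGNED(offset, blocksize) || !IS_ALIGNED(len, blocksize))
                return false;

        for (i = 0; i < nr_frags; i++)
                if (!IS_ALIGNED(bvec[i].bv_offset, blocksize) ||
                    !IS_ALIGNED(bvec[i].bv_len, blocksize))
                        return false;

        return true;
}

On the TCP side a check like this would almost always fail today,
since SUNRPC receives the payload into misaligned pages; with RDMA it
depends on whether the sink buffers were set up page-aligned in the
first place.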

Something to look into.


-- 
Chuck Lever



