On Thu, Jun 12, 2025 at 10:46:01PM -0700, Christoph Hellwig wrote:
> On Thu, Jun 12, 2025 at 12:22:42PM -0400, Jeff Layton wrote:
> > If you're against the idea, I won't waste my time.
> >
> > It would require some fairly hefty rejiggering of the receive code. The
> > v4 part would be pretty nightmarish to work out too since you'd have to
> > decode the compound as you receive to tell where the next op starts.
> >
> > The potential for corruption with unaligned writes is also pretty
> > nasty.
>
> Maybe I'm missing an improvement to the receive buffer handling in modern
> network hardware, but AFAIK this still would only help you to align the
> sunrpc data buffer to page boundaries, but avoid the data copy from the
> hardware receive buffer to the sunrpc data buffer as you still don't have
> hardware header splitting.

Correct, everything Jeff detailed is about ensuring the WRITE payload is
received at a page-aligned boundary. In practice that has proven a hard
requirement for O_DIRECT in my testing -- but I could be hitting some
bizarre driver bug in my TCP testbed (which sadly sits on top of older
VMware guests/drivers).

But if you look at patch 5 in this series:
https://lore.kernel.org/linux-nfs/20250610205737.63343-6-snitzer@xxxxxxxxxx/

I added fs/nfsd/vfs.c:is_dio_aligned(), which is basically a tweaked
copy of fs/btrfs/direct-io.c:check_direct_IO():

static bool is_dio_aligned(const struct iov_iter *iter, loff_t offset,
			   const u32 blocksize)
{
	u32 blocksize_mask;

	if (!blocksize)
		return false;

	blocksize_mask = blocksize - 1;
	if ((offset & blocksize_mask) ||
	    (iov_iter_alignment(iter) & blocksize_mask))
		return false;

	return true;
}

And fs/nfsd/vfs.c:nfsd_vfs_write() has (after my patch 5):

	nvecs = xdr_buf_to_bvec(rqstp->rq_bvec, rqstp->rq_maxpages, payload);
	iov_iter_bvec(&iter, ITER_SOURCE, rqstp->rq_bvec, nvecs, *cnt);

	if (nfsd_enable_dontcache) {
		if (is_dio_aligned(&iter, offset, nf->nf_dio_offset_align))
			flags |= RWF_DIRECT;

What I found is that unless SUNRPC TCP stored the WRITE payload at a
page-aligned boundary, the iov_iter_alignment() check would fail. The
@payload arg above, in my SUNRPC TCP testing, was always offset 148
bytes into the first page of the pages allocated for the xdr_buf's use,
which is rqstp->rq_pages, allocated by
net/sunrpc/svc_xprt.c:svc_alloc_arg().

> And I don't even know what this is supposed to buy the nfs server.
> Direct I/O writes need to have the proper file offset alignment, but as
> far as Linux is concerned we don't require any memory alignment. Most
> storage hardware has requirements for the memory alignment that we pass
> on, but typically that's just a dword (4-byte) alignment, which matches
> the alignment sunrpc wants for most XDR data structures anyway. So what
> additional alignment is actually needed for support direct I/O writes
> assuming that is the goal? (I might also simply misunderstand the
> problem).

THIS... this is precisely the question I discussed with Hammerspace's
CEO David Flynn when we talked through Linux's O_DIRECT support. David
shares your understanding and confusion. All I could tell him is that
in practice I have always page-aligned the data buffers used to issue
O_DIRECT, and that in this instance, if I don't, O_DIRECT doesn't work
(verified by commenting out the iov_iter_alignment() check in
is_dio_aligned() above).

But is that simply due to xdr_buf_to_bvec()'s use of bvec_set_virt()
for the xdr_buf "head" page (the first page of rqstp->rq_pages)?
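For reference, here is a minimal sketch of bvec_set_virt() and the
bvec_set_page() helper it wraps, as I read them from
include/linux/bvec.h -- just the helper pattern, not the actual
xdr_buf_to_bvec() body, with the 148-byte offset from my testing used
for illustration:

static inline void bvec_set_page(struct bio_vec *bv, struct page *page,
		unsigned int len, unsigned int offset)
{
	bv->bv_page = page;
	bv->bv_len = len;
	bv->bv_offset = offset;
}

static inline void bvec_set_virt(struct bio_vec *bv, void *vaddr,
		unsigned int len)
{
	/*
	 * offset_in_page() preserves any sub-page offset of @vaddr in
	 * bv_offset, so a head buffer starting 148 bytes into its page
	 * yields bv_offset = 148.
	 */
	bvec_set_page(bv, virt_to_page(vaddr), len, offset_in_page(vaddr));
}

/*
 * iov_iter_alignment() ORs each segment's offset and length into its
 * result, so that one head bvec alone trips the blocksize_mask check
 * in is_dio_aligned() above:
 *
 *	148 & (512 - 1)  == 148		misaligned for 512-byte blocks
 *	148 & (4096 - 1) == 148		misaligned for 4K blocks
 *
 * (148 is dword-aligned, though, which fits your point about sunrpc's
 * XDR alignment.)
 */

If that reading is right, the head is the only bvec carrying a non-zero
bv_offset, which would square with the failure going away once the
payload lands page-aligned.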
Whereas, as you can see, xdr_buf_to_bvec() uses bvec_set_page() to add
each of the other pages that immediately follow that first "head" page.

All said, if Linux can/should happily allow non-page-aligned DIO (so we
only need to worry about the on-disk DIO alignment requirements), that'd
be wonderful. Then it's just a matter of finding where that is broken...

Happy to dig into this further if you might nudge me in the right
direction.

Thanks,
Mike