Re: [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned

On 8/27/25 3:41 PM, Mike Snitzer wrote:
> On Wed, Aug 27, 2025 at 11:34:03AM -0400, Chuck Lever wrote:
>> On 8/26/25 2:57 PM, Mike Snitzer wrote:

>>> +	if (WARN_ONCE(!nf->nf_dio_mem_align || !nf->nf_dio_read_offset_align,
>>> +		      "%s: underlying filesystem has not provided DIO alignment info\n",
>>> +		      __func__))
>>> +		return false;
>>> +	if (WARN_ONCE(dio_blocksize > PAGE_SIZE,
>>> +		      "%s: underlying storage's dio_blocksize=%u > PAGE_SIZE=%lu\n",
>>> +		      __func__, dio_blocksize, PAGE_SIZE))
>>> +		return false;
>>
>> IMHO these checks do not warrant a WARN. Perhaps a trace event, instead?
> 
> I won't die on this hill; I just don't see the risk of these given
> they are highly unlikely ("famous last words").
> 
> But if they trigger we should surely be made aware immediately, not
> only when someone happens to have a trace event enabled (which would
> only happen with further support and engineering involvement to chase
> "why isn't O_DIRECT being used like NFSD was optionally configured
> to!?").

A. It seems particularly inefficient to make this check for every I/O,
   rather than once per file system.

B. Once the warning has fired for one file, it won't fire again, making
   it pretty useless if there are multiple similar mismatches. You still
   end up with "No direct I/O even though I flipped the switch, and I
   can't tell why."
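
One hedged sketch of checking once per open file instead of per I/O
(nf_dio_possible and trace_nfsd_file_dio_unsupported are illustrative
names, not symbols from the posted patch): cache the verdict when the
alignment info is first gathered, and emit a trace event per file so
that repeated mismatches stay visible:

	/* Sketch only: decide DIO suitability once per nfsd_file rather
	 * than on every READ. A per-file trace event keeps firing for
	 * each mismatched file, unlike WARN_ONCE, which goes silent
	 * after its first report.
	 */
	static void nfsd_file_check_dio(struct nfsd_file *nf)
	{
		if (!nf->nf_dio_mem_align || !nf->nf_dio_read_offset_align ||
		    nf->nf_dio_read_offset_align > PAGE_SIZE) {
			trace_nfsd_file_dio_unsupported(nf);
			nf->nf_dio_possible = false;
			return;
		}
		nf->nf_dio_possible = true;
	}

The per-I/O path then reduces to testing nf->nf_dio_possible.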


>>> +	/* Return early if IO is irreparably misaligned (len < dio_blocksize,
>>> +	 * or base not aligned).
>>> +	 * Ondisk alignment is implied by the following code that expands
>>> +	 * misaligned IO to have a DIO-aligned offset and len.
>>> +	 */
>>> +	if (unlikely(len < dio_blocksize) || ((base & (nf->nf_dio_mem_align-1)) != 0))
>>> +		return false;
>>> +
>>> +	init_nfsd_read_dio(read_dio);
>>> +
>>> +	read_dio->start = round_down(offset, dio_blocksize);
>>> +	read_dio->end = round_up(orig_end, dio_blocksize);
>>> +	read_dio->start_extra = offset - read_dio->start;
>>> +	read_dio->end_extra = read_dio->end - orig_end;
>>> +
>>> +	/*
>>> +	 * Any misaligned READ less than NFSD_READ_DIO_MIN_KB won't be expanded
>>> +	 * to be DIO-aligned (this heuristic avoids excess work, like allocating
>>> +	 * start_extra_page, for smaller IO that can generally already perform
>>> +	 * well using buffered IO).
>>> +	 */
>>> +	if ((read_dio->start_extra || read_dio->end_extra) &&
>>> +	    (len < NFSD_READ_DIO_MIN_KB)) {
>>> +		init_nfsd_read_dio(read_dio);
>>> +		return false;
>>> +	}
>>> +
>>> +	if (read_dio->start_extra) {
>>> +		read_dio->start_extra_page = alloc_page(GFP_KERNEL);
>>
>> This introduces a page allocation where there weren't any before. For
>> NFSD, I/O pages come from rqstp->rq_pages[] so that memory allocation
>> like this is not needed on an I/O path.
> 
> NFSD never supported DIO before. Yes, with this patch there is
> a single page allocation in the misaligned DIO READ path (if it
> requires reading extra before the client requested data starts).
> 
> I tried to succinctly explain the need for the extra page allocation
> for misaligned DIO READ in this patch's header (see its second
> paragraph).
> 
> I cannot see how to read extra, not requested by the client, into the
> head of rq_pages without causing serious problems. So that cannot be
> what you're saying is needed.
> 
>> So I think the answer to this is that I want you to implement reading
>> an entire aligned range from the file and then forming the NFS READ
>> response with only the range of bytes that the client requested, as we
>> discussed before.
> 
> That is what I'm doing. But you're taking issue with my implementation
> that uses a single extra page.
> 
>> The use of xdr_buf and bvec should make that quite
>> straightforward.
> 
> Is your suggestion to, rather than allocate a disjoint single page,
> borrow the extra page from the end of rq_pages? Just map it into the
> bvec instead of my extra page?

Yes, the extra page needs to come from rq_pages. But I don't see why it
should come from the /end/ of rq_pages.

- Extend the start of the byte range back to make it align with the
  file's DIO alignment constraint

- Extend the end of the byte range forward to make it align with the
  file's DIO alignment constraint

- Fill in the sink buffer's bvec using pages from rq_pages, as usual

- When the I/O is complete, adjust the offset in the first bvec entry
  forward by setting a non-zero page offset, and adjust the returned
  count downward to match the requested byte count from the client

If the byte range requested by the NFS READ was already aligned, then
the first entry offset value remains zero. As SteveD says, Boom. Done.
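
A minimal sketch of that flow (the surrounding context and names like
first_page and host_err are illustrative, not the posted patch), with
a worked alignment example in the comments:

	/* Expand the client's byte range to DIO alignment.
	 * Example with dio_blocksize = 4096:
	 *   client asks for offset = 5000, count = 3000 (end = 8000)
	 *   start = round_down(5000, 4096) = 4096
	 *   end   = round_up(8000, 4096)   = 8192
	 *   start_extra = 5000 - 4096 = 904
	 */
	loff_t start = round_down(offset, dio_blocksize);
	loff_t end = round_up(offset + count, dio_blocksize);
	size_t start_extra = offset - start;
	unsigned int i, nr_pages = DIV_ROUND_UP(end - start, PAGE_SIZE);

	/* Fill the sink buffer's bvec from rq_pages, as usual; a full
	 * version would shorten the final entry to the aligned length.
	 */
	for (i = 0; i < nr_pages; i++)
		bvec_set_page(&bvec[i], rqstp->rq_pages[first_page + i],
			      PAGE_SIZE, 0);

	/* ... submit the aligned direct read for [start, end) ... */

	/* Trim the reply: step the first bvec entry past the leading
	 * padding and clamp the returned count to the client's request.
	 */
	bvec[0].bv_offset += start_extra;
	bvec[0].bv_len -= start_extra;
	host_err = min_t(ssize_t, host_err - start_extra, count);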


>> To properly evaluate the impact of using direct I/O for reads with real
>> world user workloads, we will want to identify (or construct) some
>> metrics (and this is future work, but near-term future).
>>
>> Seems like allocating memory becomes difficult only when too many pages
>> are dirty. I am skeptical that the issue is due to read caching, since
>> clean pages in the page cache are pretty easy to evict quickly, AIUI. If
>> that's incorrect, I'd like to understand why.
> 
> The much more problematic case is heavy WRITE workload with a working
> set that far exceeds system memory.

OK, that makes sense. And, there is a parallel writeback effort ongoing
to help address some of that problem, AIUI. It makes sense to keep a
close watch on that to see how NFSD can benefit, while we're working
through the complexities of handling NFS WRITE using direct I/O.


> But I agree it doesn't make a whole lot of sense that clean pages in
> the page cache would be getting in the way.  All I can tell you is
> that in my experience MM seems to _not_ evict them quickly (but more
> focused read-only testing is warranted to further understand the
> dynamics and heuristics in MM and beyond -- starting with READ-only,
> then pivoting to a mix of heavy READ and WRITE, or WRITE-only).

Starting by examining read-only workloads seems like a nice way to
simplify the problem space to get started.


> NFSD using DIO is optional. I thought the point was to get it as an
> available option so that _others_ could experiment and help categorize
> the benefits/pitfalls further?

Yes, that is the point. But such experiments lose value if there is no
data collection plan to go with them.


> I cannot be a one man show on all this. I welcome more help from
> anyone interested.

I think it's important for you to learn how the NFSD I/O path works
rather than simply handing us a drive-by contribution. It's going to
take some time, so be patient.

If you would rather make this a drive-by, then you'll have to realize
that you are requesting more than simple review from us. You'll have
to be content with the pace at which we overloaded maintainers can get
to the work.

It's not the usual situation that a maintainer has to sit down and
do extensive rewrites on a contribution. That really doesn't scale
well. That's why I'm pushing back.


-- 
Chuck Lever



