On 4/30/25 1:11 AM, NeilBrown wrote:
> On Tue, 29 Apr 2025, cel@xxxxxxxxxx wrote:
>> From: Chuck Lever <chuck.lever@xxxxxxxxxx>
>>
>> In order to make RPCSVC_MAXPAYLOAD larger (or variable in size), we
>> need to do something clever with the payload arrays embedded in
>> struct svc_rqst and elsewhere.
>>
>> My preference is to keep these arrays allocated all the time because
>> allocating them on demand increases the risk of a memory allocation
>> failure during a large I/O. This is a quick-and-dirty approach that
>> might be replaced once NFSD is converted to use large folios.
>>
>> The downside of this design choice is that it pins a few pages per
>> NFSD thread (and that's the current situation already). But note
>> that because RPCSVC_MAXPAGES is 259, each array is just over a page
>> in size, making the allocation waste quite a bit of memory beyond
>> the end of the array due to power-of-2 allocator round up. This gets
>> worse as the MAXPAGES value is doubled or quadrupled.
>
> I wonder if we should special-case those 3 extra.
> We don't need any for rq_vec and only need 2 (I think) for rq_bvec.

For rq_vec, I believe we need one extra entry in case part of the
payload is in the xdr_buf's head iovec.

For rq_bvec, we need one for the transport header, and one each for
the xdr_buf's head and tail iovecs.

But, I agree, the rationales for the size of each of these arrays are
slightly different.

> We could use the arrays only for payload and have dedicated
> page/vec/bvec for request, reply, read-padding.

I might not fully understand what you are suggesting, but it has
occurred to me that for NFSv4, both Call and Reply can be large for
one RPC transaction (though that is going to be quite infrequent). A
separate rq_pages[] array each for receive and send is possibly in
NFSD's future.

> Or maybe we could not allow read requests that result in the extra
> page due to alignment needs. Would that be much cost?
>
> Apart from the one issue I noted separately, I think the series looks
> good.
>
> Reviewed-by: NeilBrown <neil@xxxxxxxxxx>

Thanks for having a look.

--
Chuck Lever
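
P.S. A quick userspace sketch of the round-up arithmetic mentioned
above. This is illustrative only, not kernel code; it assumes 16-byte
array elements (what struct kvec and struct bio_vec come to on 64-bit)
and that the allocator rounds the request up to the next power of two.

    #include <stdio.h>

    /*
     * Illustrative only: show how a 259-entry array of 16-byte
     * elements spills just past one 4 KB page, and how much a
     * power-of-2 allocator would round the allocation up.
     */
    static size_t pow2_roundup(size_t n)
    {
            size_t p = 1;

            while (p < n)
                    p <<= 1;
            return p;
    }

    int main(void)
    {
            size_t maxpages = 259;               /* RPCSVC_MAXPAGES today */
            size_t elem_size = 16;               /* assumed 64-bit kvec/bio_vec */
            size_t bytes = maxpages * elem_size; /* 4144: just over one page */
            size_t alloc = pow2_roundup(bytes);  /* 8192 */

            printf("array: %zu bytes, allocation: %zu, waste: %zu\n",
                   bytes, alloc, alloc - bytes);
            return 0;
    }

So roughly half of each such allocation sits unused past the end of
the array, and that gap grows as MAXPAGES is doubled or quadrupled.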
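
P.P.S. The extra-entry arithmetic for the two arrays, spelled out the
same way. Again just a sketch under stated assumptions: a 1 MB maximum
payload and 4 KB pages, i.e. 256 payload pages.

    #include <stdio.h>

    /*
     * Illustrative only: count worst-case array entries under the
     * rationale above (256 payload pages assumed).
     */
    int main(void)
    {
            unsigned int payload_pages = 256;

            /* rq_vec: one extra entry in case part of the payload
             * sits in the xdr_buf's head iovec. */
            unsigned int vec_entries = payload_pages + 1;

            /* rq_bvec: one entry for the transport header plus one
             * each for the xdr_buf's head and tail iovecs. */
            unsigned int bvec_entries = payload_pages + 3;

            printf("rq_vec: %u entries, rq_bvec: %u entries\n",
                   vec_entries, bvec_entries);
            return 0;
    }

The bvec count lands on 259, which is where today's RPCSVC_MAXPAGES
value comes from; the kvec array could in principle be one entry
shorter, which is the "slightly different rationales" point above.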