On 08.07.25 00:38, Andrew Morton wrote:
> On Mon, 7 Jul 2025 16:23:14 +0200 "Pankaj Raghav (Samsung)" <kernel@xxxxxxxxxxxxxxxx> wrote:
>> There are many places in the kernel where we need to zero out larger
>> chunks, but the maximum segment we can zero out at a time with
>> ZERO_PAGE is limited to PAGE_SIZE.
>>
>> This concern was raised during the review of adding Large Block Size
>> support to XFS[1][2].
>>
>> This is especially annoying in block devices and filesystems, where we
>> attach multiple ZERO_PAGEs to the bio in different bvecs. With
>> multipage bvec support in the block layer, it is much more efficient
>> to send out a larger zero page as part of a single bvec.
>>
>> Some examples of places in the kernel where this could be useful:
>>
>> - blkdev_issue_zero_pages()
>> - iomap_dio_zero()
>> - vmalloc.c:zero_iter()
>> - rxperf_process_call()
>> - fscrypt_zeroout_range_inline_crypt()
>> - bch2_checksum_update()
>> ...
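To make the difference concrete, here is a rough sketch of the bio fill
loop (helper names made up, error handling omitted; __bio_add_page()
and bio_add_folio_nofail() are existing block-layer primitives):

/* today: one PAGE_SIZE bvec per ZERO_PAGE */
static void zero_fill_bio_pages(struct bio *bio, unsigned int len)
{
        while (len) {
                unsigned int chunk = min_t(unsigned int, len, PAGE_SIZE);

                __bio_add_page(bio, ZERO_PAGE(0), chunk, 0);
                len -= chunk;
        }
}

/* with a huge zero folio: one large bvec per folio-sized chunk */
static void zero_fill_bio_folio(struct bio *bio, unsigned int len,
                                struct folio *zero_folio)
{
        while (len) {
                unsigned int chunk = min_t(unsigned int, len,
                                           folio_size(zero_folio));

                bio_add_folio_nofail(bio, zero_folio, chunk, 0);
                len -= chunk;
        }
}

Fewer bvecs means fewer segments for the block layer and the driver to
iterate and map.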
>>
>> We already have huge_zero_folio that is allocated on demand, and it
>> will be deallocated by the shrinker once there are no users of it
>> left.
>>
>> At the moment, the huge_zero_folio infrastructure's refcount is tied
>> to the lifetime of the process that created it. This might not work
>> for the bio layer, as completions can be async and the process that
>> created the huge_zero_folio might no longer be alive.
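To spell out the lifetime problem: the current interface pins the folio
through the caller's mm (roughly, mm_get_huge_zero_folio() and
mm_put_huge_zero_folio() in mm/huge_memory.c), so a hypothetical
sequence like the following goes wrong:

struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
void mm_put_huge_zero_folio(struct mm_struct *mm);

/*
 * The pin lives in the submitter's mm and is dropped when the mm is
 * torn down; nothing keeps the folio alive for in-flight bios.
 *
 *   submitter                       bio completion (IRQ context)
 *   ---------                       ----------------------------
 *   mm_get_huge_zero_folio(mm)
 *   submit_bio(bio)
 *   process exits, mm torn down,
 *   pin dropped, shrinker frees
 *   the huge zero folio
 *                                   completion still references the
 *                                   now-freed huge zero folio
 */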
>
> Can we change that? Alter the refcounting model so that dropping the
> final reference at interrupt time works as expected?
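One possible shape of such a model, as a sketch only (names and
structure made up, not from any posted patch): a plain refcount on the
folio itself, with the final put deferring the actual free to a
workqueue so that it is safe from interrupt context.

static refcount_t hzf_refcount;
static struct work_struct hzf_free_work;

static void hzf_put(void)
{
        /* refcount_dec_and_test() is safe from IRQ context ... */
        if (refcount_dec_and_test(&hzf_refcount))
                /* ... but the actual free runs in process context */
                schedule_work(&hzf_free_work);
}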
I would hope that we can drop that whole shrinking+freeing mechanism at
some point, and simply always keep it around once allocated.
Any unprivileged process can keep the huge zero folio mapped, and
therefore around, until that process is killed ...
But I assume some people might still have an opinion on the shrinker, so
for the time being having a second, static model might be less
controversial.

(I don't think we should be refcounting the huge zero folio in the long
term.)
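For illustration, the static model could be as small as this (a sketch
with made-up names, not a posted patch): allocate once on first use,
never free, no refcounting and no shrinker.

static struct folio *static_huge_zero_folio;

static struct folio *get_static_huge_zero_folio(void)
{
        struct folio *folio;

        if (likely(READ_ONCE(static_huge_zero_folio)))
                return static_huge_zero_folio;

        folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, HPAGE_PMD_ORDER);
        if (!folio)
                return NULL;

        /* lost the race: someone else installed a folio first */
        if (cmpxchg(&static_huge_zero_folio, NULL, folio) != NULL)
                folio_put(folio);

        return static_huge_zero_folio;
}

Callers could then grab it without any get/put pairing, which also
makes async bio completion a non-issue.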
--
Cheers,
David / dhildenb