On 22.05.25 13:31, Mike Rapoport wrote:
Hi Pankaj,
On Thu, May 22, 2025 at 11:02:41AM +0200, Pankaj Raghav wrote:
There are many places in the kernel where we need to zero out larger
chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
is limited to PAGE_SIZE.
This concern was raised during the review of adding Large Block Size support
to XFS[1][2].
This is especially annoying in block devices and filesystems where we
attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
bvec support in the block layer, it is much more efficient to send out
larger zero pages as part of a single bvec (see the sketch after the
list below).
Some examples of places in the kernel where this could be useful:
- blkdev_issue_zero_pages()
- iomap_dio_zero()
- vmalloc.c:zero_iter()
- rxperf_process_call()
- fscrypt_zeroout_range_inline_crypt()
- bch2_checksum_update()
...
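To make the bvec overhead concrete, here is a minimal sketch (the helper
is hypothetical, not from the patch) of what zeroing a range looks like
today: each PAGE_SIZE chunk costs one bvec, whereas a PMD-sized zero
folio could cover the same range with a single bvec.

#include <linux/bio.h>
#include <linux/mm.h>
#include <linux/minmax.h>

/* Hypothetical helper: zero 'len' bytes using the PAGE_SIZE ZERO_PAGE. */
static void zero_fill_bio_range(struct bio *bio, size_t len)
{
	while (len) {
		size_t chunk = min_t(size_t, len, PAGE_SIZE);

		/* One bvec per page; a huge zero folio would need one total. */
		__bio_add_page(bio, ZERO_PAGE(0), chunk, 0);
		len -= chunk;
	}
}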
We already have huge_zero_folio, which is allocated on demand and
deallocated by the shrinker once no users are left.
But to use huge_zero_folio, we need to pass an mm struct, and
put_folio needs to be called in the destructor. This makes sense for
memory-constrained systems, but for bigger servers it does not
matter, as long as the PMD size is reasonable (as on x86).
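For reference, a rough sketch of the current usage pattern, assuming the
existing mm_get_huge_zero_folio()/mm_put_huge_zero_folio() helpers: the
reference is tied to an mm and is dropped on teardown so the shrinker
can reclaim the folio.

#include <linux/huge_mm.h>
#include <linux/mm_types.h>

/* Hypothetical caller illustrating the current pattern. */
static void fill_with_zeroes(struct mm_struct *mm)
{
	/* Takes a reference tied to this mm; may fail under memory pressure. */
	struct folio *folio = mm_get_huge_zero_folio(mm);

	if (!folio)
		return;		/* fall back to ZERO_PAGE */

	/* ... use the PMD-sized folio as a source of zeroes ... */
}

/*
 * The matching mm_put_huge_zero_folio(mm) runs on mm teardown, letting
 * the shrinker free the folio once no users remain.
 */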
Add a config option THP_HUGE_ZERO_PAGE_ALWAYS that always allocates
the huge_zero_folio, which is then never freed. This makes it possible
to use the huge_zero_folio without passing any mm struct or calling
put_folio in the destructor.
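The Kconfig text itself is not shown in the cover letter; a minimal
sketch of what such an option might look like (prompt, dependency, and
help text are assumptions, not the actual patch):

config THP_HUGE_ZERO_PAGE_ALWAYS
	bool "Always allocate the huge zero folio"
	depends on TRANSPARENT_HUGEPAGE
	help
	  Allocate the huge zero folio unconditionally and never free it,
	  so callers can use it without holding an mm reference or
	  coordinating with the shrinker.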
I don't think this config option should be tied to THP. It's perfectly
sensible to have a configuration with HUGETLB and without THP.
Such configs are getting rarer ...
I assume we would then simply reuse that page from THP code if available?
--
Cheers,
David / dhildenb