Re: [PATCH v2 3/5] mm: add static PMD zero page

On 07.07.25 16:23, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@xxxxxxxxxxx>
>
> There are many places in the kernel where we need to zero out larger
> chunks, but the maximum segment we can zero out at a time via ZERO_PAGE
> is limited by PAGE_SIZE.
>
> This is especially annoying in block devices and filesystems, where we
> attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
> bvec support in the block layer, it is much more efficient to send out
> larger zero pages as part of a single bvec.
>
> This concern was raised during the review of adding LBS support to
> XFS [1][2].
>
> Usually huge_zero_folio is allocated on demand, and it will be
> deallocated by the shrinker if there are no users of it left. At the
> moment, the huge_zero_folio infrastructure refcount is tied to the
> lifetime of the process that created it. This might not work for the
> bio layer, as the completions can be async and the process that
> created the huge_zero_folio might no longer be alive.

Of course, what we could do is indicate that there is an untracked
reference to the huge zero folio, and then simply refuse to free it for
all eternity.

Essentially, any non-mm reference -> un-shrinkable.
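
Roughly, the shrinker side of that could look like the sketch below. All
names here (huge_zero_pinned, get_huge_zero_folio_untracked,
huge_zero_shrink_scan) are made up for illustration, not existing mm API;
only huge_zero_folio, SHRINK_STOP and the shrinker callback signature are
the real interfaces:

#include <linux/atomic.h>
#include <linux/mm.h>
#include <linux/shrinker.h>

/* Set once any non-mm user grabs a reference; never cleared again. */
static atomic_t huge_zero_pinned = ATOMIC_INIT(0);

/*
 * For non-mm users (e.g. the block layer) that cannot hold a
 * process-lifetime reference the way mm users do today.
 */
struct folio *get_huge_zero_folio_untracked(void)
{
	atomic_set(&huge_zero_pinned, 1);
	return huge_zero_folio;
}

static unsigned long huge_zero_shrink_scan(struct shrinker *shrink,
					   struct shrink_control *sc)
{
	/* Untracked reference exists -> un-shrinkable, for all eternity. */
	if (atomic_read(&huge_zero_pinned))
		return SHRINK_STOP;
	/* ... existing freeing logic ... */
	return 0;
}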

We'd still be allocating the huge zero folio dynamically. We could try
allocating it on first use, either from memblock, or from the buddy if
it's already around.

Then, we'd only need a config option to allow for that to happen.
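
As a rough sketch of that allocation path (the function name and the
config symbol are hypothetical here; memblock_alloc(), alloc_pages(),
slab_is_available() and friends are the real interfaces):

#include <linux/gfp.h>
#include <linux/huge_mm.h>
#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/slab.h>

static struct folio *alloc_static_huge_zero_folio(void)
{
	struct page *page;
	void *addr;

	/* Hypothetical config option gating the whole thing. */
	if (!IS_ENABLED(CONFIG_STATIC_HUGE_ZERO_FOLIO))
		return NULL;

	if (slab_is_available()) {
		/* Buddy is up; __GFP_ZERO hands back zeroed pages. */
		page = alloc_pages(GFP_KERNEL | __GFP_ZERO, HPAGE_PMD_ORDER);
		return page ? page_folio(page) : NULL;
	}

	/* Early boot: PMD-sized, PMD-aligned (and zeroed) from memblock. */
	addr = memblock_alloc(HPAGE_PMD_SIZE, HPAGE_PMD_SIZE);
	return addr ? virt_to_folio(addr) : NULL;
}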

--
Cheers,

David / dhildenb




