On 8/13/25 07:09, Byungchul Park wrote:
On Tue, Jul 29, 2025 at 08:02:10PM +0900, Byungchul Park wrote:
... For net_iov, use ->pp to identify whether it belongs to a page pool, making
sure that ->pp is NULL for non-pp net_iovs.
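Roughly along the lines of the sketch below; the helper name here is only
illustrative, not necessarily what the final patch will use:

#include <net/netmem.h>	/* struct net_iov */

/* Sketch only: identify pp-owned net_iovs purely by ->pp. */
static inline bool net_iov_is_pp(const struct net_iov *niov)
{
	/*
	 * ->pp is set while the net_iov is owned by a page pool and is
	 * kept NULL otherwise, so no magic value is needed.
	 */
	return niov->pp != NULL;
}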
This work was inspired by the following link:
[1] https://lore.kernel.org/all/582f41c0-2742-4400-9c81-0d46bf4e8314@xxxxxxxxx/
While at it, move the page pool sanity check so it runs on free.
Hi Andrew and Jakub,
I will spin the next version with some modifications once the following
patch [1] gets merged.
[1] https://lore.kernel.org/all/a8643abedd208138d3d550db71631d5a2e4168d1.1754929026.git.asml.silence@xxxxxxxxx/
This touches both mm and networking, and I'm not sure which tree I should
aim for, the mm tree or the net tree. I'd prefer the net tree, but either
is totally fine. Any suggestion?
It should go to net; there will be plenty of conflicts otherwise.
mm maintainers, would you prefer it as a shared branch, or can it just
go through the net tree?
It'd also be better to split the mm and net changes into separate
patches. Here is a patch I had before; it might need a rebase though.
From: Pavel Begunkov <asml.silence@xxxxxxxxx>
Date: Thu, 17 Jul 2025 11:46:21 +0100
Subject: [PATCH] mm: introduce a page type for page pool
Page pool currently uses ->pp_magic, aliased with lru.next, to check
whether a page belongs to it. Add a new page type; a later patch will
convert page pool to use it (a rough usage sketch follows after the patch).
Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
include/linux/mm.h | 20 --------------------
include/linux/page-flags.h | 6 ++++++
mm/page_alloc.c | 7 +++----
3 files changed, 9 insertions(+), 24 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0d4ee569aa6b..21db02e92b33 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4205,26 +4205,6 @@ int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
#define PP_DMA_INDEX_MASK GENMASK(PP_DMA_INDEX_BITS + PP_DMA_INDEX_SHIFT - 1, \
PP_DMA_INDEX_SHIFT)
-/* Mask used for checking in page_pool_page_is_pp() below. page->pp_magic is
- * OR'ed with PP_SIGNATURE after the allocation in order to preserve bit 0 for
- * the head page of compound page and bit 1 for pfmemalloc page, as well as the
- * bits used for the DMA index. page_is_pfmemalloc() is checked in
- * __page_pool_put_page() to avoid recycling the pfmemalloc page.
- */
-#define PP_MAGIC_MASK ~(PP_DMA_INDEX_MASK | 0x3UL)
-
-#ifdef CONFIG_PAGE_POOL
-static inline bool page_pool_page_is_pp(const struct page *page)
-{
- return (page->pp_magic & PP_MAGIC_MASK) == PP_SIGNATURE;
-}
-#else
-static inline bool page_pool_page_is_pp(const struct page *page)
-{
- return false;
-}
-#endif
-
#define PAGE_SNAPSHOT_FAITHFUL (1 << 0)
#define PAGE_SNAPSHOT_PG_BUDDY (1 << 1)
#define PAGE_SNAPSHOT_PG_IDLE (1 << 2)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 8d3fa3a91ce4..0afdf2ee3fbd 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -933,6 +933,7 @@ enum pagetype {
PGTY_zsmalloc = 0xf6,
PGTY_unaccepted = 0xf7,
PGTY_large_kmalloc = 0xf8,
+ PGTY_net_pp = 0xf9,
PGTY_mapcount_underflow = 0xff
};
@@ -1077,6 +1078,11 @@ PAGE_TYPE_OPS(Zsmalloc, zsmalloc, zsmalloc)
PAGE_TYPE_OPS(Unaccepted, unaccepted, unaccepted)
FOLIO_TYPE_OPS(large_kmalloc, large_kmalloc)
+/*
+ * Marks pages allocated by page_pool (see net/core/page_pool.c).
+ */
+PAGE_TYPE_OPS(Net_pp, net_pp, net_pp)
+
/**
* PageHuge - Determine if the page belongs to hugetlbfs
* @page: The page to test.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1d037f97c5f..67dfd6d8a124 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1042,7 +1042,6 @@ static inline bool page_expected_state(struct page *page,
#ifdef CONFIG_MEMCG
page->memcg_data |
#endif
- page_pool_page_is_pp(page) |
(page->flags & check_flags)))
return false;
@@ -1069,8 +1068,6 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
if (unlikely(page->memcg_data))
bad_reason = "page still charged to cgroup";
#endif
- if (unlikely(page_pool_page_is_pp(page)))
- bad_reason = "page_pool leak";
return bad_reason;
}
@@ -1379,9 +1376,11 @@ __always_inline bool free_pages_prepare(struct page *page,
mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
folio->mapping = NULL;
}
- if (unlikely(page_has_type(page)))
+ if (unlikely(page_has_type(page))) {
+ WARN_ON_ONCE(PageNet_pp(page));
/* Reset the page_type (which overlays _mapcount) */
page->page_type = UINT_MAX;
+ }
if (is_check_pages_enabled()) {
if (free_page_is_bad(page))
--
Pavel Begunkov
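
For reference, the page pool side would then use the helpers generated by
PAGE_TYPE_OPS() above, roughly as in the sketch below. The function names
are illustrative only; the real conversion lives in the follow-up net patch
and may differ in detail:

/* Sketch: mark the page when the pool takes ownership of it. */
static void pp_page_set_type(struct page *page)
{
	__SetPageNet_pp(page);
}

/*
 * Sketch: clear the type before the page goes back to the buddy allocator,
 * otherwise free_pages_prepare() now warns via WARN_ON_ONCE(PageNet_pp(page)).
 */
static void pp_page_clear_type(struct page *page)
{
	__ClearPageNet_pp(page);
}

/* Sketch: replacement for the old PP_MAGIC-based check. */
static inline bool page_is_pp(const struct page *page)
{
	return PageNet_pp(page);
}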