memmap pages can be allocated either from the memblock (boot) allocator
during early boot or from the buddy allocator.

When these memmap pages are removed via arch_remove_memory(), the
deallocation path depends on their source:

* For pages from the buddy allocator, depopulate_section_memmap() is
  called, which also decrements the nr_memmap_pages count.

* For pages from the boot allocator, free_map_bootmem() is called, but
  it currently does not adjust the nr_memmap_boot_pages count.

To fix this inconsistency, update free_map_bootmem() to also decrement
nr_memmap_boot_pages by invoking memmap_boot_pages_add(), mirroring how
free_vmemmap_page() handles this for boot-allocated pages. This ensures
correct tracking of memmap pages regardless of allocation source.

Cc: stable@xxxxxxxxxxxxxxx
Fixes: 15995a352474 ("mm: report per-page metadata information")
Signed-off-by: Sumanth Korikkar <sumanthk@xxxxxxxxxxxxx>
---
 mm/sparse.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/sparse.c b/mm/sparse.c
index 3c012cf83cc2..d7c128015397 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -688,6 +688,7 @@ static void free_map_bootmem(struct page *memmap)
 	unsigned long start = (unsigned long)memmap;
 	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
 
+	memmap_boot_pages_add(-1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
 	vmemmap_free(start, end, NULL);
 }
-- 
2.48.1