On 14.08.25 08:47, lizhe.67@xxxxxxxxxxxxx wrote:
From: Li Zhe <lizhe.67@xxxxxxxxxxxxx>
Let's add a simple helper for determining the number of contiguous pages
that represent contiguous PFNs.
In an ideal world, this helper would be simpler or not even required.
Unfortunately, on some configs we still have to maintain (SPARSEMEM
without VMEMMAP), the memmap is allocated per memory section, so we can
run into weird corner cases of false positives when blindly testing only
for contiguous pages.
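To make that concrete, a minimal sketch of the difference between pointer
adjacency and PFN adjacency (illustration only, not code from this patch;
the helper name is made up, page_to_pfn() from <linux/mm.h> is the existing
kernel API):

/*
 * Illustration only: on SPARSEMEM without VMEMMAP each memory section has
 * its own memmap allocation, so "b == a + 1" can hold for two struct pages
 * that do not describe adjacent PFNs. Comparing PFNs avoids that false
 * positive.
 */
static inline bool pages_really_contiguous(const struct page *a,
					   const struct page *b)
{
	return page_to_pfn(b) == page_to_pfn(a) + 1;
}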
One example of such false positives would be a memory section-sized hole
that does not have a memmap. The surrounding memory sections might get
"struct pages" that are contiguous, but the PFNs are actually not.
This helper will, for example, be useful for determining contiguous PFNs
in a GUP result, to batch further operations across returned "struct
page"s. VFIO will utilize this interface to accelerate the VFIO DMA map
process.
Implementation based on Linus' suggestion to avoid introducing new
nth_page() usage where possible.
Suggested-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Suggested-by: Jason Gunthorpe <jgg@xxxxxxxx>
Signed-off-by: Li Zhe <lizhe.67@xxxxxxxxxxxxx>
Co-developed-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
include/linux/mm.h | 7 ++++++-
include/linux/mm_inline.h | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ae97a0b8ec7..ead6724972cf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1763,7 +1763,12 @@ static inline unsigned long page_to_section(const struct page *page)
{
return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
}
-#endif
+#else /* !SECTION_IN_PAGE_FLAGS */
+static inline unsigned long page_to_section(const struct page *page)
+{
+ return 0;
+}
+#endif /* SECTION_IN_PAGE_FLAGS */
/**
* folio_pfn - Return the Page Frame Number of a folio.
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 89b518ff097e..5ea23891fe4c 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -616,4 +616,39 @@ static inline bool vma_has_recency(struct vm_area_struct *vma)
return true;
}
+/**
+ * num_pages_contiguous() - determine the number of contiguous pages
+ * that represent contiguous PFNs
+ * @pages: an array of page pointers
+ * @nr_pages: length of the array, at least 1
+ *
+ * Determine the number of contiguous pages that represent contiguous PFNs
+ * in @pages, starting from the first page.
+ *
+ * In kernel configs where contiguous pages might not imply contiguous PFNs
+ * over memory section boundaries, this function will stop at the memory
+ * section boundary.
Jason suggested here instead:
"
In some kernel configs contiguous PFNs will not have contiguous struct
pages. In these configurations num_pages_contiguous() will return a
smaller than ideal number. The caller should continue to check for pfn
contiguity after each call to num_pages_contiguous().
"
--
Cheers
David / dhildenb