Re: [PATCH v10 03/13] khugepaged: generalize hugepage_vma_revalidate for mTHP support

Hi Nico,

A couple of nits below.

On Tue, Aug 19, 2025 at 07:41:55AM -0600, Nico Pache wrote:
>For khugepaged to support different mTHP orders, we must generalize this
>to check if the PMD is not shared by another VMA and the order is enabled.
>
>To ensure madvise_collapse can support working on mTHP orders without the
>PMD order enabled, we need to convert hugepage_vma_revalidate to take a
>bitmap of orders.
>
>No functional change in this patch.
>
>Reviewed-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
>Acked-by: David Hildenbrand <david@xxxxxxxxxx>
>Co-developed-by: Dev Jain <dev.jain@xxxxxxx>
>Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
>Signed-off-by: Nico Pache <npache@xxxxxxxxxx>
>---
> mm/khugepaged.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
>diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>index b7b98aebb670..2d192ec961d2 100644
>--- a/mm/khugepaged.c
>+++ b/mm/khugepaged.c

There is a comment above this function which says "revalidate vma before
taking mmap_lock".

Shouldn't that be "after taking mmap_lock", or "after taking mmap_lock
again"? The function is only called after (re)taking mmap_lock.
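
Something like this, perhaps (just a sketch, assuming the rest of that
comment block is unchanged):

	 /*
	  * If mmap_lock temporarily dropped, revalidate vma
	- * before taking mmap_lock.
	+ * after taking mmap_lock again.
	  * Returns enum scan_result value.
	  */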

>@@ -917,7 +917,7 @@ static int collapse_find_target_node(struct collapse_control *cc)
> static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> 				   bool expect_anon,
> 				   struct vm_area_struct **vmap,
>-				   struct collapse_control *cc)
>+				   struct collapse_control *cc, unsigned long orders)
> {
> 	struct vm_area_struct *vma;
> 	enum tva_type type = cc->is_khugepaged ? TVA_KHUGEPAGED :
>@@ -930,9 +930,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> 	if (!vma)
> 		return SCAN_VMA_NULL;
> 
>+	/* Always check the PMD order to ensure it's not shared by another VMA */
> 	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
> 		return SCAN_ADDRESS_RANGE;
>-	if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
>+	if (!thp_vma_allowable_orders(vma, vma->vm_flags, type, orders))
> 		return SCAN_VMA_CHECK;
> 	/*
> 	 * Anon VMA expected, the address may be unmapped then

Just below this hunk there is a comment which mentions
"thp_vma_allowable_order may return".

Since the code now calls thp_vma_allowable_orders(), maybe that comment
should be updated as well.
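
E.g. something like (the exact wording of the existing comment is from
memory, so it may differ slightly):

	- * thp_vma_allowable_order may return true for qualified file
	+ * thp_vma_allowable_orders may return true for qualified file
	  * vmas.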

>@@ -1134,7 +1135,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> 		goto out_nolock;
> 
> 	mmap_read_lock(mm);
>-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
>+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
>+					 BIT(HPAGE_PMD_ORDER));
> 	if (result != SCAN_SUCCEED) {
> 		mmap_read_unlock(mm);
> 		goto out_nolock;
>@@ -1168,7 +1170,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> 	 * mmap_lock.
> 	 */
> 	mmap_write_lock(mm);
>-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
>+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
>+					 BIT(HPAGE_PMD_ORDER));
> 	if (result != SCAN_SUCCEED)
> 		goto out_up_write;
> 	/* check if the pmd is still valid */
>@@ -2807,7 +2810,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
> 			mmap_read_lock(mm);
> 			mmap_locked = true;
> 			result = hugepage_vma_revalidate(mm, addr, false, &vma,
>-							 cc);
>+							 cc, BIT(HPAGE_PMD_ORDER));
> 			if (result  != SCAN_SUCCEED) {
> 				last_fail = result;
> 				goto out_nolock;
>-- 
>2.50.1
>

-- 
Wei Yang
Help you, Help me



