Without this patch, when HugeTLB folio metadata is stashed, the
vmemmap_optimized flag, stored in the HugeTLB folio's folio->private,
is stashed as set. The first split works, but on merging, when the
folio metadata is unstashed, vmemmap_optimized is unstashed as set,
making the call to hugetlb_vmemmap_optimize_folio() skip actually
applying the optimization. On a second split,
hugetlb_vmemmap_restore_folio() then attempts to undo an optimization
that was never actually applied, hence hitting the BUG().

Signed-off-by: Ackerley Tng <ackerleytng@xxxxxxxxxx>
---
 mm/guestmem_hugetlb.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/guestmem_hugetlb.c b/mm/guestmem_hugetlb.c
index 8727598cf18e..2c0192543676 100644
--- a/mm/guestmem_hugetlb.c
+++ b/mm/guestmem_hugetlb.c
@@ -200,16 +200,21 @@ static int guestmem_hugetlb_split_folio(struct folio *folio)
 		return 0;
 
 	orig_nr_pages = folio_nr_pages(folio);
-	ret = guestmem_hugetlb_stash_metadata(folio);
+
+	/*
+	 * hugetlb_vmemmap_restore_folio() has to be called ahead of the rest
+	 * because it checks page type. This doesn't actually split the folio,
+	 * so the first few struct pages are still intact.
+	 */
+	ret = hugetlb_vmemmap_restore_folio(folio_hstate(folio), folio);
 	if (ret)
 		return ret;
 
 	/*
-	 * hugetlb_vmemmap_restore_folio() has to be called ahead of the rest
-	 * because it checks and page type. This doesn't actually split the
-	 * folio, so the first few struct pages are still intact.
+	 * Stash metadata after vmemmap stuff so the outcome of the vmemmap
+	 * restoration is stashed.
 	 */
-	ret = hugetlb_vmemmap_restore_folio(folio_hstate(folio), folio);
+	ret = guestmem_hugetlb_stash_metadata(folio);
 	if (ret)
 		goto err;
 
@@ -254,8 +259,7 @@ static int guestmem_hugetlb_split_folio(struct folio *folio)
 	return 0;
 
 err:
-	guestmem_hugetlb_unstash_free_metadata(folio);
-
+	hugetlb_vmemmap_optimize_folio(folio_hstate(folio), folio);
 	return ret;
 }
 
-- 
2.49.0.1101.gccaa498523-goog
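
For illustration only, not part of the patch: below is a minimal userspace
model of the flag flow described in the commit message. All names here
(fake_folio, stash_metadata(), vmemmap_restore(), vmemmap_optimize(), ...)
are hypothetical stand-ins for the real kernel helpers; the sketch only
shows why stashing metadata before restoring the vmemmap leaves a stale
vmemmap_optimized flag behind after a merge, so the second split acts on
state that was never actually optimized.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the folio state guestmem_hugetlb tracks. */
struct fake_folio {
	bool vmemmap_optimized;		/* models the flag in folio->private */
};

static bool stashed_flag;		/* models the stashed metadata */
static bool vmemmap_pages_freed;	/* models the real vmemmap state */

static void stash_metadata(struct fake_folio *f)
{
	stashed_flag = f->vmemmap_optimized;
}

static void unstash_metadata(struct fake_folio *f)
{
	f->vmemmap_optimized = stashed_flag;
}

/* Models hugetlb_vmemmap_restore_folio(): undo the optimization. */
static void vmemmap_restore(struct fake_folio *f)
{
	if (!f->vmemmap_optimized)
		return;
	/* Remapping vmemmap pages that were never freed models the BUG(). */
	assert(vmemmap_pages_freed);
	vmemmap_pages_freed = false;
	f->vmemmap_optimized = false;
}

/* Models hugetlb_vmemmap_optimize_folio(): skips if the flag is set. */
static void vmemmap_optimize(struct fake_folio *f)
{
	if (f->vmemmap_optimized)
		return;			/* the skip that goes wrong */
	vmemmap_pages_freed = true;
	f->vmemmap_optimized = true;
}

int main(void)
{
	struct fake_folio f = { .vmemmap_optimized = true };

	vmemmap_pages_freed = true;	/* folio starts out optimized */

	/* Old ordering in guestmem_hugetlb_split_folio(): stash, then restore. */
	stash_metadata(&f);		/* stashes the flag as "optimized" */
	vmemmap_restore(&f);		/* first split works fine */

	/* Merge path. */
	unstash_metadata(&f);		/* flag comes back as "optimized"... */
	vmemmap_optimize(&f);		/* ...so the optimization is skipped */

	/* Second split: flag says "optimized" but nothing was ever freed. */
	vmemmap_restore(&f);		/* assert fires, standing in for BUG() */

	printf("unreachable with the old ordering\n");
	return 0;
}

With the patched ordering (restore first, then stash), the stashed flag
reflects the restored state, so the merge's optimize call actually
re-optimizes and the second split's restore finds a genuinely optimized
folio.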