On Tue, Aug 26, 2025 at 3:43 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
[...]
> @@ -4510,13 +4510,18 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>  	if (!zswap_never_enabled())
>  		goto fallback;
>
> +	suggested_orders = get_suggested_order(vma->vm_mm, vma, vma->vm_flags,
> +					       TVA_PAGEFAULT,
> +					       BIT(PMD_ORDER) - 1);

Can we separate this case from the normal anonymous page faults below?
We've observed that swapping in large folios can lead to more swap
thrashing for some workloads, e.g. the kernel build. Consequently, some
workloads might prefer swapping in smaller folios than those allocated
by alloc_anon_folio().

> +	if (!suggested_orders)
> +		goto fallback;
>  	entry = pte_to_swp_entry(vmf->orig_pte);
>  	/*
>  	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
>  	 * and suitable for swapping THP.
>  	 */
>  	orders = thp_vma_allowable_orders(vma, vma->vm_flags, TVA_PAGEFAULT,
> -					  BIT(PMD_ORDER) - 1);
> +					  suggested_orders);
>  	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>  	orders = thp_swap_suitable_orders(swp_offset(entry),
> 					  vmf->address, orders);
> @@ -5044,12 +5049,12 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	int order, suggested_orders;
>  	unsigned long orders;
>  	struct folio *folio;
>  	unsigned long addr;
>  	pte_t *pte;
>  	gfp_t gfp;
> -	int order;
>
>  	/*
>  	 * If uffd is active for the vma we need per-page fault fidelity to
> @@ -5058,13 +5063,18 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>  	if (unlikely(userfaultfd_armed(vma)))
>  		goto fallback;
>
> +	suggested_orders = get_suggested_order(vma->vm_mm, vma, vma->vm_flags,
> +					       TVA_PAGEFAULT,
> +					       BIT(PMD_ORDER) - 1);
> +	if (!suggested_orders)
> +		goto fallback;

Thanks
Barry