On 7/22/2025 12:47 AM, Sean Christopherson wrote:
On Thu, Jul 17, 2025, Fuad Tabba wrote:
From: Ackerley Tng <ackerleytng@xxxxxxxxxx>
Update the KVM MMU fault handler to service guest page faults
for memory slots backed by guest_memfd with mmap support. For such
slots, the MMU must always fault in pages directly from guest_memfd,
bypassing the host's userspace_addr.
This ensures that guest_memfd-backed memory is always handled through
the guest_memfd-specific faulting path, regardless of whether it's for
private or non-private (shared) use cases.
Additionally, rename kvm_mmu_faultin_pfn_private() to
kvm_mmu_faultin_pfn_gmem(), as this function is now used to fault in
pages from guest_memfd for both private and non-private memory,
accommodating the new use cases.
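
(For context, a minimal userspace sketch, not part of this patch, of the
setup the commit message describes: a memslot backed only by an
mmap-capable guest_memfd, so every fault on it, private or shared, is
served from guest_memfd. GUEST_MEMFD_FLAG_MMAP is an assumed flag name
from this series; KVM_CREATE_GUEST_MEMFD, KVM_SET_USER_MEMORY_REGION2
and KVM_MEM_GUEST_MEMFD are existing UAPI.)

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef GUEST_MEMFD_FLAG_MMAP
#define GUEST_MEMFD_FLAG_MMAP	(1ULL << 0)	/* assumed bit from this series */
#endif

static int back_slot_with_gmem(int vm_fd, uint64_t gpa, uint64_t size)
{
	struct kvm_create_guest_memfd gmem = {
		.size  = size,
		.flags = GUEST_MEMFD_FLAG_MMAP,	/* gmem supports mmap */
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	if (gmem_fd < 0)
		return -1;

	/* No userspace_addr: the slot is gmem-only. */
	struct kvm_userspace_memory_region2 region = {
		.slot            = 0,
		.flags           = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = gpa,
		.memory_size     = size,
		.guest_memfd     = gmem_fd,
	};
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}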
Co-developed-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Ackerley Tng <ackerleytng@xxxxxxxxxx>
Co-developed-by: Fuad Tabba <tabba@xxxxxxxxxx>
Signed-off-by: Fuad Tabba <tabba@xxxxxxxxxx>
---
arch/x86/kvm/mmu/mmu.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 94be15cde6da..ad5f337b496c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4511,8 +4511,8 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 				 r == RET_PF_RETRY, fault->map_writable);
 }
 
-static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
-				       struct kvm_page_fault *fault)
+static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
+				    struct kvm_page_fault *fault)
 {
 	int max_order, r;
 
@@ -4536,13 +4536,18 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	return RET_PF_CONTINUE;
 }
 
+static bool fault_from_gmem(struct kvm_page_fault *fault)
Drop the helper. It has exactly one caller, and it makes the code *harder* to
read, e.g. raises the question of what "from gmem" even means. If a separate
series follows and needs/justifies this helper, then it can/should be added then.
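
(Concretely, since the helper has exactly one caller, the suggestion
amounts to open-coding the check in __kvm_mmu_faultin_pfn(), roughly as
below; this is a sketch of the alternative, not a posted patch.)

static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
				 struct kvm_page_fault *fault)
{
	unsigned int foll = fault->write ? FOLL_WRITE : 0;

	/* Open-coded: private faults and gmem-only slots both use gmem. */
	if (fault->is_private || kvm_memslot_is_gmem_only(fault->slot))
		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);

	foll |= FOLL_NOWAIT;
	/* ... the host userspace_addr path continues unchanged ... */
}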
There is another place that requires the same check, introduced by your
"KVM: x86/mmu: Extend guest_memfd's max mapping level to shared
mappings" in [*]:

[*] https://lore.kernel.org/kvm/aH7KghhsjaiIL3En@xxxxxxxxxx/
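
(If the helper is kept so that both call sites can share it, one option
is to have it take the slot and privacy explicitly, since
kvm_mmu_max_mapping_level() works with a slot/is_private pair rather
than only a fault. A sketch with a hypothetical name:)

/* Illustrative only: a form of the check both call sites could share. */
static bool kvm_use_gmem(const struct kvm_memory_slot *slot, bool is_private)
{
	return is_private || kvm_memslot_is_gmem_only(slot);
}

(__kvm_mmu_faultin_pfn() would then pass fault->slot and
fault->is_private, and kvm_mmu_max_mapping_level() its own slot and
is_private, as in the diff below.)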
---
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1ff7582d5fae..2d1894ed1623 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3335,8 +3336,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault,
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	if (is_private)
-		host_level = kvm_max_private_mapping_level(kvm, fault, slot, gfn);
+	if (is_private || kvm_memslot_is_gmem_only(slot))
+		host_level = kvm_gmem_max_mapping_level(kvm, fault, slot, gfn,
+							is_private);
 	else
 		host_level = host_pfn_mapping_level(kvm, gfn, slot);
 	return min(host_level, max_level);
+{
+	return fault->is_private || kvm_memslot_is_gmem_only(fault->slot);
+}
+
 static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 				 struct kvm_page_fault *fault)
 {
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
-	if (fault->is_private)
-		return kvm_mmu_faultin_pfn_private(vcpu, fault);
+	if (fault_from_gmem(fault))
+		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
 	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
--
2.50.0.727.gbf7dc18ff4-goog