Re: [PATCH v16 15/22] KVM: x86/mmu: Extend guest_memfd's max mapping level to shared mappings

On Fri, Jul 25, 2025, Ackerley Tng wrote:
> Sean Christopherson <seanjc@xxxxxxxxxx> writes:
> 
> > On Thu, Jul 24, 2025, Ackerley Tng wrote:
> >> Fuad Tabba <tabba@xxxxxxxxxx> writes:
> >> >  int kvm_mmu_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault,
> >> > @@ -3362,8 +3371,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault,
> >> >  	if (max_level == PG_LEVEL_4K)
> >> >  		return PG_LEVEL_4K;
> >> >  
> >> > -	if (is_private)
> >> > -		host_level = kvm_max_private_mapping_level(kvm, fault, slot, gfn);
> >> > +	if (is_private || kvm_memslot_is_gmem_only(slot))
> >> > +		host_level = kvm_gmem_max_mapping_level(kvm, fault, slot, gfn,
> >> > +							is_private);
> >> >  	else
> >> >  		host_level = host_pfn_mapping_level(kvm, gfn, slot);
> >> 
> >> No change required now, but I'd like to point out that this change
> >> assumes that if kvm_memslot_is_gmem_only(), then even for shared pages,
> >> guest_memfd will be the only source of truth.
> >
> > It's not an assumption, it's a hard requirement.
> >
> >> This holds now because shared pages are always split to 4K, but if
> >> shared pages become larger, might the mapping in the host actually turn
> >> out to be smaller?
> >
> > Yes, the host userspace mappings could be smaller, and supporting that scenario is
> > very explicitly one of the design goals of guest_memfd.  From commit a7800aa80ea4
> > ("KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory"):
> >
> >  : A guest-first memory subsystem allows for optimizations and enhancements
> >  : that are kludgy or outright infeasible to implement/support in a generic
> >  : memory subsystem.  With guest_memfd, guest protections and mapping sizes
> >  : are fully decoupled from host userspace mappings.   E.g. KVM currently
> >  : doesn't support mapping memory as writable in the guest without it also
> >  : being writable in host userspace, as KVM's ABI uses VMA protections to
> >  : define the allowed guest protections.  Userspace can fudge this by
> >  : establishing two mappings, a writable mapping for the guest and readable
> >  : one for itself, but that’s suboptimal on multiple fronts.
> >  : 
> >  : Similarly, KVM currently requires the guest mapping size to be a strict
> >  : subset of the host userspace mapping size, e.g. KVM doesn’t support
> >  : creating a 1GiB guest mapping unless userspace also has a 1GiB guest
> >  : mapping.  Decoupling the mappings sizes would allow userspace to precisely
> >  : map only what is needed without impacting guest performance, e.g. to
> >  : harden against unintentional accesses to guest memory.
> 
> Let me try to understand this better. If/when guest_memfd supports
> larger folios for shared pages, and guest_memfd returns a 2M folio from
> kvm_gmem_fault_shared(), can the mapping in host userspace turn out
> to be 4K?

It can be 2M, 4K, or none.
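
For illustration only, here's a sketch using the selftest helpers from the patch
below, and assuming guest_memfd grows 2M backing folios for shared memory: host
userspace can map a single 4K window of that folio, or skip the mmap() entirely,
and neither choice constrains the guest-side mapping level.

	/*
	 * Hypothetical sketch, assuming guest_memfd gains 2M backing folios
	 * for shared memory: the host-side mapping is optional and can be of
	 * any granularity; it does not constrain the guest-side level.
	 */
	static void map_gmem_slot_sketch(struct kvm_vm *vm, int slot, uint64_t gpa)
	{
		uint8_t *mem;
		int fd;

		fd = vm_create_guest_memfd(vm, SZ_2M, GUEST_MEMFD_FLAG_MMAP);
		vm_set_user_memory_region2(vm, slot, KVM_MEM_GUEST_MEMFD, gpa,
					   SZ_2M, NULL, fd, 0);

		/* Map just one 4K window; omitting this mmap() entirely is also fine. */
		mem = mmap(NULL, SZ_4K, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		TEST_ASSERT(mem != MAP_FAILED, "mmap() on guest_memfd failed");
	}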

> If that happens, should kvm_gmem_max_mapping_level() return 4K for a
> memslot with kvm_memslot_is_gmem_only() == true?

No.

> The above code would skip host_pfn_mapping_level() and return just what
> guest_memfd reports, which is 2M.

Yes.

> Or do you mean that guest_memfd will be the source of truth in that it
> must also know/control, in the above scenario, that the host mapping is
> also 2M?

No.  The userspace mapping, _if_ there is one, is completely irrelevant.  The
entire point of guest_memfd is to eliminate the requirement that memory be mapped
into host userspace in order for that memory to be mapped into the guest.

Invoking host_pfn_mapping_level() isn't just undesirable, it's flat out wrong, as
KVM will not verify slot->userspace_addr actually points at the (same) guest_memfd
instance.
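
Conceptually, the gmem-only path boils down to something like the below.  This
is a rough sketch only, not the actual v16 code, and kvm_gmem_mapping_order()
is a placeholder for however guest_memfd reports the order of the folio backing
the gfn.

	/*
	 * Sketch: for a gmem-only slot, derive the max mapping level solely
	 * from guest_memfd's backing folio.  Host userspace PTEs, if any even
	 * exist, are never consulted.
	 */
	static int gmem_only_max_mapping_level(const struct kvm_memory_slot *slot,
					       gfn_t gfn)
	{
		int order = kvm_gmem_mapping_order(slot, gfn);	/* placeholder */

		if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
			return PG_LEVEL_2M;

		return PG_LEVEL_4K;
	}

host_pfn_mapping_level(), by contrast, walks the userspace page tables for
slot->userspace_addr, which for a gmem-only slot may not point at the same
guest_memfd instance at all.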

To demonstrate, this must pass (and does once "KVM: x86/mmu: Handle guest page
faults for guest_memfd with shared memory" is added back).

---
 .../testing/selftests/kvm/guest_memfd_test.c  | 64 +++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 088053d5f0f5..b86bf89a71e0 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -13,6 +13,7 @@
 
 #include <linux/bitmap.h>
 #include <linux/falloc.h>
+#include <linux/sizes.h>
 #include <setjmp.h>
 #include <signal.h>
 #include <sys/mman.h>
@@ -21,6 +22,7 @@
 
 #include "kvm_util.h"
 #include "test_util.h"
+#include "ucall_common.h"
 
 static void test_file_read_write(int fd)
 {
@@ -298,6 +300,66 @@ static void test_guest_memfd(unsigned long vm_type)
 	kvm_vm_free(vm);
 }
 
+static void guest_code(uint8_t *mem, uint64_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		__GUEST_ASSERT(mem[i] == 0xaa,
+			       "Guest expected 0xaa at offset %lu, got 0x%x", i, mem[i]);
+
+	memset(mem, 0xff, size);
+	GUEST_DONE();
+}
+
+static void test_guest_memfd_guest(void)
+{
+	/*
+	 * Skip the first 4gb and slot0.  slot0 maps <1gb and is used to back
+	 * the guest's code, stack, and page tables, and low memory contains
+	 * the PCI hole and other MMIO regions that need to be avoided.
+	 */
+	const uint64_t gpa = SZ_4G;
+	const int slot = 1;
+
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	uint8_t *mem;
+	size_t size;
+	int fd, i;
+
+	if (!kvm_has_cap(KVM_CAP_GUEST_MEMFD_MMAP))
+		return;
+
+	vm = __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, &vcpu, 1, guest_code);
+
+	TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_MMAP),
+		    "Default VM type should always support guest_memfd mmap()");
+
+	size = vm->page_size;
+	fd = vm_create_guest_memfd(vm, size, GUEST_MEMFD_FLAG_MMAP);
+	vm_set_user_memory_region2(vm, slot, KVM_MEM_GUEST_MEMFD, gpa, size, NULL, fd, 0);
+
+	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() on guest_memfd failed");
+	memset(mem, 0xaa, size);
+	munmap(mem, size);
+
+	virt_pg_map(vm, gpa, gpa);
+	vcpu_args_set(vcpu, 2, gpa, size);
+	vcpu_run(vcpu);
+
+	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_DONE);
+
+	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() on guest_memfd failed");
+	for (i = 0; i < size; i++)
+		TEST_ASSERT_EQ(mem[i], 0xff);
+
+	close(fd);
+	kvm_vm_free(vm);
+}
+
 int main(int argc, char *argv[])
 {
 	unsigned long vm_types, vm_type;
@@ -314,4 +376,6 @@ int main(int argc, char *argv[])
 
 	for_each_set_bit(vm_type, &vm_types, BITS_PER_TYPE(vm_types))
 		test_guest_memfd(vm_type);
+
+	test_guest_memfd_guest();
 }

base-commit: 9a82b11560044839b10b1fb83ff230d9a88785b8
-- 




