Re: [PATCH v8 06/13] KVM: x86: Generalize private fault lookups to guest_memfd fault lookups

On Mon, May 05, 2025, David Hildenbrand wrote:
> On 03.05.25 00:00, Ackerley Tng wrote:
> > Sean Christopherson <seanjc@xxxxxxxxxx> writes:
> > 
> > > On Fri, May 02, 2025, David Hildenbrand wrote:
> > > > On 30.04.25 20:58, Ackerley Tng wrote:
> > > > > > -	if (is_private)
> > > > > > +	if (is_gmem)
> > > > > >    		return max_level;
> > > > > 
> > > > > I think this renaming isn't quite accurate.
> > > > 
> > > > After our discussion yesterday, does that still hold true?
> > > 
> > > No.
> > > 
> > > > > IIUC in __kvm_mmu_max_mapping_level(), we skip considering
> > > > > host_pfn_mapping_level() if the gfn is private because private memory
> > > > > will not be mapped to userspace, so there's no need to query userspace
> > > > > page tables in host_pfn_mapping_level().
> > > > 
> > > > I think the reason was that for private memory we won't be walking the
> > > > user space page tables.
> > > > 
> > > > Once guest_memfd is also responsible for the shared part, why should this
> > > > still be private-only, and why should we consider querying a user space
> > > > mapping that might not even exist?
> > > 
> > > +1, one of the big selling points for guest_memfd beyond CoCo is that it provides
> > > guest-first memory.  It is very explicitly an intended feature that the guest
> > > mappings KVM creates can be a superset of the host userspace mappings.  E.g. the
> > > guest can use larger page sizes, have RW while the host has RO, etc.
> > 
> > Do you mean that __kvm_mmu_max_mapping_level() should, in addition to
> > the parameter renaming from is_private to is_gmem, do something like
> > 
> > if (is_gmem)
> > 	return kvm_gmem_get_max_mapping_level(slot, gfn);

No, kvm_gmem_get_pfn() already provides the maximum allowed order; we "just" need
to update that to constrain the max order based on shared vs. private.  E.g. this
snippet from the original guest_memfd hugepage support[*] (which never landed)
takes care of the pgoff not being properly aligned to the memslot:

+	/*
+	 * The folio can be mapped with a hugepage if and only if the folio is
+	 * fully contained by the range the memslot is bound to.  Note, the
+	 * caller is responsible for handling gfn alignment, this only deals
+	 * with the file binding.
+	 */
+	huge_index = ALIGN(index, 1ull << *max_order);
+	if (huge_index < ALIGN(slot->gmem.pgoff, 1ull << *max_order) ||
+	    huge_index + (1ull << *max_order) > slot->gmem.pgoff + slot->npages)
 		*max_order = 0;

[*] https://lore.kernel.org/all/20231027182217.3615211-18-seanjc@xxxxxxxxxx
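To also constrain on shared vs. private, the same spot could clamp the order so
that a hugepage never spans a shared<->private boundary.  Completely untested
sketch, and gmem_range_has_uniform_shareability() is a made-up placeholder for
whatever per-index tracking guest_memfd ends up with:

	/*
	 * Clamp the order until every index covered by the candidate
	 * hugepage has the same shareability as the faulting index.
	 * gmem_range_has_uniform_shareability() is a placeholder, not an
	 * existing helper.
	 */
	while (*max_order > 0 &&
	       !gmem_range_has_uniform_shareability(file,
						    ALIGN_DOWN(index, 1ull << *max_order),
						    1ull << *max_order))
		(*max_order)--;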

> I assume you mean, not looking at lpage_info at all?
> 
> I have limited understanding what lpage_info is or what it does. I believe
> all it adds is a mechanism to *disable* large page mappings.

Correct.  It's a bit of a catch-all that's used by a variety of KVM x86 features
to disable hugepages.
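For context, the lookup in __kvm_mmu_max_mapping_level() boils down to roughly
this (paraphrased, details differ between kernel versions):

	/* Walk down from the requested level until nothing disallows it. */
	for ( ; max_level > PG_LEVEL_4K; max_level--) {
		struct kvm_lpage_info *linfo = lpage_info_slot(gfn, slot, max_level);

		if (!linfo->disallow_lpage)
			break;
	}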

> We want to disable large pages if (using 2M region as example)
> 
> (a) Mixed memory attributes. If a PFN falls into a 2M region, and parts
>     of that region are shared vs. private (mixed memory attributes ->
>     KVM_LPAGE_MIXED_FLAG)
> 
>  -> With gmem-shared we could have mixed memory attributes, not a PFN
>     fracturing. (PFNs don't depend on memory attributes)
> 
> (b) page track: intercepting (mostly write) access to GFNs

It's also used to handle misaligned memslots (or sizes), e.g. if a 1GiB memory
region spans 1GiB+4KiB => 2GiB+4KiB, KVM will disallow 1GiB hugepages, and 2MiB
hugepages for the head and tail.  Or if the host virtual address isn't aligned
with the guest physical address (see above for guest_memfd's role when there is
no hva).
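For reference, memslot creation records those cases in disallow_lpage roughly
like this (paraphrasing kvm_alloc_memslot_metadata(), per hugepage level):

	/* Misaligned head/tail of the slot can't use this hugepage size. */
	if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
		linfo[0].disallow_lpage = 1;
	if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
		linfo[lpages - 1].disallow_lpage = 1;

	/* hva<->gpa misalignment disallows hugepages for the whole slot. */
	ugfn = slot->userspace_addr >> PAGE_SHIFT;
	if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1))
		for (j = 0; j < lpages; ++j)
			linfo[j].disallow_lpage = 1;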

> So, I wonder if we still have to take care of lpage_info, at least for
> handling (b) correctly [I assume so].

Ya, we do.

> Regarding (a) I am not sure it's still needed once memory attributes are handled
> by gmem in the gmem-shared case. IIRC, with AMD SEV we might still have to honor
> it? But gmem itself could handle that.
> 
> What we could definitely do here for now is:
> 
> if (is_gmem)
> 	/* gmem only supports 4k pages for now. */
> 	return PG_LEVEL_4K;
> 
> And not worry about lpage_info for the time being, until we actually do
> support larger pages.

I don't want to completely punt on this, because if it gets messy, then I want
to know now and have a solution in hand, not find out N months from now.

That said, I don't expect it to be difficult.  What we could punt on is
performance of the lookups, which is the real reason KVM maintains the rather
expensive disallow_lpage array.

And that said, memslots can only bind to one guest_memfd instance, so I don't
immediately see any reason why the guest_memfd ioctl() couldn't process the
slots that are bound to it.  I.e. why not update KVM_LPAGE_MIXED_FLAG from the
guest_memfd ioctl() instead of from KVM_SET_MEMORY_ATTRIBUTES?
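Completely untested, but assuming the ioctl() has the file and the affected pgoff
range (start, end) in hand, the conversion path could walk the bindings the same
way the invalidation path does; kvm_update_lpage_mixed() is a made-up placeholder
for whatever ends up recomputing KVM_LPAGE_MIXED_FLAG:

	struct kvm_gmem *gmem = file->private_data;
	struct kvm_memory_slot *slot;
	unsigned long index;

	/*
	 * For every memslot bound to this guest_memfd, recompute the
	 * mixed-attributes bookkeeping for the converted pgoff range.
	 * kvm_update_lpage_mixed() is a placeholder, not an existing API.
	 */
	xa_for_each_range(&gmem->bindings, index, slot, start, end - 1)
		kvm_update_lpage_mixed(gmem->kvm, slot, start, end);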

