Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd

On Tue, Jul 8, 2025 at 11:55 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Tue, Jul 08, 2025, Rick P Edgecombe wrote:
> > On Tue, 2025-07-08 at 11:03 -0700, Sean Christopherson wrote:
> > > > I think there is interest in de-coupling it?
> > >
> > > No?
> >
> > I'm talking about the intra-host migration/reboot optimization stuff. And not
> > doing a good job, sorry.
> >
> > >   Even if we get to a point where multiple distinct VMs can bind to a single
> > > guest_memfd, e.g. for inter-VM shared memory, there will still need to be a
> > > sole
> > > owner of the memory.  AFAICT, fully decoupling guest_memfd from a VM would add
> > > non-trivial complexity for zero practical benefit.
> >
> > I'm talking about moving a gmem fd between different VMs or something using
> > KVM_LINK_GUEST_MEMFD [0]. Not advocating to try to support it. But trying to
> > feel out where the concepts are headed. It kind of allows gmem fds (or just
> > their source memory?) to live beyond a VM lifecycle.
>
> I think the answer is that we want to let guest_memfd live beyond the "struct kvm"
> instance, but not beyond the Virtual Machine.  From a past discussion on this topic[*].
>
>  : No go.  Because again, the inode (physical memory) is coupled to the virtual machine
>  : as a thing, not to a "struct kvm".  Or more concretely, the inode is coupled to an
>  : ASID or an HKID, and there can be multiple "struct kvm" objects associated with a
>  : single ASID.  And at some point in the future, I suspect we'll have multiple KVM
>  : objects per HKID too.
>  :
>  : The current SEV use case is for the migration helper, where two KVM objects share
>  : a single ASID (the "real" VM and the helper).  I suspect TDX will end up with
>  : similar behavior where helper "VMs" can use the HKID of the "real" VM.  For KVM,
>  : that means multiple struct kvm objects being associated with a single HKID.
>  :
>  : To prevent use-after-free, KVM "just" needs to ensure the helper instances can't
>  : outlive the real instance, i.e. can't use the HKID/ASID after the owning virtual
>  : machine has been destroyed.
>  :
>  : To put it differently, "struct kvm" is a KVM software construct that _usually_,
>  : but not always, is associated 1:1 with a virtual machine.
>  :
>  : And FWIW, stashing the pointer without holding a reference would not be a complete
>  : solution, because it couldn't guard against KVM reusing a pointer.  E.g. if a
>  : struct kvm was unbound and then freed, KVM could reuse the same memory for a new
>  : struct kvm, with a different ASID/HKID, and get a false negative on the rebinding
>  : check.
>
> Exactly what that will look like in code is TBD, but the concept/logic holds up.

I think we can simplify the role of guest_memfd in line with discussion [1]:
1) guest_memfd is a memory provider for userspace, KVM and the IOMMU.
         - It allows fallocate() to populate/deallocate memory.
2) guest_memfd supports the notion of private/shared faults.
3) guest_memfd supports memory access control:
         - It allows shared faults from userspace, KVM and the IOMMU.
         - It allows private faults from KVM and the IOMMU.
4) guest_memfd supports changing the access control on its ranges
between shared and private.
         - It notifies its users to invalidate their mappings for the
           ranges getting converted/truncated (see the sketch below).
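
To make 3) and 4) a bit more concrete, here is a very rough sketch of
what such an invalidation notifier interface could look like (all names
below are made up for illustration, none of this exists today):

#include <linux/list.h>
#include <linux/types.h>

/* A range of page offsets within the guest_memfd file. */
struct gmem_invalidate_range {
        pgoff_t start;
        pgoff_t end;            /* exclusive */
};

struct gmem_notifier;

struct gmem_notifier_ops {
        /* Called before a range is converted (shared <-> private) or
         * truncated, so the user can unmap/zap its mappings. */
        void (*invalidate_begin)(struct gmem_notifier *nb,
                                 const struct gmem_invalidate_range *range);
        /* Called once the conversion/truncation has completed. */
        void (*invalidate_end)(struct gmem_notifier *nb,
                               const struct gmem_invalidate_range *range);
};

struct gmem_notifier {
        const struct gmem_notifier_ops *ops;
        struct list_head list;  /* anchored on the guest_memfd inode */
};

Each user (KVM, IOMMU) would register one of these per guest_memfd and
translate the invalidated file-offset range back into whatever it
tracks (GPAs, IOVAs) before tearing down its own mappings.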

Responsibilities that ideally should not be taken up by guest_memfd:
1) guest_memfd cannot initiate pre-faulting on behalf of its users.
2) guest_memfd should not communicate directly with the underlying
architecture layers.
         - All such communication should go via KVM/the IOMMU.
3) KVM should ideally tie the lifetime of the backing page tables/
protection tables/RMP tables to the lifetime of the binding between
memslots and guest_memfd (a strawman sketch follows this list).
         - Today KVM's SNP logic ties RMP table entry lifetimes to how
           long the folios stay mapped in guest_memfd, which I think
           should be revisited.
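
As a strawman for 3), the teardown of the memslot binding could be the
point where arch state goes away, e.g. (both helpers below are
hypothetical and named only for illustration):

#include <linux/kvm_host.h>

/* Hypothetical: zap stage-2 mappings for the range bound to the slot. */
void kvm_gmem_zap_bound_range(struct kvm *kvm, struct kvm_memory_slot *slot);
/* Hypothetical: arch hook to reclaim RMP/protection-table entries. */
void kvm_arch_gmem_unbind(struct kvm *kvm, struct kvm_memory_slot *slot);

static void kvm_gmem_teardown_binding(struct kvm *kvm,
                                      struct kvm_memory_slot *slot)
{
        kvm_gmem_zap_bound_range(kvm, slot);
        kvm_arch_gmem_unbind(kvm, slot);
        /* guest_memfd is now free to keep or free the folios on its own
         * schedule without consulting arch state. */
}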

Some very early thoughts on how guest_memfd could be laid out for the long term:
1) guest_memfd code ideally should be built into the kernel.
2) guest_memfd instances should still be created using KVM ioctls that
carry specific capabilities/restrictions for their users based on the
backing VM/arch.
3) Any outgoing communication from guest_memfd to its users
(userspace/KVM/IOMMU) should be via invalidation notifiers, similar to
how MMU notifiers work.
4) KVM and the IOMMU can implement intermediate layers to handle the
interaction with guest_memfd (a rough sketch of one such layer follows
below).
     - e.g. there could be a layer within KVM that handles:
             - Creating guest_memfd files and associating a
               kvm_gmem_context with those files.
             - Memslot binding: kvm_gmem_context would be used to bind
               KVM memslots with ranges of the context.
             - Invalidate notifier handling: kvm_gmem_context would be
               used to intercept guest_memfd callbacks and translate
               them to the right GPA ranges.
             - Linking: a kvm_gmem_context could be linked to different
               KVM instances.
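
Reusing the gmem_notifier idea from the sketch above, such a layer's
kvm_gmem_context could roughly look like this (again, all names are
hypothetical):

#include <linux/kvm_host.h>
#include <linux/list.h>
#include <linux/types.h>

/* One gfn <-> file-offset range bound to a memslot. */
struct kvm_gmem_binding {
        struct kvm_memory_slot *slot;
        pgoff_t pgoff;                  /* start offset within the file */
        struct list_head list;
};

struct kvm_gmem_context {
        struct file *gmem_file;         /* the guest_memfd being wrapped */
        struct kvm *kvm;                /* re-linkable to another instance */
        struct list_head bindings;      /* list of kvm_gmem_binding */
        struct gmem_notifier notifier;  /* registered with guest_memfd */
};

The notifier callbacks would walk the bindings list to translate an
invalidated file-offset range into GPA ranges and then invoke KVM's
existing MMU invalidation path.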

This line of thinking should allow a cleaner separation between
guest_memfd, KVM and the IOMMU [2].

[1] https://lore.kernel.org/lkml/CAGtprH-+gPN8J_RaEit=M_ErHWTmFHeCipC6viT6PHhG3ELg6A@xxxxxxxxxxxxxx/#t
[2] https://lore.kernel.org/lkml/31beeed3-b1be-439b-8a5b-db8c06dadc30@xxxxxxx/



>
> [*] https://lore.kernel.org/all/ZOO782YGRY0YMuPu@xxxxxxxxxx
>
> > [0] https://lore.kernel.org/all/cover.1747368092.git.afranji@xxxxxxxxxx/
> > https://lore.kernel.org/kvm/cover.1749672978.git.afranji@xxxxxxxxxx/




