Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd

On Wed, Jul 9, 2025 at 8:17 AM Edgecombe, Rick P
<rick.p.edgecombe@xxxxxxxxx> wrote:
>
> On Wed, 2025-07-09 at 07:28 -0700, Vishal Annapurve wrote:
> > I think we can simplify the role of guest_memfd in line with discussion [1]:
> > 1) guest_memfd is a memory provider for userspace, KVM, IOMMU.
> >          - It allows fallocate to populate/deallocate memory
> > 2) guest_memfd supports the notion of private/shared faults.
> > 3) guest_memfd supports memory access control:
> >          - It allows shared faults from userspace, KVM, IOMMU
> >          - It allows private faults from KVM, IOMMU
> > 4) guest_memfd supports changing access control on its ranges
> > between shared/private.
> >          - It notifies its users to invalidate their mappings for
> > the ranges being converted/truncated.
>
> KVM needs to know if a GFN is private/shared. I think guest_memfd is also now
> intended to be the repository for this information, right? Besides
> invalidations, it needs to be queryable.

Yeah, that interface can be added as well. Though, if possible, KVM could
just pass the fault type directly to guest_memfd, which would return an
error if the fault type doesn't match the range's permissions. Additionally,
KVM queries the mapping order for a given pfn/gfn, which will need to be
supported as well.
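
To make that concrete, here is a rough sketch of the fault-side interface
I'm imagining (every name below is made up for illustration; none of this
is existing guest_memfd code):

/*
 * Hypothetical: callers declare the kind of access they are faulting
 * in, and guest_memfd enforces it against the range's current state
 * instead of exposing a separate "is this GFN private?" query.
 */
enum gmem_fault_kind {
	GMEM_FAULT_SHARED,	/* allowed from userspace, KVM, IOMMU */
	GMEM_FAULT_PRIVATE,	/* allowed from KVM, IOMMU only */
};

int gmem_fault_pfn(struct file *file, pgoff_t index,
		   enum gmem_fault_kind kind,
		   struct page **pagep, unsigned int *order)
{
	/* gmem_index_is_private() is a made-up helper. */
	bool private = gmem_index_is_private(file, index);

	if (kind == GMEM_FAULT_PRIVATE && !private)
		return -EACCES;
	if (kind == GMEM_FAULT_SHARED && private)
		return -EACCES;

	/*
	 * The same path can report the mapping order, covering KVM's
	 * existing pfn/gfn order query. gmem_get_page() is made up too.
	 */
	return gmem_get_page(file, index, pagep, order);
}

Whether something like this replaces an explicit query or just complements
it is an open question.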

>
> >
> > Responsibilities that ideally should not be taken up by guest_memfd:
> > 1) guest_memfd cannot initiate pre-faulting on behalf of its users.
> > 2) guest_memfd should not be directly communicating with the
> > underlying architecture layers.
> >          - All communication should go via KVM/IOMMU.
>
> Maybe stronger: there should be only generic gmem behaviors, not any
> special if (vm_type == tdx) style logic.
>
> > 3) KVM should ideally associate the lifetime of backing
> > pagetables/protection tables/RMP tables with the lifetime of the
> > binding of memslots with guest_memfd.
> >          - Today KVM's SNP logic ties RMP table entry lifetimes to
> > how long the folios are mapped in guest_memfd, which I think should
> > be revisited.
>
> I don't understand the problem. KVM needs to respond to user-accessible
> invalidations, but how long it keeps other resources around could be useful
> for various optimizations, like deferring work to a workqueue.

I don't think it can be deferred to a workqueue: the RMP table entries
will need to be removed synchronously once the last reference on the
guest_memfd drops, unless the memory itself is kept around after filemap
eviction. I can see benefits of that latter approach for handling
scenarios like intra-host migration.
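
As a sketch of the constraint (rmp_make_shared() is the existing SNP
helper; the surrounding names are illustrative, not real guest_memfd
code):

/* Illustrative free path for a private guest_memfd folio. */
static void gmem_free_private_folio(struct folio *folio)
{
	unsigned long pfn = folio_pfn(folio);
	unsigned long end = pfn + folio_nr_pages(folio);

	/*
	 * This has to run before the folio goes back to the page
	 * allocator: a stale RMP entry would make the next owner's
	 * accesses fault. That is what forces the cleanup to be
	 * synchronous with filemap eviction rather than deferred to
	 * a workqueue.
	 */
	for (; pfn < end; pfn++)
		WARN_ON_ONCE(rmp_make_shared(pfn, PG_LEVEL_4K));
}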

>
> I think it would help to just target the goals of Ackerley's series. We
> should get that code into shape and this kind of stuff will fall out of it.
>
> >
> > Some very early thoughts on how guest_memfd could be laid out for the long term:
> > 1) guest_memfd code ideally should be built into the kernel.
> > 2) guest_memfd instances should still be created using KVM ioctls that
> > carry specific capabilities/restrictions for their users based on the
> > backing VM/arch.
> > 3) Any outgoing communication from guest_memfd to its users
> > (userspace/KVM/IOMMU) should be via invalidation notifiers, similar to
> > how MMU notifiers work.
> > 4) KVM and IOMMU can implement intermediate layers to handle
> > interaction with guest_memfd.
> >      - e.g. there could be a layer within KVM that handles:
> >              - creating guest_memfd files and associating a
> >                kvm_gmem_context with those files
> >              - memslot binding: kvm_gmem_context will be used to bind
> >                KVM memslots with the context ranges
> >              - invalidate notifier handling: kvm_gmem_context will be
> >                used to intercept guest_memfd callbacks and translate
> >                them to the right GPA ranges
> >              - linking: kvm_gmem_context can be linked to different
> >                KVM instances
>
> We can probably look at the code to decide these.
>

Agree.
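
For reference, the rough kind of layering I have in mind for the KVM side
(all names here are hypothetical, including gmem_invalidate_notifier;
this is just to illustrate the shape, not a proposal for the actual code):

/*
 * Hypothetical intermediate layer inside KVM. guest_memfd itself would
 * only know about the notifier; everything GPA-related lives in KVM's
 * context object.
 */
struct kvm_gmem_context {
	struct gmem_invalidate_notifier notifier; /* registered with guest_memfd */
	struct kvm *kvm;                          /* owning/linked VM */
	struct list_head bindings;                /* gmem offset <-> memslot GPA ranges */
};

/* guest_memfd would report invalidations in file offsets... */
static void kvm_gmem_invalidate(struct gmem_invalidate_notifier *n,
				pgoff_t start, pgoff_t end)
{
	struct kvm_gmem_context *ctx =
		container_of(n, struct kvm_gmem_context, notifier);

	/* ...and this layer translates them to GPA ranges via the bindings. */
	kvm_gmem_zap_gpa_ranges(ctx, start, end); /* hypothetical */
}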




