On Mon, Jul 21, 2025 at 10:29 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > >
> > > > 2) KVM fetches shared faults through userspace page tables and not
> > > > guest_memfd directly.
> > >
> > > This is also irrelevant. KVM _already_ supports resolving shared faults through
> > > userspace page tables. That support won't go away as KVM will always need/want
> > > to support mapping VM_IO and/or VM_PFNMAP memory into the guest (even for TDX).

As a combination of [1] and [2], I believe we are saying that for
memslots backed by mappable guest_memfd files, KVM will always serve
both shared and private faults using kvm_gmem_get_pfn() (rough sketch
of what I mean at the bottom of this mail). And I think the same story
will carry over when we get to stage 2, i.e. mmap+conversion support.

[1] https://lore.kernel.org/kvm/20250717162731.446579-10-tabba@xxxxxxxxxx/
[2] https://lore.kernel.org/kvm/20250717162731.446579-14-tabba@xxxxxxxxxx/

> > >
> > > > I don't see value in trying to go out of way to support such a usecase.
> > >
> > > But if/when KVM gains support for tracking shared vs. private in guest_memfd
> > > itself, i.e. when TDX _does_ support mmap() on guest_memfd, KVM won't have to go
> > > out of its way to support using guest_memfd for the @userspace_addr backing store.
> > > Unless I'm missing something, the only thing needed to "support" this scenario is:
> >
> > As above, we need 1) mentioned by Vishal as well, to prevent userspace from
> > passing mmapable guest_memfd to serve as private memory.
>
> Ya, I'm talking specifically about what the world will look like once KVM tracks
> private vs. shared in guest_memfd. I'm not in any way advocating we do this
> right now.

I think we should generally strive toward a single memory backing for
all scenarios, unless there is a real-world usecase that can't do
without dual memory backing (and we should think hard before committing
to supporting it). Dual memory backing was just a stopgap we needed
until the *right* solution came along.
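
To make sure we're picturing the same end state, here's a rough sketch
(not real code, untested) of what I'd expect the x86 fault path to look
like once both fault types are served out of a mappable guest_memfd.
kvm_gmem_memslot_mappable() and kvm_mmu_faultin_pfn_legacy() are made-up
placeholder names for whatever the series settles on, and I'm writing
the kvm_gmem_get_pfn() signature from memory, so details may be off:

static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
				 struct kvm_page_fault *fault)
{
	int max_order;

	/*
	 * Mappable gmem slot: shared and private faults alike are
	 * resolved via kvm_gmem_get_pfn(), never via the userspace
	 * page tables.
	 */
	if (fault->is_private || kvm_gmem_memslot_mappable(fault->slot))
		return kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn,
					&fault->pfn, &fault->refcounted_page,
					&max_order);

	/*
	 * Everything else, including VM_IO/VM_PFNMAP VMAs, still goes
	 * through the userspace mapping, same as today.
	 */
	return kvm_mmu_faultin_pfn_legacy(vcpu, fault);
}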