On Fri, Jun 20, 2025 at 11:59 AM Edgecombe, Rick P <rick.p.edgecombe@xxxxxxxxx> wrote:
>
> On Fri, 2025-06-20 at 07:24 -0700, Sean Christopherson wrote:
> > > The patch was tested with QEMU which AFAICT does not touch memslots
> > > when shutting down. Is there a reason to?
> >
> > In this case, the VMM process is not shutting down. To emulate a reboot,
> > the VMM destroys the VM, but reuses the guest_memfd files for the "new"
> > VM. Because guest_memfd takes a reference to "struct kvm", through
> > memslot bindings, memslots need to be manually destroyed so that all
> > references are put and the VM is freed by the kernel.
>
> Sorry if I'm being dumb, but why does it do this? It saves
> freeing/allocating the guestmemfd pages? Or the in-place data gets reused
> somehow?

The goal is just to be able to reuse the same physical memory for the next
boot of the guest. Freeing and faulting-in the same amount of memory is
redundant and time-consuming for large VM sizes. (A rough sketch of the
memslot teardown involved is at the end of this mail.)

> The series Vishal linked has some kind of SEV state transfer thing. How is
> it intended to work for TDX?

The series [1] unblocks the intra-host migration [2] and reboot use cases.

[1] https://lore.kernel.org/lkml/cover.1747368092.git.afranji@xxxxxxxxxx/#t
[2] https://lore.kernel.org/lkml/cover.1749672978.git.afranji@xxxxxxxxxx/#t

> > E.g. otherwise multiple reboots would manifest as memory leaks and
> > eventually OOM the host.
>
> This is in the case of future guestmemfd functionality? Or today?

Intra-host migration and guest reboot are important use cases for Google in
supporting guest VM lifecycles.
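For concreteness, a minimal sketch of the teardown step discussed above,
i.e. the VMM explicitly deleting a guest_memfd-backed memslot before
closing the old VM fd so that the memslot binding's reference to
"struct kvm" is put. This is not code from the patch under discussion;
it only uses the existing KVM_SET_USER_MEMORY_REGION2 uAPI, and the slot
number is made up for illustration. Error handling is omitted.

/* Sketch only: delete one guest_memfd-backed memslot on VM teardown. */
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int delete_gmem_slot(int vm_fd, __u32 slot)
{
	struct kvm_userspace_memory_region2 region;

	memset(&region, 0, sizeof(region));
	region.slot = slot;
	/*
	 * memory_size == 0 deletes the memslot. Deleting the memslot
	 * unbinds the guest_memfd range from this VM, putting the
	 * reference that the binding holds on "struct kvm", so the old
	 * VM can actually be freed once its fd is closed.
	 */
	region.memory_size = 0;

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}

After deleting all such memslots and closing the old VM fd, the VMM keeps
the guest_memfd fds open and binds them into the memslots of the freshly
created VM; making that re-binding possible for a new VM instance is the
part that depends on the series linked above.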