On Mon, 2025-05-19 at 06:33 -0700, Sean Christopherson wrote:
> Was this hit by a real VMM? If so, why is a TDX VMM removing a memslot without
> kicking vCPUs out of KVM?
>
> Regardless, I would prefer not to add a new RET_PF_* flag for this. At a glance,
> KVM can simply drop and reacquire SRCU in the relevant paths.

During the initial debugging and kicking-ideas-around stage, this was the first
direction we looked at. But kvm_gmem_populate() doesn't hold SRCU, so
kvm_tdp_map_page() then tries to unlock SRCU without it being held (although
that version didn't check r == RET_PF_RETRY like you had).

Yan had the following concerns and came up with the version in this series,
which we held for review on the list:

> However, upon further consideration, I am reluctant to implement this fix for
> the following reasons:
> - kvm_gmem_populate() already holds the kvm->slots_lock.
> - While retrying with srcu unlock and lock can workaround the
>   KVM_MEMSLOT_INVALID deadlock, it results in each kvm_vcpu_pre_fault_memory()
>   and tdx_handle_ept_violation() faulting with different memslot layouts.

I'm not sure why the second one is really a problem. For the first one, I think
that path could just take the SRCU read lock in the proper order with
kvm->slots_lock? I need to stare at these locking rules each time, so this is a
low-quality suggestion. But that is the context.