On Tue, Aug 26, 2025 at 05:05:19PM -0700, Sean Christopherson wrote:
> Fold tdx_mem_page_record_premap_cnt() into tdx_sept_set_private_spte() as
> providing a one-off helper for effectively three lines of code is at best a
> wash, and splitting the code makes the comment for smp_rmb() _extremely_
> confusing as the comment talks about reading kvm->arch.pre_fault_allowed
> before kvm_tdx->state, but the immediately visible code does the exact
> opposite.
>
> Opportunistically rewrite the comments to more explicitly explain who is
> checking what, as well as _why_ the ordering matters.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/tdx.c | 49 ++++++++++++++++++------------------------
>  1 file changed, 21 insertions(+), 28 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index b7559ea1e353..e4b70c0dbda3 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1608,29 +1608,6 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
>  	return 0;
>  }
>
> -/*
> - * KVM_TDX_INIT_MEM_REGION calls kvm_gmem_populate() to map guest pages; the
> - * callback tdx_gmem_post_populate() then maps pages into private memory.
> - * through the a seamcall TDH.MEM.PAGE.ADD(). The SEAMCALL also requires the
> - * private EPT structures for the page to have been built before, which is
> - * done via kvm_tdp_map_page(). nr_premapped counts the number of pages that
> - * were added to the EPT structures but not added with TDH.MEM.PAGE.ADD().
> - * The counter has to be zero on KVM_TDX_FINALIZE_VM, to ensure that there
> - * are no half-initialized shared EPT pages.
> - */
> -static int tdx_mem_page_record_premap_cnt(struct kvm *kvm, gfn_t gfn,
> -					  enum pg_level level, kvm_pfn_t pfn)
> -{
> -	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> -
> -	if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
> -		return -EIO;
> -
> -	/* nr_premapped will be decreased when tdh_mem_page_add() is called. */
> -	atomic64_inc(&kvm_tdx->nr_premapped);
> -	return 0;
> -}
> -
>  static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
>  				     enum pg_level level, kvm_pfn_t pfn)
>  {
> @@ -1641,14 +1618,30 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
>  		return -EIO;
>
>  	/*
> -	 * Read 'pre_fault_allowed' before 'kvm_tdx->state'; see matching
> -	 * barrier in tdx_td_finalize().
> +	 * Ensure pre_fault_allowed is read by kvm_arch_vcpu_pre_fault_memory()
> +	 * before kvm_tdx->state. Userspace must not be allowed to pre-fault
> +	 * arbitrary memory until the initial memory image is finalized. Pairs
> +	 * with the smp_wmb() in tdx_td_finalize().
>  	 */
>  	smp_rmb();
> -	if (likely(kvm_tdx->state == TD_STATE_RUNNABLE))
> -		return tdx_mem_page_aug(kvm, gfn, level, pfn);
>
> -	return tdx_mem_page_record_premap_cnt(kvm, gfn, level, pfn);
> +	/*
> +	 * If the TD isn't finalized/runnable, then userspace is initializing
> +	 * the VM image via KVM_TDX_INIT_MEM_REGION. Increment the number of
> +	 * pages that need to be initialized via TDH.MEM.PAGE.ADD (PAGE.ADD
> +	 * requires a pre-existing S-EPT mapping). KVM_TDX_FINALIZE_VM checks
> +	 * the counter to ensure all mapped pages have been added to the image,
> +	 * to prevent running the TD with uninitialized memory.

To prevent a mismatch between the mirror EPT and the S-EPT? e.g., before
KVM_TDX_FINALIZE_VM, if userspace performs a zap after TDH.MEM.PAGE.ADD, the
page will be removed from the S-EPT, but nr_premapped will not change after
the successful TDH.MEM.RANGE.BLOCK and TDH.MEM.PAGE.REMOVE. As a result, the
TD will still run with uninitialized memory.
> +	 */
> +	if (unlikely(kvm_tdx->state != TD_STATE_RUNNABLE)) {
> +		if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
> +			return -EIO;
> +
> +		atomic64_inc(&kvm_tdx->nr_premapped);
> +		return 0;
> +	}
> +
> +	return tdx_mem_page_aug(kvm, gfn, level, pfn);
>  }
>
>  static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
> --
> 2.51.0.268.g9569e192d0-goog
>