On Thu, Aug 28, 2025 at 08:54:48AM +0800, Edgecombe, Rick P wrote:
> On Wed, 2025-08-27 at 16:25 +0800, Yan Zhao wrote:
> > > +{
> > > +	struct kvm_page_fault fault = {
> > > +		.addr = gfn_to_gpa(gfn),
> > > +		.error_code = PFERR_GUEST_FINAL_MASK | PFERR_PRIVATE_ACCESS,
> > > +		.prefetch = true,
> > > +		.is_tdp = true,
> > > +		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(vcpu->kvm),
> > > +
> > > +		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
> > Looks the kvm_tdp_mmu_map_private_pfn() is only for initial memory mapping,
> > given that ".prefetch = true" and RET_PF_SPURIOUS is not a valid return value.
>
> Hmm, what are you referring to regarding RET_PF_SPURIOUS?

If kvm_tdp_mmu_map_private_pfn() could also be invoked after the initial memory
mapping stage, RET_PF_SPURIOUS would be a valid return case. But in this patch,
only RET_PF_RETRY and RET_PF_FIXED are valid, so I think it's expected to be
invoked only during the initial memory mapping stage :)
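
To illustrate, here is a rough sketch of the only call pattern I'd expect to be
valid under this patch. The caller name and the exact signature of
kvm_tdp_mmu_map_private_pfn() are just my assumptions for illustration, not
taken from the patch; the point is that only RET_PF_RETRY and RET_PF_FIXED need
handling if the helper is restricted to the initial memory mapping stage:

/* Hypothetical caller, restricted to the initial memory mapping stage. */
static int map_initial_private_mem(struct kvm_vcpu *vcpu, gfn_t gfn,
				   kvm_pfn_t pfn)
{
	int ret;

	/*
	 * RET_PF_RETRY only indicates a transient condition (e.g. a race
	 * under mmu_lock), so simply try again.
	 */
	do {
		ret = kvm_tdp_mmu_map_private_pfn(vcpu, gfn, pfn);
	} while (ret == RET_PF_RETRY);

	/*
	 * During the initial memory mapping stage no mapping can already be
	 * present, so RET_PF_SPURIOUS (or anything other than RET_PF_FIXED)
	 * would indicate a bug in the caller.
	 */
	if (KVM_BUG_ON(ret != RET_PF_FIXED, vcpu->kvm))
		return -EIO;

	return 0;
}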