On Thu, Jul 10, 2025 at 01:33:41AM +0000, Edgecombe, Rick P wrote:
> On Mon, 2025-06-09 at 22:13 +0300, Kirill A. Shutemov wrote:
> >  int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
> >  			       enum pg_level level, kvm_pfn_t pfn)
> >  {
> > +	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
> 
> This is unfortunate. In practice, all of the callers will be in a vCPU
> context, but __tdp_mmu_set_spte_atomic() can be called for zaps, which is
> why there is no vCPU.

IIUC, __tdp_mmu_set_spte_atomic() zaps only in the shared case, which is
!is_mirror_sptep() and will not get us here. The !shared case goes through
tdx_sept_remove_private_spte().

> We don't want to split the tdp mmu calling code to introduce a variant that
> has a vCPU.
> 
> What about a big comment? Or checking for NULL and returning -EINVAL like
> PG_LEVEL_4K below? I guess in this case a NULL pointer will be plenty loud.
> So probably a comment is enough.

Yes, a comment is helpful here.
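Something along these lines, perhaps (wording is only a sketch, not the
actual patch):

	/*
	 * kvm_get_running_vcpu() cannot return NULL here: the only
	 * vCPU-less writer, __tdp_mmu_set_spte_atomic() zapping, only
	 * operates on shared (!is_mirror_sptep()) mappings and never
	 * reaches this hook. Everything else runs in a vCPU context.
	 */
	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();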
> Hmm, the only reason we need the vCPU here is to get at the per-vCPU pamt
> page cache. This is also the reason for the strange callback scheme I was
> complaining about in the other patch. It kind of seems like there are two
> friction points in this series:
>  1. How to allocate dpamt pages
>  2. How to serialize the global DPAMT resource inside a read lock
> 
> I'd like to try to figure out a better solution for (1). (2) seems good.
> But I'm still processing.

I tried a few different approaches to address the problem. See phys_prepare
and phys_cleanup in v1.
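For reference, roughly what the atomic allocation callback in the hunk below
has to do: pull a pre-filled page from the per-vCPU PAMT cache, since we
cannot sleep under the mmu read lock. The signature and the field name are
assumptions for illustration, not the actual patch:

	static struct page *tdx_alloc_pamt_page_atomic(void *data)
	{
		struct kvm_vcpu *vcpu = data;
		void *p;

		/* The cache is topped up before taking the read lock */
		p = kvm_mmu_memory_cache_alloc(&vcpu->arch.pamt_page_cache);
		return virt_to_page(p);
	}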
> >  	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> >  	struct page *page = pfn_to_page(pfn);
> > +	int ret;
> > +
> > +	ret = tdx_pamt_get(page, level, tdx_alloc_pamt_page_atomic, vcpu);
> > +	if (ret)
> > +		return ret;

-- 
  Kiryl Shutsemau / Kirill A. Shutemov