Re: [PATCHv2 08/12] KVM: TDX: Handle PAMT allocation in fault path

On Mon, 2025-06-09 at 22:13 +0300, Kirill A. Shutemov wrote:
>  int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
>  			      enum pg_level level, kvm_pfn_t pfn)
>  {
> +	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

This is unfortunate. In practice, all of the callers will be in a vCPU context,
but __tdp_mmu_set_spte_atomic() can also be called for zaps, which is why there
is no vCPU.

We don't want to split the tdp mmu calling code to introduce a variant that has
a vCPU. 

What about a big comment? Or checking for NULL and returning -EINVAL, like the
PG_LEVEL_4K check below? I guess in this case a NULL pointer dereference will be
plenty loud, so probably a comment is enough.
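To make the second option concrete, here is a minimal sketch of the NULL-check
alternative. The kernel types and kvm_get_running_vcpu() are stubbed out so the
pattern compiles standalone; set_private_spte() is a hypothetical stand-in for
tdx_sept_set_private_spte(), not the actual patch code.

```c
#include <errno.h>
#include <stddef.h>

/* Stub stand-ins for kernel types, just to make the pattern compilable. */
struct kvm_vcpu { int id; };

static struct kvm_vcpu *running_vcpu;

/* Stand-in for the real kvm_get_running_vcpu(), which returns NULL
 * when not called from a vCPU context (e.g. zap paths). */
static struct kvm_vcpu *kvm_get_running_vcpu(void)
{
	return running_vcpu;
}

static int set_private_spte(void)
{
	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

	/* Zap paths can reach here without a vCPU; fail explicitly
	 * instead of dereferencing NULL further down. */
	if (!vcpu)
		return -EINVAL;

	/* ... would go on to use vcpu's per-vCPU PAMT page cache ... */
	return 0;
}
```

Whether this guard or a comment is better depends on how loud we want the
failure to be; the NULL dereference is arguably just as informative.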

Hmm, the only reason we need the vCPU here is to get at the per-vCPU PAMT
page cache. This is also the reason for the strange callback scheme I was
complaining about in the other patch. It kind of seems like there are two
friction points in this series:
1. How to allocate DPAMT pages
2. How to serialize the global DPAMT resource inside a read lock

I'd like to try to figure out a better solution for (1). (2) seems good. But I'm
still processing.

>  	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
>  	struct page *page = pfn_to_page(pfn);
> +	int ret;
> +
> +	ret = tdx_pamt_get(page, level, tdx_alloc_pamt_page_atomic, vcpu);
> +	if (ret)
> +		return ret;
