Re: [RFC PATCH v2 13/22] iommufd: amd-iommu: Add vdevice support

On 10/4/25 23:05, Jason Gunthorpe wrote:
> On Thu, Apr 10, 2025 at 04:39:39PM +1000, Alexey Kardashevskiy wrote:
>>>> @@ -2549,12 +2561,15 @@ amd_iommu_domain_alloc_paging_flags(struct device *dev, u32 flags,
>>>>  {
>>>>  	struct amd_iommu *iommu = get_amd_iommu_from_dev(dev);
>>>>  	const u32 supported_flags = IOMMU_HWPT_ALLOC_DIRTY_TRACKING |
>>>> +						IOMMU_HWPT_ALLOC_PASID |
>>>> +						IOMMU_HWPT_ALLOC_NEST_PARENT;
>>>> +	const u32 supported_flags2 = IOMMU_HWPT_ALLOC_DIRTY_TRACKING |
>>>>  						IOMMU_HWPT_ALLOC_PASID;
>>>
>>> Just ignore NEST_PARENT? That seems wrong, it should force a V1 page
>>> table??
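For what it is worth, this is how I read that suggestion - a small user-space-testable sketch, where NEST_PARENT selects the v1 page table path instead of being silently dropped. The flag values here are placeholders (not the real IOMMU_HWPT_ALLOC_* numbers from the uapi header) and hwpt_check_flags() is a hypothetical helper, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder bit values, purely for illustration. */
#define HWPT_DIRTY_TRACKING (1u << 0)
#define HWPT_NEST_PARENT    (1u << 1)
#define HWPT_PASID          (1u << 2)

/*
 * Reject unknown bits, and let NEST_PARENT force the v1 page table
 * path rather than being ignored.
 */
static int hwpt_check_flags(uint32_t flags, int *use_v1_table)
{
	const uint32_t supported =
		HWPT_DIRTY_TRACKING | HWPT_NEST_PARENT | HWPT_PASID;

	if (flags & ~supported)
		return -1;	/* would be -EOPNOTSUPP in the kernel */

	*use_v1_table = !!(flags & HWPT_NEST_PARENT);
	return 0;
}
```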


>> Ahhh... This is because I still have troubles with what IOMMU_DOMAIN_NESTED
>> means (and iommufd.rst does not help me). There is one device, one IOMMU
>> table buuut 2 domains? Uh.
>
> It means whatever you want it to mean, so long as it holds a reference
> to a NEST_PARENT :)

ahhhh ;)

>>> You can get 1:1 domain objects linked to the viommu by creating the
>>> 'S1' type domains, maybe that is what you want here. A special domain
>>> type that is TSM that has a special DTE.
>>
>> Should not IOMMU_DOMAIN_NESTED be that "S1" domain?
>
> Yes that is how ARM is doing it.
>
> Minimally IOMMU_DOMAIN_NESTED on AMD should refer to a partial DTE
> fragment that sets the GCR3 information and other guest controlled
> bits from the vDTE. It should hold a reference to the viommu and the
> S2 NEST_PARENT.
>
> From that basis then you'd try to fit in the CC stuff.
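So something shaped like this, if I follow - a hypothetical sketch, none of these type or field names are real driver code: the NESTED domain carries the guest-controlled vDTE fragment plus references pinning the viommu and the S2 NEST_PARENT:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the real iommufd/driver objects. */
struct fake_viommu { int refs; };
struct fake_s2_domain { int refs; };

/* Hypothetical IOMMU_DOMAIN_NESTED payload on AMD. */
struct amd_nested_domain_sketch {
	uint64_t guest_dte[4];		/* vDTE fragment: GCR3 root etc. */
	struct fake_viommu *viommu;	/* reference held for lifetime */
	struct fake_s2_domain *parent;	/* the S2 NEST_PARENT */
};

static void nested_domain_init(struct amd_nested_domain_sketch *nd,
			       struct fake_viommu *viommu,
			       struct fake_s2_domain *parent,
			       const uint64_t vdte[4])
{
	for (int i = 0; i < 4; i++)
		nd->guest_dte[i] = vdte[i];
	nd->viommu = viommu;	/* take references so neither object */
	viommu->refs++;		/* goes away under the nested domain */
	nd->parent = parent;
	parent->refs++;
}
```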

>>> Though I'd really rather see the domain attach logic and DTE formation
>>> in the AMD driver be fixed up before we made it more complex :\
>>>
>>> It would be nice to see normal nesting and viommu support first too :\
>>
>> It is in the works too. Thanks,
>
> I think your work will be easier to understand when viewed on top of
> working basic nesting support as it is just a special case of that

Really not sure about that "easier" thing :)

GCR3 is orthogonal to what I am doing here right now - this exercise does not use any additional guest table. Instead it tells the host IOMMU (yeah, via the PSP) how to treat all IOVAs - private or shared. There is a bar called "vTOM" (== virtual top of memory): below that bar everything is private, above it - shared, and I set it to the maximum. So even when we get vIOMMU in SNP VMs, an unenlightened VM will still be using vTOM (the SVSM == privileged VM FW will talk to the PSP about vTOM).

This vTOM is a very limited vIOMMU really (it communicates just an address limit), not what people usually think of when they read "vIOMMU", with guest tables and 2-level translation.
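To illustrate, a toy model (not SVSM/PSP code, the bar is the only knob):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * The vTOM rule: a single address bar; everything below it is
 * private, everything at or above it is shared. Setting the bar to
 * the maximum makes the whole IOVA space private.
 */
static bool iova_is_private(uint64_t iova, uint64_t vtom)
{
	return iova < vtom;
}
```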


--
Alexey




