We need a way to pass the Control register details to iommufd -> AMD driver so
that we can program the VF Control MMIO register.

Since the iommu_vcmdq_alloc structure doesn't have a user_data field, how do we
communicate the Control register value?
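One direction that comes to mind (purely a sketch from my side, not something
this series defines) is to follow the data_type/data_len/data_uptr convention
that other iommufd commands such as IOMMU_HWPT_ALLOC already use for
driver-specific payloads, and let AMD pass the Control register value through a
vendor structure. Every name and field below is illustrative only:

/*
 * Illustrative only -- not the uapi proposed in this series. It just shows
 * how a driver-specific payload could ride on the vCMDQ alloc command,
 * following the data_type/data_uptr pattern used elsewhere in
 * include/uapi/linux/iommufd.h (e.g. struct iommu_hwpt_alloc).
 */
struct iommu_vcmdq_amd {
	__aligned_u64 ctrl;	/* value to program into the VF Control MMIO register */
};

struct iommu_vcmdq_alloc {
	/* ... existing fields (@size, @flags, @viommu_id, @type,
	 *     @addr, @length, @out_vcmdq_id) ... */
	__u32 data_type;	/* hypothetical: IOMMU_VCMDQ_DATA_AMD_VIOMMU */
	__u32 data_len;
	__aligned_u64 data_uptr;	/* hypothetical: -> struct iommu_vcmdq_amd */
};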




> 
>> [1]
>> https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/specifications/48882_IOMMU.pdf
> 
> Thanks for the doc. So, AMD has:
> 
> Command Buffer Base Address Register [MMIO Offset 0008h]
> "used to program the system physical base address and size of the
>  command buffer. The command buffer occupies contiguous physical
>  memory starting at the programmed base address, up to the
>  programmed size."
> Command Buffer Head Pointer Register [MMIO Offset 2000h]
> Command Buffer Tail Pointer Register [MMIO Offset 2008h]
> 
> IIUIC, AMD should do the same: the VMM traps the VM's Command Buffer
> Base Address register, so that when the guest kernel allocates a
> command buffer and programs that register, the VMM captures the
> guest PA and size. Then, the VMM allocates a vCMDQ object (for this
> command buffer), forwarding its buffer address and size via @addr
> and @length to the host kernel. The kernel should then translate
> the guest PA to a host PA to program the HW.
> 
> We can see that the Head/Tail registers are in a different MMIO
> page (offset by two 4K pages), which is very much like NVIDIA CMDQV,
> where the VMM can mmap that MMIO page of the Head/Tail registers
> for the guest OS to directly control the HW (i.e. the VMM doesn't
> trap these two registers).
> 
> When the guest OS wants to issue a new command, the guest kernel
> can just fill the guest command buffer at the entry that the Head
> register points to, and program the Tail register (backed by the
> mmap'd MMIO page); the HW will then read the programmed physical
> addresses from the Head entry to the Tail entry in the guest
> command buffer.


Right.
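
To make that flow concrete on the VMM side, here is a rough sketch of the trap
handler for the Command Buffer Base Address register, using the MMIO offsets
quoted above from [1]. The helpers, the mask, and struct guest_iommu
(decode_cmd_buf_len(), CMD_BUF_BASE_ADDR_MASK, vcmdq_alloc()) are placeholders
of mine, not real interfaces:

/* MMIO offsets, per the AMD IOMMU spec [1] quoted above */
#define CMD_BUF_BASE_OFFSET	0x0008	/* Command Buffer Base Address Register */
#define CMD_BUF_HEAD_OFFSET	0x2000	/* Command Buffer Head Pointer Register */
#define CMD_BUF_TAIL_OFFSET	0x2008	/* Command Buffer Tail Pointer Register */

/*
 * Sketch of the VMM trap on a guest write to the Command Buffer Base
 * Address register: capture the guest PA and size, then forward them
 * to the host kernel as @addr/@length of a vCMDQ object. The Head/Tail
 * registers sit in a separate 4K MMIO page (0x2000/0x2008), so that
 * page can be mmap'd straight into the guest instead of being trapped.
 */
static void trap_cmd_buf_base_write(struct guest_iommu *viommu, uint64_t val)
{
	uint64_t gpa = val & CMD_BUF_BASE_ADDR_MASK;	/* placeholder mask */
	uint64_t len = decode_cmd_buf_len(val);		/* placeholder decoder */

	/* placeholder wrapper around the vCMDQ alloc ioctl (@addr/@length) */
	vcmdq_alloc(viommu->iommufd, viommu->viommu_id, gpa, len);
}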


> 
>>> @@ -170,3 +170,97 @@ int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd)
>>>  	iommufd_put_object(ucmd->ictx, &viommu->obj);
>>>  	return rc;
>>>  }
>>> +
>>> +void iommufd_vcmdq_destroy(struct iommufd_object *obj)
>>> +{
>>
>> I didn't understand the destroy flow in general. Can you please help me
>> understand:
>>
>> Is the VMM expected to track all buffers and call this interface? Or will
>> iommufd take care of it? What happens if the VM crashes?
> 
> In the normal flow, the VMM gets a vCMDQ object ID for each vCMDQ
> object it allocates, so it should track all the IDs and release
> them when the VM shuts down.
> 
> The iommufd core does track all the objects that belong to an
> iommufd context (ictx) and automatically releases them. But it
> can't resolve certain dependencies on other FDs, e.g. vEVENTQ and
> FAULT QUEUE return another FD that user space listens to, and that
> FD must be closed properly to destroy the QUEUE object.

Got it.
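
As a user-space sketch of that normal path: assuming vCMDQ objects are released
like other iommufd objects, the VMM just replays each saved out_vcmdq_id into
the generic IOMMU_DESTROY ioctl before closing the iommufd (error handling
omitted):

#include <sys/ioctl.h>
#include <linux/iommufd.h>

/*
 * Sketch: explicitly release a vCMDQ object by the ID returned at alloc
 * time. If the VMM never gets here (e.g. it crashes), closing the iommufd
 * releases the ictx and the core drops the objects it still tracks.
 */
static int vcmdq_destroy(int iommufd, __u32 vcmdq_id)
{
	struct iommu_destroy destroy = {
		.size = sizeof(destroy),
		.id = vcmdq_id,
	};

	return ioctl(iommufd, IOMMU_DESTROY, &destroy);
}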

> 
>>> +	/* The underlying physical pages must be pinned in the IOAS */
>>> +	rc = iopt_pin_pages(&viommu->hwpt->ioas->iopt, cmd->addr, cmd->length,
>>> +			    pages, 0);
>>
>> Why do we need this? is it not pinned already as part of vfio binding?
> 
> I think this could be clearer:
> 	/*
> 	 * The underlying physical pages must be pinned to prevent them from
> 	 * being unmapped (via IOMMUFD_CMD_IOAS_UNMAP) during the life cycle
> 	 * of the vCMDQ object.
> 	 */

Understood.
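
Just to restate the lifecycle in code (my own sketch, not quoting the patch):
the pin taken in the alloc path above has to be paired with an unpin when the
vCMDQ object goes away, e.g. in iommufd_vcmdq_destroy(). iopt_unpin_pages() and
the struct iommufd_vcmdq fields below are assumptions on my side:

/*
 * Lifecycle sketch only -- assumes an iopt_unpin_pages() counterpart to
 * the iopt_pin_pages() call above, and a struct iommufd_vcmdq that
 * remembers the viommu plus the pinned @addr/@length range.
 */
void iommufd_vcmdq_destroy(struct iommufd_object *obj)
{
	struct iommufd_vcmdq *vcmdq =
		container_of(obj, struct iommufd_vcmdq, obj);

	/* ... tear down the driver/HW state for the queue first ... */

	/* Drop the pin so the IOAS range can be unmapped again */
	iopt_unpin_pages(&vcmdq->viommu->hwpt->ioas->iopt,
			 vcmdq->addr, vcmdq->length);
}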

Thanks
-Vasant




