<dan.j.williams@xxxxxxxxx> writes:

> Aneesh Kumar K.V (Arm) wrote:
>> This patch series implements support for Device Assignment in the ARM CCA
>> architecture. The code changes are based on the Alp12 specification
>> published here [1].
>>
>> The code builds on the TSM framework patches posted at [2]. We add
>> extensions to that framework so that TSM is now used in both the host
>> and the guest.
>>
>> A DA workflow can be summarized as follows:
>>
>> Host:
>> step 1.
>> echo ${DEVICE} > /sys/bus/pci/devices/${DEVICE}/driver/unbind
>> echo vfio-pci > /sys/bus/pci/devices/${DEVICE}/driver_override
>> echo ${DEVICE} > /sys/bus/pci/drivers_probe
>>
>> step 2.
>> echo 1 > /sys/bus/pci/devices/$DEVICE/tsm/connect
>
> Just for my own understanding... presumably there is no ordering
> constraint for ARM CCA between step 1 and step 2, right? I.e., the
> connect state is independent of the bind state.
>
> In the v4 PCI/TSM scheme the connect command is now:
>
> echo $tsm_dev > /sys/bus/pci/devices/$DEVICE/tsm/connect
>
>> Now in the guest we follow the below steps
>
> I assume a significant amount of kvmtool magic happens here to get the
> TDI into a "bind capable" state. Can you share that command?

lkvm run --realm -c 2 -m 256 -k /kselftest/Image -p "$KERNEL_PARAMS" \
    -d ./rootfs-guest.ext2 --iommufd-vdevice \
    --vfio-pci $DEVICE1 --vfio-pci $DEVICE2

> I had been assuming that everyone was prototyping with QEMU. Not a
> problem per se, but the memory management for shared device assignment /
> bounce buffering has had quite a bit of work on the QEMU side, so I am
> just curious about the difference in approach here. Like, does kvmtool
> support operating the device in shared mode, with bounce buffering and
> page conversion (shared <=> private)? In any event, happy to see
> multiple simultaneous consumers of this new kernel infrastructure.

-aneesh
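
P.S. For completeness, the host-side flow from the cover letter as a single
script. This is a minimal sketch: the DEVICE value below is only a
placeholder, and the commented-out connect variant reflects the v4 PCI/TSM
scheme mentioned above.

    #!/bin/sh
    # Host-side DA setup, as described in the cover letter.
    # Assumption: DEVICE holds the PCI address of the device to assign;
    # the value here is only an example.
    DEVICE=0000:01:00.0

    # step 1: detach the device from its native driver and bind vfio-pci
    echo ${DEVICE} > /sys/bus/pci/devices/${DEVICE}/driver/unbind
    echo vfio-pci > /sys/bus/pci/devices/${DEVICE}/driver_override
    echo ${DEVICE} > /sys/bus/pci/drivers_probe

    # step 2: connect the device to the TSM
    echo 1 > /sys/bus/pci/devices/${DEVICE}/tsm/connect
    # With the v4 PCI/TSM scheme, the connect step instead takes the TSM
    # device name:
    #   echo $tsm_dev > /sys/bus/pci/devices/${DEVICE}/tsm/connect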