On Thu, Aug 7, 2025 at 6:56 PM Eugenio Perez Martin <eperezma@xxxxxxxxxx> wrote:
>
> On Tue, Jun 10, 2025 at 10:36 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> >
> > On Mon, Jun 9, 2025 at 2:11 PM Eugenio Perez Martin <eperezma@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Jun 9, 2025 at 3:50 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > >
> > > > On Mon, Jun 9, 2025 at 9:41 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Fri, Jun 6, 2025 at 7:50 PM Eugenio Pérez <eperezma@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > This allows defining all the functions that check the API version
> > > > > > set by the userland device.
> > > > > >
> > > > > > Signed-off-by: Eugenio Pérez <eperezma@xxxxxxxxxx>
> > > > >
> > > > > It might be worth clarifying how it works.
> > > > >
> > > > > For example,
> > > > >
> > > > > 1) would VDUSE behave differently, or is it just some new ioctls?
> > >
> > > I'd like to test more in-depth, but a device can just bump the version
> > > ID and then implement the replies to the vduse messages. No need to
> > > implement new ioctls. If the VDUSE device sets 0 in either the number
> > > of ASIDs or vq groups, the kernel assumes 1.
> >
> > Right, this is the way we use now and I think maybe we can document
> > this somewhere.
> >
> > > But you have a very good point here, I think it is wise to evaluate
> > > the shortcut of these messages in the VDUSE kernel module. If a VDUSE
> > > device only has one vq group and one ASID, it can always return group
> > > 0 and asid 0 for everything, and fail every attempt to set asid != 0.
> >
> > Yes, and vhost-vDPA needs to guard against the misconfiguration.
> >
> > > This way, the update is transparent for the VDUSE device, and future
> > > devices do not need to implement the reply of these. What do you
> > > think?
> >
> > This should work.
> >
> > > > > 2) If VDUSE behaves differently, do we need an ioctl to set the API
> > > > > version for backward compatibility?
> > > >
> > > > Spoke too fast, there's a VDUSE_SET_API_VERSION actually.
> > > >
> > > > I think we need to consider whether it complicates migration
> > > > compatibility or not.
> > >
> > > Do you mean migration as "increase the VDUSE version number", not "VM
> > > live migration from vduse version 0 to vduse version 1"? The second
> > > should not have any problem, but I haven't tested it.
> >
> > I mean if we bump the version, we can't migrate from version 1 to
> > version 0. Or we can offload this to the management layer (do we need
> > to extend the vdpa tool for this)?
>
> I just noticed I left this unreplied. But I still do not get what
> migrate means here :).
>
> If migrate means running current VDUSE devices on a kernel with this
> series applied, these devices don't set the V1 API, so they have one vq
> group and one asid. I'm actually testing this with my libfuse+VDUSE
> modifications that don't use V1 at all. Adding this explanation to the
> patch, as it is a very good point indeed.

Right.

> If it means migrating a guest from using a V1 VDUSE device to a V0
> device, "it should work", as it is just a backend implementation
> detail.

For example, src is a VDUSE device with multiqueue support (v1) but dest
doesn't have this support (v0). I think qemu should fail to launch on
dest.

> If we migrate from or to a vdpa device backed by hardware, for
> example, one of the devices does not even have the concept of a VDUSE
> API version.
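
Just to make the single vq group / single ASID shortcut above concrete,
something like the sketch below (completely untested, and the
ngroups/nas fields and the helper names are only placeholders, not
necessarily what the series defines):

/*
 * Untested sketch: ngroups/nas and the vduse_dev_*() message helpers
 * are placeholder names.
 */

/* On device creation, a V0 device leaves these at 0: treat that as 1. */
static void vduse_fixup_config(struct vduse_dev_config *config)
{
        if (config->ngroups == 0)
                config->ngroups = 1;
        if (config->nas == 0)
                config->nas = 1;
}

static u32 vduse_vdpa_get_vq_group(struct vdpa_device *vdpa, u16 idx)
{
        struct vduse_dev *dev = vdpa_to_vduse(vdpa);

        /* Single vq group: everything is in group 0, don't ask userspace. */
        if (dev->ngroups == 1)
                return 0;

        return vduse_dev_get_vq_group(dev, idx);
}

static int vduse_vdpa_set_group_asid(struct vdpa_device *vdpa,
                                     unsigned int group, unsigned int asid)
{
        struct vduse_dev *dev = vdpa_to_vduse(vdpa);

        /* Single ASID: only asid 0 is valid, reject everything else. */
        if (dev->nas == 1)
                return asid ? -EINVAL : 0;

        return vduse_dev_set_group_asid(dev, group, asid);
}

This way a V0 device never sees any of the new messages, which keeps the
change transparent for it.
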
>
> In the case of net, it does not work at the moment because the only
> way to set features like mq is through the shadow CVQ.

I think you mean qemu should fail; I'm not sure this is friendly to
libvirt.

> If a VDUSE net device implements, let's say, an admin vq, something
> similar to PF and VF, and a dirty bitmap, I guess it should be
> possible. Maybe it is easier to play with this with block devices.

These look like other topics:

1) harden CVQ for VDUSE
2) support hardware dirty page tracking

Thanks