On Thu, May 01, 2025 at 09:24:11AM +0100, Marc Zyngier wrote:
> nit: in keeping with the existing arm64 patches, please write the
> subject as "KVM: arm64: Use ..."
> 
> On Wed, 30 Apr 2025 21:30:10 +0100,
> Maxim Levitsky <mlevitsk@xxxxxxxxxx> wrote:
> 
> [...]
> 
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 68fec8c95fee..d31f42a71bdc 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1914,49 +1914,6 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
> >  	}
> >  }
> >  
> > -/* unlocks vcpus from @vcpu_lock_idx and smaller */
> > -static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
> > -{
> > -	struct kvm_vcpu *tmp_vcpu;
> > -
> > -	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
> > -		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
> > -		mutex_unlock(&tmp_vcpu->mutex);
> > -	}
> > -}
> > -
> > -void unlock_all_vcpus(struct kvm *kvm)
> > -{
> > -	lockdep_assert_held(&kvm->lock);
> 
> Note this assertion...
> 
> > -
> > -	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
> > -}
> > -
> > -/* Returns true if all vcpus were locked, false otherwise */
> > -bool lock_all_vcpus(struct kvm *kvm)
> > -{
> > -	struct kvm_vcpu *tmp_vcpu;
> > -	unsigned long c;
> > -
> > -	lockdep_assert_held(&kvm->lock);
> 
> and this one...
> 
> > -
> > -	/*
> > -	 * Any time a vcpu is in an ioctl (including running), the
> > -	 * core KVM code tries to grab the vcpu->mutex.
> > -	 *
> > -	 * By grabbing the vcpu->mutex of all VCPUs we ensure that no
> > -	 * other VCPUs can fiddle with the state while we access it.
> > -	 */
> > -	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
> > -		if (!mutex_trylock(&tmp_vcpu->mutex)) {
> > -			unlock_vcpus(kvm, c - 1);
> > -			return false;
> > -		}
> > -	}
> > -
> > -	return true;
> > -}
> > -
> >  static unsigned long nvhe_percpu_size(void)
> >  {
> >  	return (unsigned long)CHOOSE_NVHE_SYM(__per_cpu_end) -
> 
> [...]
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 69782df3617f..834f08dfa24c 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -1368,6 +1368,40 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> >  	return 0;
> >  }
> >  
> > +/*
> > + * Try to lock all of the VM's vCPUs.
> > + * Assumes that the kvm->lock is held.
> 
> Assuming is not enough. These assertions have caught a number of bugs,
> and I'm not prepared to drop them.
> 
> > + */
> > +int kvm_trylock_all_vcpus(struct kvm *kvm)
> > +{
> > +	struct kvm_vcpu *vcpu;
> > +	unsigned long i, j;
> > +
> > +	kvm_for_each_vcpu(i, vcpu, kvm)
> > +		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))

This one includes an assertion that kvm->lock is actually held.

That said, I'm not at all sure what the purpose of all this trylock
stuff is here. Can someone explain? Last time I asked, someone said
something about multiple VMs, but I don't know enough about KVM to know
what that means. Are those vcpu->mutex locks in another class for other
VMs? Or what gives?