On Thu, 01 May 2025 12:15:52 +0100,
Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> > > + */
> > > +int kvm_trylock_all_vcpus(struct kvm *kvm)
> > > +{
> > > +	struct kvm_vcpu *vcpu;
> > > +	unsigned long i, j;
> > > +
> > > +	kvm_for_each_vcpu(i, vcpu, kvm)
> > > +		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
>
> This one includes an assertion that kvm->lock is actually held.

Ah, cunning. Thanks.

> That said, I'm not at all sure what the purpose of all this trylock
> stuff is here.
>
> Can someone explain? Last time I asked someone said something about
> multiple VMs, but I don't know enough about kvm to know what that
> means.

Multiple VMs? That'd be real fun. Not.

> Are those vcpu->mutex another class for other VMs? Or what gives?

Nah. This is firmly single-VM.

The purpose of this contraption is that there are some rare cases
where, when we update some global state, we need to make sure that
either all the vcpus of a VM see it, or none of them do.

For these cases, the guarantee comes from luserspace, which
pinky-promises that none of the vcpus are running at that point. But
being of a suspicious nature, we assert that this is actually true by
trying to take all the vcpu mutexes in one go. This will fail if any
vcpu is running, as KVM itself takes the vcpu mutex before doing
anything.

A similar requirement exists when we need to synthesise some state
for userspace from all the individual vcpu states.

If the global locking fails, we return to userspace with a middle
finger indication, and all is well.

Of course, this is pretty expensive, which is why it is only done
during setup phases, when the VMM configures the guest.

The splat this is trying to address is that with more than 48 vcpus
in a single VM (48 being lockdep's MAX_LOCK_DEPTH), lockdep gets
upset at seeing up to 512 locks of the same class being taken at
once.
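To make this concrete, here is a sketch of what the complete helper
could plausibly look like, extrapolated from the snippet quoted at
the top (the unwind loop and the -EINTR return value are my guesses,
not necessarily what the actual patch does):

int kvm_trylock_all_vcpus(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	unsigned long i, j;

	/*
	 * mutex_trylock_nest_lock() also asserts that kvm->lock is
	 * held, as Peter points out above.
	 */
	kvm_for_each_vcpu(i, vcpu, kvm)
		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
			goto out_unlock;
	return 0;

out_unlock:
	/* A vcpu was running: drop whatever we managed to take. */
	kvm_for_each_vcpu(j, vcpu, kvm) {
		if (j == i)
			break;
		mutex_unlock(&vcpu->mutex);
	}
	return -EINTR;
}

The nest_lock annotation is the bit that placates lockdep: as I
understand it, instead of pushing up to 512 entries onto the 48-entry
held-lock stack, it keeps a single kvm->lock-nested entry with a
reference count.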
Disclaimer: all the above is completely arm64-specific, and I didn't
even try to understand what other architectures are doing.

HTH,

	M.

-- 
Without deviation from the norm, progress is not possible.