On Thu, May 01, 2025 at 01:44:28PM +0100, Marc Zyngier wrote:
> On Thu, 01 May 2025 12:15:52 +0100,
> Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > 
> > > > + */
> > > > +int kvm_trylock_all_vcpus(struct kvm *kvm)
> > > > +{
> > > > +	struct kvm_vcpu *vcpu;
> > > > +	unsigned long i, j;
> > > > +
> > > > +	kvm_for_each_vcpu(i, vcpu, kvm)
> > > > +		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
> > This one includes an assertion that kvm->lock is actually held.
> 
> Ah, cunning. Thanks.
> 
> > That said, I'm not at all sure what the purpose of all this trylock
> > stuff is here.
> > 
> > Can someone explain? Last time I asked someone said something about
> > multiple VMs, but I don't know enough about kvm to know what that means.
> 
> Multiple VMs? That'd be real fun. Not.
> 
> > Are those vcpu->mutex another class for other VMs? Or what gives?
> 
> Nah. This is firmly single VM.
> 
> The purpose of this contraption is that there are some rare cases
> where we need to make sure that if we update some global state, all
> the vcpus of a VM need to see, or none of them.
> 
> For these cases, the guarantee comes from luserspace, and it gives the
> pinky promise that none of the vcpus are running at that point. But
> being of a suspicious nature, we assert that this is true by trying to
> take all the vcpu mutexes in one go. This will fail if a vcpu is
> running, as KVM itself takes the vcpu mutex before doing anything.
> 
> Similar requirement exists if we need to synthesise some state for
> userspace from all the individual vcpu states.

Ah, okay. Because x86 is simply doing mutex_lock() instead of
mutex_trylock() -- which would end up waiting for this activity to
subside, I suppose. Hence the use of the killable variant, for when
they get tired of waiting.

If all the architectures are basically doing the same thing, it might
make sense to unify this particular behaviour. But what do I know.
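
(For readers outside KVM: the all-or-nothing scheme Marc describes boils
down to a trylock over every vcpu mutex, with an unwind if any of them is
already held. Below is a minimal user-space sketch of that pattern using
pthreads; the names trylock_all_vcpus, unlock_all_vcpus and NR_VCPUS are
made up for illustration and this is not the actual kernel code from the
patch, which is only partially quoted above.)

    /*
     * User-space analogue of the "take all vcpu mutexes or none" pattern:
     * a running vcpu would hold its own mutex, so a failed trylock means
     * we back out completely instead of touching the shared state.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define NR_VCPUS 4

    static pthread_mutex_t vcpu_mutex[NR_VCPUS] = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    };

    /* Returns 0 with all mutexes held, or -1 holding none of them. */
    static int trylock_all_vcpus(void)
    {
            int i, j;

            for (i = 0; i < NR_VCPUS; i++) {
                    if (pthread_mutex_trylock(&vcpu_mutex[i])) {
                            /* Someone holds this one: drop what we got. */
                            for (j = 0; j < i; j++)
                                    pthread_mutex_unlock(&vcpu_mutex[j]);
                            return -1;
                    }
            }
            return 0;
    }

    static void unlock_all_vcpus(void)
    {
            int i;

            for (i = 0; i < NR_VCPUS; i++)
                    pthread_mutex_unlock(&vcpu_mutex[i]);
    }

    int main(void)
    {
            if (trylock_all_vcpus()) {
                    fprintf(stderr, "a vcpu is running, bailing out\n");
                    return 1;
            }
            /* ... update the state that all vcpus must agree on ... */
            unlock_all_vcpus();
            return 0;
    }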