Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups

On Tue, May 20, 2025, Peter Xu wrote:
> On Fri, May 16, 2025 at 02:35:34PM -0700, Sean Christopherson wrote:
> > Sean Christopherson (6):
> >   KVM: Bound the number of dirty ring entries in a single reset at
> >     INT_MAX
> >   KVM: Bail from the dirty ring reset flow if a signal is pending
> >   KVM: Conditionally reschedule when resetting the dirty ring
> >   KVM: Check for empty mask of harvested dirty ring entries in caller
> >   KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
> >     resets
> >   KVM: Assert that slots_lock is held when resetting per-vCPU dirty
> >     rings
> 
> For the last one, I'd think it's majorly because of the memslot accesses
> (or CONFIG_LOCKDEP=y should yell already on resets?).  

No?  If KVM only needed to ensure stable memslot accesses, then SRCU would suffice.
It sounds like holding slots_lock may have been somewhat unintentional, but the
reason KVM can't switch to SRCU is that doing so would break ordering, not because
slots_lock is needed to protect the memslot accesses.

> The "serialization of concurrent RESETs" part could be a good side effect.
> After all, the dirty rings rely a lot on the userspace to do right things..
> for example, the userspace better also remember to reset before any slot
> changes, or it's possible to collect a dirty pfn with a slot index that was
> already removed and reused with a new one..
> 
> Maybe we could switch the sentences there in the comment of last patch, but
> not a huge deal.
> 
> Reviewed-by: Peter Xu <peterx@xxxxxxxxxx>
> 
> Thanks!
> 
> -- 
> Peter Xu
> 
