On Wed, May 21, 2025, Yan Zhao wrote:
> On Fri, May 16, 2025 at 02:35:38PM -0700, Sean Christopherson wrote:
> > @@ -108,15 +105,24 @@ static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
> >  int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
> >  			 int *nr_entries_reset)
> >  {
> > +	/*
> > +	 * To minimize mmu_lock contention, batch resets for harvested entries
> > +	 * whose gfns are in the same slot, and are within N frame numbers of
> > +	 * each other, where N is the number of bits in an unsigned long.  For
> Suppose N is 64,
>
> > +	 * simplicity, process the current set of entries when the next entry
> > +	 * can't be included in the batch.
> > +	 *
> > +	 * Track the current batch slot, the gfn offset into the slot for the
> > +	 * batch, and the bitmask of gfns that need to be reset (relative to
> > +	 * offset).  Note, the offset may be adjusted backwards, e.g. so that
> > +	 * a sequence of gfns X, X-1, ... X-N can be batched.
> X-N can't be batched, right?

Hah!  Yeah, off-by-one error.
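
FWIW, here's a rough standalone sketch of the batching scheme the comment
describes, with the off-by-one fixed.  struct gfn_batch and gfn_batch_add()
are invented for illustration, they aren't in the patch:

	#include <stdbool.h>
	#include <stdint.h>

	#define BITS_PER_LONG (8 * sizeof(unsigned long))

	typedef uint64_t gfn_t;

	struct gfn_batch {
		int slot;		/* memslot id of the batch, -1 if empty */
		gfn_t offset;		/* gfn that bit 0 of @mask refers to */
		unsigned long mask;	/* gfns to reset, relative to @offset */
	};

	/*
	 * Try to fold @gfn into the current batch.  Returns false if it
	 * can't be included, in which case the caller flushes the batch
	 * under mmu_lock and starts a new one at @gfn.
	 */
	static bool gfn_batch_add(struct gfn_batch *b, int slot, gfn_t gfn)
	{
		if (b->slot < 0) {
			/* Empty batch, start it at @gfn. */
			b->slot = slot;
			b->offset = gfn;
			b->mask = 1;
			return true;
		}

		if (slot != b->slot)
			return false;

		/* In range at or above the offset: just set the bit. */
		if (gfn >= b->offset && gfn - b->offset < BITS_PER_LONG) {
			b->mask |= 1ul << (gfn - b->offset);
			return true;
		}

		/*
		 * Below the offset: shift the mask up and move the offset
		 * backwards, provided no already-set bits fall off the top.
		 * This is the off-by-one: a descending run X, X-1, ...
		 * X-(N-1) batches, but X-N doesn't, as it'd need N+1 bits.
		 */
		if (gfn < b->offset) {
			gfn_t shift = b->offset - gfn;

			if (shift < BITS_PER_LONG &&
			    !(b->mask >> (BITS_PER_LONG - shift))) {
				b->mask = (b->mask << shift) | 1;
				b->offset = gfn;
				return true;
			}
		}
		return false;
	}

Flushing whenever the shift would drop a set bit keeps the bookkeeping to a
single slot + offset + unsigned long, at the cost of splitting batches when
gfns are spread more than BITS_PER_LONG frames apart.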