Fix issues with dirty ring harvesting where KVM doesn't bound the
processing of entries in any way, which allows userspace to keep KVM in
a tight loop indefinitely.

E.g.

  struct kvm_dirty_gfn *dirty_gfns = vcpu_map_dirty_ring(vcpu);

  if (fork()) {
          int r;

          for (;;) {
                  r = kvm_vm_reset_dirty_ring(vcpu->vm);
                  if (r)
                          printf("RESET %d dirty ring entries\n", r);
          }
  } else {
          int i;

          for (i = 0; i < test_dirty_ring_count; i++) {
                  dirty_gfns[i].slot = TEST_MEM_SLOT_INDEX;
                  dirty_gfns[i].offset = (i * 64) % host_num_pages;
          }

          for (;;) {
                  for (i = 0; i < test_dirty_ring_count; i++)
                          WRITE_ONCE(dirty_gfns[i].flags,
                                     KVM_DIRTY_GFN_F_RESET);
          }
  }

Patches 1-3 address that class of bugs.  Patches 4-6 are cleanups.

v3:
 - Fix typos (I apparently can't spell opportunistically to save my
   life). [Binbin, James]
 - Clean up stale comments. [Binbin]
 - Collect reviews. [James, Pankaj]
 - Add a lockdep assertion on slots_lock, along with a comment. [James]

v2:
 - https://lore.kernel.org/all/20250508141012.1411952-1-seanjc@xxxxxxxxxx
 - Expand on comments in dirty ring harvesting code. [Yan]

v1: https://lore.kernel.org/all/20250111010409.1252942-1-seanjc@xxxxxxxxxx

Sean Christopherson (6):
  KVM: Bound the number of dirty ring entries in a single reset at
    INT_MAX
  KVM: Bail from the dirty ring reset flow if a signal is pending
  KVM: Conditionally reschedule when resetting the dirty ring
  KVM: Check for empty mask of harvested dirty ring entries in caller
  KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
    resets
  KVM: Assert that slots_lock is held when resetting per-vCPU dirty
    rings

 include/linux/kvm_dirty_ring.h |  18 ++----
 virt/kvm/dirty_ring.c          | 111 +++++++++++++++++++++++----------
 virt/kvm/kvm_main.c            |   9 ++-
 3 files changed, 89 insertions(+), 49 deletions(-)


base-commit: 7ef51a41466bc846ad794d505e2e34ff97157f7f
--
2.49.0.1112.g889b7c5bd8-goog