When creating SEV-SNP guests with a large amount of memory (940GB or
greater), the host experiences a soft CPU lockup while setting the
per-page memory attributes on the whole range of guest memory. The
underlying issue is that setting the memory attributes, which is
implemented using an XArray, is a time-consuming operation (e.g. a
1.9TB guest takes over 30 seconds to set the attributes).

Fix the lockup by modifying kvm_vm_ioctl_set_mem_attributes() so that
it sets the attributes on, at most, a range of 512GB at a time, which
avoids holding kvm->slots_lock for too long (an illustrative sketch of
the batching approach is appended after the diffstat).

Apart from the lockup, the XArray-based implementation also causes a
delay early in the boot of SEV-SNP/TDX guests - this fix does not
address that.

As it happens, the slowness of setting the attributes was brought up
by Michael Roth in the review of Ackerley Tng's series to add 1G page
support for guest_memfd [1], where a Maple Tree implementation is
proposed to track shareability. Michael suggested that doing the same
for KVM memory attributes would also be useful (it should avoid the
soft lockup while also taking less CPU time in general to populate).
If that is implemented in the future, it should address this lockup,
but I think there is benefit in fixing the lockup now with a targeted
fix.

[1] https://lore.kernel.org/all/20250529054227.hh2f4jmyqf6igd3i@xxxxxxx

Tested with VMs up to 1900GB in size (the limit of the hardware
available to me).

The functionality was introduced in v6.8, but I tagged the fix as only
needing backporting as far as linux-6.12.y (it applies cleanly).

Based on tag: kvm-6.16-1

Liam Merwick (3):
  KVM: Batch setting of per-page memory attributes to avoid soft lockup
  KVM: Add trace_kvm_vm_set_mem_attributes()
  KVM: fix typo in kvm_vm_set_mem_attributes() comment

 include/trace/events/kvm.h | 33 +++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c        | 43 ++++++++++++++++++++++++++++++++------
 2 files changed, 70 insertions(+), 6 deletions(-)

--
2.47.1
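
For illustration only, a minimal sketch of the batching approach
described above - not the actual patch. It assumes the existing
kvm_vm_set_mem_attributes() helper (which takes and releases
kvm->slots_lock internally); the batch-size constant name is made up,
and validation of the ioctl arguments is omitted:

/*
 * Illustrative sketch - KVM_SET_MEM_ATTR_BATCH_PAGES is a
 * hypothetical name, not taken from the actual patch.
 */
#define KVM_SET_MEM_ATTR_BATCH_PAGES	((512ULL << 30) >> PAGE_SHIFT)

static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm,
					   struct kvm_memory_attributes *attrs)
{
	gfn_t start = attrs->address >> PAGE_SHIFT;
	gfn_t end = (attrs->address + attrs->size) >> PAGE_SHIFT;

	/* Validation of flags/alignment/overflow omitted for brevity. */

	while (start < end) {
		gfn_t batch_end = min(end, start + KVM_SET_MEM_ATTR_BATCH_PAGES);
		int r;

		/*
		 * kvm_vm_set_mem_attributes() acquires and drops
		 * kvm->slots_lock internally, so capping each call at
		 * 512GB bounds how long the lock is held and breaks the
		 * XArray population into chunks short enough to avoid
		 * tripping the soft lockup watchdog.
		 */
		r = kvm_vm_set_mem_attributes(kvm, start, batch_end,
					      attrs->attributes);
		if (r)
			return r;

		start = batch_end;
	}

	return 0;
}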