Hello,

When destroying a fully-mapped 128G VM abruptly, the following scheduler
warning is observed:

sched: CPU 0 need_resched set for > 100018840 ns (100 ticks) without schedule
CPU: 0 UID: 0 PID: 9617 Comm: kvm_page_table_ Tainted: G O 6.16.0-smp-DEV #3 NONE
Tainted: [O]=OOT_MODULE
Call trace:
 show_stack+0x20/0x38 (C)
 dump_stack_lvl+0x3c/0xb8
 dump_stack+0x18/0x30
 resched_latency_warn+0x7c/0x88
 sched_tick+0x1c4/0x268
 update_process_times+0xa8/0xd8
 tick_nohz_handler+0xc8/0x168
 __hrtimer_run_queues+0x11c/0x338
 hrtimer_interrupt+0x104/0x308
 arch_timer_handler_phys+0x40/0x58
 handle_percpu_devid_irq+0x8c/0x1b0
 generic_handle_domain_irq+0x48/0x78
 gic_handle_irq+0x1b8/0x408
 call_on_irq_stack+0x24/0x30
 do_interrupt_handler+0x54/0x78
 el1_interrupt+0x44/0x88
 el1h_64_irq_handler+0x18/0x28
 el1h_64_irq+0x84/0x88
 stage2_free_walker+0x30/0xa0 (P)
 __kvm_pgtable_walk+0x11c/0x258
 __kvm_pgtable_walk+0x180/0x258
 __kvm_pgtable_walk+0x180/0x258
 __kvm_pgtable_walk+0x180/0x258
 kvm_pgtable_walk+0xc4/0x140
 kvm_pgtable_stage2_destroy+0x5c/0xf0
 kvm_free_stage2_pgd+0x6c/0xe8
 kvm_uninit_stage2_mmu+0x24/0x48
 kvm_arch_flush_shadow_all+0x80/0xa0
 kvm_mmu_notifier_release+0x38/0x78
 __mmu_notifier_release+0x15c/0x250
 exit_mmap+0x68/0x400
 __mmput+0x38/0x1c8
 mmput+0x30/0x68
 exit_mm+0xd4/0x198
 do_exit+0x1a4/0xb00
 do_group_exit+0x8c/0x120
 get_signal+0x6d4/0x778
 do_signal+0x90/0x718
 do_notify_resume+0x70/0x170
 el0_svc+0x74/0xd8
 el0t_64_sync_handler+0x60/0xc8
 el0t_64_sync+0x1b0/0x1b8

The host kernel was running with CONFIG_PREEMPT_NONE=y, and since the
page-table walk takes a considerable amount of time for a VM with such a
large number of PTEs mapped, the warning is seen.

To mitigate this, split the walk into smaller ranges and call
cond_resched() between each range. Since the path is executed during VM
destruction, after the page-table structure is unlinked from the KVM MMU,
relying on cond_resched_rwlock_write() isn't necessary.
Patch-1 splits the kvm_pgtable_stage2_destroy() function into separate
'walk' and 'free PGD' parts. Patch-2 leverages the split and performs the
walk periodically over smaller ranges, calling cond_resched() between
them.

v2:
Thanks, Oliver, for the suggestions.
 - Apply the rescheduling to pKVM as well.
 - Use VTCR_EL2_IPA(pgt->mmu->vtcr) to get the full possible range of
   the VM for pKVM.
 - Dereference the pgd using rcu_dereference_raw() in
   kvm_pgtable_stage2_destroy_pgd() instead of using a null walker.
 - Rename/restructure the functions to avoid duplication.

v1: https://lore.kernel.org/all/20250724235144.2428795-1-rananta@xxxxxxxxxx/

Thank you.
Raghavendra

Raghavendra Rao Ananta (2):
  KVM: arm64: Split kvm_pgtable_stage2_destroy()
  KVM: arm64: Reschedule as needed when destroying the stage-2
    page-tables

 arch/arm64/include/asm/kvm_pgtable.h | 30 +++++++++++++++++++++++
 arch/arm64/include/asm/kvm_pkvm.h    |  4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 25 +++++++++++++++----
 arch/arm64/kvm/mmu.c                 | 36 ++++++++++++++++++++++++++--
 arch/arm64/kvm/pkvm.c                | 11 +++++++--
 5 files changed, 97 insertions(+), 9 deletions(-)


base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
-- 
2.51.0.rc2.233.g662b1ed5c5-goog