On Tue, Aug 12, 2025 at 5:39 PM David Woodhouse <dwmw2@xxxxxxxxxxxxx> wrote:
>
> On Mon, 2025-08-11 at 09:32 -0700, Sean Christopherson wrote:
> > On Fri, Aug 08, 2025, hugo lee wrote:
> > > On Fri, Aug 8, 2025, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > >
> > > > On Thu, Aug 07, 2025, hugo lee wrote:
> > > > > On Thu, Aug 7, 2025, Sean Christopherson wrote:
> > > > > >
> > > > > > On Wed, Aug 06, 2025, Yuguo Li wrote:
> > > > > > > When using split irqchip mode, the IOAPIC is handled by QEMU while the
> > > > > > > LAPIC is emulated by KVM. When the guest disables LINT0, KVM doesn't exit
> > > > > > > to QEMU for synchronization, leaving the IOAPIC unaware of the change.
> > > > > > > This may cause the vCPU to be kicked when external devices (e.g. the PIT)
> > > > > > > keep sending interrupts.
> > > > > >
> > > > > > I don't entirely follow what the problem is. Is the issue that QEMU injects
> > > > > > an IRQ that should have been blocked? Or is QEMU forcing the vCPU to exit
> > > > > > unnecessarily?
> > > > >
> > > > > The issue is that QEMU keeps injecting should-be-blocked IRQs (blocked by
> > > > > the guest, but QEMU doesn't know that). As a result, QEMU forces the vCPU
> > > > > to exit unnecessarily.
> > > >
> > > > Is the problem that the guest receives spurious IRQs, or that QEMU is forcing
> > > > unnecessary exits, i.e. hurting performance?
> > >
> > > QEMU is forcing unnecessary exits, which hurts performance because each exit
> > > tries to acquire the Big QEMU Lock in qemu_wait_io_event.
> >
> > Please elaborate on the performance impact and why the issue can't be solved
> > in QEMU.
>
> Is there a corresponding QEMU patch to use this new exit reason?

No, but the patch is done and will be submitted soon.
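
For context, the architectural detail being discussed is the mask bit (bit 16) of
the local APIC's LVT LINT0 register, which sits at offset 0x350 of the APIC
register page. The sketch below only illustrates that check, assuming userspace
has a cached copy of the guest's APIC page; the helper name and the cached-page
idea are made up for illustration and this is not the actual KVM or QEMU patch:

    /*
     * Illustrative sketch only. Bit 16 of an LVT entry is the mask bit,
     * and LVT LINT0 is at offset 0x350 in the APIC register page per the
     * Intel SDM. lint0_is_masked() is a hypothetical helper, not code
     * from the patch under review.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define APIC_LVT_LINT0   0x350          /* LVT LINT0 register offset */
    #define APIC_LVT_MASKED  (1u << 16)     /* LVT mask bit */

    static bool lint0_is_masked(const uint8_t *apic_page)
    {
        uint32_t lvt0;

        /* Read the 32-bit LVT LINT0 entry from the cached APIC page. */
        memcpy(&lvt0, apic_page + APIC_LVT_LINT0, sizeof(lvt0));
        return (lvt0 & APIC_LVT_MASKED) != 0;
    }

With the split irqchip, the userspace IOAPIC/PIT path gets no notification when
the guest flips this bit, which is why the thread discusses a new exit reason to
synchronize that state with QEMU.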