> On Jun 2, 2025, at 7:48 PM, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> 
> Exempt nested EPT shadow page tables from the CR0.WP=0 handling of
> supervisor writes, as EPT doesn't have a U/S bit and isn't affected by
> CR0.WP (or CR4.SMEP in the exception to the exception).
> 
> Opportunistically refresh the comment to explain what KVM is doing, as
> the only record of why KVM shoves in WRITE and drops USER is buried in
> years-old changelogs.
> 
> Cc: Jon Kohler <jon@xxxxxxxxxxx>
> Cc: Sergey Dyasli <sergey.dyasli@xxxxxxxxxxx>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/paging_tmpl.h | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 68e323568e95..ed762bb4b007 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -804,9 +804,12 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	if (r != RET_PF_CONTINUE)
>  		return r;
>  
> +#if PTTYPE != PTTYPE_EPT
>  	/*
> -	 * Do not change pte_access if the pfn is a mmio page, otherwise
> -	 * we will cache the incorrect access into mmio spte.
> +	 * Treat the guest PTE protections as writable, supervisor-only if this
> +	 * is a supervisor write fault and CR0.WP=0 (supervisor accesses ignore
> +	 * PTE.W if CR0.WP=0).  Don't change the access type for emulated MMIO,
> +	 * otherwise KVM will cache incorrect access information in the SPTE.
>  	 */
>  	if (fault->write && !(walker.pte_access & ACC_WRITE_MASK) &&
>  	    !is_cr0_wp(vcpu->arch.mmu) && !fault->user && fault->slot) {
> @@ -822,6 +825,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  		if (is_cr4_smep(vcpu->arch.mmu))
>  			walker.pte_access &= ~ACC_EXEC_MASK;
>  	}
> +#endif
>  
>  	r = RET_PF_RETRY;
>  	write_lock(&vcpu->kvm->mmu_lock);
> 
> base-commit: 3f7b307757ecffc1c18ede9ee3cf9ce8101f3cc9
> -- 
> 2.49.0.1204.g71687c7c1d-goog

Thanks, I'll give it a go, but LGTM in general.

Reviewed-by: Jon Kohler <jon@xxxxxxxxxxx>