Re: [PATCH v2 4/5] KVM: arm64: Expose FEAT_RASv1p1 in a canonical manner

On Sat, Aug 09 2025, Marc Zyngier <maz@xxxxxxxxxx> wrote:

> On Thu, 07 Aug 2025 13:55:31 +0100,
> Joey Gouly <joey.gouly@xxxxxxx> wrote:
>> 
>> On Wed, Aug 06, 2025 at 05:56:14PM +0100, Marc Zyngier wrote:
>> > If we have RASv1p1 on the host, advertise it to the guest in the
>> > "canonical way", by setting ID_AA64PFR0_EL1 to V1P1, rather than
>> > the convoluted RAS+RAS_frac method.
>> > 
>> > Note that this also advertises FEAT_DoubleFault, which doesn't
>> > affect the guest at all, as only EL3 is concerned by this.
>> > 
>> > Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
>> > ---
>> >  arch/arm64/kvm/sys_regs.c | 12 ++++++++++++
>> >  1 file changed, 12 insertions(+)
>> > 
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index 1b4114790024e..66e5a733e9628 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -1800,6 +1800,18 @@ static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
>> >  	if (!vcpu_has_sve(vcpu))
>> >  		val &= ~ID_AA64PFR0_EL1_SVE_MASK;
>> >  
>> > +	/*
>> > +	 * Describe RASv1p1 in a canonical way -- ID_AA64PFR1_EL1.RAS_frac
>> > +	 * is cleared separately. Note that by advertising RASv1p1 here, we
>> 
>> Where is it cleared? __kvm_read_sanitised_id_reg() is where I would have
>> expected to see it:
>> 
>>     case SYS_ID_AA64PFR1_EL1:
>
> [...]
>
> Ah crap, it is only in the nested code that we get rid of it, nowhere
> else.
> Which means that non-nested VMs have already observed RAS_frac. What a
> mess. Then RAS_frac must be exposed as writable.
>
> The question is whether we want to allow migration between one flavour
> of RASv1p1 and the other.

I guess that boils down to which kind of observable change we want to
allow: bit-for-bit register contents, or only features? If only feature
stability is needed, a cross-flavour migration would be fine; OTOH, we
do not know how a guest deduces feature availability, and it might check
for one flavour but not the other (which is mostly a problem if it
re-checks during its lifetime).
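
For illustration, here is roughly what "checking for one flavour but not
the other" looks like from the guest side: the architecture advertises
FEAT_RASv1p1 either canonically (ID_AA64PFR0_EL1.RAS >= 0b0010) or via
the fractional field (RAS == 0b0001 together with
ID_AA64PFR1_EL1.RAS_frac == 0b0001). Just a sketch with a made-up helper
name; a real guest would of course use its own cpufeature code:

    /*
     * Sketch only: field offsets and values are from the ARM ARM, the
     * helper name is invented for this example.
     */
    #include <stdbool.h>
    #include <stdint.h>

    static bool guest_sees_rasv1p1(uint64_t pfr0, uint64_t pfr1)
    {
            unsigned int ras      = (pfr0 >> 28) & 0xf; /* ID_AA64PFR0_EL1.RAS */
            unsigned int ras_frac = (pfr1 >> 12) & 0xf; /* ID_AA64PFR1_EL1.RAS_frac */

            /* Canonical flavour: RAS itself reports v1p1 (or better). */
            if (ras >= 2)
                    return true;

            /* Fractional flavour: RASv1 plus RAS_frac == 1. */
            return ras == 1 && ras_frac == 1;
    }

A guest that only implements one of the two branches above would see the
feature appear or disappear across a cross-flavour migration if it ever
re-reads the ID registers.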

Only accepting strictly matching register contents would probably be
easier for the VMM to implement (well, it looks easier for QEMU :)
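
FWIW, the strict variant really is trivial on the VMM side, whereas a
feature-level policy would have to compare something like
guest_sees_rasv1p1() above for the saved and the target values. Again
just a sketch, nothing QEMU-specific about the name:

    /* Strict policy: the incoming ID register must match bit for bit. */
    static bool id_reg_compatible_strict(uint64_t saved, uint64_t target)
    {
            return saved == target;
    }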




