Re: [PATCH v4 24/38] KVM: x86/pmu: Exclude PMU MSRs in vmx_get_passthrough_msr_slot()

On 5/16/2025 10:45 PM, Sean Christopherson wrote:
> On Fri, May 16, 2025, Sean Christopherson wrote:
>> On Mon, Mar 24, 2025, Mingwei Zhang wrote:
>>> Explicitly reject interception of PMU MSRs in
>>> vmx_get_passthrough_msr_slot(), since interception of PMU MSRs is
>>> handled specially in intel_passthrough_pmu_msrs().
>>>
>>> Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>
>>> Co-developed-by: Dapeng Mi <dapeng1.mi@xxxxxxxxxxxxxxx>
>>> Signed-off-by: Dapeng Mi <dapeng1.mi@xxxxxxxxxxxxxxx>
>>> ---
>>>  arch/x86/kvm/vmx/vmx.c | 12 +++++++++++-
>>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>>> index 38ecf3c116bd..7bb16bed08da 100644
>>> --- a/arch/x86/kvm/vmx/vmx.c
>>> +++ b/arch/x86/kvm/vmx/vmx.c
>>> @@ -165,7 +165,7 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
>>>  
>>>  /*
>>>   * List of MSRs that can be directly passed to the guest.
>>> - * In addition to these x2apic, PT and LBR MSRs are handled specially.
>>> + * In addition to these x2apic, PMU, PT and LBR MSRs are handled specially.
> Except y'all forgot to actually do the "special" handling: vmx_msr_filter_changed()
> needs to refresh the PMU MSR filters.  Only the x2APIC MSRs are excluded from
> userspace filtering (see kvm_msr_allowed()); everything else can be intercepted
> by userspace.  E.g. if an MSR filter is set _before_ PMU refresh, KVM's behavior
> will diverge from a filter that is set after PMU refresh.

Yes, it's indeed missing. We need to add the PMU MSR refresh there. Thanks for the reminder.
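
A minimal sketch of what that refresh could look like at the tail of
vmx_msr_filter_changed(), reusing intel_passthrough_pmu_msrs() from this
series (the enable_mediated_pmu guard below is only a placeholder for
whatever check the series actually uses):

	static void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
	{
		...
		/* PT MSRs can be passed through iff PT is exposed to the guest. */
		if (vmx_pt_mode_is_host_guest())
			pt_update_intercept_for_msr(vcpu);

		/*
		 * Sketch: re-evaluate PMU MSR interception against the new
		 * MSR filter, so that a filter installed before PMU refresh
		 * and one installed after it yield the same intercepts.
		 * intel_passthrough_pmu_msrs() is the helper added earlier
		 * in this series; the guard is a placeholder.
		 */
		if (enable_mediated_pmu)
			intel_passthrough_pmu_msrs(vcpu);
	}

In the final code the call may need to go through a kvm_pmu_ops hook rather
than a direct call, but the point is the same: the PMU intercepts have to be
recomputed whenever userspace changes the MSR filter.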


>
>>>   */
>>>  static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
>>>  	MSR_IA32_SPEC_CTRL,
>>> @@ -691,6 +691,16 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
>>>  	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
>>>  	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
>>>  		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
>>> +	case MSR_IA32_PMC0 ...
>>> +		MSR_IA32_PMC0 + KVM_MAX_NR_GP_COUNTERS - 1:
>>> +	case MSR_IA32_PERFCTR0 ...
>>> +		MSR_IA32_PERFCTR0 + KVM_MAX_NR_GP_COUNTERS - 1:
>>> +	case MSR_CORE_PERF_FIXED_CTR0 ...
>>> +		MSR_CORE_PERF_FIXED_CTR0 + KVM_MAX_NR_FIXED_COUNTERS - 1:
>>> +	case MSR_CORE_PERF_GLOBAL_STATUS:
>>> +	case MSR_CORE_PERF_GLOBAL_CTRL:
>>> +	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
>>> +		/* PMU MSRs. These are handled in intel_passthrough_pmu_msrs() */
>>>  		return -ENOENT;
>>>  	}
>> This belongs in the patch that configures interception.  A better split would be
>> to have an Intel patch and an AMD patch, 
> I take that back, splitting the Intel and AMD logic is just as messy,
> because the control logic is very much shared between VMX and SVM.



