Re: [PATCH 00/17] ARM64 PMU Partitioning

Oliver Upton <oliver.upton@xxxxxxxxx> writes:

> On Mon, Jun 02, 2025 at 07:26:45PM +0000, Colton Lewis wrote:
>> Caveats:
>>
>> Because the most consistent and performant thing to do was untrap
>> PMCR_EL0, the number of counters visible to the guest via PMCR_EL0.N
>> is always equal to the value KVM sets for MDCR_EL2.HPMN. Previously
>> allowed writes to PMCR_EL0.N via {GET,SET}_ONE_REG no longer affect
>> the guest.
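[A minimal userspace-style sketch of the relationship described above. The
bit positions come from the architecture (PMCR_EL0.N is bits [15:11],
MDCR_EL2.HPMN is bits [4:0]); the helper name and scaffolding are
illustrative, not KVM code.]

#include <inttypes.h>
#include <stdio.h>

/* Architectural field positions: PMCR_EL0.N is bits [15:11],
 * MDCR_EL2.HPMN is bits [4:0]. Names here are illustrative only. */
#define PMCR_EL0_N_SHIFT	11
#define PMCR_EL0_N_MASK		UINT64_C(0x1f)
#define MDCR_EL2_HPMN_MASK	UINT64_C(0x1f)

/*
 * With PMCR_EL0 untrapped, the N field the guest reads is whatever the
 * hardware reports, and the hardware reports MDCR_EL2.HPMN -- so the
 * value KVM programs into HPMN is what the guest sees, regardless of
 * anything userspace wrote via {GET,SET}_ONE_REG.
 */
static uint64_t guest_pmcr_n(uint64_t mdcr_el2)
{
	return mdcr_el2 & MDCR_EL2_HPMN_MASK;
}

int main(void)
{
	uint64_t mdcr_el2 = 4;	/* KVM reserves counters 4+ for the host */
	uint64_t pmcr_el0 = guest_pmcr_n(mdcr_el2) << PMCR_EL0_N_SHIFT;

	printf("guest-visible PMCR_EL0.N = %" PRIu64 "\n",
	       (pmcr_el0 >> PMCR_EL0_N_SHIFT) & PMCR_EL0_N_MASK);
	return 0;
}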

>> These improvements come at the cost of 7-35 new registers that must
>> be swapped at every vcpu_load and vcpu_put if the feature is enabled.
>> I have been informed KVM would like to avoid paying this cost when
>> possible.

>> One solution is to make the trapping changes and context swapping
>> lazy, so they only take place once the guest has actually accessed
>> the PMU; guests that never access the PMU never pay the cost.

> You should try and model this similar to how we manage the debug
> breakpoints/watchpoints. In that case the debug register context is
> loaded if either:
>
>   (1) Self-hosted debug is actively in use by the guest, or
>
>   (2) The guest has accessed a debug register since the last vcpu_load()
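
[As a rough illustration of that scheme applied to the partitioned PMU --
all names below are hypothetical, not existing KVM code -- the per-vCPU
state could carry a couple of flags that are consulted at load/put time.]

#include <stdbool.h>

/* Hypothetical per-vCPU state for the partitioned PMU. */
struct pmu_partition_ctx {
	bool	active;		/* guest is known to be using the PMU */
	bool	accessed;	/* a PMU register trapped since the last load */
	/* ... saved guest PMU register file would live here ... */
};

/* Called from the PMU sysreg trap handler: note the access. */
static void vcpu_pmu_mark_accessed(struct pmu_partition_ctx *ctx)
{
	ctx->accessed = true;
}

/* Called from vcpu_load(): only swap the partitioned PMU context in when
 * the guest is actually using it, so idle guests never pay the cost. */
static bool vcpu_pmu_needs_load(const struct pmu_partition_ctx *ctx)
{
	return ctx->active || ctx->accessed;
}

/* Called from vcpu_put(): save guest state only if it was loaded, and
 * remember that this guest is an active PMU user for next time. */
static void vcpu_pmu_put(struct pmu_partition_ctx *ctx)
{
	if (!vcpu_pmu_needs_load(ctx))
		return;
	/* ...save guest PMU registers, restore host ones... */
	ctx->active = true;
	ctx->accessed = false;
}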

Okay

>> This is not done here because it is not crucial to the primary
>> functionality and I thought review would be more productive as soon as
>> I had something complete enough for reviewers to easily play with.
>>
>> However, this or any better ideas are on the table for inclusion in
>> future re-rolls.

> One of the other things that I'd like to see is whether we can pare down
> the number of CPU feature dependencies for a partitioned PMU. Annoyingly,
> there aren't a lot of machines out there with FEAT_FGT yet, and you
> should be able to make all of this work in VHE + FEAT_PMUv3p1.
>
> That "just" comes at the cost of extra traps (leaving TPM and
> potentially TPMCR set). You can mitigate the cost of this by emulating,
> in the fast path, accesses that don't need to go out to a kernel context
> to be serviced. Same goes for requiring FEAT_HPMN0 to expose 0 event
> counters; we can fall back to TPM traps if needed.
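
[A sketch of what that fallback could look like. MDCR_EL2.TPM and
MDCR_EL2.TPMCR are the architectural trap bits (bits 6 and 5); the helper
and its parameters are hypothetical glue for illustration.]

#include <stdbool.h>
#include <stdint.h>

/* Architectural MDCR_EL2 trap bits: TPM is bit 6, TPMCR is bit 5. */
#define MDCR_EL2_TPM	(UINT64_C(1) << 6)
#define MDCR_EL2_TPMCR	(UINT64_C(1) << 5)

/*
 * Extra traps a partitioned PMU would need to leave set on a host without
 * FEAT_FGT (and without FEAT_HPMN0 when zero counters are exposed). The
 * trapped accesses would then be emulated in the hypervisor fast path
 * where possible rather than bouncing out to a full kernel context.
 */
static uint64_t pmu_partition_extra_traps(bool has_fgt, bool has_hpmn0,
					  unsigned int nr_guest_counters)
{
	uint64_t mdcr = 0;

	if (!has_fgt)
		mdcr |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;

	/* No FEAT_HPMN0: HPMN can't hide every counter, so trap instead. */
	if (!has_hpmn0 && nr_guest_counters == 0)
		mdcr |= MDCR_EL2_TPM;

	return mdcr;
}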

> Taking perf out of the picture should still give you a significant
> reduction in vPMU overheads.

Okay

> Last thing, let's table guest support for FEAT_PMUv3_ICNTR for the time
> being. Yes, it falls in the KVM-owned range, but we can just handle it
> with a fine-grained undef for now. Once the core infrastructure has
> landed upstream we can start layering new features into the partitioned
> implementation.

Sure

> Thanks,
> Oliver



