Re: [PATCH 11/28] KVM: SVM: Add helpers for accessing MSR bitmap that don't rely on offsets

On 6/4/25 19:35, Sean Christopherson wrote:
> On Wed, Jun 04, 2025, Paolo Bonzini wrote:
>> Replying here for patches 11/25/26.  None of this is needed, just write a
>> function like this:
>>
>> static inline u32 svm_msr_bit(u32 msr)
>> {
>> 	u32 msr_base = msr & ~(SVM_MSRS_PER_RANGE - 1);
>>
> Ooh, clever.

>> 	if (msr_base == SVM_MSRPM_RANGE_0_BASE_MSR)
>> 		return SVM_MSRPM_BIT_NR(0, msr);
>> 	if (msr_base == SVM_MSRPM_RANGE_1_BASE_MSR)
>> 		return SVM_MSRPM_BIT_NR(1, msr);
>> 	if (msr_base == SVM_MSRPM_RANGE_2_BASE_MSR)
>> 		return SVM_MSRPM_BIT_NR(2, msr);
>> 	return MSR_INVALID;
>> }
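
To make the masking trick concrete, a worked example (assuming the usual SVM
MSRPM geometry, i.e. SVM_MSRS_PER_RANGE == 0x2000; the constant's value is
not spelled out in this thread):

	/*
	 * MSR_LSTAR is 0xc0000082.  Masking off the low bits:
	 *
	 *   0xc0000082 & ~(0x2000 - 1) == 0xc0000000
	 *
	 * yields SVM_MSRPM_RANGE_1_BASE_MSR directly, no subtraction needed,
	 * and the bits that were masked off (0x082) are already the MSR's
	 * offset within the range.
	 */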

> I initially had something like this, but I don't like the potential for typos,
> e.g. to fat finger something like:
>
> 	if (msr_base == SVM_MSRPM_RANGE_2_BASE_MSR)
> 		return SVM_MSRPM_BIT_NR(1, msr);
>
> Which is how I ended up with the (admittedly ugly) CASE macros.  [...]
> Actually, better idea!  Hopefully.  With your masking trick, there's no need
> to do subtraction to get the offset within a range, which means getting the
> bit/byte number for an MSR can be done entirely programmatically.  And if we
> do that, then the SVM_MSRPM_RANGE_xxx_BASE_MSR defines can go away, and the
> (very trivial) copy+paste that I dislike also goes away.

> Completely untested, but how about this?

> 	#define SVM_MSRPM_OFFSET_MASK (SVM_MSRS_PER_RANGE - 1)
>
> 	static __always_inline int svm_msrpm_bit_nr(u32 msr)

(yeah, after hitting send I noticed that msr -> msrpm would have been a better name)

> 	{
> 		int range_nr;
>
> 		switch (msr & ~SVM_MSRPM_OFFSET_MASK) {
> 		case 0:
> 			range_nr = 0;
> 			break;
> 		case 0xc0000000:
> 			range_nr = 1;
> 			break;
> 		case 0xc0010000:
> 			range_nr = 2;
> 			break;
> 		default:
> 			return -EINVAL;
> 		}

I actually was going to propose something very similar; I refrained only because
I wasn't sure whether there would be other remaining uses of
SVM_MSRPM_RANGE_?_BASE_MSR.  The above is nice.

> 		return range_nr * SVM_MSRPM_BYTES_PER_RANGE * BITS_PER_BYTE +
> 		       (msr & SVM_MSRPM_OFFSET_MASK) * SVM_BITS_PER_MSR;
> 	}

Or this too:

  return ((range_nr * SVM_MSRS_PER_RANGE)
          + (msr & SVM_MSRPM_OFFSET_MASK)) * SVM_BITS_PER_MSR;

depending on personal taste.  A few fewer macros, a few more parentheses.

That removes the enjoyment of seeing everything collapse into a single LEA instruction (X*2+CONST), as was the case with SVM_MSRPM_BIT_NR. But I agree that these versions are about as nice as the code can be made.
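
The two returns agree arithmetically, assuming the usual MSRPM geometry
(2048 bytes per range, 8192 MSRs per range, two intercept bits per MSR;
these values are not spelled out above).  The range stride is the same
either way:

	SVM_MSRPM_BYTES_PER_RANGE * BITS_PER_BYTE  ==  2048 * 8  ==  16384
	SVM_MSRS_PER_RANGE * SVM_BITS_PER_MSR      ==  8192 * 2  ==  16384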

> The open coded literals aren't pretty, but VMX does the same thing, precisely
> because I didn't want any code besides the innermost helper dealing with the
> msr => offset math.
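
As an illustration of how a caller might consume such a helper (a
hypothetical sketch, not code from the patch; the name
svm_test_msr_read_intercept is made up, and the read intercept being the
even bit is assumed from the MSRPM layout):

	static inline bool svm_test_msr_read_intercept(unsigned long *msrpm,
						       u32 msr)
	{
		int bit_nr = svm_msrpm_bit_nr(msr);

		/* Treat MSRs outside the three ranges as intercepted. */
		if (bit_nr < 0)
			return true;

		return test_bit(bit_nr, msrpm);
	}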

>>> +#define BUILD_SVM_MSR_BITMAP_HELPERS(ret_type, action, bitop)			\
>>> +	__BUILD_SVM_MSR_BITMAP_HELPER(ret_type, action, bitop, read,  0)	\
>>> +	__BUILD_SVM_MSR_BITMAP_HELPER(ret_type, action, bitop, write, 1)
>>> +
>>> +BUILD_SVM_MSR_BITMAP_HELPERS(bool, test, test)
>>> +BUILD_SVM_MSR_BITMAP_HELPERS(void, clear, __clear)
>>> +BUILD_SVM_MSR_BITMAP_HELPERS(void, set, __set)
>> Yes it's a bit of duplication, but no need for the nesting, just do:

> I don't have a super strong preference, but I do want to be consistent between
> VMX and SVM, and VMX has the nesting (unsurprisingly, also written by me).  And
> for that, the nested macros add a bit more value due to reads vs writes being in
> entirely different areas of the bitmap.

Yeah, fair enough.  Since it's copied from VMX it makes sense.
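
For reference, roughly what one instantiation of the nested macros could
expand to (a sketch only; the actual __BUILD_SVM_MSR_BITMAP_HELPER body is
in the patch, and the trailing 0/1 argument is assumed to select the
read/write bit):

	/* BUILD_SVM_MSR_BITMAP_HELPERS(void, clear, __clear), read variant: */
	static inline void svm_clear_msr_bitmap_read(unsigned long *bitmap,
						     u32 msr)
	{
		int bit_nr = svm_msrpm_bit_nr(msr);

		if (bit_nr < 0)
			return;

		/* "read" helpers pass 0, "write" helpers pass 1. */
		__clear_bit(bit_nr + 0, bitmap);
	}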

Paolo