Re: [PATCHv2 00/12] TDX: Enable Dynamic PAMT

On Mon, 2025-08-11 at 19:02 -0700, Sean Christopherson wrote:
> > I just did a test of simultaneously starting 10 VMs with 16GB of ram (non huge
> 
> How many vCPUs?  And were the VMs actually accepting/faulting all 16GiB?

4 vCPUs.

I redid the test: boot 10 TDs with 16GB of RAM each, run userspace to fault in
memory from 4 threads until OOM, then shut down. The TDs were split between two
sockets. It ended up with 1136 contentions of the global lock and 4ms of total
wait time.

It still feels very wrong, but it's also not something that is very measurable
in the real world. Like Kirill was saying, the global lock is not held very
long. I'm not sure whether this may still hit scalability issues from cacheline
bouncing on bigger multi-socket systems, but we do have a path forward if we hit
that. Depending on the scale of the problem that comes up, we could decide
whether to make the lock per-2MB region at the cost of more memory, or use a
hashed table of N locks like Dave suggested.

So I'll plan to keep the existing single lock for now unless anyone has any
strong objections.

> 
> There's also a noisy neighbor problem lurking.  E.g. malicious/buggy VM spams
> private<=>shared conversions and thus interferes with PAMT allocations for
> other
> VMs.

Hmm, as long as it doesn't block it completely, it seems ok?



