On Tue, May 06, 2025 at 06:15:40PM -0700, Vishal Annapurve wrote:
> On Tue, May 6, 2025 at 6:04 PM Yan Zhao <yan.y.zhao@xxxxxxxxx> wrote:
> >
> > On Mon, May 05, 2025 at 08:44:26PM +0800, Huang, Kai wrote:
> > > On Fri, 2025-05-02 at 16:08 +0300, Kirill A. Shutemov wrote:
> > > > +static int tdx_pamt_add(atomic_t *pamt_refcount, unsigned long hpa,
> > > > +			struct list_head *pamt_pages)
> > > > +{
> > > > +	u64 err;
> > > > +
> > > > +	hpa = ALIGN_DOWN(hpa, SZ_2M);
> > > > +
> > > > +	spin_lock(&pamt_lock);
> > >
> > > Just curious, Can the lock be per-2M-range?
> > Me too.
> > Could we introduce smaller locks each covering a 2M range?
> >
> > And could we deposit 2 pamt pages per-2M hpa range no matter if it's finally
> > mapped as a huge page or not?
> >
>
> Are you suggesting to keep 2 PAMT pages allocated for each private 2M
> page even if it's mapped as a hugepage? It will lead to wastage of
> memory of 4 MB per 1GB of guest memory range. For large VM sizes that
> will amount to high values.

Ok. I'm thinking of the possibility of aligning the time of PAMT page
allocation with that of physical page allocation.
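
For illustration only, here is a userspace sketch of the "smaller locks, each
covering a 2M range" idea raised in the thread: a per-2M refcount guarded by a
hashed lock bucket, where the first reference to a 2M range would trigger the
PAMT page deposit and the last put would reclaim it. This is NOT the patch
under review; names such as pamt_lock_for() and the bucket count are
hypothetical, and pthread mutexes stand in for kernel spinlocks.

```c
/*
 * Userspace sketch only -- not the kernel patch under review.
 * Models a per-2M-range lock plus refcount instead of one global
 * pamt_lock. pamt_lock_for() and PAMT_LOCK_BUCKETS are hypothetical.
 */
#include <pthread.h>
#include <stdint.h>

#define SZ_2M             (2UL << 20)
#define ALIGN_DOWN(x, a)  ((x) & ~((uint64_t)(a) - 1))
#define PAMT_LOCK_BUCKETS 64    /* hypothetical bucket count */
#define PAMT_MAX_GRANULES 1024  /* toy address-space size */

static pthread_mutex_t pamt_locks[PAMT_LOCK_BUCKETS];
static int pamt_refcount[PAMT_MAX_GRANULES];

void pamt_locks_init(void)
{
	for (int i = 0; i < PAMT_LOCK_BUCKETS; i++)
		pthread_mutex_init(&pamt_locks[i], NULL);
}

/* Hash the 2M granule index to a lock bucket: contention is limited
 * to ranges that collide in the hash, instead of being global. */
static pthread_mutex_t *pamt_lock_for(uint64_t hpa)
{
	return &pamt_locks[(hpa / SZ_2M) % PAMT_LOCK_BUCKETS];
}

int tdx_pamt_get(uint64_t hpa)
{
	uint64_t idx = ALIGN_DOWN(hpa, SZ_2M) / SZ_2M;
	pthread_mutex_t *lock = pamt_lock_for(hpa);

	pthread_mutex_lock(lock);
	if (pamt_refcount[idx]++ == 0) {
		/* First user of this 2M range: the real code would
		 * deposit the 2 PAMT pages here. */
	}
	pthread_mutex_unlock(lock);
	return 0;
}

void tdx_pamt_put(uint64_t hpa)
{
	uint64_t idx = ALIGN_DOWN(hpa, SZ_2M) / SZ_2M;
	pthread_mutex_t *lock = pamt_lock_for(hpa);

	pthread_mutex_lock(lock);
	if (--pamt_refcount[idx] == 0) {
		/* Last user gone: the real code would reclaim the
		 * PAMT pages here. */
	}
	pthread_mutex_unlock(lock);
}
```

The 4 MB figure in the reply follows from this layout: 1 GB of guest memory is
512 2M ranges, and depositing 2 PAMT pages (4 KB each) per range costs
512 * 2 * 4 KB = 4 MB.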