On Tue, Aug 12, 2025 at 03:12:52PM +0000, Edgecombe, Rick P wrote:
> On Tue, 2025-08-12 at 09:04 +0100, kas@xxxxxxxxxx wrote:
> > > > E.g. for things like TDCS pages and to some extent non-leaf S-EPT
> > > > pages, on-demand PAMT management seems reasonable. But for PAMTs
> > > > that are used to track guest-assigned memory, which is the vaaast
> > > > majority of PAMT memory, why not hook guest_memfd?
> > >
> > > This seems fine for 4K page backing. But when TDX VMs have huge page
> > > backing, the vast majority of private memory wouldn't need PAMT
> > > allocation at 4K granularity.
> > >
> > > IIUC, a guest_memfd allocation happening at 2M granularity doesn't
> > > necessarily translate to a 2M mapping in the guest EPT entries. If
> > > DPAMT support is to be properly utilized for huge page backings,
> > > there is value in not tying PAMT allocation to guest_memfd
> > > allocation.
> >
> > Right.
> >
> > It also requires special handling in many places in core-mm. For
> > example, what happens if a THP in guest_memfd gets split? Who would
> > allocate PAMT for it? Migration will be more complicated too (when we
> > get there).
>
> I actually went down this path too, but the problem I hit was that the
> TDX module wants the PAMT page size to match the S-EPT page size. And
> the S-EPT size will only be known at mapping time.

With DPAMT, when you pass a page pair to PAMT.ADD, the pages are stored
in the PAMT_2M entry. So the PAMT_2M entry cannot be used as a leaf
entry anymore.

In theory, the TDX module could stash them somewhere else, like a
generic memory pool to be drawn from for PAMT_4K when needed. But that
is a significantly different design from what we have now, with a
different set of problems.

-- 
Kiryl Shutsemau / Kirill A. Shutemov
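
[To make the PAMT.ADD constraint above concrete, here is a minimal
sketch of the on-demand allocation step being discussed. It is
illustrative only: tdx_pamt_add_4k_backing() and the
tdh_phymem_pamt_add() SEAMCALL wrapper are assumed names, not the
actual DPAMT patchset API, and the real series additionally refcounts
the pair so that all 4K pages within a 2M range share one allocation.]

/*
 * Illustrative sketch, not the actual DPAMT implementation.
 *
 * With Dynamic PAMT, a 2M physical range only gets PAMT_4K backing
 * when something in it is actually mapped at 4K granularity. The pair
 * of 4K pages handed to TDH.PHYMEM.PAMT.ADD is stored in the range's
 * PAMT_2M entry -- which is why that entry can no longer double as a
 * leaf entry tracking a 2M page.
 */
static int tdx_pamt_add_4k_backing(unsigned long hpa_2m)
{
	struct page *pg[2];
	u64 err;
	int i;

	/* One pair of 4K pages backs the PAMT_4K entries of one 2M range. */
	for (i = 0; i < 2; i++) {
		pg[i] = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
		if (!pg[i])
			goto err_free;
	}

	/* Hypothetical wrapper around the TDH.PHYMEM.PAMT.ADD SEAMCALL. */
	err = tdh_phymem_pamt_add(hpa_2m, page_to_phys(pg[0]),
				  page_to_phys(pg[1]));
	if (err) {
		__free_page(pg[0]);
		__free_page(pg[1]);
		return -EIO;
	}

	return 0;

err_free:
	while (i--)
		__free_page(pg[i]);
	return -ENOMEM;
}

[A matching PAMT.REMOVE path would free the pair when the last 4K
mapping in the range goes away. That lifetime follows S-EPT mapping
decisions, not guest_memfd allocations, which is the point of the
argument above.]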