On Thu, Apr 24, 2025 at 10:48:53AM +0300, Kirill A. Shutemov wrote:
> On Thu, Apr 24, 2025 at 11:04:28AM +0800, Yan Zhao wrote:
> > Enhance the SEAMCALL wrapper tdh_mem_page_aug() to support huge pages.
> >
> > Verify the validity of the level and ensure that the mapping range is fully
> > contained within the page folio.
> >
> > As a conservative solution, perform CLFLUSH on all pages to be mapped into
> > the TD before invoking the SEAMCALL TDH_MEM_PAGE_AUG. This ensures that any
> > dirty cache lines do not write back later and clobber TD memory.
> >
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@xxxxxxxxx>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> > Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> > ---
> >  arch/x86/virt/vmx/tdx/tdx.c | 11 ++++++++++-
> >  1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> > index f5e2a937c1e7..a66d501b5677 100644
> > --- a/arch/x86/virt/vmx/tdx/tdx.c
> > +++ b/arch/x86/virt/vmx/tdx/tdx.c
> > @@ -1595,9 +1595,18 @@ u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u
> >  		.rdx = tdx_tdr_pa(td),
> >  		.r8 = page_to_phys(page),
> >  	};
> > +	unsigned long nr_pages = 1 << (level * 9);
>
> PTE_SHIFT.

Yes. Thanks.

> > +	struct folio *folio = page_folio(page);
> > +	unsigned long idx = 0;
> >  	u64 ret;
> >
> > -	tdx_clflush_page(page);
> > +	if (!(level >= TDX_PS_4K && level < TDX_PS_NR) ||
>
> Do we even need this check?

Maybe not if tdh_mem_page_aug() trusts KVM :)
The consideration is to avoid nr_pages being so huge that it causes too many
tdx_clflush_page()s on any reckless error.

> > +	    (folio_page_idx(folio, page) + nr_pages > folio_nr_pages(folio)))
> > +		return -EINVAL;
> > +
> > +	while (nr_pages--)
> > +		tdx_clflush_page(nth_page(page, idx++));
> > +
> >  	ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);
> >
> >  	*ext_err1 = args.rcx;
> > --
> > 2.43.2
> >
>
> --
>  Kiryl Shutsemau / Kirill A. Shutemov
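
For reference, below is a minimal sketch of how the touched part of
tdh_mem_page_aug() might look with Kirill's PTE_SHIFT suggestion folded in,
while keeping the level/folio sanity check Yan explains above. It assumes
PTE_SHIFT is the per-level page-table index width (9 on x86-64), so
1 << (level * PTE_SHIFT) is the number of 4K pages covered by a mapping at
@level; everything else uses only the helpers already visible in the hunk,
and it is a sketch rather than the final patch.

	unsigned long nr_pages = 1 << (level * PTE_SHIFT);
	struct folio *folio = page_folio(page);
	unsigned long idx = 0;
	u64 ret;

	/*
	 * Reject out-of-range levels up front so a bogus @level cannot
	 * inflate nr_pages into an excessive number of flushes, and make
	 * sure the whole mapping range stays within the folio.
	 */
	if (!(level >= TDX_PS_4K && level < TDX_PS_NR) ||
	    (folio_page_idx(folio, page) + nr_pages > folio_nr_pages(folio)))
		return -EINVAL;

	/* Flush every 4K page in the range before it is donated to the TD. */
	while (nr_pages--)
		tdx_clflush_page(nth_page(page, idx++));

	ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);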