On Thu, May 15, 2025 at 10:16:58AM +0800, Chao Gao wrote:
> On Thu, Apr 24, 2025 at 11:04:28AM +0800, Yan Zhao wrote:
> >Enhance the SEAMCALL wrapper tdh_mem_page_aug() to support huge pages.
> >
> >Verify the validity of the level and ensure that the mapping range is fully
> >contained within the page folio.
> >
> >As a conservative solution, perform CLFLUSH on all pages to be mapped into
> >the TD before invoking the SEAMCALL TDH_MEM_PAGE_AUG. This ensures that any
> >dirty cache lines do not write back later and clobber TD memory.
> >
> >Signed-off-by: Xiaoyao Li <xiaoyao.li@xxxxxxxxx>
> >Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> >Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> >---
> > arch/x86/virt/vmx/tdx/tdx.c | 11 ++++++++++-
> > 1 file changed, 10 insertions(+), 1 deletion(-)
> >
> >diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> >index f5e2a937c1e7..a66d501b5677 100644
> >--- a/arch/x86/virt/vmx/tdx/tdx.c
> >+++ b/arch/x86/virt/vmx/tdx/tdx.c
> >@@ -1595,9 +1595,18 @@ u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u
> > 		.rdx = tdx_tdr_pa(td),
> > 		.r8 = page_to_phys(page),
> > 	};
> >+	unsigned long nr_pages = 1 << (level * 9);
> >+	struct folio *folio = page_folio(page);
> >+	unsigned long idx = 0;
> > 	u64 ret;
> >
> >-	tdx_clflush_page(page);
> >+	if (!(level >= TDX_PS_4K && level < TDX_PS_NR) ||
> >+	    (folio_page_idx(folio, page) + nr_pages > folio_nr_pages(folio)))
> >+		return -EINVAL;
>
> Returning -EINVAL looks incorrect as the return type is u64.

Good catch. Thanks! I'll think about how to handle it.
Looks like it could be dropped if we trust KVM.

> >+	while (nr_pages--)
> >+		tdx_clflush_page(nth_page(page, idx++));
> >+
> > 	ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);
> >
> > 	*ext_err1 = args.rcx;
> >--
> >2.43.2
> >
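To illustrate the type mismatch Chao flagged: stuffing -EINVAL into a u64 return doesn't produce a negative errno, it produces a large unsigned value that a caller decoding SEAMCALL status words could misinterpret. A minimal user-space sketch (with uint64_t standing in for the kernel's u64):

```c
#include <stdint.h>
#include <errno.h>

/*
 * Model of the problem: tdh_mem_page_aug() returns u64 (a SEAMCALL
 * status word). Returning -EINVAL through that type yields
 * 2^64 - EINVAL, not a negative errno the caller can test with < 0.
 */
static uint64_t return_einval_as_u64(void)
{
	return (uint64_t)-EINVAL;
}
```

With EINVAL == 22 (Linux), the value that reaches the caller is 0xFFFFFFFFFFFFFFEA, which lands in a bit pattern that status-decoding helpers would parse as a (bogus) TDX error class rather than an errno.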
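For reference, the level/containment check the patch adds can be modeled in isolation. The TDX_PS_* values below are stand-ins mirroring the kernel's definitions, and the folio bookkeeping is reduced to plain indices; this is a sketch of the arithmetic, not the kernel code:

```c
#include <stdbool.h>

/* Stand-ins for the kernel's TDX page-size levels. */
#define TDX_PS_4K 0
#define TDX_PS_2M 1
#define TDX_PS_1G 2
#define TDX_PS_NR 3

/*
 * Each level covers 512x more 4K pages (9 more GPA bits), so a
 * level-L mapping spans 1 << (L * 9) base pages. The range
 * [page_idx, page_idx + nr_pages) must fit inside the backing folio.
 */
static bool aug_range_ok(int level, unsigned long page_idx_in_folio,
			 unsigned long folio_nr_pages)
{
	unsigned long nr_pages;

	if (level < TDX_PS_4K || level >= TDX_PS_NR)
		return false;

	nr_pages = 1UL << (level * 9);	/* 1, 512, or 262144 pages */
	return page_idx_in_folio + nr_pages <= folio_nr_pages;
}
```

E.g. a 2M aug (level 1) starting at page 0 of a 512-page folio fits, but starting at page 1 it overruns the folio and is rejected.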