On Thu, Aug 7, 2025 at 4:47 AM Yan Zhao <yan.y.zhao@xxxxxxxxx> wrote:
>
> From: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
>
> The TDX module enumerates with a TDX_FEATURES0 bit if an explicit cache
> flush is necessary when switching KeyID for a page, like before
> handing the page over to a TD.
>
> Currently, none of the TDX-capable platforms have this bit enabled.
>
> Moreover, cache flushing with TDH.PHYMEM.PAGE.WBINVD fails if
> Dynamic PAMT is active and the target page is not 4k. The SEAMCALL only
> supports 4k pages and will fail if there is no PAMT_4K for the HPA.
>
> Avoid performing these cache flushes unless the CLFLUSH_BEFORE_ALLOC bit
> of TDX_FEATURES0 is set.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> ---
> RFC v2:
> - Pulled from
>   git://git.kernel.org/pub/scm/linux/kernel/git/kas/linux.git tdx/dpamt-huge.
> - Rebased on top of TDX huge page RFC v2 (Yan)
> ---
>  arch/x86/include/asm/tdx.h  |  1 +
>  arch/x86/virt/vmx/tdx/tdx.c | 19 +++++++++++++------
>  2 files changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> index f1bd74348b34..c058a82d4a97 100644
> --- a/arch/x86/include/asm/tdx.h
> +++ b/arch/x86/include/asm/tdx.h
> @@ -15,6 +15,7 @@
>
>  /* Bit definitions of TDX_FEATURES0 metadata field */
>  #define TDX_FEATURES0_NO_RBP_MOD               BIT_ULL(18)
> +#define TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC     BIT_ULL(23)
>  #define TDX_FEATURES0_DYNAMIC_PAMT             BIT_ULL(36)
>
>  #ifndef __ASSEMBLER__
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index 9ed585bde062..b7a0ee0f4a50 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -1648,14 +1648,13 @@ static inline u64 tdx_tdvpr_pa(struct tdx_vp *td)
>  	return page_to_phys(td->tdvpr_page);
>  }
>
> -/*
> - * The TDX module exposes a CLFLUSH_BEFORE_ALLOC bit to specify whether
> - * a CLFLUSH of pages is required before handing them to the TDX module.
> - * Be conservative and make the code simpler by doing the CLFLUSH
> - * unconditionally.
> - */
>  static void tdx_clflush_page(struct page *page)
>  {
> +	u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
> +
> +	if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> +		return;

Isn't the logic here and below reversed? If the
TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC bit is set, we want to perform the
clflush(). See the sketch at the bottom for what I would have expected.

> +
>  	clflush_cache_range(page_to_virt(page), PAGE_SIZE);
>  }
>
> @@ -2030,8 +2029,12 @@ EXPORT_SYMBOL_GPL(tdh_phymem_cache_wb);
>
>  u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td)
>  {
> +	u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
>  	struct tdx_module_args args = {};
>
> +	if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> +		return 0;
> +
>  	args.rcx = mk_keyed_paddr(tdx_global_keyid, td->tdr_page);
>
>  	return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
> @@ -2041,10 +2044,14 @@ EXPORT_SYMBOL_GPL(tdh_phymem_page_wbinvd_tdr);
>  u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct folio *folio,
>  				unsigned long start_idx, unsigned long npages)
>  {
> +	u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
>  	struct page *start = folio_page(folio, start_idx);
>  	struct tdx_module_args args = {};
>  	u64 err;
>
> +	if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> +		return 0;
> +
>  	if (start_idx + npages > folio_nr_pages(folio))
>  		return TDX_OPERAND_INVALID;
>
> --
> 2.43.2
>
>
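
To be concrete, going purely by the changelog's "unless the
CLFLUSH_BEFORE_ALLOC bit of TDX_FEATURES0 is set" wording, this is
roughly what I would have expected for tdx_clflush_page() (untested
sketch only, not a real patch):

	static void tdx_clflush_page(struct page *page)
	{
		u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;

		/* Per the changelog: flush only when CLFLUSH_BEFORE_ALLOC is set. */
		if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
			return;

		clflush_cache_range(page_to_virt(page), PAGE_SIZE);
	}

with the same inverted test (return 0 early only when the bit is *not*
set) in the two tdh_phymem_page_wbinvd_*() helpers. Or am I misreading
the polarity of the feature bit? If so, the changelog should probably
be reworded to match the code.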