Re: [RFC PATCH v2 04/23] KVM: TDX: Introduce tdx_clear_folio() to clear huge pages

On Tue, Sep 02, 2025 at 10:56:25AM +0800, Binbin Wu wrote:
> 
> 
> On 8/7/2025 5:42 PM, Yan Zhao wrote:
> > After removing or reclaiming a guest private page or a control page from a
> > TD, zero the physical page using movdir64b(), enabling the kernel to reuse
> > the page.
> > 
> > Introduce the function tdx_clear_folio() to zero out physical memory using
> > movdir64b(), starting from the page at "start_idx" within a "folio" and
> > spanning "npages" contiguous PFNs.
> > 
> > Convert tdx_clear_page() to be a helper function to facilitate the
> > zeroing of 4KB pages.
> 
> I think this sentence is outdated?
No? tdx_clear_page() is still invoked to clear tdr_page.
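
To make that concrete, I'd expect the remaining 4KB helper to reduce to a
thin wrapper over tdx_clear_folio().  Its body isn't quoted below, so the
following is only a sketch and the folio_page_idx() use is my assumption:

/*
 * Sketch only: keep tdx_clear_page() as the single-page case (e.g. for
 * the TDR control page) and let it delegate to tdx_clear_folio().
 */
static void tdx_clear_page(struct page *page)
{
	struct folio *folio = page_folio(page);

	tdx_clear_folio(folio, folio_page_idx(folio, page), 1);
}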

> > 
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@xxxxxxxxx>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> > Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> > ---
> > RFC v2:
> > - Add tdx_clear_folio().
> > - Drop inner loop _tdx_clear_page() and move __mb() outside of the loop.
> >    (Rick)
> > - Use C99-style definition of variables inside a for loop.
> > - Note: [1] also changes tdx_clear_page(). RFC v2 is not based on [1] now.
> > 
> > [1] https://lore.kernel.org/all/20250724130354.79392-2-adrian.hunter@xxxxxxxxx
> > 
> > RFC v1:
> > - Split out; let tdx_clear_page() accept level.
> > ---
> >   arch/x86/kvm/vmx/tdx.c | 22 ++++++++++++++++------
> >   1 file changed, 16 insertions(+), 6 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index 8eaf8431c5f1..4fabefb27135 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -277,18 +277,21 @@ static inline void tdx_disassociate_vp(struct kvm_vcpu *vcpu)
> >   	vcpu->cpu = -1;
> >   }
> >
> > -static void tdx_clear_page(struct page *page)
> > +static void tdx_clear_folio(struct folio *folio, unsigned long start_idx,
> > +			    unsigned long npages)
> >   {
> >   	const void *zero_page = (const void *) page_to_virt(ZERO_PAGE(0));
> > -	void *dest = page_to_virt(page);
> > -	unsigned long i;
> >
> >   	/*
> >   	 * The page could have been poisoned.  MOVDIR64B also clears
> >   	 * the poison bit so the kernel can safely use the page again.
> >   	 */
> > -	for (i = 0; i < PAGE_SIZE; i += 64)
> > -		movdir64b(dest + i, zero_page);
> > +	for (unsigned long j = 0; j < npages; j++) {
> > +		void *dest = page_to_virt(folio_page(folio, start_idx + j));
> > +
> > +		for (unsigned long i = 0; i < PAGE_SIZE; i += 64)
> > +			movdir64b(dest + i, zero_page);
> > +	}
> >
> >   	/*
> >   	 * MOVDIR64B store uses WC buffer.  Prevent following memory reads
> >   	 * from seeing potentially poisoned cache.
> > @@ -296,6 +299,13 @@ static void tdx_clear_page(struct page *page)
> >   	__mb();
> >   }
> > +static inline void tdx_clear_page(struct page *page)
> No need to tag a local static function with "inline".
Ok.
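
As an aside, the caller side for a huge mapping would then look roughly
like the sketch below.  The helper is hypothetical (not part of this
patch); KVM_PAGES_PER_HPAGE() and PG_LEVEL_2M are the existing x86 KVM
definitions:

/* Hypothetical caller: scrub a whole 2MB private page in one call. */
static void tdx_clear_2m_page(struct page *page)
{
	struct folio *folio = page_folio(page);

	/* Clear all 512 4KB pages backing the 2MB range. */
	tdx_clear_folio(folio, folio_page_idx(folio, page),
			KVM_PAGES_PER_HPAGE(PG_LEVEL_2M));
}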




