On Tue, 2025-08-19 at 21:28 +0200, Borislav Petkov wrote:
> On Thu, Aug 14, 2025 at 11:59:02AM +1200, Kai Huang wrote:
> > TL;DR:
> >
> > Prepare to unify how TDX and SME do cache flushing during kexec by
> > making a percpu boolean control whether to do the WBINVD.
> >
> > -- Background --
> >
> > On SME platforms, dirty cacheline aliases with and without the encryption
> > bit can coexist, and the CPU can flush them back to memory in random
> > order.  During kexec, the caches must be flushed before jumping to the
> > new kernel, otherwise the dirty cachelines could silently corrupt the
> > memory used by the new kernel due to the different encryption property.
> >
> > TDX also needs a cache flush during kexec for the same reason.  It would
> > be good to have a generic way to flush the cache instead of scattering
> > checks for each feature all around.
> >
> > When SME is enabled, the kernel basically encrypts all memory, including
> > the kernel itself, and a simple memory write from the kernel could dirty
> > cachelines.  Currently, the kernel uses WBINVD to flush the cache for
> > SME during kexec in two places:
> >
> > 1) the one in stop_this_cpu(), for all remote CPUs when the kexec-ing CPU
> >    stops them;
> > 2) the one in relocate_kernel(), where the kexec-ing CPU jumps to the
> >    new kernel.
> >
> > -- Solution --
> >
> > Unlike SME, TDX can only dirty cachelines when it is used (i.e., when
> > SEAMCALLs are performed).  Since there are no more SEAMCALLs after the
> > aforementioned WBINVDs, leverage this for TDX.
> >
> > To unify the approach for SME and TDX, use a percpu boolean to indicate
> > that the cache may be in an incoherent state and needs flushing during
> > kexec, and set the boolean for SME.  TDX can then leverage it.
> >
> > While SME could use a global flag (since it's enabled at early boot and
> > enabled on all CPUs), the percpu flag fits TDX better:
> >
> > The percpu flag can be set when a CPU makes a SEAMCALL, and cleared when
> > another WBINVD on the CPU obviates the need for a kexec-time WBINVD.
> > Saving the kexec-time WBINVD is valuable, because there is an existing
> > race[*] where kexec could proceed while another CPU is active.  WBINVD
> > could make this race worse, so it's worth skipping it when possible.
> >
> > -- Side effect to SME --
> >
> > Today the first WBINVD in stop_this_cpu() is performed when SME is
> > *supported* by the platform, and the second WBINVD is done in
> > relocate_kernel() when SME is *activated* by the kernel.  Make things
> > simple by also doing the second WBINVD when the platform supports SME.
> > This allows the kernel to simply turn on this percpu boolean when
> > bringing up a CPU by checking whether the platform supports SME.
> >
> > No other functional change intended.
> >
> > [*] The aforementioned race:
> >
> > During kexec, native_stop_other_cpus() is called to stop all remote CPUs
> > before jumping to the new kernel.  native_stop_other_cpus() first sends
> > normal REBOOT vector IPIs to stop the remote CPUs and waits for them to
> > stop.  If that times out, it sends NMIs to stop the CPUs that are still
> > alive.  The race happens when native_stop_other_cpus() has to send NMIs
> > and could potentially result in a system hang (for more information
> > please see [1]).
>
> This text is meandering a bit too much across a bunch of things and could be
> made tighter... Just a nitpick anyway...

Yeah agreed.  I've worked to improve it but ... :-)

I'll keep this in mind and do better in the future!
> >
> >  arch/x86/include/asm/kexec.h         |  4 ++--
> >  arch/x86/include/asm/processor.h     |  2 ++
> >  arch/x86/kernel/cpu/amd.c            | 17 +++++++++++++++++
> >  arch/x86/kernel/machine_kexec_64.c   | 14 ++++++++++----
> >  arch/x86/kernel/process.c            | 24 +++++++++++-------------
> >  arch/x86/kernel/relocate_kernel_64.S | 13 ++++++++++---
> >  6 files changed, 52 insertions(+), 22 deletions(-)
>
> Reviewed-by: Borislav Petkov (AMD) <bp@xxxxxxxxx>

Thanks!
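
For anyone skimming the thread, the percpu flag idea from the cover letter
above roughly boils down to the sketch below.  All identifiers here are made
up for illustration (they are not the names the series actually uses), and
the real kexec-time flush in relocate_kernel() is done in assembly, so treat
this as conceptual only:

#include <linux/percpu.h>
#include <asm/special_insns.h>          /* native_wbinvd() */

/* Set when this CPU may have dirtied cachelines that must be flushed
 * before jumping to the new kernel. */
static DEFINE_PER_CPU(bool, cache_incoherent);

/* SME: flag the CPU at bringup when the platform supports memory encryption. */
static void sme_mark_cache_incoherent(void)
{
        this_cpu_write(cache_incoherent, true);
}

/* TDX: flag the CPU whenever it issues a SEAMCALL; the flag can be cleared
 * again once a later WBINVD on this CPU makes the kexec-time flush
 * unnecessary. */
static void tdx_mark_cache_incoherent(void)
{
        this_cpu_write(cache_incoherent, true);
}

/* Kexec path: flush only when this CPU is flagged, then clear the flag. */
static void kexec_maybe_flush_cache(void)
{
        if (this_cpu_read(cache_incoherent)) {
                native_wbinvd();
                this_cpu_write(cache_incoherent, false);
        }
}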