On Thu, 17 Apr 2025 at 09:25, Qingfang Deng <dqfext@xxxxxxxxx> wrote:
>
> Hi Ard,
>
> On Thu, Apr 17, 2025 at 2:58 PM Ard Biesheuvel <ardb@xxxxxxxxxx> wrote:
> >
> > (cc Eric)
> >
> > On Thu, 17 Apr 2025 at 08:49, Qingfang Deng <dqfext@xxxxxxxxx> wrote:
> > >
> > > From: Qingfang Deng <qingfang.deng@xxxxxxxxxxxxxxx>
> > >
> > > Add a scalar implementation of GHASH for RISC-V using the Zbc (carry-less
> > > multiplication) and Zbb (bit-manipulation) extensions. This implementation
> > > is adapted from OpenSSL but rewritten in plain C for clarity.
> > >
> > > Unlike the OpenSSL one, which relies on bit-reflection of the data, this
> > > version uses a pre-computed (reflected and multiplied) key, inspired by
> > > the approach used in Intel's CLMUL driver, to avoid reflections at
> > > runtime.
> > >
> > > Signed-off-by: Qingfang Deng <qingfang.deng@xxxxxxxxxxxxxxx>
> >
> > What is the use case for this? AIUI, the scalar AES instructions were
> > never implemented by anyone, so how do you expect this to be used in
> > practice?
>
> The use case _is_ AES-GCM, as you mentioned. Without this, computing
> GHASH can take a considerable amount of CPU time (monitored by perf).
>

I see. But do you have a particular configuration in mind? Does it
have scalar AES too? I looked into that a while ago [0] but I was told
that nobody actually incorporates that. So what about these
extensions? Are they commonly implemented?

[0] https://web.git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=riscv-scalar-aes

> > ...
> > > +static __always_inline __uint128_t get_unaligned_be128(const u8 *p)
> > > +{
> > > +	__uint128_t val;
> > > +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> >
> > CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS means that the get_unaligned_xxx()
> > helpers are cheap. Casting a void * to an aligned type is still UB as
> > per the C standard.
>
> Technically an unaligned access is UB, but this pattern is widely used
> in networking code.
>

Of course. But that is no reason to keep doing it.

> >
> > So better to drop the #ifdef entirely, and just use the
> > get_unaligned_be64() helpers for both cases.
>
> Currently those helpers won't generate rev8 instructions, even if
> HAVE_EFFICIENT_UNALIGNED_ACCESS and RISCV_ISA_ZBB are set, so I have to
> implement my own version of this to reduce the number of instructions,
> and to align with the original OpenSSL implementation.
>

So fix the helpers.
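
For illustration, an #ifdef-free variant built only on the generic
accessors might look something like the sketch below. This is untested
and only a suggestion: get_unaligned_be64() is the existing helper from
<linux/unaligned.h> (<asm/unaligned.h> on older trees), and the 128-bit
composition shown here is just one way to do it.

static __always_inline __uint128_t get_unaligned_be128(const u8 *p)
{
	/*
	 * The generic helpers already take
	 * CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS into account, so no
	 * local #ifdef is needed; build the 128-bit value from two
	 * big-endian 64-bit loads.
	 */
	return ((__uint128_t)get_unaligned_be64(p) << 64) |
	       get_unaligned_be64(p + 8);
}

Whether rev8 gets emitted for the byte swap would then be a property of
the common helpers (and the compiler flags) rather than something each
driver open-codes.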