On Fri, 2025-06-13 at 12:13 -0400, James Bottomley wrote:
> On Fri, 2025-06-13 at 15:54 +0100, David Howells wrote:
> > Hi,
> > 
> > So we need to do something about the impending quantum-related
> > obsolescence of the RSA signatures that we use for module signing,
> > kexec, BPF signing, IMA and a bunch of other things.
> 
> Wait, that's not necessarily the whole threat. There are two
> possible ways quantum could compromise us. One is a computer that
> has enough qubits to run Shor's algorithm and break non-quantum
> crypto. The other is that a computer comes along with enough qubits
> to speed up brute-force attacks using Grover's algorithm. NIST still
> believes the latter will happen way before the former, so our first
> step should be doubling the number of security bits in existing
> algorithms, which means ECC of at least 512 bits (so curve25519
> needs replacing with at least curve448) and, for all practical
> purposes, deprecating RSA (unless someone wants to play with huge
> keys).
> 
> > From my point of view, the simplest way would be to implement key
> > verification in the kernel for one (or more) of the available
> > post-quantum algorithms (of which there are at least three),
> > driving this with appropriate changes to the X.509 certificate to
> > indicate that's what we want to use.
> 
> Can you at least enumerate them? There's still a dispute going on
> about whether we should use pure post-quantum or hybrid. I tend to
> think myself that hybrid is best for durable things like digital
> signatures, but given the NIST advice we should be using > 512-bit
> curves for that.
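A hybrid scheme of the kind being debated here amounts to AND-composition: the signature carries one component per algorithm and verifies only if every component verifies over the same message. A minimal sketch of that composition rule (all names are hypothetical, and HMAC merely stands in for the real pre-quantum and post-quantum verifiers, which are not in the Python standard library):

```python
import hmac
import hashlib
from dataclasses import dataclass
from typing import Callable

# (key, message, signature) -> accepted?
Verifier = Callable[[bytes, bytes, bytes], bool]

@dataclass
class HybridSig:
    classical: bytes  # e.g. the curve448 component
    pq: bytes         # e.g. the Dilithium/ML-DSA component

def hybrid_verify(msg: bytes, sig: HybridSig,
                  classical_key: bytes, classical_verify: Verifier,
                  pq_key: bytes, pq_verify: Verifier) -> bool:
    # Both components must pass: forging the hybrid signature means
    # breaking both the pre-quantum and the post-quantum scheme.
    return (classical_verify(classical_key, msg, sig.classical)
            and pq_verify(pq_key, msg, sig.pq))

def hmac_stand_in(key: bytes, msg: bytes, sig: bytes) -> bool:
    # Stand-in so the sketch runs; real code would call the actual
    # asymmetric signature verification primitives.
    return hmac.compare_digest(
        hmac.new(key, msg, hashlib.sha256).digest(), sig)
```

Tampering with either component makes hybrid_verify reject, which is the property that keeps such signatures durable even if one of the two schemes later falls.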
> > The good news is that Stephan Mueller has an implementation that
> > includes kernel bits that we can use, or, at least, adapt:
> > 
> > https://github.com/smuellerDD/leancrypto/
> 
> So the only hybrid scheme in there is Dilithium+25519, which doesn't
> quite fit the bill (although I'm assuming Dilithium+448 could easily
> be implemented).
> 
> > Note that we only need the signature verification bits. One
> > question, though: he's done it as a standalone "leancrypto"
> > module, not integrated into crypto/, but should it be integrated
> > into crypto/ or is the standalone fine?
> > 
> > The not-so-good news, as I understand it, though, is that the
> > X.509 bits are not yet standardised.
> > 
> > However! Not everyone agrees with this. An alternative proposal
> > would rather get the signature verification code out of the kernel
> > entirely. Simo Sorce's proposal, for example, AIUI, is to compile
> > all the hashes we need into the kernel at build time, possibly
> > with a hashed hash list to be loaded later to reduce the amount of
> > incompressible code in the kernel. If signatures are needed at
> > all, then this should be offloaded to a userspace program (which
> > would also have to be hashed and marked un-ptraceable and, I
> > think, unswappable) to do the checking.
> > 
> > I don't think we can dispense with signature checking entirely,
> > though: we need it for third-party module loading, quick
> > single-module driver updates and all the non-module checking
> > stuff. If it were to be done in userspace, this might entail an
> > upcall for each signature we want to check - either that, or the
> > kernel has to run a server process that it can delegate checking
> > to.
> 
> I agree we can't predict everything at build time, so we need a
> runtime scheme (like signatures) as well. However, I'm not convinced
> it should be run outside the kernel.
> The expansion of the TCB, plus the amount of checking the kernel has
> to do to make sure the upcall is secure, adds to the vulnerability
> compared with doing it in-kernel, where everything just works.
> 
> > It's also been suggested that PQ algorithms are really slow. For
> > kernel modules that might not matter too much, as we may well not
> > load more than 200 or so during boot - but there are other users
> > that may get used more frequently (IMA, for example).
> 
> If we go with a hybrid signature scheme, we can start off with only
> verifying the pre-quantum signature and have a switch to verify
> both.
> 
> > Now, there's also a possible hybrid approach, if I understand
> > Roberto Sassu's proposal correctly, whereby it caches bundles of
> > hashes obtained from, say, the hashes included in an RPM. These
> > bundles of hashes can be checked by a signature generated by the
> > package signing process. This would reduce the PQ overhead to
> > checking a bundle and would also make IMA's measuring easier, as
> > the hashes can be added in the right order, rather than being
> > dependent on the order in which the binaries are used.
> 
> I think you're referring to the IMA digest list extension proposal:
> 
> https://github.com/initlove/linux/wiki/IMA-Digest-Lists-Extension
> 
> I'm not sure it's been progressed much.

The latest iteration can be found here:

https://lore.kernel.org/linux-integrity/20241119104922.2772571-1-roberto.sassu@xxxxxxxxxxxxxxx/

It is more or less ready for upstreaming (from my point of view), with
the exception of a few comments that I still need to address.

The main problem was parsing the RPM headers in the kernel, which
Linus didn't like, but I believe I have now solved it with Mimi's
suggestion of making the digest list parsers pluggable (so the RPM
parser has been removed from this submission and moved to a kernel
module that is expected to be signed by distros). I explored the
alternative of moving the parser to user space but, as you mentioned,
it is a riskier approach.
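The amortisation behind the bundle-of-hashes approach can be sketched as follows (hypothetical names, not the API of the digest-lists patch set): one signature verification admits a whole bundle of digests, after which appraising each file costs only a hash computation and a set lookup rather than its own signature verification.

```python
import hashlib
from typing import Callable, Iterable

# (blob, signature) -> accepted?  A real implementation would use the
# kernel's asymmetric-key signature verification here; a dummy
# callable keeps the sketch self-contained.
SigVerifier = Callable[[bytes, bytes], bool]

class DigestListCache:
    """One signature check per bundle of file digests; the per-file
    cost afterwards is a hash plus a set lookup."""

    def __init__(self) -> None:
        self.known: set[bytes] = set()

    def add_list(self, digests: Iterable[bytes], signature: bytes,
                 verify: SigVerifier) -> bool:
        digests = list(digests)
        # The (potentially slow, e.g. post-quantum) signature check
        # is paid once for the whole bundle.
        if not verify(b"".join(digests), signature):
            return False
        self.known.update(digests)
        return True

    def appraise(self, contents: bytes) -> bool:
        # Cheap per-file path: no signature verification involved.
        return hashlib.sha256(contents).digest() in self.known
```

This also illustrates the measurement point above: the digests arrive in the order the bundle lists them, not in the order the binaries happen to be executed.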
Other than that, yes, it makes IMA much faster in my benchmarks (~34
seconds to look up 12312 digests and verify 303 ECDSA signatures, as
opposed to ~98 seconds to verify 12312 ECDSA signatures).

Roberto
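A rough breakdown of those benchmark figures (my arithmetic, derived only from the numbers quoted above, not from additional measurements):

```python
# ~98 s for 12312 individual ECDSA verifications implies roughly 8 ms
# per verification; the digest-list run then pays that cost only 303
# times, so signatures account for only a few seconds of its ~34 s.
per_verify_s = 98.0 / 12312                 # ~0.008 s per verification
digest_list_verify_s = 303 * per_verify_s   # ~2.4 s of the ~34 s run
print(f"~{per_verify_s * 1000:.1f} ms per ECDSA verification; "
      f"the 303 verifications account for ~{digest_list_verify_s:.1f} s, "
      f"so most of the ~34 s is digest computation and lookup")
```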