On Tue, Jun 10, 2025 at 10:23:26AM -0700, Bart Van Assche wrote:
> On 6/8/25 8:55 PM, Christoph Hellwig wrote:
> > The problem here is that blk_crypto_fallback_split_bio_if_needed does
> > sneaky splits behind the back of the main splitting code.
> >
> > The fix is to include the limit imposed by it in __bio_split_to_limits
> > as well if the crypto fallback is used.
>
> (+Eric)
>
> Hmm ... my understanding is that when the inline encryption fallback
> code is used that the bio must be split before
> blk_crypto_fallback_encrypt_bio() encrypts the bio. Making
> __bio_split_to_limits() take the inline encryption limit into account
> would require to encrypt the data much later. How to perform encryption
> later for bio-based drivers? Would moving the blk_crypto_bio_prep() call
> from submit_bio() to just before __bio_split_to_limits() perhaps require
> modifying all bio-based drivers that do not call
> __bio_split_to_limits()?

I think you could just prep the encryption at the point where the bio is
split to its queue's limits.  After that, all you need to do is ensure
the queue limits don't exceed what the fallback requires, which appears
to be a simple segment limitation.  It looks like most of the bio-based
drivers already split to limits.

If that all holds, it simplifies this path quite a bit, because the
crypto fallback no longer needs to concern itself with splitting at all.

Here's a quick shot at it, but compile tested only:

---
diff --git a/block/blk-core.c b/block/blk-core.c
index b862c66018f25..157f4515eea7d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -39,7 +39,6 @@
 #include <linux/bpf.h>
 #include <linux/part_stat.h>
 #include <linux/sched/sysctl.h>
-#include <linux/blk-crypto.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/block.h>
@@ -626,9 +625,6 @@ static void __submit_bio(struct bio *bio)
 	/* If plug is not used, add new plug here to cache nsecs time. */
 	struct blk_plug plug;
 
-	if (unlikely(!blk_crypto_bio_prep(&bio)))
-		return;
-
 	blk_start_plug(&plug);
 	if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 005c9157ffb3d..794400382e85b 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -209,36 +209,6 @@ blk_crypto_fallback_alloc_cipher_req(struct blk_crypto_keyslot *slot,
 	return true;
 }
 
-static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
-{
-	struct bio *bio = *bio_ptr;
-	unsigned int i = 0;
-	unsigned int num_sectors = 0;
-	struct bio_vec bv;
-	struct bvec_iter iter;
-
-	bio_for_each_segment(bv, bio, iter) {
-		num_sectors += bv.bv_len >> SECTOR_SHIFT;
-		if (++i == BIO_MAX_VECS)
-			break;
-	}
-	if (num_sectors < bio_sectors(bio)) {
-		struct bio *split_bio;
-
-		split_bio = bio_split(bio, num_sectors, GFP_NOIO,
-				      &crypto_bio_split);
-		if (IS_ERR(split_bio)) {
-			bio->bi_status = BLK_STS_RESOURCE;
-			return false;
-		}
-		bio_chain(split_bio, bio);
-		submit_bio_noacct(bio);
-		*bio_ptr = split_bio;
-	}
-
-	return true;
-}
-
 union blk_crypto_iv {
 	__le64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
 	u8 bytes[BLK_CRYPTO_MAX_IV_SIZE];
@@ -275,10 +245,6 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 	bool ret = false;
 	blk_status_t blk_st;
 
-	/* Split the bio if it's too big for single page bvec */
-	if (!blk_crypto_fallback_split_bio_if_needed(bio_ptr))
-		return false;
-
 	src_bio = *bio_ptr;
 	bc = src_bio->bi_crypt_context;
 	data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
diff --git a/block/blk-crypto-profile.c b/block/blk-crypto-profile.c
index 94a155912bf1c..2804843310cc4 100644
--- a/block/blk-crypto-profile.c
+++ b/block/blk-crypto-profile.c
@@ -459,6 +459,16 @@ bool blk_crypto_register(struct blk_crypto_profile *profile,
 		pr_warn("Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n");
 		return false;
 	}
+
+	if (queue_max_segments(q) > BIO_MAX_VECS) {
+		struct queue_limits lim;
+
+		lim = queue_limits_start_update(q);
+		lim.max_segments = BIO_MAX_VECS;
+		if (queue_limits_commit_update(q, &lim))
+			return false;
+	}
+
 	q->crypto_profile = profile;
 	return true;
 }
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 3af1d284add50..d6d6f5b214c82 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -124,9 +124,10 @@ static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
 		trace_block_split(split, bio->bi_iter.bi_sector);
 		WARN_ON_ONCE(bio_zone_write_plugging(bio));
 		submit_bio_noacct(bio);
-		return split;
+		bio = split;
 	}
 
+	blk_crypto_bio_prep(&bio);
 	return bio;
 error:
 	bio->bi_status = errno_to_blk_status(split_sectors);
--