Re: [PATCH 1/2] block: Make __submit_bio_noacct() preserve the bio submission order

On 6/8/25 8:55 PM, Christoph Hellwig wrote:
> The problem here is that blk_crypto_fallback_split_bio_if_needed does
> sneaky splits behind the back of the main splitting code.
>
> The fix is to include the limit imposed by it in __bio_split_to_limits
> as well if the crypto fallback is used.

(+Eric)

Hmm ... my understanding is that, when the inline encryption fallback
code is used, the bio must be split before
blk_crypto_fallback_encrypt_bio() encrypts it. Making
__bio_split_to_limits() take the inline encryption limit into account
would require encrypting the data much later. How would encryption be
performed later for bio-based drivers? Would moving the
blk_crypto_bio_prep() call from submit_bio() to just before
__bio_split_to_limits() perhaps require modifying all bio-based drivers
that do not call __bio_split_to_limits()?

> If you have time to fix this that would be great.  Otherwise I can
> give it a spin, but it's public holiday and travel season here, so
> my availability is a bit limited.

This is not the solution that you are looking for but this seems to
work:

diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 7c33e9573e5e..f4fefecdcc5e 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -213,6 +213,7 @@ blk_crypto_fallback_alloc_cipher_req(struct blk_crypto_keyslot *slot,
 static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
 {
 	struct bio *bio = *bio_ptr;
+	const struct queue_limits *lim = bdev_limits(bio->bi_bdev);
 	unsigned int i = 0;
 	unsigned int num_sectors = 0;
 	struct bio_vec bv;
@@ -223,6 +224,7 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
 		if (++i == BIO_MAX_VECS)
 			break;
 	}
+	num_sectors = min(num_sectors, get_max_io_size(bio, lim));
 	if (num_sectors < bio_sectors(bio)) {
 		struct bio *split_bio;

diff --git a/block/blk-merge.c b/block/blk-merge.c
index fb6253c07387..e308325a333c 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -192,8 +192,7 @@ static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
  * requests that are submitted to a block device if the start of a bio is not
  * aligned to a physical block boundary.
  */
-static inline unsigned get_max_io_size(struct bio *bio,
-				       const struct queue_limits *lim)
+unsigned get_max_io_size(struct bio *bio, const struct queue_limits *lim)
 {
 	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
 	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
diff --git a/block/blk.h b/block/blk.h
index 37ec459fe656..5f97db919cdf 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -425,6 +425,8 @@ static inline unsigned get_max_segment_size(const struct queue_limits *lim,
 		    (unsigned long)lim->max_segment_size - 1) + 1);
 }

+unsigned get_max_io_size(struct bio *bio, const struct queue_limits *lim);
+
 int ll_back_merge_fn(struct request *req, struct bio *bio,
 		unsigned int nr_segs);
 bool blk_attempt_req_merge(struct request_queue *q, struct request *rq,


Thanks,

Bart.
