On 6/8/25 3:47 PM, Damien Le Moal wrote:
> So yes, we need a fix. Can you work on one?
The patch below seems to be sufficient but I'm not sure whether this
approach is acceptable:
Subject: [PATCH] block: Preserve the LBA order when splitting a bio
Preserve the bio order if bio_split() is called on the prefix returned
by an earlier bio_split() call. This can happen with fscrypt and the
inline encryption fallback code if max_sectors is less than the maximum
bio size supported by the inline encryption fallback code (1 MiB for 4 KiB
pages) or when using zoned storage and the distance from the start of the
bio to the next zone boundary is less than 1 MiB.
Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
---
block/bio.c | 8 ++++++++
block/blk-core.c | 12 ++++++++----
include/linux/blk_types.h | 5 +++++
3 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 3c0a558c90f5..440ed443545c 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1689,6 +1689,14 @@ struct bio *bio_split(struct bio *bio, int sectors,
bio_advance(bio, split->bi_iter.bi_size);
+ /*
+ * If bio_split() is called on a prefix from an earlier bio_split()
+ * call, adding it at the head of current->bio_list[0] preserves the
+ * LBA order. This is essential when writing data to a zoned block
+ * device and when using REQ_OP_WRITE instead of REQ_OP_ZONE_APPEND.
+ */
+ bio_set_flag(bio, BIO_ADD_AT_HEAD);
+
if (bio_flagged(bio, BIO_TRACE_COMPLETION))
bio_set_flag(split, BIO_TRACE_COMPLETION);
diff --git a/block/blk-core.c b/block/blk-core.c
index b862c66018f2..570a14a7bcd4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -745,12 +745,16 @@ void submit_bio_noacct_nocheck(struct bio *bio)
* to collect a list of requests submited by a ->submit_bio method while
* it is active, and then process them after it returned.
*/
- if (current->bio_list)
- bio_list_add(&current->bio_list[0], bio);
- else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO))
+ if (current->bio_list) {
+ if (bio_flagged(bio, BIO_ADD_AT_HEAD))
+ bio_list_add_head(&current->bio_list[0], bio);
+ else
+ bio_list_add(&current->bio_list[0], bio);
+ } else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
__submit_bio_noacct_mq(bio);
- else
+ } else {
__submit_bio_noacct(bio);
+ }
}
static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 3d1577f07c1c..0e2d3fd8d40a 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -308,6 +308,11 @@ enum {
BIO_REMAPPED,
BIO_ZONE_WRITE_PLUGGING, /* bio handled through zone write plugging */
BIO_EMULATES_ZONE_APPEND, /* bio emulates a zone append operation */
+ /*
+ * make submit_bio_noacct_nocheck() add this bio at the head of
+ * current->bio_list[0].
+ */
+ BIO_ADD_AT_HEAD,
BIO_FLAG_LAST
};