On 11/06/2025 18:53, Jens Axboe wrote:
> Yes we can't revert it, and honestly I would not want to even if that
> was an option. If the multi-queue case is particularly important, you
> could just do something ala the below - keep scanning until a merge
> _could_ have happened but didn't. Ideally we'd want to iterate the
> plug list backwards and then we could keep the same single shot
> logic, where you only attempt one request that has a matching queue.
> And obviously we could just doubly link the requests, there's space
> in the request linkage code to do that. But that'd add overhead in
> general, I think it's better to shove a bit of that overhead to the
> multi-queue case. I suspect the below would do the trick, however.
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 70d704615be5..4313301f131c 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -1008,6 +1008,8 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
>  	rq_list_for_each(&plug->mq_list, rq) {
>  		if (rq->q != q)
>  			continue;
> +		if (blk_try_merge(rq, bio) == ELEVATOR_NO_MERGE)
> +			continue;
>  		if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
>  		    BIO_MERGE_OK)
>  			return true;
>
> --
> Jens Axboe
Sorry for my delayed reply here; I was on a business trip for the last couple of weeks. I have done some testing on six SSDs aggregated as raid0 to simulate the multi-queue case, but I haven't seen a measurable impact from that change, at least on the random write test case. It looks like the patch has been queued to 6.15 and 6.12 stable without this change, so I assume we are dropping it?
Kernel         | fio BW (MiB/s) | I/O size (iostat)
---------------+----------------+------------------
6.15.2         | 639            | 4KiB
6.15.2+patchv1 | 648            | 4KiB
6.15.2+patchv2 | 665            | 4KiB

Hazem