Re: [PATCH 2/2] md: split bio by io_opt size in md_submit_bio()

On 17/08/2025 16:26, colyli@xxxxxxxxxx wrote:
From: Coly Li <colyli@xxxxxxxxxx>

Currently in md_submit_bio() the incoming request bio is split by
bio_split_to_limits(), which makes sure the bio won't exceed
max_hw_sectors of a specific raid level before being sent into its
.make_request method.

For raid levels 4/5/6 such a split can be problematic and hurt large
read/write performance. Because limits.max_hw_sectors is not always
aligned to limits.io_opt, the split bio won't cover full stripes on all
data disks, which introduces extra read-in I/O. Even if the bio's
bi_sector is aligned to limits.io_opt and the bio is large enough, the
resulting split bio is not size-friendly to the corresponding raid456
level.

This patch introduces bio_split_by_io_opt() to solve the above issue:
1, If the incoming bio is not limits.io_opt aligned, split off the non-
   aligned head part. Then the next part will be aligned.
2, If the incoming bio is limits.io_opt aligned and a split is
   necessary, try to split by a multiple of limits.io_opt that does not
   exceed limits.max_hw_sectors.
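The two rules above can be sketched in plain C. This is a userspace
illustration of the intended arithmetic, not the patch's actual code;
the helper name and the sample sector counts are mine:

```c
#include <assert.h>

typedef unsigned long long sector_t;

/*
 * How many sectors to split off the front of a bio that starts at
 * sector `start` and spans `nr` sectors, given io_opt and
 * max_hw_sectors (all in 512-byte sectors). 0 means "no split needed".
 */
static sector_t split_len_by_io_opt(sector_t start, sector_t nr,
				    sector_t io_opt, sector_t max_hw)
{
	sector_t offset = start % io_opt;
	sector_t len;

	if (offset)		/* rule 1: peel off the misaligned head */
		return io_opt - offset;
	if (nr <= io_opt)	/* aligned and small enough: leave whole */
		return 0;
	/* rule 2: largest multiple of io_opt not exceeding max_hw */
	len = (max_hw / io_opt) * io_opt;
	return nr > len ? len : 0;
}
```

With an 8-disk raid5 and 64KiB chunks, io_opt would be 7 data disks *
64KiB = 448KiB = 896 sectors, so a bio starting at sector 100 first
loses a 796-sector head, and subsequent aligned bios are carved into
896-sector multiples.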


this sounds like chunk_sectors functionality, apart from the "split by a multiple of limits.io_opt" part

Then for a large bio, the aligned split part will cover full stripes on
all data disks, with no extra read-in I/O when rmw_level is 0. For
rmw_level > 0 conditions, the limits.io_opt aligned bios are welcome
for performance as well.

This patch was only tested on an 8-disk raid5 array with 64KiB chunk
size. With this patch, sequential write performance on that array
increases from 900MiB/s to 1.1GiB/s with fio bs=10M. With fio bs=488K
(the exact limits.io_opt size) the peak sequential write throughput
reaches 1.51GiB/s.

Signed-off-by: Coly Li <colyli@xxxxxxxxxx>
---
  drivers/md/md.c    | 51 +++++++++++++++++++++++++++++++++++++++++++++-
  drivers/md/raid5.c |  6 +++++-
  2 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index ac85ec73a409..d0d4d05150fe 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -426,6 +426,55 @@ bool md_handle_request(struct mddev *mddev, struct bio *bio)
  }
  EXPORT_SYMBOL(md_handle_request);
+/**
+ * For a raid456 read/write request, if the bio LBA isn't aligned to
+ * io_opt, split off the non-io_opt-aligned head so that the second
+ * part's LBA is aligned to io_opt. Otherwise still call
+ * bio_split_to_limits() to handle the bio split with queue limits.
+ */
+static struct bio *bio_split_by_io_opt(struct bio *bio)
+{
+	sector_t io_opt_sectors, start, offset;
+	struct queue_limits lim;
+	struct mddev *mddev;
+	struct bio *split;
+	int level;
+
+	mddev = bio->bi_bdev->bd_disk->private_data;
+	level = mddev->level;
+
+	/* Only handle raid456 read/write requests */
+	if (level == 1 || level == 10 || level == 0 || level == LEVEL_LINEAR ||
+	    (bio_op(bio) != REQ_OP_READ && bio_op(bio) != REQ_OP_WRITE))
+		return bio_split_to_limits(bio);

this should be taken outside this function, as we are not splitting to io_opt here

+
+	/* In case raid456 chunk size is too large */
+	lim = mddev->gendisk->queue->limits;
+	io_opt_sectors = lim.io_opt >> SECTOR_SHIFT;
+	if (unlikely(io_opt_sectors > lim.max_hw_sectors))
+		return bio_split_to_limits(bio);
+
+	/* Small request, no need to split */
+	if (bio_sectors(bio) <= io_opt_sectors)
+		return bio;

According to point 1 above, we should split this bio if bio->bi_iter.bi_sector is not aligned, yet here we possibly don't.

+
+	/* Only split the non-io-opt aligned header part */
+	start = bio->bi_iter.bi_sector;
+	offset = sector_div(start, io_opt_sectors);
+	if (offset == 0)
+		return bio_split_to_limits(bio);

this does not seem to match the description in 2, above, where we have "and split is necessary".

+
+	split = bio_split(bio, (io_opt_sectors - offset), GFP_NOIO,
+			  &bio->bi_bdev->bd_disk->bio_split);
+	if (!split)

That check is incorrect; it should be IS_ERR(). This also makes me doubt the earlier handling of "and split is necessary".
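For reference, bio_split() signals failure through the ERR_PTR
convention rather than by returning NULL, which is why the NULL test
above can never fire. A minimal userspace sketch of that convention
(mirroring the helpers in include/linux/err.h, for illustration only):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_ERRNO 4095

/* Userspace copies of the kernel's err.h helpers, illustration only */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* error pointers occupy the top MAX_ERRNO addresses */
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

So the check should read `if (IS_ERR(split))`: a failing bio_split()
returns something like ERR_PTR(-EINVAL), which is non-NULL.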

+		return bio_split_to_limits(bio);
+
+	split->bi_opf |= REQ_NOMERGE;
+	bio_chain(split, bio);
+	submit_bio_noacct(bio);
+	return split;
+}
+
  static void md_submit_bio(struct bio *bio)
  {
  	const int rw = bio_data_dir(bio);
@@ -441,7 +490,7 @@ static void md_submit_bio(struct bio *bio)
  		return;
  	}
-	bio = bio_split_to_limits(bio);
+	bio = bio_split_by_io_opt(bio);
  	if (!bio)
  		return;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 989acd8abd98..985fabeeead5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7759,9 +7759,13 @@ static int raid5_set_limits(struct mddev *mddev)
/*
  	 * Requests require having a bitmap for each stripe.
-	 * Limit the max sectors based on this.
+	 * Limit the max sectors based on this. And being
+	 * aligned to lim.io_opt for better I/O performance.
  	 */
  	lim.max_hw_sectors = RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf);
+	if (lim.max_hw_sectors > lim.io_opt >> SECTOR_SHIFT)
+		lim.max_hw_sectors = rounddown(lim.max_hw_sectors,
+			  lim.io_opt >> SECTOR_SHIFT);
/* No restrictions on the number of segments in the request */
  	lim.max_segments = USHRT_MAX;
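The rounddown() in the hunk above is plain modular arithmetic. A
userspace sketch, where the sector counts are illustrative (assuming an
io_opt of 448KiB = 896 sectors, as on an 8-disk raid5 with 64KiB
chunks):

```c
#include <assert.h>

/* Equivalent of the kernel's rounddown() for unsigned operands */
static unsigned int rounddown_u(unsigned int x, unsigned int y)
{
	return x - (x % y);
}
```

For example, a 2048-sector max_hw_sectors rounds down to 1792 = 2 * 896,
so every maximally-sized request can be carved into whole stripes.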




