[PATCH v24 05/18] blk-mq: Run all hwqs for sq scheds if write pipelining is enabled

One of the optimizations in the block layer is that blk_mq_run_hw_queues()
calls blk_mq_run_hw_queue() for only a single hardware queue, rather than
for all hardware queues, when a single-queue I/O scheduler is attached.
Disable this optimization if ELEVATOR_FLAG_SUPPORTS_ZONED_WRITE_PIPELINING
has been set. This patch prepares for adding write pipelining support in
the mq-deadline I/O scheduler.

Cc: Damien Le Moal <dlemoal@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
---
 block/blk-mq.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e2d3239aacbc..33b639653b5d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2387,8 +2387,7 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 EXPORT_SYMBOL(blk_mq_run_hw_queue);
 
 /*
- * Return prefered queue to dispatch from (if any) for non-mq aware IO
- * scheduler.
+ * Return preferred queue to dispatch from for single-queue IO schedulers.
  */
 static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 {
@@ -2398,6 +2397,11 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 	if (!blk_queue_sq_sched(q))
 		return NULL;
 
+	if (blk_queue_is_zoned(q) && blk_pipeline_zwr(q) &&
+	    test_bit(ELEVATOR_FLAG_SUPPORTS_ZONED_WRITE_PIPELINING,
+		     &q->elevator->flags))
+		return NULL;
+
 	ctx = blk_mq_get_ctx(q);
 	/*
 	 * If the IO scheduler does not respect hardware queues when
