From: Yu Kuai <yukuai3@xxxxxxxxxx>

request_queue->nr_requests can be changed by:

a) switching elevator by updating nr_hw_queues
b) switching elevator by the elevator sysfs attribute
c) configuring the queue sysfs attribute nr_requests

The current lock order is:

1) update_nr_hwq_lock, cases a) and b)
2) freeze_queue
3) elevator_lock, cases a), b) and c)

Updating nr_requests is already serialized by elevator_lock. However, in
case c) new sched_tags have to be allocated if nr_requests grows, and
doing that with elevator_lock held and the queue frozen risks deadlock.
Hence use update_nr_hwq_lock instead, making it possible to allocate
memory when the tags grow, while still preventing nr_requests from being
changed concurrently.

Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
 block/blk-sysfs.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f99519f7a820..7ea15bf68b4b 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -68,13 +68,14 @@ queue_requests_store(struct gendisk *disk, const char *page, size_t count)
 	int ret, err;
 	unsigned int memflags;
 	struct request_queue *q = disk->queue;
+	struct blk_mq_tag_set *set = q->tag_set;
 
 	ret = queue_var_store(&nr, page, count);
 	if (ret < 0)
 		return ret;
 
-	memflags = blk_mq_freeze_queue(q);
-	mutex_lock(&q->elevator_lock);
+	/* serialize updating nr_requests with switching elevator */
+	down_write(&set->update_nr_hwq_lock);
 	if (nr == q->nr_requests)
 		goto unlock;
 
@@ -89,13 +90,18 @@ queue_requests_store(struct gendisk *disk, const char *page, size_t count)
 		goto unlock;
 	}
 
+	memflags = blk_mq_freeze_queue(q);
+	mutex_lock(&q->elevator_lock);
+
 	err = blk_mq_update_nr_requests(disk->queue, nr);
 	if (err)
 		ret = err;
 
-unlock:
 	mutex_unlock(&q->elevator_lock);
 	blk_mq_unfreeze_queue(q, memflags);
+
+unlock:
+	up_write(&set->update_nr_hwq_lock);
 	return ret;
 }
-- 
2.39.2
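
[Editor's note, not part of the patch.] A minimal userspace sketch of the
lock ordering this change is meant to enable (pthreads; all names below
are made up stand-ins for the kernel objects): the potentially sleeping
allocation is done while only the outer update lock is held, and the
inner lock (standing in for freeze_queue plus elevator_lock) is taken
only around the actual switch. The patch itself only moves the
serialization to update_nr_hwq_lock; pulling the allocation out from
under the frozen queue is what that makes possible. Builds with
"cc -pthread sketch.c".

/* Illustrative only, not kernel code; hypothetical userspace names. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_rwlock_t update_lock = PTHREAD_RWLOCK_INITIALIZER; /* ~update_nr_hwq_lock */
static pthread_mutex_t inner_lock = PTHREAD_MUTEX_INITIALIZER;    /* ~freeze + elevator_lock */

static unsigned int nr_requests = 64;
static unsigned int *tags;        /* stands in for sched_tags */

static int update_nr_requests(unsigned int nr)
{
	unsigned int *new_tags = NULL;

	/* Outer lock serializes against the "elevator switch" paths. */
	pthread_rwlock_wrlock(&update_lock);
	if (nr == nr_requests)
		goto unlock;

	/* Allocation may sleep; do it before taking the inner lock. */
	new_tags = calloc(nr, sizeof(*new_tags));
	if (!new_tags) {
		pthread_rwlock_unlock(&update_lock);
		return -1;
	}

	/* Inner lock held only around the actual switch. */
	pthread_mutex_lock(&inner_lock);
	free(tags);
	tags = new_tags;
	nr_requests = nr;
	pthread_mutex_unlock(&inner_lock);
unlock:
	pthread_rwlock_unlock(&update_lock);
	return 0;
}

int main(void)
{
	if (update_nr_requests(128))
		return 1;
	printf("nr_requests=%u\n", nr_requests);
	free(tags);
	return 0;
}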