Hi,
On 2025/08/15 15:59, Ming Lei wrote:
On Fri, Aug 15, 2025 at 09:04:53AM +0800, Yu Kuai wrote:
Hi,
On 2025/08/15 8:13, Ming Lei wrote:
On Thu, Aug 14, 2025 at 08:01:11PM +0530, Nilay Shroff wrote:
On 8/14/25 7:08 PM, Ming Lei wrote:
On Thu, Aug 14, 2025 at 06:27:08PM +0530, Nilay Shroff wrote:
On 8/14/25 6:14 PM, Ming Lei wrote:
On Thu, Aug 14, 2025 at 01:54:59PM +0530, Nilay Shroff wrote:
A recent lockdep [1] splat observed while running blktests block/005
reveals a potential deadlock caused by the cpu_hotplug_lock dependency
on ->freeze_lock. This dependency was introduced by commit 033b667a823e
("block: blk-rq-qos: guard rq-qos helpers by static key").
That change added a static key to avoid fetching q->rq_qos when
neither blk-wbt nor blk-iolatency is configured. The static key
dynamically patches kernel text to a NOP when disabled, eliminating
overhead of fetching q->rq_qos in the I/O hot path. However, enabling
a static key at runtime requires acquiring both cpu_hotplug_lock and
jump_label_mutex. When this happens after the queue has already been
frozen (i.e., while holding ->freeze_lock), it creates a locking
dependency from cpu_hotplug_lock to ->freeze_lock, which leads to a
potential deadlock reported by lockdep [1].
To resolve this, replace the static key mechanism with q->queue_flags:
QUEUE_FLAG_QOS_ENABLED. This flag is evaluated in the fast path before
accessing q->rq_qos. If the flag is set, we proceed to fetch q->rq_qos;
otherwise, the access is skipped.
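As an illustration (a hand-written sketch, not the exact hunk from
blk-rq-qos.h; the real helpers carry a few more checks), the fast-path
wrappers end up shaped roughly like this:

	static inline void rq_qos_done(struct request_queue *q, struct request *rq)
	{
		/* cheap flag test first, only then dereference q->rq_qos */
		if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
			__rq_qos_done(q->rq_qos, rq);
	}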
Since q->queue_flags is commonly accessed in the I/O hot path and resides in
the first cacheline of struct request_queue, checking it imposes minimal
overhead while eliminating the deadlock risk.
This change avoids the lockdep splat without introducing performance
regressions.
[1] https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/
Reported-by: Shinichiro Kawasaki <shinichiro.kawasaki@xxxxxxx>
Closes: https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/
Fixes: 033b667a823e ("block: blk-rq-qos: guard rq-qos helpers by static key")
Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@xxxxxxx>
Signed-off-by: Nilay Shroff <nilay@xxxxxxxxxxxxx>
---
block/blk-mq-debugfs.c | 1 +
block/blk-rq-qos.c | 9 ++++---
block/blk-rq-qos.h | 54 ++++++++++++++++++++++++------------------
include/linux/blkdev.h | 1 +
4 files changed, 37 insertions(+), 28 deletions(-)
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 7ed3e71f2fc0..32c65efdda46 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -95,6 +95,7 @@ static const char *const blk_queue_flag_name[] = {
QUEUE_FLAG_NAME(SQ_SCHED),
QUEUE_FLAG_NAME(DISABLE_WBT_DEF),
QUEUE_FLAG_NAME(NO_ELV_SWITCH),
+ QUEUE_FLAG_NAME(QOS_ENABLED),
};
#undef QUEUE_FLAG_NAME
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index b1e24bb85ad2..654478dfbc20 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -2,8 +2,6 @@
#include "blk-rq-qos.h"
-__read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos);
-
/*
* Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
* false if 'v' + 1 would be bigger than 'below'.
@@ -319,8 +317,8 @@ void rq_qos_exit(struct request_queue *q)
struct rq_qos *rqos = q->rq_qos;
q->rq_qos = rqos->next;
rqos->ops->exit(rqos);
- static_branch_dec(&block_rq_qos);
}
+ blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
mutex_unlock(&q->rq_qos_mutex);
}
@@ -346,7 +344,7 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
goto ebusy;
rqos->next = q->rq_qos;
q->rq_qos = rqos;
- static_branch_inc(&block_rq_qos);
+ blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
One stupid question: can we simply move static_branch_inc(&block_rq_qos)
out of the queue freeze in rq_qos_add()?
What matters is just the 1st static_branch_inc(), which switches the counter
from 0 to 1, since blk_mq_freeze_queue() guarantees that all in-progress code
paths observe q->rq_qos as NULL. That means static_branch_inc(&block_rq_qos)
doesn't need queue freeze protection.
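I.e., roughly the following ordering inside rq_qos_add() (a fragment only,
untested, just to show where the inc would move; the ebusy path would then
need a matching static_branch_dec()):

	static_branch_inc(&block_rq_qos);	/* cpu_hotplug_lock taken while the queue is not frozen */

	memflags = blk_mq_freeze_queue(q);	/* in-flight submitters have observed q->rq_qos == NULL */
	if (rq_qos_id(q, rqos->id))
		goto ebusy;
	rqos->next = q->rq_qos;
	q->rq_qos = rqos;
	blk_mq_unfreeze_queue(q, memflags);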
I thought about it earlier, but that won't work because we have code paths
that freeze the queue before reaching rq_qos_add(). For instance, we invoke
the rq_qos_add() APIs with the queue already frozen from the following code
paths:
ioc_qos_write()
  -> blkg_conf_open_bdev_frozen() => freezes queue
  -> blk_iocost_init()
     -> rq_qos_add()

queue_wb_lat_store() => freezes queue
  -> wbt_init()
     -> rq_qos_add()
The above two shouldn't be hard to solve; for example, add a helper
rq_qos_prep_add() for increasing the static branch counter.
I thought about this; we'd need some return value to know whether the rq_qos
is really added, and I feel the code will become much more complex. We'd need
at least two different APIs: one for the cgroup-based policies
(iocost/iolatency) and one for the pure rq_qos policy (wbt).
Yes, but it's not too bad, something like:
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 5bfd70311359..05b13235ebb3 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -3227,6 +3227,8 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
blkg_conf_init(&ctx, input);
+ rq_qos_prep_add();
+
memflags = blkg_conf_open_bdev_frozen(&ctx);
if (IS_ERR_VALUE(memflags)) {
ret = memflags;
@@ -3344,7 +3346,7 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
if (enable)
wbt_disable_default(disk);
else
- wbt_enable_default(disk);
+ wbt_enable_default(disk, false);
blk_mq_unquiesce_queue(disk->queue);
@@ -3356,6 +3358,7 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
ret = -EINVAL;
err:
blkg_conf_exit_frozen(&ctx, memflags);
+ rq_qos_prep_del();
return ret;
}
This is not enough for iocost:
1) ioc_qos_write() can be called with iocost already registered; we need
to call rq_qos_prep_del() in the success branch as well.
2) ioc_cost_model_write() has to get the same treatment.
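For (1), the balancing would roughly be (sketch only, the exact flow of
ioc_qos_write() may differ):

	if (!ioc) {
		/* newly registers iocost; the reference taken by rq_qos_prep_add()
		 * is then owned by the registered rq_qos until rq_qos_exit() */
		ret = blk_iocost_init(disk);
		if (ret)
			goto err;
	} else {
		/* iocost already registered: the success path has to drop the
		 * extra reference too, not only the err path */
		rq_qos_prep_del();
	}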
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index 848591fb3c57..27047f661e3f 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -319,7 +319,7 @@ void rq_qos_exit(struct request_queue *q)
struct rq_qos *rqos = q->rq_qos;
q->rq_qos = rqos->next;
rqos->ops->exit(rqos);
- static_branch_dec(&block_rq_qos);
+ rq_qos_prep_del();
}
mutex_unlock(&q->rq_qos_mutex);
}
@@ -346,7 +346,6 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
goto ebusy;
rqos->next = q->rq_qos;
q->rq_qos = rqos;
- static_branch_inc(&block_rq_qos);
blk_mq_unfreeze_queue(q, memflags);
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 39749f4066fb..38572a7eb2b7 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -179,4 +179,14 @@ static inline void rq_qos_queue_depth_changed(struct request_queue *q)
void rq_qos_exit(struct request_queue *);
+static inline void rq_qos_prep_add(void)
+{
+ static_branch_inc(&block_rq_qos);
+}
+
+static inline void rq_qos_prep_del(void)
+{
+ static_branch_dec(&block_rq_qos);
+}
+
#endif
I wonder if we can simplify this: we already have a per-queue lock,
rq_qos_mutex, which is held before freezing the queue for iocost and
iolatency, and we can grab it before freezing the queue for wbt as well.
Then we can just do the static_branch_inc/dec with rq_qos_mutex held,
since everything is serialized.
+static inline bool rq_qos_prep_add(struct request_queue *q, enum rq_qos_id id)
+{
+	lockdep_assert_held(&q->rq_qos_mutex);
+
+	if (!rq_qos_id(q, id)) {
+		static_branch_inc(&block_rq_qos);
+		return true;
+	}
+
+	return false;
+}
+
+/* paired with rq_qos_prep_add */
+static inline void rq_qos_prep_del(struct request_queue *q, enum rq_qos_id id,
+				   bool prepared)
+{
+	lockdep_assert_held(&q->rq_qos_mutex);
+
+	if (prepared && !rq_qos_id(q, id))
+		static_branch_dec(&block_rq_qos);
+}
+
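A caller would then pair them under the mutex, roughly like below
(hypothetical sketch, assuming the wbt path is also reworked so the mutex is
taken before the queue is frozen):

	bool prepared;

	mutex_lock(&q->rq_qos_mutex);
	prepared = rq_qos_prep_add(q, RQ_QOS_WBT);	/* 0 -> 1 may patch text; queue not frozen yet */

	/* freeze the queue and run the existing attach path (rq_qos_add()) here */

	/* drops the reference only if we took it and no rq_qos ended up registered */
	rq_qos_prep_del(q, RQ_QOS_WBT, prepared);
	mutex_unlock(&q->rq_qos_mutex);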