Hi,

This patchset replaces the use of a static key in the I/O path
(rq_qos_xxx()) with an atomic queue flag (QUEUE_FLAG_QOS_ENABLED). This
change eliminates a potential deadlock introduced by the use of static
keys in the blk-rq-qos infrastructure, as reported by lockdep during
blktests block/005 [1].

The original static key approach was introduced to avoid unnecessary
dereferencing of q->rq_qos when no blk-rq-qos module (e.g., blk-wbt or
blk-iolatency) is configured. While efficient, enabling a static key at
runtime requires taking cpu_hotplug_lock and jump_label_mutex, which
becomes problematic if the queue is already frozen, creating a reverse
dependency on ->freeze_lock. This results in a lockdep splat indicating
a potential deadlock.

To resolve this, we now gate q->rq_qos access with a q->queue_flags
bitop (QUEUE_FLAG_QOS_ENABLED), avoiding the static key and the
associated locking altogether.

I compared the static key and atomic bitop implementations using the
ftrace function graph tracer over ~50 invocations of rq_qos_issue(),
with blk-wbt/blk-iolatency disabled (i.e., no QoS functionality). For
an easy comparison, I made rq_qos_issue() noinline. The comparison was
made on a PowerPC machine.

Static key disabled (QoS is not configured):

  5d0: 00 00 00 60   nop                # patched in by static key framework
  5d4: 20 00 80 4e   blr                # return (branch to link register)

Only a nop and a blr (branch to link register) are executed, which is
very lightweight.

Atomic bitop (QoS is not configured):

  5d0: 20 00 23 e9   ld r9,32(r3)       # load q->queue_flags
  5d4: 00 80 29 71   andi. r9,r9,32768  # check QUEUE_FLAG_QOS_ENABLED (bit 15)
  5d8: 20 00 82 4d   beqlr              # return if bit not set

This performs an ld and an andi. before returning. Slightly more work,
but q->queue_flags is typically hot in cache during I/O submission.
With static key (disabled):
  Duration (us): min=0.668 max=0.816 avg≈0.750

With atomic bitop QUEUE_FLAG_QOS_ENABLED (bit not set):
  Duration (us): min=0.684 max=0.834 avg≈0.759

As expected, both versions are nearly identical in cost. The added
latency from the extra ld and andi. is on the order of ~9 ns.

There are two patches in the series. The first patch replaces the
static key with QUEUE_FLAG_QOS_ENABLED. The second patch ensures that
we clear QUEUE_FLAG_QOS_ENABLED when the queue no longer has any
associated rq_qos policies.

As usual, feedback and review comments are welcome!

[1] https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/

Changes from v1:
- For debugging I made rq_qos_issue() noinline in my local workspace,
  but it inadvertently slipped into the patchset sent upstream. Reverted
  that, so rq_qos_issue() is inline again as before.
- Added Reported-by and Closes tags to the first patch, which I missed
  in the first version.

Nilay Shroff (2):
  block: avoid cpu_hotplug_lock dependency on freeze_lock
  block: clear QUEUE_FLAG_QOS_ENABLED in rq_qos_del()

 block/blk-mq-debugfs.c |  1 +
 block/blk-rq-qos.c     |  8 ++++----
 block/blk-rq-qos.h     | 43 +++++++++++++++++++++++++-----------------
 include/linux/blkdev.h |  1 +
 4 files changed, 32 insertions(+), 21 deletions(-)

--
2.50.1