On 8/15/25 1:32 PM, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@xxxxxxxxxx>
> 
> In the case where the user triggers tag growth through the queue sysfs
> attribute nr_requests, hctx->sched_tags is freed directly and replaced
> with newly allocated tags, see blk_mq_tag_update_depth().
> 
> The problem is that hctx->sched_tags comes from elevator->et->tags, while
> et->tags still points to the freed tags, hence a later elevator exit will
> try to free the tags again, causing a kernel panic.
> 
> Fix this problem by using newly allocated elevator_tags, and also convert
> blk_mq_update_nr_requests to void since this helper can never fail now.
> 
> Meanwhile, there is a long-standing problem that can be fixed as well:
> 
> If blk_mq_tag_update_depth() succeeds for a previous hctx, the bitmap
> depth is updated; however, if a following hctx fails, q->nr_requests is
> not updated and the previous hctx->sched_tags ends up bigger than
> q->nr_requests.
> 
> Fixes: f5a6604f7a44 ("block: fix lockdep warning caused by lock dependency in elv_iosched_store")
> Fixes: e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")
> Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
> ---
>  block/blk-mq.c    | 19 ++++++-------------
>  block/blk-mq.h    |  4 +++-
>  block/blk-sysfs.c | 21 ++++++++++++++-------
>  3 files changed, 23 insertions(+), 21 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 11c8baebb9a0..e9f037a25fe3 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -4917,12 +4917,12 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
>  }
>  EXPORT_SYMBOL(blk_mq_free_tag_set);
>  
> -int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
> +void blk_mq_update_nr_requests(struct request_queue *q,
> +			       struct elevator_tags *et, unsigned int nr)
>  {
>  	struct blk_mq_tag_set *set = q->tag_set;
>  	struct blk_mq_hw_ctx *hctx;
>  	unsigned long i;
> -	int ret = 0;
>  
>  	blk_mq_quiesce_queue(q);
>  
> @@ -4946,24 +4946,17 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
>  				nr - hctx->sched_tags->nr_reserved_tags);
>  		}
>  	} else {
> -		queue_for_each_hw_ctx(q, hctx, i) {
> -			if (!hctx->tags)
> -				continue;
> -			ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
> -						      nr);
> -			if (ret)
> -				goto out;
> -		}
> +		blk_mq_free_sched_tags(q->elevator->et, set);

I think you also need to ensure that the elevator tags are freed after we
unfreeze the queue and release ->elevator_lock, otherwise we may run into
a lockdep splat for the pcpu_lock dependency on ->freeze_lock and/or
->elevator_lock. Please note that blk_mq_free_sched_tags() internally
invokes sbitmap_free(), which invokes free_percpu(), which acquires
pcpu_lock.
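
Something along these lines is what I have in mind. This is a rough,
untested sketch only, assuming the sysfs handler queue_requests_store()
in block/blk-sysfs.c stays the caller and reusing the
blk_mq_update_nr_requests()/blk_mq_free_sched_tags() signatures from
your patch; the names et/old_et, the simplified "tags were replaced"
condition, and the memflags-style freeze/unfreeze and queue_var_store()
usage are from memory and may not match the tree exactly:

/*
 * Rough sketch only, not the actual patch. The idea is to remember the
 * elevator_tags being replaced and free them only after the queue is
 * unfrozen and ->elevator_lock is released, so that pcpu_lock (taken by
 * free_percpu() via sbitmap_free() inside blk_mq_free_sched_tags()) is
 * never acquired under ->freeze_lock or ->elevator_lock.
 */
static ssize_t queue_requests_store(struct gendisk *disk, const char *page,
				    size_t count)
{
	struct request_queue *q = disk->queue;
	struct elevator_tags *et = NULL;	/* replacement tags */
	struct elevator_tags *old_et = NULL;	/* tags to free after unlock */
	unsigned int memflags;
	unsigned long nr;
	int ret;

	ret = queue_var_store(&nr, page, count);
	if (ret < 0)
		return ret;

	/*
	 * The replacement tags (et) would be allocated here, before the
	 * queue is frozen; allocation and error handling omitted.
	 */

	memflags = blk_mq_freeze_queue(q);
	mutex_lock(&q->elevator_lock);

	if (q->elevator && nr != q->nr_requests) {
		/*
		 * Remember the tags being swapped out, but do not free them
		 * here. (The shared-tagset and shrink cases, where the old
		 * tags are only resized and must not be freed, are glossed
		 * over in this sketch.)
		 */
		old_et = q->elevator->et;
		blk_mq_update_nr_requests(q, et, nr);
	}

	mutex_unlock(&q->elevator_lock);
	blk_mq_unfreeze_queue(q, memflags);

	/* Both locks are dropped, so taking pcpu_lock here is fine. */
	if (old_et)
		blk_mq_free_sched_tags(old_et, q->tag_set);

	return count;
}

Whether the deferred free ends up in the sysfs caller or elsewhere, the
key point is just that blk_mq_free_sched_tags() runs after both
->freeze_lock and ->elevator_lock have been dropped.

Thanks,
--Nilay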