On Thu, 24 Jul 2025 01:36:37 +0500 Mikhail Gavrilov wrote:
> On Wed, Jul 23, 2025 at 6:03 AM Hillf Danton <hdanton@xxxxxxxx> wrote:
> >
> > In order to cure the deadlock, the queue is thawed before switching the
> > elevator, so let's see what comes out with that warning ignored.
> >
> > --- x/block/elevator.c
> > +++ y/block/elevator.c
> > @@ -575,7 +575,6 @@ static int elevator_switch(struct reques
> >  	struct elevator_type *new_e = NULL;
> >  	int ret = 0;
> >
> > -	WARN_ON_ONCE(q->mq_freeze_depth == 0);
> >  	lockdep_assert_held(&q->elevator_lock);
> >
> >  	if (strncmp(ctx->name, "none", 4)) {
> > @@ -661,6 +660,7 @@ static int elevator_change(struct reques
> >  	unsigned int memflags;
> >  	int ret = 0;
> >
> > +	/* updaters should be serialized */
> >  	lockdep_assert_held(&q->tag_set->update_nr_hwq_lock);
> >
> >  	memflags = blk_mq_freeze_queue(q);
> > @@ -674,11 +674,11 @@ static int elevator_change(struct reques
> >  	 * Disk isn't added yet, so verifying queue lock only manually.
> >  	 */
> >  	blk_mq_cancel_work_sync(q);
> > +	blk_mq_unfreeze_queue(q, memflags);
> >  	mutex_lock(&q->elevator_lock);
> >  	if (!(q->elevator && elevator_match(q->elevator->type, ctx->name)))
> >  		ret = elevator_switch(q, ctx);
> >  	mutex_unlock(&q->elevator_lock);
> > -	blk_mq_unfreeze_queue(q, memflags);
> >  	if (!ret)
> >  		ret = elevator_change_done(q, ctx);
> >
> > --
>
> Hi Hillf,
>
> Thanks for the patch.
>
> With this patch applied, I haven't seen either the lockdep warning or
> a soft lockup within 13 hours of runtime. Not sure if that's
> sufficient yet for a final verdict, but it's definitely promising.
>
Thank you for testing it. It works for you so far, but given the "correct"
locking order enforced in ffa1e7ada456, the chance of that order being
reversed, either in Jens' tree or upstream, is not yet zero, and fixing
every single case is not simple either.

Hillf Danton
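
To make the ordering change easier to see, here is a minimal, self-contained
user-space sketch, not kernel code: two plain pthread mutexes stand in for the
queue freeze state and for q->elevator_lock, and both thread functions
(switcher() and other_path()) are invented for illustration. switcher()
follows the order the patch above moves to: do the work that needs a frozen
queue, unfreeze, and only then take the elevator lock. other_path() models a
hypothetical caller that freezes the queue while holding the elevator lock.
Since neither thread ever waits for one lock while holding the other, there is
no AB-BA inversion; with the pre-patch order (elevator lock taken while still
frozen) the two threads could deadlock, which is roughly the shape of the
inversion being discussed in this thread.

/* sketch.c - illustrative only; build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the queue freeze state and q->elevator_lock. */
static pthread_mutex_t freeze   = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t elevator = PTHREAD_MUTEX_INITIALIZER;

/* Patched order: drop the freeze before touching the elevator lock. */
static void *switcher(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&freeze);
	puts("switcher: work that needs a frozen queue");
	pthread_mutex_unlock(&freeze);

	pthread_mutex_lock(&elevator);
	puts("switcher: switching elevator");
	pthread_mutex_unlock(&elevator);
	return NULL;
}

/* Hypothetical path with the opposite nesting: elevator lock, then freeze. */
static void *other_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&elevator);
	pthread_mutex_lock(&freeze);
	puts("other_path: froze queue under the elevator lock");
	pthread_mutex_unlock(&freeze);
	pthread_mutex_unlock(&elevator);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, switcher, NULL);
	pthread_create(&b, NULL, other_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Moving pthread_mutex_unlock(&freeze) in switcher() below the elevator-lock
critical section restores the pre-patch nesting and lets the two threads
deadlock.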