On Mon, Apr 07, 2025 at 01:59:48PM +0530, Nilay Shroff wrote:
> 
> 
> On 4/7/25 8:39 AM, Ming Lei wrote:
> > On Sat, Apr 05, 2025 at 07:44:19PM +0530, Nilay Shroff wrote:
> >>
> >>
> >> On 4/4/25 2:40 PM, Christoph Hellwig wrote:
> >>> On Thu, Apr 03, 2025 at 06:54:02PM +0800, Ming Lei wrote:
> >>>> Fixes the following lockdep warning:
> >>>
> >>> Please spell the actual dependency out here; links are not permanent
> >>> and also not readable for any offline reading of the commit logs.
> >>>
> >>>> +static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
> >>>> +		struct request_queue *q, bool lock)
> >>>> +{
> >>>> +	if (lock) {
> >>>
> >>> bool lock(ed) arguments are an anti-pattern, and regularly get Linus
> >>> screaming at you (in this case even for the right reason :))
> >>>
> >>>> +		/* protect against switching io scheduler */
> >>>> +		mutex_lock(&q->elevator_lock);
> >>>> +		__blk_mq_realloc_hw_ctxs(set, q);
> >>>> +		mutex_unlock(&q->elevator_lock);
> >>>> +	} else {
> >>>> +		__blk_mq_realloc_hw_ctxs(set, q);
> >>>> +	}
> >>>
> >>> I think the problem here is again that, because of all the other
> >>> dependencies, elevator_lock really needs to be per-set instead of
> >>> per-queue, which will allow us to have much saner locking hierarchies.
> >>>
> >> I believe you meant here q->tag_set->elevator_lock?
> > 
> > I don't know what locks you are planning to invent.
> > 
> > For set->tag_list_lock, it has been very fragile:
> > 
> > blk_mq_update_nr_hw_queues
> > 	set->tag_list_lock
> > 		freeze_queue
> > 
> > If an IO failure happens while waiting in the above freeze_queue(), nvme
> > error handling can't make forward progress any more, because the error
> > handling code path requires set->tag_list_lock.
> 
> I think you're referring here to nvme_quiesce_io_queues and nvme_unquiesce_io_queues,

Yes.

> which are called in the nvme error handling path. If yes, then I believe these
> functions could easily be modified so that they don't require ->tag_list_lock.

I'm not sure it is that easy: ->tag_list_lock is exactly what protects the
"set->tag_list" list, and the same list is iterated in
blk_mq_update_nr_hw_queues() too.

> > 
> > So all queues should be frozen first, before calling blk_mq_update_nr_hw_queues;
> > fortunately that is what nvme is doing.
> > 
> > 
> >> If yes then it means that we should be able to grab ->elevator_lock
> >> before freezing the queue in __blk_mq_update_nr_hw_queues, and so the
> >> locking order in each code path should be:
> >>
> >> __blk_mq_update_nr_hw_queues
> >> 	->elevator_lock
> >> 		->freeze_lock
> > 
> > Now tagset->elevator_lock depends on set->tag_list_lock, and this way
> > just makes things worse. Why can't we disable elevator switching while
> > updating nr_hw_queues?
> > 
> I couldn't quite understand this. We already disable the elevator first,
> before updating the sw-to-hw queue mapping in __blk_mq_update_nr_hw_queues().
> Once the mapping is updated successfully, we switch back to the original
> elevator.

Yes, but the user may still switch the elevator from "none" to another
scheduler during that window, right?


thanks,
Ming
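
To spell out the anti-pattern fix Christoph alludes to: the usual idiom is to
split the function into an unlocked __-prefixed helper plus a wrapper that
takes the lock, so every caller's locking context is explicit. A minimal
sketch, reusing the names from the quoted patch (an illustration only, not the
code that was eventually merged):

	/*
	 * Locking wrapper: takes q->elevator_lock itself, then calls the
	 * unlocked helper. No "bool lock" argument needed.
	 */
	static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
			struct request_queue *q)
	{
		/* protect against switching io scheduler */
		mutex_lock(&q->elevator_lock);
		__blk_mq_realloc_hw_ctxs(set, q);
		mutex_unlock(&q->elevator_lock);
	}

Callers that already hold q->elevator_lock would simply call
__blk_mq_realloc_hw_ctxs(set, q) directly, and a
lockdep_assert_held(&q->elevator_lock) at the top of the helper would
document that requirement.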
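
The forward-progress hazard Ming describes, spelled out as a call-graph
sketch (schematic, not verbatim kernel code):

	blk_mq_update_nr_hw_queues()
		mutex_lock(&set->tag_list_lock)
		blk_mq_freeze_queue(q)	/* waits for in-flight IO to drain */

	/* but that IO can only complete once error handling runs: */
	nvme error handling
		nvme_quiesce_io_queues()
			/* walks set->tag_list, so it also needs
			 * set->tag_list_lock and blocks behind the update
			 * above: IO never drains, the freeze never
			 * finishes, and neither side makes progress */

This is why all queues need to be frozen before entering
blk_mq_update_nr_hw_queues(), rather than under ->tag_list_lock.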
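
And the window Ming points out at the end, as a rough interleaving
(schematic; the sysfs path and scheduler name are only for illustration):

	cpu0: __blk_mq_update_nr_hw_queues()
	cpu0:	switch elevator to "none"
	cpu1: write "mq-deadline" to /sys/block/<dev>/queue/scheduler
	cpu1:	elevator switch "none" -> "mq-deadline"	/* races with remap */
	cpu0:	remap sw to hw queues
	cpu0:	restore the previous elevator

Disabling elevator switching for the duration of the update, as Ming
suggests, closes exactly this window.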