On Sat 26-07-25 02:03:34, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@xxxxxxxxxx>
> 
> Currently issue io can grab queue_lock three times from bfq_bio_merge(),
> bfq_limit_depth() and bfq_prepare_request(), the queue_lock is not
> necessary if icq is already created because both queue and ioc can't be
> freed before io issuing is done, hence remove the unnecessary queue_lock
> and use rcu to protect radix tree lookup.
> 
> Noted this is also a prep patch to support request batch dispatching[1].
> 
> [1] https://lore.kernel.org/all/20250722072431.610354-1-yukuai1@xxxxxxxxxxxxxxx/
> Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>

Looks good! Just one small comment below. With that fixed feel free to add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

> diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> index ce82770c72ab..ea9c975aaef7 100644
> --- a/block/blk-ioc.c
> +++ b/block/blk-ioc.c
> @@ -308,19 +308,18 @@ int __copy_io(unsigned long clone_flags, struct task_struct *tsk)
>  
>  #ifdef CONFIG_BLK_ICQ
>  /**
> - * ioc_lookup_icq - lookup io_cq from ioc
> + * ioc_lookup_icq - lookup io_cq from ioc in io issue path
>   * @q: the associated request_queue
>   *
>   * Look up io_cq associated with @ioc - @q pair from @ioc. Must be called
> - * with @q->queue_lock held.
> + * from io issue path, either return NULL if current issue io to @q for the
> + * first time, or return a valid icq.
>   */
>  struct io_cq *ioc_lookup_icq(struct request_queue *q)
>  {
>  	struct io_context *ioc = current->io_context;
>  	struct io_cq *icq;
>  
> -	lockdep_assert_held(&q->queue_lock);
> -
>  	/*
>  	 * icq's are indexed from @ioc using radix tree and hint pointer,
>  	 * both of which are protected with RCU.  All removals are done

In this comment there's still a reference to holding the 'q lock'. I think
you should replace that with a justification for why, when called from the
IO issue path, the found pointer is guaranteed to be safe to use.

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
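
For reference, the lookup tail that the quote above cuts off could end up
looking roughly like this once the assertion is gone and the comment is
reworded along the lines suggested. This is only a sketch based on the
existing icq_hint/icq_tree handling in block/blk-ioc.c, not the actual
patch; the exact comment wording and hint handling are assumptions.

/*
 * Sketch only -- assumes the block/blk-ioc.c context and the existing
 * icq_hint/icq_tree members of struct io_context.
 */
struct io_cq *ioc_lookup_icq(struct request_queue *q)
{
	struct io_context *ioc = current->io_context;
	struct io_cq *icq;

	/*
	 * icq's are indexed from @ioc using radix tree and hint pointer,
	 * both of which are protected with RCU.  In the IO issue path
	 * neither @q nor current's io_context can be freed before the IO
	 * is done, so an icq found under rcu_read_lock() that points back
	 * to @q stays valid for this request without taking queue_lock.
	 */
	rcu_read_lock();
	icq = rcu_dereference(ioc->icq_hint);
	if (icq && icq->q == q)
		goto out;

	icq = radix_tree_lookup(&ioc->icq_tree, q->id);
	if (icq && icq->q == q)
		rcu_assign_pointer(ioc->icq_hint, icq);	/* allowed to race */
	else
		icq = NULL;
out:
	rcu_read_unlock();
	return icq;
}

The hint is only an optimization here: a racy hint update is harmless
because the radix tree lookup keyed by q->id stays the authoritative path.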