On Wed, Apr 23, 2025 at 07:52:19AM -0700, Caleb Sander Mateos wrote:
> On Wed, Apr 23, 2025 at 7:44 AM Caleb Sander Mateos
> <csander@xxxxxxxxxxxxxxx> wrote:
> >
> > On Wed, Apr 23, 2025 at 2:24 AM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> > >
> > > The in-tree code calls io_uring_cmd_complete_in_task() to schedule
> > > task_work for dispatching this request to handle
> > > UBLK_U_IO_NEED_GET_DATA.
> > >
> > > This is not necessary because the current context is exactly the
> > > ublk queue context, so call ublk_dispatch_req() directly to handle
> > > UBLK_U_IO_NEED_GET_DATA.
> >
> > Indeed, I was planning to make the same change!
> >
> > >
> > > Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> > > ---
> > >  drivers/block/ublk_drv.c | 14 +++-----------
> > >  1 file changed, 3 insertions(+), 11 deletions(-)
> > >
> > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > > index 2de7b2bd409d..c4d4be4f6fbd 100644
> > > --- a/drivers/block/ublk_drv.c
> > > +++ b/drivers/block/ublk_drv.c
> > > @@ -1886,15 +1886,6 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
> > >  	}
> > >  }
> > >
> > > -static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
> > > -		int tag)
> > > -{
> > > -	struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
> > > -	struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
> > > -
> > > -	ublk_queue_cmd(ubq, req);
> > > -}
> >
> > Looks like this will conflict with Uday's patch:
> > https://lore.kernel.org/linux-block/20250421-ublk_constify-v1-3-3371f9e9f73c@xxxxxxxxxxxxxxx/.
> > Since that series already has reviews, I expect it will land first.
> >
> > > -
> > >  static inline int ublk_check_cmd_op(u32 cmd_op)
> > >  {
> > >  	u32 ioc_type = _IOC_TYPE(cmd_op);
> > > @@ -2103,8 +2094,9 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
> > >  		if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
> > >  			goto out;
> > >  		ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
> > > -		ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag);
> > > -		break;
> > > +		req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
> > > +		ublk_dispatch_req(ubq, req, issue_flags);
> >
> > Maybe it would make sense to factor the UBLK_IO_NEED_GET_DATA handling
> > out of ublk_dispatch_req()? Then ublk_dispatch_req() (called only for
> > incoming ublk requests) could assume the UBLK_IO_FLAG_NEED_GET_DATA
> > flag is not yet set, and this path wouldn't need to pay the cost of
> > re-checking current != ubq->ubq_daemon, ublk_need_get_data(ubq) &&
> > ublk_need_map_req(req), etc.
> >
> > > +		return -EIOCBQUEUED;
> >
> > It's probably possible to return the result here synchronously to
> > avoid the small overhead of io_uring_cmd_done(). That may be easier to
> > do if the UBLK_IO_NEED_GET_DATA path is separated from
> > ublk_dispatch_req().
>
> And if we can avoid using io_uring_cmd_done(), calling
> ublk_fill_io_cmd() for UBLK_IO_NEED_GET_DATA would no longer be
> necessary. (This was my original motivation to handle
> UBLK_IO_NEED_GET_DATA synchronously; UBLK_IO_NEED_GET_DATA overwriting
> io->cmd is an obstacle to introducing a struct request * field that
> aliases io->cmd.)

All your comments are reasonable. Here I just want to keep things
simple for backporting purposes; we can clean them all up in one
follow-up.

Thanks,
Ming
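
For reference, the factoring Caleb outlines above might look roughly
like the sketch below. The helper name ublk_get_data() is hypothetical,
and short-mapping and reference-count handling are elided; treat this as
illustrative rather than a tested patch:

    /*
     * Hypothetical helper, factored out of ublk_dispatch_req(): finish
     * the UBLK_IO_NEED_GET_DATA dance by copying the WRITE payload into
     * the buffer the server just supplied.  It runs in the ubq daemon
     * context by definition, so none of ublk_dispatch_req()'s re-checks
     * (current != ubq->ubq_daemon, ublk_need_get_data(), ...) apply.
     */
    static void ublk_get_data(struct ublk_queue *ubq, struct ublk_io *io,
                              struct request *req)
    {
        io->flags &= ~UBLK_IO_FLAG_NEED_GET_DATA;
        /* ublksrv may have passed a new buffer for this tag */
        ublk_get_iod(ubq, req->tag)->addr = io->addr;
        ublk_map_io(ubq, req, io);
    }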
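
The UBLK_IO_NEED_GET_DATA branch in __ublk_ch_uring_cmd() could then
skip ublk_fill_io_cmd() and complete the command inline, along the lines
of the following (again only an unverified sketch; error handling and
flag bookkeeping would need a closer look):

    case UBLK_IO_NEED_GET_DATA:
        if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
            goto out;
        /*
         * No ublk_fill_io_cmd(): io->cmd is left untouched, only the
         * freshly supplied buffer address is recorded.  The io is
         * already owned by the server, so its flags stay as they are.
         */
        io->addr = ub_cmd->addr;
        req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
        ublk_get_data(ubq, io, req);
        return UBLK_IO_RES_OK;	/* complete the uring_cmd inline */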