Re: [PATCH 1/2] ublk: build per-io-ring-ctx batch list

On Mon, Jun 23, 2025 at 10:51:00AM -0700, Caleb Sander Mateos wrote:
> On Sun, Jun 22, 2025 at 6:19 PM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> >
> > ublk_queue_cmd_list() dispatches the whole batch list by scheduling task
> > work via the tail request's io_uring_cmd. This is fine even when more
> > than one io_ring_ctx is involved in the batch, since there is still just
> > one running context.
> >
> > However, the task work handler ublk_cmd_list_tw_cb() uses the
> > `issue_flags` of the tail uring_cmd's io_ring_ctx for completing all
> > commands, which is wrong if any uring_cmd was issued from a different
> > io_ring_ctx.
> >
> > Fix it by always building a per-io-ring-ctx batch list.
> >
> > For a typical per-queue or per-io daemon implementation, this shouldn't
> > make a difference from a performance viewpoint, because each daemon
> > usually uses a single io_ring_ctx.
> >
> > Fixes: d796cea7b9f3 ("ublk: implement ->queue_rqs()")
> > Fixes: ab03a61c6614 ("ublk: have a per-io daemon instead of a per-queue daemon")
> > Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> > ---
> >  drivers/block/ublk_drv.c | 17 +++++++++--------
> >  1 file changed, 9 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index c637ea010d34..e79b04e61047 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -1336,9 +1336,8 @@ static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd,
> >         } while (rq);
> >  }
> >
> > -static void ublk_queue_cmd_list(struct ublk_io *io, struct rq_list *l)
> > +static void ublk_queue_cmd_list(struct io_uring_cmd *cmd, struct rq_list *l)
> >  {
> > -       struct io_uring_cmd *cmd = io->cmd;
> >         struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
> >
> >         pdu->req_list = rq_list_peek(l);
> > @@ -1420,16 +1419,18 @@ static void ublk_queue_rqs(struct rq_list *rqlist)
> >  {
> >         struct rq_list requeue_list = { };
> >         struct rq_list submit_list = { };
> > -       struct ublk_io *io = NULL;
> > +       struct io_uring_cmd *cmd = NULL;
> >         struct request *req;
> >
> >         while ((req = rq_list_pop(rqlist))) {
> >                 struct ublk_queue *this_q = req->mq_hctx->driver_data;
> > -               struct ublk_io *this_io = &this_q->ios[req->tag];
> > +               struct io_uring_cmd *this_cmd = this_q->ios[req->tag].cmd;
> >
> > -               if (io && io->task != this_io->task && !rq_list_empty(&submit_list))
> > -                       ublk_queue_cmd_list(io, &submit_list);
> > -               io = this_io;
> > +               if (cmd && io_uring_cmd_ctx_handle(cmd) !=
> > +                               io_uring_cmd_ctx_handle(this_cmd) &&
> > +                               !rq_list_empty(&submit_list))
> > +                       ublk_queue_cmd_list(cmd, &submit_list);
> 
> I don't think we can assume that ublk commands submitted to the same
> io_uring have the same daemon task. It's possible for multiple tasks
> to submit to the same io_uring, even though that's not a common or
> performant way to use io_uring. Probably we need to check that both
> the task and io_ring_ctx match.
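
Concretely, checking both would mean keeping the old task comparison next
to the new ctx-handle one, something like (untested sketch against the
hunk above, reusing the pre-patch `io`/`this_io` tracking):

	struct ublk_io *this_io = &this_q->ios[req->tag];

	/* flush when either the daemon task or the io_ring_ctx changes */
	if (io && (io->task != this_io->task ||
		   io_uring_cmd_ctx_handle(io->cmd) !=
		   io_uring_cmd_ctx_handle(this_io->cmd)) &&
	    !rq_list_empty(&submit_list))
		ublk_queue_cmd_list(io->cmd, &submit_list);
	io = this_io;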

Here the problem is the 'issue_flags' passed from io_uring, especially
for grabbing the io_ring_ctx lock.

If two uring_cmds are issued via the same io_ring_ctx from two tasks, it
is fine to share the 'issue_flags' of one of the tasks; what matters is
that the io_ring_ctx lock is handled correctly when calling
io_uring_cmd_done().
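
To make the per-ctx batching concrete, here is a stand-alone userspace
sketch of the same flush-on-key-change loop (hypothetical `struct req`
and `flush_batch()`, not the driver code; the driver keys on
io_uring_cmd_ctx_handle() instead of a raw pointer):

	#include <stdio.h>
	#include <stddef.h>

	/* hypothetical stand-ins for struct request / io_ring_ctx */
	struct req {
		struct req *next;
		void *ctx;	/* opaque handle of the issuing ring */
		int tag;
	};

	/* stands in for ublk_queue_cmd_list() scheduling task work */
	static void flush_batch(struct req *head, struct req *tail)
	{
		printf("batch for ctx %p:", head->ctx);
		for (struct req *r = head; r; r = (r == tail) ? NULL : r->next)
			printf(" %d", r->tag);
		printf("\n");
	}

	int main(void)
	{
		int ctx_a, ctx_b;
		struct req reqs[] = {
			{ NULL, &ctx_a, 0 }, { NULL, &ctx_a, 1 },
			{ NULL, &ctx_b, 2 }, { NULL, &ctx_a, 3 },
		};
		struct req *head = NULL, *tail = NULL;

		for (size_t i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++) {
			struct req *r = &reqs[i];

			/* ctx changed: flush the batch built so far */
			if (tail && tail->ctx != r->ctx) {
				flush_batch(head, tail);
				head = NULL;
			}
			if (!head)
				head = r;
			else
				tail->next = r;
			tail = r;
		}
		if (head)
			flush_batch(head, tail);
		return 0;
	}

This produces three batches ({0, 1}, {2}, {3}): requests from one ring
never end up in another ring's batch even when they are interleaved in
the original list.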



Thanks,
Ming




