Re: [PATCH v4 2/4] ublk: mark ublk_queue as const for ublk_commit_and_fetch

On Wed, Apr 16, 2025 at 11:59:25AM -0700, Caleb Sander Mateos wrote:
> On Tue, Apr 15, 2025 at 6:00 PM Uday Shankar <ushankar@xxxxxxxxxxxxxxx> wrote:
> >
> > We now allow multiple tasks to operate on I/Os belonging to the same
> > queue concurrently. This means that any writes to ublk_queue in the I/O
> > path are potential sources of data races. Try to prevent these by
> > marking ublk_queue pointers as const when handling COMMIT_AND_FETCH.
> > Move the logic for this command into its own function
> > ublk_commit_and_fetch. Also open code ublk_commit_completion in
> > ublk_commit_and_fetch to reduce the number of parameters/avoid a
> > redundant lookup.
> >
> > Suggested-by: Ming Lei <ming.lei@xxxxxxxxxx>
> > Signed-off-by: Uday Shankar <ushankar@xxxxxxxxxxxxxxx>
> > ---
> >  drivers/block/ublk_drv.c | 91 +++++++++++++++++++++++-------------------------
> >  1 file changed, 43 insertions(+), 48 deletions(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index 9a0d2547512fc8119460739230599d48d2c2a306..153f67d92248ad45bddd2437b1306bb23df7d1ae 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -1518,30 +1518,6 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
> >         return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
> >  }
> >
> > -static void ublk_commit_completion(struct ublk_device *ub,
> > -               const struct ublksrv_io_cmd *ub_cmd)
> > -{
> > -       u32 qid = ub_cmd->q_id, tag = ub_cmd->tag;
> > -       struct ublk_queue *ubq = ublk_get_queue(ub, qid);
> > -       struct ublk_io *io = &ubq->ios[tag];
> > -       struct request *req;
> > -
> > -       /* now this cmd slot is owned by nbd driver */
> > -       io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV;
> > -       io->res = ub_cmd->result;
> > -
> > -       /* find the io request and complete */
> > -       req = blk_mq_tag_to_rq(ub->tag_set.tags[qid], tag);
> > -       if (WARN_ON_ONCE(unlikely(!req)))
> > -               return;
> > -
> > -       if (req_op(req) == REQ_OP_ZONE_APPEND)
> > -               req->__sector = ub_cmd->zone_append_lba;
> > -
> > -       if (likely(!blk_should_fake_timeout(req->q)))
> > -               ublk_put_req_ref(ubq, req);
> > -}
> > -
> >  /*
> >   * Called from io task context via cancel fn, meantime quiesce ublk
> >   * blk-mq queue, so we are called exclusively with blk-mq and io task
> > @@ -1918,6 +1894,45 @@ static int ublk_unregister_io_buf(struct io_uring_cmd *cmd,
> >         return io_buffer_unregister_bvec(cmd, index, issue_flags);
> >  }
> >
> > +static int ublk_commit_and_fetch(const struct ublk_queue *ubq,
> > +                                struct ublk_io *io, struct io_uring_cmd *cmd,
> > +                                const struct ublksrv_io_cmd *ub_cmd,
> > +                                struct request *req)
> > +{
> > +       if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
> > +               return -EINVAL;
> > +
> > +       if (ublk_need_map_io(ubq)) {
> > +               /*
> > +                * COMMIT_AND_FETCH_REQ has to provide IO buffer if
> > +                * NEED GET DATA is not enabled or it is Read IO.
> > +                */
> > +               if (!ub_cmd->addr && (!ublk_need_get_data(ubq) ||
> > +                                       req_op(req) == REQ_OP_READ))
> > +                       return -EINVAL;
> > +       } else if (req_op(req) != REQ_OP_ZONE_APPEND && ub_cmd->addr) {
> > +               /*
> > +                * User copy requires addr to be unset when command is
> > +                * not zone append
> > +                */
> > +               return -EINVAL;
> > +       }
> > +
> > +       ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
> > +
> > +       /* now this cmd slot is owned by ublk driver */
> > +       io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV;
> > +       io->res = ub_cmd->result;
> > +
> > +       if (req_op(req) == REQ_OP_ZONE_APPEND)
> > +               req->__sector = ub_cmd->zone_append_lba;
> > +
> > +       if (likely(!blk_should_fake_timeout(req->q)))
> > +               ublk_put_req_ref(ubq, req);
> > +
> > +       return -EIOCBQUEUED;
> 
> I think it would be clearer to just return 0. __ublk_ch_uring_cmd()
> already takes care of returning -EIOCBQUEUED in the successful case.

Sounds good - your recommendation is also in line with the convention
Ming is using in
https://lore.kernel.org/linux-block/20250416035444.99569-2-ming.lei@xxxxxxxxxx/
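For concreteness, a rough sketch of the resulting shape (the
__ublk_ch_uring_cmd dispatch below is paraphrased rather than quoted
from the driver, so the exact cases and labels may differ):

	static int ublk_commit_and_fetch(const struct ublk_queue *ubq,
					 struct ublk_io *io,
					 struct io_uring_cmd *cmd,
					 const struct ublksrv_io_cmd *ub_cmd,
					 struct request *req)
	{
		/* ... validation and commit logic unchanged from the patch ... */

		if (likely(!blk_should_fake_timeout(req->q)))
			ublk_put_req_ref(ubq, req);

		/* success: let the caller report -EIOCBQUEUED */
		return 0;
	}

	/* in __ublk_ch_uring_cmd(), paraphrased: */
	case UBLK_IO_COMMIT_AND_FETCH_REQ:
		ret = ublk_commit_and_fetch(ubq, io, cmd, ub_cmd, req);
		if (ret)
			goto out;	/* deliver the error via the cqe */
		break;
	...
	return -EIOCBQUEUED;	/* single success return for all commands */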
