On Tue, Sep 02, 2025 at 09:42:37PM -0700, Caleb Sander Mateos wrote:
> On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> >
> > Add helper __ublk_fetch() for the coming batch io feature.
> >
> > Meantime move ublk_config_io_buf() out of __ublk_fetch() because batch
> > io has new interface for configuring buffer.
> >
> > Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> > ---
> >  drivers/block/ublk_drv.c | 31 ++++++++++++++++++++-----------
> >  1 file changed, 20 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index e53f623b0efe..f265795a8d57 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -2206,18 +2206,12 @@ static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
> >  	return 0;
> >  }
> >
> > -static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> > -		      struct ublk_io *io, __u64 buf_addr)
> > +static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> > +			struct ublk_io *io)
> >  {
> >  	struct ublk_device *ub = ubq->dev;
> >  	int ret = 0;
> >
> > -	/*
> > -	 * When handling FETCH command for setting up ublk uring queue,
> > -	 * ub->mutex is the innermost lock, and we won't block for handling
> > -	 * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
> > -	 */
> > -	mutex_lock(&ub->mutex);
> >  	/* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
> >  	if (ublk_queue_ready(ubq)) {
> >  		ret = -EBUSY;
> > @@ -2233,13 +2227,28 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> >  	WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV);
> >
> >  	ublk_fill_io_cmd(io, cmd);
> > -	ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
> > -	if (ret)
> > -		goto out;
> >
> >  	WRITE_ONCE(io->task, get_task_struct(current));
> >  	ublk_mark_io_ready(ub, ubq);
> >  out:
> > +	return ret;
>
> If the out: section no longer releases any resources, can we replace
> the "goto out" with just "return ret"?

OK.

>
> > +}
> > +
> > +static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> > +		      struct ublk_io *io, __u64 buf_addr)
> > +{
> > +	struct ublk_device *ub = ubq->dev;
> > +	int ret;
> > +
> > +	/*
> > +	 * When handling FETCH command for setting up ublk uring queue,
> > +	 * ub->mutex is the innermost lock, and we won't block for handling
> > +	 * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
> > +	 */
> > +	mutex_lock(&ub->mutex);
> > +	ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
> > +	if (!ret)
> > +		ret = __ublk_fetch(cmd, ubq, io);
>
> How come the order of operations was switched here? ublk_fetch()
> previously checked ublk_queue_ready(ubq) and io->flags &
> UBLK_IO_FLAG_ACTIVE first, which seems necessary to prevent
> overwriting a ublk_io that has already been fetched.

Good point, that is actually what ublk_batch_prep_io() is doing: commit
the buffer descriptor into io slot only after __ublk_fetch() runs
successfully.

I will fix the order.

Thanks,
Ming
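
[Editor's note: the reordering Ming agrees to above could look roughly like the sketch below; run __ublk_fetch() first so its ublk_queue_ready() and UBLK_IO_FLAG_ACTIVE checks reject an already-fetched io before the buffer is committed. This is an illustrative sketch against the quoted patch, not the actual follow-up; the locking comment from the quoted hunk is omitted for brevity.]

```c
/*
 * Sketch only, not the real follow-up patch: validate the io slot in
 * __ublk_fetch() first, and only commit the buffer descriptor via
 * ublk_config_io_buf() once that validation has succeeded, so an
 * already-fetched ublk_io is never overwritten.
 */
static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
		      struct ublk_io *io, __u64 buf_addr)
{
	struct ublk_device *ub = ubq->dev;
	int ret;

	mutex_lock(&ub->mutex);
	ret = __ublk_fetch(cmd, ubq, io);
	if (!ret)
		ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
	mutex_unlock(&ub->mutex);
	return ret;
}
```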