On Mon, Apr 07, 2025 at 08:02:24AM -0700, Caleb Sander Mateos wrote:
> On Mon, Apr 7, 2025 at 6:15 AM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> >
> > When one request buffer is leased to io_uring via
> > io_buffer_register_bvec(), io_uring guarantees that the buffer will
> > be returned. However, ublk aborts requests when the io_uring context
> > is exiting, so ublk_io_release() may observe a freed request, and a
> > kernel panic is triggered.
>
> Not sure I follow how the request can be freed while its buffer is
> still registered with io_uring. It looks like __ublk_fail_req()
> decrements the ublk request's reference count (ublk_put_req_ref()) and
> the reference count shouldn't hit 0 if the io_uring registered buffer
> is still holding a reference. Is the problem the if
> (ublk_nosrv_should_reissue_outstanding()) case, which calls
> blk_mq_requeue_request() without checking the reference count?

Yeah, that is the problem: the request can be failed immediately after
being requeued and re-dispatched, which then triggers the panic. I
verified that the following patch does fix it:

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 2fd05c1bd30b..41bed67508f2 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1140,6 +1140,25 @@ static void ublk_complete_rq(struct kref *ref)
 	__ublk_complete_rq(req);
 }
 
+static void ublk_do_fail_rq(struct request *req)
+{
+	struct ublk_queue *ubq = req->mq_hctx->driver_data;
+
+	if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
+		blk_mq_requeue_request(req, false);
+	else
+		__ublk_complete_rq(req);
+}
+
+static void ublk_fail_rq_fn(struct kref *ref)
+{
+	struct ublk_rq_data *data = container_of(ref, struct ublk_rq_data,
+			ref);
+	struct request *req = blk_mq_rq_from_pdu(data);
+
+	ublk_do_fail_rq(req);
+}
+
 /*
  * Since ublk_rq_task_work_cb always fails requests immediately during
  * exiting, __ublk_fail_req() is only called from abort context during
@@ -1153,10 +1172,13 @@ static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
 {
 	WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
 
-	if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
-		blk_mq_requeue_request(req, false);
-	else
-		ublk_put_req_ref(ubq, req);
+	if (ublk_need_req_ref(ubq)) {
+		struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+
+		kref_put(&data->ref, ublk_fail_rq_fn);
+	} else {
+		ublk_do_fail_rq(req);
+	}
 }
 
 static void ubq_complete_io_cmd(struct ublk_io *io, int res,

Thanks,
Ming