On Thu, Apr 24, 2025 at 12:07:32PM -0700, Caleb Sander Mateos wrote:
> On Thu, Apr 24, 2025 at 11:58 AM Ofer Oshri <ofer@xxxxxxxxxx> wrote:
> >
> > ________________________________
> > From: Caleb Sander Mateos <csander@xxxxxxxxxxxxxxx>
> > Sent: Thursday, April 24, 2025 9:28 PM
> > To: Ofer Oshri <ofer@xxxxxxxxxx>
> > Cc: linux-block@xxxxxxxxxxxxxxx <linux-block@xxxxxxxxxxxxxxx>; ming.lei@xxxxxxxxxx <ming.lei@xxxxxxxxxx>; axboe@xxxxxxxxx <axboe@xxxxxxxxx>; Jared Holzman <jholzman@xxxxxxxxxx>; Yoav Cohen <yoav@xxxxxxxxxx>; Guy Eisenberg <geisenberg@xxxxxxxxxx>; Omri Levi <omril@xxxxxxxxxx>
> > Subject: Re: ublk: RFC fetch_req_multishot
> >
> > External email: Use caution opening links or attachments
> >
> > On Thu, Apr 24, 2025 at 11:19 AM Ofer Oshri <ofer@xxxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > Our code uses a single io_uring per core, which is shared among all
> > > block devices - meaning each block device on a core uses the same
> > > io_uring.
> > >
> > > Let’s say the size of the io_uring is N. Each block device submits M
> > > UBLK_U_IO_FETCH_REQ requests. As a result, with the current
> > > implementation, we can only support up to P block devices, where
> > > P = N / M. This means that when we attempt to support block device
> > > P+1, it will fail due to io_uring exhaustion.
> >
> > What do you mean by "size of the io_uring", the submission queue size?
> > Why can't you submit all P * M UBLK_U_IO_FETCH_REQ operations in
> > batches of N?
> >
> > Best,
> > Caleb
> >
> > N is the size of the submission queue, and P is not fixed and unknown
> > at the time of ring initialization....
>
> I don't think it matters whether P (the number of ublk devices) is
> known ahead of time or changes dynamically. My point is that you can
> submit the UBLK_U_IO_FETCH_REQ operations in batches of N to avoid
> exceeding the io_uring SQ depth. (If there are other operations
> potentially interleaved with the UBLK_U_IO_FETCH_REQ ones, then just
> submit each time the io_uring SQ fills up.)
> Any values of P, M, and N should work. Perhaps I'm misunderstanding
> you, because I don't know what "io_uring exhaustion" refers to.
>
> Multishot ublk io_uring operations don't seem like a trivial feature
> to implement. Currently, incoming ublk requests are posted to the ublk
> server using io_uring's "task work" mechanism, which inserts the
> io_uring operation into an intrusive linked list. If you wanted a
> single ublk io_uring operation to post multiple completions, it would
> need to allocate some structure for each incoming request to insert
> into the task work list. There is also an assumption that the ublk
> io_uring operations correspond 1-1 with the blk-mq requests for the
> ublk device, which would be broken by multishot ublk io_uring
> operations.

For delivering ublk io commands to the ublk server, I feel multishot
could be used in the following way:

- use IORING_OP_READ_MULTISHOT to read from the ublk char device, one
  operation per queue; the queue id may be passed via the offset
- block in ublk_ch_read_iter() if nothing arrives on this queue of the
  ublk block device
- when a ublk block io arrives, fill `ublksrv_io_desc` in the mmapped
  area, and push the 'tag' into the read ring buffer (provided buffer)
- wake up the read IO after one whole IO batch is done

For committing the ublk io command result back to the ublk driver, it
can work similarly to delivery: write the 'tag' to the ublk char device
via IORING_OP_WRITE_FIXED or IORING_OP_WRITE, again per queue via the
ring-buffer approach, but one mmapped buffer is needed for storing the
io command results; 4 bytes should be enough for each io.

With the above approach:

- read/write delivers io commands and commits io command results, so a
  single read/write replaces a whole batch of uring_cmds
- uring_cmd is no longer needed, so the big security_uring_cmd() cost
  can be avoided
- memory footprint is reduced a lot: no extra uring_cmd for each IO
- extra task work scheduling is avoided
- probably io_uring exit handling can be simplified too
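From the userspace side, the delivery path described above could be sketched with liburing roughly as below. To be clear, this is speculative: the char-device multishot read semantics, the `/dev/ublkc0` read interface, the queue-id-in-offset encoding, and the 4-byte tag records are all assumptions taken from the proposal, not an existing kernel ABI. The liburing calls themselves (provided-buffer rings and `io_uring_prep_read_multishot()`, liburing >= 2.6, kernel >= 6.7) are real.

```c
/* Speculative userspace sketch of the proposed multishot delivery path.
 * The ublk char-device read interface assumed here does not exist yet:
 * queue id encoded in the read offset, 4-byte tags posted through a
 * provided-buffer ring. Not runnable against current kernels. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_BUF_GROUP 0
#define TAG_BUF_COUNT 128   /* must be a power of two */
#define TAG_SIZE      4     /* one 4-byte tag per posted completion */

int main(void)
{
    struct io_uring ring;
    struct io_uring_buf_ring *br;
    unsigned char *tag_mem;
    int ret, i;

    if (io_uring_queue_init(64, &ring, 0) < 0)
        return 1;

    /* Provided-buffer ring: the kernel picks one small buffer per
     * posted tag, so a single multishot read can complete many times. */
    br = io_uring_setup_buf_ring(&ring, TAG_BUF_COUNT, TAG_BUF_GROUP, 0, &ret);
    if (!br)
        return 1;
    tag_mem = malloc(TAG_BUF_COUNT * TAG_SIZE);
    for (i = 0; i < TAG_BUF_COUNT; i++)
        io_uring_buf_ring_add(br, tag_mem + i * TAG_SIZE, TAG_SIZE, i,
                              io_uring_buf_ring_mask(TAG_BUF_COUNT), i);
    io_uring_buf_ring_advance(br, TAG_BUF_COUNT);

    int fd = open("/dev/ublkc0", O_RDWR);    /* hypothetical device */
    if (fd < 0)
        return 1;

    /* One multishot read per ublk queue; passing the queue id via the
     * offset is an assumption from the proposal, not an existing ABI. */
    for (unsigned q = 0; q < 2; q++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read_multishot(sqe, fd, TAG_SIZE, q, TAG_BUF_GROUP);
        sqe->user_data = q;
    }
    io_uring_submit(&ring);

    /* Each CQE carries one tag in a provided buffer; while
     * IORING_CQE_F_MORE is set, the same SQE keeps firing. */
    struct io_uring_cqe *cqe;
    while (io_uring_wait_cqe(&ring, &cqe) == 0) {
        if (cqe->res >= 0 && (cqe->flags & IORING_CQE_F_BUFFER)) {
            int bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
            unsigned tag = *(unsigned *)(tag_mem + bid * TAG_SIZE);
            printf("queue %llu: tag %u\n",
                   (unsigned long long)cqe->user_data, tag);
            /* ... look up ublksrv_io_desc[tag] in the mmapped area,
             * service the io, then recycle the buffer ... */
            io_uring_buf_ring_add(br, tag_mem + bid * TAG_SIZE, TAG_SIZE,
                                  bid, io_uring_buf_ring_mask(TAG_BUF_COUNT),
                                  0);
            io_uring_buf_ring_advance(br, 1);
        }
        io_uring_cqe_seen(&ring, cqe);
    }
    return 0;
}
```

The commit side would be symmetric: batch completed tags into the mmapped result buffer and push them back with a single `io_uring_prep_write_fixed()` (or plain write) per queue, replacing one UBLK_U_IO_COMMIT_AND_FETCH_REQ uring_cmd per io.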
Sounds like a ublk 2.0 prototype, :-)

Thanks,
Ming