Re: ublk: RFC fetch_req_multishot

On Thu, Apr 24, 2025 at 2:07 PM Jared Holzman <jholzman@xxxxxxxxxx> wrote:
>
> On 24/04/2025 22:07, Caleb Sander Mateos wrote:
> > On Thu, Apr 24, 2025 at 11:58 AM Ofer Oshri <ofer@xxxxxxxxxx> wrote:
> >>
> >> On Thu, Apr 24, 2025 at 11:19 AM Ofer Oshri <ofer@xxxxxxxxxx> wrote:
> >>>
> >>> Hi,
> >>>
> >>> Our code uses a single io_uring per core, shared among all block devices on that core - every block device on a core submits to the same io_uring.
> >>>
> >>> Let’s say the size of the io_uring is N. Each block device submits M UBLK_U_IO_FETCH_REQ requests. As a result, with the current implementation, we can only support up to P block devices, where P = N / M. This means that when we attempt to support block device P+1, it will fail due to io_uring exhaustion.
> >>
> >> What do you mean by "size of the io_uring", the submission queue size?
> >> Why can't you submit all P * M UBLK_U_IO_FETCH_REQ operations in
> >> batches of N?
> >>
> >> Best,
> >> Caleb
> >>
> >> N is the size of the submission queue, and P is not fixed and is unknown at the time of ring initialization.
> >
> > I don't think it matters whether P (the number of ublk devices) is
> > known ahead of time or changes dynamically. My point is that you can
> > submit the UBLK_U_IO_FETCH_REQ operations in batches of N to avoid
> > exceeding the io_uring SQ depth. (If there are other operations
> > potentially interleaved with the UBLK_U_IO_FETCH_REQ ones, then just
> > submit each time the io_uring SQ fills up.) Any values of P, M, and N
> > should work. Perhaps I'm misunderstanding you, because I don't know
> > what "io_uring exhaustion" refers to.
> >
> > Multishot ublk io_uring operations don't seem like a trivial feature
> > to implement. Currently, incoming ublk requests are posted to the ublk
> > server using io_uring's "task work" mechanism, which inserts the
> > io_uring operation into an intrusive linked list. If you wanted a
> > single ublk io_uring operation to post multiple completions, it would
> > need to allocate some structure for each incoming request to insert
> > into the task work list. There is also an assumption that the ublk
> > io_uring operations correspond 1-1 with the blk-mq requests for the
> > ublk device, which would be broken by multishot ublk io_uring
> > operations.
> >
> > Best,
> > Caleb
>
> Hi Caleb,
>
> I think what Ofer is trying to say is that we have a scaling issue.
>
> Our deployment could consist of hundreds of ublk devices, not all of which will be dispatching IO at the same time. If we were to submit the maximum number of IO requests that our application can handle for every ublk device we need to deploy, the memory requirements would be excessive.

Thanks, I see what you mean. Yes, it's certainly a reasonable concern
in principle. The memory requirements may not be as steep as you
imagine. We have a similar architecture and haven't encountered any
issues. Each of our machines has 100+ ublk devices, each with 16
queues, with the maximum of 4096 requests per queue. The per-I/O state
for ublk and io_uring is pretty small; it's nowhere near our biggest
consumer of RAM.

>
> For this reason, we would prefer a global pool of IO requests, registered with the ublk-control device, that any ublk device registered to it can use.

It could probably work, but I think there are some details to iron
out. First of all, a global pool wouldn't work if there are multiple
ublk server applications whose I/Os should be isolated from each
other. And to get decent performance, I think you would definitely
want to partition these I/O request pools to avoid contention. A
possible approach would be to have one pool per ublk server thread.
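
(As a very rough illustration of the partitioning in userspace terms -
every type, field, and function below is invented, since no such kernel
pool API exists today - each server thread owning its own ring and slot
freelist would look something like this.)

#include <stdlib.h>
#include <liburing.h>

struct io_slot {
	unsigned next_free;	/* freelist link (index, not pointer) */
	void *buf;		/* per-I/O data buffer */
};

struct server_thread {
	struct io_uring ring;	/* thread-private ring: no SQ contention */
	struct io_slot *slots;	/* thread-private slot pool */
	unsigned nr_slots;
	unsigned free_head;	/* head of freelist; nr_slots means empty */
};

static void pool_init(struct server_thread *t, unsigned n)
{
	t->nr_slots = n;
	t->slots = calloc(n, sizeof(*t->slots));
	for (unsigned i = 0; i < n; i++)
		t->slots[i].next_free = i + 1;	/* last links to sentinel n */
	t->free_head = 0;
}

static int slot_alloc(struct server_thread *t)
{
	if (t->free_head == t->nr_slots)
		return -1;			/* pool exhausted */
	unsigned idx = t->free_head;
	t->free_head = t->slots[idx].next_free;	/* pop */
	return (int)idx;
}

static void slot_free(struct server_thread *t, unsigned idx)
{
	t->slots[idx].next_free = t->free_head;	/* push */
	t->free_head = idx;
}

Since each thread only ever touches its own pool, alloc/free need no
locking, which is the main point of partitioning per thread.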

Best,
Caleb




