Re: [PATCH v4 01/14] svcrdma: Reduce the number of rdma_rw contexts per-QP

On 5/6/25 9:55 AM, Jason Gunthorpe wrote:
> On Tue, May 06, 2025 at 06:40:25AM -0700, Christoph Hellwig wrote:
>> On Tue, May 06, 2025 at 10:17:22AM -0300, Jason Gunthorpe wrote:
>>> On Tue, May 06, 2025 at 06:08:59AM -0700, Christoph Hellwig wrote:
>>>> On Mon, Apr 28, 2025 at 03:36:49PM -0400, cel@xxxxxxxxxx wrote:
>>>>> qp_attr.cap.max_rdma_ctxs. The QP's actual Send Queue length is on
>>>>> the order of the sum of qp_attr.cap.max_send_wr and a factor times
>>>>> qp_attr.cap.max_rdma_ctxs. The factor can be up to three, depending
>>>>> on whether MR operations are required before RDMA Reads.
>>>>>
>>>>> This limit is not visible to RDMA consumers via dev->attrs. When the
>>>>> limit is surpassed, QP creation fails with -ENOMEM. For example:
>>>>
>>>> Can we find a way to expose this limit from the HCA drivers and the
>>>> RDMA core?
>>>
>>> Shouldn't it be max_qp_wr?
>>
>> Does that allow for arbitrary combination of different WRs?  
> 
> I think it is supposed to be the maximum QP WR depth you can create..
> 
> A QP shouldn't behave differently depending on the WR operation, each
> one takes one WR entry.
> 
> Chuck do you know differently?

qp_attr.cap.max_rdma_ctxs reserves a number of SQEs over and above
qp_attr.cap.max_send_wr. The sum of those two cannot exceed max_qp_wr,
of course.

But there is a multiplier: depending on the device, each RDMA Read WR
may also require a registration WR and an invalidation WR.

Further, in drivers/infiniband/hw/mlx5/qp.c :: calc_sq_size

        wq_size = roundup_pow_of_two(attr->cap.max_send_wr * wqe_size);
        qp->sq.wqe_cnt = wq_size / MLX5_SEND_WQE_BB;
        if (qp->sq.wqe_cnt > (1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz))) {
                mlx5_ib_dbg(dev, "send queue size (%d * %d / %d -> %d) exceeds limits(%d)\n",
                            attr->cap.max_send_wr, wqe_size, MLX5_SEND_WQE_BB,
                            qp->sq.wqe_cnt,
                            1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz));
                return -ENOMEM;
        }

So when svcrdma requests a large number of ctxts on top of a Send
Queue size of 135, svc_rdma_accept() fails and the debug message above
pops out.

In this patch I'm trying to include the reg/inv multiplier in the
calculation, but that doesn't seem to be enough to make "accept"
reliable, IMO due to this extra calculation in calc_sq_size().

-- 
Chuck Lever



