On 2025/08/21 12:12, Zhu Yanjun wrote:
On 2025/8/19 8:15, Daisuke Matsuda wrote:
On 2025/08/18 13:44, Zhu Yanjun wrote:
On 2025/8/17 5:37, Daisuke Matsuda wrote:
When running the test_resize_cq testcase from rdma-core, polling a
completion queue from userspace may occasionally hang and eventually fail
with a timeout:
=====
ERROR: test_resize_cq (tests.test_cq.CQTest.test_resize_cq)
Test resize CQ, start with specific value and then increase and decrease
----------------------------------------------------------------------
Traceback (most recent call last):
File "/root/deb/rdma-core/tests/test_cq.py", line 135, in test_resize_cq
u.poll_cq(self.client.cq)
File "/root/deb/rdma-core/tests/utils.py", line 687, in poll_cq
wcs = _poll_cq(cq, count, data)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/deb/rdma-core/tests/utils.py", line 669, in _poll_cq
raise PyverbsError(f'Got timeout on polling ({count} CQEs remaining)')
pyverbs.pyverbs_error.PyverbsError: Got timeout on polling (1 CQEs
remaining)
=====
The issue occurs when rxe_cq_post() fails to post a CQE because the queue
is temporarily full; the CQE is then effectively lost. To mitigate this,
add a bounded busy-wait with a rescheduling fallback so that the CQE does
not get lost.
Signed-off-by: Daisuke Matsuda <dskmtsd@xxxxxxxxx>
---
drivers/infiniband/sw/rxe/rxe_cq.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index fffd144d509e..7b0fba63204e 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -84,14 +84,36 @@ int rxe_cq_resize_queue(struct rxe_cq *cq, int cqe,
/* caller holds reference to cq */
int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
{
+ unsigned long flags;
+ u32 spin_cnt = 3000;
struct ib_event ev;
- int full;
void *addr;
- unsigned long flags;
+ int full;
spin_lock_irqsave(&cq->cq_lock, flags);
full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
+ if (likely(!full))
+ goto post_queue;
+
+ /* constant backoff until queue is ready */
+ while (spin_cnt--) {
+ full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
+ if (!full)
+ goto post_queue;
+
+ cpu_relax();
+ }
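As I understand it, queue_full() on the QUEUE_TYPE_TO_CLIENT side compares the kernel's producer index against the consumer index that the CQ poller (for a user CQ, the userspace provider) advances, which is why the busy-wait can eventually succeed: once the client drains some CQEs, the full condition clears. A simplified, hypothetical illustration of that kind of ring-buffer full check (names and layout are made up for illustration; this is not the actual rxe_queue.h code):
===
/*
 * Hypothetical, simplified producer-side "queue full" check on a
 * power-of-two ring shared with a consumer. Not the real rxe_queue.h
 * implementation; it only shows why spinning can make progress: once
 * the consumer advances its index, the full condition clears.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct demo_queue {
	uint32_t index_mask;			/* num_slots - 1, power of two */
	uint32_t producer_index;		/* written by the producer only */
	_Atomic uint32_t consumer_index;	/* advanced by the consumer */
};

static bool demo_queue_full(struct demo_queue *q)
{
	uint32_t prod = q->producer_index;
	uint32_t cons = atomic_load_explicit(&q->consumer_index,
					     memory_order_acquire);

	/* full when advancing the producer would land on the consumer slot */
	return ((prod + 1 - cons) & q->index_mask) == 0;
}
===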
The loop runs at most 3000 times. Each iteration checks queue_full() and
executes cpu_relax(). On modern CPUs, each iteration may take a few cycles,
e.g., 4-10 cycles, depending on memory/cache behavior.
Assuming 1 cycle ≈ 0.3 ns on a 3 GHz CPU (so 10 cycles ≈ 3 ns):
3000 iterations × 10 cycles ≈ 30,000 cycles
30,000 cycles × 0.3 ns = 9,000 ns = 9 microseconds
So the "critical section" while spinning is on the order of ten
microseconds, not milliseconds.
I was concerned that 3000 iterations might make the spinlock critical
section too long, but based on the analysis above, it is still a
short-duration critical section.
Thank you for the review.
Assuming the two loads in queue_full() hit in the L1 cache, I estimate each iteration could take around
15–20 cycles. Based on your calculation, the maximum total time would be approximately 18 microseconds.
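For reference, both estimates follow from the same simple arithmetic; a tiny self-contained program (all figures are assumptions from this thread, not measurements: ~0.3 ns per cycle on a 3 GHz CPU, 3000 iterations, and 10 or 20 cycles per iteration):
===
/*
 * Back-of-the-envelope check of the worst-case spin time discussed
 * above. All inputs are assumptions taken from this thread, not
 * measurements.
 */
#include <stdio.h>

int main(void)
{
	const double cycle_ns = 0.3;		/* ~3 GHz clock */
	const unsigned int iterations = 3000;	/* spin_cnt in the patch */
	const unsigned int cycles_per_iter[] = { 10, 20 };

	for (unsigned int i = 0; i < 2; i++) {
		double total_us = iterations * cycles_per_iter[i] *
				  cycle_ns / 1000.0;

		printf("%2u cycles/iter -> ~%.1f us worst case\n",
		       cycles_per_iter[i], total_us);
	}

	return 0;
}
===
This prints ~9.0 us and ~18.0 us, matching the numbers above.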
======================================================================
ERROR: test_rdmacm_async_write (tests.test_rdmacm.CMTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/..../rdma-core/tests/test_rdmacm.py", line 71, in test_rdmacm_async_write
self.two_nodes_rdmacm_traffic(CMAsyncConnection,
File "/..../rdma-core/tests/base.py", line 447, in two_nodes_rdmacm_traffic
raise Exception('Exception in active/passive side occurred')
Exception: Exception in active/passive side occurred
After applying your commit, I ran the following run_tests.py loop 10000 times.
The above error sometimes appears, but the frequency is very low.
"
for (( i = 0; i < 10000; i++ ))
do
rdma-core/build/bin/run_tests.py --dev rxe0
done
"
It is weird.
I tried running test_rdmacm_async_write alone 50000 times, but could not reproduce this one.
There have been multiple latency-related issues in RXE, so it is not surprising that a new one
is uncovered by changing a seemingly unrelated part.
How about applying the additional change below:
===
diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index 7b0fba63204e..8f8d56051b8d 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -102,7 +102,9 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
if (!full)
goto post_queue;
+ spin_unlock_irqrestore(&cq->cq_lock, flags);
cpu_relax();
+ spin_lock_irqsave(&cq->cq_lock, flags);
}
/* try giving up cpu and retry */
===
This makes cpu_relax() almost meaningless, but ensures the lock is released in each iteration.
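Combined with the first patch, the spin loop would then look roughly like the fragment below (a sketch only, assuming the post_queue label and the code after the loop stay as in the first hunk):
===
	/*
	 * Sketch of the loop with both changes applied (fragment, not a
	 * complete rxe_cq_post()): cq_lock and the saved IRQ state are
	 * dropped around cpu_relax() so the critical section stays short,
	 * then re-taken before queue_full() is checked again.
	 */
	while (spin_cnt--) {
		full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
		if (!full)
			goto post_queue;

		spin_unlock_irqrestore(&cq->cq_lock, flags);
		cpu_relax();
		spin_lock_irqsave(&cq->cq_lock, flags);
	}

	/* try giving up cpu and retry */
===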
It would be nice if you could share how often the failure occurs and whether the test takes
longer than usual in the failing runs. I think that could be a helpful starting point for finding a solution.
Thanks,
Daisuke
Yanjun.Zhu