On 2025/8/6 5:39, Philipp Reisner wrote:
Allow a comp_handler callback implementation to call ib_poll_cq().
With the rdma_rxe driver, ib_poll_cq() calls rxe_poll_cq(), and
rxe_poll_cq() takes cq->cq_lock. Since rxe_cq_post() invokes the
comp_handler while holding cq->cq_lock, this leads to a spinlock
deadlock. The Mellanox and Intel drivers allow a comp_handler
callback implementation to call ib_poll_cq().

Avoid the deadlock by calling the comp_handler callback without
holding cq->cq_lock.
Signed-off-by: Philipp Reisner <philipp.reisner@xxxxxxxxxx>
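
For illustration, the scenario described above amounts to a ULP completion
handler that drains the CQ from inside the callback. A minimal sketch; the
handler name and its body are hypothetical, not taken from the patch or from
any in-tree ULP:

#include <rdma/ib_verbs.h>

/* Hypothetical ULP completion handler, for illustration only. */
static void example_comp_handler(struct ib_cq *cq, void *cq_context)
{
        struct ib_wc wc;

        /*
         * With rdma_rxe, ib_poll_cq() ends up in rxe_poll_cq(), which takes
         * cq->cq_lock.  Without this patch, rxe_cq_post() calls the
         * comp_handler while already holding cq->cq_lock, so this call
         * deadlocks on the spinlock.
         */
        while (ib_poll_cq(cq, 1, &wc) > 0)
                ; /* process the work completion */

        /* re-arm the CQ for the next completion event */
        ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
}
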
ERROR: test_resize_cq (tests.test_cq.CQTest.test_resize_cq)
Test resize CQ, start with specific value and then increase and decrease
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/deb/rdma-core/tests/test_cq.py", line 135, in test_resize_cq
    u.poll_cq(self.client.cq)
  File "/root/deb/rdma-core/tests/utils.py", line 687, in poll_cq
    wcs = _poll_cq(cq, count, data)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/deb/rdma-core/tests/utils.py", line 669, in _poll_cq
    raise PyverbsError(f'Got timeout on polling ({count} CQEs remaining)')
pyverbs.pyverbs_error.PyverbsError: Got timeout on polling (1 CQEs remaining)
After I applied your patch on kernel v6.16, I got the above error.
Zhu Yanjun
---
drivers/infiniband/sw/rxe/rxe_cq.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index fffd144d509e..1195e109f89b 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -88,6 +88,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
         int full;
         void *addr;
         unsigned long flags;
+        u8 notify;

         spin_lock_irqsave(&cq->cq_lock, flags);

@@ -110,14 +111,15 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)

         queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT);

-        if ((cq->notify & IB_CQ_NEXT_COMP) ||
-            (cq->notify & IB_CQ_SOLICITED && solicited)) {
-                cq->notify = 0;
-                cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
-        }
+        notify = cq->notify;
+        cq->notify = 0;

         spin_unlock_irqrestore(&cq->cq_lock, flags);

+        if ((notify & IB_CQ_NEXT_COMP) ||
+            (notify & IB_CQ_SOLICITED && solicited))
+                cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
+
         return 0;
 }
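
For readability, here is how the tail of rxe_cq_post() would read with the
two hunks applied (reconstructed from the diff above; the earlier part of
the function, including the queue-full handling, is unchanged and elided):

        /* ... earlier part of rxe_cq_post() unchanged ... */

        queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT);

        /*
         * Snapshot the armed notification state and clear it while still
         * holding cq->cq_lock.  Note that, unlike the removed code, this
         * clears cq->notify even when the handler below is not called
         * (e.g. an unsolicited CQE while only IB_CQ_SOLICITED is armed).
         */
        notify = cq->notify;
        cq->notify = 0;

        spin_unlock_irqrestore(&cq->cq_lock, flags);

        /*
         * The callback now runs without cq->cq_lock held, so a comp_handler
         * that calls ib_poll_cq()/rxe_poll_cq() no longer deadlocks.
         */
        if ((notify & IB_CQ_NEXT_COMP) ||
            (notify & IB_CQ_SOLICITED && solicited))
                cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);

        return 0;
}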