On Wed Aug 6, 2025 at 8:16 AM CEST, Maurizio Lombardi wrote:
> On Wed Aug 6, 2025 at 3:57 AM CEST, Yi Zhang wrote:
>> Hello,
>> I hit this issue while running blktests nvme/tcp nvme/060 on the
>> latest linux-block/for-next with RT enabled. Please take a look, and
>> let me know if you need any info/testing for it, thanks.
>>
>> [ 390.474378] Call trace:
>> [ 390.476813]  __switch_to+0x1d8/0x330 (T)
>> [ 390.480731]  __schedule+0x860/0x1c30
>> [ 390.484297]  schedule_rtlock+0x30/0x70
>> [ 390.488037]  rtlock_slowlock_locked+0x464/0xf60
>> [ 390.492559]  rt_read_lock+0x2bc/0x3e0
>> [ 390.496211]  nvmet_tcp_listen_data_ready+0x3c/0x118 [nvmet_tcp]
>> [ 390.502125]  nvmet_tcp_data_ready+0x88/0x198 [nvmet_tcp]
>
> I think the problem is that nvmet_tcp_data_ready() invokes the
> queue->data_ready() callback while holding sk_callback_lock.
> The data_ready callback points to nvmet_tcp_listen_data_ready(),
> which tries to take the same sk_callback_lock, hence the deadlock.
>
> Maybe it can be fixed by deferring the call to queue->data_ready()
> to a workqueue.

Oops, sorry, those are two read locks; the real problem then is that
something is holding (or queued for) the write lock.

Maurizio