On Thu, 21 Aug 2025 12:35:21 +0200, Michał Pecio wrote:

> And suppose that somebody does indeed disable a slot without waiting
> for pending URBs to finish unlinking, what if he also frees those
> virtual endpoints that you would like to manipulate here? And maybe
> forgets to clear xhci->devs[x]->eps[y] to NULL?

I dug deeper and realized that this cannot happen, because virtual eps
belong to the same allocation as their parent virtual dev (see the
struct sketch at the end of this mail).

What is actually going to happen is that every xhci_disable_slot() is
followed by xhci_free_virt_dev(), so the virtual endpoint lookup at
the beginning of xhci_handle_cmd_set_deq() will fail and the function
will bail out silently (also sketched at the end of this mail). This
'td_cleanup' code will get no chance to run.

The silent bailout is obviously wrong, but it can only be improved by
logging the error; queuing Set TR Deq onto a disabled slot needs to be
prevented from happening in the first place. As far as I can see, it
is currently supposed to be prevented by:

1. the core not freeing devices with pending URBs
2. the driver not giving back URBs before Set TR Dequeue completes

One interesting question is what happens if Set TR Dequeue is pending
but the endpoint starts and completes the CLEARING_CACHE TD normally.
I suspect that handle_tx_event() may give it back. Will look into it.

BTW, endpoints are not supposed to start like that now that Stop
Endpoint retries have been implemented, and I just sent a revert of a
dodgy patch which broke that, but in theory it *might* still happen.

> What if a different device uses the same slot ID now? (That's possibly
> a serious problem which perhaps requires a serious solution, btw).

Actually, nothing interesting happens. SLOT_NOT_ENABLED means that the
slot was Disabled. By now, it can at most be Enabled, because the
completion of any later Enable Slot command couldn't have been handled
yet. There are no new TDs on the slot, so giving the old ones back
does no damage. There is also no point in trying to give them back ;)
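
To illustrate the allocation argument, here is a trimmed-down sketch
of the relevant structs. The real definitions in
drivers/usb/host/xhci.h have many more fields and may differ between
kernel versions, so treat this as an approximation, not the exact
layout:

struct xhci_virt_ep {
	struct xhci_ring	*ring;
	unsigned int		ep_state;
	/* many fields elided */
};

struct xhci_virt_device {
	struct xhci_container_ctx	*in_ctx;
	struct xhci_container_ctx	*out_ctx;
	/*
	 * eps[] is embedded in the device, not pointed to, so a
	 * virtual ep can neither be freed nor dangle independently
	 * of its parent virtual dev - freeing the dev frees it too.
	 */
	struct xhci_virt_ep		eps[31];
	/* many fields elided */
};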
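
And a sketch of the bailout itself, approximating
xhci_handle_cmd_set_deq() from drivers/usb/host/xhci-ring.c (not
verbatim kernel code):

static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
				    union xhci_trb *trb, u32 cmd_comp_code)
{
	unsigned int ep_index;
	struct xhci_virt_ep *ep;

	ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));

	/*
	 * After xhci_disable_slot() + xhci_free_virt_dev(),
	 * xhci->devs[slot_id] is NULL, so the lookup fails and we
	 * return here - none of the td_cleanup code below ever runs.
	 */
	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
	if (!ep)
		return;

	/* ... normal Set TR Dequeue completion handling ... */
}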