On Wed, 25 Jun 2025 05:29:43 +0200, Sandeep Verma wrote:
>
> Hi all,
>
> I’m currently investigating a sporadic issue in our system where an
> audio interrupt is occasionally lost. When this happens, ALSA detects the
> anomaly (presumably through its internal timestamp or delay detection), but
> it doesn’t appear to take corrective action, leading to persistent audio
> jitter afterward.
>
> To address this, I’m considering implementing a mechanism in our driver or
> platform layer that uses a timer to check whether the expected audio
> interrupt hasn’t arrived within a certain timeframe (e.g., 2x the period
> size). If this condition is met, I plan to explicitly trigger an XRUN to
> reset the pipeline and recover cleanly.
>
> My questions are:
>
> 1. Is this an acceptable and “ALSA-friendly” way to handle lost IRQs?

Yes, an XRUN detection on the driver side is fine, per se.

> 2. If this is a reasonable approach, why doesn’t ALSA do this by default?

Maybe a simple answer is that it's a rare case.

The PCM core has a correction for a lost interrupt, but it's applied only
when the PCM pointer is updated, e.g. via a snd_pcm_period_elapsed() call.
The detection of the lost interrupt itself is left to each driver, since
there can be various mechanisms to do that.

> 3. Is there a better or recommended way within the ALSA framework to
> detect and recover from such missed interrupts?

The recovery depends on the tolerance.  The XRUN handling is a brute-force
stop and restart.  But it might also be that you can simply ignore a single
lost interrupt if the buffer consists of many small periods.

> I’d appreciate any guidance or suggestions. If this kind of timer-based
> recovery would be broadly useful, I’d also be happy to explore whether it
> could be proposed upstream.

So feel free to submit your fix patch.  But please submit kernel patches
to the linux-sound ML instead of the alsa-devel ML.


thanks,

Takashi
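
P.S.  For illustration only, a rough and untested sketch of such a
timer-based lost-IRQ watchdog could look like the following.  All my_*
names are placeholders for whatever your driver already has; only
snd_pcm_period_elapsed(), snd_pcm_stop_xrun() and the ordinary kernel
timer API are real interfaces here.

/*
 * Untested sketch of a lost-IRQ watchdog for an ALSA PCM driver.
 * Everything prefixed with my_ is a placeholder, not an existing API.
 */
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/timer.h>
#include <sound/pcm.h>

struct my_pcm_dev {
	struct snd_pcm_substream *substream;
	struct timer_list watchdog;
	unsigned long period_jiffies;	/* one period length in jiffies */
};

/* (Re)arm the watchdog to fire after 2x the period time. */
static void my_watchdog_rearm(struct my_pcm_dev *dev)
{
	mod_timer(&dev->watchdog, jiffies + 2 * dev->period_jiffies);
}

/* Timer callback: the expected period interrupt never arrived. */
static void my_watchdog_timeout(struct timer_list *t)
{
	struct my_pcm_dev *dev = from_timer(dev, t, watchdog);

	/*
	 * Force an XRUN; snd_pcm_stop_xrun() checks internally whether
	 * the stream is still running, so this is safe even if the
	 * stream was stopped in the meantime.
	 */
	snd_pcm_stop_xrun(dev->substream);
}

/* The real period interrupt handler. */
static irqreturn_t my_pcm_irq(int irq, void *data)
{
	struct my_pcm_dev *dev = data;

	my_watchdog_rearm(dev);	/* IRQ arrived in time, push the deadline */
	snd_pcm_period_elapsed(dev->substream);
	return IRQ_HANDLED;
}

/* Called from the trigger(START) path, once hw_params are known. */
static void my_watchdog_start(struct my_pcm_dev *dev)
{
	struct snd_pcm_runtime *runtime = dev->substream->runtime;
	unsigned int period_ms;

	period_ms = runtime->period_size * 1000 / runtime->rate;
	dev->period_jiffies = msecs_to_jiffies(period_ms ? period_ms : 1);
	my_watchdog_rearm(dev);
}

/*
 * Called from the trigger(STOP) path, i.e. atomic context with the
 * stream lock held: use the non-syncing delete here, and do a final
 * timer_delete_sync() in a sleepable path such as hw_free() or close().
 * timer_setup(&dev->watchdog, my_watchdog_timeout, 0) would be done
 * once at probe time, together with requesting the IRQ.
 */
static void my_watchdog_stop(struct my_pcm_dev *dev)
{
	timer_delete(&dev->watchdog);
}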