On Wed, May 21, 2025, at 12:24, Herbert Xu wrote:
> On Wed, May 21, 2025 at 11:58:49AM +0200, Arnd Bergmann wrote:
>>
>> I did not see the entire background of the discussion, but would
>> point out that this is not supposed to work at all:
>
> We're trying to find out why this driver fails under concurrent
> load. It works perfectly if you do one request at a time, but
> when you hit it with load coming from both CPUs, it ends up
> corrupting the data.

Ok. Which SoC exactly is this on? Armada XP or Armada 385?

> My suspicion right now is DMA corruption. One common thread
> seems to be that if you only use dma_map_sg it works, but if
> dma_alloc_coherent memory is used then it is corrupted (this
> isn't proven yet, it's just what the printk patch was showing).

I see. Just a few more ideas about what it could be, in case it's
not what you suspect:

- The SRAM gets mapped into kernel space using ioremap(), which on
  Armada 375/38x uses MT_UNCACHED rather than MT_DEVICE as a
  workaround for a possible deadlock on actual MMIO registers. It's
  possible that the SRAM should be mapped with a different mapping
  flag to ensure it is actually consistent. If a store to the SRAM
  is posted, it may still be in flight at the time the DMA master
  looks at it.

- I see a lot of chaining of DMA descriptors, but no dma_wmb() or
  spinlock. A dma_wmb() or stronger barrier (wmb(), dma_mb(), mb())
  is probably required between writing to a coherent descriptor and
  making it visible from another one. A spinlock is of course needed
  if multiple CPUs add entries to a shared linked list (I think this
  one is not shared, but I haven't confirmed that).

       Arnd