On Wed, May 21, 2025 at 03:20:11PM +0100, Robin Murphy wrote:
> On 2025-05-20 11:39 pm, Luis Chamberlain wrote:
> > Today on x86_64 q35 guests we can't easily test some of the DMA API
> > with the dmatest out of the box because we lack a DMA engine as the
> > current qemu intel IOT patches are out of tree. This implements a basic
> > dma engine to let us use the dmatest API to expand on it and leverage
> > it on q35 guests.
>
> What does doing so ultimately achieve though?

What do you do today to test automatically for regressions in the DMA API?

This patch series didn't just add a fake DMA engine, but let's address that
first, as it's what you raised a question about: although I didn't add them,
with this we can easily enable kernel selftests that let any q35 guest run
basic API tests for the DMA API. That's actually how I found the DMA
benchmark code, as it's the only selftest we have for DMA. That benchmark
test, however, is not easy to configure or enable.

With kernel selftests you can test for things outside the scope of
performance. You can test for the expected correctness of the APIs and
ensure no regressions exist in expected behavior; otherwise you only learn
about possible regressions reactively. We have many selftests that do just
that without a focus on performance: xarray, maple tree, sysctl, the
firmware loader, module loading, etc. And yes, they find bugs proactively.

With this, then, we should be able to easily add a CI to run these tests
based on linux-next or Linus' tags, even if it's virtual. Who would run
these? We can get this going daily on kdevops easily, if we want them; we
already have a series of tests automated for different subsystems.

Benchmarking can be done separately with real hardware -- agreed. But that
does not negate the need for simple virtual kernel selftests.

  Luis
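
For reference, once a DMA channel is visible to dmatest, driving it usually
looks roughly like the sketch below, following
Documentation/driver-api/dmaengine/dmatest.rst. The channel name
"dma0chan0" is an assumption -- the actual name depends on which engine
probes on the guest.

```shell
#!/bin/sh
# Sketch of a typical dmatest run via its module parameters, per
# Documentation/driver-api/dmaengine/dmatest.rst. Needs root and a
# kernel with CONFIG_DMATEST; "dma0chan0" is an assumed channel name.
PARAMS=/sys/module/dmatest/parameters

if [ -d "$PARAMS" ] || modprobe dmatest 2>/dev/null; then
	echo dma0chan0 > "$PARAMS/channel"    # pick a channel to exercise
	echo 4         > "$PARAMS/iterations" # a few quick copy tests
	echo 1         > "$PARAMS/run"        # start; results land in dmesg
	msg="dmatest started, check dmesg for results"
else
	msg="dmatest module not available on this kernel"
fi
echo "$msg"
```

On a q35 guest without a DMA engine, dmatest today has no channel to bind
to at all, which is the gap the fake engine fills.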