On 21.05.2025 19:07, Luis Chamberlain wrote:
> On Wed, May 21, 2025 at 03:20:11PM +0100, Robin Murphy wrote:
>> On 2025-05-20 11:39 pm, Luis Chamberlain wrote:
>>> Today on x86_64 q35 guests we can't easily test some of the DMA API
>>> with dmatest out of the box because we lack a DMA engine, as the
>>> current qemu intel IOT patches are out of tree. This implements a
>>> basic dma engine to let us use the dmatest API, expand on it, and
>>> leverage it on q35 guests.
>>
>> What does doing so ultimately achieve though?
>
> What do you do to test for regressions automatically today for the DMA API?
>
> This patch series didn't just add a fake dma engine, but let's first
> address that, as it's what you raised a question about:
>
> Although I didn't add them, with this we can easily enable kernel
> selftests that allow any q35 guest to run basic API tests for the
> DMA API. It's actually how I found the dma benchmark code, as it's the
> only selftest we have for DMA. However, that benchmark test is not easy
> to configure or enable. With kernel selftests you can test for things
> outside the scope of performance.

IMHO, adding a fake driver just to exercise some of its dma-mapping-related
side effects, without a clear statement of what would actually be tested,
is not the right approach. Maybe the dma benchmark code can be extended
with functionality similar to the dma-engine selftests; I haven't checked
yet. It would be better to have such a self-test in the proper layer. If
adding the needed functionality to the dma benchmark is not possible, then
maybe create another self-test which makes similar calls to the
dma-mapping API as those dma-engine self-tests do, but without the whole
dma-engine-related part (see the rough sketch appended below).

> You can test for expected correctness of the APIs and ensure no
> regressions exist in expected behavior; otherwise you learn about
> possible regressions reactively. We have many selftests that do just
> that without a focus on performance, for many things: xarray, maple
> tree, sysctl, firmware loader, module loading, etc. And yes, they find
> bugs proactively.
>
> With this, then, we should be able to easily add a CI to run these tests
> based on linux-next or Linus' tags, even if it's virtual. Who would run
> these? We can get this going daily on kdevops easily if we want; we
> already have a series of tests automated for different subsystems.
>
> Benchmarking can be done separately with real hardware -- agreed.
> But it does not negate the need for simple virtual kernel selftests.
>
>   Luis

Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
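
For illustration only, here is a minimal sketch of what such a
dma-mapping-level check could look like, assuming the target struct device
is obtained elsewhere (for example by binding to an existing device the way
the dma_map_benchmark selftest does via driver_override). The function name
dma_mapping_smoke_test is hypothetical, not an existing kernel API, and this
is a rough sketch rather than a proposed implementation:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Hypothetical smoke test: exercise the streaming and coherent
 * dma-mapping APIs against @dev and report the first failure.
 */
static int dma_mapping_smoke_test(struct device *dev)
{
	size_t size = PAGE_SIZE;
	dma_addr_t dma, handle;
	void *buf, *cpu;
	int ret = 0;

	buf = kmalloc(size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Streaming mapping round-trip: map, check for errors, unmap. */
	dma = dma_map_single(dev, buf, size, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, dma)) {
		ret = -EIO;
		goto out;
	}
	dma_unmap_single(dev, dma, size, DMA_BIDIRECTIONAL);

	/* Coherent allocation round-trip. */
	cpu = dma_alloc_coherent(dev, size, &handle, GFP_KERNEL);
	if (!cpu) {
		ret = -ENOMEM;
		goto out;
	}
	dma_free_coherent(dev, size, cpu, handle);
out:
	kfree(buf);
	return ret;
}

Something along these lines could live next to the existing dma benchmark
code and be driven from a userspace selftest, without needing a fake
dma-engine driver at all.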