On Mon, Jun 23, 2025 at 04:12:24PM +0200, Christoph Hellwig wrote:
> Add a new blk_rq_dma_map / blk_rq_dma_unmap pair that does away with
> the wasteful scatterlist structure. Instead it uses the mapping iterator
> to either add segments to the IOVA for IOMMU operations, or just maps
> them one by one for the direct mapping. For the IOMMU case, instead of
> a scatterlist with an entry for each segment, only a single [dma_addr,len]
> pair needs to be stored for processing a request, and for the direct
> mapping the per-segment allocation shrinks from
> [page,offset,len,dma_addr,dma_len] to just [dma_addr,len].
>
> One big difference to the scatterlist API, which could be considered a
> downside, is that the IOVA collapsing only works when the driver sets
> a virt_boundary that matches the IOMMU granule. For NVMe this is done
> already, so it works perfectly.

Looks good.

Reviewed-by: Keith Busch <kbusch@xxxxxxxxxx>