On Tue, Apr 29, 2025 at 10:19:46AM +0800, Baolu Lu wrote:
> On 4/28/25 17:22, Leon Romanovsky wrote:
> > From: Christoph Hellwig <hch@xxxxxx>
> >
> > For the upcoming IOVA-based DMA API we want to batch the
> > ops->iotlb_sync_map() call after mapping multiple IOVAs from
> > dma-iommu without having a scatterlist. Improve the API.
> >
> > Add a wrapper for the map_sync as iommu_sync_map() so that callers
> > don't need to poke into the methods directly.
> >
> > Formalize __iommu_map() into iommu_map_nosync() which requires the
> > caller to call iommu_sync_map() after all maps are completed.
> >
> > Refactor the existing sanity checks from all the different layers
> > into iommu_map_nosync().
> >
> > Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> > Acked-by: Will Deacon <will@xxxxxxxxxx>
> > Tested-by: Jens Axboe <axboe@xxxxxxxxx>
> > Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > Reviewed-by: Luis Chamberlain <mcgrof@xxxxxxxxxx>
> > Signed-off-by: Leon Romanovsky <leonro@xxxxxxxxxx>
> > ---
> >  drivers/iommu/iommu.c | 65 +++++++++++++++++++------------------------
> >  include/linux/iommu.h |  4 +++
> >  2 files changed, 33 insertions(+), 36 deletions(-)
> >
> > diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> > index 4f91a740c15f..02960585b8d4 100644
> > --- a/drivers/iommu/iommu.c
> > +++ b/drivers/iommu/iommu.c
> > @@ -2443,8 +2443,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
> >  	return pgsize;
> >  }
> >
> > -static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> > -		       phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > +int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> > +		phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> >  {
> >  	const struct iommu_domain_ops *ops = domain->ops;
> >  	unsigned long orig_iova = iova;
> > @@ -2453,12 +2453,19 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> >  	phys_addr_t orig_paddr = paddr;
> >  	int ret = 0;
> >
> > +	might_sleep_if(gfpflags_allow_blocking(gfp));
> > +
> >  	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
> >  		return -EINVAL;
> >
> >  	if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
> >  		return -ENODEV;
> >
> > +	/* Discourage passing strange GFP flags */
> > +	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> > +				__GFP_HIGHMEM)))
> > +		return -EINVAL;
> > +
> >  	/* find out the minimum page size supported */
> >  	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
> >
> > @@ -2506,31 +2513,27 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> >  	return ret;
> >  }
> >
> > -int iommu_map(struct iommu_domain *domain, unsigned long iova,
> > -	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > +int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size)
> >  {
> >  	const struct iommu_domain_ops *ops = domain->ops;
> > -	int ret;
> > -
> > -	might_sleep_if(gfpflags_allow_blocking(gfp));
> > -
> > -	/* Discourage passing strange GFP flags */
> > -	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> > -				__GFP_HIGHMEM)))
> > -		return -EINVAL;
> >
> > +	if (!ops->iotlb_sync_map)
> > +		return 0;
> > +	return ops->iotlb_sync_map(domain, iova, size);
> > +}
>
> I am wondering whether iommu_sync_map() needs a return value. The
> purpose of this callback is just to sync the TLB cache after new
> mappings are created, which should effectively be a no-fail operation.
>
> The int return type in the iotlb_sync_map definition in struct
> iommu_domain_ops seems unnecessary:
>
>	struct iommu_domain_ops {
>		...
>		int (*iotlb_sync_map)(struct iommu_domain *domain,
>				      unsigned long iova, size_t size);
>		...
>	};
>
> Furthermore, currently no iommu driver implements this callback in a way
> that returns a failure. We could clean up the iommu definition in a
> subsequent patch series, but for this driver-facing interface, it's
> better to get it right from the beginning.

I see that s390 is relying on return values:

 569 static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
 570                                      unsigned long iova, size_t size)
 571 {
<...>
 581                 ret = zpci_refresh_trans((u64)zdev->fh << 32,
 582                                          iova, size);
 583                 /*
 584                  * let the hypervisor discover invalidated entries
 585                  * allowing it to free IOVAs and unpin pages
 586                  */
 587                 if (ret == -ENOMEM) {
 588                         ret = zpci_refresh_all(zdev);
 589                         if (ret)
 590                                 break;
 591                 }
<...>
 595         return ret;
 596 }

>
> Thanks,
> baolu