Re: [PATCH 6/6] dma-mapping: benchmark: add IOVA support

On Wed, May 21, 2025 at 05:08:08PM +0100, Robin Murphy wrote:
> On 2025-05-20 11:39 pm, Luis Chamberlain wrote:
> > diff --git a/include/linux/map_benchmark.h b/include/linux/map_benchmark.h
> > index 62674c83bde4..da7c9e3ddf21 100644
> > --- a/include/linux/map_benchmark.h
> > +++ b/include/linux/map_benchmark.h
> > @@ -27,5 +28,15 @@ struct map_benchmark {
> >   	__u32 dma_dir; /* DMA data direction */
> >   	__u32 dma_trans_ns; /* time for DMA transmission in ns */
> >   	__u32 granule;  /* how many PAGE_SIZE will do map/unmap once a time */
> > +	__u32 has_iommu_dma;
> 
> Why would userspace care about this? Either they asked for a streaming
> benchmark and it's irrelevant, or they asked for an IOVA benchmark, which
> either succeeded, or failed and this is ignored anyway.

It's so we can tell userspace that the IOVA tests cannot run at all, and
why, instead of just reporting a failure.

> Conversely, why should the kernel have to care about this? If userspace
> wants both benchmarks, they can just run both benchmarks, with whatever
> number of threads for each they fancy. No need to have all that complexity
> kernel-side. 

I'm not following which complexity you are referring to here. The point
of has_iommu_dma is simply to avoid running the IOVA tests at all, so
that userspace doesn't get meaningless results for a feature the device
can't possibly support.
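
To make that concrete, the userspace flow I have in mind is roughly
this (sketch only; DMA_MAP_BENCHMARK_IOVA is a placeholder name for the
new ioctl, and I'm assuming the kernel fills in has_iommu_dma on the
first ioctl):

#include <err.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/map_benchmark.h>

/* fd is an open /sys/kernel/debug/dma_map_benchmark */
static int run_benchmarks(int fd, struct map_benchmark *bparam)
{
	/* streaming benchmark; the kernel reports has_iommu_dma back */
	if (ioctl(fd, DMA_MAP_BENCHMARK, bparam) == -1)
		err(1, "streaming benchmark failed");

	if (!bparam->has_iommu_dma) {
		/* no IOMMU DMA domain, so the IOVA tests cannot run */
		fprintf(stderr, "no IOMMU on this device, skipping IOVA tests\n");
		return 0;
	}

	/* device is behind an IOMMU, so the IOVA benchmark is meaningful */
	if (ioctl(fd, DMA_MAP_BENCHMARK_IOVA, bparam) == -1)
		err(1, "IOVA benchmark failed");

	return 0;
}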

> If there's a valid desire for running multiple different
> benchmarks *simultaneously* then we should support that in general (I can
> imagine it being potentially interesting to thrash the IOVA allocator with
> several different sizes at once, for example.)

Sure, both are supported. However, there's no point in running the IOVA
tests when the device can't possibly run them.

> That way, I'd also be inclined to give the new ioctl its own separate
> structure for IOVA results, and avoid impacting the existing ABI.

Sure.
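
Something like this, say (sketch; field names are illustrative,
mirroring the existing struct map_benchmark layout):

/* Separate result structure for the new IOVA ioctl, keeping the
 * existing struct map_benchmark ABI untouched.
 */
struct iova_benchmark {
	__u64 avg_iova_alloc_100ns; /* average IOVA alloc time */
	__u64 iova_alloc_stddev;    /* standard deviation of alloc time */
	__u64 avg_iova_free_100ns;  /* average IOVA free time */
	__u64 iova_free_stddev;     /* standard deviation of free time */
	__u32 threads;              /* how many threads will do alloc/free */
	__u32 seconds;              /* how long the test will run */
	__s32 node;                 /* NUMA node on which to run the test */
	__u32 granule;              /* how many PAGE_SIZE per alloc/free */
};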

> > -static int do_map_benchmark(struct map_benchmark_data *map)
> > +static int do_iova_benchmark(struct map_benchmark_data *map)
> > +{
> > +	struct task_struct **tsk;
> > +	int threads = map->bparam.threads;
> > +	int node = map->bparam.node;
> > +	u64 iova_loops;
> > +	int ret = 0;
> > +	int i;
> > +
> > +	tsk = kmalloc_array(threads, sizeof(*tsk), GFP_KERNEL);
> > +	if (!tsk)
> > +		return -ENOMEM;
> > +
> > +	get_device(map->dev);
> > +
> > +	/* Create IOVA threads only */
> > +	for (i = 0; i < threads; i++) {
> > +		tsk[i] = kthread_create_on_node(benchmark_thread_iova, map,
> > +				node, "dma-iova-benchmark/%d", i);
> > +		if (IS_ERR(tsk[i])) {
> > +			pr_err("create dma_iova thread failed\n");
> > +			ret = PTR_ERR(tsk[i]);
> > +			while (--i >= 0)
> > +				kthread_stop(tsk[i]);
> > +			goto out;
> > +		}
> > +
> > +		if (node != NUMA_NO_NODE)
> > +			kthread_bind_mask(tsk[i], cpumask_of_node(node));
> > +	}
> 
> Duplicating all the thread-wrangling code seems needlessly horrible - surely
> it's easy enough to factor out the stats initialisation and final
> calculation, along with the thread function itself. Perhaps as callbacks in
> the map_benchmark_data?

Could try that.
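
Something along these lines, perhaps (untested sketch; the thread_fn,
init_stats and compute_stats callbacks are just illustrative names for
the factoring you describe):

static int do_benchmark(struct map_benchmark_data *map,
			int (*thread_fn)(void *data),
			void (*init_stats)(struct map_benchmark_data *map),
			void (*compute_stats)(struct map_benchmark_data *map),
			const char *name)
{
	struct task_struct **tsk;
	int threads = map->bparam.threads;
	int node = map->bparam.node;
	int ret = 0;
	int i;

	tsk = kmalloc_array(threads, sizeof(*tsk), GFP_KERNEL);
	if (!tsk)
		return -ENOMEM;

	get_device(map->dev);

	for (i = 0; i < threads; i++) {
		tsk[i] = kthread_create_on_node(thread_fn, map, node,
						"%s/%d", name, i);
		if (IS_ERR(tsk[i])) {
			pr_err("create %s thread failed\n", name);
			ret = PTR_ERR(tsk[i]);
			while (--i >= 0)
				kthread_stop(tsk[i]);
			goto out;
		}

		if (node != NUMA_NO_NODE)
			kthread_bind_mask(tsk[i], cpumask_of_node(node));
	}

	/* clear the old values from the previous benchmark run */
	init_stats(map);

	for (i = 0; i < threads; i++) {
		get_task_struct(tsk[i]);
		wake_up_process(tsk[i]);
	}

	msleep_interruptible(map->bparam.seconds * 1000);

	/* wait for the completion of all started benchmark threads */
	for (i = 0; i < threads; i++) {
		int kthread_ret = kthread_stop_put(tsk[i]);

		if (kthread_ret)
			ret = kthread_ret;
	}

	if (ret)
		goto out;

	compute_stats(map);
out:
	put_device(map->dev);
	kfree(tsk);
	return ret;
}

do_map_benchmark() and do_iova_benchmark() would then collapse into
thin wrappers around this.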

> Similarly, each "thread function" itself only actually needs to consist
> of the respective "while (!kthread_should_stop())" loop - the rest of
> map_benchmark_thread() could still be used as a common harness to avoid
> duplicating the buffer management code as well.

If we want a separate data structure for the IOVA tests, that's all the
more reason to keep the thread functions separate: each one touches
different data structures, and merging them would leave a large branch
in the hot loop, as sketched below.
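
i.e. a merged worker would end up looking something like this (sketch;
test_iova is a hypothetical flag, and buf/size come from the buffer
setup in map_benchmark_thread()):

	while (!kthread_should_stop()) {
		if (map->bparam.test_iova) {
			struct dma_iova_state state = {};

			/* IOVA-only path: allocate and free an IOVA
			 * range, feeding the IOVA-specific stats.
			 */
			if (dma_iova_try_alloc(map->dev, &state, 0, size))
				dma_iova_free(map->dev, &state);
		} else {
			/* streaming path: full map/unmap, feeding the
			 * existing map/unmap stats.
			 */
			dma_addr_t addr = dma_map_single(map->dev, buf,
							 size, map->dir);

			if (!dma_mapping_error(map->dev, addr))
				dma_unmap_single(map->dev, addr, size,
						 map->dir);
		}
	}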

  Luis



