Re: [PATCH net-next v2 3/9] xsk: introduce locked version of xskq_prod_write_addr_batch

On Mon, Aug 25, 2025 at 09:53:36PM +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@xxxxxxxxxxx>
> 
> Add xskq_prod_write_addr_batch_locked() helper for batch xmit.
> 
> xskq_prod_write_addr_batch() is currently used in the napi poll
> environment, which already runs in softirq context, so it needs no
> lock protection. A later patch will use this function in the generic
> xmit path, which runs in process (non-irq) context, so the locked
> version added by this patch is needed.
> 
> Also make xskq_prod_write_addr_batch() return nb_pkts, counting
> packets rather than descriptors used in one batch xmit, so that the
> main batch xmit function can decide how many skbs to allocate. Note
> that xskq_prod_write_addr_batch() was originally designed for
> zerocopy mode, where only the descriptors/data themselves matter.

That said, I am not sure this patch is still valid after the patch I
cited in response to your cover letter: in copy mode, the skb
destructor is now responsible for producing cq entries.
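
For reference, the copy-mode flow I mean looks roughly like the sketch
below. It is from memory rather than the exact mainline code, but the
helpers (xdp_sk(), xsk_get_num_desc(), xskq_prod_submit_n()) are the
real ones from net/xdp/:

	static void xsk_destruct_skb_sketch(struct sk_buff *skb)
	{
		struct xdp_sock *xs = xdp_sk(skb->sk);
		unsigned long flags;

		/* The cq entries for this skb were reserved at xmit
		 * time; they are only submitted (made visible to user
		 * space) here, once the driver has consumed the skb.
		 */
		spin_lock_irqsave(&xs->pool->cq_lock, flags);
		xskq_prod_submit_n(xs->pool->cq, xsk_get_num_desc(skb));
		spin_unlock_irqrestore(&xs->pool->cq_lock, flags);

		sock_wfree(skb);
	}

So producing the entries up front in a batch at xmit time would have to
coexist with that deferred-submit scheme.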

> 
> Signed-off-by: Jason Xing <kernelxing@xxxxxxxxxxx>
> ---
>  net/xdp/xsk_queue.h | 26 +++++++++++++++++++++++---
>  1 file changed, 23 insertions(+), 3 deletions(-)
> 
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index 47741b4c285d..c444a1e29838 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -389,17 +389,37 @@ static inline int xskq_prod_reserve_addr(struct xsk_queue *q, u64 addr)
>  	return 0;
>  }
>  
> -static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
> -					      u32 nb_entries)
> +static inline u32 xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
> +					     u32 nb_entries)
>  {
>  	struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
>  	u32 i, cached_prod;
> +	u32 nb_pkts = 0;
>  
>  	/* A, matches D */
>  	cached_prod = q->cached_prod;
> -	for (i = 0; i < nb_entries; i++)
> +	for (i = 0; i < nb_entries; i++) {
>  		ring->desc[cached_prod++ & q->ring_mask] = descs[i].addr;
> +		if (!xp_mb_desc(&descs[i]))
> +			nb_pkts++;
> +	}
>  	q->cached_prod = cached_prod;
> +
> +	return nb_pkts;
> +}
> +
> +static inline u32
> +xskq_prod_write_addr_batch_locked(struct xsk_buff_pool *pool,
> +				  struct xdp_desc *descs, u32 nb_entries)
> +{
> +	unsigned long flags;
> +	u32 nb_pkts;
> +
> +	spin_lock_irqsave(&pool->cq_lock, flags);
> +	nb_pkts = xskq_prod_write_addr_batch(pool->cq, descs, nb_entries);
> +	spin_unlock_irqrestore(&pool->cq_lock, flags);
> +
> +	return nb_pkts;
>  }
>  
>  static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
> -- 
> 2.41.3
> 
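For completeness, here is my rough sketch of the call site I assume a
later patch in this series has in mind for the locked variant;
xsk_generic_xmit_batch() and its signature are my invention, not code
from the series:

	/* Sketch only: using the locked variant from the generic xmit
	 * path. Unlike the napi/zerocopy path, this runs in process
	 * context, so the irqsave cq_lock inside the helper is what
	 * serializes it against softirq producers on the same
	 * completion queue.
	 */
	static int xsk_generic_xmit_batch(struct xdp_sock *xs,
					  struct xdp_desc *descs,
					  u32 nb_descs)
	{
		u32 nb_pkts;

		/* Reserve cq entries for all descriptors; the return
		 * value counts packets (multi-buffer frags folded
		 * together), i.e. the number of skbs to allocate.
		 */
		nb_pkts = xskq_prod_write_addr_batch_locked(xs->pool,
							    descs,
							    nb_descs);

		/* ... allocate nb_pkts skbs, map nb_descs frags ... */

		return nb_pkts;
	}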



