On Wed, Aug 27, 2025 at 10:33 PM Alexander Lobakin <aleksander.lobakin@xxxxxxxxx> wrote:
>
> From: Jason Xing <kerneljasonxing@xxxxxxxxx>
> Date: Mon, 25 Aug 2025 21:53:38 +0800
>
> > From: Jason Xing <kernelxing@xxxxxxxxxxx>
> >
> > Support allocating and building skbs in batch.
>
> [...]
>
> > +        base_len = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
> > +        if (!(dev->priv_flags & IFF_TX_SKB_NO_LINEAR))
> > +                base_len += dev->needed_tailroom;
> > +
> > +        if (xs->skb_count >= nb_pkts)
> > +                goto build;
> > +
> > +        if (xs->skb) {
> > +                i = 1;
> > +                xs->skb_count++;
> > +        }
> > +
> > +        xs->skb_count += kmem_cache_alloc_bulk(net_hotdata.skbuff_cache,
> > +                                               gfp_mask, nb_pkts - xs->skb_count,
> > +                                               (void **)&skbs[xs->skb_count]);
>
> Have you tried napi_skb_cache_get_bulk()? Depending on the workload, it
> may give better perf numbers.

Sure, my initial attempt used this interface, but later I wanted a
standalone cache belonging to xsk. The xsk_alloc_batch_skb() function I
added is, to some extent, quite similar to napi_skb_cache_get_bulk().
And if we used napi_xxx(), we would need a lock to avoid the race
between this (process) context and the softirq context on the same
core.
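To illustrate the locking point, here is a rough, untested sketch of
what the napi-based variant would look like on the generic xmit path
(xsk_get_skbs_from_napi_cache() is a made-up name; the "lock" here is
simply disabling BH so the softirq users of the per-CPU napi cache on
the same core cannot run concurrently):

/* Sketch only, not part of the patch. */
static u32 xsk_get_skbs_from_napi_cache(void **skbs, u32 nb_pkts)
{
        u32 n;

        /* The per-CPU napi skb cache is lockless and is normally
         * consumed/refilled from softirq, so fence that off before
         * touching it from process context.
         */
        local_bh_disable();
        n = napi_skb_cache_get_bulk(skbs, nb_pkts);
        local_bh_enable();

        return n;
}

That would work, but it adds a BH off/on to every batch and still
shares the cache with the softirq users, which is why I preferred a
cache owned by xsk with kmem_cache_alloc_bulk() as the refill path.

Thanks,
Jason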