On Wed, Sep 3, 2025 at 4:39 PM Amery Hung <ameryhung@xxxxxxxxx> wrote:
>
> On 8/28/25 8:36 PM, Christoph Paasch via B4 Relay wrote:
> > From: Christoph Paasch <cpaasch@xxxxxxxxxx>
> >
> > mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> > bytes from the page-pool to the skb's linear part. Those 256 bytes
> > include part of the payload.
> >
> > When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> > (and skb->head_frag is not set), we end up aggregating packets in the
> > frag_list.
> >
> > This is of course not good when we are CPU-limited. It also causes a
> > worse skb->len/truesize ratio, ...
> >
> > So, let's avoid copying parts of the payload to the linear part. We use
> > eth_get_headlen() to parse the headers and compute the length of the
> > protocol headers, which will be used to copy the relevant bits to the
> > skb's linear part.
> >
> > We still allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking
> > stack needs to call pskb_may_pull() later on, we don't need to reallocate
> > memory.
> >
> > This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> > LRO enabled):
> >
> > BEFORE:
> > =======
> > (netserver pinned to core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > 87380  16384  262144    60.01    32547.82
> >
> > (netserver pinned to adjacent core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > 87380  16384  262144    60.00    52531.67
> >
> > AFTER:
> > ======
> > (netserver pinned to core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > 87380  16384  262144    60.00    52896.06
> >
> > (netserver pinned to adjacent core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > 87380  16384  262144    60.00    85094.90
> >
> > Additional tests across a larger range of parameters (w/ and w/o LRO,
> > w/ and w/o IPv6 encapsulation, different MTUs (1500, 4096, 9000),
> > different TCP read/write sizes, as well as UDP benchmarks) have all
> > shown equal or better performance with this patch.
> >
> > Signed-off-by: Christoph Paasch <cpaasch@xxxxxxxxxx>
> > ---
> >  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > index 8bedbda522808cbabc8e62ae91a8c25d66725ebb..792bb647ba28668ad7789c328456e3609440455d 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > @@ -2047,6 +2047,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> >       dma_sync_single_for_cpu(rq->pdev, addr + head_offset, headlen,
> >                               rq->buff.map_dir);
> >
> > +     headlen = eth_get_headlen(skb->dev, head_addr, headlen);
> > +
>
> Hi,
>
> I am building on top of this patchset and got a kernel crash. It was
> triggered by attaching an xdp program.
>
> I think the problem is skb->dev is still NULL here. It will be set later by:
> mlx5e_complete_rx_cqe() -> mlx5e_build_rx_skb() -> eth_type_trans()

Hmmm... Not sure what happened here... I'm almost certain I tested with
xdp as well...

I will try again later/tomorrow.
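In case it helps while I retest: since skb->dev is only filled in later
by eth_type_trans(), and (if I read eth_get_headlen() right) it
dereferences its dev argument for the flow dissector, one completely
untested idea would be to use the net_device that the RQ already
carries instead - assuming rq->netdev is already valid at this point in
the RX path:

        /* skb->dev is still NULL here (only set later via
         * mlx5e_build_rx_skb() -> eth_type_trans()), so use the
         * netdev hanging off the RQ instead - untested sketch:
         */
        headlen = eth_get_headlen(rq->netdev, head_addr, headlen);

and the same for the second call site that parses mxbuf->xdp.data. But
let me first reproduce the crash before sending anything.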
Thanks!
Christoph

> >
> >       frag_offset += headlen;
> >       byte_cnt -= headlen;
> >       linear_hr = skb_headroom(skb);
> > @@ -2123,6 +2125,9 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> >                               pagep->frags++;
> >                       while (++pagep < frag_page);
> >               }
> > +
> > +             headlen = eth_get_headlen(skb->dev, mxbuf->xdp.data, headlen);
> > +
> >               __pskb_pull_tail(skb, headlen);
> >       } else {
> >               if (xdp_buff_has_frags(&mxbuf->xdp)) {
> >