On Wed, Sep 10, 2025 at 1:36 PM Christoph Paasch <cpaasch@xxxxxxxxxx> wrote:
>
> On Tue, Sep 9, 2025 at 8:17 PM Amery Hung <ameryhung@xxxxxxxxx> wrote:
> >
> > On Tue, Sep 9, 2025 at 11:18 AM Christoph Paasch <cpaasch@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Sep 8, 2025 at 9:00 PM Christoph Paasch <cpaasch@xxxxxxxxxx> wrote:
> > > >
> > > > On Thu, Sep 4, 2025 at 4:30 PM Amery Hung <ameryhung@xxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, Sep 4, 2025 at 3:57 PM Christoph Paasch via B4 Relay
> > > > > <devnull+cpaasch.openai.com@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > From: Christoph Paasch <cpaasch@xxxxxxxxxx>
> > > > > >
> > > > > > mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> > > > > > bytes from the page-pool to the skb's linear part. Those 256 bytes
> > > > > > include part of the payload.
> > > > > >
> > > > > > When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> > > > > > (and skb->head_frag is not set), we end up aggregating packets in the
> > > > > > frag_list.
> > > > > >
> > > > > > This is of course not good when we are CPU-limited. It also causes a
> > > > > > worse skb->len/truesize ratio,...
> > > > > >
> > > > > > So, let's avoid copying parts of the payload to the linear part. We use
> > > > > > eth_get_headlen() to parse the headers and compute the length of the
> > > > > > protocol headers, which will be used to copy the relevant bits to the
> > > > > > skb's linear part.
> > > > > >
> > > > > > We still allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking
> > > > > > stack needs to call pskb_may_pull() later on, we don't need to reallocate
> > > > > > memory.
> > > > > >
> > > > > > This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> > > > > > LRO enabled):
> > > > > >
> > > > > > BEFORE:
> > > > > > =======
> > > > > > (netserver pinned to core receiving interrupts)
> > > > > > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > > > > > 87380  16384  262144    60.01    32547.82
> > > > > >
> > > > > > (netserver pinned to adjacent core receiving interrupts)
> > > > > > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > > > > > 87380  16384  262144    60.00    52531.67
> > > > > >
> > > > > > AFTER:
> > > > > > ======
> > > > > > (netserver pinned to core receiving interrupts)
> > > > > > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > > > > > 87380  16384  262144    60.00    52896.06
> > > > > >
> > > > > > (netserver pinned to adjacent core receiving interrupts)
> > > > > > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > > > > > 87380  16384  262144    60.00    85094.90
> > > > > >
> > > > > > Additional tests across a larger range of parameters w/ and w/o LRO, w/
> > > > > > and w/o IPv6-encapsulation, different MTUs (1500, 4096, 9000), different
> > > > > > TCP read/write-sizes as well as UDP benchmarks, all have shown equal or
> > > > > > better performance with this patch.
> > > > > >
> > > > > > Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>
> > > > > > Reviewed-by: Saeed Mahameed <saeedm@xxxxxxxxxx>
> > > > > > Signed-off-by: Christoph Paasch <cpaasch@xxxxxxxxxx>
> > > > > > ---
> > > > > >  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 5 +++++
> > > > > >  1 file changed, 5 insertions(+)
> > > > > >
> > > > > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > > > > index 8bedbda522808cbabc8e62ae91a8c25d66725ebb..0ac31c7fb64cd60720d390de45a5b6b453ed0a3f 100644
> > > > > > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > > > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > > > > @@ -2047,6 +2047,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > > > > >         dma_sync_single_for_cpu(rq->pdev, addr + head_offset, headlen,
> > > > > >                                 rq->buff.map_dir);
> > > > > >
> > > > > > +       headlen = eth_get_headlen(rq->netdev, head_addr, headlen);
> > > > > > +
> > > > > >         frag_offset += headlen;
> > > > > >         byte_cnt -= headlen;
> > > > > >         linear_hr = skb_headroom(skb);
> > > > > > @@ -2123,6 +2125,9 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > > > > >                         pagep->frags++;
> > > > > >                 while (++pagep < frag_page);
> > > > > >         }
> > > > > > +
> > > > > > +       headlen = eth_get_headlen(rq->netdev, mxbuf->xdp.data, headlen);
> > > > > > +
> > > > >
> > > > > The size of mxbuf->xdp.data is most likely not headlen here.
> > > > >
> > > > > The driver currently generates an xdp_buff with empty linear data, passes
> > > > > it to the xdp program, and assumes the layout stays the same if the xdp
> > > > > program does not change the layout of the xdp_buff through
> > > > > bpf_xdp_adjust_head() or bpf_xdp_adjust_tail(). The assumption is not
> > > > > correct and I am working on a fix. But, if we keep that assumption for
> > > > > now, mxbuf->xdp.data will not contain any headers or payload. What you
> > > > > are trying to do probably should be:
> > > > >
> > > > > skb_frag_t *frag = &sinfo->frags[0];
> > > > >
> > > > > headlen = eth_get_headlen(rq->netdev, skb_frag_address(frag),
> > > > >                           skb_frag_size(frag));
> > >
> > > So, when I look at the headlen I get, it is correct (even with my old
> > > code using mxbuf->xdp.data).
> > >
> > > To make sure I test the right thing, in which scenario would
> > > mxbuf->xdp.data not contain any headers or payload? What do I need to
> > > do to reproduce that?
> >
> > From a quick look at the code, could it be that
> > skb_flow_dissect_flow_keys_basic() returns false so that
> > eth_get_headlen() always returns sizeof(*eth)?
>
> No, the headlen values were correct (meaning, it was the actual length
> of the headers):
>

Another possibility is that mxbuf->xdp is reused and not zeroed between
calls to mlx5e_skb_from_cqe_mpwrq_nonlinear(). The stale headers might
have been written to mxbuf->xdp.data before the XDP program is attached.
I am not sure what exactly happens, but my main question remains: when
the XDP program is attached and does nothing, the linear data will be
empty, so what is eth_get_headlen() parsing here...?

> This is TCP-traffic with a simple print after eth_get_headlen:
> [130982.311088] mlx5e_skb_from_cqe_mpwrq_nonlinear xdp headlen is 86
>
> So, eth_get_headlen was able to correctly parse things.
>
> My xdp-program is as simple as possible:
> SEC("xdp.frags")
> int xdp_pass_prog(struct xdp_md *ctx)
> {
>         return XDP_PASS;
> }
>
>
> > The linear part
> > contains nothing meaningful before __pskb_pull_tail(), so it is possible
> > for skb_flow_dissect_flow_keys_basic() to fail.
> > >
> > > Thanks,
> > > Christoph
> > > >
> > > > Ok, I think I understand what you mean! Thanks for taking the time to explain!
> > > >
> > > > I will do some tests on my side to make sure I get it right.
> > > >
> > > > As your change goes to net and mine to net-next, I can wait until yours
> > > > is in the tree so that there aren't any conflicts that need to be
> > > > taken care of.
> > Will copy you on the mlx5 non-linear xdp fixing patchset.
> Thx!
>
> Christoph
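
For readers following the thread, here is a minimal sketch of the pattern
under discussion, assuming hypothetical placeholder names (rx_copy_headers(),
head_addr and max_copy are illustrative, not identifiers from the patch):
eth_get_headlen() parses at most max_copy bytes of a receive buffer and
returns the length of the L2/L3/L4 headers it recognizes, which can then
bound how much is copied into the skb's linear area while the payload stays
in page frags.

#include <linux/etherdevice.h>
#include <linux/skbuff.h>

/* Hypothetical helper (not part of the mlx5 patch above): copy only the
 * protocol headers of a received frame into the skb's linear part.
 * "head_addr" points at the start of the frame, "max_copy" is the amount
 * that would otherwise have been copied unconditionally.
 */
static void rx_copy_headers(struct net_device *netdev, struct sk_buff *skb,
                            void *head_addr, u32 max_copy)
{
        /* Length of the recognized Ethernet/IP/TCP (or UDP, ...) headers,
         * capped at max_copy; payload bytes beyond the headers are excluded.
         */
        u32 headlen = eth_get_headlen(netdev, head_addr, max_copy);

        /* Only the headers land in the linear area; the payload is expected
         * to be attached as page frags by the caller.
         */
        skb_put_data(skb, head_addr, headlen);
}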