Re: [PATCH net-next v5 2/2] net/mlx5: Avoid copying payload to the skb's linear part

On Mon, Sep 8, 2025 at 9:00 PM Christoph Paasch <cpaasch@xxxxxxxxxx> wrote:
>
> On Thu, Sep 4, 2025 at 4:30 PM Amery Hung <ameryhung@xxxxxxxxx> wrote:
> >
> > On Thu, Sep 4, 2025 at 3:57 PM Christoph Paasch via B4 Relay
> > <devnull+cpaasch.openai.com@xxxxxxxxxx> wrote:
> > >
> > > From: Christoph Paasch <cpaasch@xxxxxxxxxx>
> > >
> > > mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> > > bytes from the page-pool to the skb's linear part. Those 256 bytes
> > > include part of the payload.
> > >
> > > When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> > > (and skb->head_frag is not set), we end up aggregating packets in the
> > > frag_list.
> > >
> > > This is of course not good when we are CPU-limited. It also causes a
> > > worse skb->len/truesize ratio,...
> > >
> > > So, let's avoid copying parts of the payload to the linear part. We use
> > > eth_get_headlen() to parse the headers and compute the length of the
> > > protocol headers, which will be used to copy the relevant bits to the
> > > skb's linear part.
> > >
> > > We still allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking
> > > stack needs to call pskb_may_pull() later on, we don't need to reallocate
> > > memory.
> > >
> > > This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> > > LRO enabled):
> > >
> > > BEFORE:
> > > =======
> > > (netserver pinned to core receiving interrupts)
> > > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > >  87380  16384 262144    60.01    32547.82
> > >
> > > (netserver pinned to adjacent core receiving interrupts)
> > > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > >  87380  16384 262144    60.00    52531.67
> > >
> > > AFTER:
> > > ======
> > > (netserver pinned to core receiving interrupts)
> > > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > >  87380  16384 262144    60.00    52896.06
> > >
> > > (netserver pinned to adjacent core receiving interrupts)
> > >  $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > >  87380  16384 262144    60.00    85094.90
> > >
> > > Additional tests across a larger range of parameters w/ and w/o LRO, w/
> > > and w/o IPv6-encapsulation, different MTUs (1500, 4096, 9000), different
> > > TCP read/write-sizes as well as UDP benchmarks, all have shown equal or
> > > better performance with this patch.
> > >
> > > Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>
> > > Reviewed-by: Saeed Mahameed <saeedm@xxxxxxxxxx>
> > > Signed-off-by: Christoph Paasch <cpaasch@xxxxxxxxxx>
> > > ---
> > >  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 5 +++++
> > >  1 file changed, 5 insertions(+)
> > >
> > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > index 8bedbda522808cbabc8e62ae91a8c25d66725ebb..0ac31c7fb64cd60720d390de45a5b6b453ed0a3f 100644
> > > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > @@ -2047,6 +2047,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > >                 dma_sync_single_for_cpu(rq->pdev, addr + head_offset, headlen,
> > >                                         rq->buff.map_dir);
> > >
> > > +               headlen = eth_get_headlen(rq->netdev, head_addr, headlen);
> > > +
> > >                 frag_offset += headlen;
> > >                 byte_cnt -= headlen;
> > >                 linear_hr = skb_headroom(skb);
> > > @@ -2123,6 +2125,9 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > >                                 pagep->frags++;
> > >                         while (++pagep < frag_page);
> > >                 }
> > > +
> > > +               headlen = eth_get_headlen(rq->netdev, mxbuf->xdp.data, headlen);
> > > +
> >
> > The size of mxbuf->xdp.data is most likely not headlen here.
> >
> > The driver currently generates an xdp_buff with empty linear data,
> > passes it to the xdp program, and assumes the layout stays the same if
> > the xdp program does not change it through bpf_xdp_adjust_head() or
> > bpf_xdp_adjust_tail(). That assumption is not correct and I am working
> > on a fix. But, if we keep that assumption for now, mxbuf->xdp.data
> > will not contain any headers or payload. What you are trying to do
> > should probably be:
> >
> >         skb_frag_t *frag = &sinfo->frags[0];
> >
> >         headlen = eth_get_headlen(rq->netdev, skb_frag_address(frag),
> >                                   skb_frag_size(frag));

So, when I look at the headlen I get, it is correct (even with my old
code using mxbuf->xdp.data).
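
(For reference, my understanding of your suggestion, applied to the xdp
path, is roughly the sketch below - only an illustration, not the final
patch:)

        /* Take the header length from the first fragment rather than
         * from mxbuf->xdp.data, since the linear part of the xdp_buff
         * is empty in this driver.
         */
        skb_frag_t *frag = &sinfo->frags[0];

        headlen = eth_get_headlen(rq->netdev, skb_frag_address(frag),
                                  skb_frag_size(frag));

        __pskb_pull_tail(skb, headlen);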

To make sure I test the right thing, in which scenario would
mxbuf->xdp.data not contain any headers or payload? What do I need to
do to reproduce that?

Thanks,
Christoph

>
> Ok, I think I understand what you mean! Thanks for taking the time to explain!
>
> I will do some tests on my side to make sure I get it right.
>
> As your change goes to net and mine to net-next, I can wait until yours
> is in the tree so that there aren't any conflicts that need to be
> taken care of.
>
>
> Christoph
>
> >
> >
> >
> > >                 __pskb_pull_tail(skb, headlen);
> > >         } else {
> > >                 if (xdp_buff_has_frags(&mxbuf->xdp)) {
> > >
> > > --
> > > 2.50.1
> > >
> > >




