On 28 Aug 20:36, Christoph Paasch via B4 Relay wrote:
From: Christoph Paasch <cpaasch@xxxxxxxxxx>

mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
bytes from the page-pool to the skb's linear part. Those 256 bytes
include part of the payload.

When attempting to do GRO in skb_gro_receive(), if headlen > data_offset
(and skb->head_frag is not set), we end up aggregating packets in the
frag_list. This is of course not good when we are CPU-limited. It also
causes a worse skb->len/truesize ratio, ...

So, let's avoid copying parts of the payload to the linear part. We use
eth_get_headlen() to parse the headers and compute the length of the
protocol headers, which is then used to copy only the relevant bits to
the skb's linear part (see the illustrative sketch after this message).
We still allocate MLX5E_RX_MAX_HEAD for the skb so that if the
networking stack needs to call pskb_may_pull() later on, we don't need
to reallocate memory.

This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
LRO enabled):

BEFORE:
=======

(netserver pinned to core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
 87380  16384 262144    60.01    32547.82

(netserver pinned to adjacent core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
 87380  16384 262144    60.00    52531.67

AFTER:
======

(netserver pinned to core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
 87380  16384 262144    60.00    52896.06

(netserver pinned to adjacent core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
 87380  16384 262144    60.00    85094.90

Additional tests across a larger range of parameters, with and without
LRO, with and without IPv6 encapsulation, different MTUs (1500, 4096,
9000), and different TCP read/write sizes as well as UDP benchmarks,
have all shown equal or better performance with this patch.

Signed-off-by: Christoph Paasch <cpaasch@xxxxxxxxxx>
Reviewed-by: Saeed Mahameed <saeedm@xxxxxxxxxx>
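
For readers unfamiliar with the technique, below is a minimal, hypothetical sketch (not the actual mlx5e code) of how eth_get_headlen() can be used on RX to copy only the protocol headers into the skb's linear part while leaving the payload in page frags. The names build_rx_skb(), rxq_page, page_offset, frame_len and MAX_HEAD_COPY are invented for illustration; eth_get_headlen(), netdev_alloc_skb(), skb_put_data() and skb_add_rx_frag() are the real kernel APIs being assumed.

/*
 * Illustrative sketch only -- not the mlx5e patch itself. It shows the
 * general idea: parse the headers with eth_get_headlen(), copy just
 * those bytes into the linear area, and attach the rest of the frame
 * as a page fragment.
 */
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/mm.h>

#define MAX_HEAD_COPY 256	/* mirrors MLX5E_RX_MAX_HEAD in the driver */

static struct sk_buff *build_rx_skb(struct net_device *dev,
				    struct page *rxq_page,
				    unsigned int page_offset,
				    unsigned int frame_len)
{
	void *data = page_address(rxq_page) + page_offset;
	unsigned int headlen;
	struct sk_buff *skb;

	/* Allocate the full MAX_HEAD_COPY linear area up front so a later
	 * pskb_may_pull() by the stack stays within preallocated tailroom.
	 */
	skb = netdev_alloc_skb(dev, MAX_HEAD_COPY);
	if (unlikely(!skb))
		return NULL;

	/* Flow-dissector based header parse: returns the length of the
	 * protocol headers only, bounded by the third argument.
	 */
	headlen = eth_get_headlen(dev, data,
				  min_t(unsigned int, frame_len,
					MAX_HEAD_COPY));

	/* Copy only the headers into the linear part. */
	skb_put_data(skb, data, headlen);

	/* Hang the payload off the skb as a page fragment so GRO can merge
	 * it into frags instead of falling back to the frag_list.
	 */
	if (frame_len > headlen) {
		get_page(rxq_page);
		skb_add_rx_frag(skb, 0, rxq_page, page_offset + headlen,
				frame_len - headlen, frame_len - headlen);
	}

	return skb;
}

The key design point the sketch tries to capture is that the linear area is still sized for the maximum header copy, so the cheaper header-only memcpy does not trade away pskb_may_pull() headroom later in the stack; the truesize accounting here is simplified and would follow the page-pool fragment stride in a real driver.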