> On Jul 07, Pablo Neira Ayuso wrote:
> > On Fri, Jul 04, 2025 at 03:00:40PM +0200, Lorenzo Bianconi wrote:
> > > > On Thu, Jul 03, 2025 at 04:16:02PM +0200, Lorenzo Bianconi wrote:
> > > > > Introduce SW acceleration for IPIP tunnels in the netfilter flowtable
> > > > > infrastructure.
> > > > > IPIP SW acceleration can be tested by running the following scenario where
> > > > > the traffic is forwarded between two NICs (eth0 and eth1) and an IPIP
> > > > > tunnel is used to access a remote site (using eth1 as the underlay device):
> > > >
> > > > Question below.
> > > >
> > > > > ETH0 -- TUN0 <==> ETH1 -- [IP network] -- TUN1 (192.168.100.2)
> > > > >
> > > > > $ip addr show
> > > > > 6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > > >     link/ether 00:00:22:33:11:55 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.168.0.2/24 scope global eth0
> > > > >        valid_lft forever preferred_lft forever
> > > > > 7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > > >     link/ether 00:11:22:33:11:55 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.168.1.1/24 scope global eth1
> > > > >        valid_lft forever preferred_lft forever
> > > > > 8: tun0@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
> > > > >     link/ipip 192.168.1.1 peer 192.168.1.2
> > > > >     inet 192.168.100.1/24 scope global tun0
> > > > >        valid_lft forever preferred_lft forever
> > > > >
> > > > > $ip route show
> > > > > default via 192.168.100.2 dev tun0
> > > > > 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.2
> > > > > 192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1
> > > > > 192.168.100.0/24 dev tun0 proto kernel scope link src 192.168.100.1
> > > > >
> > > > > $nft list ruleset
> > > > > table inet filter {
> > > > >         flowtable ft {
> > > > >                 hook ingress priority filter
> > > > >                 devices = { eth0, eth1 }
> > > > >         }
> > > > >
> > > > >         chain forward {
> > > > >                 type filter hook forward priority filter; policy accept;
> > > > >                 meta l4proto { tcp, udp } flow add @ft
> > > > >         }
> > > > > }
> > > > >
> > > > > Reproducing the scenario described above using veths I got the following
> > > > > results:
> > > > > - TCP stream transmitted into the IPIP tunnel:
> > > > >   - net-next: ~41Gbps
> > > > >   - net-next + IPIP flowtable support: ~40Gbps
> > > >                                          ^^^^^^^^^
> > > > no gain on tx side.
> > >
> > > In this case the IPIP flowtable acceleration is effective just on the ACK
> > > packets, so I guess it is expected that we get roughly the same results.
> > > The real gain shows up when the TCP stream goes from the tunnel net_device
> > > to the NIC one.
> >
> > That is, only the rx side follows the flowtable datapath.
> > > > > - TCP stream received from the IPIP tunnel:
> > > > >   - net-next: ~35Gbps
> > > > >   - net-next + IPIP flowtable support: ~49Gbps
> > > > >
> > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
> > > > > ---
> > > > >  net/ipv4/ipip.c                  | 21 +++++++++++++++++++++
> > > > >  net/netfilter/nf_flow_table_ip.c | 34 ++++++++++++++++++++++++++++++++--
> > > > >  2 files changed, 53 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
> > > > > index 3e03af073a1ccc3d7597a998a515b6cfdded40b5..05fb1c859170d74009d693bc8513183bdec3ff90 100644
> > > > > --- a/net/ipv4/ipip.c
> > > > > +++ b/net/ipv4/ipip.c
> > > > > @@ -353,6 +353,26 @@ ipip_tunnel_ctl(struct net_device *dev, struct ip_tunnel_parm_kern *p, int cmd)
> > > > >  	return ip_tunnel_ctl(dev, p, cmd);
> > > > >  }
> > > > >
> > > > > +static int ipip_fill_forward_path(struct net_device_path_ctx *ctx,
> > > > > +				  struct net_device_path *path)
> > > > > +{
> > > > > +	struct ip_tunnel *tunnel = netdev_priv(ctx->dev);
> > > > > +	const struct iphdr *tiph = &tunnel->parms.iph;
> > > > > +	struct rtable *rt;
> > > > > +
> > > > > +	rt = ip_route_output(dev_net(ctx->dev), tiph->daddr, 0, 0, 0,
> > > > > +			     RT_SCOPE_UNIVERSE);
> > > > > +	if (IS_ERR(rt))
> > > > > +		return PTR_ERR(rt);
> > > > > +
> > > > > +	path->type = DEV_PATH_ETHERNET;
> > > > > +	path->dev = ctx->dev;
> > > > > +	ctx->dev = rt->dst.dev;
> > > > > +	ip_rt_put(rt);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > >  static const struct net_device_ops ipip_netdev_ops = {
> > > > >  	.ndo_init	= ipip_tunnel_init,
> > > > >  	.ndo_uninit	= ip_tunnel_uninit,
> > > > > @@ -362,6 +382,7 @@ static const struct net_device_ops ipip_netdev_ops = {
> > > > >  	.ndo_get_stats64 = dev_get_tstats64,
> > > > >  	.ndo_get_iflink = ip_tunnel_get_iflink,
> > > > >  	.ndo_tunnel_ctl = ipip_tunnel_ctl,
> > > > > +	.ndo_fill_forward_path = ipip_fill_forward_path,
> > > > >  };
> > > > >
> > > > >  #define IPIP_FEATURES (NETIF_F_SG | \
> > > > >
> > > > > diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
> > > > > index 8cd4cf7ae21120f1057c4fce5aaca4e3152ae76d..6b55e00b1022f0a2b02d9bfd1bd34bb55c1b83f7 100644
> > > > > --- a/net/netfilter/nf_flow_table_ip.c
> > > > > +++ b/net/netfilter/nf_flow_table_ip.c
> > > > > @@ -277,13 +277,37 @@ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
> > > > >  	return NF_STOLEN;
> > > > >  }
> > > > >
> > > > > +static bool nf_flow_ip4_encap_proto(struct sk_buff *skb, u16 *size)
> > > > > +{
> > > > > +	struct iphdr *iph;
> > > > > +
> > > > > +	if (!pskb_may_pull(skb, sizeof(*iph)))
> > > > > +		return false;
> > > > > +
> > > > > +	iph = (struct iphdr *)skb_network_header(skb);
> > > > > +	*size = iph->ihl << 2;
> > > > > +
> > > > > +	if (ip_is_fragment(iph) || unlikely(ip_has_options(*size)))
> > > > > +		return false;
> > > > > +
> > > > > +	if (iph->ttl <= 1)
> > > > > +		return false;
> > > > > +
> > > > > +	return iph->protocol == IPPROTO_IPIP;
> > >
> > > what kind of sanity checks are we supposed to perform? Something similar to
> > > what we have in ip_rcv_core()?
> >
> > I am not referring to sanity checks.
> >
> > VLAN/PPP ID (layer 2 encapsulation) is part of the lookup in the
> > flowtable, why does IPIP (layer 3 tunnel) not get the same handling?
>
> ack, right. Do you have any suggestion about what field (or combination
> of fields) we can use from the outer IP header, similar to the VLAN/PPP
> encapsulation?
What about a hash computed over some of the outer IP header fields
(e.g. IP saddr and daddr)?

Regards,
Lorenzo

> > > > Once the flow is in the flowtable, it is possible to inject traffic
> > > > with a forged outer IP header; this is only looking at the inner IP
> > > > header.
> > >
> > > what is the difference with the plain IP/TCP use-case?
> >
> > Not referring to the generic packet forging scenario. I refer to the
> > scenario that would allow forwarding packets with any IPIP outer header
> > as long as the inner header finds a match in the flowtable. I think that
> > needs to be sorted out.
>
> ack.
>
> Regards,
> Lorenzo
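
To make the hash idea above concrete, a minimal, untested sketch is shown
below. nf_flow_ipip_tunnel_key() is a hypothetical helper, not existing
flowtable code; where the value gets stored and matched in the flow tuple
is left open on purpose:

#include <linux/ip.h>
#include <linux/jhash.h>

/* Fold the outer IPv4 endpoints into a single 32bit value that could be
 * stored in the flow tuple at offload time and recomputed from the outer
 * header at lookup time, similar to how the VLAN/PPPoE encap ids are
 * matched today. A cached flow would then only match when the outer
 * saddr/daddr pair is the one the flow was created with, so a forged
 * outer header no longer hits the entry.
 */
static inline u32 nf_flow_ipip_tunnel_key(const struct iphdr *outer_iph)
{
	return jhash_2words((__force u32)outer_iph->saddr,
			    (__force u32)outer_iph->daddr, 0);
}

Whether a hash alone is enough, or whether the tuple should carry the raw
saddr/daddr pair so that a hash collision cannot make a foreign tunnel
match, is an open question.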