Re: [PATCH bpf-next V2 0/7] xdp: Allow BPF to set RX hints for XDP_REDIRECTed packets

On 01/08/2025 22.38, Jakub Kicinski wrote:
> On Thu, 31 Jul 2025 18:27:07 +0200 Jesper Dangaard Brouer wrote:
>>> iirc, a xdp prog can be attached to a cpumap. The skb can be created by
>>> that xdp prog running on the remote cpu. It should be like a xdp prog
>>> returning a XDP_PASS + an optional skb. The xdp prog can set some fields
>>> in the skb. Other than setting fields in the skb, something else may be
>>> also possible in the future, e.g. look up sk, earlier demux ...etc.

>> I have strong reservations about having the BPF program itself trigger
>> the SKB allocation. I believe this would fundamentally break the
>> performance model that makes cpumap redirect so effective.
>
> See, I have similar concerns about growing struct xdp_frame.


IMHO there is a huge difference in doing memory allocs+init vs. growing
struct xdp_frame.

It is very important to notice that this patchset is actually not growing
xdp_frame in the traditional sense; instead we are adding an optional
area to xdp_frame (plus some flags to tell if the area is in use).  Remember
the xdp_frame area is not allocated or mem-zeroed (except the flags).  If
not used, the members in struct xdp_rx_meta are never touched. Thus,
there is actually no performance impact from growing struct xdp_frame in
this way. Do you still have concerns?
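
To make this concrete, here is a minimal sketch of the layout idea (the
struct and field names below are purely illustrative, not the exact
layout from the patches):

	/* Illustrative sketch only -- not the actual patchset layout */
	struct xdp_rx_meta_sketch {
		u32 hash;	/* RX hash taken from the HW descriptor */
		u16 vlan_tci;	/* VLAN tag, if HW-offloaded */
	};

	struct xdp_frame_sketch {
		void *data;
		u16 len;
		u32 flags;	/* only member that must be initialized */
		struct xdp_rx_meta_sketch rx_meta; /* valid only when a flag
						    * bit says so; never
						    * zeroed up front */
	};

The point is that rx_meta lives at the tail of the frame and is only
written (and only read) when the corresponding flag bit is set, so the
fast path never pays for clearing or initializing it.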


> That's why the guiding principle for me would be to make sure that
> the features we add, beyond "classic XDP" as needed by DDoS, are
> entirely optional.

Exactly, we agree.  What we do in this patchset is entirely optional.
These changes do not slow down "classic XDP" or our DDoS use-case.


> And if we include the goal of moving skb allocation
> out of the driver to the xdp_frame growth, the drivers will sooner or
> later unconditionally populate the xdp_frame. Decreasing performance
> of "classic XDP"?


No, that is the beauty of this solution: it will not decrease the
performance of "classic XDP".

Do keep in mind that "moving skb allocation out of the driver" is not
part of this patchset; it is a moonshot goal that will take a long time
(but we have already been "simulating" this via XDP-redirect for years now).
Drivers should obviously not unconditionally populate the xdp_frame's
rx_meta area.  The right time to populate rx_meta is when the driver
reaches the XDP_PASS case (normal netstack delivery). Today all drivers
at this stage populate the SKB metadata (e.g. rx-hash + vlan) from the
RX-descriptor anyway.  Thus, I don't see how replacing those writes would
decrease performance.
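
As an illustration of what that would look like on the driver side (the
helper names below are made up for this sketch, not functions from the
patchset), the XDP_PASS path simply redirects the writes the driver
already does today:

	case XDP_PASS:
		/* Today the driver copies RX-descriptor info into the skb:
		 *   skb_set_hash(skb, rx_hash, hash_type);
		 *   __vlan_hwaccel_put_tag(skb, vlan_proto, vlan_tci);
		 * In the moonshot scenario the same values are stashed in
		 * the frame's rx_meta area instead (illustrative helpers):
		 */
		xdp_frame_store_rx_hash(xdpf, rx_hash, hash_type);
		xdp_frame_store_vlan(xdpf, vlan_proto, vlan_tci);
		break;

The number of writes taken from the RX descriptor stays the same; only
the destination changes.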


>> The key to XDP's high performance lies in processing a bulk of
>> xdp_frames in a tight loop to amortize costs. The existing cpumap code
>> on the remote CPU is already highly optimized for this: it performs bulk
>> allocation of SKBs and uses careful prefetching to hide the memory
>> latency. Allowing a BPF program to sometimes trigger a heavyweight SKB
>> alloc+init (4 cache-line misses) would bypass all these existing
>> optimizations. It would introduce significant jitter into the pipeline
>> and disrupt the entire bulk-processing model we rely on for performance.
>>
>> This performance is not just theoretical;

> Somewhat off-topic for the architecture, I think, but do you happen
> to have any real life data for that? IIRC the "listification" was a
> moderate success for the skb path.. Or am I misreading and you have
> other benefits of a tight processing loop in mind?

Our "tight processing loop" for NAPI (net_rx_action/napi_pool) is not
performing as well as we want. One major reason is that the CPU is being
stalled each time in the loop when the NIC driver needs to clear the 4
cache-lines for the SKB.  XDP have shown us that avoiding these steps is
a huge performance boost.  The "moving skb allocation out of the driver"
is one step towards improving the NAPI loop. As you hint we also need
some bulking or "listification".  I'm not a huge fan of SKB
"listification". XDP-redirect devmap/cpumap uses an array for creating
an RX bulk "stage".  The SKB listification work was never fully
completed IMHO.  Back then, I was working on getting PoC for SKB
forwarding working, but as soon as we reached any of the netfilter hooks
points the SKB list would get split into individual SKBs. IIRC SKB
listification only works for the first part of netstack SKB input code
path. And "late" part of qdisc TX layer, but the netstack code in-
between will always cause the SKB list would get split into individual
SKBs.  IIRC only back-pressure during qdisc TX will cause listification
to be used. It would be great if someone have cycles to work on
completing more of the SKB listification.
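
For reference, the array-based RX bulk "stage" in cpumap is roughly this
shape (heavily simplified pseudo-kernel code, not the actual
implementation):

	#define BATCH 8
	void *frames[BATCH], *skbs[BATCH];
	int i, n;

	/* Pull a bulk of xdp_frames from the cpumap ptr_ring in one go */
	n = __ptr_ring_consume_batched(ring, frames, BATCH);

	/* Amortize SKB allocation cost over the whole batch */
	if (!kmem_cache_alloc_bulk(skbuff_cache, gfp, n, skbs))
		return;

	for (i = 0; i < n; i++) {
		struct xdp_frame *xdpf = frames[i];

		/* Prefetch the next frame while working on this one */
		if (i + 1 < n)
			prefetch(frames[i + 1]);

		/* build skbs[i] from xdpf, then deliver the whole batch */
	}

This is the bulk-processing model the quoted paragraph above refers to.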

--Jesper