Re: [PATCH RFC bpf-next v2 01/17] trait: limited KV store for packet metadata

On Thu, May 01, 2025 at 12:43 PM +02, Toke Høiland-Jørgensen wrote:
> Jakub Sitnicki <jakub@xxxxxxxxxxxxxx> writes:
>
>> On Wed, Apr 30, 2025 at 11:19 AM +02, Toke Høiland-Jørgensen wrote:
>>> Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> writes:
>>>
>>>> On Fri, Apr 25, 2025 at 12:27 PM Arthur Fabre <arthur@xxxxxxxxxxxxxxx> wrote:
>>>>>
>>>>> On Thu Apr 24, 2025 at 6:22 PM CEST, Alexei Starovoitov wrote:
>>>>> > On Tue, Apr 22, 2025 at 6:23 AM Arthur Fabre <arthur@xxxxxxxxxxxxxxx> wrote:
>>
>> [...]
>>
>>>>> * Hardware metadata: metadata exposed from NICs (like the receive
>>>>>   timestamp, 4 tuple hash...) is currently only exposed to XDP programs
>>>>>   (via kfuncs).
>>>>>   But that doesn't expose them to the rest of the stack.
>>>>>   Storing them in traits would allow XDP, other BPF programs, and the
>>>>>   kernel to access and modify them (for example to take into account
>>>>>   decapsulating a packet).
>>>>
>>>> Sure. If traits == existing metadata bpf prog in xdp can communicate
>>>> with bpf prog in skb layer via that "trait" format.
>>>> xdp can take tuple hash and store it as key==0 in the trait.
>>>> The kernel doesn't need to know how to parse that format.
>>>
>>> Yes it does, to propagate it to the skb later. I.e.,
>>>
>>> XDP prog on NIC: get HW hash, store in traits, redirect to CPUMAP
>>> CPUMAP: build skb, read hash from traits, populate skb hash
>>>
>>> Same thing for (at least) timestamps and checksums.
>>>
>>> Longer term, with traits available we could move more skb fields into
>>> traits to make struct sk_buff smaller (by moving optional fields to
>>> traits that don't take up any space if they're not set).
>>
>> Perhaps we can have the cake and eat it too.
>>
>> We could leave the traits encoding/decoding out of the kernel and, at
>> the same time, *expose it* to the network stack through BPF struct_ops
>> programs. At a high level, for example ->get_rx_hash(), not the
>> individual K/V access. The traits_ops vtable could grow as needed to
>> support new use cases.
>>
>> If you think about it, it's not so different from BPF-powered congestion
>> algorithms and scheduler extensions. They also expose some state, kept in
>> maps, that only the loaded BPF code knows how to operate on.
>
> Right, the difference being that the kernel works perfectly well without
> an eBPF congestion control algorithm loaded because it has its own
> internal implementation that is used by default.

It seems to me that any code path in the network stack still needs to
work *even if* the traits K/V store is not available. There has to be a
fallback: RX hash not present in the traits K/V store? Then recompute
it. There is no guarantee that there will be space available in the
traits K/V store for whatever value the network stack would like to
cache there.

So if we can agree that the traits K/V store is a cache with limited
capacity, and that any code path accessing it must be prepared to deal
with a cache miss, then I think that with the struct_ops approach you
could have a built-in default implementation for exclusive use by the
network stack.

This default implementation of the storage access just wouldn't be
exposed to BPF or user space. If you want access from BPF/userland,
you'd need to provide a BPF-backed struct_ops for accessing the traits
K/V store.

> Having a hard dependency on BPF for in-kernel functionality is a
> different matter, and limits the cases it can be used for.

Notice that we already rely on an XDP program being attached; otherwise
the storage for the traits K/V store is not available.

> Besides, I don't really see the point of leaving the encoding out of the
> kernel? We keep the encoding kernel-internal anyway, and just expose a
> get/set API, so there's no constraint on changing it later (that's kinda
> the whole point of doing that). And with bulk get/set there's not an
> efficiency argument either. So what's the point, other than doing things
> in BPF for its own sake?

There's the additional complexity in the socket glue layer, but I've
already mentioned that.

What I think makes it even more appealing is that with the high-level
struct_ops approach, we abstract away the individual K/V pair access and
leave the problem of "key registration" (e.g., RX hash is key 42) to the
user-provided implementation.

You, as the user, decide for your particular system how you want to lay
out the values and for which values you actually want to reserve
space. IOW, we leave any trade-off decisions to the user, in the spirit
of providing a mechanism, not a policy.
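For instance, "key registration" could be nothing more than a constant
the user's implementation picks. A toy sketch (TRAIT_KEY_RX_HASH, the
slot count, and the linear-scan layout are all invented here to show
the idea, and the capacity limit means inserts can legitimately fail):

```c
/* Illustrative only: a toy fixed-capacity store. The key numbering and
 * layout are the user's choice; the kernel never parses this format. */
#include <stdint.h>

#define TRAIT_KEY_RX_HASH 42 /* user's pick, e.g. "RX hash is key 42" */
#define TRAIT_SLOTS 4        /* limited capacity: sets can fail */

struct trait_store {
	uint64_t keys[TRAIT_SLOTS];
	uint64_t vals[TRAIT_SLOTS];
	int used;
};

/* Returns 0 on success, -1 when the store is full (caller must cope). */
int trait_set(struct trait_store *s, uint64_t key, uint64_t val)
{
	for (int i = 0; i < s->used; i++) {
		if (s->keys[i] == key) {
			s->vals[i] = val;
			return 0;
		}
	}
	if (s->used == TRAIT_SLOTS)
		return -1;
	s->keys[s->used] = key;
	s->vals[s->used] = val;
	s->used++;
	return 0;
}

/* Returns 0 and fills *val on a hit, -1 on a miss. */
int trait_get(const struct trait_store *s, uint64_t key, uint64_t *val)
{
	for (int i = 0; i < s->used; i++) {
		if (s->keys[i] == key) {
			*val = s->vals[i];
			return 0;
		}
	}
	return -1;
}
```

The struct_ops vtable would sit on top of something like this: the
user-provided ->get_rx_hash() knows that key 42 holds the hash, while
the kernel only ever sees the high-level accessor.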
