> On May 7, 2025, at 4:58 PM, Jesper Dangaard Brouer <hawk@xxxxxxxxxx> wrote:
>
>
>
> On 07/05/2025 21.57, Jon Kohler wrote:
>>> On May 7, 2025, at 3:04 PM, Jesper Dangaard Brouer <hawk@xxxxxxxxxx> wrote:
>>>
>>>
>>>
>>> On 07/05/2025 19.47, Jon Kohler wrote:
>>>>> On May 7, 2025, at 1:21 PM, Willem de Bruijn <willemdebruijn.kernel@xxxxxxxxx> wrote:
>>>>>
>>>>>
>>>>> Jesper Dangaard Brouer wrote:
>>>>>>
>>>>>>
>>>>>> On 07/05/2025 19.02, Zvi Effron wrote:
>>>>>>> On Wed, May 7, 2025 at 9:37 AM Jesper Dangaard Brouer <hawk@xxxxxxxxxx> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 07/05/2025 15.29, Willem de Bruijn wrote:
>>>>>>>>> Stanislav Fomichev wrote:
>>>>>>>>>> On 05/06, Jon Kohler wrote:
>>>>>>>>>>> Introduce new XDP helpers:
>>>>>>>>>>> - xdp_headlen: Similar to skb_headlen
>>>>>>>>
>>>>>>>> I really dislike xdp_headlen(). This "headlen" originates from an SKB
>>>>>>>> implementation detail, that I don't think we should carry over into XDP
>>>>>>>> land.
>>>>>>>> We need to come up with something that isn't easily mis-read as the
>>>>>>>> header-length.
>>>>>>>
>>>>>>> ... snip ...
>>>>>>>
>>>>>>>>>> + * xdp_headlen - Calculate the length of the data in an XDP buffer
>>>>>>>
>>>>>>> How about xdp_datalen()?
>>>>>>
>>>>>> Yes, I like xdp_datalen() :-)
>>>>>
>>>>> This is confusing in that it is the inverse of skb->data_len:
>>>>> which is exactly the part of the data not in the skb head.
>>>>>
>>>>> There is value in consistent naming. I've never confused headlen
>>>>> with header len.
>>>>>
>>>>> But if diverging, at least let's choose something not
>>>>> associated with skbs with a different meaning.
>>>> Brainstorming a few options:
>>>> - xdp_head_datalen() ?
>>>> - xdp_base_datalen() ?
>>>> - xdp_base_headlen() ?
>>>> - xdp_buff_datalen() ?
>>>> - xdp_buff_headlen() ?
>>>> - xdp_datalen() ? (ZivE, JesperB)
>>>> - xdp_headlen() ? (WillemB, JonK, StanislavF, JacobK, DanielB)
>>>
>>> What about keeping it really simple: xdp_buff_len() ?
>> This is suspiciously close to xdp_get_buff_len(), so there could be some
>> confusion there, since that takes paged/frags into account transparently.
>
> Good point.
>
>>>
>>> Or even simpler: xdp_len() as the function documentation already
>>> describe this doesn't include frags.
>> There is a neat hint from Lorenzo’s change in bpf.h for bpf_xdp_get_buff_len()
>> that talks about both linear and paged length. Also, xdp_buff_flags’s
>> XDP_FLAGS_HAS_FRAGS says non-linear xdp buff.
>> Taking those hints, what about:
>> xdp_linear_len() == xdp->data_end - xdp->data
>> xdp_paged_len() == sinfo->xdp_frags_size
>> xdp_get_buff_len() == xdp_linear_len() + xdp_paged_len()
>
> I like xdp_linear_len() as it is descriptive/clear.

Ok thanks, I’ll send out a v4 to codify that. A rough sketch of what I have
in mind is at the bottom of this mail.

>
>
>> Just a thought. If not, that’s ok. I’m happy to do xdp_len, but do you then have a
>> suggestion about getting the non-linear size only?
>>
>
> I've not checked if we have API users that need to get the non-linear size only...
>
> A history rant:
> XDP started out as being limited to one-page ("packet-pages" was my
> original bad name). With a fixed XDP_HEADROOM of 256 bytes and reserved
> tailroom of 320 bytes sizeof(skb_shared_info) to be compatible with
> creating an SKB (that can use this as a "head" page). Limiting max MTU
> to be 3502 (assuming Eth(14)+2 VLAN headers=18).
> These constraints were why XDP was so fast. As time goes on we continue
> to add features and performance paper-cuts. Today, XDP_HEADROOM have
> become variable, leading to checks all over. With XDP multi-buffer
> support getting more features, we also have to add check all over for that.
> WARNING to end-users: XDP programs that use xdp.frags and the associated
> helpers are really SLOW (as these helper need to copy out data to stack
> or elsewhere). XDP is only fast if your XDP prog read the linear path
> with the older helpers (direct access) and ignore if packet have frags.
> We are slowly but surely making XDP slower and slower by paper-cuts.
> Guess, we should clearly document that, such that people don't think XDP
> multi-buffer access is fast. Sorry for the rant.

No worries on the rant, and I appreciate the context. My multi-buffer
journey is exploring whether multi-buffer XDP can be the mechanism for
batching up large payloads from vhost/net. Small payloads already have a
slick XDP-based batching mechanism that dequeues the batch through
tun_xdp_one, but large GSO payloads go down a slower, non-batched path and
also force all of the pending small payloads to flush first. I have it
half-ish working, so I’ll get more excited (or not?) about that later.
(To make the direct-access vs. frag-helper point concrete, I've put a tiny
example at the bottom of this mail.)

>
>
>>>
>>> To Jon, you seems to be on a cleanup spree:
>>> For SKBs netstack have this diagram documented [1]. Which also explains
>>> the concept of a "head" buffer, which isn't a concept for XDP. I would
>>> really like to see a diagram documenting both xdp_buff and xdp_frame
>>> data structures via ascii art, like the one for SKBs. (Hint, this is
>>> actually defined in the header file include/linux/skbuff.h, but
>>> converted to RST/HTML format.)
>>>
>>> [1] https://docs.kernel.org/networking/skbuff.html
>> I certainly am in a cleanup sort of mood, happy to help here. I see what
>> you're talking about, I’ll take a stab at this in a separate patch. Thanks
>> for the push and tip!
>
> Thanks for the cleanups.
> --Jesper
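
To make the naming above concrete, here is a rough sketch of the helpers I
have in mind for v4. Treat it as illustrative only: xdp_linear_len() and
xdp_paged_len() are the proposed (not yet merged) names, while
xdp_buff_has_frags(), xdp_get_shared_info_from_buff() and xdp_get_buff_len()
already exist in include/net/xdp.h. The actual patch may differ.

static __always_inline unsigned int xdp_linear_len(struct xdp_buff *xdp)
{
	/* Length of the linear area only; frags are not included. */
	return xdp->data_end - xdp->data;
}

static __always_inline unsigned int xdp_paged_len(struct xdp_buff *xdp)
{
	struct skb_shared_info *sinfo;

	if (!xdp_buff_has_frags(xdp))
		return 0;

	sinfo = xdp_get_shared_info_from_buff(xdp);
	return sinfo->xdp_frags_size;
}

/* The existing xdp_get_buff_len() then reduces to:
 *	xdp_linear_len(xdp) + xdp_paged_len(xdp)
 */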
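
And the tiny example I mentioned above, illustrating the direct-access vs.
frag-helper difference Jesper describes. This is not from the patch set; it
only assumes the existing bpf_xdp_load_bytes() helper and libbpf's
"xdp.frags" section name (i.e. a program loaded with BPF_F_XDP_HAS_FRAGS):

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp.frags")
int xdp_len_demo(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	__u8 hdr[64];

	/* Fast path: the bytes we care about sit in the linear area, so a
	 * bounds check against data_end is enough and nothing is copied.
	 */
	if (data + sizeof(hdr) <= data_end)
		return XDP_PASS;

	/* Slow path: the bytes may extend into frags, so copy them onto
	 * the stack with the frag-aware helper before inspecting them.
	 * This copy is the cost being warned about above.
	 */
	if (bpf_xdp_load_bytes(ctx, 0, hdr, sizeof(hdr)) < 0)
		return XDP_DROP;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";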