Re: [Patch bpf-next v4 4/4] tcp_bpf: improve ingress redirection performance with message corking

On Thu, Jul 03, 2025 at 01:32:08PM +0200, Jakub Sitnicki wrote:
> I'm all for reaping the benefits of batching, but I'm not thrilled about
> having a backlog worker on the path. The one we have on the sk_skb path
> has been a bottleneck:

It depends on what you compare against. Compared with vanilla TCP_BPF,
we did see a 5% latency increase. Compared with regular TCP, it is
still much better. Our goal is to make Cilium's sockops-enable
competitive with regular TCP, hence the comparison with regular TCP.

I hope this makes sense to you. Sorry if this was not clear in our cover
letter.

> 
> 1) There's no backpressure propagation so you can have a backlog
> build-up. One thing to check is what happens if the receiver closes its
> window.

Right, I am sure there is still plenty of room for further
optimization. The only question is how much we need for now. How about
optimizing it one step at a time? :)
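To make the zero-window concern above concrete, here is a minimal sketch (plain TCP over loopback, no sockmap or BPF involved) of the backpressure regular TCP provides natively: when the receiver stops reading and its buffer fills, a non-blocking sender stalls with EAGAIN instead of queueing unboundedly. The buffer sizes are arbitrary illustration values, and with a backlog worker and no backpressure propagation there is no equivalent stall point.

```python
import socket

# Plain-TCP illustration of receive-window backpressure: the receiver
# never reads, so the sender's non-blocking send() loop eventually
# fails with EAGAIN (BlockingIOError) once the send buffer and the
# receiver's window are full. Buffer sizes are kept small on purpose.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Set SO_RCVBUF on the listener so accepted sockets inherit it.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

snd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
snd.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
snd.connect(srv.getsockname())
rcv, _ = srv.accept()

snd.setblocking(False)
sent = 0
try:
    while True:
        sent += snd.send(b"x" * 4096)
except BlockingIOError:
    pass  # window + send buffer full: the kernel pushed back

print(f"sender stalled after {sent} bytes")
snd.close(); rcv.close(); srv.close()
```

The interesting property is that `sent` is bounded by the socket buffers rather than growing without limit, which is exactly what an in-kernel backlog without backpressure propagation would not give you.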

> 
> 2) There's a scheduling latency. That's why the performance of splicing
> sockets with sockmap (ingress-to-egress) looks bleak [1].

Same for regular TCP: we have to wake up the receiver/worker. Or am I
misunderstanding this point?

> 
> So I have to dig deeper...
> 
> Have you considered and/or evaluated any alternative designs? For
> instance, what stops us from having an auto-corking / coalescing
> strategy on the sender side?

Auto-corking _may_ not be as easy as in TCP, since we essentially have
no protocol here, just a pure socket layer.
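To illustrate the difficulty: TCP's auto-corking can key off protocol state (e.g. outstanding ACKed data), while a pure socket layer only has size and time to go on. A hypothetical user-level sketch of such threshold-based coalescing might look like this; the class name, thresholds, and callback are all made up for illustration, not part of any patch:

```python
import time

class SendCoalescer:
    """Hypothetical sender-side coalescing sketch: buffer small writes
    and flush once a byte threshold or a deadline is reached. Unlike
    TCP auto-corking, there is no protocol signal to key off, so both
    thresholds are guesses."""

    def __init__(self, send_fn, max_bytes=16 * 1024, max_delay=1.0):
        self.send_fn = send_fn        # called with one coalesced bytes object
        self.max_bytes = max_bytes
        self.max_delay = max_delay
        self.buf = bytearray()
        self.first_queued = None      # time of oldest buffered byte

    def send(self, data: bytes):
        if self.first_queued is None:
            self.first_queued = time.monotonic()
        self.buf += data
        if (len(self.buf) >= self.max_bytes
                or time.monotonic() - self.first_queued >= self.max_delay):
            self.flush()

    def flush(self):
        if self.buf:
            self.send_fn(bytes(self.buf))
            self.buf.clear()
        self.first_queued = None

# Usage: four small writes become one larger send once 8 bytes accrue.
chunks = []
c = SendCoalescer(chunks.append, max_bytes=8)
for piece in (b"ab", b"cd", b"ef", b"gh"):
    c.send(piece)
c.flush()
print(chunks)  # → [b'abcdefgh']
```

The hard part, as noted above, is choosing `max_bytes` and `max_delay` without any protocol feedback, which is why the kernel-side corking in the patch takes a different route.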

Thanks for your review!



