Re: [PATCH bpf-next v2 02/18] x86,bpf: add bpf_global_caller for global trampoline

On 7/15/25 16:36, Menglong Dong wrote:

On 7/15/25 10:25, Alexei Starovoitov wrote:
Pls share top 10 from "perf report" while running the bench.
I'm curious about what's hot.
Last time I benchmarked fentry/fexit, migrate_disable/enable were
among the hottest functions. I suspect it's the case here as well.


You are right: migrate_disable/enable are among the hottest functions for
both the bpf trampoline and the global trampoline. Following is the perf top
output for fentry-multi:
  36.36% bpf_prog_2dcccf652aac1793_bench_trigger_fentry_multi [k] bpf_prog_2dcccf652aac1793_bench_trigger_fentry_multi
  20.54% [kernel] [k] migrate_enable
  19.35% [kernel] [k] bpf_global_caller_5_run
  6.52% [kernel] [k] bpf_global_caller_5
  3.58% libc.so.6 [.] syscall
  2.88% [kernel] [k] entry_SYSCALL_64
  1.50% [kernel] [k] memchr_inv
  1.39% [kernel] [k] fput
  1.04% [kernel] [k] migrate_disable
  0.91% [kernel] [k] _copy_to_user

And I also ran the same test for fentry:
  54.63% bpf_prog_2dcccf652aac1793_bench_trigger_fentry [k] bpf_prog_2dcccf652aac1793_bench_trigger_fentry
  10.43% [kernel] [k] migrate_enable
  10.07% bpf_trampoline_6442517037 [k] bpf_trampoline_6442517037
  8.06% [kernel] [k] __bpf_prog_exit_recur
  4.11% libc.so.6 [.] syscall
  2.15% [kernel] [k] entry_SYSCALL_64
  1.48% [kernel] [k] memchr_inv
  1.32% [kernel] [k] fput
  1.16% [kernel] [k] _copy_to_user
  0.73% [kernel] [k] bpf_prog_test_run_raw_tp

The migrate_enable/disable pair is used for the recursion check, and I
have even considered doing the recursion check the same way ftrace does,
to eliminate this overhead :/
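
For reference, a rough C sketch of the two schemes (not code from this
series): prog_enter_today() is only loosely modeled on the existing
__bpf_prog_enter_recur(), and the ftrace-style variant, including the
per-task bpf_tramp_recursion field, is purely hypothetical.

/* Current scheme: recursion is detected with a per-CPU counter, which
 * only works because migrate_disable() first pins the task to this CPU
 * -- that call is what dominates the profiles above. */
static u64 notrace prog_enter_today(struct bpf_prog *prog)
{
        rcu_read_lock();
        migrate_disable();
        if (unlikely(this_cpu_inc_return(*(prog->active)) != 1))
                return 0;               /* recursed, skip the prog */
        return bpf_prog_start_time();
}

/* ftrace-style idea: a test_and_set bit in the task itself, so no
 * migrate_disable() is needed just for the recursion check.
 * current->bpf_tramp_recursion is a made-up field for illustration. */
static u64 notrace prog_enter_ftrace_style(struct bpf_prog *prog)
{
        rcu_read_lock();
        if (unlikely(test_and_set_bit(0, &current->bpf_tramp_recursion)))
                return 0;               /* recursed, skip the prog */
        return bpf_prog_start_time();
}

The sketch is only meant to show where the migrate_disable()/migrate_enable()
cycles come from; whether a per-task flag like ftrace's is actually safe for
the global trampoline is the open question.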

Thanks!
Menglong Dong



