On Tue, Jul 1, 2025 at 11:17 PM Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> In preparation of figuring out the closest program that led to the
> current point in the kernel, implement a function that scans through the
> stack trace and finds out the closest BPF program when walking down the
> stack trace.
>
> Special care needs to be taken to skip over kernel and BPF subprog
> frames. We basically scan until we find a BPF main prog frame. The
> assumption is that if a program calls into us transitively, we'll
> hit it along the way. If not, we end up returning NULL.
>
> Contextually the function will be used in places where we know the
> program may have called into us.
>
> Due to reliance on arch_bpf_stack_walk(), this function only works on
> x86 with CONFIG_UNWINDER_ORC, arm64, and s390. Remove the warning from
> arch_bpf_stack_walk as well since we call it outside bpf_throw()
> context.
>
> Acked-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
> ---
>  arch/x86/net/bpf_jit_comp.c |  1 -
>  include/linux/bpf.h         |  1 +
>  kernel/bpf/core.c           | 33 +++++++++++++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 1 deletion(-)
>

Reviewed-by: Emil Tsalapatis <emil@xxxxxxxxxxxxxxx>

> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 15672cb926fc..40e1b3b9634f 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -3845,7 +3845,6 @@ void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp
>  	}
>  	return;
>  #endif
> -	WARN(1, "verification of programs using bpf_throw should have failed\n");
>  }
>
>  void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 09f06b1ea62e..4d577352f3e6 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -3662,5 +3662,6 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
>
>  int bpf_prog_get_file_line(struct bpf_prog *prog, unsigned long ip, const char **filep,
>  			   const char **linep, int *nump);
> +struct bpf_prog *bpf_prog_find_from_stack(void);
>
>  #endif /* _LINUX_BPF_H */
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index b4203f68cf33..ab8b3446570c 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -3262,4 +3262,37 @@ int bpf_prog_get_file_line(struct bpf_prog *prog, unsigned long ip, const char *
>  	return 0;
>  }
>
> +struct walk_stack_ctx {
> +	struct bpf_prog *prog;
> +};
> +
> +static bool find_from_stack_cb(void *cookie, u64 ip, u64 sp, u64 bp)
> +{
> +	struct walk_stack_ctx *ctxp = cookie;
> +	struct bpf_prog *prog;
> +
> +	/*
> +	 * The RCU read lock is held to safely traverse the latch tree, but we
> +	 * don't need its protection when accessing the prog, since it has an
> +	 * active stack frame on the current stack trace, and won't disappear.
> +	 */
> +	rcu_read_lock();
> +	prog = bpf_prog_ksym_find(ip);
> +	rcu_read_unlock();
> +	if (!prog)
> +		return true;
> +	if (bpf_is_subprog(prog))
> +		return true;
> +	ctxp->prog = prog;
> +	return false;
> +}
> +
> +struct bpf_prog *bpf_prog_find_from_stack(void)
> +{
> +	struct walk_stack_ctx ctx = {};
> +
> +	arch_bpf_stack_walk(find_from_stack_cb, &ctx);
> +	return ctx.prog;
> +}
> +
>  #endif
> --
> 2.47.1
>
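
As a side note for other readers, here is a minimal sketch of how I'd
expect a caller to use the new helper (the kfunc name and use case below
are hypothetical, purely for illustration):

__bpf_kfunc void bpf_log_calling_prog(void)
{
	struct bpf_prog *prog;

	/*
	 * NULL when no BPF main prog frame is found on the current stack,
	 * or when arch_bpf_stack_walk() is a no-op on this arch/unwinder
	 * configuration.
	 */
	prog = bpf_prog_find_from_stack();
	if (!prog) {
		pr_info("no calling BPF program found\n");
		return;
	}
	pr_info("called from BPF prog %s\n", prog->aux->name);
}

The important bit being that callers must handle the NULL return, since
on unsupported configurations the walk silently finds nothing.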