Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> writes:

[...]

>> --- a/kernel/bpf/core.c
>> +++ b/kernel/bpf/core.c
>> @@ -782,7 +782,11 @@ bool is_bpf_text_address(unsigned long addr)
>>
>>  struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
>>  {
>> -	struct bpf_ksym *ksym = bpf_ksym_find(addr);
>> +	struct bpf_ksym *ksym;
>> +
>> +	rcu_read_lock();
>> +	ksym = bpf_ksym_find(addr);
>> +	rcu_read_unlock();
>>
>>  	return ksym && ksym->prog ?
>>  	       container_of(ksym, struct bpf_prog_aux, ksym)->prog :
>
> This isn't right; we need to keep the read section open around the ksym
> access as well. We can then end the section and return the prog pointer.
> The caller is responsible for ensuring the prog outlives the RCU
> protection, or otherwise holding a reference for the prog's lifetime.
>
> We're using this to pick programs that have an active stack frame, so
> they aren't going away. But the ksym access itself needs to happen under
> the correct protection.
>
> I can fix it in a respin, whatever is best.

Are rcu_read_{lock,unlock}() still necessary in core.c:search_bpf_extables()
after this change?

Also, helpers.c:bpf_stack_walker() does not have lock/unlock in it; does this
patch need a Fixes tag for commit f18b03fabaa9 ("bpf: Implement BPF
exceptions")?
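
For illustration, a minimal sketch of keeping the read section open across the
ksym->prog dereference, along the lines suggested above (this is only my
reading of the suggestion, not the actual respin):

/* Sketch: dereference ksym->prog while still inside the RCU read section,
 * then drop the section and return the prog pointer.
 */
struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
{
	struct bpf_prog *prog = NULL;
	struct bpf_ksym *ksym;

	rcu_read_lock();
	ksym = bpf_ksym_find(addr);
	if (ksym && ksym->prog)
		prog = container_of(ksym, struct bpf_prog_aux, ksym)->prog;
	rcu_read_unlock();

	/* Caller must guarantee the prog outlives the RCU protection, e.g.
	 * because it has an active stack frame, or take a reference itself.
	 */
	return prog;
}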