Re: [PATCH bpf-next] bpf: make the attach target more accurate

On 7/7/25 1:35 PM, Menglong Dong wrote:
Currently, we look up the address of the attach target in
bpf_check_attach_target() with find_kallsyms_symbol_value() or
kallsyms_lookup_name(), which is not accurate in some cases.

For example, we want to attach to the target "t_next", but multiple
symbols named "t_next" exist in kallsyms. The one that
kallsyms_lookup_name() returns may have no ftrace record, which makes
the attach target unavailable. So we want the one that has an ftrace
record to be returned.

Meanwhile, there may be multiple symbols named "t_next" that have
ftrace records. In this case, the attach target is ambiguous, so the
attach should fail.

Introduce the function bpf_lookup_attach_addr() to do the address
lookup, which solves this problem.

Signed-off-by: Menglong Dong <dongml2@xxxxxxxxxxxxxxx>

Breaks CI, see also:

First test_progs failure (test_progs-aarch64-gcc-14):
#467/1 tracing_failure/bpf_spin_lock
test_bpf_spin_lock:PASS:tracing_failure__open 0 nsec
libbpf: prog 'test_spin_lock': BPF program load failed: -ENOENT
libbpf: prog 'test_spin_lock': -- BEGIN PROG LOAD LOG --
The address of function bpf_spin_lock cannot be found
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: prog 'test_spin_lock': failed to load: -ENOENT
libbpf: failed to load object 'tracing_failure'
libbpf: failed to load BPF skeleton 'tracing_failure': -ENOENT
test_bpf_spin_lock:FAIL:tracing_failure__load unexpected error: -2 (errno 2)
#467/2 tracing_failure/bpf_spin_unlock
test_bpf_spin_lock:PASS:tracing_failure__open 0 nsec
libbpf: prog 'test_spin_unlock': BPF program load failed: -ENOENT
libbpf: prog 'test_spin_unlock': -- BEGIN PROG LOAD LOG --
The address of function bpf_spin_unlock cannot be found
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: prog 'test_spin_unlock': failed to load: -ENOENT
libbpf: failed to load object 'tracing_failure'
libbpf: failed to load BPF skeleton 'tracing_failure': -ENOENT
test_bpf_spin_lock:FAIL:tracing_failure__load unexpected error: -2 (errno 2)

  kernel/bpf/verifier.c | 76 ++++++++++++++++++++++++++++++++++++++++---
  1 file changed, 71 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0f6cc2275695..9a7128da6d13 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -23436,6 +23436,72 @@ static int check_non_sleepable_error_inject(u32 btf_id)
  	return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
  }
+struct symbol_lookup_ctx {
+	const char *name;
+	unsigned long addr;
+};
+
+static int symbol_callback(void *data, unsigned long addr)
+{
+	struct symbol_lookup_ctx *ctx = data;
+
+	if (!ftrace_location(addr))
+		return 0;
+
+	if (ctx->addr)
+		return -EADDRNOTAVAIL;
+
+	ctx->addr = addr;
+
+	return 0;
+}
+
+static int symbol_mod_callback(void *data, const char *name, unsigned long addr)
+{
+	if (strcmp(((struct symbol_lookup_ctx *)data)->name, name) != 0)
+		return 0;
+
+	return symbol_callback(data, addr);
+}
+
+/**
+ * bpf_lookup_attach_addr - Look up the address for a symbol
+ *
+ * @mod: kernel module to look up the symbol in; NULL means to look up
+ * kernel symbols
+ * @sym: the symbol to resolve
+ * @addr: pointer to store the result
+ *
+ * Look up the address of the symbol @sym; the address must have a
+ * corresponding ftrace location. If multiple symbols named @sym exist,
+ * the one that has an ftrace location is returned. If more than one of
+ * them has an ftrace location, -EADDRNOTAVAIL is returned.
+ *
+ * Returns: 0 on success, -errno otherwise.
+ */
+static int bpf_lookup_attach_addr(const struct module *mod, const char *sym,
+				  unsigned long *addr)
+{
+	struct symbol_lookup_ctx ctx = { .addr = 0, .name = sym };
+	int err;
+
+	if (!mod)
+		err = kallsyms_on_each_match_symbol(symbol_callback, sym, &ctx);

This is also not really equivalent to kallsyms_lookup_name(): kallsyms_on_each_match_symbol()
only iterates over the symbols in vmlinux, whereas kallsyms_lookup_name() looks up both vmlinux
and modules.
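
Something along these lines might keep that fallback behavior (a rough,
untested sketch reusing the ctx/callbacks from this patch; it assumes
module_kallsyms_on_each_symbol() walks all loaded modules when modname
is NULL):

	if (!mod) {
		/* Search the matching vmlinux symbols first ... */
		err = kallsyms_on_each_match_symbol(symbol_callback, sym, &ctx);
		/* ... then, like kallsyms_lookup_name(), fall back to
		 * scanning the loaded modules if nothing was found.
		 */
		if (!err && !ctx.addr)
			err = module_kallsyms_on_each_symbol(NULL, symbol_mod_callback,
							     &ctx);
	} else {
		err = module_kallsyms_on_each_symbol(mod->name, symbol_mod_callback,
						     &ctx);
	}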

+	else
+		err = module_kallsyms_on_each_symbol(mod->name, symbol_mod_callback,
+						     &ctx);
+
+	if (!ctx.addr)
+		return -ENOENT;
+
+	if (err)
+		return err;
+
+	*addr = ctx.addr;
+
+	return 0;
+}
+
  int bpf_check_attach_target(struct bpf_verifier_log *log,
  			    const struct bpf_prog *prog,
  			    const struct bpf_prog *tgt_prog,
@@ -23689,18 +23755,18 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
  			if (btf_is_module(btf)) {
  				mod = btf_try_get_module(btf);
  				if (mod)
-					addr = find_kallsyms_symbol_value(mod, tname);
+					ret = bpf_lookup_attach_addr(mod, tname, &addr);
  				else
-					addr = 0;
+					ret = -ENOENT;
  			} else {
-				addr = kallsyms_lookup_name(tname);
+				ret = bpf_lookup_attach_addr(NULL, tname, &addr);
  			}
-			if (!addr) {
+			if (ret) {
  				module_put(mod);
  				bpf_log(log,
  					"The address of function %s cannot be found\n",
  					tname);
-				return -ENOENT;
+				return ret;
  			}
  		}




