On Fri, May 9, 2025 at 12:50 PM Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> +/* unused opcode to mark special ldsx instruction. Same as BPF_NOSPEC */
> +#define BPF_PROBE_MEM32SX 0xc0

lgtm. should work.

> +
>  /* unused opcode to mark call to interpreter with arguments */
>  #define BPF_CALL_ARGS 0xe0
>
> @@ -1138,6 +1141,7 @@ bool bpf_jit_supports_arena(void);
>  bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena);
>  bool bpf_jit_supports_private_stack(void);
>  bool bpf_jit_supports_timed_may_goto(void);
> +bool bpf_jit_supports_signed_arena_load(void);
>  u64 bpf_arch_uaddress_limit(void);
>  void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie);
>  u64 arch_bpf_timed_may_goto(void);
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index a3e571688421..2a0431a8741c 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -3076,6 +3076,11 @@ bool __weak bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
>  	return false;
>  }
>
> +bool __weak bpf_jit_supports_signed_arena_load(void)
> +{
> +	return false;
> +}

Instead of introducing a new weak function, let's use bpf_jit_supports_insn()?
We were planning to convert the other weak capability functions to it,
but that work was never done. At least let's not create more tech debt
to clean up later.
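
Roughly something like this on the x86 side (untested sketch, just to
illustrate the idea; the BPF_MEMSX cases are a hypothetical addition on
top of the existing arena atomics check, not the actual patch):

/* arch/x86/net/bpf_jit_comp.c */
bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
{
	if (!in_arena)
		return true;

	switch (insn->code) {
	case BPF_STX | BPF_ATOMIC | BPF_W:
	case BPF_STX | BPF_ATOMIC | BPF_DW:
		/* existing check: fetch variants of these atomics
		 * aren't supported for arena memory
		 */
		if (insn->imm == (BPF_AND | BPF_FETCH) ||
		    insn->imm == (BPF_OR | BPF_FETCH) ||
		    insn->imm == (BPF_XOR | BPF_FETCH))
			return false;
		break;
	case BPF_LDX | BPF_MEMSX | BPF_B:
	case BPF_LDX | BPF_MEMSX | BPF_H:
	case BPF_LDX | BPF_MEMSX | BPF_W:
		/* hypothetical: x86-64 can emit movsx for these,
		 * so signed arena loads are supported
		 */
		return true;
	}
	return true;
}

Then the verifier side just asks about the specific insn instead of a
dedicated capability hook, e.g.:

	if (!bpf_jit_supports_insn(insn, true))
		return -EOPNOTSUPP;

JITs that can't do sign-extending arena loads return false for those
codes and everything keeps going through the one hook.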