On Tue, Aug 26, 2025 at 2:24 PM Arnaud Lecomte <contact@xxxxxxxxxxxxxx> wrote:
>
> Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stackid()
> when copying stack trace data. The issue occurs when the perf trace
> contains more stack entries than the stack map bucket can hold,
> leading to an out-of-bounds write in the bucket's data array.
>
> Changes in v2:
>  - Fixed max_depth names across get stack id
>
> Changes in v4:
>  - Removed unnecessary empty line in __bpf_get_stackid
>
> Link to v4: https://lore.kernel.org/all/20250813205506.168069-1-contact@xxxxxxxxxxxxxx/
>
> Reported-by: syzbot+c9b724fbb41cf2538b7b@xxxxxxxxxxxxxxxxxxxxxxxxx
> Closes: https://syzkaller.appspot.com/bug?extid=c9b724fbb41cf2538b7b
> Signed-off-by: Arnaud Lecomte <contact@xxxxxxxxxxxxxx>
> Acked-by: Yonghong Song <yonghong.song@xxxxxxxxx>
> ---
>  kernel/bpf/stackmap.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 796cc105eacb..ef8269ab8d6f 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -247,7 +247,7 @@ get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
>  }
>
>  static long __bpf_get_stackid(struct bpf_map *map,
> -			      struct perf_callchain_entry *trace, u64 flags)
> +			      struct perf_callchain_entry *trace, u64 flags, u32 max_depth)
>  {
> 	struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
> 	struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
> @@ -263,6 +263,8 @@ static long __bpf_get_stackid(struct bpf_map *map,
>
> 	trace_nr = trace->nr - skip;
> 	trace_len = trace_nr * sizeof(u64);
> +	trace_nr = min(trace_nr, max_depth - skip);
> +

The patch might have fixed this particular syzbot repro with the OOB in the
stackmap-with-buildid case, but the above two lines look wrong:
trace_len is computed before trace_nr is capped by max_depth.
So the non-buildid case below still does memcpy(new_bucket->data, ips, trace_len)
with the uncapped length, meaning the OOB is still there?