On Fri, Aug 29, 2025 at 11:50 AM Song Liu <song@xxxxxxxxxx> wrote:
>
> On Fri, Aug 29, 2025 at 10:29 AM Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
> [...]
> > >
> > >  static long __bpf_get_stackid(struct bpf_map *map,
> > > -                             struct perf_callchain_entry *trace, u64 flags)
> > > +                             struct perf_callchain_entry *trace, u64 flags, u32 max_depth)
> > >  {
> > >         struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
> > >         struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
> > > @@ -263,6 +263,8 @@ static long __bpf_get_stackid(struct bpf_map *map,
> > >
> > >         trace_nr = trace->nr - skip;
> > >         trace_len = trace_nr * sizeof(u64);
> > > +       trace_nr = min(trace_nr, max_depth - skip);
> > > +
> >
> > The patch might have fixed this particular syzbot repro
> > with OOB in the stackmap-with-buildid case,
> > but the above two lines look wrong.
> > trace_len is computed before being capped by max_depth,
> > and the non-buildid case below uses
> > memcpy(new_bucket->data, ips, trace_len);
> >
> > so the OOB is still there?
>
> +1 for this observation.
>
> We are calling __bpf_get_stackid() from two functions: bpf_get_stackid
> and bpf_get_stackid_pe. The check against max_depth is only needed
> for bpf_get_stackid_pe, so it is better to just check there.

Good point.

> I have got the following on top of patch 1/2. This makes more sense to
> me.
>
> PS: The following also includes some cleanup in __bpf_get_stack.
> I include it because it also uses stack_map_calculate_max_depth.
>
> Does this look better?

Yeah, it's certainly cleaner to avoid adding an extra arg
to __bpf_get_stackid().