Hello Namhyung,

Just a single comment although this has been applied.

On Thu, May 1, 2025 at 3:53 PM Namhyung Kim <namhyung@xxxxxxxxxx> wrote:
>
> Add a new summary mode to collect stats for each cgroup.
>
>   $ sudo ./perf trace -as --bpf-summary --summary-mode=cgroup -- sleep 1
>
>    Summary of events:
>
>    cgroup /user.slice/user-657345.slice/user@657345.service/session.slice/org.gnome.Shell@x11.service, 535 events
>
>      syscall            calls  errors  total       min       avg       max       stddev
>                                        (msec)    (msec)    (msec)    (msec)        (%)
>      --------------- --------  ------ -------- --------- --------- ---------     ------
>      ppoll                 15      0   373.600     0.004    24.907   197.491     55.26%
>      poll                  15      0     1.325     0.001     0.088     0.369     38.76%
>      close                 66      0     0.567     0.007     0.009     0.026      3.55%
>      write                150      0     0.471     0.001     0.003     0.010      3.29%
>      recvmsg               94     83     0.290     0.000     0.003     0.037     16.39%
>      ioctl                 26      0     0.237     0.001     0.009     0.096     50.13%
>      timerfd_create        66      0     0.236     0.003     0.004     0.024      8.92%
>      timerfd_settime       70      0     0.160     0.001     0.002     0.012      7.66%
>      writev                10      0     0.118     0.001     0.012     0.019     18.17%
>      read                   9      0     0.021     0.001     0.002     0.004     14.07%
>      getpid                14      0     0.019     0.000     0.001     0.004     20.28%
> <SNIP>
> +static int update_cgroup_stats(struct hashmap *hash, struct syscall_key *map_key,
> +                               struct syscall_stats *map_data)
> +{
> +        struct syscall_data *data;
> +        struct syscall_node *nodes;
> +
> +        if (!hashmap__find(hash, map_key->cgroup, &data)) {
> +                data = zalloc(sizeof(*data));
> +                if (data == NULL)
> +                        return -ENOMEM;
> +
> +                data->key = map_key->cgroup;
> +                if (hashmap__add(hash, data->key, data) < 0) {
> +                        free(data);
> +                        return -ENOMEM;
> +                }
> +        }
> +
> +        /* update thread total stats */
> +        data->nr_events += map_data->count;
> +        data->total_time += map_data->total_time;
> +
> +        nodes = reallocarray(data->nodes, data->nr_nodes + 1, sizeof(*nodes));
> +        if (nodes == NULL)
> +                return -ENOMEM;
> +
> +        data->nodes = nodes;
> +        nodes = &data->nodes[data->nr_nodes++];
> +        nodes->syscall_nr = map_key->nr;
> +
> +        /* each thread has an entry for each syscall, just use the stat */

This comment shouldn't be here.

Otherwise,

Reviewed-by: Howard Chu <howardchu95@xxxxxxxxx>

Thanks,
Howard
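
P.S. In case it helps readers of the archive, here is a rough sketch of how
a caller could walk the per-syscall BPF stats map and feed each entry into
update_cgroup_stats(). The helper name, the map fd parameter and the
surrounding setup are illustrative assumptions only, not the actual perf
code; struct syscall_key and struct syscall_stats are the types from the
quoted patch:

        /* Hypothetical helper, not from the patch: drain the (assumed) BPF
         * syscall stats map and aggregate every entry per cgroup.  Needs
         * <bpf/bpf.h> for bpf_map_get_next_key()/bpf_map_lookup_elem(). */
        static int collect_cgroup_stats(int map_fd, struct hashmap *hash)
        {
                struct syscall_key key, *prev_key = NULL;
                struct syscall_stats stats;

                while (bpf_map_get_next_key(map_fd, prev_key, &key) == 0) {
                        /* the lookup can fail if the entry was removed meanwhile */
                        if (bpf_map_lookup_elem(map_fd, &key, &stats) == 0) {
                                int err = update_cgroup_stats(hash, &key, &stats);

                                if (err)
                                        return err;
                        }
                        prev_key = &key;
                }
                return 0;
        }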