On Wed, Jul 2, 2025 at 10:28 AM Leon Hwang <leon.hwang@xxxxxxxxx> wrote:
>
>
>
> On 2025/7/2 04:22, Andrii Nakryiko wrote:
> > On Tue, Jun 24, 2025 at 9:55 AM Leon Hwang <leon.hwang@xxxxxxxxx> wrote:
> >>
> >> This patch adds libbpf support for the BPF_F_CPU flag in percpu_array maps,
> >> introducing the following APIs:
> >>
> >> 1. bpf_map_update_elem_opts(): update with struct bpf_map_update_elem_opts
> >> 2. bpf_map_lookup_elem_opts(): lookup with struct bpf_map_lookup_elem_opts
> >> 3. bpf_map__update_elem_opts(): high-level wrapper with input validation
> >> 4. bpf_map__lookup_elem_opts(): high-level wrapper with input validation
> >>
> >> Behavior:
> >>
> >> * If opts->cpu == 0xFFFFFFFF, the update is applied to all CPUs.
> >> * Otherwise, it applies only to the specified CPU.
> >> * Lookup APIs retrieve values from the target CPU when BPF_F_CPU is used.
> >>
> >> Signed-off-by: Leon Hwang <leon.hwang@xxxxxxxxx>
> >> ---
> >>  tools/lib/bpf/bpf.c           | 37 +++++++++++++++++++++++
> >>  tools/lib/bpf/bpf.h           | 35 +++++++++++++++++++++-
> >>  tools/lib/bpf/libbpf.c        | 56 +++++++++++++++++++++++++++++++++++
> >>  tools/lib/bpf/libbpf.h        | 45 ++++++++++++++++++++++++++++
> >>  tools/lib/bpf/libbpf.map      |  4 +++
> >>  tools/lib/bpf/libbpf_common.h | 12 ++++++++
> >>  6 files changed, 188 insertions(+), 1 deletion(-)
> >>

[...]

> >>  };
> >> -#define bpf_map_batch_opts__last_field flags
> >> +#define bpf_map_batch_opts__last_field cpu
> >>
> >>
> >>  /**
> >> @@ -286,6 +315,10 @@ LIBBPF_API int bpf_map_lookup_and_delete_batch(int fd, void *in_batch,
> >>   *    Update spin_lock-ed map elements. This must be
> >>   *    specified if the map value contains a spinlock.
> >>   *
> >> + * **BPF_F_CPU**
> >> + *    As for percpu map, update value on all CPUs if **opts->cpu** is
> >> + *    0xFFFFFFFF, or on specified CPU otherwise.
> >> + *
> >>   * @param fd BPF map file descriptor
> >>   * @param keys pointer to an array of *count* keys
> >>   * @param values pointer to an array of *count* values
> >> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> >> index 6445165a24f2..30400bdc20d9 100644
> >> --- a/tools/lib/bpf/libbpf.c
> >> +++ b/tools/lib/bpf/libbpf.c
> >> @@ -10636,6 +10636,34 @@ int bpf_map__lookup_elem(const struct bpf_map *map,
> >>         return bpf_map_lookup_elem_flags(map->fd, key, value, flags);
> >>  }
> >>
> >> +int bpf_map__lookup_elem_opts(const struct bpf_map *map, const void *key,
> >> +                             size_t key_sz, void *value, size_t value_sz,
> >> +                             const struct bpf_map_lookup_elem_opts *opts)
> >> +{
> >> +       int nr_cpus = libbpf_num_possible_cpus();
> >> +       __u32 cpu = OPTS_GET(opts, cpu, nr_cpus);
> >> +       __u64 flags = OPTS_GET(opts, flags, 0);
> >> +       int err;
> >> +
> >> +       if (flags & BPF_F_CPU) {
> >> +               if (map->def.type != BPF_MAP_TYPE_PERCPU_ARRAY)
> >> +                       return -EINVAL;
> >> +               if (cpu >= nr_cpus)
> >> +                       return -E2BIG;
> >> +               if (map->def.value_size != value_sz) {
> >> +                       pr_warn("map '%s': unexpected value size %zu provided, expected %u\n",
> >> +                               map->name, value_sz, map->def.value_size);
> >> +                       return -EINVAL;
> >> +               }
> >
> > shouldn't this go into validate_map_op?..
> >
>
> It should.
>
> However, to avoid making validate_map_op really complicated, I'd like to
> add validate_map_cpu_op to wrap checking cpu and validate_map_op.

validate_map_op is meant to handle all the different conditions, let's
keep all that in one function

[...]
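
[Editor's note: for readers following the thread, below is a minimal usage
sketch of the API this series proposes. It assumes the names from the patch
(BPF_F_CPU, the bpf_map_{lookup,update}_elem_opts structs and the high-level
bpf_map__*_elem_opts() wrappers), none of which exist in released libbpf, and
it assumes bpf_map__update_elem_opts() mirrors the lookup signature quoted
above. The "counters" map and all variable names are made up for
illustration.]

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int demo(struct bpf_map *counters)
{
	/* cpu == 0xFFFFFFFF together with BPF_F_CPU updates the value on
	 * all CPUs in a single call, per the patch description.
	 */
	LIBBPF_OPTS(bpf_map_update_elem_opts, uopts,
		.flags = BPF_F_CPU,
		.cpu = 0xFFFFFFFF,
	);
	/* For a single-CPU lookup, cpu must be < libbpf_num_possible_cpus(). */
	LIBBPF_OPTS(bpf_map_lookup_elem_opts, lopts,
		.flags = BPF_F_CPU,
		.cpu = 1,
	);
	__u32 key = 0;
	__u64 val = 0;
	int err;

	/* "counters" is assumed to be a BPF_MAP_TYPE_PERCPU_ARRAY map with
	 * an 8-byte value: reset the element on every CPU.
	 */
	err = bpf_map__update_elem_opts(counters, &key, sizeof(key),
					&val, sizeof(val), &uopts);
	if (err)
		return err;

	/* Read the value back from CPU 1 only. With BPF_F_CPU, value_sz is
	 * the map's value_size, not value_size * nr_cpus, as enforced by the
	 * check quoted above.
	 */
	return bpf_map__lookup_elem_opts(counters, &key, sizeof(key),
					 &val, sizeof(val), &lopts);
}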