Re: [PATCH] perf trace: Split BPF skel code to util/trace_augment.c

Hi Namhyung,

It does not apply, probably because the cgroup patch was merged
beforehand. Could you please rebase it so others can test it? Otherwise,
this patch looks good to me.

And sorry about the delay and for breaking my promise to review it
within two days...

On Tue, Apr 29, 2025 at 11:06 PM Namhyung Kim <namhyung@xxxxxxxxxx> wrote:
>
> And make builtin-trace.c less conditional.  Dummy functions will be
> called when BUILD_BPF_SKEL=0 is used.  This makes builtin-trace.c
> slightly smaller and simpler by removing the skeleton and its helpers.
>
> The conditional guard of trace__init_syscalls_bpf_prog_array_maps() is
> changed from HAVE_BPF_SKEL to HAVE_LIBBPF_SUPPORT as it doesn't use
> the skeleton in the code directly.  A dummy function is added so that
> it can be called unconditionally.  The function will succeed only if
> both conditions are true.
>
> Do not include trace_augment.h from the BPF code and move the definition
> of TRACE_AUG_MAX_BUF into the BPF code directly.
>
> Cc: Howard Chu <howardchu95@xxxxxxxxx>
> Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
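
For anyone trying to picture the result before the rebase lands, here is a
minimal sketch of the stub pattern the commit message describes. The function
name below is made up for illustration and is not taken from the patch; the
idea is that when the skeleton is built the real implementation in
util/trace_augment.c is used, and with BUILD_BPF_SKEL=0 a dummy lets
builtin-trace.c call it unconditionally:

  #ifdef HAVE_BPF_SKEL
  /* real implementation lives in util/trace_augment.c and uses the skeleton */
  int trace_augment__prepare(void);
  #else
  #include <errno.h>

  /*
   * Dummy fallback: with BUILD_BPF_SKEL=0, builtin-trace.c can still call
   * this unconditionally; it simply reports that augmentation is unavailable.
   */
  static inline int trace_augment__prepare(void)
  {
          return -ENOTSUP;
  }
  #endif /* HAVE_BPF_SKEL */

The same idea applies to trace__init_syscalls_bpf_prog_array_maps(): its dummy
makes the call site unconditional, and the call only succeeds when both libbpf
support and the skeleton are actually available.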

Reviewed-by: Howard Chu <howardchu95@xxxxxxxxx>

Thanks,
Howard




