On Wed Aug 27, 2025 at 6:18 AM +08, Alexei Starovoitov wrote:
> On Mon, Aug 25, 2025 at 8:00 PM Leon Hwang <leon.hwang@xxxxxxxxx> wrote:
>>
>> On 25/8/25 23:17, Alexei Starovoitov wrote:
>> > On Mon, Aug 25, 2025 at 6:15 AM Leon Hwang <leon.hwang@xxxxxxxxx> wrote:

[...]

>> >
>> > It doesn't scale. Next thing people will ask for hard vs soft irq.
>> >
>>
>> How about adding a 'flags'?
>>
>> Here are the values for 'flags':
>>
>> * 0: return in_interrupt();
>> * 1 (NMI): return in_nmi();
>> * 2 (HARDIRQ): return in_hardirq();
>> * 3 (SOFTIRQ): return in_softirq();
>
> That's an option, but before we argue whether to do it as one kfunc with an
> enum vs N kfuncs, let's explore a bpf-only option that doesn't involve
> changing the kernel.
>
>> >> +#if defined(CONFIG_X86_64) && !defined(CONFIG_UML)
>> >> +	insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, (u32)(unsigned long)&__preempt_count);
>> >
>> > I think bpf_per_cpu_ptr() should already be able to read that per cpu var.
>> >
>>
>> Correct. bpf_per_cpu_ptr() and bpf_this_cpu_ptr() are helpful to read it.
>
> Can you add them as static inline functions to bpf_experimental.h
> and a selftest to make sure it's all working?
> At least for x86 and !PREEMPT_RT.
> Like:
> bool bpf_in_interrupt()
> {
>	bpf_this_cpu_ptr(...preempt_count..) & (NMI_MASK | HARDIRQ_MASK |
> SOFTIRQ_MASK);
> }
>
> Of course, there is a danger that the kernel implementation might
> diverge from the bpf-only bit, but it's a risk we're taking all the time.

I did a PoC that adds bpf_in_interrupt() to bpf_experimental.h. It works:

extern bool CONFIG_PREEMPT_RT __kconfig __weak;

#ifdef bpf_target_x86
extern const int __preempt_count __ksym;
#endif

struct task_struct__preempt_rt {
	int softirq_disable_cnt;
} __attribute__((preserve_access_index));

/* Description
 *	Report whether it is in interrupt context. Only works on x86.
 */
static inline int bpf_in_interrupt(void)
{
#ifdef bpf_target_x86
	int pcnt;

	pcnt = *(int *) bpf_this_cpu_ptr(&__preempt_count);
	if (!CONFIG_PREEMPT_RT) {
		return pcnt & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_MASK);
	} else {
		struct task_struct__preempt_rt *tsk;

		tsk = (void *) bpf_get_current_task_btf();
		return (pcnt & (NMI_MASK | HARDIRQ_MASK)) |
		       (tsk->softirq_disable_cnt & SOFTIRQ_MASK);
	}
#else
	return 0;
#endif
}

However, I have only tested it for !PREEMPT_RT on x86.

I'd like to respin the patchset, moving bpf_in_interrupt() to
bpf_experimental.h.

Thanks,
Leon