Re: [PATCH bpf] bpf/helpers: Skip memcg accounting in __bpf_async_init()

Hi all,

On Fri, Sep 05, 2025 at 06:19:17AM +0000, Peilin Ye wrote:
> The above was reproduced on bpf-next (b338cf849ec8) by modifying
> ./tools/sched_ext/scx_flatcg.bpf.c to call bpf_timer_init() during
> ops.runnable(), and hacking [1] the memcg accounting code a bit to make
> it (much more likely to) raise an MEMCG_MAX event from a
> bpf_timer_init() call.

FWIW, below are changes I made to scx_flatcg.bpf.c to reproduce the
hardlockup.  Please let me know if there's more info that I can provide.

Thanks,
Peilin Ye

--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -504,8 +504,31 @@ static void update_active_weight_sums(struct cgroup *cgrp, bool runnable)
 		cgrp_refresh_hweight(cgrp, cgc);
 }

+struct __bpf_timer {
+	struct bpf_timer timer;
+};
+#define NUM_BPF_TIMERS	10
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, NUM_BPF_TIMERS);
+	__type(key, u32);
+	__type(value, struct __bpf_timer);
+} timer_map SEC(".maps");
+int count = 0;
+
 void BPF_STRUCT_OPS(fcg_runnable, struct task_struct *p, u64 enq_flags)
 {
+	if (count < NUM_BPF_TIMERS) {
+		struct bpf_timer *timer;
+		u32 key = count;
+
+		timer = bpf_map_lookup_elem(&timer_map, &key);
+		if (!timer)
+			return;
+		bpf_timer_init(timer, &timer_map, CLOCK_MONOTONIC);
+		count++;
+	}
+
 	struct cgroup *cgrp;

 	cgrp = __COMPAT_scx_bpf_task_cgroup(p);
