On Mon, 18 Aug 2025 at 19:02, Roman Gushchin <roman.gushchin@xxxxxxxxx> wrote:
>
> Introduce the bpf_out_of_memory() bpf kfunc, which allows declaring
> an out of memory event and triggering the corresponding kernel OOM
> handling mechanism.
>
> It takes a trusted memcg pointer (or NULL for system-wide OOMs)
> as an argument, as well as the page order.
>
> If the wait_on_oom_lock argument is not set, only one OOM can be
> declared and handled in the system at once, so if the function is
> called in parallel to another OOM handling, it bails out with -EBUSY.
> This mode is suited for global OOMs: any concurrent OOM will likely
> do the job and release some memory. In blocking mode (which is
> suited for memcg OOMs), the execution will wait on the oom_lock mutex.
>
> The function is declared as sleepable. This guarantees that it won't
> be called from an atomic context, which is required by the OOM handling
> code: it is not guaranteed to work in a non-blocking context.
>
> Handling of a memcg OOM almost always requires taking the
> css_set_lock spinlock. The fact that bpf_out_of_memory() is sleepable
> also guarantees that it can't be called with css_set_lock held,
> so the kernel can't deadlock on it.
>
> Signed-off-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> ---
>  mm/oom_kill.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 45 insertions(+)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 25fc5e744e27..df409f0fac45 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -1324,10 +1324,55 @@ __bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc,
>  	return 0;
>  }
>
> +/**
> + * bpf_out_of_memory - declare Out Of Memory state and invoke OOM killer
> + * @memcg__nullable: memcg or NULL for system-wide OOMs
> + * @order: order of page which wasn't allocated
> + * @wait_on_oom_lock: if true, block on oom_lock
> + *
> + * Declares the Out Of Memory state and invokes the OOM killer.
> + *
> + * OOM handlers are synchronized using the oom_lock mutex. If wait_on_oom_lock
> + * is true, the function will wait on it. Otherwise it bails out with -EBUSY
> + * if oom_lock is contended.
> + *
> + * Generally it's advised to pass wait_on_oom_lock=false for global OOMs
> + * and wait_on_oom_lock=true for memcg-scoped OOMs.
> + *
> + * Returns 1 if forward progress was achieved and some memory was freed.
> + * Returns a negative value if an error occurred.
> + */
> +__bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable,
> +				  int order, bool wait_on_oom_lock)

I think this bool should be a u64 flags instead, just to make it easier to
extend behavior in the future.
> +{
> +	struct oom_control oc = {
> +		.memcg = memcg__nullable,
> +		.order = order,
> +	};
> +	int ret;
> +
> +	if (oc.order < 0 || oc.order > MAX_PAGE_ORDER)
> +		return -EINVAL;
> +
> +	if (wait_on_oom_lock) {
> +		ret = mutex_lock_killable(&oom_lock);
> +		if (ret)
> +			return ret;
> +	} else if (!mutex_trylock(&oom_lock))
> +		return -EBUSY;
> +
> +	ret = out_of_memory(&oc);
> +
> +	mutex_unlock(&oom_lock);
> +	return ret;
> +}
> +
>  __bpf_kfunc_end_defs();
>
>  BTF_KFUNCS_START(bpf_oom_kfuncs)
>  BTF_ID_FLAGS(func, bpf_oom_kill_process, KF_SLEEPABLE | KF_TRUSTED_ARGS)
> +BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE | KF_TRUSTED_ARGS)
>  BTF_KFUNCS_END(bpf_oom_kfuncs)
>
>  static const struct btf_kfunc_id_set bpf_oom_kfunc_set = {
> --
> 2.50.1
>