Re: [PATCH bpf-next v3 6/7] bpf: task work scheduling kfuncs

On Fri, Sep 5, 2025 at 9:45 AM Mykyta Yatsenko
<mykyta.yatsenko5@xxxxxxxxx> wrote:
>
> From: Mykyta Yatsenko <yatsenko@xxxxxxxx>
>
> Implementation of the new bpf_task_work_schedule kfuncs that let a BPF
> program schedule task_work callbacks for a target task:
>  * bpf_task_work_schedule_signal() → schedules with TWA_SIGNAL
>  * bpf_task_work_schedule_resume() → schedules with TWA_RESUME
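
For context, the two variants differ only in the notification mode that
eventually reaches task_work_add():

  /*
   * bpf_task_work_schedule_signal() -> task_work_add(..., TWA_SIGNAL)
   *     (notifies the target task like a signal, so the callback runs promptly)
   * bpf_task_work_schedule_resume() -> task_work_add(..., TWA_RESUME)
   *     (callback runs on the task's next return to user space)
   */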
>
> Each map value should embed a struct bpf_task_work, which the kernel
> side pairs with a struct bpf_task_work_kern containing a pointer to a
> struct bpf_task_work_ctx that maintains the metadata needed to
> schedule the concrete callback.
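
On the BPF program side that means the map value simply carries the struct
inline; a minimal sketch (layout and names are illustrative, not taken from
the selftests in this series):

  /* assumes the usual vmlinux.h + bpf/bpf_helpers.h includes */
  struct elem {
          long counter;                /* whatever per-key data the program needs */
          struct bpf_task_work tw;     /* paired with struct bpf_task_work_kern by the kernel */
  };

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 128);
          __type(key, int);
          __type(value, struct elem);
  } tw_map SEC(".maps");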
>
> A small state machine and refcounting scheme ensures safe reuse and
> teardown:
>  STANDBY -> PENDING -> SCHEDULING -> SCHEDULED -> RUNNING -> STANDBY
>
> A FREED terminal state coordinates with map-value
> deletion (bpf_task_work_cancel_and_free()).
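
For reference, that state machine corresponds to an enum along these lines (a
reconstruction from the commit message and the hunks below, not the literal
definition from the patch):

  enum bpf_task_work_state {
          /* ctx is idle and can be claimed by a scheduler */
          BPF_TW_STANDBY = 0,
          /* a scheduler owns the ctx, irq_work not yet run */
          BPF_TW_PENDING,
          /* bpf_task_work_irq() is about to call task_work_add() */
          BPF_TW_SCHEDULING,
          /* task_work_add() succeeded, waiting for the callback */
          BPF_TW_SCHEDULED,
          /* bpf_task_work_callback() is running the BPF callback */
          BPF_TW_RUNNING,
          /* terminal: map value deleted; reachable from any state */
          BPF_TW_FREED,
  };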

I'd mention that the FREED state can be switched into from any point in
the above linear sequence of states and that it is always handled in a
wait-free fashion. In all cases except SCHEDULED (where there might not
be any actively participating side besides the one doing the FREED
handling, as we are just waiting, potentially for a while, for the task
work callback to execute), if any cleanup is necessary (cancellation,
putting references, etc.), the actively participating side will notice
the transition to FREED and ensure the cleanup.
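
Schematically, every forward step already has that shape; purely as an
illustration (using the PENDING -> SCHEDULING step, with identifiers taken
from the commit message rather than the actual hunk):

  state = cmpxchg(&ctx->state, BPF_TW_PENDING, BPF_TW_SCHEDULING);
  if (state == BPF_TW_FREED) {
          /* map value deletion won the race: clean up after ourselves,
           * drop the ref we were holding and bail out
           */
          bpf_task_work_ctx_put(ctx);
          return;
  }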

>
> Scheduling itself is deferred via irq_work to keep the kfunc callable
> from NMI context.
>
> Lifetime is guarded with refcount_t + RCU Tasks Trace.
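
Putting those pieces together, the per-work context presumably looks roughly
like this (a reconstruction from the fields visible in the hunks, not the
literal definition):

  struct bpf_task_work_ctx {
          enum bpf_task_work_state state;
          refcount_t refcnt;                /* dropped via bpf_task_work_ctx_put() */
          struct callback_head work;        /* handed to task_work_add() */
          struct irq_work irq_work;         /* defers scheduling out of NMI context */
          struct task_struct *task;         /* target task */
          enum task_work_notify_mode mode;  /* TWA_SIGNAL or TWA_RESUME */
          /* ... plus callback, prog and map bookkeeping ... */
  };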
>
> Main components:
>  * struct bpf_task_work_ctx – Metadata and state management per task
> work.
>  * enum bpf_task_work_state – A state machine to serialize work
>  scheduling and execution.
>  * bpf_task_work_schedule() – The central helper that initiates
> scheduling.
>  * bpf_task_work_acquire_ctx() – Attempts to take ownership of the context
>  pointed to by the passed struct bpf_task_work, allocating a new context if
>  none exists yet.
>  * bpf_task_work_callback() – Invoked when the actual task_work runs.
>  * bpf_task_work_irq() – An intermediate step (runs in softirq context)
> to enqueue task work.
>  * bpf_task_work_cancel_and_free() – Cleanup for deleted BPF map entries.
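
From that component list, the entry point has roughly the following shape
(hand-waving the argument list, the error convention and the task refcounting,
which are all guesses; the real code is in the patch):

  static int bpf_task_work_schedule(struct task_struct *task, struct bpf_task_work *tw,
                                    enum task_work_notify_mode mode /* , callback, map, ... */)
  {
          struct bpf_task_work_ctx *ctx;

          /* STANDBY -> PENDING, allocating the context on first use */
          ctx = bpf_task_work_acquire_ctx(tw /* , map */);
          if (IS_ERR(ctx))
                  return PTR_ERR(ctx);

          ctx->task = get_task_struct(task);
          ctx->mode = mode;
          /* defer task_work_add() itself so the kfunc stays NMI-safe */
          irq_work_queue(&ctx->irq_work);
          return 0;
  }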
>
> Flow of successful task work scheduling
>  1) bpf_task_work_schedule_* is called from BPF code.
>  2) Transition state from STANDBY to PENDING, marks context is owned by

typo: mark context as owned?

>  this task work scheduler
>  3) irq_work_queue() schedules bpf_task_work_irq().
>  4) Transition state from PENDING to SCHEDULING.
>  5) bpf_task_work_irq() attempts task_work_add(). If successful, state
>  transitions to SCHEDULED.
>  6) Task work calls bpf_task_work_callback(), which transitions state to
>  RUNNING.
>  7) BPF callback is executed.
>  8) Context is cleaned up, refcounts released, context state set back to
>  STANDBY.
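
And the last three steps, again only as a sketch (names from the hunks below,
the control flow is an assumption):

  static void bpf_task_work_callback(struct callback_head *cb)
  {
          struct bpf_task_work_ctx *ctx = container_of(cb, struct bpf_task_work_ctx, work);

          /* SCHEDULED -> RUNNING; FREED means the map value is already gone */
          if (cmpxchg(&ctx->state, BPF_TW_SCHEDULED, BPF_TW_RUNNING) == BPF_TW_FREED) {
                  bpf_task_work_ctx_put(ctx);
                  return;
          }

          /* ... run the BPF-supplied callback on the map value ... */

          bpf_task_work_ctx_reset(ctx);
          /* back to STANDBY unless deletion flipped us to FREED meanwhile */
          (void)cmpxchg(&ctx->state, BPF_TW_RUNNING, BPF_TW_STANDBY);
          bpf_task_work_ctx_put(ctx);
  }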
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@xxxxxxxx>
> ---
>  kernel/bpf/helpers.c | 319 ++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 317 insertions(+), 2 deletions(-)
>

I don't see any more problems, it looks good to me!

Reviewed-by: Andrii Nakryiko <andrii@xxxxxxxxxx>

> +
> +static void bpf_task_work_cancel(struct bpf_task_work_ctx *ctx)
> +{
> +       /*
> +        * Scheduled task_work callback holds ctx ref, so if we successfully
> +        * cancelled, we put that ref on callback's behalf. If we couldn't
> +        * cancel, callback is inevitably run or has already completed

typo: will inevitably run

> +        * running, and it would have taken care of its ctx ref itself.
> +        */
> +       if (task_work_cancel_match(ctx->task, task_work_match, ctx))
> +               bpf_task_work_ctx_put(ctx);
> +}
> +

[...]

> +       err = task_work_add(ctx->task, &ctx->work, ctx->mode);
> +       if (err) {
> +               bpf_task_work_ctx_reset(ctx);
> +               /*
> +                * try to switch back to STANDBY for another task_work reuse, but we might have
> +                * gone to FREED already, which is fine as we already cleaned up after ourselves
> +                */
> +               (void)cmpxchg(&ctx->state, BPF_TW_SCHEDULING, BPF_TW_STANDBY);
> +
> +               /* we don't have RCU protection, so put after switching state */

heh, I guess we do have RCU protection (that guard above), but yeah,
putting after all the manipulations with ctx makes most sense, let's
fix up the comment here
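
Something like this, maybe:

  /*
   * we are still under the RCU guard taken above, but put the ref
   * last, after we are done touching ctx
   */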

> +               bpf_task_work_ctx_put(ctx);
> +       }
> +
> +       /*
> +        * It's technically possible for just scheduled task_work callback to
> +        * complete running by now, going SCHEDULING -> RUNNING and then
> +        * dropping its ctx refcount. Instead of capturing an extra ref just to
> +        * protect the below ctx->state access, we rely on RCU protection to
> +        * perform the below SCHEDULING -> SCHEDULED attempt.
> +        */
> +       state = cmpxchg(&ctx->state, BPF_TW_SCHEDULING, BPF_TW_SCHEDULED);
> +       if (state == BPF_TW_FREED)
> +               bpf_task_work_cancel(ctx); /* clean up if we switched into FREED state */
> +}
> +

[...]




