On Thu, 11 Sept 2025 at 16:32, Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
>
> On Wed, Sep 10, 2025 at 08:46:55PM -0700, Ankur Arora wrote:
> > Switch out the conditional load interfaces used by rqspinlock
> > to smp_cond_read_acquire_timeout().
> > This interface handles the timeout check explicitly and does any
> > necessary amortization, so use check_timeout() directly.
>
> It's worth mentioning that the default smp_cond_load_acquire_timeout()
> implementation (without hardware support) only spins 200 times instead
> of 16K times in the rqspinlock code. That's probably fine but it would
> be good to have confirmation from Kumar or Alexei.
>
> > diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
> > index 5ab354d55d82..4d2c12d131ae 100644
> > --- a/kernel/bpf/rqspinlock.c
> > +++ b/kernel/bpf/rqspinlock.c
> [...]
> > @@ -313,11 +307,8 @@ EXPORT_SYMBOL_GPL(resilient_tas_spin_lock);
> >   */
> >  static DEFINE_PER_CPU_ALIGNED(struct qnode, rqnodes[_Q_MAX_NODES]);
> >
> > -#ifndef res_smp_cond_load_acquire
> > -#define res_smp_cond_load_acquire(v, c) smp_cond_load_acquire(v, c)
> > -#endif
> > -
> > -#define res_atomic_cond_read_acquire(v, c) res_smp_cond_load_acquire(&(v)->counter, (c))
> > +#define res_atomic_cond_read_acquire_timeout(v, c, t) \
> > +	smp_cond_load_acquire_timeout(&(v)->counter, (c), (t))
>
> BTW, we have atomic_cond_read_acquire() which accesses the 'counter' of
> an atomic_t. You might as well add an atomic_cond_read_acquire_timeout()
> in atomic.h rather than open-code the atomic_t internals here.

+1, and then drop res_atomic_cond_read_acquire_timeout() from this file;
a sketch of what that could look like is below.

> Otherwise the patch looks fine to me, much simpler than the previous
> attempt.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
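
For reference, the helper Catalin suggests would presumably just mirror the
existing atomic_cond_read_acquire() wrapper in include/linux/atomic.h. A
rough sketch only, assuming the smp_cond_load_acquire_timeout() interface
introduced earlier in this series (exact placement and naming up to the
final patch):

	/*
	 * Sketch: timeout variant of atomic_cond_read_acquire(), forwarding
	 * to smp_cond_load_acquire_timeout() on the atomic_t's counter.
	 */
	#define atomic_cond_read_acquire_timeout(v, c, t) \
		smp_cond_load_acquire_timeout(&(v)->counter, (c), (t))

rqspinlock.c could then call atomic_cond_read_acquire_timeout() directly at
the existing call site, and the local res_atomic_cond_read_acquire_timeout()
wrapper in the hunk above goes away.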