Define __smp_cond_timewait_fine for callers which need a fine-grained
timeout. To do this, use a narrowing timeout slack equal to the
remaining duration. This allows us to optimistically wait in WFE until
the remaining duration drops below ARCH_TIMER_EVT_STREAM_PERIOD_US/2.
Once we reach that point, we go into the spin-wait state.

Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
Signed-off-by: Ankur Arora <ankur.a.arora@xxxxxxxxxx>
---
 arch/arm64/include/asm/barrier.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index f4a184a96933..e4abb8f5dd97 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -247,6 +247,9 @@ static inline u64 ___cond_timewait(u64 now, u64 prev, u64 end,
 	if (now >= end)
 		return 0;
 
+	if (slack == 0)
+		slack = max(remaining, SMP_TIMEWAIT_CHECK_US);
+
 	/*
 	 * Use WFE if there's enough slack to get an event-stream wakeup even
 	 * if we don't come out of the WFE due to natural causes.
@@ -273,6 +276,16 @@ static inline u64 ___cond_timewait(u64 now, u64 prev, u64 end,
 	return now;
 }
 
+/*
+ * Fine wait_policy: minimize the timeout delay while balancing against the
+ * time spent in the WFE wait state.
+ *
+ * The worst case timeout delay is ARCH_TIMER_EVT_STREAM_PERIOD_US/2, which
+ * would also be the worst case spin period.
+ */
+#define __smp_cond_timewait_fine(now, prev, end, spin, wait)		\
+	__smp_cond_timewait(now, prev, end, spin, wait,			\
+			    0)
 /*
  * Coarse wait_policy: minimizes the duration spent spinning at the cost of
  * potentially spending the available slack in a WFE wait state.
-- 
2.43.5
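
For reference, a minimal user-space model of the narrowing-slack policy
described above. This is only a sketch, not the kernel code: now_us(),
wfe_model() and the event-stream period value are stand-ins I made up
for illustration, and the real policy threads the slack through
__smp_cond_timewait() rather than open-coding the comparison.

#include <stdint.h>
#include <time.h>
#include <unistd.h>

/* Assumed value; the kernel derives the event-stream period at boot. */
#define ARCH_TIMER_EVT_STREAM_PERIOD_US	100

static uint64_t now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
}

/* Stand-in for WFE: wakes within one event-stream period. */
static void wfe_model(void)
{
	usleep(ARCH_TIMER_EVT_STREAM_PERIOD_US);
}

/*
 * Wait for *cond or until @end_us. Waits "in WFE" while the remaining
 * duration exceeds ARCH_TIMER_EVT_STREAM_PERIOD_US/2, then spins,
 * bounding both the timeout overshoot and the spin period by
 * ARCH_TIMER_EVT_STREAM_PERIOD_US/2.
 */
static int wait_fine(volatile int *cond, uint64_t end_us)
{
	while (!*cond) {
		uint64_t now = now_us();

		if (now >= end_us)
			return 0;		/* timed out */

		if (end_us - now > ARCH_TIMER_EVT_STREAM_PERIOD_US / 2)
			wfe_model();		/* optimistic wait */
		/* else: fall through and spin on *cond */
	}
	return 1;				/* condition observed */
}

int main(void)
{
	volatile int flag = 0;

	/* Nothing sets the flag, so this should time out after ~500us. */
	return wait_fine(&flag, now_us() + 500);
}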