Re: [PATCH v3 5/5] arm64: barrier: Handle waiting in smp_cond_load_relaxed_timewait()

On Thu, 26 Jun 2025, Ankur Arora wrote:

> @@ -222,6 +223,53 @@ do {									\
>  #define __smp_timewait_store(ptr, val)					\
>  		__cmpwait_relaxed(ptr, val)
>
> +/*
> + * Redefine ARCH_TIMER_EVT_STREAM_PERIOD_US locally to avoid include hell.
> + */
> +#define __ARCH_TIMER_EVT_STREAM_PERIOD_US 100UL
> +extern bool arch_timer_evtstrm_available(void);
> +
> +static inline u64 ___smp_cond_spinwait(u64 now, u64 prev, u64 end,
> +				       u32 *spin, bool *wait, u64 slack);
> +/*
> + * To minimize time spent spinning, we want to allow a large overshoot.
> + * So, choose a default slack value of the event-stream period.
> + */
> +#define SMP_TIMEWAIT_DEFAULT_US __ARCH_TIMER_EVT_STREAM_PERIOD_US
> +
> +static inline u64 ___smp_cond_timewait(u64 now, u64 prev, u64 end,
> +				       u32 *spin, bool *wait, u64 slack)
> +{
> +	bool wfet = alternative_has_cap_unlikely(ARM64_HAS_WFXT);
> +	bool wfe, ev = arch_timer_evtstrm_available();

An uninitialized and an initialized variable declared on the same line. Maybe
separate them; it looks confusing and unusual to me.
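Something along these lines perhaps (just a sketch of the split, same semantics):

	bool wfet = alternative_has_cap_unlikely(ARM64_HAS_WFXT);
	bool ev = arch_timer_evtstrm_available();
	bool wfe;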

> +	u64 evt_period = __ARCH_TIMER_EVT_STREAM_PERIOD_US;
> +	u64 remaining = end - now;
> +
> +	if (now >= end)
> +		return 0;
> +	/*
> +	 * Use WFE if there's enough slack to get an event-stream wakeup even
> +	 * if we don't come out of the WFE due to natural causes.
> +	 */
> +	wfe = ev && ((remaining + slack) > evt_period);

The line above does not matter for the wfet case: the calculation is done but
its result is ignored there. We hope that in the future wfet will be the
default case.
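One way to avoid the wasted work (a rough sketch, assuming wfe is only consumed
on the non-wfet path later in the function):

	if (wfet)
		wfe = false;	/* deadline comes from WFET, event stream irrelevant */
	else
		wfe = ev && ((remaining + slack) > evt_period);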



