Hi Krzysztof,

Thanks, as always, for your review feedback.

On Sat, 12 Jul 2025 at 18:42, Krzysztof Kozlowski <krzk@xxxxxxxxxx> wrote:
>
> On 11/07/2025 15:50, Peter Griffin wrote:
> >
> >  #include <linux/soc/samsung/exynos-regs-pmu.h>
> > @@ -35,6 +37,14 @@ struct exynos_pmu_context {
> >  	const struct exynos_pmu_data *pmu_data;
> >  	struct regmap *pmureg;
> >  	struct regmap *pmuintrgen;
> > +	/*
> > +	 * Serialization lock for CPU hot plug and cpuidle ACPM hint
> > +	 * programming. Also protects the in_hotplug flag.
> > +	 */
> > +	raw_spinlock_t cpupm_lock;
> > +	bool *in_hotplug;
>
> This should be bitmap - more obvious code.

I've done this in v7 which I just sent.

>
> > +	atomic_t sys_suspended;
> > +	atomic_t sys_rebooting;
> >  };
> >
> >  void __iomem *pmu_base_addr;
> > @@ -221,6 +231,15 @@ static const struct regmap_config regmap_smccfg = {
> >  	.reg_read = tensor_sec_reg_read,
> >  	.reg_write = tensor_sec_reg_write,
> >  	.reg_update_bits = tensor_sec_update_bits,
> > +	.use_raw_spinlock = true,
> > +};
> > +
> > +static const struct regmap_config regmap_pmu_intr = {
> > +	.name = "pmu_intr_gen",
> > +	.reg_bits = 32,
> > +	.reg_stride = 4,
> > +	.val_bits = 32,
> > +	.use_raw_spinlock = true,
> >  };
> >
> >  static const struct exynos_pmu_data gs101_pmu_data = {
> > @@ -330,13 +349,19 @@ struct regmap *exynos_get_pmu_regmap_by_phandle(struct device_node *np,
> >  EXPORT_SYMBOL_GPL(exynos_get_pmu_regmap_by_phandle);
> >
>
...
>
> > +/* Called from CPU PM notifier (CPUIdle code path) with IRQs disabled */
> > +static int gs101_cpu_pmu_offline(void)
> > +{
> > +	int cpu;
> > +
> > +	raw_spin_lock(&pmu_context->cpupm_lock);
> > +	cpu = smp_processor_id();
> > +
> > +	if (pmu_context->in_hotplug[cpu]) {
> > +		raw_spin_unlock(&pmu_context->cpupm_lock);
> > +		return NOTIFY_BAD;
> > +	}
> > +
> > +	__gs101_cpu_pmu_offline(cpu);
> > +	raw_spin_unlock(&pmu_context->cpupm_lock);
> > +
> > +	return NOTIFY_OK;
> > +}
> > +
> > +/* Called from CPU hot plug callback with IRQs enabled */
> > +static int gs101_cpuhp_pmu_offline(unsigned int cpu)
> > +{
> > +	unsigned long flags;
> > +
> > +	raw_spin_lock_irqsave(&pmu_context->cpupm_lock, flags);
> > +	/*
> > +	 * Mark this CPU as entering hotplug. So as not to confuse
> > +	 * ACPM the CPU entering hotplug should not enter C2 idle state.
> > +	 */
> > +	pmu_context->in_hotplug[cpu] = true;
> > +	__gs101_cpu_pmu_offline(cpu);
> > +
> > +	raw_spin_unlock_irqrestore(&pmu_context->cpupm_lock, flags);
> > +
> > +	return 0;
> > +}
> > +
> > +static int gs101_cpu_pm_notify_callback(struct notifier_block *self,
> > +					unsigned long action, void *v)
> > +{
> > +	switch (action) {
> > +	case CPU_PM_ENTER:
> > +		/*
> > +		 * Ignore CPU_PM_ENTER event in reboot or
> > +		 * suspend sequence.
> > +		 */
> > +
> > +		if (atomic_read(&pmu_context->sys_suspended) ||
> > +		    atomic_read(&pmu_context->sys_rebooting))
>
> I don't get exactly why you need here atomics. You don't have here
> barriers, so ordering is not kept (non-RMW atomics are unordered), so
> maybe ordering was not the problem to be solved here. But then you don't
> use these at all as RMW and this is even explicitly described in atomic doc!
>
> "Therefore, if you find yourself only using the Non-RMW operations of
> atomic_t, you do not in fact need atomic_t at all and are doing it wrong."
>
> And it is right. READ/WRITE_ONCE gives you the same.
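
Agreed. Since sys_suspended/sys_rebooting are only ever read and written,
never read-modify-written, plain flags with READ_ONCE()/WRITE_ONCE() would
have been equivalent to the atomics above. Roughly (a sketch against the
v6 fields, not what ended up in v7; where the writer side lives, e.g. a
reboot notifier, is assumed here):

	bool sys_suspended;	/* instead of atomic_t */
	bool sys_rebooting;

	/* writer side, e.g. in a reboot notifier */
	WRITE_ONCE(pmu_context->sys_rebooting, true);

	/* reader side in the CPU PM notifier */
	if (READ_ONCE(pmu_context->sys_suspended) ||
	    READ_ONCE(pmu_context->sys_rebooting))
		return NOTIFY_OK;

That stops the compiler tearing or eliding the accesses, but as you say it
implies no ordering, exactly like the non-RMW atomics did.
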
>
> The question is whether you need ordering or barriers in general
> (atomics don't give you these) - you have here control dependency
> if-else, however it is immediately followed with gs101_cpu_pmu_offline()
> which will use spin-lock (so memory barrier).
>
> Basically you should have here comment explaining why there is no
> barrier - you rely on barrier from spin lock in next calls.
>
> And if my reasoning is correct, then you should use just READ/WRITE_ONCE.

There was a bad assumption on my part that atomic_read() etc. would imply
barriers. After looking at this some more, in v7 I decided to protect the
reboot/suspend flags with the raw spinlock instead. I think that makes the
code much easier to reason about.

Thanks,

Peter
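
P.S. For anyone following along, the v7 arrangement I'm describing looks
roughly like the below (a sketch reconstructed from the v6 diff in this
thread - the actual v7 field names and layout may differ):

	struct exynos_pmu_context {
		...
		raw_spinlock_t cpupm_lock;
		unsigned long *in_hotplug;	/* bitmap, one bit per possible CPU */
		bool sys_suspended;		/* both protected by cpupm_lock */
		bool sys_rebooting;
	};

	/* at probe time ('dev' being the PMU platform device) */
	pmu_context->in_hotplug = devm_bitmap_zalloc(dev, num_possible_cpus(),
						     GFP_KERNEL);

	/* CPU PM notifier (cpuidle) path, IRQs already disabled */
	raw_spin_lock(&pmu_context->cpupm_lock);
	cpu = smp_processor_id();

	/* skip ACPM hint programming entirely during suspend/reboot */
	if (pmu_context->sys_suspended || pmu_context->sys_rebooting) {
		raw_spin_unlock(&pmu_context->cpupm_lock);
		return NOTIFY_OK;
	}

	/* a CPU that is mid-hotplug must not enter C2 from cpuidle */
	if (test_bit(cpu, pmu_context->in_hotplug)) {
		raw_spin_unlock(&pmu_context->cpupm_lock);
		return NOTIFY_BAD;
	}

	__gs101_cpu_pmu_offline(cpu);
	raw_spin_unlock(&pmu_context->cpupm_lock);

	/* hotplug offline path, IRQs enabled */
	raw_spin_lock_irqsave(&pmu_context->cpupm_lock, flags);
	set_bit(cpu, pmu_context->in_hotplug);
	__gs101_cpu_pmu_offline(cpu);
	raw_spin_unlock_irqrestore(&pmu_context->cpupm_lock, flags);

Taking the lock around the flag reads also means the spinlock's
acquire/release ordering does the job the atomics never did.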