[PATCH 4/6] locking/local_lock: Introduce local_lock_irqsave_check()

From: Alexei Starovoitov <ast@xxxxxxxxxx>

Introduce local_lock_irqsave_check() to check that a local_lock is
not taken recursively.
In !PREEMPT_RT local_lock_irqsave() disables IRQs, but re-entrance is
still possible either from an NMI or from a strategically placed
kprobe. The code should call local_lock_is_locked() before proceeding
to acquire a local_lock. Such a local_lock_is_locked() check might
happen much earlier in the call graph, with a lot of code between
local_lock_is_locked() and local_lock_irqsave_check().
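
For illustration, a minimal sketch of the intended pattern; the
demo_lock and demo_update() names are hypothetical and not part of
this patch:

	static DEFINE_PER_CPU(local_trylock_t, demo_lock) =
		INIT_LOCAL_TRYLOCK(demo_lock);

	/* Called with preemption or migration disabled, so the check
	 * and the later acquisition happen on the same CPU.
	 */
	static int demo_update(void)
	{
		unsigned long flags;

		/* Bail out early if this CPU already holds the lock,
		 * i.e. we re-entered from an NMI or a kprobe.
		 */
		if (local_lock_is_locked(&demo_lock))
			return -EBUSY;

		/* ... potentially a lot of code ... */

		/* With CONFIG_DEBUG_LOCK_ALLOC this re-checks that the
		 * lock is not taken recursively before acquiring it;
		 * otherwise it is plain local_lock_irqsave().
		 */
		local_lock_irqsave_check(&demo_lock, flags);
		/* critical section */
		local_unlock_irqrestore(&demo_lock, flags);
		return 0;
	}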

Without CONFIG_DEBUG_LOCK_ALLOC, local_lock_irqsave_check() is
equivalent to local_lock_irqsave().

Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
---
 include/linux/local_lock.h          | 13 +++++++++++++
 include/linux/local_lock_internal.h | 19 +++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
index 092ce89b162a..0d6efb0fdd15 100644
--- a/include/linux/local_lock.h
+++ b/include/linux/local_lock.h
@@ -81,6 +81,19 @@
 #define local_trylock_irqsave(lock, flags)			\
 	__local_trylock_irqsave(lock, flags)
 
+/**
+ * local_lock_irqsave_check - Acquire a per CPU local lock, save and disable
+ *			      interrupts
+ * @lock:	The lock variable
+ * @flags:	Storage for interrupt flags
+ *
 + * This function checks that the local_lock is not taken recursively.
 + * In !PREEMPT_RT re-entrance is possible either from an NMI or a kprobe.
 + * In PREEMPT_RT it checks that the current task is not already holding it.
+ */
+#define local_lock_irqsave_check(lock, flags)			\
+	__local_lock_irqsave_check(lock, flags)
+
 DEFINE_GUARD(local_lock, local_lock_t __percpu*,
 	     local_lock(_T),
 	     local_unlock(_T))
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 263723a45ecd..7c4cc002bc68 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -168,6 +168,15 @@ do {								\
 /* preemption or migration must be disabled before calling __local_lock_is_locked */
 #define __local_lock_is_locked(lock) READ_ONCE(this_cpu_ptr(lock)->acquired)
 
+#define __local_lock_irqsave_check(lock, flags)					\
+	do {									\
+		if (IS_ENABLED(CONFIG_DEBUG_LOCK_ALLOC) &&			\
+		    (!__local_lock_is_locked(lock) || in_nmi()))		\
+			WARN_ON_ONCE(!__local_trylock_irqsave(lock, flags));	\
+		else								\
+			__local_lock_irqsave(lock, flags);			\
+	} while (0)
+
 #define __local_lock_release(lock)					\
 	do {								\
 		local_trylock_t *tl;					\
@@ -293,4 +302,14 @@ do {								\
 #define __local_lock_is_locked(__lock)					\
 	(rt_mutex_owner(&this_cpu_ptr(__lock)->lock) == current)
 
+#define __local_lock_irqsave_check(lock, flags)				\
+	do {								\
+		typecheck(unsigned long, flags);			\
+		flags = 0;						\
+		migrate_disable();					\
+		if (IS_ENABLED(CONFIG_DEBUG_LOCK_ALLOC))		\
+			WARN_ON_ONCE(__local_lock_is_locked(lock));	\
+		spin_lock(this_cpu_ptr((lock)));			\
+	} while (0)
+
 #endif /* CONFIG_PREEMPT_RT */
-- 
2.47.1




