
List:       linux-parisc
Subject:    [PATCH] parisc: Remove unnecessary barriers from spinlock.h
From:       John David Anglin <dave.anglin@bell.net>
Date:       2018-08-12 20:31:17
Message-ID: b2e51d85-a7cf-29a4-c22b-db008a1cc441@bell.net

Now that mb() is an actual instruction barrier, issuing unnecessary
barriers slows performance.
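
For reference, mb() on parisc now expands to roughly the following
(paraphrased from arch/parisc/include/asm/barrier.h, not verbatim):

	/* Sketch only: a "sync" instruction plus a compiler barrier. */
	#define synchronize_caches()	__asm__ __volatile__ ("sync" : : : "memory")
	#define mb()			do { synchronize_caches(); } while (0)

Every redundant mb() therefore costs a real sync instruction at run time.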

The spinlock defines contain a number of unnecessary barriers.  The
__ldcw() define is both a hardware and a compiler barrier, so the mb()
barriers in the routines that use __ldcw() serve no purpose.
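
The reason __ldcw() is sufficient on its own can be seen from its
define, sketched below (paraphrased from arch/parisc/include/asm/ldcw.h;
constraints and alignment hints simplified):

	/* Sketch only.  The asm volatile plus the "memory" clobber make
	 * this a compiler barrier, and ldcw itself orders memory accesses
	 * in hardware. */
	#define __ldcw(a) ({						\
		unsigned __ret;						\
		__asm__ __volatile__("ldcw 0(%1),%0"			\
			: "=r" (__ret) : "r" (a) : "memory");		\
		__ret;							\
	})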

The only barrier needed is the one in arch_spin_unlock().  We need to
ensure all accesses complete before the lock is released.
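
To illustrate the requirement (shared_data and writer() are made-up
names, not part of the patch): if the releasing store could be
reordered ahead of stores done inside the critical section, another
CPU that acquires the lock could read stale data.

	static int shared_data;			/* example only */

	static void writer(arch_spinlock_t *lock)
	{
		arch_spin_lock(lock);
		shared_data = 42;	/* must be visible before ... */
		arch_spin_unlock(lock);	/* ... the lock word is set to 1 */
	}

Hence the mb() must come before, not after, the store to *a in
arch_spin_unlock(), which is exactly what the diff below does.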

Signed-off-by: John David Anglin <dave.anglin@bell.net>


["mb.d" (text/plain)]

diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
index 6f84b6acc86e..8a63515f03bf 100644
--- a/arch/parisc/include/asm/spinlock.h
+++ b/arch/parisc/include/asm/spinlock.h
@@ -20,7 +20,6 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
 {
 	volatile unsigned int *a;
 
-	mb();
 	a = __ldcw_align(x);
 	while (__ldcw(a) == 0)
 		while (*a == 0)
@@ -30,17 +29,16 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
 				local_irq_disable();
 			} else
 				cpu_relax();
-	mb();
 }
 #define arch_spin_lock_flags arch_spin_lock_flags
 
 static inline void arch_spin_unlock(arch_spinlock_t *x)
 {
 	volatile unsigned int *a;
-	mb();
+
 	a = __ldcw_align(x);
-	*a = 1;
 	mb();
+	*a = 1;
 }
 
 static inline int arch_spin_trylock(arch_spinlock_t *x)
@@ -48,10 +46,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *x)
 	volatile unsigned int *a;
 	int ret;
 
-	mb();
 	a = __ldcw_align(x);
         ret = __ldcw(a) != 0;
-	mb();
 
 	return ret;
 }

