List: openvswitch-dev
Subject: [ovs-dev] [PATCH v3 4/8] ovs-atomic: Fix GCC4+ atomic_flag.
From: Jarno Rajahalme <jrajahalme@nicira.com>
Date: 2014-07-31 22:21:50
Message-ID: 1406845314-24018-5-git-send-email-jrajahalme@nicira.com
The default memory order for atomic_flag operations is documented to be
memory_order_seq_cst (as in C11), but the GCC4+ implementation used only
the GCC builtins, which provide acquire and release semantics only.
Additional barriers are needed in the other cases.
Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
---
lib/ovs-atomic-gcc4+.h | 40 +++++++++++++++++++++++++---------------
1 file changed, 25 insertions(+), 15 deletions(-)
diff --git a/lib/ovs-atomic-gcc4+.h b/lib/ovs-atomic-gcc4+.h
index bb889c6..756696b 100644
--- a/lib/ovs-atomic-gcc4+.h
+++ b/lib/ovs-atomic-gcc4+.h
@@ -167,27 +167,37 @@ typedef struct {
#define ATOMIC_FLAG_INIT { false }
static inline bool
-atomic_flag_test_and_set(volatile atomic_flag *object)
-{
- return __sync_lock_test_and_set(&object->b, 1);
-}
-
-static inline bool
atomic_flag_test_and_set_explicit(volatile atomic_flag *object,
- memory_order order OVS_UNUSED)
+ memory_order order)
{
- return atomic_flag_test_and_set(object);
-}
+ bool old;
-static inline void
-atomic_flag_clear(volatile atomic_flag *object)
-{
- __sync_lock_release(&object->b);
+ /* __sync_lock_test_and_set() by itself is an acquire barrier.
+ * For anything higher additional barriers are needed. */
+ if (order > memory_order_acquire) {
+ atomic_thread_fence(order);
+ }
+ old = __sync_lock_test_and_set(&object->b, 1);
+ atomic_thread_fence_if_seq_cst(order);
+
+ return old;
}
+#define atomic_flag_test_and_set(FLAG) \
+ atomic_flag_test_and_set_explicit(FLAG, memory_order_seq_cst)
+
static inline void
atomic_flag_clear_explicit(volatile atomic_flag *object,
- memory_order order OVS_UNUSED)
+ memory_order order)
{
- atomic_flag_clear(object);
+    /* __sync_lock_release() by itself is a release barrier.  For
+     * anything else an additional barrier may be needed. */
+ if (order != memory_order_release) {
+ atomic_thread_fence(order);
+ }
+ __sync_lock_release(&object->b);
+ atomic_thread_fence_if_seq_cst(order);
}
+
+#define atomic_flag_clear(FLAG) \
+ atomic_flag_clear_explicit(FLAG, memory_order_seq_cst)
--
1.7.10.4