pgsql: Fix double-release of spinlock
Commit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal
flags, but in EmitProcSignalBarrier(), the spinlock was released
twice. With most spinlock implementations, releasing a lock that's not
held is not easy to notice, because most of the time it does nothing,
but if the spinlock was concurrently acquired by another process, it
could lead to more serious issues. Fortunately, with the
--disable-spinlocks emulation implementation, it caused more visible
failures.
In passing, fix a typo in a comment and add an assertion that the
procNumber passed to SendProcSignal looks valid.
Discussion: /messages/by-id/b8ce284c-18a2-4a79-afd3-1991a2e7d246@iki.fi
Branch
------
master
Details
-------
https://git.postgresql.org/pg/commitdiff/0393f542d72c6182271c392d9a83d0fc775113c7
Modified Files
--------------
src/backend/storage/ipc/procsignal.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
Heikki Linnakangas <heikki.linnakangas@iki.fi> writes:
Commit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal
flags, but in EmitProcSignalBarrier(), the spinlock was released
twice. With most spinlock implementations, releasing a lock that's not
held is not easy to notice, because most of the time it does nothing,
but if the spinlock was concurrently acquired by another process, it
could lead to more serious issues. Fortunately, with the
--disable-spinlocks emulation implementation, it caused more visible
failures.
There was some recent discussion about getting rid of
--disable-spinlocks on the grounds that nobody would use
hardware that lacked native spinlocks. But now I wonder
if there is a testing/debugging reason to keep it.
regards, tom lane
Hi,
On 2024-07-29 11:31:56 -0400, Tom Lane wrote:
Heikki Linnakangas <heikki.linnakangas@iki.fi> writes:
Commit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal
flags, but in EmitProcSignalBarrier(), the spinlock was released
twice. With most spinlock implementations, releasing a lock that's not
held is not easy to notice, because most of the time it does nothing,
but if the spinlock was concurrently acquired by another process, it
could lead to more serious issues. Fortunately, with the
--disable-spinlocks emulation implementation, it caused more visible
failures.
There was some recent discussion about getting rid of
--disable-spinlocks on the grounds that nobody would use
hardware that lacked native spinlocks. But now I wonder
if there is a testing/debugging reason to keep it.
Seems it'd be a lot more straightforward to just add an assertion to the
x86-64 spinlock implementation verifying that the spinlock isn't already free?
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2024-07-29 11:31:56 -0400, Tom Lane wrote:
There was some recent discussion about getting rid of
--disable-spinlocks on the grounds that nobody would use
hardware that lacked native spinlocks. But now I wonder
if there is a testing/debugging reason to keep it.
Seems it'd be a lot more straightforward to just add an assertion to the
x86-64 spinlock implementation verifying that the spinlock isn't already free?
I dunno, is that the only extra check that the --disable-spinlocks
implementation is providing?
I'm kind of allergic to putting Asserts into spinlocked code segments,
mostly on the grounds that it violates the straight-line-code precept.
I suppose it's not really that bad for tests that you don't expect
to fail, but still ...
regards, tom lane
Hi,
On 2024-07-29 12:33:13 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2024-07-29 11:31:56 -0400, Tom Lane wrote:
There was some recent discussion about getting rid of
--disable-spinlocks on the grounds that nobody would use
hardware that lacked native spinlocks. But now I wonder
if there is a testing/debugging reason to keep it.
Seems it'd be a lot more straightforward to just add an assertion to the
x86-64 spinlock implementation verifying that the spinlock isn't already free?
FWIW, I quickly hacked that up, and it indeed quickly fails with 0393f542d72^
and passes with 0393f542d72.
I dunno, is that the only extra check that the --disable-spinlocks
implementation is providing?
I think it also provides the (valuable!) check that spinlocks were actually
initialized. But that again seems like something we'd be better off adding
more general infrastructure for - nobody runs --disable-spinlocks locally, we
shouldn't need to run this on the buildfarm to find problems like this.
I'm kind of allergic to putting Asserts into spinlocked code segments,
mostly on the grounds that it violates the straight-line-code precept.
I suppose it's not really that bad for tests that you don't expect
to fail, but still ...
I don't think the spinlock implementation itself is really affected by that
rule - after all, the --disable-spinlocks implementation actually consists
of several layers of external function calls (including syscalls in some
cases!).
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2024-07-29 12:33:13 -0400, Tom Lane wrote:
I dunno, is that the only extra check that the --disable-spinlocks
implementation is providing?
I think it also provides the (valuable!) check that spinlocks were actually
initialized. But that again seems like something we'd be better off adding
more general infrastructure for - nobody runs --disable-spinlocks locally, we
shouldn't need to run this on the buildfarm to find problems like this.
Hmm, but how? One of the things we gave up by nuking HPPA support
was that that platform's representation of an initialized, free
spinlock was not all-zeroes, so that it'd catch this type of problem.
I think all the remaining platforms do use zeroes, so it's hard to
see how anything short of valgrind would be likely to catch it.
regards, tom lane
Hi,
On 2024-07-29 09:40:26 -0700, Andres Freund wrote:
On 2024-07-29 12:33:13 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2024-07-29 11:31:56 -0400, Tom Lane wrote:
There was some recent discussion about getting rid of
--disable-spinlocks on the grounds that nobody would use
hardware that lacked native spinlocks. But now I wonder
if there is a testing/debugging reason to keep it.
Seems it'd be a lot more straightforward to just add an assertion to the
x86-64 spinlock implementation verifying that the spinlock isn't already free?
FWIW, I quickly hacked that up, and it indeed quickly fails with 0393f542d72^
and passes with 0393f542d72.
Thought it'd be valuable to post a patch to go along with this, to
-hackers. The thread started at [1].
Other context from this discussion:
I dunno, is that the only extra check that the --disable-spinlocks
implementation is providing?
I think it also provides the (valuable!) check that spinlocks were actually
initialized. But that again seems like something we'd be better off adding
more general infrastructure for - nobody runs --disable-spinlocks locally, we
shouldn't need to run this on the buildfarm to find problems like this.
I'm kind of allergic to putting Asserts into spinlocked code segments,
mostly on the grounds that it violates the straight-line-code precept.
I suppose it's not really that bad for tests that you don't expect
to fail, but still ...
I don't think the spinlock implementation itself is really affected by that
rule - after all, the --disable-spinlocks implementation actually consists
of several layers of external function calls (including syscalls in some
of several layers of external function calls (including syscalls in some
cases!).
Greetings,
Andres Freund
[1]: /messages/by-id/E1sYSF2-001lEB-D1@gemulon.postgresql.org
Attachments:
v1-0001-Detect-unlocking-an-unlocked-spinlock.patch (text/x-diff)
From 06bd992f4be2f46586bfc7bab3135b3a2ca84e72 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 29 Jul 2024 09:48:26 -0700
Subject: [PATCH v1] Detect unlocking an unlocked spinlock
Author:
Reviewed-by:
Discussion: https://postgr.es/m/20240729164026.yurg37tej34o76uk@awork3.anarazel.de
Backpatch:
---
src/include/storage/spin.h | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/src/include/storage/spin.h b/src/include/storage/spin.h
index c0679c59992..7ec32cd816a 100644
--- a/src/include/storage/spin.h
+++ b/src/include/storage/spin.h
@@ -61,7 +61,22 @@
#define SpinLockAcquire(lock) S_LOCK(lock)
-#define SpinLockRelease(lock) S_UNLOCK(lock)
+static inline void
+SpinLockRelease(volatile slock_t *lock)
+{
+ /*
+ * Check that this backend actually held the spinlock. Implemented as a
+ * static inline to avoid multiple-evaluation hazards.
+ *
+ * Can't assert when using the semaphore fallback implementation - it
+ * doesn't implement S_LOCK_FREE() - but the fallback triggers failures in
+ * the case of double-release on its own.
+ */
+#ifdef HAVE_SPINLOCKS
+ Assert(!S_LOCK_FREE(lock));
+#endif
+ S_UNLOCK(lock);
+}
#define SpinLockFree(lock) S_LOCK_FREE(lock)
--
2.45.2.746.g06e570c0df.dirty
On 29/07/2024 19:51, Andres Freund wrote:
On 2024-07-29 09:40:26 -0700, Andres Freund wrote:
On 2024-07-29 12:33:13 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2024-07-29 11:31:56 -0400, Tom Lane wrote:
There was some recent discussion about getting rid of
--disable-spinlocks on the grounds that nobody would use
hardware that lacked native spinlocks. But now I wonder
if there is a testing/debugging reason to keep it.
Seems it'd be a lot more straightforward to just add an assertion to the
x86-64 spinlock implementation verifying that the spinlock isn't already free?
FWIW, I quickly hacked that up, and it indeed quickly fails with 0393f542d72^
and passes with 0393f542d72.
+1. Thanks!
Other context from this discussion:
I dunno, is that the only extra check that the --disable-spinlocks
implementation is providing?
I think it also provides the (valuable!) check that spinlocks were actually
initialized. But that again seems like something we'd be better off adding
more general infrastructure for - nobody runs --disable-spinlocks locally, we
shouldn't need to run this on the buildfarm to find problems like this.
Note that the "check" for double-release with the fallback
implementation wasn't an explicit check either. It just incremented the
underlying semaphore, which caused very weird failures later in
completely unrelated code. An explicit assert would be much nicer.
+1 for removing --disable-spinlocks, but let's add this assertion first.
I'm kind of allergic to putting Asserts into spinlocked code segments,
mostly on the grounds that it violates the straight-line-code precept.
I suppose it's not really that bad for tests that you don't expect
to fail, but still ...
I don't think the spinlock implementation itself is really affected by that
rule - after all, the --disable-spinlocks implementation actually consists
of several layers of external function calls (including syscalls in some
of several layers of external function calls (including syscalls in some
cases!).
Yeah I'm not worried about that at all. Also, the assert is made when
you have already released the spinlock; you are already out of the
critical section.
--
Heikki Linnakangas
Neon (https://neon.tech)
Heikki Linnakangas <hlinnaka@iki.fi> writes:
Yeah I'm not worried about that at all. Also, the assert is made when
you have already released the spinlock; you are already out of the
critical section.
Not in the patch Andres posted.
regards, tom lane
On Mon, Jul 29, 2024 at 12:40 PM Andres Freund <andres@anarazel.de> wrote:
I think it also provides the (valuable!) check that spinlocks were actually
initialized. But that again seems like something we'd be better off adding
more general infrastructure for - nobody runs --disable-spinlocks locally, we
shouldn't need to run this on the buildfarm to find problems like this.
+1. It sucks to have to do special builds to catch a certain kind of
problem. I know I've been guilty of that (ahem, debug_parallel_query
f/k/a force_parallel_mode) but I'm not going to put it on my CV as one
of my great accomplishments. It's much better if we can find a way for
a standard 'make check-world' to tell us about as many things as
possible, so that we don't commit and then find out.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 2024-07-29 12:45:19 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2024-07-29 12:33:13 -0400, Tom Lane wrote:
I dunno, is that the only extra check that the --disable-spinlocks
implementation is providing?
I think it also provides the (valuable!) check that spinlocks were actually
initialized. But that again seems like something we'd be better off adding
more general infrastructure for - nobody runs --disable-spinlocks locally, we
shouldn't need to run this on the buildfarm to find problems like this.
Hmm, but how?
I think there's a few ways:
One of the things we gave up by nuking HPPA support
was that that platform's representation of an initialized, free
spinlock was not all-zeroes, so that it'd catch this type of problem.
I think all the remaining platforms do use zeroes, so it's hard to
see how anything short of valgrind would be likely to catch it.
1) There's nothing forcing us to use 0/1 for most of the spinlock
implementations. E.g. for x86-64 we could use 0 for uninitialized, 1 for free
and 2 for locked.
2) We could also change the layout of slock_t in assert enabled builds, adding
a dedicated 'initialized' field when assertions are enabled. But that might be
annoying from an ABI POV?
1) seems preferable, so I gave it a quick try. Seems to work. There's a
*slight* difference in the instruction sequence:
old:
41f6: f0 86 10 lock xchg %dl,(%rax)
41f9: 84 d2 test %dl,%dl
41fb: 75 1b jne 4218 <GetRecoveryState+0x38>
new:
4216: f0 86 10 lock xchg %dl,(%rax)
4219: 80 fa 02 cmp $0x2,%dl
421c: 74 22 je 4240 <GetRecoveryState+0x40>
I.e. the version using 2 as the locked state uses a three byte instruction vs
a two byte instruction before.
*If* we are worried about this, we could
a) Change the representation only for assert enabled builds, but that'd have
ABI issues again.
b) Instead define the spinlock to have 1 as the unlocked state and 0 as the
locked state. That makes it a bit harder to understand that initialization
is missing, compared to a dedicated state, as the first use of the spinlock
just blocks.
To make 1) b) easier to understand it might be worth changing the slock_t
typedef to be something like
typedef struct slock_t
{
char is_free;
} slock_t;
which also might help catch some cases of type confusion - the char typedef is
too forgiving imo.
Greetings,
Andres Freund
Hi,
On 2024-07-29 13:25:22 -0400, Tom Lane wrote:
Heikki Linnakangas <hlinnaka@iki.fi> writes:
Yeah I'm not worried about that at all. Also, the assert is made when
you have already released the spinlock; you are already out of the
critical section.
Not in the patch Andres posted.
Which seems fairly fundamental - once outside of the critical section, we
can't actually assert that the lock isn't acquired, somebody else *validly*
might have acquired it by then.
However, I still don't think it's a problem to assert that the lock is held in
in the unlock "routine". As mentioned before, the spinlock implementation
itself has never followed the "just straight line code" rule that users of
spinlocks are supposed to follow.
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2024-07-29 12:45:19 -0400, Tom Lane wrote:
Hmm, but how?
...
I.e. the version using 2 as the locked state uses a three byte instruction vs
a two byte instruction before.
*If* we are worried about this, we could
a) Change the representation only for assert enabled builds, but that'd have
ABI issues again.
Agreed, that would be a very bad idea. It would for example break the
case of a non-assert-enabled extension used with an assert-enabled
core or vice versa, which is something we've gone out of our way to
allow.
b) Instead define the spinlock to have 1 as the unlocked state and 0 as the
locked state. That makes it a bit harder to understand that initialization
is missing, compared to a dedicated state, as the first use of the spinlock
just blocks.
This option works for me.
To make 1) b) easier to understand it might be worth changing the slock_t
typedef to be something like
typedef struct slock_t
{
char is_free;
} slock_t;
+1
How much of this would we change across platforms, and how much
would be x86-only? I think there are enough people developing on
ARM (e.g. Mac) now to make it worth covering that, but maybe we
don't care so much about anything else.
regards, tom lane
Andres Freund <andres@anarazel.de> writes:
However, I still don't think it's a problem to assert that the lock is held in
in the unlock "routine". As mentioned before, the spinlock implementation
itself has never followed the "just straight line code" rule that users of
spinlocks are supposed to follow.
Yeah, that's fair.
regards, tom lane
On 29/07/2024 20:48, Andres Freund wrote:
On 2024-07-29 13:25:22 -0400, Tom Lane wrote:
Heikki Linnakangas <hlinnaka@iki.fi> writes:
Yeah I'm not worried about that at all. Also, the assert is made when
you have already released the spinlock; you are already out of the
critical section.
Not in the patch Andres posted.
Which seems fairly fundamental - once outside of the critical section, we
can't actually assert that the lock isn't acquired, somebody else *validly*
might have acquired it by then.
You could do:
bool was_free = S_LOCK_FREE(lock);
S_UNLOCK(lock);
Assert(!was_free);
Depending on the underlying implementation, you could also use
compare-and-exchange. That makes the assertion-enabled instructions a
little different than without assertions though.
However, I still don't think it's a problem to assert that the lock is held in
in the unlock "routine". As mentioned before, the spinlock implementation
itself has never followed the "just straight line code" rule that users of
spinlocks are supposed to follow.
Agreed.
--
Heikki Linnakangas
Neon (https://neon.tech)
Hi,
On 2024-07-29 21:00:35 +0300, Heikki Linnakangas wrote:
On 29/07/2024 20:48, Andres Freund wrote:
On 2024-07-29 13:25:22 -0400, Tom Lane wrote:
Heikki Linnakangas <hlinnaka@iki.fi> writes:
Yeah I'm not worried about that at all. Also, the assert is made when
you have already released the spinlock; you are already out of the
critical section.
Not in the patch Andres posted.
Which seems fairly fundamental - once outside of the critical section, we
can't actually assert that the lock isn't acquired, somebody else *validly*
might have acquired it by then.
You could do:
bool was_free = S_LOCK_FREE(lock);
S_UNLOCK(lock);
Assert(!was_free);
I don't really see the point - we're about to crash with an assertion failure,
why would we want to do that outside of the critical section? If anything that
will make it harder to debug the issue in a core dump, because other backends
might "destroy evidence" due to being able to acquire the spinlock.
Depending on the underlying implementation, you could also use
compare-and-exchange.
That'd scale a lot worse, at least on x86-64, as it requires the unlock to be
an atomic op, whereas today it's a simple store (+ compiler barrier).
I've experimented with replacing all spinlocks with lwlocks, and the fact that
you need an atomic op for an rwlock release is one of the two major reasons
they have a higher overhead (the remainder is boring stuff like the overhead
of external function calls and ownership management).
Greetings,
Andres Freund
Hi,
Partially replying here to an email on -committers [1]: /messages/by-id/2812376.1722275765@sss.pgh.pa.us.
On 2024-07-29 13:57:02 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
However, I still don't think it's a problem to assert that the lock is held in
in the unlock "routine". As mentioned before, the spinlock implementation
itself has never followed the "just straight line code" rule that users of
spinlocks are supposed to follow.
Yeah, that's fair.
Cool.
On 2024-07-29 13:56:05 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
b) Instead define the spinlock to have 1 as the unlocked state and 0 as the
locked state. That makes it a bit harder to understand that initialization
is missing, compared to a dedicated state, as the first use of the spinlock
just blocks.
This option works for me.
To make 1) b) easier to understand it might be worth changing the slock_t
typedef to be something like
typedef struct slock_t
{
char is_free;
} slock_t;
+1
Cool. I've attached a prototype.
I just realized there's a nice little advantage to the "inverted"
representation - it detects missing initialization even in optimized builds.
How much of this would we change across platforms, and how much
would be x86-only? I think there are enough people developing on
ARM (e.g. Mac) now to make it worth covering that, but maybe we
don't care so much about anything else.
Not sure. Right now I've only hacked up x86-64 (not even touching i386), but
it shouldn't be hard to change at least some additional platforms.
My current prototype requires S_UNLOCK, S_LOCK_FREE, S_INIT_LOCK to be
implemented for x86-64 instead of using the "generic" implementation. That'd
be mildly annoying duplication if we did so for a few more platforms.
It'd be more palatable to just change all platforms if we made more of them
use __sync_lock_test_and_set (or some other intrinsic(s))...
Greetings,
Andres Freund
Attachments:
v2-0001-Detect-unlocking-an-unlocked-spinlock.patch (text/x-diff)
From 06bd992f4be2f46586bfc7bab3135b3a2ca84e72 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 29 Jul 2024 09:48:26 -0700
Subject: [PATCH v2 1/2] Detect unlocking an unlocked spinlock
Author:
Reviewed-by:
Discussion: https://postgr.es/m/20240729164026.yurg37tej34o76uk@awork3.anarazel.de
Backpatch:
---
src/include/storage/spin.h | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/src/include/storage/spin.h b/src/include/storage/spin.h
index c0679c59992..7ec32cd816a 100644
--- a/src/include/storage/spin.h
+++ b/src/include/storage/spin.h
@@ -61,7 +61,22 @@
#define SpinLockAcquire(lock) S_LOCK(lock)
-#define SpinLockRelease(lock) S_UNLOCK(lock)
+static inline void
+SpinLockRelease(volatile slock_t *lock)
+{
+ /*
+ * Check that this backend actually held the spinlock. Implemented as a
+ * static inline to avoid multiple-evaluation hazards.
+ *
+ * Can't assert when using the semaphore fallback implementation - it
+ * doesn't implement S_LOCK_FREE() - but the fallback triggers failures in
+ * the case of double-release on its own.
+ */
+#ifdef HAVE_SPINLOCKS
+ Assert(!S_LOCK_FREE(lock));
+#endif
+ S_UNLOCK(lock);
+}
#define SpinLockFree(lock) S_LOCK_FREE(lock)
--
2.45.2.746.g06e570c0df.dirty
v2-0002-Detect-many-uses-of-uninitialized-spinlocks-on-x8.patch (text/x-diff)
From 574d02d3d4d7f38b764ca6a2e4d2617a417ab8a8 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 29 Jul 2024 10:55:11 -0700
Subject: [PATCH v2 2/2] Detect many uses of uninitialized spinlocks on x86-64
On most platforms we use 0 for an unlocked spinlock and 1 for a locked one. As
we often place spinlocks in zero-initialized memory, missing calls to
SpinLockInit() aren't noticed.
To address that, swap the meaning of the spinlock state and use 1 for unlocked
and 0 for locked. That way using an uninitialized spinlock blocks the first
time a lock is acquired. A dedicated state for initialized locks would allow
for a nicer assertion, but ends up with slightly less dense code (a three byte
cmp with an immediate, instead of a 2 byte test).
To make the swapped meaning of the slock_t state easier to understand when
debugging, change the slock_t to be a struct on x86-64, with an "is_free"
member.
Author:
Reviewed-by:
Discussion: https://postgr.es/m/20240729164026.yurg37tej34o76uk@awork3.anarazel.de
Backpatch:
---
src/include/storage/s_lock.h | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h
index 02c68513a53..18291b284b0 100644
--- a/src/include/storage/s_lock.h
+++ b/src/include/storage/s_lock.h
@@ -205,7 +205,10 @@ spin_delay(void)
#ifdef __x86_64__ /* AMD Opteron, Intel EM64T */
#define HAS_TEST_AND_SET
-typedef unsigned char slock_t;
+typedef struct slock_t
+{
+ char is_free;
+} slock_t;
#define TAS(lock) tas(lock)
@@ -217,21 +220,27 @@ typedef unsigned char slock_t;
* and IA32, by Michael Chynoweth and Mary R. Lee. As of this writing, it is
* available at:
* http://software.intel.com/en-us/articles/implementing-scalable-atomic-locks-for-multi-core-intel-em64t-and-ia32-architectures
+ *
+ * To catch uses of uninitialized spinlocks, we define 1 as unlocked and 0 as
+ * locked. That causes the first acquisition of an uninitialized spinlock to
+ * block forever. A dedicated state for initialized locks would allow for a
+ * nicer assertion, but ends up with slightly less dense code (a three byte
+ * cmp with an immediate, instead of a 2 byte test).
*/
-#define TAS_SPIN(lock) (*(lock) ? 1 : TAS(lock))
+#define TAS_SPIN(lock) ((lock)->is_free == 0 ? 1 : TAS(lock))
static __inline__ int
tas(volatile slock_t *lock)
{
- slock_t _res = 1;
+ char was_free = 0;
__asm__ __volatile__(
" lock \n"
" xchgb %0,%1 \n"
-: "+q"(_res), "+m"(*lock)
+: "+q"(was_free), "+m"(lock->is_free)
: /* no inputs */
: "memory", "cc");
- return (int) _res;
+ return was_free == 0;
}
#define SPIN_DELAY() spin_delay()
@@ -247,6 +256,13 @@ spin_delay(void)
" rep; nop \n");
}
+#define S_UNLOCK(lock) \
+ do { __asm__ __volatile__("" : : : "memory"); (lock)->is_free = 1; } while (0)
+
+#define S_LOCK_FREE(lock) ((lock)->is_free == 1)
+
+#define S_INIT_LOCK(lock) S_UNLOCK(lock)
+
#endif /* __x86_64__ */
--
2.45.2.746.g06e570c0df.dirty