Subject | Interlocked Compare-and-Swap on AMD64 Linux |
---|---|
Author | Jim Starkey |
Post date | 2006-07-25T20:34:28Z |
We've found a problem with some versions of gcc and the Netfrastructure
/ Vulcan / Falcon interlocked compare-and-swap inline assembler
implementation. The source of the problem seems to be over-aggressive
optimization by gcc that effectively ignores the result of the cmpxchg
instruction. Falcon now uses a different instruction sequence that
tests the condition code rather than the resulting exchange value,
which avoids the problem. We haven't finished exhaustive testing of the
replacement code, but I'd like to make the revisions available as soon
as possible.
The problem is known to exist on the gcc 4.0.2 distributed with AMD64
SuSE 10.0. Some combinations of optimization switches result in
infinite loops, but the most common symptom is a very small window in
which inconsistent SyncObject locks can be granted, as well as a small
window in MemMgr where a small block release, implemented with
COMPARE_AND_SWAP, can lose a block. The probability of a lost block is
sufficiently low that it would probably never be detected, but the
consequences of inconsistent SyncObject locks will always be bad.
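For contrast, the old implementation returned the prior value that
cmpxchg leaves in eax and decided success with a C-level comparison.
The sketch below is a reconstruction for illustration only; the
function name and exact constraints are not the original source. It is
that trailing C comparison of the returned value that the optimizer
could effectively discard.
-------------------------------------------------------------------------------------------------------------
// Hypothetical reconstruction of the old, value-based CAS -- not the
// actual Netfrastructure source.  cmpxchg leaves the prior value of
// *target in eax; success is then decided by a C comparison, which an
// over-aggressive optimizer could mishandle.
inline int old_inline_cas (volatile int *target, int compare, int exchange)
{
	int prev;
	__asm __volatile ("lock; cmpxchg %2, %1"
		: "=a" (prev), "+m" (*(target))
		: "r" (exchange), "0" (compare)
		: "cc", "memory");

	return prev == compare;
}
-------------------------------------------------------------------------------------------------------------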
The new code in Interlock.h is:
#if defined(__i386) || defined(__x86_64__) || defined(__sparc__)
#define COMPARE_EXCHANGE(target,compare,exchange)\
	(inline_cas(target,compare,exchange))
#define COMPARE_EXCHANGE_POINTER(target,compare,exchange)\
	(inline_cas_pointer((volatile void**)target,(void*)compare,(void*)exchange))
#endif
-------------------------------------------------------------------------------------------------------------
inline int inline_cas (volatile int *target, int compare, int exchange)
{
#if defined(__i386) || defined(__x86_64__)
	char ret;

	// Atomically: if (*target == compare) { *target = exchange; ret = 1; }
	// else ret = 0.  sete captures the ZF set by cmpxchg, so success is
	// taken from the condition code, not from the exchanged value.
	__asm __volatile ("lock; cmpxchg %2, %1 ; sete %0"
		: "=q" (ret), "+m" (*(target))
		: "r" (exchange), "a" (compare)
		: "cc", "memory");

	return ret;
#else
	return -2;			// no implementation for this platform
#endif
}
inline char inline_cas_pointer (volatile void **target, void *compare,
								void *exchange)
{
#if defined(__i386) || defined(__x86_64__)
	char ret;

	// Pointer-width variant of the same condition-code based CAS.
	__asm __volatile ("lock; cmpxchg %2, %1 ; sete %0"
		: "=q" (ret), "+m" (*(target))
		: "r" (exchange), "a" (compare)
		: "cc", "memory");

	return ret;
#else
	return 0;			// no implementation for this platform
#endif
}
-------------------------------------------------------------------------------------------------------------
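Callers typically wrap COMPARE_EXCHANGE in a retry loop. The following
is an illustrative sketch of that pattern; the function and variable
names are mine, not from Falcon:
-------------------------------------------------------------------------------------------------------------
// Illustrative only: atomically increment a shared counter by retrying
// until no other thread changes the value between the read and the CAS.
inline void atomicIncrement (volatile int *counter)
{
	for (;;)
		{
		int oldValue = *counter;

		if (COMPARE_EXCHANGE(counter, oldValue, oldValue + 1))
			return;
		}
}
-------------------------------------------------------------------------------------------------------------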
If anyone has any trouble with this, please let me know.
--
Jim Starkey, Senior Software Architect
MySQL AB, www.mysql.com
978 526-1376