Re: RE : Apparent thread-safety bug in Glib 2.0 docs



On Sun, 2003-10-26 at 04:40, Joaquin Cuenca Abela wrote:
> This sounds a lot like a widespread misconception about volatile
> and its relationship with threads.
> One example is an article by A. Alexandrescu in CUJ
> (http://www.cuj.com/documents/s=7998/cujcexp1902alexandr/)
> 
> The article was completely wrong about volatile semantics.
> Please, see the reply from (among others) David Butenhof
> http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=3A66E4A1.54EE37AD%40compaq.com&prev=/groups%3Fq%3Dvolatile%2BButenhof%26hl%3Den%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26selm%3D3A66E4A1.54EE37AD%2540compaq.com%26rnum%3D1

While I'm familiar with that material, this is indeed quite different.

> (Let's say thread A is in the while loop, and thread B calls
> setFoo(0).)
> 
> In fact, g_static_mutex_lock/unlock (if they have the same
> meaning as POSIX Threads' lock/unlock) ensure that
> thread A will see the same memory value for foo as what
> thread B set.  When thread A locks the mutex, it will see the
> memory that thread B set before it unlocked the same mutex
> (or, eventually, a value written after the mutex is unlocked
> by B, but that's not guaranteed).

I believe you are confusing the memory model issue with the registers
issue. What you describe is true from the L1 cache on up, thanks to MESI
(well, not every hardware platform has MESI, but for the most part the
multiprocessing models have something like it). Sure, there is instruction
and memory access reordering going on inside the CPU, compiler and linker,
but it is all done in a way that preserves the original constraints of
sequential execution semantics (at least in the single-threaded world).
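
To make sure we are talking about the same code shape, here is a rough
reconstruction of the setFoo/while-loop scenario from earlier in the
thread (getFoo, thread_a and the GStaticMutex name are my placeholders,
and I am assuming g_thread_init() has already been called):

    #include <glib.h>

    static GStaticMutex foo_mutex = G_STATIC_MUTEX_INIT;
    static int foo = 1;

    void setFoo (int value)          /* thread B eventually calls setFoo (0) */
    {
      g_static_mutex_lock (&foo_mutex);
      foo = value;
      g_static_mutex_unlock (&foo_mutex);
    }

    int getFoo (void)                /* thread A polls this in its loop */
    {
      int value;

      g_static_mutex_lock (&foo_mutex);
      value = foo;
      g_static_mutex_unlock (&foo_mutex);
      return value;
    }

    void thread_a (void)
    {
      while (getFoo ())
        ;                            /* work elided; does A ever see the 0? */
    }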

The problem is, at least on most hardware and operating systems out
there, neither the hardware nor the kernel nor the thread libraries have
any idea which registers map to which regions of memory, so they do not
have sufficient information to ensure the registers and memory are in
sync. While the compiler and linker do have that information, the
C/C++ execution model has no knowledge of threads. No single
component of the system has the knowledge required to ensure the
registers are consistent with the memory they are aliasing (nor is this
desirable as it would remove some of the performance advantages of
registers). Certainly the POSIX memory model does not address what is
happening with the registers.
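
To make the register half of this concrete: if a flag is an ordinary
(non-volatile) variable and the loop body gives the compiler no reason
to think the flag can change, the compiler is entitled to keep it in a
register. This is just an illustrative sketch, not code from GLib:

    /* Nothing visible to the compiler modifies 'done' inside the loop,
     * so it may be loaded into a register once and never re-read; a
     * store from another thread then never reaches this loop. */
    static int done = 0;

    void wait_until_done (void)
    {
      while (!done)
        ;
    }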

The only way for g_static_mutex_lock/unlock to achieve what you are
suggesting would be if they somehow caused the compiler to generate
code forcing all registers to be flushed to memory and then cleared.
Perhaps I am mistaken, but I've not seen anything in the futex code
which suggests this. While the kernel will clear registers when
switching between threads, it stores their values and restores them
exactly as they were prior to the context switch when control returns
to the original thread.
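
For what it's worth, the sort of thing I mean by "forcing registers to
be flushed" is a compiler-level barrier. With GCC, for instance, an asm
statement with a "memory" clobber tells the compiler that memory may
have changed behind its back, so register-cached copies of memory-backed
variables must be written back and re-read; I have not spotted an
equivalent on the GLib/futex path, though I may simply have missed it.

    /* GCC-specific compiler barrier: the "memory" clobber forces the
     * compiler to discard register-cached copies of variables that live
     * in memory, so 'done' is re-read on every iteration. */
    #define compiler_barrier()  __asm__ __volatile__ ("" : : : "memory")

    static int done = 0;

    void wait_until_done (void)
    {
      while (!done)
        compiler_barrier ();
    }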

> Like Mr. Butenhof says, volatile is neither necessary nor sufficient
> to guarantee proper synchronization between two threads.

Oh absolutely. Neither are mutexes though. In most cases you need both
to be certain.
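
Just to make "both" concrete (this illustrates my position above, not a
blanket recommendation): declare the shared flag volatile so the compiler
re-reads it, and keep the mutex for the cross-thread visibility side, as
in the earlier sketch:

    #include <glib.h>

    static GStaticMutex foo_mutex = G_STATIC_MUTEX_INIT;
    static volatile int foo = 1;          /* volatile: compiler must re-read it */

    void setFoo (int value)
    {
      g_static_mutex_lock (&foo_mutex);   /* mutex: cross-thread visibility */
      foo = value;
      g_static_mutex_unlock (&foo_mutex);
    }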

-- 
Christopher Smith <x xman org>


