Re: [sigc] Re: [gtkmm] libsigcx and gtkmm 2.4



On 13.06.2004 16:33:45, Daniel Elstner wrote:
On Sun, 13.06.2004 at 2:10 +0200, Martin Schulze wrote:
> >
> > Hmm std::string is a perfect example of an argument type that
> > requires
> > special handling.
>
> Why? The slot object is completely initialized before the dispatcher
> knows of it. Note that sigc::bind does not take arguments as references
> by default if this is where you are heading.

std::string can be implemented with reference counting, and the
libstdc++ shipped with GCC does exactly that.

Meaning that no deep copy of the string is made although it is passed "by value"?! Then I understand the problem here. (However, if you pass a "const char*" into the std::string ctor, as in my example, the copy is created immediately, isn't it?)
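
To illustrate what that means in practice, here is a minimal sketch (the
behaviour described in the comments depends on GCC's reference-counted
libstdc++ implementation and is not guaranteed by the standard):

    #include <iostream>
    #include <string>

    int main()
    {
      // Constructing from a "const char*" always copies the characters
      // into a buffer owned by the string object.
      std::string a("hello dispatcher");

      // With a reference-counted (copy-on-write) std::string, as in the
      // libstdc++ shipped with GCC, this "copy" may merely bump a
      // reference count and share the character buffer of 'a'.
      std::string b(a);

      // On a COW implementation both addresses can be identical until
      // one of the strings is written to; a deep-copying implementation
      // always prints two different addresses.
      std::cout << static_cast<const void*>(a.data()) << '\n'
                << static_cast<const void*>(b.data()) << '\n';

      return 0;
    }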

> >  Even if it does this you still need mutex locking to protect
> > the memory being shared (ensuring that event A happens after event B
> > is not enough due to the way modern hardware works; you definitely
> > need memory barriers too).
>
> Why would you need memory barriers? Thread A creates some objects,
> thread B (the dispatcher) uses them and destroys them afterwards.
> Of course, if you pass references around, you need to make sure
> yourself that thread A doesn't manipulate the data while thread B is
> handling it.

Wrong!  It's not that simple.  Whenever two threads access the same
data, both have to acquire the same mutex for any access to it
whatsoever, be it reading or writing.  The only situation where this
rule doesn't apply is if thread A creates the data before launching
thread B and neither thread ever writes to it again, or only thread B
writes to it and thread A never accesses it at all.
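
A minimal sketch of that rule with POSIX threads (the names
'shared_value', 'set_value' and 'get_value' are made up for this
example): both sides take the same mutex, even for the read-only access.

    #include <pthread.h>

    // Hypothetical data shared between a worker thread and the dispatcher.
    static pthread_mutex_t data_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int shared_value = 0;

    // Thread A (writer).
    void set_value(int v)
    {
      pthread_mutex_lock(&data_mutex);
      shared_value = v;
      pthread_mutex_unlock(&data_mutex);
    }

    // Thread B (reader): it must take the *same* mutex even though it
    // only reads, otherwise there is no visibility guarantee at all.
    int get_value()
    {
      pthread_mutex_lock(&data_mutex);
      const int v = shared_value;
      pthread_mutex_unlock(&data_mutex);
      return v;
    }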

I highly recommend reading Butenhof's Programming with POSIX Threads.
In particular, memorize Section 3.4, "Memory visibility between threads".

Here's a table from that chapter:

Time    Thread 1                           Thread 2
-------------------------------------------------------------------------
t       write "1" to address 1 (cache)
t+1     write "2" to address 2 (cache)     read "0" from address 1
t+2     cache system flushes address 2
t+3                                        read "2" from address 2
t+4     cache system flushes address 1

The point here is that there are no guarantees about memory ordering
whatsoever.  As it happens, the read from address 2 returns the right
value by chance, but the read from address 1 returns the wrong value
despite the fact that it happens after the write was completed.

Special instructions, called "memory barriers", are required to
guarantee ordering.  Locking/unlocking a mutex issues these
instructions.
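
To put the table into code, a sketch of the classic broken pattern (the
variable and function names are made up here); without a mutex or
explicit barriers, thread 2 may observe the second write and still read
a stale value from address 1, which is exactly the kind of reordering
the table shows.

    // Two variables written by thread 1 and read by thread 2 with no
    // synchronisation at all -- the situation from the table above.
    static int data  = 0;  // "address 1"
    static int ready = 0;  // "address 2"

    // Thread 1
    void produce()
    {
      data  = 1;  // write "1" to address 1
      ready = 2;  // write "2" to address 2
    }

    // Thread 2
    int consume()
    {
      if (ready == 2)  // may already see the second write ...
        return data;   // ... yet still read a stale 0 from address 1,
                       // because nothing ordered the writes or the reads
      return -1;       // not published yet
    }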

I still don't see the problem in the case where no references/pointers are being passed around: the list of slots the dispatcher operates on _is_ protected by memory barriers.  There might be bugs in my code, but it is perfectly possible to simply put a mutex around 'std::list::push_back()' / 'std::list::pop_front()', as I pointed out in a comment and as Christer does.
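
Roughly like this (a sketch only, with made-up names, not the actual
libsigcx code): the same mutex around push_back() and pop_front() is
what provides the barriers that make the fully constructed slot visible
to the dispatcher.

    #include <list>
    #include <pthread.h>
    #include <sigc++/sigc++.h>

    // Sketch of a queue of slots filled by worker threads and drained
    // by the dispatcher in the GUI thread.
    class SlotQueue
    {
    public:
      SlotQueue()  { pthread_mutex_init(&mutex_, 0); }
      ~SlotQueue() { pthread_mutex_destroy(&mutex_); }

      // Called from the worker thread; the unlock publishes the fully
      // constructed slot (including any values bound by sigc::bind).
      void push(const sigc::slot<void>& slot)
      {
        pthread_mutex_lock(&mutex_);
        queue_.push_back(slot);
        pthread_mutex_unlock(&mutex_);
      }

      // Called from the dispatcher: invoke and discard all queued slots.
      void flush()
      {
        for (;;)
        {
          pthread_mutex_lock(&mutex_);
          if (queue_.empty())
          {
            pthread_mutex_unlock(&mutex_);
            return;
          }
          sigc::slot<void> slot = queue_.front();
          queue_.pop_front();
          pthread_mutex_unlock(&mutex_);

          slot();  // run the slot outside the lock
        }
      }

    private:
      pthread_mutex_t mutex_;
      std::list<sigc::slot<void> > queue_;
    };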

Regards,

 Martin


