Re: [sigc] Re: [gtkmm] libsigcx and gtkmm 2.4
- From: Martin Schulze <martin-ml hippogriff de>
- To: libsigc-list gnome org
- Cc: libsigcx-main lists sourceforge net, Daniel Elstner <daniel elstner gmx net>, Gtkmm List <gtkmm-list gnome org>
- Subject: Re: [sigc] Re: [gtkmm] libsigcx and gtkmm 2.4
- Date: Fri, 16 Jul 2004 11:23:01 +0200
Here is an interesting article related to this thread:
http://en.wikipedia.org/wiki/Lock-free_and_wait-free_algorithms
-- Martin
On 15.06.2004 09:58:41, Martin Schulze wrote:
On 14.06.2004 19:34:41, Christer Palm wrote:
Daniel Elstner wrote:
Okay, you're (partly) right. ("Partly" because it's not "locking or unlocking": what's needed is unlock in thread A and lock in thread B.)
I found this in Butenhof:

    Whatever memory values a thread can see when it unlocks a mutex,
    either directly or by waiting on a condition variable, can also
    be seen by any thread that later locks the same mutex. Again,
    data written after the mutex is unlocked may not necessarily be
    seen by the thread that locks the mutex, even if the write
    occurs before the lock.
In other words, the sequence

    pthread_mutex_lock(mutex);
    pthread_mutex_unlock(mutex);

issues a memory barrier instruction on the unlock. The other thread that wants to read the data still has to lock the same mutex though.
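As a minimal sketch of that pattern (names invented for illustration): thread A stores into the shared data before unlocking, thread B locks the same mutex before reading, and only then is A's write guaranteed to be visible to B.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared state, protected by the same mutex in both threads. */
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static int shared_value = 0;

    static void *writer(void *arg)
    {
        pthread_mutex_lock(&mutex);
        shared_value = 42;             /* written before the unlock...  */
        pthread_mutex_unlock(&mutex);  /* ...so it is synchronized here */
        return NULL;
    }

    static void *reader(void *arg)
    {
        int seen = 0;
        while (!seen) {
            pthread_mutex_lock(&mutex);   /* locking the same mutex makes */
            seen = shared_value;          /* the writer's store visible   */
            pthread_mutex_unlock(&mutex);
        }
        printf("reader saw %d\n", seen);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, writer, NULL);
        pthread_create(&b, NULL, reader, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }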
A memory barrier, or synchronize, instruction is issued both on lock and unlock, and also in a bunch of other thread-related functions. Of course, all threads need to agree on which mutex protects memory location X; that's how they make sure they don't execute a region of code that accesses memory location X simultaneously. It's not that only certain memory locations are synchronized when the mutex is locked/unlocked.
Having said that, is there any place in my code or Martin's where you believe this rule isn't followed, except as a side effect of passing objects that contain internal references?
This is what IEEE Std 1003.1-2004 has to say about memory synchronization requirements:

    4.10 Memory Synchronization

    Applications shall ensure that access to any memory location by
    more than one thread of control (threads or processes) is
    restricted such that no thread of control can read or modify a
    memory location while another thread of control may be modifying
    it. Such access is restricted using functions that synchronize
    thread execution and also synchronize memory with respect to
    other threads. The following functions synchronize memory with
    respect to other threads:

    ...
    pthread_mutex_lock()
    ...
    pthread_mutex_unlock()
    ...
This gives rise to an interesting question: if no locking is required (e.g. because atomic operations are used), which is the most efficient call to establish a memory barrier (e.g. before doing the atomic operation)? In a Linux driver I would call wmb(), but what can I do on the application side? Signal a dummy condition?
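As a sketch of what such a "dummy" workaround might look like (the helper name is invented for illustration, and whether a mutex that no other thread ever touches formally gives you the barrier is precisely the open part of the question): lock and unlock a private mutex purely for the memory synchronization that the 1003.1 text above attaches to pthread_mutex_lock()/unlock(). The full lock/unlock round trip is of course far heavier than a wmb().

    #include <pthread.h>

    /* Hypothetical helper: this mutex protects no data at all; the
     * lock/unlock pair is used only for its memory-synchronization
     * side effect. */
    static pthread_mutex_t barrier_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void memory_barrier_hack(void)
    {
        pthread_mutex_lock(&barrier_mutex);
        pthread_mutex_unlock(&barrier_mutex);
    }

    /* Usage sketch: call memory_barrier_hack() right before the
     * atomic operation in question. */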
Regards,
Martin