Re: making GStaticMutexes faster

Owen Taylor <otaylor redhat com> writes:

> Sebastian Wilhelmi <wilhelmi ira uka de> writes:
> > recently I had an email discussion with Tim about GStaticMutexes,
> > which now have a really big overhead, because before every access to
> > them (locking, unlocking) another mutex has to be locked. 
> Not on operating systems with POSIX thread support, right?
> > This is super paranoid, as the only case it might fail without this
> > additional lock is on an MP-machine without cache coherence, which
> > should be very rare.
> I think "cache coherence" is a topic, not a particular property...
> The particular property you are proposing relying on is if processor
> #1 writes location A and then location B, then another processor will
> never see a write to location B then a write to location A.
> (Since the writes are here separated by a function return it is
> pretty unlikely that instruction reordering would cause problems,
> so it probably is mostly a question of cache behavior, yes.)
> I don't have any more information than what I had back when we
> originally discussed this topic, which is that it is not safe
> "in general" without an explicit memory barrier, and it is safe 
> on the common ia32 processors. I still have no idea on what
> set of processors / machines it will fail on.
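The pattern being debated is lock-free fast-path initialization: check a
pointer without holding any lock, and take a lock only when it still looks
uninitialized. A rough sketch in C of what that looks like for a lazily
created mutex (the struct and function names here are illustrative, not
GLib's actual GStaticMutex implementation), with a comment marking the store
ordering that Owen points out cannot be assumed in general:

```c
#include <stdlib.h>
#include <stddef.h>
#include <pthread.h>

/* Guards the slow (first-time) initialization path only. */
static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
    pthread_mutex_t *impl;   /* lazily created real mutex */
} static_mutex;

static pthread_mutex_t *
static_mutex_get (static_mutex *m)
{
    /* Fast path: read the pointer with no lock held.  This is the step
     * that is only safe if another processor can never observe the
     * store to m->impl before the stores that initialized *impl. */
    if (m->impl == NULL)
    {
        pthread_mutex_lock (&init_lock);
        if (m->impl == NULL)        /* re-check under the lock */
        {
            pthread_mutex_t *p = malloc (sizeof *p);
            pthread_mutex_init (p, NULL);
            /* Without a memory barrier here, the write publishing the
             * pointer may become visible to another processor before
             * the writes done by pthread_mutex_init -- the location-A
             * / location-B reordering described above. */
            m->impl = p;
        }
        pthread_mutex_unlock (&init_lock);
    }
    return m->impl;
}
```

On ia32 this happens to work because stores are not visibly reordered
between processors, but as the message says, nothing guarantees that
property on other machines without an explicit barrier.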

A search turns up a number of articles with information on the technique
that Sebastian is describing and why it isn't safe. (Mostly
in the context of Java.)

