Re: gnome-vfs usage of GConf vs threads

On 10 Dec 2002, Michael Meeks wrote:

> > > 	Haha ;-) I think re-enterancy is a simpler tool than threading - as you
> > > know both schemes scatter the code with re-enterancy hazards, a pure
> > > re-enterant approach simply ensures those are at well defined points ;-)
> > 
> > This is where we have a fundamental disagreement. Threads are hard, yes, 
> > but they are manageable. There exists lots of books on the subject and 
> > people learn about it in school. Strategies for handling locking and 
> > avoiding deadlocks are well known and locking can be well encapsulated 
> > most of the time.
> > 
> > Contrast this to re-entrancy.
> 	I just see such a lack of contrast ;-) it seems to me that they are
> essentially identical, but instead of random context switches - you have
> a more controlled control flow switch at well defined points. It seems
> to me almost equivalent. 

Controlled is just what it is not. Normally with threads you cannot know 
what other threads do, but you can at least know what your own thread is 
doing and make sure it makes progress. I.e., your code related to some 
particular subsystem can set up locking rules for the different functions 
(may block, must hold lock foo or higher, cannot hold lock bar, etc.), and 
a lock order to avoid deadlocks. As long as you follow those rules you 
avoid deadlocks and unsafe behaviour.
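To make the "lock order" point concrete, here is a minimal sketch (hypothetical names, Python's threading module standing in for pthreads): two locks with the rule "always take foo before bar". Any number of threads following that rule cannot form a deadlock cycle on these two locks.

```python
import threading

# Hypothetical subsystem rule: lock_foo is always acquired before
# lock_bar, never the other way round.  Threads that follow the rule
# cannot deadlock on this pair of locks.
lock_foo = threading.Lock()
lock_bar = threading.Lock()

counter = 0

def update_both():
    global counter
    with lock_foo:        # rule: foo first...
        with lock_bar:    # ...then bar
            counter += 1  # protected shared state

threads = [threading.Thread(target=update_both) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 8
```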

With re-entrancy you can never say "this function is safe to call while 
holding lock foo", because you have no control over what else will be 
called. It might be some function that takes lock foo, which will deadlock 
(with threads, the other thread would simply block until the lock is 
released).
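A tiny sketch of that failure mode (assumed names, with a non-recursive lock standing in for a mutex; a timeout is used so the example terminates instead of hanging): the reentrant dispatch runs on the *same* thread that already holds foo, so the nested acquire can never succeed.

```python
import threading

lock_foo = threading.Lock()  # non-recursive, like a plain mutex

def incoming_request():
    # Simulates an incoming call dispatched re-entrantly while the
    # original caller still holds lock_foo.  Without the timeout this
    # acquire would block forever: nobody else can release the lock,
    # because the holder is *this same thread*, stuck right here.
    return lock_foo.acquire(timeout=0.1)

lock_foo.acquire()           # caller holds foo across the "CORBA call"
got_it = incoming_request()  # reentrant dispatch in the same thread
lock_foo.release()
print(got_it)  # False: the nested acquire deadlocks (here, times out)
```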

In other words, it's impossible to implement local rules for re-entrancy 
and lock handling in a subsystem, because you never know what your thread 
will execute. You have to take into account the global set of all corba 
calls in the app, and that may not be possible if you use libraries and 
components.
> 	It's certainly true that by holding a lock over a CORBA method
> invocation you can be sure that the data is still consistant when you
> get back - and thus you have more control over serializing the
> processing of requests. But - this just switches the problem to the
> client instead of the server with deadlocks turning up at unexpected
> places.

I don't understand what you mean by this. It is never safe to hold a lock 
over a re-entering CORBA call, even if you know the function you call 
won't call anything else requiring that lock, because while waiting for 
the reply some other process may call such a function and we'll deadlock.

If this were a local call that would be safe, even with other threads 
running, as long as you played by the correct locking rules.

With CORBA reentrancy there is no way to set up rules such that, if you 
follow them, you avoid deadlocking when holding a lock over a CORBA call. 

> 	Again - you still have to know that any given C call you make could
> trigger an incoming call that could stomp on your data and/or deadlock
> you, necessitating expensive lock/unlocks around methods in the same
> way. If you're going to do expensive lock/unlocks around invocations (I
> believe any local/remote distinction has no bearing on the issue) - you
> might as well hold refs on what you're working on instead ;-)

Take the following API:

foo_remove_item (foo, name):
  item = list_lookup (foo->list, name)
  list_remove (foo->list, item)
  unref (item)

foo_activate (foo)
  for item in foo->list:
    activate (item)

We have to make sure that the list is not modified while it's being 
iterated over. If activate is a corba call that does not call any foo 
methods it's pretty easy to make this safe in a multithreaded environment 
where corba calls don't reenter: 

foo_remove_item (foo, name):
  lock (foo->mutex)
  item = list_lookup (foo->list, name)
  list_remove (foo->list, item)
  unref (item)
  unlock (foo->mutex)

foo_activate (foo)
  lock (foo->mutex)
  for item in foo->list:
    activate (item)
  unlock (foo->mutex)

This works with corba requests that spawn a new thread, or with corba 
requests that are simply queued while waiting for the reply. It also 
protects against other non-corba-related threads modifying the list while 
iterating over it. 
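For completeness, here is a runnable model of the locked pseudocode above (the names come from the example, not from any real API; the mutex becomes a Python Lock that serializes list mutation against iteration):

```python
import threading

# foo->mutex becomes a Lock; foo->list becomes a Python list.
class Foo:
    def __init__(self):
        self.mutex = threading.Lock()
        self.items = ["a", "b", "c"]

def foo_remove_item(foo, name):
    with foo.mutex:            # lock (foo->mutex)
        foo.items.remove(name)

activated = []

def foo_activate(foo):
    with foo.mutex:            # iteration is protected by the same lock
        for item in foo.items:
            activated.append(item)  # stands in for the CORBA activate()

foo = Foo()
t = threading.Thread(target=foo_remove_item, args=(foo, "b"))
t.start()
t.join()           # removal finishes before activation, deterministically
foo_activate(foo)
print(activated)   # ["a", "c"]
```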

How would you make it safe if we are working in a single-threaded 
environment where corba calls reenter (and you have no control of other 
corba calls, like when you're writing a library/component)?

The threaded case is surely complicated, but the re-entering one has no 
safe solution at all. 
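A sketch of that reentrant failure mode, using the same hypothetical names (no lock can help here, because the mutation happens on the very thread that is iterating): activate() dispatches an incoming request that calls back into the remove function mid-iteration.

```python
# Single-threaded, reentrant world: activate() can dispatch an
# incoming request that calls remove_item() while the caller is
# still iterating over the list.
items = ["a", "b", "c"]
seen = []

def remove_item(name):
    items.remove(name)

def activate(item):
    seen.append(item)
    if item == "a":
        # Reentrant incoming call mutates the list under the iterator.
        remove_item("b")

for item in items:
    activate(item)
print(seen)  # ["a", "c"] -- "b" was silently skipped
```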

> 	Anyway - I guess we need a ORBit_deadlock_push / pop method somewhere
> to stop the processing of incoming calls; although you can fairly easily
> replicate this with:
> 	PortableServer_POAManager_hold_requests (
> 		bonobo_poa_manager (), FALSE, &ev);
> 	.. do some blocking CORBA call ...
> 	PortableServer_POAManager_activate (
> 		bonobo_poa_manager (), &ev);
> 	Where the incoming invocations get processed on the 'activate' (ie.
> unlock).

Something like that would be good. Of course, this can cause trouble if 
another thread uses that POA. I can hardly, for example, stop all processing 
for bonobo_poa_manager() from a gnome-vfs thread, so I would have to have my 
own POA instance to do that. 

 Alexander Larsson                                            Red Hat, Inc 
                   alexl redhat com    alla lysator liu se 
He's a scarfaced chivalrous paramedic whom everyone believes is mad. She's a 
mistrustful cigar-chomping socialite with a flame-thrower. They fight crime! 
