Re: glib: processing events in multiple threads



This has been a long thread about threads ;-)

I myself have been using a quite different approach for a long time.

Basically, divide everything into ordinary Unix processes and have them talk to
one another over a software bus. I have been using Spread
(www.spread.org) a lot, but there are others (RabbitMQ or ZeroMQ). In
this way you focus on events from processes and protocols between
processes. This works like a charm. In Unix it is very easy to have a
process connect to the software bus, and then fork. Try this. I think
I can promise that you won't miss threads. But you will have to invent
a good protocol instead...
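
To make that concrete, here is a minimal sketch of the idea (my illustration
only, not code from a real system), using ZeroMQ PUB/SUB as the software bus
instead of Spread; the endpoint, message and file name are made up. With
ZeroMQ each process creates its own bus connection after the fork (with
Spread you can connect first and then fork, as described above):

    /* Divide the work into ordinary Unix processes that talk over a bus.
     * Build with: cc bus.c -lzmq */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <zmq.h>

    int
    main (void)
    {
      pid_t pid = fork ();

      if (pid == 0)
        {
          /* Child: a worker process subscribing to events on the bus.
           * The ZeroMQ context is created *after* the fork. */
          void *ctx = zmq_ctx_new ();
          void *sub = zmq_socket (ctx, ZMQ_SUB);
          char  buf[256];
          int   n;

          zmq_connect (sub, "tcp://127.0.0.1:5556");
          zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);   /* all messages */

          n = zmq_recv (sub, buf, sizeof (buf) - 1, 0);
          if (n >= 0)
            {
              buf[n] = '\0';
              printf ("worker %d got event: %s\n", (int) getpid (), buf);
            }

          zmq_close (sub);
          zmq_ctx_term (ctx);
          return 0;
        }

      /* Parent: publishes events onto the bus. */
      {
        void *ctx = zmq_ctx_new ();
        void *pub = zmq_socket (ctx, ZMQ_PUB);
        const char *msg = "file-ready /tmp/data.sgy";

        zmq_bind (pub, "tcp://127.0.0.1:5556");
        sleep (1);              /* crude workaround for the PUB/SUB "slow joiner" */
        zmq_send (pub, msg, strlen (msg), 0);

        waitpid (pid, NULL, 0); /* let the worker print before we exit */
        zmq_close (pub);
        zmq_ctx_term (ctx);
      }

      return 0;
    }

The point is that the "event loop" becomes the bus itself: each process just
blocks on the next message, and the protocol (the message format) is the only
thing the processes share.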


/gh

2013-04-30 12:52, Tristan Van Berkom wrote:
On Tue, Apr 30, 2013 at 8:26 PM, richard boaz <ivor boaz gmail com> wrote:

[...]
I've heard from quite a few people, in particular in the GNOME camp, who
believe that everything should be asynchronous and that multithreading
is evil.


i don't know who these GNOME people are (very short?), but going with
"multi-threading is evil" is an unbelievable statement: i'm gobsmacked.
(and the hardware guys wonder why we're such idiots.)  all those extra CPUs
are only intended for multiple app execution and not to be used
simultaneously by the same program where possible?  i can only conclude one
thing here: it must be simply too difficult for them to implement.

I should mention this, because this seems to be a misinterpretation of
something I probably mentioned to Patrick myself.

Now, there are probably a multitude of reasons for implementing an
asynchronous API, but let's stick to two very popular and probable
reasons:

   a.) The actual work is done in another process, which generally means
        that calling an Async API consists of writing some bytes to a socket
        (D-Bus most popularly) and adding a file descriptor to a poll() call,
        awaiting your reply.

   b.) The actual work is CPU intensive, and probably already executed
        as a separate thread or process.

Now, let's say you already have access to an Async API: is there any
reason, other than the desire to play with fire, to actually execute the
Sync API *in a separate thread* manually?

I know this sounds like a ridiculous question, but people have been doing
just that, and this is the kind of thing I would like to discourage; i.e.
playing with threads when it is absolutely uncalled for.

The temptation to do this stems from the apparent simplicity of calling
a Sync variant of an Async API, coupled with overconfidence in
playing with threads manually (which, honestly, should be left to the
grown-ups; no offence intended to anyone, really, but threading correctly
is just not trivial).

Basically, for D-Bus at least, a Sync variant of a method call is a matter of:

  o Creating an isolated GMainContext/GMainLoop (isolated in the sense
     that the only GSource which can occur while running the encapsulated
     mainloop is the D-Bus method reply... *everything* is blocked while
     making a synchronous D-Bus method call).

  o Issuing an Async call.

  o Waiting, in the encapsulated main context, for only the Async method
     call to return.

  o Collecting the return value when the call completes (or times out),
     and destroying the temporary GMainLoop & GMainContext.

So essentially, if there is an Async variant of a D-Bus method, the Sync variant
is only a way to call the Async variant of the same API and block execution.
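
To illustrate that, here is a minimal sketch of the pattern (my own
illustration, not the actual GDBus or EDS code; the wrapper name
call_method_sync is made up):

    #include <gio/gio.h>

    typedef struct {
      GMainLoop *loop;
      GVariant  *result;
      GError    *error;
    } SyncData;

    static void
    on_reply (GObject *source, GAsyncResult *res, gpointer user_data)
    {
      SyncData *data = user_data;

      /* Collect the reply (or the error, e.g. on timeout)... */
      data->result = g_dbus_connection_call_finish (G_DBUS_CONNECTION (source),
                                                    res, &data->error);
      /* ...and stop the encapsulated main loop. */
      g_main_loop_quit (data->loop);
    }

    static GVariant *
    call_method_sync (GDBusConnection *conn,
                      const gchar     *bus_name,
                      const gchar     *object_path,
                      const gchar     *interface,
                      const gchar     *method,
                      GVariant        *parameters,
                      GError         **error)
    {
      /* Isolated context: only sources attached to it can dispatch here,
       * so the only thing that can "happen" is the D-Bus method reply. */
      GMainContext *context = g_main_context_new ();
      SyncData data = { NULL, NULL, NULL };

      g_main_context_push_thread_default (context);
      data.loop = g_main_loop_new (context, FALSE);

      /* Issue the Async call; its reply lands in our private context. */
      g_dbus_connection_call (conn, bus_name, object_path, interface, method,
                              parameters, NULL, G_DBUS_CALL_FLAGS_NONE,
                              -1 /* default timeout */, NULL, on_reply, &data);

      /* Block until that one reply (or the timeout) has been dispatched. */
      g_main_loop_run (data.loop);

      g_main_loop_unref (data.loop);
      g_main_context_pop_thread_default (context);
      g_main_context_unref (context);

      if (data.error)
        g_propagate_error (error, data.error);
      return data.result;   /* NULL on error */
    }

Note that no thread is created anywhere in that wrapper, which is exactly the
point: spawning a thread just to call a Sync variant like this buys you nothing.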

But we run into situations (like in EDS) where the user-facing APIs never call
the Async D-Bus methods under the hood, but instead run a thread and call the
Sync method *from a thread*, where it will essentially just block while
performing a method call, not buying you any cycles on your separate
CPUs/cores, but adding a lot of complexity to your code.

I hope this clarifies things a bit: if there are Async APIs, it's better to
just use them; if you have CPU-intensive code to run, then you have a
justification for threading, and by all means run that workload in another
thread.

Cheers,
    -Tristan


in some vain attempt (pun intended) to restore some street cred, i offer up
a different "attitude" a programmer can have where hardware advances are
concerned, and how these can be exploited to maximum benefit.

what my GUI program is required to do, very briefly, is to read any number
of files sitting on disk, each to be displayed on a separate line
(seismic data files).  this is all multi-threaded in the following
manner:

- at start-up, query the computer to determine the number of CPUs available
  to my program
- on request to display data files, start a separate thread to
  (a sketch in code follows below):

    - place all files to display on a single Q
    - start up numCPUs threads, where each thread queries the Q (the call
      is mutexed) for the work details, until nothing is left on the Q
    - return to the display for rendering

and what this means is: as hardware advances and more and more CPUs become
available, because of this design, program execution will get faster and
faster without one line of code having to be adjusted in the future.  i
don't even need to recompile.
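
a rough sketch in code of that Q-plus-worker-threads design (my illustration
only; load_seismic_file() and the file names are made up), using GLib's
GThreadPool, which wraps the mutexed work Q for you:

    #include <glib.h>

    /* stand-in for the real per-file work: read and prepare one seismic
     * data file for rendering. */
    static void
    load_seismic_file (gpointer data, gpointer user_data)
    {
      gchar *filename = data;

      g_print ("loading %s on thread %p\n", filename,
               (gpointer) g_thread_self ());
      g_free (filename);
    }

    static void
    display_files (gchar **filenames, guint n_files)
    {
      /* one worker thread per available CPU; as the hardware grows, so
       * does the pool, with no code change and no recompile. */
      GThreadPool *pool = g_thread_pool_new (load_seismic_file, NULL,
                                             (gint) g_get_num_processors (),
                                             FALSE, NULL);

      /* put every file to display on the single work Q... */
      for (guint i = 0; i < n_files; i++)
        g_thread_pool_push (pool, g_strdup (filenames[i]), NULL);

      /* ...and wait for the workers to drain it before rendering. */
      g_thread_pool_free (pool, FALSE, TRUE);
    }

    int
    main (void)
    {
      gchar *files[] = { "line-001.sgy", "line-002.sgy", "line-003.sgy" };

      display_files (files, G_N_ELEMENTS (files));
      return 0;
    }

in a real program display_files() itself would run in its own thread, as
described above, so the GUI stays responsive while the pool drains the Q.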

and this is just one example of how i take advantage of having multiple CPUs
available to me.  the possibilities become endless when this concept is
wholly embraced, as it should be.  do not be afraid...


To be more specific: what I wanted to avoid was the need to protect data
with mutexes, by ensuring that the only code ever touching it is from the
same thread. That is easier to enforce than correct mutex locking.


this, of course, comes down to your requirements' details.  only one
recommendation here: when using mutexes, make sure the critical section
surrounds the fewest lines possible while still guaranteeing single access.
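
for example (a toy sketch of mine, not from any real code), workers pulling
from a shared Q should hold the lock only while touching the Q, never while
doing the actual (slow) work:

    #include <glib.h>

    static GQueue *work_q;        /* shared between the worker threads */
    static GMutex  work_q_lock;   /* protects work_q and nothing else  */

    /* stand-in for the real (slow) per-item work, done OUTSIDE the lock */
    static void
    process_file (gchar *filename)
    {
      g_print ("processing %s\n", filename);
    }

    static gpointer
    worker (gpointer data)
    {
      for (;;)
        {
          gchar *filename;

          /* critical section: only the one call touching the shared Q */
          g_mutex_lock (&work_q_lock);
          filename = g_queue_pop_head (work_q);
          g_mutex_unlock (&work_q_lock);

          if (filename == NULL)
            break;                    /* nothing left on the Q */

          process_file (filename);    /* the slow part, unlocked */
          g_free (filename);
        }

      return NULL;
    }

    int
    main (void)
    {
      GThread *threads[2];
      guint i;

      work_q = g_queue_new ();
      g_queue_push_tail (work_q, g_strdup ("line-001.sgy"));  /* made-up names */
      g_queue_push_tail (work_q, g_strdup ("line-002.sgy"));

      for (i = 0; i < 2; i++)
        threads[i] = g_thread_new ("worker", worker, NULL);
      for (i = 0; i < 2; i++)
        g_thread_join (threads[i]);

      g_queue_free (work_q);
      return 0;
    }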

good luck, make it fun, because it is.

richard

_______________________________________________
gtk-list mailing list
gtk-list gnome org
https://mail.gnome.org/mailman/listinfo/gtk-list



-- 
Göran Hasse
Raditex AB
email: gorhas raditex nu
mob: 070 - 5530148

