Re: why use guile?

Anthony Martin <amartin@engr.csulb.edu> writes:

> > > Design 1:                     Design 2:
> > > 
> > > libgnome  gtk  gwm  etc.      libgnome  gtk  gwm  etc.
> > >     |      |    |    |            |      |    |    |
> > >     ------------------            ------------------
> > >              |                             |
> > >            guile                         CORBA
> > >                                            |
> > >                          ----------------------------------
> > >                          |       |      |    |   |   |    |
> > >                        guile  Python  Perl  Tcl  C  C++  etc.
> > > 
> > > Does GNOME use Design 1, or Design 2, or both?  If it uses Design 1, or
> > > both, why?  Isn't Design 2 the best?
> > 
> > Because it is slow. It would not surprise me if Design 2 produces 5 or
> > more calls of overhead for every GTK call from Guile.
> 
> Are you speaking from experience, or are you guessing?  My understanding
> is that a decent ORB should handle CORBA calls with very little overhead. 
> If you are making the calls from guile or any other interpreted scripting
> language which is fairly slow anyway, the overhead should be
> insignificant. 
> 
> Does anyone have any experience with this?  Is CORBA slow?

It depends on what you mean by slow. But it is certainly slower than
no CORBA.

Here are some numbers. These are not benchmarks, but they probably
aren't off by more than a factor of two or three. (All timings on a
P133)

The system is MICO-Perl (my current mini-project) - a naive, simple
language binding on top of a "naive", simple ORB. ("Naive" isn't
really appropriate for MICO, but speed was not a design goal.)

The calls pass a single integer parameter.

                                                ms/call   calls/sec
  Perl client to Perl server over loopback:        5        200
  C++ client to C++ server over loopback:          2.5      400
  Perl client to C++ server in process:            0.8      1200
  Perl client to Perl server in process:           0.12     8000
  C++ client to C++ server in process:             small    big

Note that the in-process Perl-to-Perl and C++-to-C++ calls short-circuit
the ORB, so there is no extra overhead from MICO.
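
For concreteness, the C++-to-C++ case is timing nothing more exotic
than an ordinary stub call, something along these lines. (The "Echo"
interface, the IOR file and the generated header name are made up
for illustration - this is not the actual test code, and error
handling is omitted.)

  // echo.idl (hypothetical):  interface Echo { long echo(in long x); };

  #include <fstream>
  #include <string>
  #include "echo.h"            // stubs generated by the IDL compiler (assumed name)

  int main(int argc, char *argv[])
  {
      CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

      // Get an object reference; here, a stringified IOR the server wrote out.
      std::ifstream f("echo.ior");
      std::string ior;
      f >> ior;

      CORBA::Object_var obj = orb->string_to_object(ior.c_str());
      Echo_var echo = Echo::_narrow(obj);

      // The timed loop: one 'in' parameter of type long per call.
      for (CORBA::Long i = 0; i < 1000; i++)
          echo->echo(i);

      return 0;
  }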

I'm not sure why Perl-to-Perl inter-process is so much worse than
C++-to-C++. It could be within my factor-of-two uncertainty in the
timings, or it could be something easily fixable.


OmniOrb does quite a bit better - from what I've heard, the comparable
inter-process performance would be about 0.5 ms/call. The difference
is that OmniOrb's stubs basically write directly into buffers,
while MICO builds objects that it passes around. (Not quite
true on either end, but close enough.) Flick takes the OmniOrb
approach one step further.
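
To illustrate the sort of difference I mean, here is a toy sketch.
This is not actual OmniOrb or MICO code; the Buffer and LongValue
types are invented for the example.

  #include <cstdint>
  #include <cstring>
  #include <vector>

  // A toy CDR-ish output buffer, just to make the contrast concrete.
  struct Buffer {
      std::vector<unsigned char> data;
      void put_long(int32_t x) {
          unsigned char b[sizeof x];
          std::memcpy(b, &x, sizeof x);        // real CDR also handles alignment and byte order
          data.insert(data.end(), b, b + sizeof x);
      }
  };

  // "OmniOrb style": the generated stub marshals the argument straight
  // into the request buffer.
  void marshal_direct(Buffer &buf, int32_t x) {
      buf.put_long(x);
  }

  // "MICO style": wrap the argument in a typed value object first, then
  // let the ORB core walk that object to marshal it.
  struct LongValue {                           // stand-in for an internal value object
      int32_t v;
      void marshal(Buffer &buf) const { buf.put_long(v); }
  };

  void marshal_via_object(Buffer &buf, int32_t x) {
      LongValue *val = new LongValue{x};       // extra allocation and indirection per argument
      val->marshal(buf);
      delete val;
  }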

But there is no way I could have gotten pretty much fully functional
client and server code for Perl working in 2000 lines of C with OmniOrb.
(The Perl interface is completely stubless and does everything using
DII and DSI. The client side is pretty much pure DII; the server side
gets into the internals of MICO a bit more.)
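
For the curious, a stubless DII call in the standard C++ mapping
looks roughly like this; the Perl client side ends up building
essentially this sort of Request for each invocation. (Same
hypothetical "echo" operation as above; the CORBA header name
varies from ORB to ORB.)

  #include <CORBA.h>               // ORB header; exact name depends on the ORB

  // Invoke "long echo(in long x)" on an arbitrary object reference,
  // with no compiled stub involved.
  CORBA::Long dii_echo(CORBA::Object_ptr target, CORBA::Long x)
  {
      CORBA::Request_var req = target->_request("echo");

      req->add_in_arg() <<= x;                 // marshal the single long parameter
      req->set_return_type(CORBA::_tc_long);   // tell the ORB what the result type is

      req->invoke();                           // synchronous remote (or in-process) call

      CORBA::Long result = 0;
      req->return_value() >>= result;          // extract the returned long
      return result;
  }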


But in any case, I suspect the above timings give some idea of the
regime in which CORBA operates. For inter-language or inter-process
communication, it is fine for things that happen 1 or 10 times per
user action, but not for things that happen 10000 times per user
action. The 100-1000 times per user action range is where
implementation details are going to matter.

Regards,
                                        Owen