- From: Bill Haneman <Bill Haneman Sun COM>
- To: Michael Meeks <michael ximian com>
- Cc: Rich Burridge <Rich Burridge Sun COM>, "desktop-devel-list gnome org" <desktop-devel-list gnome org>, accessibility mailing list <gnome-accessibility-devel gnome org>
- Subject: Re: [g-a-devel]Re: Proposed implementation agnostic GNOME Speech API.
- Date: Mon, 15 Dec 2003 15:29:57 +0000
Michael Meeks wrote:
> Hi Rich,
> On Mon, 2003-12-15 at 09:57, Michael Meeks wrote:
> > On Fri, 2003-12-12 at 17:47, Rich Burridge wrote:
> > > In short, I've created an implementation-agnostic set of GObject-based
> > > wrappers for the existing GNOME Speech v0.2.X API. They hide the
> > > existing Bonobo/ORBit2 implementation under the covers.
> Actually looking at the code, this doesn't seem to add a whole lot to
> me. I don't think providing a different API really hides much more of
> the implementation.
I agree with Michael here. The existing C bindings are fairly simple;
IMO the time spent writing more wrappers would be better spent writing a
few bits of sample code for the benefit of developers who aren't totally
at home with CORBA C bindings. There's really very little difference
between
GNOME_Speech_Speaker_getSupportedParameters (obj, &ev);
and
speech_speaker_get_supported_parameters (speaker);
In the case of at-spi, the cspi bindings provide lots of client-side
object caching, client-side refcounting, and other useful stuff; I
don't see a need for that in the current gnome-speech 0.2 APIs since
they are much simpler overall, and much lower-bandwidth (far less
reference counting involved).
Another thing that would help clients of gnome-speech would be a patch
to gtk-doc that could parse our IDL files to extract the existing inline
documentation.
> The bit that really needs fixing is creating a new API for system-wide
> driver instantiation, to remove the gross driver problems that exist
> currently. It seems to me you could do that pretty trivially with a
> gnome-speech server that you would activate first.
I am not sure what suggestion Michael is making here; though I think
that instead of choosing 'by name' from multiple speech engine
services, it would indeed be nice to have some API for querying engines
by capability (beyond what we have now), or for querying the 'default'
engine.
> Of course; the API as we know it is pretty noddy - but the noddier the
> better for retaining backwards compatibility - and you still have the
> same issue with whatever you wrap it in.
> Then of course there is the Java angle - the C binding doesn't make
> life any easier for Java/Python etc., which are unlikely to want to
> write extra custom bindings for gnome-speech - esp. in its
> not-uber-stable API state; CORBA/IDL decouples you from the linking
> problems there.
Yes; and DBUS+Java are a potential problem, if one considers different
back-ends. I think it's vital that both client and server be
implementable in more than just C here. Certainly that was the impetus
behind making gnome-speech IDL-based; it also needs to be
remote-callable. For instance, it would be fairly common for the speech
engine not to be co-located with the user application: the application
might be remote (so that the audio device is local), or the user might
have a high-bandwidth connection for audio and a high-quality speech
engine available on a server somewhere. This last point could be
important for voice input as well.
best regards,
- Bill
> Finally of course - I'm not totally convinced that a bus architecture
> necessarily maps that well to speech (but then I know little about
> this).
> Hmm,
> Michael.