Re: [orca-list] Punctuation, capital letters, exchange of characters and strings, generally error in the design of Orca



"WW" == Willie Walker <William Walker Sun COM> writes:

    WW> As I dig into it more, I believe the idea is that the core
    WW> server of SpeechDispatcher is (and will remain?) a C-based
    WW> service that talks to TTS engines.  It listens on sockets and
    WW> communicates with external applications like Orca via Brailcom's
    WW> SSIP protocol.

The current Speech Dispatcher does so.  The new implementation will be
written entirely in Python and will allow, in addition to SSIP, other
forms of communication.  Most notably, it will allow direct Python calls
from clients such as Orca.
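To illustrate what a direct call buys you, here is a purely hypothetical
sketch (the class and method names are invented for illustration; the
real API of the new implementation may look quite different).  The point
is that an in-process call needs no socket and no SSIP serialization:

```python
# Hypothetical sketch only: names invented for illustration, not the
# actual API of the new Speech Dispatcher.

class Dispatcher:
    """Stand-in for the new Speech Dispatcher core library."""

    def __init__(self):
        self.queue = []

    def speak(self, text, priority="text"):
        # A direct call: the message goes straight into the dispatcher's
        # queue, with no protocol round-trip in between.
        self.queue.append((priority, text))

# A client such as Orca would simply instantiate and call:
dispatcher = Dispatcher()
dispatcher.speak("Hello from Orca", priority="message")
```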

    WW> On the client side, there will be a number of language bindings
    WW> that provide convenience mechanisms for handling SSIP.

The new implementation supports SSIP, so current language bindings can
be used without change.  New features can be made accessible by
extending SSIP (and its bindings).
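For readers unfamiliar with SSIP, a minimal sketch of the client-side
framing of a SPEAK message may help.  The dot-escaping and the
terminating "." line follow SSIP's published text-protocol conventions
(borrowed from SMTP); the server's numeric replies and error handling
are omitted here:

```python
def ssip_speak_block(text):
    """Frame a message body for SSIP's SPEAK command.

    After the client sends "SPEAK", the text follows as CRLF-terminated
    lines; any line beginning with a dot is escaped by doubling the dot,
    and a line containing a single "." terminates the body.
    """
    lines = []
    for line in text.split("\n"):
        if line.startswith("."):
            # Escape a leading dot so it cannot be mistaken for the
            # end-of-data marker.
            line = "." + line
        lines.append(line)
    return "SPEAK\r\n" + "\r\n".join(lines) + "\r\n.\r\n"

frame = ssip_speak_block("Hello\n.hidden dot line")
```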

    WW> So, the Python bindings in question are really for helping
    WW> Python-based clients, such as Orca, to talk to the
    WW> SpeechDispatcher service via SSIP.  

No.  As the new Speech Dispatcher is basically a Python library, Orca
can call it directly.

    WW>   As my knowledge of SpeechDispatcher grows, so do my questions
    WW> about what is complete and what is planned.  Like the AT-SPI,
    WW> which is composed of the AT-SPI IDL, ATK, GAIL, a bridge for
    WW> ATK, a bridge for Java, C bindings for the client side, etc.,
    WW> there are a lot of components to SpeechDispatcher (TTSAPI, SSIP,
    WW> language bindings, TTS engine drivers, config files, etc.) that
    WW> make the learning curve a little steep.  Couple a lack of
    WW> knowledge about the complete SpeechDispatcher picture with the
    WW> possibility that things will be changing in the future, and it
    WW> gets a little daunting.

The first stable version of the new Speech Dispatcher is planned to be
fully feature and SSIP compatible with the current Speech Dispatcher.
From the point of view of applications, nothing will *need* to change.
From the point of view of the user, it is likely that the dotconf
configuration will be replaced by a Python-based configuration.

As for the internal architecture, there are six major components: the
message dispatching mechanism (queueing, priorities, etc.), client
interfaces (SSIP etc.), output modules (speech, braille, etc.), a common
TTS API, TTS drivers, and configuration handling.  With the exception of
configuration handling, at least something basic has already been
written for each of these parts.  Not everything is complete, fully
working, and fully documented; some things are still quite incomplete
and a lot of work remains.  But the overall architecture has already
been mostly defined in the form of existing source code.
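To make the dispatching component a little more concrete, here is a
much-simplified sketch of priority queueing.  The priority names are
SSIP's; the real dispatcher's semantics are richer (e.g. some priorities
cancel or defer other pending messages rather than just outranking
them), so take this only as an illustration of the basic idea:

```python
import heapq
import itertools

# SSIP priority levels, highest first.  Only the ordering is modelled
# here; the real interaction rules between priorities are more involved.
PRIORITIES = ["important", "message", "text", "notification", "progress"]
RANK = {name: rank for rank, name in enumerate(PRIORITIES)}

class MessageQueue:
    """Minimal sketch: higher-priority messages are spoken first,
    ties are resolved in arrival order."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: arrival order

    def push(self, text, priority="text"):
        heapq.heappush(self._heap, (RANK[priority], next(self._counter), text))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = MessageQueue()
q.push("battery low", priority="notification")
q.push("error: disk full", priority="important")
q.push("reading document", priority="text")
```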

I don't think there is anything there that brave men should be scared
of :-).  By organizing the source into separate Python libraries, the
parts should be easier to study and use.  And one needn't study all
parts of the software: client writers care only about the interfaces,
people interested in supporting TTS engines can focus just on writing
TTS drivers, and users of alternative output devices can write new
output modules.  It should all be easier than it is today, I hope.
    
    WW> I still need to do a more thorough examination of the Python
    WW> speechd bindings to see how well they map to the overall
    WW> requirements.

I don't consider it that important.  If the core Speech Dispatcher
Python libraries are flexible enough (and this is one of the design
goals), you can define any bindings you like on top of them.
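Layering bindings on top of the core can be sketched like this (again
entirely hypothetical: both class names and the `speak(text, priority)`
call are invented for illustration, assuming only that the core exposes
some such entry point):

```python
# Hypothetical sketch: a client-specific binding layered on top of a
# core dispatcher library.  All names here are invented.

class CoreDispatcher:
    """Stand-in for the core Speech Dispatcher Python library."""

    def __init__(self):
        self.log = []

    def speak(self, text, priority):
        self.log.append((priority, text))

class OrcaSpeech:
    """An Orca-flavoured convenience binding built on the core."""

    def __init__(self, core):
        self._core = core

    def say_urgent(self, text):
        # The binding just maps a convenient client-side call onto
        # the core library's more general interface.
        self._core.speak(text, priority="important")

core = CoreDispatcher()
OrcaSpeech(core).say_urgent("focus changed")
```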

Regards,

Milan Zamazal


