Re: [orca-list] plugging other language voices



Hi Halim:

Does gnome-speech support language switching
of synths without changing the locale setting?

The organization of gnome-speech is basically this:

1) There are gnome-speech drivers for a variety of speech 
   synthesis engines.  For example, there are gnome-speech
   drivers for Festival, eSpeak, DECtalk, swift, etc.

2) Each driver offers up voices.  Each voice has 
   at least the following informational attributes:
   name, language, and gender.  The language attribute
   is poorly specified, unfortunately.  

3) From a voice, you can get a handle to a speaker, which 
   is the thing used to actually generate speech.  It is
   the job of the speaker to talk to the audio device.
   That is, there are no provisions in gnome-speech for 
   the client to get to the raw audio stream of the
   synthesis engine.

4) Theoretically, an assistive technology can talk to
   multiple speakers from multiple drivers, with the main
   limitation being audio device contention between 
   multiple speech synthesis engines.
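The driver -> voice -> speaker layering described above can be sketched
with a toy model. To be clear, the class and method names below are
illustrative only, not the actual gnome-speech CORBA interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class Voice:
    # The informational attributes a gnome-speech voice offers up.
    name: str
    language: str  # loosely specified; format varies by driver
    gender: str

@dataclass
class Speaker:
    # A speaker generates speech for one voice.  It talks to the
    # audio device itself; the client never sees the raw audio
    # stream from the synthesis engine.
    voice: Voice

    def say(self, text: str) -> str:
        return f"[{self.voice.name}] {text}"

@dataclass
class Driver:
    # One driver per synthesis engine (Festival, eSpeak, ...).
    name: str
    voices: list = field(default_factory=list)

    def get_speaker(self, voice: Voice) -> Speaker:
        return Speaker(voice)

# An assistive technology can hold speakers from several drivers
# at once; contention for the audio device is the main limit.
espeak = Driver("eSpeak", [Voice("default", "en", "male")])
festival = Driver("Festival", [Voice("kal_diphone", "english", "male")])
speakers = [d.get_speaker(d.voices[0]) for d in (espeak, festival)]
```

The point of the sketch is the layering: clients never reach past a
speaker to the engine's audio output.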

One of the problems in gnome-speech is that the 'language' attribute is
not well specified.  As a result, different drivers give the client
different interpretations of language.  I suspect it was originally
intended to be something along the lines of language_REGION, such as
en_US, en_CA, and en_GB.  The execution, however, is very inconsistent
across the gnome-speech drivers.  As a result, Orca simply exposes the
language as part of the voice name, leaving the human to interpret
whatever language string gnome-speech sends.
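To illustrate Orca's approach with made-up strings (each driver really
does report language in its own format, though these exact pairs are
hypothetical), folding the raw language string into the displayed voice
name might look like:

```python
# Hypothetical (name, language) pairs as different drivers might
# report them -- note the inconsistent language formats.
raw_voices = [
    ("kal_diphone", "english"),   # a Festival-style string
    ("default", "en"),            # an eSpeak-style string
    ("Paul", "en_US"),            # a DECtalk-style string
]

# Orca's tactic: expose the language as part of the voice name and
# let the human be the interpreter.
display_names = [f"{name} ({lang})" for name, lang in raw_voices]
```

No normalization happens here; the inconsistency is passed straight
through to the user, which is exactly the workaround being described.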

The language field specification of gnome-speech probably should be
tightened up and the gnome-speech drivers modified accordingly. 
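If the spec were tightened to language_REGION, each driver could
normalize whatever its engine reports before handing it to clients.  A
rough sketch of such a normalizer follows; the alias table is entirely
hypothetical and a real one would be much larger:

```python
import re

# Hypothetical alias table from loose engine strings to
# language_REGION codes.
ALIASES = {
    "english": "en_US",
    "en": "en_US",
    "en-us": "en_US",
    "en-gb": "en_GB",
}

def normalize_language(raw: str) -> str:
    """Map a driver-reported language string to language_REGION,
    passing through anything we cannot recognize."""
    s = raw.strip()
    # Already in language_REGION form, e.g. "en_US"?
    if re.fullmatch(r"[a-z]{2}_[A-Z]{2}", s):
        return s
    return ALIASES.get(s.lower().replace("_", "-"), s)

# normalize_language("english") -> "en_US"
# normalize_language("en-GB")   -> "en_GB"
```

Unrecognized strings fall through unchanged, so a driver adopting this
could still expose odd values rather than silently dropping voices.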

Will




