Re: [orca-list] Patch: Experimental eSpeak support using python-espeak

Before I reply to Kendell's message, I would like to ask folks in this community to please consider breaking
their text up into paragraphs a little more. I know it's easy to forget about when using speech only, and it's
not something you need to think about when reviewing your message with speech, but it does make it easier
for others to reply to specific parts of your message. For anybody reading the list visually, I am sure it
would make things a little easier too.

With that said...

On Tue, Aug 25, 2015 at 09:22:31AM AEST, kendell clark wrote:
> This is definitely true. The problem is, I don't think anyone is
> working on speech-dispatcher's ALSA code. Luke has been saying he
> needs to fix it for a while, but he keeps not doing it, citing lack
> of motivation and not understanding the ALSA code very well.

It's also due to a planned refactor of audio in Speech Dispatcher, for various reasons: one is tighter
integration with PulseAudio, another is getting things in place to allow clients to directly receive spoken
text as audio if desired. Yes, ALSA will still be supported, but it makes more sense to properly fix it up
once the audio refactor is complete. That said, if someone knows the ALSA API well enough to fix things up
now, patches are of course welcome.

For now, there is libao and its ALSA support, although I've personally found that this method introduces an
unpleasant amount of latency. If you must have ALSA only, then that's your best option for now.
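
If anyone wants to try the libao route, the settings look something like this (file paths may differ by
distro); note that the choice of libao backend lives in libao's own config, not in Speech Dispatcher's:

    # In ~/.config/speech-dispatcher/speechd.conf (or /etc/speech-dispatcher/speechd.conf):
    # hand audio to libao instead of Speech Dispatcher talking to ALSA directly
    AudioOutputMethod "libao"

    # In ~/.libao (or /etc/libao.conf):
    # tell libao to use its ALSA backend
    default_driver=alsa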

> As for variants, it makes no difference to me whether speech-dispatcher does
> it or espeak does it, so long as they work, which they do now. I'd
> still like a variants combo box at some point, but it's not such an
> important feature; I can use the current system with little trouble.
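
For anyone following along who hasn't used them, variants in eSpeak are alternative voice presets applied on
top of a base voice; with the command-line tool you select one by appending it to the voice name, e.g.:

    # speak with the English voice plus the "f3" (female 3) variant
    espeak -v en+f3 "hello world"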

There was a small discussion on the Speech Dispatcher list a while back about supporting this; have a look at
http://lists.freebsoft.org/pipermail/speechd/2015q2/004767.html. As outlined in that thread, this cannot be
done yet, due to the need for a better configuration system. The plan is to use GSettings, and with that we
can store synth-specific settings on a per-client basis, so that Orca, for example, doesn't need to know
about every synthesizer-specific feature of every supported synth a user might want to use.
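
To give a rough idea of the sort of thing GSettings makes possible (nothing has been designed yet, so the
schema, path, and key names here are entirely made up), a relocatable schema could be instantiated once per
client and module, holding settings like a variant:

    # hypothetical relocatable schema and path; none of these names exist yet
    gsettings set \
        org.freebsoft.speechd.Module:/org/freebsoft/speechd/clients/orca/espeak/ \
        variant 'f3'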

> The only way I can see a direct espeak module being useful is if we
> did something similar to what NVDA does and built a compiled-in eSpeak
> into Orca, which means it can speak on systems where there are no
> installed synths. But this is probably a lot harder to do in Linux
> land, and not really all that useful, seeing as Orca pulls in
> speech-dispatcher and speech-dispatcher pulls in espeak, so the only
> way to have Orca with no synths is to remove those packages, which
> would render Orca unable to speak.

Distros generally frown on bundling libraries in a package. Even in Debian/Ubuntu, espeak uses sonic as a
shared library from the libsonic package, rather than its own internal copy. On Windows, and even on OS X,
it's common practice to ship libraries with your app that are not otherwise available on a default system
install, but that doesn't fly on Linux.

Luke
