Re: [orca-list] punctuation pronunciation and pauses



Hello,

Willie Walker  wrote:
OK - it's probably something in the Orca module for speech dispatcher
that's not using chnames.py.  Let's just leave it at that for now.  We
can roll this into the other speech-related thread happening on this
list.

This is what the Brailcom guys, myself and some others have been trying to explain for quite some time. When the speech-dispatcher backend is used, Orca does no character verbalization itself; it relies on the synth or the speech-dispatcher output module to do the work, depending on the synth's capabilities.
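To make that concrete, here is a minimal sketch of what such delegation looks like from the client side, assuming the speechd Python client shipped with speech-dispatcher and its char() call (the client name "orca-example" is just illustrative):

    import speechd

    # Open a connection to speech-dispatcher; the client name is arbitrary.
    client = speechd.SSIPClient("orca-example")

    # Ask speech-dispatcher to speak a single character. The output module
    # and the synth decide how it is actually verbalized; the client does
    # not need a character name table of its own for this.
    client.char("?")

    client.close()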

With gnome-speech, character verbalization is done in Orca's gnome-speech backend (according to your comments; I have never looked into it), and before I started to complain, character verbalization was also being done in Orca itself (e.g. in the default.py script).

I think it would be good if a core Orca developer, or anyone who knows the Orca code very well, could check and correct this. Definitely, no character verbalization should take place in Orca's speech backends at all, except in the speechCharacter method. Instead, that method should be called from all the places where, based on context, you are sure you are working with a single character. Additionally, the end user should be able to turn Orca's character verbalization on or off, and/or there should be some smart profiling system where predefined behaviour can be scripted for each synth. This way, TTS-specific markup can be used to pronounce or spell characters where possible, and Orca's own verbalization would only take place when it is certain that the chosen synth or output module cannot do it.
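To illustrate the idea (this is only a rough sketch with made-up names, not Orca's actual API), the backend-side logic could look roughly like this:

    class SpeechServer:
        """Hypothetical backend sketch: character verbalization happens
        only in speechCharacter(), and only when the synth cannot do it
        itself and the user has it enabled."""

        def __init__(self, synthSpellsCharacters, userVerbalization=True):
            self.synthSpellsCharacters = synthSpellsCharacters
            self.userVerbalization = userVerbalization

        def speak(self, text):
            # Ordinary text is passed through untouched.
            print("SPEAK:", text)

        def speechCharacter(self, character):
            if self.synthSpellsCharacters or not self.userVerbalization:
                # Let the synth/output module (or TTS-specific markup)
                # handle it, as speech-dispatcher already does.
                self.speak(character)
            else:
                # Fall back to a local name table such as chnames.py.
                names = {" ": "space", ".": "dot", "?": "question mark"}
                self.speak(names.get(character, character))

Callers such as default.py would then call speechCharacter() only in places where the context guarantees a single character, and would never do any verbalization of their own.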

I would be happy to receive corrections from Mr. Cerha, Mr. Hanke, Mr. Zamazal or Mr. Buchal where appropriate, but there is definitely a need to move forward. For about a month or two now we have been in a "let's do something constructive" state, so I believe this is a good starting point for something constructive, and I hope it makes sense to you all.

Please avoid arguments like "in the Windows world it never worked like this and everyone is happy". Yes, of course, but in the Windows world accessibility has not been as well maintained as in GNOME. Even now, when the revolutionary IAccessible2 is out, there are still companies who don't care. Everyone can write their own commercial solution, and in most cases they are not trying to cooperate; more often than not they are actually competing. If the GNOME people can take accessibility seriously, then they will surely be able to do text-to-speech at the same level.

greetings

Peter
