Re: [orca-list] Alsa-OSS Politics?



On Sat, 2007-11-03 at 09:45 +0100, Jan Buchal wrote:
"LY" == Luke Yelavich <themuso themuso com> writes:

    LY> On Fri, Nov 02, 2007 at 01:09:38PM EDT, Michael Whapples wrote:
    >> And I am thinking, I don't know why distros now don't use
    >> speech-dispatcher by default rather than gnome-speech, as that
    >> solves the alsa and oss issues by handling the audio itself and
    >> so can use either alsa or oss (as the user prefers) so long as
    >> the synth allows the controlling app to get the audio. Now that
    >> speech-dispatcher has the espeak specific module (rather than the
    >> generic one) and orca has the speech-dispatcher backend I would
    >> not dream of going back to gnome-speech.

    LY> The one problem with speech-dispatcher, is that you have to set
    LY> the sound card output for the synth in question at the system
    LY> level. it is currently not possible to do this per user, amking
    LY> this option unusable in multi-user setups.

    LY> If this was addressed, I would certainly consider
    LY> speech-dispatcher for Ubuntu.
That is not correct. Please read the speech-dispatcher documentation. Speech
Dispatcher can run under any user, and that user can set any of its options.
I am not sure whether you are right, Luke, or you, Jan, but I was under the same impression as Luke.
I understood that any option set in a module-specific configuration file is system-wide (e.g.
/etc/speech-dispatcher/modules/espeak.conf sets the same audio output for espeak regardless of the user or
client using speech-dispatcher), and that if one user changes it, it changes for all users of the
system. In my case (and probably most) this might not be a concern, but it could be on a multi-user system
where different users want different audio devices used for speech output.
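To make the system-wide setting I mean concrete, here is a sketch of what such a module configuration might look like. The option names AudioOutputMethod and AudioALSADevice are my assumptions from reading the documentation and may differ between speech-dispatcher versions; the point is only that the file lives under /etc and applies to everyone:

```
# /etc/speech-dispatcher/modules/espeak.conf (illustrative)
# As I understand it, anything set here applies to every user and
# every client on the system; there is no per-user override for it.
AudioOutputMethod "alsa"
AudioALSADevice "default"
```

Whether a per-user copy of such a file is honoured is exactly the point Luke and Jan seem to disagree on.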

If Luke is right, then either you would need a user-specific set of
modules (which would be difficult, as speech-dispatcher loads all modules
at start-up), or the audio output setting would need to move from the
module configuration to somewhere clients can modify it. In the case of
generic modules, that would also require a basic player (e.g. named
spd-play) which, rather than producing audio directly, sends it to the
speech-dispatcher audio system, allowing generic modules to meet this
requirement as well.
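As a sketch of the spd-play idea: spd-play is purely my hypothetical name from above, and while GenericExecuteSynth is (as I understand it) the real option generic modules use to run a synth command, the exact command line here is illustrative only:

```
# /etc/speech-dispatcher/modules/my-generic.conf (illustrative)
# Instead of letting the synth write to the sound card itself, pipe
# its audio into the hypothetical spd-play, which would hand the
# samples to speech-dispatcher's own audio subsystem, so the user's
# (rather than the system's) audio settings could apply.
GenericExecuteSynth "my_synth --stdout \'$DATA\' | spd-play"
```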

On the other hand, Luke, how does gnome-speech meet this requirement? I
thought gnome-speech did not manage the audio, so it could not control
this. Or does it send to the user's default device (since gnome-speech
launches the synth as that user)?

From
Michael Whapples

Have a nice day




