Re: [g-a-devel] Speech-dispatcher/orca integration specification, first draft.
- From: Jason White <jason jasonjgw net>
- To: gnome-accessibility-devel gnome org
- Subject: Re: [g-a-devel] Speech-dispatcher/orca integration specification, first draft.
- Date: Wed, 25 Mar 2009 12:47:31 +1100
Eitan Isaacson <eitan ascender com> wrote:
> I think this debate goes beyond accessibility requirements and into the
> "old" and "new" way of doing stuff. I miss having absolute control over
> the network with ifconfig, but NetworkManager is very appealing to users
> who never considered Linux before, not to mention it saves a lot of
> hassle in a wifi world.
Some of us choose not to install NetworkManager for exactly that reason: loss
of control. The same holds for PulseAudio.
I think there is a divide, of sorts, between those Linux distributions which
install these components by default and integrate them tightly into the
dependency graph of the desktop environment, and those which don't. Debian,
for example, makes it easy not to install NetworkManager or PulseAudio, and to
configure your audio devices and network interfaces with whatever tools you
prefer.
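For instance, a static stanza in /etc/network/interfaces is all that Debian's
ifupdown needs; the interface name and addresses below are purely
illustrative:

    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1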
Considering these issues from a broader perspective, I think it is important
to make it easy for the user to opt in to or out of such tools, just as it is
possible to choose different desktop environments or shells.
>
> I believe the best solution to the current cacophony is to think how we
> fit in to this new world of system/session separation. One wild proposal
> might be to have a system-wide speech service that is only used in
> console mode (and only the current console user would have privilege to
> use it), and would not hog the sound device when not speaking. When the
> user is in-session, the speech service Orca would use would be a session
> service that would take advantage of the session's PulseAudio instance.
> I didn't put much thought into that, I am just trying to think of the
> console/desktop paradigm, and how we could make the auditory output work
> with the same paradigm that the visual output does.
I like the idea. However, it raises the question of whether there are
circumstances in which a background session should be able to interrupt what
is happening in the current console to inject a spoken message, which would of
course complicate the model somewhat.
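For what it's worth, SSIP already defines message priorities (IMPORTANT,
MESSAGE, TEXT, NOTIFICATION, PROGRESS), which might give us a vocabulary for
deciding when a background session may break in. Here is a rough sketch using
the python-speechd bindings; I have deliberately left server selection to the
library's defaults, since routing between a hypothetical console-level service
and a session service is precisely the unresolved part:

    # Rough sketch, assuming the python-speechd bindings that ship with
    # Speech Dispatcher. Which server (console-level or session-level) the
    # client ends up talking to is left to the library's defaults.
    import speechd

    client = speechd.SSIPClient('example-client')

    # A background session wanting to break into the current console could
    # conceivably send at IMPORTANT priority...
    client.set_priority(speechd.Priority.IMPORTANT)
    client.speak('Battery critically low.')

    # ...while routine output stays at TEXT and never preempts anything.
    client.set_priority(speechd.Priority.TEXT)
    client.speak('Ordinary announcement.')

    client.close()

Whether the console-level service should honour IMPORTANT messages from a
session other than the active one is, I suppose, the policy question.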