Re: Promoting idea of getting Ubuntu to adapt to users' accessibility preferences...



Dear Eric,

Thanks again for sharing your thoughts.  I've replied to your comments on the Ubuntu Brainstorm site: http://brainstorm.ubuntu.com/idea/20263/

Here's the message I left:

You raise a very interesting question: what happens when, using the Web-4-All system, you plug your card in and the machine doesn't have the accessibility software you need? What happens, as you said, when the system lacks the speech recognition software you need to use it? 

I like your idea of having portable applications. I would be very interested to see how this could all integrate and work. 

But I still believe there is value in implementing a Web-4-All type system. The three reasons I think this are: 

1. Certain users just need to customize system settings: 

Aren't there certain users who simply need to customize the system settings to make the system more usable? For example: 

- Users with a visual impairment (low vision): 

Set system settings to increase the font size and switch to high-contrast mode. 

- Users with an auditory impairment: 

Set system settings so that auditory notifications are displayed visually. 
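The settings changes above can be sketched as shell commands on a GNOME desktop. This is a hedged sketch: the `gsettings` tool and the schema keys shown are from current GNOME releases (older releases used `gconftool-2` instead), and the exact keys may differ by version.

```shell
#!/bin/sh
# Sketch: apply basic accessibility preferences via GNOME settings.
# Keys are from current GNOME schemas and may vary between versions.

# Low vision: larger fonts and a high-contrast theme.
gsettings set org.gnome.desktop.interface text-scaling-factor 1.5
gsettings set org.gnome.desktop.a11y.interface high-contrast true

# Auditory impairment: flash the window/screen instead of beeping.
gsettings set org.gnome.desktop.wm.preferences visual-bell true
```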

2. Customize/use bundled software as a fallback: 

I believe one of the core ideas behind Web-4-All is to abstract out the kinds of accessibility settings you would like to set. So you're not just passing settings for your preferred accessibility application, but also general, application-agnostic settings. 

For example, GNU/Linux, Windows, and Mac OS X all come bundled with certain accessibility applications (screen magnifiers, screen readers, etc.). I know the quality of bundled software may vary in comparison to applications developed by third-party providers, but hopefully, if you pass generic settings to the system, it will be able to transform itself to become usable. 
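The fallback idea above can be sketched in Python: generic preferences travel with the user, and the host maps them onto whatever it actually has installed. Note this is an illustration, not the real Web-4-All schema — the preference keys, the `HOST_AT` inventory, and the choice of Orca as the bundled screen reader are all assumptions for the example.

```python
import json

# Hypothetical generic preference set, of the kind a Web-4-All style
# smart card might carry (keys are illustrative, not the real schema).
PORTABLE_PREFS = json.dumps({
    "font-scale": 1.5,
    "high-contrast": True,
    "screen-reader": True,
})

# What this host happens to have installed; bundled tools are fallbacks.
HOST_AT = {"screen-reader": ["orca"]}

def apply_prefs(prefs_json, host_at):
    """Map generic preferences onto whatever this host can provide."""
    prefs = json.loads(prefs_json)
    actions = []
    if "font-scale" in prefs:
        # Generic setting: any desktop can scale fonts.
        actions.append(("set", "text-scaling-factor", prefs["font-scale"]))
    if prefs.get("high-contrast"):
        actions.append(("set", "high-contrast", True))
    if prefs.get("screen-reader"):
        # Use the host's bundled screen reader if one exists.
        candidates = host_at.get("screen-reader", [])
        actions.append(("launch", candidates[0] if candidates else None))
    return actions

actions = apply_prefs(PORTABLE_PREFS, HOST_AT)
```

The point of the sketch is the indirection: the card never names a specific product, so the same preferences degrade gracefully to whatever bundled software the machine ships with.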

3. Use Web-4-All to launch portable apps: 

Down the track, could the Web-4-All system be used to pass settings to, or launch, portable apps?

Kind regards,
Scott.

On Fri, Jul 31, 2009 at 5:53 AM, Eric S. Johansson <esj harvee org> wrote:
Brian Cameron wrote:

> This does seem like an interesting idea.  To expand upon it, I think
> GNOME also needs a solution that works more generally.
>
> There has been talk of enhancing gnome-settings-daemon so that it is
> possible for users to hit particular keybindings or other sorts of
> gestures (e.g. mouse gestures) to launch AT programs.  This would
> allow a user to launch the on-screen-keyboard, text-to-speech, or
> magnifier by completing the appropriate gesture (e.g. keypress or
> mouse gesture).
>
> I would think that using a specific smart card or USB stick is another
> form of "gesture" that would also be good for launching AT programs.
> However, wouldn't it be better to come up with a solution that would
> support all of these sorts of "gestures" in one place?
>
> Providing a solution that can recognize different sorts of gestures
> (perhaps configurable so users can define their own sorts of gestures -
> perhaps with other unique hardware based solutions - like pressing a
> button on their braille display) seems a way to go about implementing
> your idea and also supporting other mechanisms that could be used to
> launch AT programs as needed.

as I added as a counter proposal

"""
It is unrealistic to expect all machines a user uses to have accessibility
software. There may be multiple reasons for this, ranging from administrative
overhead to licensing issues to interference with normal operation. Adopting
the perspective that the user interface moves with the user, not the machine,
opens up new possibilities for widely available accessibility. By associating
the user interface software (speech recognition, text-to-speech, various dog and
pony tricks, etc.) with the user, the impact on the general machine is lessened,
administrative costs are lowered, licensing issues are reduced or eliminated,
and the user has increased control over the software they need to function.

This can be implemented today using virtual machine technology and relatively
minimal bridge software, making the accessibility software interface visible on
the host and enabling interaction between the application and the accessibility
software."""

The Web-4-All model doesn't address something I consider a fundamental flaw of
accessibility technology: I should be able to use any machine I have access to.
I shouldn't have to wait for an administrator or buy a new license just because
I'm using a new machine, whether it be for a lifetime or just a few minutes. I
should be able to plug in, click a few icons, and start working. After all,
that's what the keyboard and mouse let everyone else do. Why put any further
barriers in front of disabled people?

I believe the future of accessibility will start with putting accessibility
tools on a netbook and connecting that netbook to other systems on demand. I
believe this because if you give me an accessibility interface, you control how
I use the computer. If you give me an API and a remote accessibility toolkit, I
can control how I use any computer.

Yes, I'm a wee bit cranky about this, because I've spent the past 15 years
watching speech-driven user interfaces get almost no support, and I'm now seeing
speech recognition on Linux (NaturallySpeaking on Wine) sit at the cusp of being
usable by disabled people while getting no traction with the developer community.

--
Ubuntu-accessibility mailing list
Ubuntu-accessibility lists ubuntu com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility
