On 3/21/2018 3:03 PM, Mats L wrote:
Eric,
It's always very good to have a person speaking on behalf of
himself or herself as a user representing an actual need.
There is a lack of this for people with mobility-based
access problems in these free software forums, compared
to the area of low or no vision. Part of the problem is the
really wide and diverse range of needs regarding physical
access - disabilities as well as abilities. (Regarding
cognitive disabilities and needs, there is a general lack of
people who are at all able and interested in speaking and doing
things on behalf of those needs in the GNU/Linux world.) So
thanks for stepping in! I'll be looking at your ideas with
great interest.
I also think part of the challenge with providing mobility-related
accessibility features is that those of us who are technical enough
to understand what to do no longer have hands to make it happen, so
we count on people like you to work with us on building prototype
features. I did that with togglename when I had enough money to pay
somebody to write code for me.
But remember that speech input is a dead end for a large share
of users with mobility-based access problems: those who have
impaired or no speech.
Yes. Speech recognition is useful only if the person has a
functioning vocal apparatus. I'm reminded of this every time I get a
cold. :-)
Eye-gaze input is another hot area where it seems
unrealistic to expect any decently competitive and
user-friendly solutions in the free software domain in any
near future.
This said, I think it's very good to have Alex ask these
questions about what's available. A decent awareness of the
state of the art is always a necessary starting point for
improvement. And people have difficulties even finding their way
to existing solutions. Things like decent head-tracking,
on-screen keyboards (OSKs), etc. are really important to have
available, and are life savers for some users, even though there
is huge potential for improvement.
This is a place where a foundation could really help. As you point
out, all the accessibility features we have are really important and
can mean the difference between watching the clock tick for the rest
of your life and being able to participate in society at some level.
I wish there were some foundation money to help us build new
interfaces, not accessibility tools but different interfaces for
using tools like speech recognition, eye tracking, head tracking, etc.
to operate in larger chunks rather than emulating the fine-grained
motions of a mouse or keyboard.
One thing that frustrates me is the sustained
tendency toward unnecessary fragmentation and lack of collaboration,
even in this area of handling basic accessibility needs. Why
don't, for example, the people involved in maintaining and
developing Caribou and Onboard team up and unite on one common
OSK with a wider range of functionality and options - for all
GNU/Linux distros and flavours, and with support from them?
Dasher is really an example of
the kind of needs-based, unorthodox and innovative solutions
for text input that you were asking for. Have you tried it? As
a second-best option, compared to excellent speech recognition, I
think it could be relevant for you. But I guess we now also have a
problem with continued maintenance there, since David MacKay so
unfortunately passed away.
Maintaining decent accessibility for all in an ever-changing
ICT universe is not an easy task, and particularly
not on the free software platforms, it seems, so far ...
You've hit on a really big issue. Too much fragmentation, not enough
concentrated support to solve the problem once and use the solution
everywhere. Dasher doesn't work for me because I can't move the mouse
fast enough or accurately enough to pick off letters, and I'm terrible
at spelling. By the way, that's a serious side effect of using speech
recognition. Your ability to spell degrades...
For a while, I was going to the local a11y meetup, and when I
described my issues, I got back a bunch of blank looks. These people
had no idea how to deal with accessibility needs like mine. Part of
the challenge of using speech recognition is not just the speech
recognition and the application modifications; it's that using
speech recognition in an open-plan office is kind of counterproductive
to other people's work. It's about as easy to relax and speak as it
would be to try to take a pee in a bucket in an open office.
So if you want, I'll be glad to keep chiming in and be as
constructive as possible. If someone feels like trying to prototype
fitting the Dragon browser extensions into something like Electron,
I'd be glad to work with them.
--- eric