Re: Accessibility for a person with a motor disability



Eric,

It's always very good to have a person speaking on his or her own behalf as a user representing an actual need. There is a lack of this for people with mobility-based access problems in these free-software forums, compared to the area of low or no vision. Part of the problem is the really wide and diverse range of needs regarding physical access, disabilities as well as abilities. (Regarding cognitive disabilities and needs, there is a general lack of people in the GNU/Linux world able and interested in speaking and acting on behalf of those needs at all.) So thanks for stepping in! I'll be looking at your ideas with great interest.

I can agree with some of what you're saying, but not with all of it. I can to a great extent understand and share your frustration and dissatisfaction with the situation. I agree with, and can confirm, your picture of the situation regarding speech recognition in GNU/Linux based environments. We don't even have decent text-to-speech solutions for wider user needs (even in English and other major languages, let alone in smaller languages; see for example https://opensource.com/life/15/8/interview-ken-starks-texas-linux-fest). But remember that speech input is a dead end for a large share of users with mobility-based access problems: those who have impaired or no speech.

Eye-gaze input is another hot area where it seems unrealistic to expect any decently competitive and user-friendly solutions in the free-software domain in the near future.

That said, I think it's very good to have Alex ask these questions about what's available. A decent awareness of the state of the art is always a necessary starting point for improvement, and people have difficulties even finding their way to existing solutions. Things like decent head-tracking, on-screen keyboards (OSKs) etc. are really important to have available, and are life savers for some users, even though there is huge potential for improvement.

One thing that frustrates me is the sustained tendency towards unnecessary fragmentation and lack of collaboration, even in this area of handling basic accessibility needs. Why don't the people involved in maintaining and developing Caribou and Onboard, for example, team up and unite behind one common OSK with a wider range of functionality and options, for all GNU/Linux distros and flavours, and with support from them?

Dasher is a real example of the kind of needs-based, unorthodox and innovative solutions for text input that you were asking for. Have you tried it? As a second-best option, compared to excellent speech recognition, I think it could be relevant for you. But I guess we now also have a problem with its continued maintenance, since David MacKay so unfortunately passed away.

Maintaining decent accessibility for all in an ever-changing ICT universe is not an easy task, and particularly not on the free-software platforms, it seems, so far ...

Mats

Mats Lundälv



2018-03-21 17:35 GMT+01:00 Eric Johansson <esj eggo org>:


On 3/21/2018 11:30 AM, Alex ARNAUD wrote:
> On 21/03/2018 at 15:27, Eric Johansson wrote:
>> On 3/20/2018 5:35 AM, Alex ARNAUD wrote:
>>>
>>> What is, as far as you know, the most efficient way to write text
>>> with head-tracking software?
>> I'm frustrated by this kind of question because frequently, this is the
>> wrong question. You should be asking what the appropriate interface is
>> to enable the person with a disability to write, and more importantly,
>> edit text. Much of this thread has been proposing answers based on
>> what's available, not what the person needs.
>
> I understand what you mean. I just don't know what people with motor
> disabilities need. I'm trying to understand what is available, and
> I'll check with an association what users actually use in practice.
> I'm at the first step of a long road.
We need more than just an accessibility tool; we need a different way of
accessing functionality and data embedded in applications. I've been
trying for years to figure out how to write code by speech, and here's
the current state of my thinking. I wrote it up as a proposal to GitHub
for a talk at GitHub Universe.

https://docs.google.com/document/d/1M14DEoC2uTWtQv1HtRyUwK5NKT6Wb0vutu98F9Yl1b0/edit?usp=sharing

It just occurred to me that another example of building your own
interface is what I'm doing right now: extracting bank statements to
give to my accountant for tax prep. When you download a statement, my
bank labels every statement PDF.pdf. Yeah, I was thinking the same
thing. So I built a grammar so that I can say "statement in June" and
it creates a file name of "1234-2018-06.pdf". I still have to display
the PDF and then click the download button before I can get to the
point where I need to enter a filename, but being able to generate
filenames by speech makes it much easier.
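
(A purely illustrative sketch, not the actual grammar from the mail:
something like this could be written with the Dragonfly Python toolkit,
which is commonly used to script NaturallySpeaking on Windows. The
account prefix "1234" and the year are taken from the example above;
all other names are assumed.)

# Sketch only: a Dragonfly grammar that turns the spoken command
# "statement in <month>" into a typed file name.
from dragonfly import Grammar, CompoundRule, Choice, Text

MONTHS = {
    "january": "01", "february": "02", "march": "03",
    "april": "04", "may": "05", "june": "06",
    "july": "07", "august": "08", "september": "09",
    "october": "10", "november": "11", "december": "12",
}

class StatementRule(CompoundRule):
    spec = "statement in <month>"       # what the user says
    extras = [Choice("month", MONTHS)]  # maps the spoken month to "06" etc.

    def _process_recognition(self, node, extras):
        # Type the generated file name into the focused save dialog.
        Text("1234-2018-%s.pdf" % extras["month"]).execute()

grammar = Grammar("bank statements")
grammar.add_rule(StatementRule())
grammar.load()

Saying "statement in June" with a save dialog focused would then type
"1234-2018-06.pdf" for you.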

>
>> I can't use keyboards much because of a repetitive stress injury. I
>> would say that the most efficient way to write text with a head tracking
>> software is to not even try at all. It's the wrong tool. For many kinds
>> of mobility-based disabilities (RSI, arthritis, amputation etc.) speech
>> recognition would be a better tool.
>
> Which tool are you using on your GNU/Linux distribution for doing
> speech recognition ?

I'm not using a GNU/Linux distribution because, well, people have
promised speech recognition on Linux for as long as I've been disabled
and it just hasn't happened. What I use is Windows with
NaturallySpeaking and whatever hacks I can get to drive free software.
I'm missing tons of functionality that's present in NaturallySpeaking
plus Word (e.g. Select-and-Say and easy misrecognition correction), but
I do what I can.

I think it's safe to assume that we will not see speech recognition on
Linux in the near future. There are at least half a dozen projects I
can name off the top of my head that were going to provide speech
recognition on Linux "any day now". If you're going to use speech
recognition today, the recognition environment must be available now.

The question then becomes what we can do if we put speech recognition
"in a separate machine" like a VM or an Android phone. The idea is to
isolate the nonfree components so that a disabled person can make a
living, participate online etc. using a mostly free environment. I
propose this because the assumption that every machine should be
equipped with the accessibility tools the user needs raises the cost of
accessibility and limits the disabled user to just one machine that has
been customized for them. If, on the other hand, we put the
accessibility interface in a separate box like a smartphone and provide
a gateway to drive applications, then many more machines could be made
accessible at very low overhead.
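
(Again purely as an illustrative sketch of such a gateway, with
everything beyond what the mail describes assumed: say the recognizer
on the phone or VM sends recognized text as UTF-8 lines over TCP, and a
small listener on the free-software machine types each line into the
focused window with the xdotool CLI under X11. The port and wire format
here are invented for illustration.)

# Sketch only: the free-software side of a hypothetical speech gateway.
# Assumes recognized text arrives as UTF-8 lines over TCP and that the
# X11 utility xdotool is installed to inject the text as keystrokes.
import socket
import subprocess

HOST, PORT = "0.0.0.0", 8484  # hypothetical port

def inject(text):
    # Type the recognized text into the currently focused window.
    subprocess.run(["xdotool", "type", "--", text], check=True)

def main():
    with socket.create_server((HOST, PORT)) as server:
        while True:
            conn, _addr = server.accept()
            with conn, conn.makefile("r", encoding="utf-8") as lines:
                for line in lines:
                    inject(line.rstrip("\n"))

if __name__ == "__main__":
    main()

The recognizer side would then just open a TCP connection and write
each recognized utterance as a line; any machine running such a
listener becomes speech-accessible without carrying the nonfree
recognition stack itself.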

Remember what I said about solving the disabled person's needs? As I
said to one free software advocate: take care of the needs of the
disabled person first, to make them as independent as possible, to earn
a living, to live a life. Advocate free software second, if it fits
their needs. I know this is not a popular attitude in some circles but,
quite frankly, if I had to wait for speech recognition from the free
software community, I would be living on disability, wasting my life,
because I wouldn't be able to work, I wouldn't be able to go to school,
and I just can't tell you how many things you lose when your hands
don't work right.







