Re: RFC: keyboard/mouse usage for accessibility?

I'm a speech recognition user and, quite frankly, almost all the
accessibility features people have put in place do not work with speech
recognition. How can we get our needs injected into the conversation?

Regarding your paper on accessibility input: you've missed the use case
of people who cannot use a keyboard or a mouse. It doesn't have to be a
profound disability; it can be something like RSI, neuropathy, or
arthritis in the hands. Most of the people I know with this kind of
disability (including myself) have only 500 keystrokes or mouse
movements per day in their hands. Speech recognition is essential to
take the load off their upper extremities.

Navigation shortcuts are a disaster for speech recognition users. If
focus is lost, or a recognition event fires while you're in an
application with single-key shortcuts, the words you say are transformed
into commands. Imagine using Thunderbird or some other application with
keyboard shortcuts while your cat walks across the keyboard. For bonus
points, tell me what happened and how to recover.

Trying to impose an aural environment on top of a GUI is painful at
best. Remember that speech interfaces are shallow but wide, while GUI
interfaces are narrow and deep. However, if people insist on going down
the path of layering a speech interface on top of a GUI, the speech
recognition environment needs to know the entire GUI hierarchy in order
to construct a grammar that lets you get to the endpoint without
walking the full tree. For example, in Google Docs, if you're in a
numbered list and right-click, there is a menu item called "Restart
numbering". I should be able to just say "restart numbering" and have it
perform the whole "right-click, hunt through the menu, click the item"
action. But if you're not in a numbered list, that option doesn't exist,
so the grammar element should either be unavailable or yield an alert
saying "can't say that here".
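
To make that concrete, here is a rough Python sketch of what I mean.
None of these names come from any real toolkit; it just assumes you can
enumerate the actions reachable in the current context and rebuild the
grammar from them whenever the context changes:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class UIAction:
        label: str                     # menu/button label, e.g. "Restart numbering"
        activate: Callable[[], None]   # performs the full navigate-and-click sequence

    def build_grammar(actions: List[UIAction]) -> Dict[str, Callable[[], None]]:
        # Map each label that is valid *right now* to its activation, so the
        # user can say the label instead of steering menus by voice.
        return {a.label.lower(): a.activate for a in actions}

    def on_utterance(phrase: str, grammar: Dict[str, Callable[[], None]]) -> None:
        action = grammar.get(phrase.lower())
        if action is not None:
            action()   # one spoken phrase replaces the whole menu hunt
        else:
            print('can\'t say "%s" here' % phrase)

    # Pretend we are inside a numbered list, so this action is reachable.
    actions = [UIAction("Restart numbering", lambda: print("numbering restarted"))]
    grammar = build_grammar(actions)
    on_utterance("restart numbering", grammar)  # -> numbering restarted
    on_utterance("insert table", grammar)       # -> can't say "insert table" here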

A variation on this theme: every icon or link on the screen should have
a name that you can say, and the speech recognition environment should
be able to query the application for all of those names.
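
Something like the sketch below, where "Widget" is just a stand-in for
whatever node type the application's accessibility tree actually
exposes. The point is that every actionable element needs a speakable
name, and the speech environment must be able to enumerate them:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Widget:
        name: str                  # the accessible, speakable name ("" if missing)
        actionable: bool = False   # can this element be clicked or activated?
        children: List["Widget"] = field(default_factory=list)

    def speakable_names(root: Widget) -> List[str]:
        # Collect the names a user can say right now; an unnamed actionable
        # widget is an accessibility bug, because there is nothing to say.
        names: List[str] = []
        stack = [root]
        while stack:
            w = stack.pop()
            if w.actionable and w.name:
                names.append(w.name)
            elif w.actionable:
                print("warning: actionable widget with no speakable name")
            stack.extend(w.children)
        return names

    # The recognizer loads these names as its active vocabulary.
    root = Widget("window", children=[
        Widget("Compose", actionable=True),
        Widget("", actionable=True),   # an unlabeled icon: nothing to say
    ])
    print(speakable_names(root))       # -> ['Compose'], plus a warning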

Another thing to pay attention to is selecting a region. There is no
way to successfully click and drag the mouse using speech, or if you are
visually impaired. You need to move to the Emacs mark-and-point model
for easier selection. As a side note, mark and point is an easier way of
selecting a region for almost anybody with hands older than 35.
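
Here is a sketch of mark and point as discrete voice commands (invented
names, not any real editor's API). The key property is that there is no
press-drag-release gesture anywhere; every step is a separate,
repeatable command:

    from typing import Optional

    class Buffer:
        def __init__(self, text: str) -> None:
            self.text = text
            self.point = 0                  # current cursor position
            self.mark: Optional[int] = None

        def set_mark(self) -> None:
            # Voice command "set mark": remember where the region starts.
            self.mark = self.point

        def move_point(self, offset: int) -> None:
            # Any "move" command ("forward five words", "next line", ...)
            # ultimately just moves point; no button is ever held down.
            self.point = max(0, min(len(self.text), self.point + offset))

        def region(self) -> str:
            # The selection is simply the text between mark and point.
            if self.mark is None:
                return ""
            start, end = sorted((self.mark, self.point))
            return self.text[start:end]

    buf = Buffer("speech recognition needs mark and point")
    buf.move_point(7)    # "forward seven characters"
    buf.set_mark()       # "set mark"
    buf.move_point(11)   # "forward eleven characters"
    print(buf.region())  # -> recognition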

Another way to achieve everything you want is for applications to
provide an accessibility API that sidesteps the GUI entirely: an API
that provides all the functionality necessary for any GUI, speech, or
text-to-speech front end. If the API is done right, the GUI itself
could be built on top of it.
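
A sketch of that shape, with invented names: every user-visible
operation lives in the API, and the GUI, a speech front end, and a
screen reader are all just alternative clients of it:

    from abc import ABC, abstractmethod

    class DocumentAPI(ABC):
        """Everything a front end (GUI, speech, text-to-speech) might need,
        with no assumption that a pointer or keyboard exists."""

        @abstractmethod
        def restart_numbering(self) -> None: ...

        @abstractmethod
        def select(self, start: int, end: int) -> None: ...

        @abstractmethod
        def describe_selection(self) -> str: ...

    class SpeechFrontEnd:
        """Translates recognized phrases straight into API calls -- no menu
        walking, no synthesized mouse or keyboard events."""

        def __init__(self, doc: DocumentAPI) -> None:
            self.doc = doc

        def handle(self, phrase: str) -> None:
            if phrase == "restart numbering":
                self.doc.restart_numbering()

    # A GUI toolkit would implement DocumentAPI and wire its menu items to
    # the same methods, so the speech and GUI front ends can never drift apart.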

On 3/18/2019 1:16 PM, Samuel Thibault wrote:
Hello,

There has been some discussion about how input will be handled in
Wayland, which is important for the availability of accessibility
features.  Currently, a lot of them are implemented with event snooping
and synthesis, which is frowned upon for various reasons (an open door
to key logging, inefficiency, etc.), and thus they are currently not
even available on Wayland, so we need to rethink this.  One thing we
need to determine is the actual use cases, since those are the eventual
goal, and from there we can discuss with the Wayland and GTK people how
to achieve them appropriately.

I started to collect what I could think of on

https://www.freedesktop.org/wiki/Accessibility/Input/

Could people have a look and tell me what I have missed there?

Samuel