efficient osk - was GNOME A11y: where do we need to improve?
- From: Francesco Fumanti <francesco fumanti gmx net>
- To: gnome-accessibility-list gnome org
- Cc: Willie Walker <William Walker Sun COM>, Peter Korn <Peter Korn Sun COM>
- Subject: efficient osk - was GNOME A11y: where do we need to improve?
- Date: Tue, 22 Jan 2008 20:37:10 +0100
Hello,
First of all, thanks for the replies.
At 2:38 PM +0000 1/21/08, Steve Lee wrote:
> > There is gok, which seems to be rather targeted at users that can not
> > efficiently use the pointer. It has word completion without word
> > prediction. The keyboard is not resizable,...
> Without wanting to answer for the GOK team, I think I heard there may
> have once been an investigation into prediction.
> What features are needed that GOK does not have in its main OSK? Is it
> simply the way it starts up, and would having a simple OSK appear be
> what you prefer?
It may well be that I do not know how to use gok
to its full potential. The first disturbing issues
that come to mind:
- It does not float; when I click on another window, that window covers gok
- The composer (osk) disappears automatically and
the user has to reopen it and resize it to their
liking.
- As far as I could see, it has autopunctuation,
but not word prediction, only word completion
But instead of dissecting gok, maybe I should
list the main features that I would like to see
(they are also on the GetInvolved page):
- The onscreen keyboard should emulate all the
functions of a hardware keyboard: for example,
pressing shift on the onscreen keyboard should
produce the same effect (for example in
conjunction with mouse operations) as pressing
shift on the hardware keyboard
- It should have a good word prediction engine
(not only word completion), with a learning mode
that can easily be activated and deactivated (I
don't want my passwords to be in the
dictionaries). Maybe a button directly on the osk.
For example, on my current onscreen keyboard, when I
type "ons" it proposes, among other words, the word
"onscreen" (word completion), and immediately
afterwards (without my typing anything) it proposes
the word "keyboard" (word prediction).
- It should have an interface to edit the
dictionaries; for example to enter a new word.
- It should be easy to switch language; think of
people who speak several languages (they write
English in a forum, then write an email in their
native language to a friend, then go back to the
forum,...). The onscreen keyboard I am using now
has a popup on the osk to activate and deactivate
dictionaries. In fact it has two popups: one for
the dictionaries supplied by the software and one
for the dictionaries created by the user; indeed
it is possible to activate multiple dictionaries
simultaneously. (ok: multiple active dictionaries
might not be so important as long as it is easy
to make additions to the dictionary.)
- Strongly prioritise recently used words. I
think regardless of how the prediction engine
determines what words to propose, it should
prioritise the words that were recently used, as
they have a greater chance to be reused in the
particular context about which the user is
writing at that moment.
- Autopunctuation: for example, automatically type a
space after selecting a predicted word; when
typing a dot, automatically remove the space,
write the dot, automatically write a space and
automatically activate the shift key;... The
activation and deactivation of the
autopunctuation should be directly clickable on
the onscreen keyboard, because the user often has
to toggle its state (it gets in the way for urls, in
the terminal,...).
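To make the completion/prediction distinction and the recency idea concrete, here is a minimal sketch of the kind of engine I mean. It is entirely my own illustration (no existing OSK works exactly like this): a word list for completion, a simple bigram model for prediction, and recently used words ranked first.

```python
from collections import Counter, defaultdict


class Predictor:
    """Toy engine: completion from a word list, next-word prediction
    from bigrams, with recently used words ranked first.
    Illustration only, not taken from any real OSK."""

    def __init__(self):
        self.unigrams = Counter()            # word -> frequency
        self.bigrams = defaultdict(Counter)  # word -> Counter(next word)
        self.recency = {}                    # word -> last-use tick
        self.tick = 0

    def learn(self, text):
        """Learning mode: feed typed text into the dictionaries.
        (This is what you would switch OFF while typing passwords.)"""
        prev = None
        for w in text.lower().split():
            self.tick += 1
            self.unigrams[w] += 1
            self.recency[w] = self.tick
            if prev is not None:
                self.bigrams[prev][w] += 1
            prev = w

    def _rank(self, candidates):
        # Sort by (last use, frequency) so recent words come first.
        return sorted(candidates,
                      key=lambda w: (self.recency.get(w, 0),
                                     self.unigrams[w]),
                      reverse=True)

    def complete(self, prefix, n=3):
        """Word completion: 'ons' -> 'onscreen', ..."""
        cands = [w for w in self.unigrams if w.startswith(prefix.lower())]
        return self._rank(cands)[:n]

    def predict(self, last_word, n=3):
        """Word prediction: after 'onscreen', propose 'keyboard', ..."""
        return self._rank(self.bigrams[last_word.lower()])[:n]
```

With this sketch, after learning some text containing "onscreen keyboard", `complete("ons")` proposes "onscreen" and `predict("onscreen")` then proposes "keyboard" without any further typing, mirroring the behaviour described above.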
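The autopunctuation behaviour in the last point could be sketched like this (again a toy illustration of my own; the class and method names are invented): a text buffer plus an on/off toggle that the OSK would expose as a button.

```python
class AutoPunctuation:
    """Toy model of the autopunctuation behaviour described above.
    Illustration only; names and details are invented."""

    def __init__(self):
        self.enabled = True     # toggled directly from the OSK
        self.buffer = ""
        self.shift_armed = False

    def toggle(self):
        """E.g. switch off before typing a URL or in a terminal."""
        self.enabled = not self.enabled

    def select_word(self, word):
        """Selecting a predicted word appends it, plus an automatic
        space; a pending shift capitalises it."""
        if self.shift_armed:
            word = word.capitalize()
            self.shift_armed = False
        self.buffer += word
        if self.enabled:
            self.buffer += " "

    def type_char(self, ch):
        """Typing '.' removes the automatic space, writes '. ',
        and arms shift for the next word."""
        if self.enabled and ch == ".":
            self.buffer = self.buffer.rstrip(" ") + ". "
            self.shift_armed = True
        else:
            self.buffer += ch
```

Selecting "hello", "world", typing ".", then selecting "next" yields the buffer "hello world. Next " with no manual spacing or shifting; with the toggle off, "http://example" followed by "." stays untouched, which is exactly why the toggle needs to sit directly on the keyboard.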
At 11:32 AM -0800 1/21/08, Peter Korn wrote:
> I think the best place to start is for you to
> describe the feature(s) you like and use in the
> commercial product(s), and let that lead a
> discussion of how best to achieve those results.
> The more detailed you can be (both in the
> description of the feature(s), and in describing
> how they help you and aid
> efficiency/productivity), the better.
If anybody has comments to the above points or
would like to add further points, they will
really be welcome.
I just found another point about the arrangement
of the keys on the onscreen keyboard. Indeed, as
far as I know, the usual layouts (qwerty?) were
designed for technical reasons to maximize the
distance between two subsequent letters; for
an onscreen keyboard that distance should be
minimized to type faster:
http://www.almaden.ibm.com/u/zhai/ATOMIK.htm
http://1src.com/freeware/index.php?cid=52
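The point about key distance can be quantified: given key positions and how often each letter pair occurs, one can compute the expected pointer travel per keystroke, which an OSK-optimised layout like ATOMIK tries to minimise. A toy calculation (the coordinates and digram counts here are made up by me, purely for illustration):

```python
from math import hypot


def expected_travel(layout, digram_counts):
    """Average pointer distance per letter-to-letter transition,
    weighted by how often each pair occurs. Lower means less
    travel, hence faster dwell/pointer typing."""
    total = sum(digram_counts.values())
    dist = 0.0
    for (a, b), count in digram_counts.items():
        (x1, y1), (x2, y2) = layout[a], layout[b]
        dist += hypot(x2 - x1, y2 - y1) * count
    return dist / total


# Made-up example: 'h' -> 'e' is frequent, so a layout that puts
# them adjacent scores better than one that separates them.
digrams = {("h", "e"): 30, ("e", "r"): 10}
far_apart = {"h": (0, 0), "e": (5, 0), "r": (6, 0)}
adjacent = {"h": (0, 0), "e": (1, 0), "r": (2, 0)}
```

Here `expected_travel(adjacent, digrams)` comes out well below `expected_travel(far_apart, digrams)`, which is the effect such layouts exploit.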
At 11:32 AM -0800 1/21/08, Peter Korn wrote:
> It may be that OnBoard fits this bill reasonably
> well. But I'm not familiar enough with the
> motivations behind OnBoard to say whether it is
> expressly targeting the population you describe.
I think part of the reason for the creation of
onboard (initially called sok) can be found in
the summary and rationale of its specification
page:
https://wiki.ubuntu.com/Accessibility/Specs/SOK
And probably in the review of gok:
https://wiki.ubuntu.com/Accessibility/Reviews/GOK
Regarding "start it and it is ready to use",
onboard fits the bill; but it does not (yet) have
efficiency enhancement features like word
prediction... (They are in the spec, but
postponed.)
Should onboard be enhanced, or should gok be
improved (for example the setup wizard idea,
concentrating on use cases instead of technical
aspects; refining the composer; the core pointer
issue,...)?
At 2:38 PM +0000 1/21/08, Steve Lee wrote:
> No but I have seen demos of head tracking with simple webcams without
> reflective dots on foreheads if that is what you mean?
There are other models that don't use dots, but
other "references", like the HeadMaster from PRC (if
I remember correctly, it uses ultrasound); that
is why I tried to keep it more general with the
term "reference". But you are right, various
models use reflective dots.
> There has been
> discussion of webcam-based eyetracking over at www.oatsoft.org,
> including combining it with headtracking to improve accuracy.
Two moving references; this surprises me: I
assumed that eyetracking required a resting head.
At 2:38 PM +0000 1/21/08, Steve Lee wrote:
> AFAIK the keyboard standards are to ensure that application authors
> add complete and standardised keyboard access as it is their job to do
> so. For pointer access I think there is less of an issue as it is
> generally predefined by the OS / UI toolkits used and extra accessibility
> options like dwell click are features of the system or device drivers. So
> there should be operating/window system guidelines rather than
> application guidelines. There is a need for guidelines for custom
> widget use of the pointer.
At 11:32 AM -0800 1/21/08, Peter Korn wrote:
> There is a very fuzzy line between what apps do
> for accessibility in and of themselves, and what
> they do via the use of an assistive technology
> application. For a variety of reasons, on
> Windows & in GNOME & in KDE, we have drawn that
> line such that keyboard-only operation is
> handled by the apps themselves, while mouse-only
> is done via desktop AT. Of course,
> "keyboard-only" is only about "full use of
> keyboard-only". We bring in AT with things like
> StickyKeys, MouseKeys, etc. On Macintosh, "full
> use of keyboard-only" requires desktop AT
> (VoiceOver) to give you all of the functionality
> you have in Windows & GNOME & in KDE. Which
> goes to the point that the line is fuzzy in
> desktop computing.
> For essentially these historical and "current
> state of the art" reasons, I think it is best to
> solve full use by mouse only via add-on (though
> perhaps built-in to GNOME) AT which accomplishes
> that task, and utilizes the AT-SPI standard for
> driving apps where needed (e.g. with the
> AccessibleSelection, AccessibleAction,
> AccessibleValue, etc. interfaces).
And the add-on ATs are for example an osk,
mousetweaks to replace the mouse buttons, mouse
control provided by gok for switch users,...
Thanks to both of you for the explanations.
At 11:32 AM -0800 1/21/08, Peter Korn wrote:
> This <python bindings> library is being shared
> by a number of tools, and if GOK were ported to
> Python, it could very easily move to pyatspi (in
> fact, it would be the natural thing to use)
Are you talking about porting most (if not all) features of gok to Python?
By the way, there are a few words concerning
switch input in the specifications of sok. But do
not get me wrong: this does not mean that I am
suggesting to use sok/onboard as a basis; I don't
have the required technical background for
suggesting it.
At 11:32 AM -0800 1/21/08, Peter Korn wrote:
> There are a number of use cases in which
> per-application on-screen keyboard behavior
> would be very useful.
I don't know whether it would really make sense
to have a variable keyboard for the use case that
I have in mind (a user without problems moving
the pointer). It would require more learning from
the user. But maybe I am misunderstanding you: what
behavior are you talking about?
At 11:32 AM -0800 1/21/08, Peter Korn wrote:
> Let me add that the Inference folks at Cambridge
> (who brought Dasher to the world) are working on
> the OpenGazer project (see
> http://www.inference.phy.cam.ac.uk/opengazer/),
> which uses a "£50 Logitech QuickCam Pro 4000" to
> drive eye tracking (with somewhat limited
> resolution), which in turn can drive Dasher or
> some other on-screen keyboard. This is work I
> would very much like to see expanded and refined.
I agree that it has to be refined: I don't think
that a resolution of 16x16 as indicated on the
linked page is accurate enough. (In fact, that
page says that it is accurate enough for Dasher).
I don't think that it would satisfy me. (I am
currently using a Headpointer with a dot.)
Regards
Francesco
PS: I have not named the exact AT that I am using
because I don't know whether it is good practice
to name commercial items in a free software list.