[orca-list] In-process solutions (Re: Links List - firefox extension)
- From: Willie Walker <William Walker Sun COM>
- To: Rich Caloggero <rjc MIT EDU>
- Cc: orca-list <orca-list gnome org>
- Subject: [orca-list] In-process solutions (Re: Links List - firefox extension)
- Date: Thu, 05 Jun 2008 11:04:45 -0400
> > suppose someone wanted to write a keyboard navigation extension for
> > Firefox to do better handling of caret and structural navigation than
> > we can do in Orca. Do you have an idea of how difficult/complex this
> > might be, and do you know of someone who might want to do it (and get
> > paid for it)?
> Well, to some extent, FireVox implements this now:
> http://firevox.clcworld.net/
Ha - very interesting. I really tried to encourage the FireVox author
to give up the self-voicing chase and to focus instead on compelling
keyboard navigation. With that, we might have ended up with a good
navigation solution that would benefit many users. Charles was very
interested in exploring the self-voicing space as much as he could,
however, and I respect his choice to do so. He did a good job
modularizing the code, though, and the navigation logic seems to be
isolated in the "Utils" package:
http://clc4tts.clcworld.net/clc-utils_doc.html
It seems as though the FireVox navigation code is available under
GPLv2.1. I'm not sure exactly what that means (I'm not a lawyer), but
it might be possible to start with it. Charles is a good guy, too, so
I'm sure he'd be willing to accommodate issues with forking or modifying
the code.
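As an aside, the kind of structural navigation primitive being discussed
can be sketched in a few lines. This is a hypothetical illustration, not
code from FireVox or Orca: assuming we already have a sorted array of
document offsets for the landmarks of interest (headings, say), jumping
to the next or previous one from the caret position is just a search:

```javascript
// Hypothetical sketch of one structural-navigation primitive:
// jump to the next landmark (e.g. heading) after the caret.
// `offsets` is a sorted array of landmark positions in the document;
// `caret` is the current caret position.
function nextLandmark(offsets, caret) {
  // Binary search for the first offset strictly greater than caret.
  let lo = 0, hi = offsets.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (offsets[mid] <= caret) lo = mid + 1;
    else hi = mid;
  }
  return lo < offsets.length ? offsets[lo] : null; // null: nothing ahead
}

function prevLandmark(offsets, caret) {
  // First offset strictly less than caret, scanning from the end.
  for (let i = offsets.length - 1; i >= 0; i--) {
    if (offsets[i] < caret) return offsets[i];
  }
  return null; // nothing behind the caret
}
```

The hard part in practice, of course, is not this search but building and
maintaining the offset list as the page changes.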
Will
<note philosophical="on">
By the way, don't think that I'm against self-voicing solutions. I
think they can be very interesting things, especially because they help
keep processing inside the application and can also get around trying to
jam an application's interaction model into an API.
The main problems with the current approaches to self-voicing, however,
are that each application has to support some sort of in-process
plug-in, the plug-in generally has to be written in whatever programming
language the application supports, and the user interaction model
provided by the self-voicing solution tends to differ from application
to application. The current approaches also tend to require very
specific knowledge of an application's internals. I think that's why we
see so few self-voicing solutions.
An in-process/self-voicing accessibility framework might be interesting
to pursue at some point, but for now we have good infrastructure
(AT-SPI) that works pretty well. It also avoids the problems with
in-process solutions that I mention above.
</note>