Re: [orca-list] Orca touch screen support and gestures



I honestly do not have anything of substance to add to what you just said, but I would like to say that I 
fully support your idea and I think it is a great one. Please contact me if and when you start working on 
this project, and I will help in any way that I can.

Thank you,

-Reece O’Bryan
C: (502)-827-3724
1645 Parkway, Sevierville, TN 37862

On Feb 19, 2021, at 7:52 AM, Rich Morin <rdm cfcl com> wrote:

tl;dr: I have some partly-baked ideas on this topic, and I seem to recall that I've posted about them 
before.

It has been a while, so maybe folks will forgive some repeated hand-waving and rampant speculation.  (You 
have been warned: SciFi ahead!)  Incidentally, I've spent the last several months moving about 1000 miles 
(from the SF Bay Area to the Kitsap Peninsula, west of Seattle).  This may help to explain why I don't 
actually have any code or even a running web site to show off.  Oh well; it is what it is...

Anyway, my particular interest is in adding screen reader support to open source cell phone operating 
systems (e.g., /e/, postmarketOS, Ubuntu Touch).  There are more than a billion old, cheap cell phones that 
can't run the vendor's current OS.  Wouldn't it be lovely to repurpose these as communication and computing 
devices (e.g., notetakers) for blind and visually impaired users?  Yes, they might need to learn some 
command-line magic, but most of that could be hidden under a menu system for use by the timid...

I realize that it's possible to use (for example) a Bluetooth keyboard, but that's yet another item a blind 
user would have to obtain, set up, keep charged, and carry around.  This is hard enough in the first world; 
it's a total showstopper for a poor person in the developing world.  Voice recognition is also cool, but 
it requires either a powerful local processor, cloud support, or both...

So, I think it would be way cool to support Braille Screen Input (BSI) on cell phones that are running some 
Linux variant.  Because BSI needs to handle a variety of gestures along with dot accumulation, the code 
would need a convenient way to define new gestures.  Ideally, these definitions would live in data 
structures, rather than code, because most users would not be programmers, let alone interested in 
diving into the software.  If a user wants to define a new gesture, they should be encouraged to try (and 
share) it!

This means we'll need an embedded Domain Specific Language (DSL), using a language such as TOML 
(https://github.com/toml-lang/toml) as the basis.  I have a small set of gestures that I already know I'll 
need to support, but I'd be happy to hear about others.  Feel free to contact me (off-list); I'll summarize 
and offer proposed syntax if I come up with anything...
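To make that concrete, here's a rough sketch of what such a TOML-based gesture definition might look 
like.  Every key and value below (fingers, motion, action, and so on) is something I just made up for 
illustration, not a settled schema:

    # Hypothetical gesture tables; all key names are placeholders.
    [[gesture]]
    name      = "two_finger_swipe_left"
    fingers   = 2
    motion    = "swipe"
    direction = "left"
    action    = "previous_item"

    [[gesture]]
    name      = "triple_tap"
    fingers   = 1
    motion    = "tap"
    count     = 3
    action    = "activate"

The attraction of something like this is that a user could add or tweak an entry with any text editor and 
share the resulting file, without ever touching the recognizer code.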

Inquiring Minds may be curious about my implementation notions.  My preference would be to base this code 
on the Elixir programming language, which is a spin-off of Erlang.  Erlang is used to run about half of the 
world's telephone "switches" (i.e., call routing systems).  Its VM has strong support for concurrency, 
distribution, and reliability.  (It's not considered acceptable for systems carrying zillions of calls to 
crash!)  Elixir adds Ruby-like syntax, Lisp-like macros, and a number of other nifty enhancements.

All of this is very nice and quite useful, but largely irrelevant to my proposed project's requirements.  
Rather, I'm attracted by the Erlang (i.e., Actor) programming model, in which large numbers (sometimes 
millions) of lightweight processes cooperate by means of asynchronous messages.  In my proposed design, 
there would be one or more of these processes tasked with recognizing each gesture.  Whichever recognizer 
(or set of recognizers) is satisfied that it has detected a gesture would announce that to the user (and 
to the other recognizer processes).  This approach decomposes the problem very cleanly, so adding a new 
recognizer isn't (as) likely to get in the way of old ones.
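
For the curious, here's a minimal Elixir sketch of that decomposition, assuming a stream of {:touch, ...} 
messages; the module name, event shape, and matching function are all invented for illustration:

    # Each recognizer runs as its own lightweight process, accumulating
    # touch events until its (caller-supplied) matching function says
    # that a gesture has been seen.
    defmodule Recognizer do
      def start(gesture_name, match_fun, listener) do
        spawn(fn -> loop(gesture_name, match_fun, listener, []) end)
      end

      defp loop(name, match_fun, listener, events) do
        receive do
          {:touch, event} ->
            events = [event | events]
            if match_fun.(events) do
              # Announce the detected gesture asynchronously, then reset.
              send(listener, {:gesture, name})
              loop(name, match_fun, listener, [])
            else
              loop(name, match_fun, listener, events)
            end
        end
      end
    end

    # Toy usage: a recognizer that fires after any three touch events.
    pid = Recognizer.start(:triple_tap, fn evts -> length(evts) >= 3 end, self())
    for _ <- 1..3, do: send(pid, {:touch, %{x: 0, y: 0}})
    receive do
      {:gesture, name} -> IO.puts("detected #{name}")
    end

Because each gesture lives in its own process with its own state, adding a recognizer is just a matter of 
starting another process; the existing ones never see its internals.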

My plan, once I find the time to get back to coding, is to load Elixir and some relevant libraries (e.g., 
for touch screen support) onto a couple of cell phones I purchased a while back for this purpose.  Once I 
have the code in a state where it is able to recognize gestures, I'll worry about interfacing it with 
screen readers such as Orca, terminal emulators, etc.  A long way off, to be sure, but it's useful to have 
a goal in mind...

Anyway, I hope some of the folks here found this speculation interesting.  As a sighted developer, I'm not 
in the usual open source position of "scratching my own itch", so helpful comments and suggestions are all 
welcome.  (ducks :-)

-r

On Feb 19, 2021, at 01:00, Joanmarie Diggs <jdiggs igalia com> wrote:

I'm afraid Orca does not have any touch-specific support, and I believe something would need to be added 
to AT-SPI2 in order for it to be added. Sorry!

--joanie

On Fri, 2021-02-19 at 02:42 -0600, Steve Decker via orca-list wrote:

... I'm trying to find info on how much support Orca has for touch-only devices. Is there any special 
configuration needed? Are there any built-in gestures? How similarly would it behave when compared to 
screen readers on iOS, Android, or other platforms? I appreciate any help, documentation, or direction 
anyone can share.


