Re: A Terrible Problem with accessibility of GNOME



I definitely think image recognition has improved a lot, both in speed
and accuracy. However, even a difference like 50 milliseconds may be
noticeable to an experienced screen reader user, especially if one
uses speech at 400 words per minute or more. This is one of the
reasons why many blind users (including myself) still prefer the
text-mode console over a graphical terminal for command-line work. The
graphical terminal is certainly very usable and works well for some
scenarios, but there is quite a noticeable difference in performance
when using the text-mode console.
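
To put that in perspective, a rough back-of-the-envelope calculation
(a purely illustrative Python snippet):

    # At 400 words per minute, the average spoken word lasts:
    ms_per_word = 60_000 / 400          # = 150 ms per word
    # A 50 ms delay is therefore about a third of a word's duration:
    print(f"{ms_per_word} ms per word; a 50 ms delay is "
          f"{50 / ms_per_word:.0%} of one word")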

Regards,

Rynhardt

On Sat, May 29, 2021 at 9:48 AM Matan Safriel <dev matan gmail com> wrote:

Of course, the approach whereby the application developers interleave semantics about the content, beyond 
the particular way it is laid out in 2D, as you mention, is very sensible and robust, as long as GUI 
toolkits and development processes enable it as a default part of development, and application developers 
and designers actually put that extra semantic information in; for example, pointing out that a grid in a 
particular case actually bears plain list semantics, with no special significance to its being a grid.
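
As an illustration of what that could look like at the toolkit level, here is a minimal GTK4 sketch 
(Python via PyGObject); the grid contents are made up for the example, but the idea is that the widget is 
laid out as a grid while announcing itself to assistive technologies as a plain list:

    import gi
    gi.require_version("Gtk", "4.0")
    from gi.repository import Gtk

    # Visually a grid, but exposed to assistive technologies as a
    # plain list; in GTK4 the accessible role is a construct-time
    # widget property.
    grid = Gtk.Grid(accessible_role=Gtk.AccessibleRole.LIST)
    for i in range(6):
        item = Gtk.Label(label=f"Recommendation {i + 1}",
                         accessible_role=Gtk.AccessibleRole.LIST_ITEM)
        grid.attach(item, i % 3, i // 3, 1, 1)  # column, row, width, height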

Do bear in mind that the potential for robust text reading from images has significantly improved since the 
time that the current accessibility paradigm took over, so I am not sure I see a reason it would be "always 
slower" and the like; such affirmations are probably no longer true today, if one embarked on a machine 
learning project on this.

On Sat, May 29, 2021 at 9:52 AM Rynhardt Kruger via gnome-accessibility-list <gnome-accessibility-list 
gnome org> wrote:

I agree with you that it would be a useful fallback. It would never be a primary solution though, since it 
is essentially screen-scraping, and would have the same disadvantages as the screen-scraping approaches 
that were used before accessibility APIs:


* Accessibility APIs make it the app developer's responsibility to implement proper accessibility; this 
is by design. App developers know not just their app, but also the content associated with it, and 
therefore can implement an accessible experience that may be different from the visual layout and yet more 
efficient for AT users. An example is the list of recommendations on YouTube: visually they are in a grid, 
but the screen reader sees them as a list of headings.
* One of the most important requirements for a screen reader is responsiveness. The quicker a blind user 
knows about an update in the interface, the better; even a slight delay before an announcement may result 
in an interface that feels sluggish. This is why blind users often use speech at a very fast speaking 
rate. A pattern-based approach will always be slower than just reading the state via the accessibility 
API (see the sketch after this list).
* It is dependent on the visual layout, which means accessibility may break just because an app got new 
icons, or a few components shifted position. Such changes are independent of the accessibility API.
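
As a rough sketch of what "reading the state" means in practice, here is a minimal example using the 
pyatspi bindings (assuming they are installed); it simply walks the AT-SPI tree and prints each 
accessible's role and name:

    import pyatspi

    # Walk the AT-SPI accessibility tree and print each object's role
    # and name -- the kind of direct state query a screen reader makes,
    # with no image analysis involved.
    def dump(accessible, depth=0):
        print("  " * depth + f"{accessible.getRoleName()}: {accessible.name!r}")
        for child in accessible:
            dump(child, depth + 1)

    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        dump(app)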


Still, it may be a useful approach to consider for special edge cases, and I am interested to see what 
happens in this space.


Regards,


Rynhardt


On Fri, 28 May 2021, 20:10 Matan Safriel via gnome-accessibility-list, <gnome-accessibility-list gnome 
org> wrote:

Hi Shadyar,

Not an immediate solution at all, but I would say that AI (machine learning) which snapshots the screen 
or a window, extracts the text from the snapshot image, and then reads it aloud, might be superior to 
legacy accessibility API paradigms, which rely on the application developers to interleave "accessibility" 
information (ARIA etc.) in each and every field.

Or at least, as an augmentation, it should be able to provide a really great fallback to any ARIA-like 
paradigm.
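
As a very rough sketch of such a pipeline, assuming the Tesseract OCR engine with its pytesseract bindings 
and the speechd client for Speech Dispatcher are installed, and with "screenshot.png" standing in as a 
placeholder for a freshly captured window snapshot:

    import pytesseract
    import speechd
    from PIL import Image

    # OCR a snapshot of the screen or a window ("screenshot.png" is a
    # placeholder for a real capture), then read the result aloud via
    # Speech Dispatcher.
    text = pytesseract.image_to_string(Image.open("screenshot.png"))

    client = speechd.SSIPClient("ocr-reader")
    client.speak(text)
    client.close()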

It would be a project, sure, but it is really quite achievable in this day and age.

Hopefully one day our desktops will be more fluid than only providing voice services on top of a graphical 
interaction interface, but a lot can be done until then by leveraging computer vision AI in this space. 
Sorry again that this is not an immediate solution.

Matan

On Fri, May 28, 2021 at 4:04 AM Shadyar Khodayari via gnome-accessibility-list <gnome-accessibility-list 
gnome org> wrote:

Hello
I'm a blind computer engineer and developer, and I'm thoroughly familiar with the Windows OS and the NVDA
screen reader.
I recently installed Ubuntu Linux 20.04 and am using Orca.
I read the Accessibility section of the Ubuntu documentation as well as
the Orca documentation.
After logging in:
1. When I am at the desktop, through either pressing Super+D or
holding Alt+Ctrl and pressing Tab, and next pressing the arrow keys or
Tab, Orca does not read the desktop icons.
2. When I open the Settings window, I press Tab numerous times, but
Orca does not read the Settings categories like Wireless, Bluetooth,
etc. in the window. It seems focus never moves to this part of the window.
3. When I open a window like the Files application or the trash, I press
Tab numerous times, but Orca does not read the main part of the window.
It seems focus never moves to this part of the window.
4. Should I do a specific configuration on GNOME?
5. Should I install another desktop environment?
I would appreciate it if you could help me.
Thanks and Regards
Shadyar Khodayari

