Re: trying to get started with pyatspi

On 18/11/15 19:29, Eric S. Johansson wrote:
----- Original Message -----
Most GNOME applications register, as they are GTK-based (or Clutter-based,
like the desktop). Which applications in particular are not registering in
your case?
 Chrome,

Chrome support for Linux/AT-SPI2 is a work in progress right now. There is an ongoing thread on the Orca list related to that:
https://mail.gnome.org/archives/orca-list/2015-November/msg00300.html


 Mozilla.

Mozilla Firefox has proper support for Linux/AT-SPI2, but it is still based on the old GNOME 2 support. On GNOME 3 applications, accessibility is on by default; on GNOME 2 applications you need to enable it explicitly. In any case, if you just want a quick test, AFAIR it is enough to ensure that GTK_MODULES is set like this:
GTK_MODULES=gail:atk-bridge
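
If it helps for checking that step, a quick way to see which applications have actually registered on the accessibility bus is to walk the desktop from pyatspi. A minimal sketch, assuming pyatspi2 is installed and the at-spi2 registry is running:

import pyatspi

# Each child of the desktop accessible is one application that has
# registered itself on the accessibility bus.
desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    if app is not None:
        print(app.name)

If Firefox (or Chrome) does not show up in that list, registration itself is the missing piece, independently of any event handling.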

I think I can work with event-based triggers. Most of the community
speech extensions to Dragon are level triggered, not event triggered.
Could you elaborate on level triggered vs. event triggered?
Level triggered means that an action happens when a signal is in a particular state. Edge triggered means an action happens when a signal makes the transition from one state to another, i.e. gains focus or loses focus. It's far easier to convert event triggered to level triggered than the other way around.

Ok, thanks for the explanation.
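
Just to map your terms onto pyatspi: the edge-triggered style corresponds directly to the event listeners the registry already provides, while a level-triggered view has to be derived by querying the state set on demand. A rough sketch of both (the callback and helper names are only illustrative):

import pyatspi

# Edge triggered: react to the focus transition itself.
def on_focus_change(event):
    # detail1 is 1 when the state is gained, 0 when it is lost
    if event.detail1:
        print(event.source.name, 'gained focus')
    else:
        print(event.source.name, 'lost focus')

pyatspi.Registry.registerEventListener(on_focus_change,
                                       'object:state-changed:focused')

# Level triggered: ask for the current state whenever the client needs it.
def is_focused(accessible):
    return accessible.getState().contains(pyatspi.STATE_FOCUSED)

pyatspi.Registry.start()  # blocks, dispatching events to the listener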

This is a free software project, mostly run on volunteer time. So it is
true that, as in several cases the volunteer time comes from developers
interested in a specific dependency, the support is somewhat biased
towards that specific dependency. We had some interesting conversations
with developers from Simon [1] in the past, who pointed out some places
where at-spi2 could improve, but unfortunately someone also needs to do
the real work. If you think that people who depend on speech recognition
need extra support in at-spi2, we are all ears for suggestions and
patches.
Here's the Catch-22. Accessibility is so bad, I have a hard time keeping up with my workload. If I don't keep up with my workload, I can't keep the bills paid. If the bills aren't paid, I can't afford to volunteer on anything, let alone an open source project. But the Catch-22 goes deeper. Trying to program by speech using a simple environment such as Python and Emacs is moderately painful. Trying to program in one of the C-derived languages, with lots of modem line noise (i.e. braces and special punctuation with spacing requirements), is f*****g impossible. At least impossible with the current state of accessibility and toolkits. I have some interesting models for programming by speech, covering both editing and code creation [1]. And the Catch-22 goes further: because I don't have easy access to tools that would let me build accessibility features into editing environments and make it easier to write code, I can't create accessibility enhancements for toolkits like AT-SPI.

Yes, there is an inherent vicious circle here.

Yes, I am probably being overly sarcastic, and if you want to ding me for it,

I'm not sure why you concluded that I was dinging you in my previous email. Like you, I was just explaining part of the (sad) reality of at-spi2, both in the APIs and in the implementations. I have also mentioned it when introducing GNOME accessibility in some presentations: there is a clear bias towards screen reader support, because the best supported and maintained AT is a screen reader, Orca. Or, in other words, the most demanding client is Orca. We are aware that the situation is not ideal, and we would welcome suggestions and patches coming from the needs of other ATs. But, as you said, that is the vicious circle you explained before.

I won't deny I deserve it. Having dealt with acquiring a disability [2], I sometimes get a bit pissy when I get into these Catch-22 situations. Interestingly, my disability has given me one advantage, which is that I can identify and come up with remediations for user interface failures far better than most people. I just need to find a way to monetize it. :-)

An important thing to remember about crips like me is that yes, I get pissy about accessibility problems, but if I have the tools, or the people who can wield the tools, I can design a solution that gets a disabled person out of the stuck spot, with the added benefit that the system is easier for everyone to use. For example, to solve the problem of any disabled person being able to use any machine, instead of having to load every machine up with accessibility software, I built the speech bridge program as a model for a generalized solution: a personal box with accessibility tools that connects to any other box. As soon as I can afford a Windows tablet that can run NaturallySpeaking, I'm going to expand on the base I'm using right now. The more I play with this model, the more I believe that accessibility features should travel with the person, not the machine they're using.

I have a prototype of a tool which uses a spreadsheet as a repository of information. With a very simple grammar, one can address any cell and either fetch or store its contents. It may not sound like a lot, but when it comes to filling in web forms, if you can build a spreadsheet either by direct data entry or from calculated data, it's a lifesaver. I had to manage entering 15 new people into half a dozen disparate web interfaces, and with this tool I was able to get the work done in a fraction of the time it would take somebody with working hands, let alone the time it would take me to do it by hand myself.
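
Roughly, the fetch-into-a-form half of that idea could look like the sketch below; the CSV layout and helper names are made up, and it simply pushes one value into whichever field currently has keyboard focus via AT-SPI's EditableText interface:

import csv
import pyatspi

def load_rows(path):
    # The repository: one row per person, one column per form field.
    with open(path, newline='') as f:
        return list(csv.DictReader(f))

def find_focused():
    # Walk every registered application looking for the focused object.
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        if app is None:
            continue
        focused = pyatspi.findDescendant(
            app, lambda acc: acc.getState().contains(pyatspi.STATE_FOCUSED))
        if focused is not None:
            return focused
    return None

def fill_focused_field(value):
    # Write a cell's value into the focused field, if it is editable.
    target = find_focused()
    if target is None:
        return False
    try:
        target.queryEditableText().setTextContents(value)
        return True
    except NotImplementedError:
        return False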

There are tons of other potentially useful accessibility models in my head [4], but I can't get to them because of the various Catch-22s. This only adds to my frustration with accessibility tools. :-(

So, back to your point about advice and patches: the best I can do is give you the benefit of my design skills, from the perspective of what the system looks like to a person with broken hands.

Any suggestion/help/feedback would be welcome. Thank you.

I know some of my suggestions (e.g. applications should be headless) aren't going to fly very far, but they do speak to the point of needing an API, not a user interface hack, when building an accessible interface.
Others, such as being able to define a region by specifying endpoints instead of simulating clicking and dragging a mouse, should be noncontroversial [3].

This was (partially) discussed with the Simon developers. In general, the AtkAction API can be improved in several ways. In any case, right now most activatable UI elements (like buttons) can be activated through the AtkAction API without the need to simulate mouse click events.
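
To make that concrete from the client side, here is a rough pyatspi sketch (the application and button names are just placeholders) that activates a button through the Action interface rather than synthesizing a click:

import pyatspi

def activate_button(app_name, button_name):
    # Find the named button in the named application and trigger its
    # default action (AtkAction on the toolkit side), without coordinates
    # or synthesized mouse events.
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        if app is None or app.name != app_name:
            continue
        button = pyatspi.findDescendant(
            app,
            lambda acc: acc.getRoleName() == 'push button'
                        and acc.name == button_name)
        if button is not None:
            button.queryAction().doAction(0)
            return True
    return False

And regarding the region-by-endpoints idea, for text content something similar already exists: the AT-SPI Text interface works on character offsets rather than screen coordinates, so a client can request a selection between two offsets directly, along the lines of accessible.queryText().addSelection(start_offset, end_offset).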

Best regards


-- eric


[1] Most of the current programming-by-speech techniques focus on code creation and not editing, even though editing is 90% of what we do with code.
[2] I had an interesting conversation with a person who has been blind from birth about adapting to a disability. Apparently, according to this person, when you're born with a disability you learn to be okay with dependency and you set your expectations low. That may just have been her experience, but I've seen this in a few blind people that I've known. However, acquiring a disability in adulthood is much more painful, because you know what you're losing and you are more vulnerable to intentional and unintentional criticism from people who don't understand disabilities.
[3] Something like this would be implemented by being able to acquire and hang onto location objects and, given two location objects, tell the application to select everything between them.
[4] NaturallySpeaking has a tool called the dictation box, which lets you dictate with the speech control commands outside of a non-enabled application. If you had filters on the input and the output, you could convert the non-speakable data in the application to a speakable form in the dictation box. If you expand this idea to general filters on a cut-and-paste operation, then you have a powerful tool for almost anybody.


-- 
Alejandro Piñeiro (apinheiro igalia com)

