Re: [g-a-devel] ATK - Signal indicating new AtkObject creation.



Hi Michael:

Thanks for the response, and nice hearing from you.  :-)

	:-) I think Mark is advocating a position close to what I tend to think
is optimal. AFAICS there are not a vast number of accessibles on (or
near) the screen at any one time, and exposing them all allows for
extremely fast iteration, and querying of a consistent object hierarchy
by the AT.

If we limit the a11y hierarchy to what is burning phosphorus on the screen, then we can definitely reduce the number of accessible objects quite a bit, agreed. But, I have a lot of questions... :-)

I'm scratching my head about relations such as NODE_CHILD_OF, LABEL_FOR/LABELED_BY, CONTROLLER_FOR/CONTROLLED_BY, FLOWS_FROM/FLOWS_TO, etc., where the related accessible is not on the screen. What happens to these things? Do the relations not exist unless the related object is also on the screen? Are their accessible peers created lazily? Do they get created/sent with the hierarchy? Do we get a brand new "NOT VISIBLE" exception when trying to access them?
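For concreteness, here is roughly how an AT queries those relations today via pyatspi (a sketch; describe_labels is my name for it, and the SHOWING check is just the best available proxy for "on the screen"):

    import pyatspi

    def describe_labels(obj):
        # Walk obj's relation set, looking at LABELLED_BY targets.
        # Whether an off-screen target's peer gets created lazily here,
        # arrives with the hierarchy, or raises some new "not visible"
        # error is exactly the open question.
        for relation in obj.getRelationSet():
            if relation.getRelationType() != pyatspi.RELATION_LABELLED_BY:
                continue
            for i in range(relation.getNTargets()):
                target = relation.getTarget(i)
                if target.getState().contains(pyatspi.STATE_SHOWING):
                    print("on-screen label:", target.name)
                else:
                    print("off-screen label:", target.name)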

Or, imagine we have a large list that is a few items larger than the screen. Does the child count of the large list only include the objects on the screen (that would be odd)? Assuming the accessible child count is indeed consistent with the logical child count of the object, do the accessible peers for the non-visual children get created lazily when the AT tries to access them? Do we get a new "NOT VISIBLE" exception?
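In pyatspi terms, the question is what this loop sees for the rows that are scrolled out of view (again a sketch; read_list is my name for it):

    import pyatspi

    def read_list(acc_list):
        # If childCount only covers rendered rows, the AT sees a
        # truncated list; if it matches the logical count, each
        # getChildAtIndex() call for an off-screen row either creates
        # the peer lazily or fails in some yet-to-be-defined way.
        print("child count:", acc_list.childCount)
        for i in range(acc_list.childCount):
            child = acc_list.getChildAtIndex(i)
            showing = child.getState().contains(pyatspi.STATE_SHOWING)
            print(i, child.name, "(showing)" if showing else "(off-screen)")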

I'd guess maybe what you're proposing is that the app always dumps the a11y hierarchy for only the stuff that's burning phosphorus, and that accessibles for off-screen objects get created lazily when they are referenced or queried.
Seems like it might be workable.

In this new world order, however, will an AT still be able to hang onto the handle of an accessible for the purposes of identity/equality comparison across events? An example use case is where a screen reader wants to gather information about where focus used to be and where it is now. It does so by listening for focus events and saving the event source away.
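In pyatspi terms, that pattern is a sketch like the following; it only works if the saved handle still identifies the same accessible later, i.e., if the peer isn't torn down and recreated when it leaves the screen:

    import pyatspi

    last_focus = None  # event source saved from the previous focus event

    def on_focus_changed(event):
        global last_focus
        if event.detail1:  # gained focus
            if last_focus is not None and last_focus != event.source:
                print("focus moved from", last_focus, "to", event.source)
            last_focus = event.source

    pyatspi.Registry.registerEventListener(
        on_focus_changed, "object:state-changed:focused")
    pyatspi.Registry.start()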

  In addition, I recall some things (OOo?) may not create accessible
peers for objects until they are rendered on the screen.

	Sure; this is the only way to do it AFAICS, consider worse cases like
"all of time" in a calendar / month-view widget or somesuch ;-) For
these cases I think we need a new 'paging' API, that allows people to
move down a page, or scroll-to-keep-cursor-centred or whatever - last we
talked this looked very useful to the impaired, for screenreaders & so
on.

One obvious use case related to this is a "Say All" feature for a screen reader such as Orca. In this mode, the user presses a key and Orca starts reading the document, continuing until it reaches the end or the user presses another key to stop. While doing this, Orca may also highlight the words as they are being spoken.

The API needs to let Orca get the information in the document in sequence, preferably by grammatical structure (e.g., sentence by sentence, paragraph by paragraph, etc.), and also to make sure the text being read is visible on the screen.
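With the existing AT-SPI text interface, that traversal looks roughly like this (a sketch; speak() and highlight() are hypothetical stand-ins, and scrolling each sentence into view is the piece the proposed paging API would have to supply):

    import pyatspi

    def say_all(document):
        text = document.queryText()
        offset = text.caretOffset
        while offset < text.characterCount:
            sentence, start, end = text.getTextAtOffset(
                offset, pyatspi.TEXT_BOUNDARY_SENTENCE_START)
            if end <= offset:
                break            # no forward progress; stop
            speak(sentence)      # hypothetical speech call
            highlight(document, start, end)  # hypothetical highlighting
            offset = end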

Also related to this is what we would need to do when a large text object is partially on/off the screen. Gedit, for example, has a single text object that visually shows a window onto what could be a very large text document. What happens to the text that is not on the screen? Does all of its accessible text get sent along with the hierarchy?
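Today the answer is that an AT can pull the entire logical buffer regardless of visibility (a sketch; dump_text is my name for it), and that is precisely the behavior at stake:

    import pyatspi

    def dump_text(obj):
        # gedit currently answers for the whole document here. Under an
        # on-screen-only model, would the range beyond the visible
        # window come back empty, be materialized lazily, or error out?
        text = obj.queryText()
        print("logical length:", text.characterCount)
        return text.getText(0, text.characterCount)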

Note also that in order for Orca to be able to do things such as more
sophisticated navigation (e.g., "go to the next table in this document"),
it needs access to the whole document.

	Or better we need to expose only what is on-screen, and expose a nice
powerful table navigation API: "skip to next <Foo>" or whatever (also
"get count of headings", "skip to heading <N>"). OTOH - those guys are
going to be really efficient, and of course most applications are built
around optimising screen rendering & thus surely shouldn't struggle to
create a set of a11y peers for what is on-screen (?).

I believe the Collections API is supposed to help with this, but I think it has some implementation issues that can actually make it slower in some cases. :-(
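For reference, the Collection-based version of "skip to the next table" looks roughly like this (a sketch patterned on how Orca's structural navigation uses the interface; the createMatchRule argument order follows the AT-SPI Collection spec as I recall it, so treat the details as approximate):

    import pyatspi

    def find_tables(document):
        # Match every descendant with ROLE_TABLE, in document order.
        col = document.queryCollection()
        states = pyatspi.StateSet()
        rule = col.createMatchRule(
            states.raw(), col.MATCH_NONE,         # no state constraint
            [], col.MATCH_NONE,                   # no attribute constraint
            [pyatspi.ROLE_TABLE], col.MATCH_ANY,  # role must be table
            "", col.MATCH_NONE,                   # no interface constraint
            False)
        try:
            return col.getMatches(rule, col.SORT_ORDER_CANONICAL, 0, True)
        finally:
            col.freeMatchRule(rule)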

	Otherwise, this sounds like a positive step to me at least :-)

It's definitely good to have these discussions. I was mostly on the provider side of things (i.e., the infrastructure side) for many years prior to working on Orca. I've now been on the consumer side for several years. Life is much different on the consumer side. :-)

Will


