Re: D-BUS based magnification API



Hi Willie,

2007/8/28, Willie Walker <William Walker sun com>:
> Hi Carlos:
>
> This is a very timely discussion to have since there are a bunch of
> thoughts on many people's minds right now.  These include the following:
>
> * Eliminating the Bonobo dependency from accessibility stuff in GNOME.
>   Note that this doesn't necessarily equate to eliminating CORBA.

I really have no idea how much impact this would cause. I have only learned
the basics of Bonobo, and my understanding is that it just makes the IPC
work easier. CORBA appears to be much more low-level, and I have no idea
whether it will still be used in the GNOME desktop. I'm going in the D-BUS
direction, since it appears to be much simpler for desktop IPC and also
seems to have much more documentation available.
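
Just to make that more concrete, below is a minimal sketch of how a D-BUS
magnification service could look, written with dbus-python and the GLib main
loop. The bus name, object path, interface and method names
(org.gnome.Magnifier, SetMagFactor, SetROI) are only placeholders I made up
for the example; nothing is decided yet.

# Hypothetical D-BUS magnification service sketch (all names are placeholders).
import dbus
import dbus.service
import dbus.mainloop.glib
import gobject

BUS_NAME = 'org.gnome.Magnifier'        # assumed bus name
OBJECT_PATH = '/org/gnome/Magnifier'    # assumed object path
IFACE = 'org.gnome.Magnifier.Zoomer'    # assumed interface name

class Zoomer(dbus.service.Object):
    """One zoom region, roughly mirroring gnome-mag's Zoomer."""

    def __init__(self, bus):
        dbus.service.Object.__init__(self, bus, OBJECT_PATH)
        self.mag_factor = (2.0, 2.0)
        self.roi = (0, 0, 640, 480)

    @dbus.service.method(IFACE, in_signature='dd', out_signature='')
    def SetMagFactor(self, x_factor, y_factor):
        # The real magnifier would resample its source area here.
        self.mag_factor = (x_factor, y_factor)

    @dbus.service.method(IFACE, in_signature='iiii', out_signature='')
    def SetROI(self, x1, y1, x2, y2):
        # Region of interest in source (unmagnified) screen coordinates.
        self.roi = (x1, y1, x2, y2)

if __name__ == '__main__':
    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    name = dbus.service.BusName(BUS_NAME, bus)
    Zoomer(bus)
    gobject.MainLoop().run()

The idea is simply that anything gnome-mag exposes through Bonobo today could
be exposed as D-BUS methods in much the same shape.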

>
> * Understanding where the masses are going with respect the
>   composite extension manager.  Is it metacity, compiz, something
>   else?  Where/how does gnome-mag fit in this new world?

There was a big discussion about the Metacity compositor and Compiz:
http://mail.gnome.org/archives/desktop-devel-list/2006-October/msg00011.html
and it appears that GNOME will not replace Metacity with Compiz. The
discussion is quite old and things may have changed, but I haven't seen any
other discussion about a replacement, so the path, IMHO, is Metacity.

If Metacity doesn't introduce compositing effects some day (something that
I doubt), we already have basic compositing being done in gnome-mag (let's
say that the code to track windows is there :-).

We must have a compositor, and it appears that the best path, considering
the effects cited in the other messages, is to use Metacity, since it still
has a good architecture (thanks Sandmann) IMHO. Moreover, in my reply to
Eitan (I sent it from the wrong e-mail address, so it doesn't appear on the
list yet), I mentioned a way to keep the resolution of text and SVG images
in the magnified image, that is: wrap cairo so that, when the magnifier is
activated, instead of drawing (or instead of only drawing) into the
application window, an SVG representation is created and the magnifier is
notified about it and about updates to this SVG. I thought of this as
something in the same layer as AT-SPI. Then, when the magnifier composes
the final screen, it draws the window pixmap and renders the SVG.

In this world, gnome-mag doesn't fit.
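
To illustrate the cairo/SVG idea a bit more, here is a rough sketch in
Python (pycairo plus the rsvg binding) of how the compositor could paint one
magnified window: scale the raster pixmap as usual, then re-render the SVG
representation at the target scale so text and vector content stay sharp.
The file names and the fixed 2x factor are only example values.

# Sketch: compose a magnified window from its raster pixmap plus an
# SVG representation of its vector/text content (names are examples only).
import cairo
import rsvg

MAG_FACTOR = 2.0

def compose_magnified(window_png, window_svg, width, height):
    """Return a surface with the window magnified by MAG_FACTOR."""
    out = cairo.ImageSurface(cairo.FORMAT_ARGB32,
                             int(width * MAG_FACTOR),
                             int(height * MAG_FACTOR))
    cr = cairo.Context(out)

    # 1. Scale up the raster pixmap (this is what a plain magnifier does;
    #    bitmaps get blurry/blocky at high factors).
    cr.save()
    cr.scale(MAG_FACTOR, MAG_FACTOR)
    pixmap = cairo.ImageSurface.create_from_png(window_png)
    cr.set_source_surface(pixmap, 0, 0)
    cr.paint()
    cr.restore()

    # 2. Re-render the SVG representation at the magnified size, so text
    #    and vector shapes keep full resolution.
    cr.save()
    cr.scale(MAG_FACTOR, MAG_FACTOR)
    svg = rsvg.Handle(file=window_svg)
    svg.render_cairo(cr)
    cr.restore()

    return out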

>
> > First, I thought in develop a one-to-one map between the actual
> > gnome-mag API[1] and this new API, but I think that some things don't
> > need to go to the new API.
>
> One of the philosophies behind the gnome-mag API seems to be that the
> thing driving the magnifier also provides the configuration GUI for it.
> Thus, Orca provides a whole page dedicated to setting up configuration
> options for the magnifier, leaving lots of room for improvement.
>
> An alternative approach might be that the magnifier acts as its own
> entity, listening for AT-SPI events and making its own autonomous
> decisions about what to bring into view and how to bring it into view.
> With this, the magnifier would provide its own configuration GUI,
> allowing it to be as rich as it would like.  It would also help provide
> a nice division of labor among assistive technology developers.

Yeah, this is possible from my POV, and it is what happened with the
introduction of the colorblind-applet. We could probably have a
configuration option in System => Preferences where the user can adjust
magnification options, and also have the API where ATs can tweak the
magnifier behaviour on the fly.

We could probably also split out some Orca/LSR scripts to listen for
AT-SPI events and make these autonomous decisions in lightweight Python
processes (a sketch follows below). This way we don't need the entire AT,
and when the ATs need these scripts they would communicate with them. This
would give more flexibility, but more work. What do you think?
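
As a sketch of what such a lightweight helper could look like, the script
below uses pyatspi to watch focus events and forwards an ROI hint to the
magnifier over D-BUS. It reuses the same placeholder D-BUS names as the
service sketch earlier in this mail; they are assumptions, not a real
interface.

# Sketch: a lightweight helper process that watches AT-SPI focus events
# and forwards an ROI hint to the magnifier over D-BUS (placeholder names).
import dbus
import pyatspi

BUS_NAME = 'org.gnome.Magnifier'
OBJECT_PATH = '/org/gnome/Magnifier'
IFACE = 'org.gnome.Magnifier.Zoomer'

bus = dbus.SessionBus()
zoomer = dbus.Interface(bus.get_object(BUS_NAME, OBJECT_PATH), IFACE)

def on_focus(event):
    """Push the extents of the newly focused widget as an ROI hint."""
    if not event.detail1:
        return                     # widget lost focus, ignore
    try:
        ext = event.source.queryComponent().getExtents(pyatspi.DESKTOP_COORDS)
    except NotImplementedError:
        return                     # no Component interface, nothing to do
    zoomer.SetROI(ext.x, ext.y, ext.x + ext.width, ext.y + ext.height)

pyatspi.Registry.registerEventListener(on_focus, 'object:state-changed:focused')
pyatspi.Registry.start()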

>
> At the same time, the magnifier should be able to listen for
> hints/requests from other applications such as Orca.  These hints might
> be as simple as suggestions for what area of the display to magnify (and
> perhaps why).  For example, Orca might provide hints about what it is
> presenting as part of the SayAll and flat review operations: what is it
> presenting (e.g., where on the screen, maybe even a cross-process
> reference to an accessible) and why is it presenting this information
> (e.g., speaking a line, word, character)?  The magnifier could then
> bring things into view accordingly, doing things such as centering the
> region if the magnifier preferences dictate this.

From my POV this is as simple as setting the ROI of the magnifier. Maybe
you are suggesting this because of the autonomous behaviour that you have
in mind for the magnifier, isn't it? I think we should keep things simple
and only engineer a complex solution with lots of options when there are
concrete use cases that can't be addressed by the current API.
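
Just to show what I mean by "setting the ROI": given the zoomer size and
the magnification factor, centering something is only a matter of computing
the source rectangle around it. The helper below is nothing more than that
arithmetic.

def centered_roi(focus_x, focus_y, view_width, view_height, mag_factor):
    """ROI (x1, y1, x2, y2) in source pixels that puts (focus_x, focus_y)
    in the center of a view_width x view_height zoomer at mag_factor."""
    src_w = view_width / mag_factor    # source area that fits in the view
    src_h = view_height / mag_factor
    x1 = focus_x - src_w / 2.0
    y1 = focus_y - src_h / 2.0
    return (int(x1), int(y1), int(x1 + src_w), int(y1 + src_h))

# Example: center the caret at (800, 300) in a 1024x768 zoomer at 4.0x
# -> (672, 204, 928, 396)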

>
> To take this a step further and look at it from a different point of
> view, one might consider an API for a screen reader to broadcast what it
> is currently doing.  If a magnifier were to listen for these things, it
> could add them to its other sources of information (e.g., AT-SPI events,
> mouse movement events, etc.) to make intelligent decisions about what to
> do.  If a different assistive technology (e.g., something that
> selectively dims areas of the screen or highlights text or whatever)
> listened for these things, it could also react in its own way.  With
> this approach, a screen reader need not write support for all existing
> and future assistive technologies it might need to interact with.
> Instead, other assistive technologies can add screen reader information
> to their other sources of information and make their own decisions.

If no one opposes implementing the gnome-mag API inside Metacity, I will
start this work without further discussion (I think I was the only one
against it in the past :-), starting with the current gnome-mag API, since
that is the basic functionality we need. Then we can start to think about
how these interactions can be done, since they must be carried out by the
WCM.

The needs of the applications cited by Peter Korn can also be addressed here.

>
> > Today the mouse tracking mode is managed by clients applications, but
> > appear that the AT developers would like to see this feature moved
> > inside the magnifier. I don't see problems with this, but I think that
> > we must also maintaim the possibility to also control the mouse
> > tracking logic by external ATs, since these applications track more
> > information about the environment and can alter this mouse tracking
> > logic.
>
> Agreed.  I would like to eliminate the need for Orca to have to listen
> for mouse events from the AT-SPI via CORBA, only to turn around and send
> a filtered/translated form off to gnome-mag over CORBA.  Instead, I
> would like to see gnome-mag listen for mouse events directly and update
> zoomer information accordingly.  Right now, we see four mouse tracking
> styles - none, centered, proportional, and push.  They are relatively
> simple to implement and getting them in gnome-mag would be a big plus.
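
Exactly, and the four styles are mostly small ROI calculations, which is
why moving them into the magnifier looks cheap to me. The sketch below is
only my reading of how they usually behave (proportional maps the pointer's
position on the screen to a proportional position inside the ROI; push only
scrolls when the pointer reaches the ROI edge), so take the details with a
grain of salt.

# Sketch of the four mouse tracking styles as ROI updates.
# roi is (x1, y1, x2, y2) in source pixels; screen_w/screen_h is the full
# (unmagnified) screen size. This is my interpretation, not gnome-mag code.

def track_none(roi, mx, my, screen_w, screen_h):
    return roi                                   # ignore the mouse

def track_centered(roi, mx, my, screen_w, screen_h):
    w, h = roi[2] - roi[0], roi[3] - roi[1]
    return (mx - w // 2, my - h // 2, mx + w - w // 2, my + h - h // 2)

def track_proportional(roi, mx, my, screen_w, screen_h):
    # Mouse at 25% of the screen width ends up 25% into the ROI, etc.
    w, h = roi[2] - roi[0], roi[3] - roi[1]
    x1 = int(mx - (float(mx) / screen_w) * w)
    y1 = int(my - (float(my) / screen_h) * h)
    return (x1, y1, x1 + w, y1 + h)

def track_push(roi, mx, my, screen_w, screen_h):
    # Scroll only when the pointer leaves the current ROI.
    x1, y1, x2, y2 = roi
    if mx < x1: x1, x2 = mx, mx + (roi[2] - roi[0])
    if mx > x2: x2, x1 = mx, mx - (roi[2] - roi[0])
    if my < y1: y1, y2 = my, my + (roi[3] - roi[1])
    if my > y2: y2, y1 = my, my - (roi[3] - roi[1])
    return (x1, y1, x2, y2)
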
>
> > I also read some stuff about the eZoom plugin for compiz-fusion and
> > some of it's API [2]. I would like to know if someone of they must be
> > in this new API? I saw some interesting comments about users that
> > would like that what they are typing be in the center of the
> > magnification window. This doesn't appear to be difficult to use in
> > the actual gnome-mag API. This just appear to be the same logic to
> > track the mouse in the center of the magnifier window.
>
> I think the best thing to do is take a step back and involve end users
> in the design.  We need to hear what they want from a magnifier, how
> they use it on a daily basis, what they expect when using it with other
> assistive technologies, etc.  I'm not sure I've seen a lot of this.

This is really important. I was in a project at the university where I
studied accessibility, especially magnification, and we tested different
magnifiers with some visually impaired people. The biggest problem is that
these users don't have daily access to computers (or have never had access
to them), so they interact very little with these magnifiers and don't give
valuable feedback on how they could be improved. They used ZoomText, Lunar,
gnome-mag, one magnifier that we developed with fewer resources than
gnome-mag, Virgo and others, and they all said that having access to any of
them would be great. They pointed out the mouse cursor set that ZoomText
provides and its ability to roll the screen left and right below the
magnified window.

They didn't interact much with AppReader and DocReader, so I don't know how
useful they are, but they are really impressive tools!

>
> Are you coming to GNOME Boston 2007?  Magnification is one of the
> critical things we want to talk about for the accessibility summit.

Ow... it would be a dream :-), but I don't have a passport and don't have
money, so no :-(. But I would really like to know what gets discussed.

Best regards,
Carlos.

>
> Will
>
>
>


