Re: [g-a-devel] Not so easy: About pyatspi2 deprecation and the merge of ATK and AT-SPI



Hi Alejandro,

Thanks for getting back to me.

On Tue, 2013-11-05 at 12:14 +0100, Piñeiro wrote:
On 11/05/2013 12:58 AM, Magdalen Berns wrote:

Would need to be replaced by
  selection = accessible.get_selection_iface()
  if selection:
      <get the selected object using the selection interface>
  else:
      <use a fallback to get the selected object>
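For illustration, that pattern could look roughly like the following with the
introspected Atspi bindings (a sketch only; the helper name
get_first_selected_child is made up, and the fallback shown is just one
possibility):

    # Sketch: assumes the GObject-introspected AT-SPI bindings
    # (gi.repository.Atspi); get_first_selected_child is a hypothetical helper.
    import gi
    gi.require_version('Atspi', '2.0')
    from gi.repository import Atspi

    def get_first_selected_child(accessible):
        selection = accessible.get_selection_iface()
        if selection:
            # The object implements Selection, so ask it directly.
            return selection.get_selected_child(0)
        # Fallback: walk the children looking for the SELECTED state.
        for i in range(accessible.get_child_count()):
            child = accessible.get_child_at_index(i)
            if child.get_state_set().contains(Atspi.StateType.SELECTED):
                return child
        return None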

* Registry.registerEventListeners allows registering more than one event
  in a single call (see the sketch below).
* All the utility methods in utils.py.
* etc.
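For reference, the pyatspi convenience being referred to is roughly this (a
sketch; the callback name on_event is made up, and I am using the
registerEventListener spelling from pyatspi2):

    # Sketch: classic pyatspi2 usage, registering one callback for several
    # event types in a single call; on_event is a hypothetical callback.
    import pyatspi

    def on_event(event):
        print(event.type, event.source)

    pyatspi.Registry.registerEventListener(
        on_event,
        'object:state-changed:focused',
        'object:text-changed:insert')
    pyatspi.Registry.start()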

Thanks! I think something similar was done to make the JavaScript bindings
work, because some of the interfaces (and maybe their methods - I can't
remember off the top of my head) could not share names with ones already
existing in gnome-shell JavaScript.

Since you said you didn't remember, this is the commit you are talking
about:
https://git.gnome.org/browse/at-spi2-core/commit/?id=5b9c1723

And that change was only possible because there wasn't any other app using
the AT-SPI JavaScript bindings, and because we had the manually created
Python bindings. It is basically a big API change, but Orca was not
affected because its bindings are manually wrapped.
 
As far as I know, the only surprise the author found was that DBUS was
even slower than expected. But in any case, he was already expecting DBUS
to be slower.

It was a bit more than that, but my main point was that taking errors
into account should explain surprises, i.e. if the results are
surprising, the experimental method is likely to be flawed.

Well, this is probably a subjective opinion, but I don't agree. Take into
account the abstract of his analysis:

"I have been under the impression that the CORBA IPC/RPC mechanism used by
the GNOME desktop environment was bloated and slow. People have
commented that the DCOP and
DBUS were speedier alternatives."

So he thought that DBUS was faster than CORBA, and he did all those
experiments to confirm that theory. And this was his conclusion:

"For repeated calls to a particular RPC function, the C bindings to the
Orbit2 orb outperform calls using
the C++ bindings to DCOP and DBUS."

So the surprise came from the fact that the empirical tests showed he was
wrong in his initial assumption, not from the experiment itself being
wrong.

The report found CORBA to be significantly faster than DBUS (and DCOP),
but what is the precision of those results? There is no way to know,
because no error analysis was done. Moreover, each result was obtained
using a different method, so the comparison was unfair.

Having said that, we also need to take into account that those tests give
a really narrow perspective on the performance hits. They only measure
common methods of the accessibility framework individually, in order to
directly compare the performance of CORBA vs DCOP vs DBUS. But as I said
in my previous email, a good test suite should also analyse which of those
accessibility methods are called, and how many times.
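As a rough illustration of what "which and how many times" could mean in
practice, something like the following (my sketch, not an existing tool)
could tally calls while an AT exercises an application:

    # Sketch: tally how often selected wrapper functions are invoked during
    # a test run; the decorator and counter names here are hypothetical.
    import collections
    import functools

    call_counts = collections.Counter()

    def counted(method):
        """Wrap a callable so every invocation is tallied in call_counts."""
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            call_counts[method.__name__] += 1
            return method(*args, **kwargs)
        return wrapper

    # After the run, call_counts.most_common() shows which calls dominate,
    # and therefore which ones are worth optimising (or caching) first.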



7. Actual code gets written to implement changes decided in 6.

This is not clear. As I said, one of the ideas of measuring is testing
a possible change.

I just meant that at that stage people would finally be able to act on the
information discovered by running tests and analysing the results.
Essentially, what I was driving at was that without that information there
is nothing anyone can be sure of, so it is probably worth answering the
questions, and if that can be done sooner rather than later it may be
worth it in the long run.

Then we are at a chicken-and-egg problem. You are saying that without
answering the questions we can't measure. I'm saying that I would like to
use the measurements to find where the possible problems are, and from
those, the questions.


No, I am not saying that. I am saying that without knowing the answers to
the questions nobody is in a position to make an informed decision, i.e.
essentially we agree.

Because in short this is the situation:

Problem: some people think that the GNOME accessibility framework's
performance should be better.

Not unique to GNOME (*cough* OS X), and considering the numbers involved
things could be a lot worse, but that does not mean they should not be
better. Ultimately, numbers do not lie and, as you are probably already
aware, there are only so many hours in the day, very few a11y developers,
and yet the work requires a high level of skill.

In addition, relatively few users exist to give feedback about issues, let
alone performance, and the majority of users (and developers alike)
seemingly switch a11y off altogether and forget about it. Perhaps this is
because of performance, but in any case it means that the few a11y
developers there are, are left to be proactive by themselves, or otherwise
wait and see.

Yet these interfaces seem to be the infrastructure on which all of GNOME
a11y depends. If there is a problem and the scale of the problem is
currently unclear, then it seems like 'wait and see' could be risky.

I wonder if this has ever been considered:

Not having any separate accessibility interface, but rather integrating
accessibility functions into the core GNOME APIs to make sure they are all
'accessibility compliant'. To my mind, that could offer a complete
solution, but maybe I am covering some idea that the experts have already
found reasons not to pursue.

Things seem to be less refined in JavaScript than in Python, so I am
probably looking at things less objectively than I need to be. I thought I
would throw it out there, anyway. :-)

Hypothetical solution: improve the performance at the current bottlenecks.
Question: where are those bottlenecks?

Do you mean the problem of synchronous calls or something else?
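For concreteness, the distinction between the two call styles looks
roughly like this (a sketch using Gio; the ListNames call is just a
stand-in for an AT-SPI method):

    # Sketch: synchronous vs asynchronous D-Bus calls with Gio.
    import gi
    gi.require_version('Gio', '2.0')
    from gi.repository import Gio, GLib

    bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)

    # Synchronous: the caller blocks until the reply arrives.
    reply = bus.call_sync(
        'org.freedesktop.DBus', '/org/freedesktop/DBus',
        'org.freedesktop.DBus', 'ListNames',
        None, GLib.VariantType.new('(as)'),
        Gio.DBusCallFlags.NONE, -1, None)
    print(len(reply.unpack()[0]), 'bus names (sync)')

    # Asynchronous: the caller keeps running; a callback handles the reply.
    def on_reply(connection, result, *user_data):
        names = connection.call_finish(result).unpack()[0]
        print(len(names), 'bus names (async)')
        loop.quit()

    bus.call(
        'org.freedesktop.DBus', '/org/freedesktop/DBus',
        'org.freedesktop.DBus', 'ListNames',
        None, GLib.VariantType.new('(as)'),
        Gio.DBusCallFlags.NONE, -1, None, on_reply)

    loop = GLib.MainLoop()
    loop.run()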

My suggestion was to use something similar to what was done in the past
[1] to find those bottlenecks, and then use that test suite to ensure
that we don't regress on any future change. Using the same scheme:

Problem2: we want to improve the GNOME accessibility framework API
Hypothetical solution2: there are two possible APIs

Question2: is API1 worse than API2 in terms of performance? Is either of
them worse than the current state?

A comparison test seems like a good place to start. Perhaps we could
collectively compile a list of any functions that are common to both APIs
before devising some suitable tests to analyse how they are processed,
from initialisation to function call. That might help shed some light on
the matter?
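Something along these lines could be a starting point (a sketch only;
name_via_api1 and name_via_api2 are placeholders for whatever common
function gets picked, and the repeated runs give the error estimate
mentioned above):

    # Sketch: time the same common operation through two candidate APIs,
    # repeating the measurement so a mean and a spread can be quoted.
    import timeit

    def name_via_api1():
        pass  # placeholder: e.g. fetch an accessible's name through API1

    def name_via_api2():
        pass  # placeholder: fetch the same name through API2

    def mean_and_stdev(samples):
        # Plain mean and sample standard deviation, so each figure can be
        # quoted with an uncertainty instead of as a single number.
        m = sum(samples) / len(samples)
        var = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
        return m, var ** 0.5

    CALLS = 1000
    for label, func in (('API1', name_via_api1), ('API2', name_via_api2)):
        samples = timeit.repeat(func, repeat=10, number=CALLS)
        m, sd = mean_and_stdev(samples)
        print('%s: %.6f +/- %.6f s per %d calls' % (label, m, sd, CALLS))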

Magdalen



