Re: Signals versus methods in ATK

On 01/07/15 11:15, Luca Saiu wrote:
> Hello.

> I see that many if not all ATK state-changing operations are captured by
> *both* a method and a signal, conveying the same information by
> parameters -- plus a generic user_data pointer for signals.

It is not exactly the same information. Methods are used to retrieve
information from applications or to interact with them. Signals are
used to get notified when something happens. If in some cases the
parameters are similar, it is because they are related, or in order
to avoid accessibility tools making unneeded method calls, since the
information they need can be included in the signal (more about that
below).
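
To make that concrete, here is a minimal listener-side sketch
(on_caret_moved and watch_text are assumed names, not ATK API; the
text-caret-moved signal and atk_text_get_caret_offset are real
AtkText API):

#include <atk/atk.h>

/* The signal notifies that something happened and already carries
 * the new offset; the method can still be called to query state. */
static void
on_caret_moved (AtkText *text, gint offset, gpointer user_data)
{
  g_print ("caret moved to %d (method reports %d)\n",
           offset, atk_text_get_caret_offset (text));
}

static void
watch_text (AtkText *text)
{
  g_signal_connect (text, "text-caret-moved",
                    G_CALLBACK (on_caret_moved), NULL);
}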

> A good example is the atk_text_set_caret_offset method versus the
> text-caret-moved signal in AtkText: see
> https://developer.gnome.org/atk/unstable/AtkText.html .  The definition
> of atk_text_set_caret_offset in ATK (atk/atktext.c) simply calls the
> appropriate interface function pointer when non-NULL, and does nothing
> otherwise.
I assume that you are talking about this:
https://git.gnome.org/browse/atk/tree/atk/atktext.c#n1029

Yes, that is the usual GLib skeleton used to dispatch interface
virtual methods to their implementations.
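
For reference, a paraphrase of that skeleton (the link above points
at the real code in atk/atktext.c):

gboolean
atk_text_set_caret_offset (AtkText *text,
                           gint     offset)
{
  AtkTextIface *iface;

  g_return_val_if_fail (ATK_IS_TEXT (text), FALSE);

  iface = ATK_TEXT_GET_IFACE (text);

  /* Forward to the implementation's vfunc when it provides one,
   * and do nothing otherwise. */
  if (iface->set_caret_offset)
    return (*(iface->set_caret_offset)) (text, offset);
  else
    return FALSE;
}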

> An implementation of such a function is
> gtk_text_view_accessible_set_caret_offset (GTK+,
> gtk/a11y/gtktextviewaccessible.c), which explicitly emits text-caret-moved.

That method emits that signal because "something happened". In
general, if an ATK method changes an object, it is expected that a
signal is emitted. In your example, ATK provides a way to set the
caret. This can be useful for applications like Orca that provide
navigation modes. So if Orca (using libatspi) wants to set the caret,
at some point atk_text_set_caret_offset will be called, and as you
say, it emits text-caret-moved. But if the user moves the caret
manually (i.e. with the keyboard), that signal should be emitted too.
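
So whatever path moves the caret should end in an emission like this
(emit_caret_moved is an assumed helper name; the signal and its
offset parameter are the documented AtkText API):

/* Notify ATs that the caret of this AtkText moved to new_offset. */
static void
emit_caret_moved (AtkText *atk_text, gint new_offset)
{
  g_signal_emit_by_name (atk_text, "text-caret-moved", new_offset);
}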

> I didn't find an explicit explanation about when to use methods rather
> than signals.
As mentioned, methods are the way accessibility tools can access or
interact with an application. Signals are the way accessibility tools
get a notification when an application's state changes.

> I understand that signals can be added to listeners and
> handled in "user" code, but the difference between an ATK object
> implementation and its usage is quite blurry in what I am doing, and
> likely just a matter of convention.

As mentioned, the purposes of methods and signals are different; it
is not exactly a matter of convention. Using the AtkText
implementation of your example again: as far as I understand, the
element you want to expose is mostly a text element holding a status
string that you want accessibility tools, like Orca, to expose, and
that status string changes. Accessibility tools do not poll to check
whether that string changed, so you need to provide the signals that
notify that the text changed; then, if the status changes,
accessibility tools can expose it. Usually the text-changed-related
signals include the text that changed. This is for convenience, to
avoid accessibility tools receiving the signal and then calling back
(using the methods) to get the new text. But even though the signal
includes the new text, accessibility tools still need a way to get
that text. For example, when your application becomes active, the
text did not technically change, so no text-changed signals are
emitted, but accessibility tools still need to get that text. So the
methods to get the text are needed.
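
As a hedged sketch of that convenience, assuming a helper named
my_status_text_replaced on your AtkText implementation ("text-remove"
and "text-insert" are the AtkText signals that carry the position,
the length and the affected text):

#include <atk/atk.h>

/* Replace the whole status string and notify ATs, passing the old
 * and new text along with the signals so tools need not call back. */
static void
my_status_text_replaced (AtkText     *atk_text,
                         const gchar *old_text,
                         const gchar *new_text)
{
  g_signal_emit_by_name (atk_text, "text-remove",
                         0, (gint) g_utf8_strlen (old_text, -1), old_text);
  g_signal_emit_by_name (atk_text, "text-insert",
                         0, (gint) g_utf8_strlen (new_text, -1), new_text);
}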


> What I can see from the code above and by experimenting is that emitting
> a signal does *not* cause the corresponding method to be called, but the
> vice-versa is true for ATK objects as implemented in GTK+; I suppose I
> should do the same for my own ATK objects.

As mentioned, in general signals are emitted when something changes.
If the execution of any of the ATK methods has a change as a
consequence, then the signal is needed. Anyway, take into account
that, depending on your implementation, you don't need two places to
emit the signal. Take the text-caret-moved signal as the example
again. Let's say the user can set the caret: your toolkit will
probably have a method called my_toolkit_set_caret, and the keyboard
press callback will call my_toolkit_set_caret. As that method changes
the caret, it will trigger the emission of
AtkText::text-caret-moved. Now let's say that you implement
atk_text_set_caret_offset. That would probably call your internal
my_toolkit_set_caret, which will already manage the signal emission.
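
A minimal sketch of that shape (MyToolkitText, the g_object_get_data
lookup and the callback names are assumptions for illustration; only
my_toolkit_set_caret comes from the example above):

#include <atk/atk.h>

/* Assumed minimal widget type, for illustration only. */
typedef struct {
  gint       caret_offset;
  AtkObject *accessible;   /* the AtkObject/AtkText exposing the widget */
} MyToolkitText;

/* The single place that moves the caret and notifies ATs. */
static void
my_toolkit_set_caret (MyToolkitText *text, gint offset)
{
  text->caret_offset = offset;
  g_signal_emit_by_name (text->accessible, "text-caret-moved", offset);
}

/* Keyboard callback: the user moved the caret manually. */
static void
on_key_press (MyToolkitText *text, gint new_offset)
{
  my_toolkit_set_caret (text, new_offset);
}

/* AtkTextIface::set_caret_offset vfunc: an AT (e.g. Orca, through
 * libatspi) asked to move the caret.  It reuses the internal call,
 * so the emission is managed in one place.  The widget pointer is
 * assumed to have been attached to the accessible at creation time
 * with g_object_set_data(). */
static gboolean
my_toolkit_text_set_caret_offset (AtkText *atk_text, gint offset)
{
  MyToolkitText *widget =
    g_object_get_data (G_OBJECT (atk_text), "my-toolkit-widget");

  my_toolkit_set_caret (widget, offset);
  return TRUE;
}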

> Thanks to the bridge the
> AT-SPI registry in the end receives change notifications from signals,
> but not (automatically) by methods.

As mentioned, signals are the way accessibility tools get notified of
a change. Methods are the way accessibility tools retrieve
information from apps, or try to interact with them.

> So I suspect that, when changing a
> widget state in "user" code, I am supposed to always just call set_
> methods, and not emit signals; when implemented correctly those methods
> would emit signals as needed.

> This I deduced from the source code.  Would you please confirm that my
> understanding is correct?

After re-reading my email, I see it is not exactly well-structured,
but I hope it helps in any case.

BR

-- 
Alejandro Piñeiro (apinheiro igalia com)


