Re: GDK_POINTER_MOTION_HINT_MASK has no effect



On Nov 29, 2007 1:36 PM, Paul Davis <paul linuxaudiosystems com> wrote:

    On Thu, 2007-11-29 at 09:51 +0100, Richard Boaz wrote:
    > This is the paradigm I use in all my drawing apps that has served me
    > well:
    >
    >     1) Do all drawing to one or more background pixmaps.

    GTK already does this for you now. All widgets are double buffered
    unless you explicitly request otherwise. So you are currently drawing
    into your bg pixmap, then GTK will copy it into another bg pixmap and
    then finally render it to the screen.

Well, this confuses me a bit.  Help me understand?

Given the following expose handler has been attached to a drawing area:

=== begin code sample 1 ===

gboolean exposeME(GtkWidget *da, GdkEventExpose *event, gpointer nil)
{
  int w = da->allocation.width;
  int h = da->allocation.height;
  GdkGC *gc = da->style->fg_gc[GTK_WIDGET_STATE(da)];

  // some lines:
  gdk_draw_line(da->window, gc, w/2, 0, w/2, h);  // vertical
  gdk_draw_line(da->window, gc, 0, h/2, w, h/2);  // horizontal
  gdk_draw_line(da->window, gc, 0, 0, w, h);      // diagonal
  gdk_draw_line(da->window, gc, 0, h, w, 0);      // diagonal

  return TRUE;
}

=== end code sample 1 ===

I assume the double-buffering occurs at least with each call to
gdk_draw_line(), but when is this double-buffer (that you the programmer
have no access to, I assume?) actually rendered to the screen?  After each
call, or only once the expose handler has exited?
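
(Digging into the GDK docs, my understanding, and I may be wrong here, is
that the buffering is implemented with gdk_window_begin_paint_region() and
gdk_window_end_paint(), with GTK+ wrapping the expose emission roughly like
this:)

/* my reading of what GTK+ 2 does internally around a double-buffered
   widget's expose; not code you write yourself */
gdk_window_begin_paint_region(da->window, event->region);
/* ... the "expose-event" signal is emitted; all drawing to da->window
   is redirected to a backing pixmap ... */
gdk_window_end_paint(da->window);  /* the one physical copy to the screen */

(If that reading is right, the copy happens once, in gdk_window_end_paint(),
after the handler returns.)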

If this physical event occurs only after the expose handler has exited,
then kudos to the GTK+ designers and developers; this is very good.  And
you are correct: my method employs an extra bg pixmap that is, in this
simple case, unnecessary, though in any case not expensive.

*If* this occurs after each call, then I can do better:

=== begin code sample 2 ===

GdkPixmap *pmap;

void drawMe(GtkWidget *da)
{
  int w = da->allocation.width;
  int h = da->allocation.height;
  GdkGC *gc = da->style->fg_gc[GTK_WIDGET_STATE(da)];

  gdk_draw_line(pmap, gc, w/2, 0, w/2, h);  // vertical
  gdk_draw_line(pmap, gc, 0, h/2, w, h/2);  // horizontal
  gdk_draw_line(pmap, gc, 0, 0, w, h);      // diagonal
  gdk_draw_line(pmap, gc, 0, h, w, 0);      // diagonal

  gtk_widget_queue_draw_area(da, 0, 0, w, h);
}

gboolean exposeME(GtkWidget *da, GdkEventExpose *event, gpointer nil)
{
  GdkGC *gc = da->style->fg_gc[GTK_WIDGET_STATE(da)];

  // copy only the exposed rectangle from the bg pixmap to the screen
  gdk_draw_drawable(da->window, gc, pmap,
                    event->area.x, event->area.y,   // source x, y
                    event->area.x, event->area.y,   // destination x, y
                    event->area.width, event->area.height);
  return TRUE;
}

=== end code sample 2 ===
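
(One thing code sample 2 never shows is where pmap comes from; here's a
minimal sketch of the configure handler and signal wiring I'm assuming,
with the pixmap (re)created on every resize:)

gboolean configureME(GtkWidget *da, GdkEventConfigure *event, gpointer nil)
{
  if (pmap)
    g_object_unref(pmap);            // drop the stale pixmap on resize
  pmap = gdk_pixmap_new(da->window,
                        da->allocation.width,
                        da->allocation.height,
                        -1);         // -1: same depth as the window
  drawMe(da);                        // repaint into the new pixmap
  return TRUE;
}

/* at setup time: */
g_signal_connect(da, "configure-event", G_CALLBACK(configureME), NULL);
g_signal_connect(da, "expose-event", G_CALLBACK(exposeME), NULL);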

Why better?  For at least a couple of reasons.  First, the physical render
to the screen has been reduced to a single copy, instead of one after each
drawing call to da->window as in the previous code sample.  (Again,
depending on when the physical render actually occurs.)

Second, double-buffering aside, since (I am assuming here, please correct
me if I'm wrong) you the programmer have no access to that internal double
buffer, code sample 2 lets an expose event from the window manager request
a physical draw corresponding exactly to those pixels that require
exposing.  This particular case is wholly unaddressed by code sample 1;
there, you must redraw the entire screen even if only 4 pixels in the
lower right corner require a re-draw.  As I said before, using the
GdkEventExpose info there to redraw only what's required is either
impossible or, if not, a complete waste of programming time and resources.
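
(Taking that one step further: the expose event also carries a region, so
if I read the GDK API right, the handler could honor each damaged rectangle
individually instead of the bounding box in event->area:)

/* inside exposeME(), a sketch: copy each damaged rectangle from the
   bg pixmap rather than the single bounding rectangle */
GdkRectangle *rects;
gint i, n;

gdk_region_get_rectangles(event->region, &rects, &n);
for (i = 0; i < n; i++)
  gdk_draw_drawable(da->window, gc, pmap,
                    rects[i].x, rects[i].y,     // source x, y
                    rects[i].x, rects[i].y,     // destination x, y
                    rects[i].width, rects[i].height);
g_free(rects);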

In both cases, this equates to a program that is more efficient.  Perhaps
a negligible gain, but a gain nonetheless.  Call me a pedantic purist, but
where graphical programming is concerned, wherever I can find a method
that is more efficient than another, that is the method I will choose.

For the record, I have always disagreed with the scribble example in the
demo code; it gets a simple job done, but in the real world, these things
are rarely so simple.

    >     2) Do all drawing in routines separate from your configure
    > handler or your expose handler.
    >     3) Call your drawing routine(s) from the configure handler.
    >     4) Once your drawing routine has completed its task, call
    > gtk_widget_queue_draw_area() to force the expose event.
    >     5) Do no drawing in your expose event except to
    > gdk_draw_drawable(), rendering your background pixmap to the
    > physical display.
    >     6) Use temporary pixmaps liberally when rendering a "temporary"
    > state to the screen.  (For example, display of cross-hairs and
    > coordinates when the user is simply dragging the mouse (no buttons
    > down) across the drawing area.)

    richard, i know you have a lot of experience with GTK, but i believe
    this is the opposite of the design structure of GTK (and X and also
    Quartz; not sure about win32). i am not saying that it doesn't work -
    clearly it does. but the internal design of GTK is assuming that drawing
    occurs inside expose event handlers, and is set up to make this as
    efficient and smooth as possible.

    > Basically, in de-coupling your configure and expose events from actual
    > drawing, you gain much more power in managing all the requirements of
    > your drawing area than if you use a single drawing area that is the
    > screen.  Not to mention that drawing directly to the screen is
    > absolutely the slower of the two options.

    this is false.

I obviously haven't done a very good job of explaining myself.  First,
though, I'm not sure how this paradigm is opposite to the GTK design
structure; why would it be?  What are bg pixmaps for if not for drawing to?

Twenty years ago, when something so beautiful as GTK+ didn't exist (Motif
in its infancy, yuk), I was using Xlib and drawing to the screen directly.
It worked, for fifteen years even, but it was ugly.  When having to draw
the 15 million points that make up a single seismogram (with up to 100 on
display at a time), you could literally watch the line being drawn on the
screen.

As I also stated, which method to choose comes down to requirements.  I
now have an application with more than 30 drawing areas (across several
tabs/screens), each requiring different routines to make its picture.
Doing all my drawing in an expose handler means that I must define a
different expose handler for each drawing area.  But this goes against
another principle I have come to embrace: define as few callbacks as
possible for widgets to use.  I have a single configure and expose handler
for all drawing areas, a single callback for all mouse events, drag
events, radio button groups, etc.  Why?  This minimizes the number of
routines that must exist and be defined, and for me, using nothing more
than gedit as my editor, it makes the chore of coding infinitely easier to
manage.

In defining a single configure and expose handler, I can do the following:

gboolean configureME(GtkWidget *da, GdkEventConfigure *event, int *which)
{
  makeDrawing(*which);
  return TRUE;
}

gboolean exposeME(GtkWidget *da, GdkEventExpose *event, int *which)
{
  GdkGC *gc = da->style->fg_gc[GTK_WIDGET_STATE(da)];
  GdkPixmap *pmap = PIXMAPS[*which];  // this drawing area's bg pixmap

  gdk_draw_drawable(da->window, gc, pmap,
                    event->area.x, event->area.y,   // source x, y
                    event->area.x, event->area.y,   // destination x, y
                    event->area.width, event->area.height);
  return TRUE;
}

both of these being defined once and only once for the entire lifetime of
my ever-developing program, and

where makeDrawing(*which) determines which drawing I'm being asked to make
to my bg pmap, ending with my call to gtk_widget_queue_draw_area(), which
forces the main loop to call exposeME().
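
(To make the single-handler idea concrete, here's a sketch of how I'd wire
the same two callbacks to every drawing area; NUM_AREAS, the WHICH[] index
array, and the da[] array are my shorthand here, only PIXMAPS[] appears
above:)

#define NUM_AREAS 30

GdkPixmap *PIXMAPS[NUM_AREAS];    // one bg pixmap per drawing area
int        WHICH[NUM_AREAS];      // stable per-area index to pass around
GtkWidget *da[NUM_AREAS];
int        i;

for (i = 0; i < NUM_AREAS; i++) {
  WHICH[i] = i;
  da[i] = gtk_drawing_area_new();
  g_signal_connect(da[i], "configure-event",
                   G_CALLBACK(configureME), &WHICH[i]);
  g_signal_connect(da[i], "expose-event",
                   G_CALLBACK(exposeME), &WHICH[i]);
}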

(Side note: another reason I call gtk_widget_queue_draw_area() instead of
calling exposeME() directly is yet another rule I follow when programming
with GTK+: where it is possible for the main loop to do the work for you,
let it; you cannot manage everything there is to manage better than the
main loop already does.)

This provides me with a single point of entry and exit between my code and
the main loop for all configure and expose events.  And I'm very fond of
single points of entry and exit, especially when it's a question of
passing control to and from a routine I have no control over (the main
loop).

For example, one of my screens has 9 drawing areas displayed at the same
time, with the added complexity that each is drawn using data retrieved
from a database.  The database requests occur via 9 background threads so
that I can make all 9 requests at the same time (for efficiency).  These
eventually return, at which time I call g_idle_add() with my
makePixmap(*which) routine.
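
(A sketch of that thread-to-idle handoff; fetchFromDB() and makePixmap()
stand in for my real routines, only the g_idle_add() pattern is the point:)

gboolean makePixmapIdle(gpointer data)
{
  int *which = data;
  makePixmap(*which);      // runs in the main loop, safe to touch GTK
  return FALSE;            // one-shot: remove this idle source
}

gpointer dbWorker(gpointer data)
{
  int *which = data;
  fetchFromDB(*which);     // blocking database query, off the main loop
  g_idle_add(makePixmapIdle, which);
  return NULL;
}

/* kick off all 9 requests at the same time: */
for (i = 0; i < 9; i++)
  g_thread_create(dbWorker, &WHICH[i], FALSE, NULL);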

Given a simple application with a single screen that displays a single
state at a time, putting all the drawing code in the expose handler is
probably the simplest way to go.  But add the complexity of many drawing
areas and many different deterministic drawable states, within a
multi-threaded application, and writing an individual expose handler that
renders directly to the screen for each and every case becomes a
nightmare.

Sorry for the effusion of wordage, but "this is false" is a little too
bold for me not to respond to when, though it might be false for you, it
very much isn't false in the cases that are necessarily interesting to me.

And I'll say it again, GTK+ gives everyone enough rope to tie the knot a
million different ways.  This is, for me, the beauty of GTK+: there is no
single way something should be done.  Each application comes with its own
requirements and subsequent dictates, and everyone is free to hang
themselves however they see fit.

cheers,

richard

p.s. this could have all been easily reduced to nothing more than: design
follows function, not the other way 'round.


