GTK_ADJUSTMENT redraw performance with gtk/gtkglext



Hi,

I'm new to GTK and I'm seeing some unexpected performance issues with gtk+
2.24 and gtkglext (gtkmm and gtkglextmm, specifically), so I'm looking for
some help.

Naofumi on gtkglext-develop suggested that I use invalidate_rect and
process_updates for synchronous draws, instead of calling queue_draw.  I
can't get that working on the HScale object (its Gtk::Widget::get_window()
method is returning a null Glib::RefPtr, so I'm probably doing something
stupid), but it doesn't seem to make much difference for the GL window.
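
For reference, here's roughly what I'm trying (a minimal sketch assuming
gtkmm 2.x; force_redraw is just my name for it).  The null check is where
the HScale bites me: get_window() returns an empty RefPtr until the widget
has been realized, which may well be my "something stupid":

    #include <gtkmm.h>

    // Synchronous redraw, per the gtkglext-develop suggestion.  HScale is
    // a no-window widget in GTK+ 2, so get_window() hands back its
    // parent's Gdk::Window and the allocation is already in that window's
    // coordinate system.
    void force_redraw(Gtk::Widget& widget)
    {
      Glib::RefPtr<Gdk::Window> win = widget.get_window();
      if (!win)
        return;                            // widget not realized yet

      win->invalidate_rect(widget.get_allocation(), false);
      win->process_updates(false);         // flush the expose right now
    }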

I'd expect that simply calling set_value on the Gtk::Adjustment object
would efficiently redraw the widget, but if I don't call queue_draw on it,
it only updates itself very rarely... does that mean I'm not spending any
time in the idle function?
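
Concretely, the per-frame update is shaped like this (again a sketch;
update_position and frame_number are my names for things):

    // What I expected to be enough: push the new frame number into the
    // adjustment and let the HScale redraw itself from value-changed.
    // In practice I only see regular updates with the explicit queue_draw.
    void update_position(Gtk::HScale& scale, int frame_number)
    {
      scale.get_adjustment()->set_value(frame_number);
      scale.queue_draw();   // shouldn't be necessary, but seems to be
    }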

If I disconnect the slider from the draw loop and just jerk it around 
quickly during playback, I see similar slowdowns, which I just can't 
figure out.

thanks for any help!

-dwh-

p.s. I'm packing the HScale object into an HBox.  When I use PACK_SHRINK
to squeeze the HScale object down to basically just the slider bar, I get
back to about 19fps; the bar doesn't move much because there's not really
anywhere for it to go, but the value is being drawn on top of the bar.
I'm surprised that I'm still dropping about 3fps (~15%) when the widget is
so small.
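
For completeness, the packing is roughly this (a sketch; hbox, scale, and
fps_label are my placeholder names, not the real variables):

    // HScale squeezed down with PACK_SHRINK instead of being allowed to
    // expand and fill the HBox.
    void pack_controls(Gtk::HBox& hbox, Gtk::HScale& scale,
                       Gtk::Label& fps_label)
    {
      hbox.pack_start(scale, Gtk::PACK_SHRINK);      // just the slider bar
      hbox.pack_start(fps_label, Gtk::PACK_SHRINK);  // frame-rate readout
    }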


---------- Forwarded message ----------
Date: Mon, 3 Nov 2003 20:53:21 -0800 (PST)
From: Drew Hess <dhess ilm com>
To: gtkglext-develop lists sourceforge net
Subject: performance issues with gtkglext


Hey all,

I'm using gtkglextmm-1.0 and gtkmm-2.0 in a full motion video app.  The 
app maps video frames to textures and then renders them by drawing a quad.
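
(For context, the per-frame draw is more or less the textured quad below,
sketched in plain OpenGL 1.x immediate mode with the texture and its
filters already set up elsewhere via glTexImage2D / glTexParameteri; the
real code differs in the details.)

    #include <GL/gl.h>

    // Upload the frame's pixels into an existing texture and draw it on a
    // single quad covering the viewport (identity modelview/projection).
    void draw_frame(GLuint tex, const void* pixels, int width, int height)
    {
      glBindTexture(GL_TEXTURE_2D, tex);
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);

      glEnable(GL_TEXTURE_2D);
      glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
      glEnd();
      glDisable(GL_TEXTURE_2D);
    }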

The app has 3 threads: a decoder thread, which decodes frames from the
file and inserts them into a ring buffer; a GTK thread, which manages the
application window and draws frames via OpenGL; and a timer thread, which
wakes up once per frame and tells the GTK thread when to draw.

To accomplish the latter, I'm using the Glib::Dispatcher object.  The
timer thread invokes the Dispatcher, and the other end of the dispatcher
calls queue_draw on the Gtk::GL::DrawingArea object.  
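
The wiring looks roughly like this (a sketch; the Player class and its
member names are made up, but the Dispatcher-to-queue_draw hookup is what
I described):

    #include <gtkmm.h>
    #include <gtkglmm.h>    // gtkglextmm

    class Player
    {
    public:
      explicit Player(Gtk::GL::DrawingArea& gl_area)
        : gl_area_(gl_area)
      {
        // The slot runs in the GTK thread whenever the dispatcher fires.
        frame_tick_.connect(sigc::mem_fun(*this, &Player::on_frame_tick));
      }

      // Called from the timer thread once per frame period.
      void tick() { frame_tick_.emit(); }

    private:
      void on_frame_tick() { gl_area_.queue_draw(); }

      Gtk::GL::DrawingArea& gl_area_;
      Glib::Dispatcher      frame_tick_;
    };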

In the Gtk::GL::DrawingArea object's on_expose_event method, it gets a
pointer to the current frame from the decoder, draws the frame, and then
tells the decoder thread that it's done using the frame.  Finally, it
returns true.
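
In outline the handler does something like this (a sketch of a
hypothetical VideoArea subclass of Gtk::GL::DrawingArea; Frame, decoder_,
and draw_frame_as_textured_quad stand in for the real code):

    bool VideoArea::on_expose_event(GdkEventExpose* /* event */)
    {
      Glib::RefPtr<Gdk::GL::Window> glwindow = get_gl_window();
      if (!glwindow || !glwindow->gl_begin(get_gl_context()))
        return true;

      Frame* frame = decoder_.current_frame();  // grab the decoded frame
      draw_frame_as_textured_quad(frame);       // texture upload + quad
      decoder_.release_frame(frame);            // hand the slot back

      glwindow->swap_buffers();
      glwindow->gl_end();
      return true;
    }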

I haven't touched the idle handler in the GTK thread; it's whatever gtkmm
sets up as the default handler.

All of the communication between threads happens via semaphores, except
for the Dispatcher call I described above.

This all works fine and I get good frame rates.

Today I started adding some GTK widgets.  The first thing I did was add an 
HScale object to the GTK window, in the same VBox as the 
Gtk::GL::DrawingArea object.  The HScale object is supposed to update once 
per frame, i.e., whenever the GL widget draws a new frame.  The HScale 
range is 1 to n, where n is the number of frames in the movie.
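
The scale setup is roughly this (a sketch; setup_position_scale and
n_frames are my names):

    // The HScale tracks the movie position: one step per frame, redrawn
    // once per frame from the GL widget's expose handler.
    void setup_position_scale(Gtk::HScale& position, int n_frames)
    {
      position.set_range(1.0, n_frames);   // frame 1 .. frame n
      position.set_increments(1.0, 1.0);   // step a single frame at a time
      position.set_digits(0);              // show whole frame numbers
    }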

In my first implementation, I called queue_draw on the HScale object from
the GL widget's on_expose_event method.  The HScale object updated
properly on every frame, but my frame rate on a large movie went from
22fps to about 15fps.  (This is a movie with large frames that chews up 
100% CPU when the app plays it back.)

Next, I switched the Glib::Dispatcher object to call queue_draw on the
VBox object, which contains the GL widget and the HScale (along with the
HBox which contains the HScale object and a Label; the Label only gets
updated about once a second to display the frame rate).  This was my
attempt to inject fewer events into the GTK loop.  It didn't help;
performance was comparable to what I got with the previous method.

Then I disconnected the HScale object from the draw loop altogether.  The
frame rate went back to the normal 22fps, as expected.  But if I drag the
HScale object back and forth with the mouse while the movie is playing, it
drops to 13-15fps again, depending on how fast I drag it.

I can understand a little overhead for drawing the HScale object, but
30-40% seems excessive.  Anybody got any ideas why this might be
happening?  I counted the number of expose events in the GL window, and
it's only being drawn when the timer wakes up and tells it to, so that
doesn't appear to be the problem.  Is there some X/OpenGL synchronization
or flushing going on when the HScale widget is redrawn?

thanks for any help

-dwh-







