Re: Scrolling performance

Hello John,

Thanks a lot for being so patient and constructive, it's really
motivating!
I know I'm not the dream candidate to start working on this, but hey,
what does GTK have to lose? ;)

> I think the artifacts are because, with double-buffering turned off,
> expose events do not get an automatic clear-to-background. There's a [...]
Yeah, I read the note yesterday; it made that clear to me too.
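For anyone who wants to reproduce it, here is a minimal sketch of the
kind of test I ran (my own toy code, nothing from GTK itself). With
double buffering off, the expose handler has to clear the exposed
area itself, otherwise you get exactly those artifacts:

#include <gtk/gtk.h>

/* With gtk_widget_set_double_buffered(w, FALSE) the automatic
 * begin_paint/end_paint pair around "expose-event" goes away, and
 * with it the clear-to-background, so we clear the area ourselves. */
static gboolean
expose_cb (GtkWidget *widget, GdkEventExpose *event, gpointer data)
{
  gdk_window_clear_area (widget->window,
                         event->area.x, event->area.y,
                         event->area.width, event->area.height);
  /* ... the actual drawing into widget->window goes here ... */
  return FALSE;
}

int
main (int argc, char **argv)
{
  GtkWidget *win, *area;

  gtk_init (&argc, &argv);
  win  = gtk_window_new (GTK_WINDOW_TOPLEVEL);
  area = gtk_drawing_area_new ();
  gtk_container_add (GTK_CONTAINER (win), area);

  gtk_widget_set_double_buffered (area, FALSE);  /* buffering off */
  g_signal_connect (area, "expose-event", G_CALLBACK (expose_cb), NULL);
  g_signal_connect (win, "destroy", G_CALLBACK (gtk_main_quit), NULL);

  gtk_widget_show_all (win);
  gtk_main ();
  return 0;
}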

> 1) Repainting would be fast but flickery and incredibly ugly. People with
> older machines and nvidia hardware acceleration would see an FPS
> improvement, but their eyes would be watering.
Yes, I saw that large amount of flickering. Looking at the results,
this would be more or less a dirty hack. It's really ugly ;)

I'll do some tests to see what influence compositing managers have.
They do their own sort of double buffering, so maybe under those
circumstances we could save ourselves all this effort entirely?

> 2) Use nvidia's pixmap placement hint

> I think this would require a new driver from nvidia and changes to the
> X server to make the option accessible. Not going to happen any time
> soon.

> 3) Persuade nvidia to change their pixmap caching policy

> It seems to me that their policy is broken. If an app creates a
> pixmap, you'd expect it to be used soon. Making new pixmaps default to
> slow memory is rather odd.

> But I guess they've done a lot of profiling (of non-GTK apps, heh) and
> like it the way it is.
All these concepts assume that what GTK currently does is more or less
a good thing. However, the best case for GTK's buffering scenario is
onboard shared-memory cards, which don't require (much) bus traffic
... but even allocating several megabytes of system memory on each
repaint is expensive and more or less a no-go.

What nvidia does is very common; on Windows it's the de-facto
standard. Besides, even if they changed their semantics, GTK would be
hardware accelerated, but a lot of time would be spent waiting on the
GPU for the requested piece of VRAM.

> 4) Have a single expose pixmap

> You could allocate a single large expose pixmap (maybe as big as the
> enclosing window?) and reuse that, with clipping, rather than creating
> and destroying a new pixmap each time.
To be honest, I like this approach the most, especially because I've
seen this technique working very well in several other toolkits
(Swing, LwVCL).
Swing (a Java toolkit) even supports backbuffers smaller than the
rendered area: it simply repaints into the backbuffer as often as
needed, with different clipping/locations, until the whole screen has
been filled. This could help make window resizing smooth (in >90% of
cases, painting more should be much faster than allocating a new
pixmap on each resize); after resizing, a new buffer could be created
at window size. A sketch of this follows below.
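Roughly what I mean, translated into GDK calls (just my own sketch;
draw_scene() is a hypothetical renderer that honours an offset, not a
GTK/GDK function):

#include <gtk/gtk.h>

/* Hypothetical renderer: would paint the scene into `target`,
 * translated by (off_x, off_y), one buffer-sized tile per call. */
static void
draw_scene (GdkDrawable *target, gint off_x, gint off_y)
{
  /* real drawing, shifted by (off_x, off_y), would go here */
}

/* Repaint a dirty rectangle through a backbuffer that may be smaller
 * than the exposed area, Swing-style: render and blit tile by tile. */
static void
paint_tiled (GdkWindow *window, GdkPixmap *buffer, GdkGC *gc,
             gint buf_w, gint buf_h, GdkRectangle *dirty)
{
  gint tx, ty;

  for (ty = dirty->y; ty < dirty->y + dirty->height; ty += buf_h)
    for (tx = dirty->x; tx < dirty->x + dirty->width; tx += buf_w)
      {
        gint w = MIN (buf_w, dirty->x + dirty->width  - tx);
        gint h = MIN (buf_h, dirty->y + dirty->height - ty);

        draw_scene (buffer, -tx, -ty);           /* render one tile */
        gdk_draw_drawable (window, gc, buffer,   /* and blit it */
                           0, 0, tx, ty, w, h);
      }
}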
Another advantage Java can take from this design (starting with
Mustang) is that if the buffer is as large as the window, they just
blit the pixmap when the window receives expose events.

> gdk_window_begin_paint_region() actually maintains a stack of pending
> pixmaps, though I've no idea how often the stack feature is used.
> Perhaps you could get away with having a single permanent expose
> pixmap, and dynamically create and destroy sub-pixmaps if the stack
> starts working. If the stack is used infrequently maybe this would
> work OK.
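Just to check my understanding of the call you mention: today every
double-buffered expose goes through a pair like the one below (GTK
normally makes these calls itself around the expose-event emission),
so a persistent buffer would have to hide behind exactly this API:

#include <gtk/gtk.h>

/* gdk_window_begin_paint_region() allocates the temporary backing
 * pixmap; gdk_window_end_paint() blits it to the window and frees it.
 * Nesting such pairs is what builds up the pixmap stack. */
static gboolean
expose_cb (GtkWidget *widget, GdkEventExpose *event, gpointer data)
{
  gdk_window_begin_paint_region (widget->window, event->region);

  /* ... all drawing in here lands on the temporary pixmap ... */

  gdk_window_end_paint (widget->window);  /* blit + free */
  return FALSE;
}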

> This would chew up graphics card memory :-( and main memory for
> software drivers :-( and small-footprint people would hate it. My
> machine has 10 virtual desktops, each is 1920x1200 (so about 10MB), if
> each screen is 50% covered in gtk2 apps, this change will make my X
> server need an extra 50MB of RAM. Ouch!
> Maybe there could be a timeout to free the backing pixmap if it's not
> used for a couple of seconds.
Well, to run that many applications you should also have a decently
powered system, where graphics cards with 64+ MB are quite common ;)

I see a problem here too. For Java it's maybe not that dramatic: if
the runtime itself is consuming 15 MB, another 5 MB for the backbuffer
doesn't hurt that much. But for GTK, and the tons of long-running apps
built with it, the situation is a lot different.

I'm not enough of an expert to know an exact answer to this; I'll do
some research on how Qt and other toolkits that provide double
buffering deal with this issue.
Maybe some kind of ergonomics could do the job, deciding when it's
worth keeping a buffer, for how long, and when to destroy it (would we
need a timer thread for pixmap freeing, *ouch*?). See the sketch
below.
As I said, I simply don't know the answer; maybe some experiments will
show ... I've got the whole summer for coding on this ^^
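One half-baked idea for those experiments, just so you see what I mean
by ergonomics (all names are mine/hypothetical, and since
g_timeout_add() runs on the main loop, no extra timer thread would be
needed after all):

#include <gtk/gtk.h>

/* Hypothetical cache of one backing pixmap, dropped after it has been
 * idle for a while. (Resizing and depth changes ignored for brevity.) */
static GdkPixmap *cached_buffer = NULL;
static guint      idle_timer    = 0;

static gboolean
drop_buffer_cb (gpointer data)
{
  /* idle too long: give the memory back */
  if (cached_buffer != NULL)
    {
      g_object_unref (cached_buffer);
      cached_buffer = NULL;
    }
  idle_timer = 0;
  return FALSE;                       /* one-shot timeout */
}

/* Fetch (or lazily create) the buffer and re-arm the idle timer. */
static GdkPixmap *
get_buffer (GdkWindow *window, gint width, gint height)
{
  if (cached_buffer == NULL)
    cached_buffer = gdk_pixmap_new (window, width, height, -1);

  if (idle_timer != 0)
    g_source_remove (idle_timer);
  idle_timer = g_timeout_add (5000, drop_buffer_cb, NULL);  /* 5 s */

  return cached_buffer;
}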

Regards,
Clemens

I browsed a bit through GTK's source code (only gtk, not gdk so far),
and I have to admit that I've never coded GTK apps before, apart from
some small examples and one java-gnome application.
As far as I understood from the GTK tutorial, some widgets are windows
themselves ... which is a bit confusing for me.
I tried to find the entry point where X tells GTK windows to expose:
does this happen once for the main "window", with GTK repainting the
widgets in hierarchical order, or do the widgets which are windows
themselves get separate expose events delivered?
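From what I could piece together so far (please correct me if this is
wrong): widgets with their own GdkWindow get their own expose events
straight from X, while NO_WINDOW children are painted by the parent
forwarding the event, something like:

#include <gtk/gtk.h>

/* Sketch of my current understanding: a container's expose handler
 * hands the event on to NO_WINDOW children via
 * gtk_container_propagate_expose(); children owning a GdkWindow never
 * show up here because X delivers their exposes separately. */
static void
forward_expose (GtkWidget *child, gpointer data)
{
  GdkEventExpose *event = data;
  gtk_container_propagate_expose (GTK_CONTAINER (child->parent),
                                  child, event);
}

static gboolean
container_expose_cb (GtkWidget *widget, GdkEventExpose *event,
                     gpointer data)
{
  gtk_container_foreach (GTK_CONTAINER (widget), forward_expose, event);
  return FALSE;
}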

How (and where) does the buffering take place? How does GTK deal with
widgets that don't want to be drawn double-buffered?

I know, tons of awkward newbie questions; I guess just pointers to the
source would be enough ;)
