Re: performance problems double buffering to 1600x1200 window
- From: Robert Gibbs <gibbsrc gmail com>
- To: jcupitt gmail com
- Cc: gtk-list gnome org
- Subject: Re: performance problems double buffering to 1600x1200 window
- Date: Wed, 3 Feb 2010 20:24:11 -0500
No. Doing my own double buffering gives me a 5 to 10x speedup when I have an NVIDIA card (any type) and am using the NVIDIA closed-source driver. I am using the wall-clock timer from my original example program (not GTimer), so my numbers are not scaled the same as yours.
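For reference, here is a minimal sketch of the kind of self-managed double buffering I mean, assuming GTK+ 2 and a GtkDrawingArea; the window size and the trivial drawing are placeholders, not my actual test program:

#include <gtk/gtk.h>

static GdkPixmap *backbuffer = NULL;   /* long-lived off-screen buffer */

static gboolean on_configure(GtkWidget *widget, GdkEventConfigure *event,
                             gpointer data)
{
    /* (Re)create the back buffer whenever the drawing area is resized. */
    if (backbuffer)
        g_object_unref(backbuffer);
    backbuffer = gdk_pixmap_new(gtk_widget_get_window(widget),
                                widget->allocation.width,
                                widget->allocation.height,
                                -1 /* same depth as the window */);

    /* Stand-in for the real rendering: just fill the buffer. */
    gdk_draw_rectangle(backbuffer, widget->style->black_gc, TRUE,
                       0, 0,
                       widget->allocation.width,
                       widget->allocation.height);
    return TRUE;
}

static gboolean on_expose(GtkWidget *widget, GdkEventExpose *event,
                          gpointer data)
{
    if (!backbuffer)
        return FALSE;

    /* Copy only the exposed region from the back buffer to the window. */
    gdk_draw_drawable(gtk_widget_get_window(widget),
                      widget->style->fg_gc[GTK_STATE_NORMAL],
                      backbuffer,
                      event->area.x, event->area.y,
                      event->area.x, event->area.y,
                      event->area.width, event->area.height);
    return FALSE;
}

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);

    GtkWidget *win  = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *area = gtk_drawing_area_new();
    gtk_window_set_default_size(GTK_WINDOW(win), 1600, 1200);
    gtk_container_add(GTK_CONTAINER(win), area);

    /* Turn off GTK's per-expose pixmap so we are not double buffering twice. */
    gtk_widget_set_double_buffered(area, FALSE);

    g_signal_connect(area, "configure-event", G_CALLBACK(on_configure), NULL);
    g_signal_connect(area, "expose-event",    G_CALLBACK(on_expose), NULL);
    g_signal_connect(win,  "destroy",         G_CALLBACK(gtk_main_quit), NULL);

    gtk_widget_show_all(win);
    gtk_main();
    return 0;
}

The key point is that the GdkPixmap is created once per resize and reused for every frame, instead of letting GTK allocate and free a fresh pixmap on each expose.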
One computer I tried was a two-year-old Dell Precision M90 laptop running Fedora 11; I saw about 40 FPS.
After running the test, I noticed that it was using the open-source nv driver. I installed the NVIDIA driver and retested: I saw 400 FPS. I made no other changes except to go to run level 3 and back to install the driver.
Another computer was a five-year-old IBM ThinkPad T41 laptop with ATI video, running Ubuntu 9.10 with the open-source driver. I saw 10 FPS (unacceptable). I did not try the proprietary ATI drivers, as I have had problems with them in the past.
Ideally, I would like to update at a minimum of 20-30 FPS (wall-clock time), even on old hardware like the IBM laptop above. I am sure I could do this with OpenGL, but rather than try that right now, I am going to experiment some more by making Xlib calls directly, along the lines of the sketch below.
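Here is roughly what I have in mind for the Xlib experiment, as a sketch only: the window size, frame count and the trivial drawing are placeholders, and the wall-clock timing here uses gettimeofday as a stand-in for the timer in my original example program:

#include <X11/Xlib.h>
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     0, 0, 1600, 1200, 0,
                                     BlackPixel(dpy, scr),
                                     BlackPixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask);
    XMapWindow(dpy, win);

    /* Wait for the window to actually appear before timing. */
    XEvent ev;
    do {
        XNextEvent(dpy, &ev);
    } while (ev.type != Expose);

    /* One long-lived back buffer, reused for every frame. */
    Pixmap back = XCreatePixmap(dpy, win, 1600, 1200,
                                DefaultDepth(dpy, scr));
    GC gc = XCreateGC(dpy, back, 0, NULL);

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);

    const int frames = 500;
    for (int i = 0; i < frames; i++) {
        /* Stand-in for the real drawing code. */
        XSetForeground(dpy, gc, (i & 1) ? WhitePixel(dpy, scr)
                                        : BlackPixel(dpy, scr));
        XFillRectangle(dpy, back, gc, 0, 0, 1600, 1200);

        /* Present the finished frame by copying it to the window. */
        XCopyArea(dpy, back, win, gc, 0, 0, 1600, 1200, 0, 0);
        XSync(dpy, False);    /* wait so we measure real throughput */
    }

    gettimeofday(&t1, NULL);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.1f FPS (wall clock)\n", frames / secs);

    XFreePixmap(dpy, back);
    XFreeGC(dpy, gc);
    XCloseDisplay(dpy);
    return 0;
}

The idea is the same as the GDK version above: one long-lived Pixmap, one XCopyArea per frame, and an XSync so the numbers reflect what the server actually did rather than how fast requests can be queued.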
Bob
On Wed, Feb 3, 2010 at 6:17 AM, <jcupitt gmail com> wrote:
On 2 February 2010 03:14, Robert Gibbs <gibbsrc gmail com> wrote:
> Conclusion: self implemented GdkPixmap helps only when the X server can use
> certain optimizations from the graphics card.
Or perhaps that's backwards: doing your own double-buffering can help
if your X server does not support fast pixmap allocate/deallocate. It
looks like this problem only happens with some older NVIDIA cards (is
that right?)
John
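P.S. If it would help to pin down John's conjecture, a micro-benchmark along these lines (GTK+ 2; the 1600x1200 size and iteration count are arbitrary) should show whether pixmap allocate/free on a given server/driver is fast or slow:

#include <gtk/gtk.h>
#include <stdio.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);

    /* A realized toplevel gives us a server-side drawable to allocate against. */
    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_widget_realize(win);

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);

    const int iters = 1000;
    for (int i = 0; i < iters; i++) {
        /* Allocate and free a full-window-sized pixmap, roughly what GTK's
           automatic double buffering does on every expose. */
        GdkPixmap *pm = gdk_pixmap_new(gtk_widget_get_window(win),
                                       1600, 1200, -1);
        g_object_unref(pm);
    }
    gdk_flush();   /* wait for the X server to process the requests */

    gettimeofday(&t1, NULL);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d pixmap create/destroy cycles in %.3f s (%.0f per second)\n",
           iters, secs, iters / secs);
    return 0;
}

On a server/driver combination where this runs slowly, GTK's per-expose pixmap would presumably be paying that cost on every frame.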