Re: paint clock

Havoc Pennington <hp@pobox.com> writes:

> We had terrible luck with anything that involved installing a timeout
> (i.e. polling with nonzero timeout). In part this may be that the
> shipping litl product is based on 2008 or so Linux which I believe has
> 10ms resolution on poll timeouts... so the noise in timeouts is
> already half a frame. This just can't end well. I believe newer
> kernels have high-resolution timeouts though... all I know on this
> subject is from
> http://lwn.net/Articles/296578/

I think the 10 ms comes from 1/HZ on kernels with HZ=100, but that's
2.4 era. The default was changed to HZ=1000 in 2.6.0, I think, which
gives 1 ms resolution. That's still bad (about 1/17th of a 60 Hz
frame), but not _that_ bad.

> I think another issue here was that frames are just not that uniform.
> Some frames have a bunch of crap that happens to happen, like incoming
> IO, some have none; or some frames might have to upload a texture or
> something and then the next 10 frames don't have to do that. It isn't
> very predictable. litl shell is a number of different apps plus a
> compositing/window manager all crammed into a single process so it may
> have had more trouble than most things.
>
> In short we haven't managed to dynamically pick the 5ms number. But
> just setting it to 5ms seems to work pretty well.

The framehandlers.txt document is purely speculative, btw. There is
no actual code implementing it - I'm sure it wouldn't survive contact
with the enemy.

An idea that might be worth considering is a cooperative thread system
where the main loop would hand out time slices of, say, 1 ms.  For
GTK+ there might be portability concerns with that, but presumably
Litl only needs to care about one hardware architecture.
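A rough sketch of the kind of thing I mean, using POSIX ucontext -
which is exactly the portability concern, since it is obsolescent and
not available everywhere. All the names here are made up; the idea is
just that the task hands control back to the main loop whenever its
1 ms slice runs out:

#include <ucontext.h>
#include <time.h>

#define SLICE_NS 1000000L    /* 1 ms */

static ucontext_t main_ctx, task_ctx;
static long long slice_start;
static int task_done;

static long long
now_ns (void)
{
  struct timespec ts;
  clock_gettime (CLOCK_MONOTONIC, &ts);
  return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* The task calls this at convenient points; if the slice has
 * expired, control returns to the main loop until the next slice
 * is handed out. */
static void
maybe_yield (void)
{
  if (now_ns () - slice_start >= SLICE_NS)
    swapcontext (&task_ctx, &main_ctx);
}

static void
task_func (void)
{
  int i;

  for (i = 0; i < 10000000; i++)
    {
      /* ... one small unit of work ... */
      maybe_yield ();
    }

  task_done = 1;
}

int
main (void)
{
  static char stack[64 * 1024];

  getcontext (&task_ctx);
  task_ctx.uc_stack.ss_sp = stack;
  task_ctx.uc_stack.ss_size = sizeof stack;
  task_ctx.uc_link = &main_ctx;    /* return here when task_func ends */
  makecontext (&task_ctx, task_func, 0);

  while (!task_done)
    {
      slice_start = now_ns ();
      swapcontext (&main_ctx, &task_ctx);

      /* A real main loop would paint and process input here,
       * between slices. */
    }

  return 0;
}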

> Another number that we've tried to get clever with but failed is the
> frame timestamp used for tweening. You have this correct in your
> document of course.  The thing that's tempting is to use actual wall
> clock time. But it seems to be true that animations are prettier if
> you just add the frame length to it every time, and if the result gets
> "too far" (for us, 1 frame) behind the wall clock, skip ahead by whole
> frame intervals thus dropping frames. 
>
> I guess the reason is that vsync is always going to show the stuff
> at exact frame intervals, so using any frame timestamp not on those
> intervals is just wrong. Anyway this is one reason why GtkImage
> animations and the "traditional JavaScript technique with
> Date.now()" described in roc's mozRequestAnimationFrame post are
> inherently not smooth looking. The lag between generating the frame
> and getting it on the screen is enough to make wall clock time when
> generating almost irrelevant.

Yeah, an assumption I didn't write down is that you can get feedback
from the underlying graphics system about when the thing you drew
actually got shown to the user. I.e., breadcrumbs in the compositing
manager and the graphics driver. That way you can measure how much
time was spent by each one and take it into account when calculating
ptime.
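Concretely, something along these lines - all names hypothetical, all
times in the same unit (say microseconds), and assuming the driver can
hand back an actual presentation timestamp for each frame:

/* Sketch: predict the presentation time ("ptime") of the frame we
 * are about to paint, from feedback about when earlier frames
 * actually reached the screen. */

#define N_SAMPLES 8

static long long lag_samples[N_SAMPLES];  /* presented - paint_start */
static int n_lags;

/* Called when the compositor/driver reports that the frame we
 * started painting at paint_time was actually shown at
 * present_time. */
void
feedback (long long paint_time, long long present_time)
{
  lag_samples[n_lags % N_SAMPLES] = present_time - paint_time;
  n_lags++;
}

/* ptime = now + measured pipeline lag, rounded up to the next
 * vblank.  "vblank" is the timestamp of the most recent vblank,
 * so vblank <= now. */
long long
predict_ptime (long long now, long long vblank, long long frame_interval)
{
  long long lag = 0, target, k;
  int i, n;

  n = n_lags < N_SAMPLES ? n_lags : N_SAMPLES;
  for (i = 0; i < n; i++)
    lag += lag_samples[i];
  lag = n ? lag / n : frame_interval;  /* guess one frame if no data */

  /* Round up to the first vblank at or after now + lag. */
  target = now + lag;
  k = (target - vblank + frame_interval - 1) / frame_interval;
  return vblank + k * frame_interval;
}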

You definitely can't ignore the lag from drawing to getting it on the
screen. Some people make the assumption that the graphics card is
infinitely fast, but that's completely wrong, especially for
integrated chips that are starved for memory bandwidth.
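For what it's worth, the advance-by-one-frame scheme you describe
boils down to something like this sketch (names made up):

/* Sketch of the tweening clock: advance by exactly one frame
 * interval per frame, and only resync by whole frames when we fall
 * more than one frame behind the wall clock. */
long long
next_tween_time (long long tween_time,     /* previous frame's timestamp */
                 long long frame_interval, /* e.g. 16667 us at 60 Hz */
                 long long wall_time)      /* current wall clock */
{
  long long behind, skip;

  tween_time += frame_interval;

  behind = wall_time - tween_time;
  if (behind > frame_interval)
    {
      /* Too far behind: drop frames by skipping ahead by whole
       * frame intervals, never by a fractional amount. */
      skip = behind / frame_interval;
      tween_time += skip * frame_interval;
    }

  return tween_time;
}

The point being that the tweening timestamp only ever moves in whole
multiples of the frame interval, which is all that vsync can show
anyway.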


Soren

