Re: Doubts about GPeriodic
Hi,

On Fri, Oct 22, 2010 at 5:06 PM, Paul Davis <paul@linuxaudiosystems.com> wrote:
> starting from scratch, and thinking about the parallels with a
> pull-model realtime audio design, it seems to me that if you were
> designing this entirely from scratch you wouldn't serialize painting
> and other source handling. you'd double buffer everything and run the
> paint/expose/vblank cycle in a different thread. whenever the
> non-paint/expose/vblank threads were done with refilling a new buffer,
> it would be pushed (preferably lock free) into a place where it could
> be used by a compositor and blitted to the h/w during the
> paint/expose/vblank iteration.
>
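Right - and just to make the handoff part concrete, I'd picture
something like this rough sketch (the Buffer type and every function
name here are made up for illustration; only the g_atomic_* calls are
real GLib API):

#include <glib.h>

typedef struct {
    guchar *pixels;               /* software-rendered ARGB data */
    int     width, height, stride;
} Buffer;

/* One shared slot: the render thread publishes into it, the
 * compositor drains it once per paint/expose/vblank iteration. */
static volatile gpointer pending_buffer = NULL;

static void
buffer_free (Buffer *buf)
{
  g_free (buf->pixels);
  g_free (buf);
}

/* Atomically swap our value into the slot and return the old one;
 * GLib only has compare-and-exchange, so loop until it sticks. */
static Buffer *
exchange_pending (Buffer *new_value)
{
  gpointer old;
  do
    old = g_atomic_pointer_get (&pending_buffer);
  while (!g_atomic_pointer_compare_and_exchange (&pending_buffer,
                                                 old, new_value));
  return old;
}

/* Render thread: publish a finished frame without taking a lock.
 * If the compositor never consumed the previous frame, reclaim it
 * (a real implementation would recycle it into a pool). */
void
publish_buffer (Buffer *fresh)
{
  Buffer *stale = exchange_pending (fresh);
  if (stale != NULL)
    buffer_free (stale);
}

/* Compositor thread, once per vblank: take whatever is newest and
 * blit it; if nothing new arrived, just reuse the last frame. */
void
composite_tick (void)
{
  Buffer *latest = exchange_pending (NULL);
  if (latest != NULL)
    {
      /* ... hand `latest` to GL / blit it to the hardware ... */
      buffer_free (latest);
    }
}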

Miguel also pointed out to me (unrelated to this thread) that Mac and
Windows both work this way these days, and I'd already noticed that
Chrome, the new Pepper plugin API, Lightspark, and probably other
projects all seem to be moving to a "build a stack of
software-rendered layers, then use GL to squish them together" model.
So building a bunch of layers (ideally in threads) and then
GL-compositing those layers (with optional transformations and
shaders) seems to be the thing to do these days. You can animate the
layers and shaders entirely in the GL thread, without getting tangled
up with the app code.
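The compositing pass itself would then be pretty simple; roughly this
(again just a sketch - plain GL 1.x for brevity, and the Layer struct
is invented):

#include <GL/gl.h>

typedef struct {
    GLuint texture;     /* the layer's software-rendered contents */
    float  x, y, w, h;  /* placement in window coordinates */
    float  alpha;       /* animatable entirely in the GL thread */
} Layer;

/* Draw the layer stack back-to-front as textured quads; transforms
 * and shaders would slot in here without the app thread noticing. */
void
composite_layers (const Layer *layers, int n_layers)
{
  int i;

  glEnable (GL_TEXTURE_2D);
  glEnable (GL_BLEND);
  glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

  for (i = 0; i < n_layers; i++)
    {
      const Layer *l = &layers[i];

      glBindTexture (GL_TEXTURE_2D, l->texture);
      glColor4f (1.0f, 1.0f, 1.0f, l->alpha);

      glBegin (GL_QUADS);
      glTexCoord2f (0, 0); glVertex2f (l->x,        l->y);
      glTexCoord2f (1, 0); glVertex2f (l->x + l->w, l->y);
      glTexCoord2f (1, 1); glVertex2f (l->x + l->w, l->y + l->h);
      glTexCoord2f (0, 1); glVertex2f (l->x,        l->y + l->h);
      glEnd ();
    }
}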

It seems like a big hairy change though. I definitely haven't taken
the time to start working it out.

Havoc

