Re: Adding synchronization to the WM spec
- From: Owen Taylor <otaylor redhat com>
- To: James Jones <jajones nvidia com>
- Cc: "wm-spec-list gnome org" <wm-spec-list gnome org>
- Subject: Re: Adding synchronization to the WM spec
- Date: Mon, 07 Nov 2011 12:46:25 -0500
On Tue, 2011-11-01 at 22:43 -0700, James Jones wrote:
> I'm trying to make time to read through your proposals/code in more
> detail, but my record in the "making time for things" area is pretty
> abysmal, so some brief initial comments on the un-implemented fence-sync
> portion of the spec below:
Thanks for the response here - it's very useful even at a high level.
[...]
> > * I'm not really sure how the fence synchronization is supposed
> > to work for the case of a direct rendering GL client. Is the combination
> > of glXSwapBuffers() and XSyncTriggerFence() sufficient?
>
> Good point. This is messy specification-wise, but in practice, I think
> this is indeed sufficient. Our implementation always sends damage
> events for GL rendering only after the rendering that generates the
> damage has been completed on the GPU, and from what I understand, open
> source implementations order their accesses to the surfaces implicitly
> and/or do the swap operation in the GLX server anyway, so the
> XSyncTriggerFence() isn't strictly necessary, but it isn't particularly
> harmful if you want to leave it to keep the design clean. The composite
> manager will wait for both the fence sync and the damage event I assume,
> so even though the fence trigger doesn't execute in the right GPU
> thread, it will work out right.
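To make the ordering concrete, the client-side sequence we're talking
about here would look roughly like the following - just an untested
sketch, with dpy, drawable and fence assumed to be created elsewhere,
and swap_and_trigger() a made-up name:

    #include <X11/Xlib.h>
    #include <X11/extensions/sync.h>
    #include <GL/glx.h>

    static void
    swap_and_trigger (Display *dpy, GLXDrawable drawable, XSyncFence fence)
    {
        glXSwapBuffers (dpy, drawable);   /* queue the swap on the GL side */
        XSyncTriggerFence (dpy, fence);   /* then trigger the X fence */
        XFlush (dpy);
    }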
Hmm, does the wait for the GPU occur at the time that buffers are
swapped, or are damage events emitted at some later point? If the damage
events are emitted asynchronously later, then the window manager would see:
Counter Update to Odd Value
Counter Update to Even Value
Damage event
Things should still display correctly, but the way I've written things
the compositor is supposed to fall back to doing its own
XSyncTriggerFence() in that case, which would spoil the point of the
exercise from an efficiency point of view. And frame completion client
messages wouldn't work right.
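The fallback I have in mind is roughly the following (untested sketch;
the names are made up - own_fence would be a fence the compositor
created itself with XSyncCreateFence(), and damage_seen says whether the
damage event arrived before the counter went even):

    #include <X11/Xlib.h>
    #include <X11/extensions/sync.h>

    static void
    handle_counter_even (Display *dpy, Bool damage_seen,
                         XSyncFence client_fence, XSyncFence own_fence)
    {
        if (damage_seen)
        {
            /* Normal path: have the server hold our subsequent
             * requests until the fence the client triggered has
             * completed */
            XSyncAwaitFence (dpy, &client_fence, 1);
        }
        else
        {
            /* Damage hasn't shown up yet, so we can't rely on the
             * client's fence; reset and trigger our own fence and
             * wait on that instead */
            XSyncResetFence (dpy, own_fence);
            XSyncTriggerFence (dpy, own_fence);
            XSyncAwaitFence (dpy, &own_fence, 1);
        }
    }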
> At some point I would like to add a second GL extension that allows
> triggering the X Sync Fence objects from GL so this could be done
> properly, but it's low priority given the above. I omitted it from the
> initial spec because it's much harder to implement efficiently.
I guess the question is the migration path - how do you go from a world
where a GL app can do anything it wants to damage the front buffer, to a
world where apps can optionally pass fence information to the compositor
and get useful performance improvements from that.
> One problem I see with your spec: When are the fence sync objects
> reset? In the current form, it will only run properly for "L"
> iterations. I've found this non-trivial to solve efficiently in my code
> (but very doable). The solution you'll need is slightly different,
> since the consumer and producer of fence sync triggers are separate
> applications.
You are certainly correct this is something that needs to be discussed
in the specification.
If I read the sync spec correctly, one easy answer is that the client
just does:
XSyncAwaitFence()
XSyncResetFence()
XSyncTriggerFence()
<update counter>
right in a row when it finishes a frame. When the window manager sees
the updated counter via an AlarmNotify event, the ResetFence is
guaranteed to have been handled by the X server, since ResetFence is
done "immediately". The client can potentially omit the AwaitFence if
it has already gotten a _NET_WM_SYNC_DRAWN for the previous usage of the
fence back from the window manager. I don't have enough of an idea about
the implementation of fences to know whether that's a worthwhile
optimization or not - what is the expected cost of waiting for an
already-triggered fence?
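In code, the per-frame sequence above would be roughly (again an
untested sketch; counter and fence are created by the client up front,
and frame_serial is just a stand-in for whatever value the counter is
being updated to):

    #include <X11/Xlib.h>
    #include <X11/extensions/sync.h>

    static void
    finish_frame (Display *dpy, XSyncCounter counter,
                  XSyncFence fence, int frame_serial)
    {
        XSyncValue value;

        /* Have the server hold our later requests until the previous
         * trigger of this fence has completed, so the reset can't
         * race against it */
        XSyncAwaitFence (dpy, &fence, 1);
        XSyncResetFence (dpy, fence);    /* reset is done "immediately" */
        XSyncTriggerFence (dpy, fence);  /* trigger for this frame */

        XSyncIntToValue (&value, frame_serial);
        XSyncSetCounter (dpy, counter, value);
        XFlush (dpy);
    }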
- Owen