Re: 3.8 "feature": Drop or Fix Fallback Mode
- From: Adam Jackson <ajax@redhat.com>
- To: desktop-devel-list@gnome.org
- Subject: Re: 3.8 "feature": Drop or Fix Fallback Mode
- Date: Tue, 23 Oct 2012 13:49:49 -0400
On 10/22/12 10:55 AM, Vincent Untz wrote:
> The discussion about features is supposed to heat up next week, but I'll
> actually be offline. So I'd like to start discussion on the fallback
> mode today.
>
> First of all, go read the wiki page:
> https://live.gnome.org/ThreePointSeven/Features/DropOrFixFallbackMode
To add some data: there are a number of scenarios where the shell is
not currently an option but where fallback mode may be happily in use.
Eliminating fallback mode is therefore a tacit statement that either
these problems need to be fixed, or that they are simply not design
criteria.
Many of these have to do with technical limitations of the interaction
with the window system. I'll try to point out how these would be
affected by picking a window system less dire than X11, but in general
these problems don't go away by simply switching away from X. In any
case I'd like a better idea of which of these cases GNOME still cares
about, as that will help determine where we should focus on improving
whichever window system we're running this week.
1) Multiple GPUs. Currently Xorg's GLX implementation doesn't work when
multiple [1] GPUs are active as a single X screen. The GLX drivers from
some binary driver vendors do handle this, in some limited scenarios,
though the Composite extension also has issues in this mode. (Wayland
moves this problem around a bit: the compositor needs to be able to
handle each GPU, and clients need to divine which GPU they should render
on, but otherwise the kernel is simply responsible for getting buffers
from point A to B.)
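For concreteness, here's a minimal xorg.conf sketch of that sort of
configuration (the BusIDs and driver names are made up; substitute
your own):

Section "Device"
    Identifier "gpu0"
    Driver     "intel"
    BusID      "PCI:0:2:0"
EndSection

Section "Device"
    Identifier "gpu1"
    Driver     "radeon"
    BusID      "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "screen0"
    Device     "gpu0"
EndSection

Section "Screen"
    Identifier "screen1"
    Device     "gpu1"
EndSection

Section "ServerLayout"
    Identifier "both-gpus"
    Screen 0 "screen0"
    Screen 1 "screen1" RightOf "screen0"
    # Xinerama merges both screens into a single logical :0.0,
    # which is exactly the mode where GLX stops working
    Option "Xinerama" "on"
EndSection

Turn Xinerama off in that layout and you instead get the separate
:0.n protocol screens of item 2 below.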
2) Multiple Screens, as in the X protocol :0.n screens. This is
interesting in that it could be a short-term workaround for item 1, as
Xorg's GLX _does_ work when each GPU has its own protocol screen.
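If you want to see this for yourself, here's a little C sketch (not
anybody's official tool, just an illustration) that walks the protocol
screens and asks for a direct context on each:

/* screens.c: walk each protocol screen on $DISPLAY and try to create
 * a direct GLX context on it.  Build: cc screens.c -o screens -lX11 -lGL */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int s;

    if (!dpy)
        return 1;

    for (s = 0; s < ScreenCount(dpy); s++) {
        int attribs[] = { GLX_RGBA, None };
        XVisualInfo *vi = glXChooseVisual(dpy, s, attribs);
        GLXContext ctx;

        if (!vi) {
            printf("screen %d: no GLX visual\n", s);
            continue;
        }
        /* True = ask for direct; GLX may still fall back to indirect */
        ctx = glXCreateContext(dpy, vi, NULL, True);
        printf("screen %d: %s\n", s,
               !ctx ? "context creation failed" :
               glXIsDirect(dpy, ctx) ? "direct" : "indirect");
        if (ctx)
            glXDestroyContext(dpy, ctx);
        XFree(vi);
    }
    XCloseDisplay(dpy);
    return 0;
}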
3) Rendering limits on the GPU. Say you bought an Atom with an Intel
GPU instead of a PowerVR GPU. At the moment that gives you Pineview,
which can't draw (in 3D) to anything wider than 2048 pixels. Spanning
dualhead? No 3D for you. There's been some work on this in Xorg but
it's not in xserver 1.13 and may not make 1.14. (Wayland, again, moves
the problem. No longer is there a renderer that promises to work at
arbitrarily large surface sizes, but apps still do need to draw
arbitrarily large things, and the compositor needs to present them.)
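To check whether a given GPU is in this boat, something like the
following sketch will print the driver's limits next to the screen
size (error handling elided for brevity):

/* gllimits.c: report how large a surface this GL will render, next to
 * the size of the X screen.  Build: cc gllimits.c -o gllimits -lX11 -lGL */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr, attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi;
    XSetWindowAttributes swa;
    Window win;
    GLXContext ctx;
    GLint dims[2], tex;

    if (!dpy)
        return 1;
    scr = DefaultScreen(dpy);
    vi = glXChooseVisual(dpy, scr, attribs);
    if (!vi)
        return 1;

    /* a throwaway window, just something to bind the context to */
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, scr),
                                   vi->visual, AllocNone);
    win = XCreateWindow(dpy, RootWindow(dpy, scr), 0, 0, 16, 16, 0,
                        vi->depth, InputOutput, vi->visual,
                        CWColormap, &swa);
    ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    glGetIntegerv(GL_MAX_VIEWPORT_DIMS, dims);
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &tex);
    printf("max viewport %dx%d, max texture %d, screen %dx%d\n",
           dims[0], dims[1], tex,
           DisplayWidth(dpy, scr), DisplayHeight(dpy, scr));
    return 0;
}

On the Pineview case above you'd expect a 2048x2048 max viewport
against a spanned screen wider than that, which is the problem in a
nutshell.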
4) XDMCP. At least on my F18 machine gnome-shell does not work in
indirect GLX contexts. To a first approximation I think that's more
bugs than anything fundamentally broken. However, the remote GLX
protocol as currently defined doesn't give you anything newer than
OpenGL 1.5; it's not clear to me whether cogl/clutter will continue to
work against such an old GL. We could also extend XDMCP to allow the
wm/cm to run on the same machine as the X server, which would allow
mutter to run in a direct GL context; the downside there is it would
require a firmware update to enable this usage model on existing thin
clients. (Wayland remoting is, let's say, an open research problem.)
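For the curious, a sketch of the first-approximation test: force an
indirect context, the way a classic XDMCP session ends up with one,
and see what GL it reports (again just an illustration, error handling
elided):

/* indirect.c: force an indirect GLX context, roughly what an XDMCP
 * session gets, and report what GL it ends up with.
 * Build: cc indirect.c -o indirect -lX11 -lGL */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr, attribs[] = { GLX_RGBA, None };
    XVisualInfo *vi;
    XSetWindowAttributes swa;
    Window win;
    GLXContext ctx;

    if (!dpy)
        return 1;
    scr = DefaultScreen(dpy);
    vi = glXChooseVisual(dpy, scr, attribs);
    if (!vi)
        return 1;

    /* direct = False: ask for a protocol (indirect) context */
    ctx = glXCreateContext(dpy, vi, NULL, False);
    if (!ctx)
        return 1;

    /* a throwaway window to bind the context to */
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, scr),
                                   vi->visual, AllocNone);
    win = XCreateWindow(dpy, RootWindow(dpy, scr), 0, 0, 16, 16, 0,
                        vi->depth, InputOutput, vi->visual,
                        CWColormap, &swa);
    glXMakeCurrent(dpy, win, ctx);

    printf("direct: %s\nGL version: %s\n",
           glXIsDirect(dpy, ctx) ? "yes" : "no",
           (const char *)glGetString(GL_VERSION));
    return 0;
}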
[1] - Peer GPUs; not the Optimus asymmetric case.
- ajax