Re: Multi DPI user interface



On Tue, 2016-07-19 at 23:03 +0800, Jonas Ådahl wrote:
Hi,

Over at mutter we've been working towards supporting proper multi DPI
setups when running GNOME using Wayland. Proper multi DPI means
supporting multiple monitors where two or more monitors have
significantly different DPI, while applications still display
correctly on all monitors at all times.

Apart from making mutter, gnome-shell and gtk+ draw things correctly,
supporting proper multi DPI has implications for things beyond just
the Wayland backends; more specifically screen shooting and screen
casting.

Until now, gnome-shell has always drawn the content of all the
monitors into one large framebuffer. This framebuffer was then used as
the source for screen casting and screenshots; monitors with different
scales were still simply regions of this framebuffer where windows
were enlarged. Soon mutter/gnome-shell will be able to draw each
monitor onto a separate framebuffer; in the future, these framebuffers
may have different scales, i.e. there will be no way to create an
exact representation of what is currently displayed, making it less
obvious how to create a screenshot or screencast frame.

To illustrate: if we have two monitors, one (A) with the resolution
800x600 and the other (B) with 1600x1200, where the second one is
physically small enough to make it HiDPI with output scale 2, today we
get the following configuration:

    +----------+---------------------+
    |          |                     |
    |    A     |                     |
    |          |                     |
    +----------+          B          |
               |                     |
               |                     |
               |                     |
               +---------------------+

A large framebuffer with two regions representing the two monitors.
Windows would be rendered twice as large when positioned on B as on A.
When dragging a window from A to B it'd "flip" in size and suddenly
become large once mostly within B.

A proper representation of this setup should be:

    +----------+----------+
    |          |          |
    |    A     |    B     |
    |          |          |
    +----------+----------+

Two regions of a coordinate space, but with B having a much higher
pixel density. A window would be drawn larger on B's framebuffer, but
at the same time at the correct size on A's framebuffer, were it
displayed there.

With this comes the question: how do we provide a user interface for
screen shooting and screen casting? As I see it there are two options:

1) Scale up every monitor to the largest scale and draw onto a large
framebuffer.

2) Represent each monitor separately, generating one file per monitor.
Both have good and bad sides. For example, 1) doesn't require any
changes to the user interface, while 2) more correctly represents what
is displayed. For screencasts, 2) would mean we need two video
encoding streams, but would also make reasonable post production
easier.

Any opinions on in what way we should deal with this? What user
interface do we want?

In terms of user interface, I'm fairly certain we want screen "B" to
behave as if it were an 800x600 screen, as that's what it's acting as.

For screenshots and screencasts, you have two options: either you
double up everything so that "x2" is the normal scale, and you get a
slightly fuzzy screen A, or you "lose data" by shrinking everything
instead. I would go for "more data". In any case, as in the original
screen case, you'd need the screens to line up.

Cheers
