Re: Multi DPI user interface

On Tue, Jul 19, 2016 at 05:15:05PM +0200, Bastien Nocera wrote:
On Tue, 2016-07-19 at 23:03 +0800, Jonas Ådahl wrote:

Over at mutter we've been working towards supporting proper multi DPI
setups when running GNOME using Wayland. Proper multi DPI means
supporting multiple monitors where two or more of them have
significantly different DPI, while applications still show at the
correct size on all monitors at all times.

Apart from making mutter, gnome-shell and gtk+ draw things correctly,
supporting proper multi DPI has some implications on things touching
more than just Wayland backends here and there; more specifically
screen shooting and screen casting.

Until recently, gnome-shell has always drawn the content of all the
monitors into one large framebuffer. This framebuffer was then used as a
source for screen casting and screenshots; monitors with different
scales were still simply regions of this framebuffer where windows were
enlarged. Soon mutter/gnome-shell will be able to draw each monitor into
separate framebuffers; in the future, these framebuffers may have
different scales, i.e. there will be no way to create an exact
representation of what is currently displayed, making it less obvious
how to create a screenshot or screencast frame.

To illustrate, if we have two monitors, one (A) with the resolution
800x600 and the other (B) with 1600x1200, where the second one is
physically small enough to make it HiDPI with output scale 2, today we
get the following configuration:

    |          |                     |
    |    A     |                     |
    |          |                     |
    +----------+         B           |
               |                     |
    |          |                     |
               |                     |
    -  -  -  - +---------------------+

A large framebuffer with two regions representing the two monitors.
Windows would be rendered twice as large when positioned on B than on A.
When dragging a window from A to B it'd "flip" in size and suddenly
become large when mostly within B.
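The single-framebuffer model can be sketched roughly like this (a hypothetical illustration, not mutter's actual code): each monitor occupies a region sized by its pixel resolution, so B's 1600x1200 region sits next to A's 800x600 one even though both cover the same logical area.

```python
def layout_single_framebuffer(monitors):
    """Place monitors left to right in one shared pixel coordinate space.

    monitors: list of (name, (width_px, height_px, scale)) tuples.
    """
    regions = {}
    x = 0
    for name, (width_px, height_px, scale) in monitors:
        regions[name] = {
            "x_px": x,
            "size_px": (width_px, height_px),
            "logical_width": width_px // scale,
        }
        x += width_px
    return regions

regions = layout_single_framebuffer([("A", (800, 600, 1)),
                                     ("B", (1600, 1200, 2))])
# A covers pixels [0, 800); B covers [800, 2400) -- a window drawn at a
# fixed pixel size appears half as large on B unless scaled up on crossing.
```

Both monitors represent 800 logical units of width, yet in this shared pixel space B is twice as wide, which is exactly what causes the "flip" when dragging windows across.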

A proper representation of this setup should be:

    |          |          |
    |    A     |    B     |
    |          |          |

Two regions of a coordinate space, but with B having a much higher
density. A window would be drawn larger on B's framebuffer, but would at
the same time be drawn at the correct size on A's framebuffer, should it
be displayed there.
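The logical layout in the second illustration could look like this (again a hypothetical sketch): window positions are tracked in logical units while each monitor keeps its own framebuffer at its native pixel size.

```python
def layout_logical(monitors):
    """Each monitor gets its own framebuffer; positions use logical units.

    monitors: list of (name, (width_px, height_px, scale)) tuples.
    """
    layout = {}
    x = 0
    for name, (width_px, height_px, scale) in monitors:
        logical = (width_px // scale, height_px // scale)
        layout[name] = {
            "logical_x": x,
            "logical_size": logical,
            "framebuffer_size": (width_px, height_px),
            "scale": scale,
        }
        x += logical[0]
    return layout

layout = layout_logical([("A", (800, 600, 1)), ("B", (1600, 1200, 2))])
# A and B are both 800x600 logically, so a window keeps its apparent size
# when moved between them; B's framebuffer is simply twice as dense.
```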

With this comes the question: how do we provide a user interface for
screen shooting and screen casting? As I see it, there are two
alternatives:

1) Scale up every monitor to the largest scale and draw onto a large
shared framebuffer.

2) Represent each monitor separately, generating one file for each
monitor.

Both have good and bad sides. For example, 1) doesn't need any change to
the user interface, while 2) more correctly represents what is
displayed. For screencasts, 2) would mean we need two video encoding
streams, but it would make it easier to do reasonable post production.
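To make alternative 1) concrete, here is a back-of-the-envelope sketch (function name and layout assumed, not any real API): every monitor is upscaled to the largest scale before being stitched side by side into one image.

```python
def stitched_size(monitors):
    """Alternative 1: upscale everything to the largest scale, one image.

    monitors: list of (name, (width_px, height_px, scale)) tuples.
    """
    max_scale = max(scale for _, (_, _, scale) in monitors)
    widths = [(w // s) * max_scale for _, (w, _, s) in monitors]
    heights = [(h // s) * max_scale for _, (_, h, s) in monitors]
    return sum(widths), max(heights)

# For A (800x600 @1x) and B (1600x1200 @2x), everything is scaled to 2x:
# A's content is doubled to 1600x1200, giving a 3200x1200 image overall.
size = stitched_size([("A", (800, 600, 1)), ("B", (1600, 1200, 2))])
```

The cost is visible in the arithmetic: A's pixels get upscaled 2x, so its half of the screenshot is interpolated rather than pixel-exact.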

Any opinions on how we should deal with this? What user interface do we
want?

In terms of user interface, I'm fairly certain we want screen "B" to
behave as if it was an 800x600 screen, as that's what it's acting as.

The user interface I was referring to was screenshooting/screencasting,
sorry if that was unclear. But yes, the general HiDPI UI should be as in
the second illustration.

For screenshots and screencasts, you have 2 options, either you double
up everything so that "x2" is normal scale, and you have a slightly
fuzzy screen A, or you "lose data" by shrinking everything as well. I
would go for "more data". In any case, as in the original screen case,
you'd need the screens to line up.

Well, 3 options, with the other one being to have "A" and "B" content
placed in individual files, i.e. Screenshot - 2016-07-19 - 12:34 -
LVDS1.png and Screenshot - 2016-07-19 - 12:34 - HDMI1.png, each one with
its correct resolution.
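The per-output naming scheme could be generated along these lines (a sketch; the connector names and timestamp format are simply taken from the example above):

```python
from datetime import datetime

def screenshot_filenames(connectors, when):
    """Alternative 2: one file per output, named after its connector."""
    stamp = when.strftime("%Y-%m-%d - %H:%M")
    return ["Screenshot - {} - {}.png".format(stamp, c) for c in connectors]

names = screenshot_filenames(["LVDS1", "HDMI1"],
                             datetime(2016, 7, 19, 12, 34))
# -> ["Screenshot - 2016-07-19 - 12:34 - LVDS1.png",
#     "Screenshot - 2016-07-19 - 12:34 - HDMI1.png"]
```

Each file would then be written at its monitor's native framebuffer resolution, so no output is upscaled or downscaled.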


