Re: Gtk+4.0

Thanks for starting this discussion.

On 21/06/16 15:26, Peter Weber wrote:
> Users will see modern applications from GNOME and a lot of old
> stuff, mainly well-known applications. At the time of writing,
> neither Gimp, Inkscape, Geeqie, Pidgin nor Geany has moved to Gtk3.
> After five years we now see Firefox and LibreOffice (?) on Gtk3,
> and progress on Gimp and Inkscape; Gtk3 was released in 2011.

I am neither a GTK user nor a frequent GTK app developer, but as far as
I can tell, a large part of this has been because non-GNOME projects
have been reluctant to move to GTK 3.x due to its perceived instability.
They're finally moving to GTK 3.x anyway, because it has features they
want, whatever those features are (potentially different for each
project, but Wayland is one that I've heard cited).

> From the developers' side, we will be forced to choose between two
> nasty options: an already outdated stable API/ABI, or a shiny new
> API/ABI which will break fast. A very short stable release cycle:
> an API/ABI break every two years.

Which is it - very short, or so long that it's already outdated? :-)

A new stable-branch every 2 years is certainly not set in stone, just
like there's no reason there *has* to be a new stable release of Debian
approximately every 2 years. It's a trade-off between two factors. If
stable releases are too frequent, you're right that third-party
developers will either spend a disproportionate amount of time catching
up with the latest version, or go looking for something else. If stable
releases are not frequent enough, third-party developers will find that
they can't have the latest features or fixes (including the things that
can't be fixed without breaking API!) other than by using the unstable
branch.

We have exactly the same opposing pressures in Debian, and we've chosen
to compromise on a cycle of around 2 years, but the compromise that's
right for us isn't necessarily exactly the right one for GNOME. As a
data point, we get criticized for releasing too slowly, and we also get
criticized for releasing too fast - I tend to assume this means we must
be doing something right :-)

Ideally, we'd choose the trade-off such that projects that want to stick
to a stable-branch version are happy with its stability, while also not
feeling that they are missing out on too much new stuff by doing so. A
year is probably too fast? 5 years is probably too slow? I don't know
what the right middle ground between those two is, but 2 years sounds
like a good first guess.

> Nobody has named a reason why it is really necessary to break the
> API/ABI. Wayland? Already done (great job!). A lot of new features?
> Already done and ongoing (great job!).

I'm not one of the people doing the work, so I can't make an informed
comment on what future plans would require an API/ABI break. However, if
the people writing the code say they would benefit from API/ABI breaks,
it seems sensible to believe them.

For two prominent examples: Wayland is in the past, but it did need an
API/ABI break - Gtk 2 didn't have it, Gtk 3 does. CSS-based theming
didn't remove any C symbols, but it did change the meaning of existing
code, which wasn't treated as an ABI break but arguably should have been.
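
As a concrete sketch of the first example (mine, not taken from the
GTK documentation), a GTK 3 application can opt into the Wayland
backend with a call that simply has no GTK 2 counterpart:

    #include <gtk/gtk.h>

    int
    main (int argc, char **argv)
    {
      /* Prefer the Wayland backend, falling back to X11.
       * gdk_set_allowed_backends() only exists in GDK 3 (since 3.10);
       * GTK 2 has nothing equivalent, which is part of why Wayland
       * support needed the 2->3 break. Must be called before
       * gtk_init(). */
      gdk_set_allowed_backends ("wayland,x11");

      gtk_init (&argc, &argv);

      GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      g_signal_connect (window, "destroy",
                        G_CALLBACK (gtk_main_quit), NULL);
      gtk_widget_show_all (window);
      gtk_main ();
      return 0;
    }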

> Microsoft (stability) and Qt (stability & portability) have been
> investing a lot in this, for decades.

Sure, and I think the key words there might be: investing a lot.
Changing the things you do want to change, without breaking the things
you don't want to change, becomes prohibitively expensive after a while.
I shudder to think how much time and money Microsoft must have spent on
maintaining their "bug-for-bug compatibility" policy in the Windows 95
era. (It's perhaps interesting to note that even Microsoft have backed
away from taking backwards-compatibility to such extremes - even with
their massive resources, it was holding them back.)

Of course, there's a very easy way to make a library
backwards-compatible forever: stop changing it, and do all new
development under a new name (so that library users can choose either).
That's unsatisfactory because its bugs will never get fixed. If we
refine that by continuing to make low-risk/high-reward fixes to the
version with the old name, while focusing new development on the new
name, that's exactly a parallel-installable stable-branch: at the moment
the old name is GTK 2, and the new name is GTK 3.
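
To make the parallel-installation point concrete (an illustrative
sketch, assuming the usual pkg-config module names gtk+-2.0 and
gtk+-3.0), the same application source can target whichever branch it
is built against, because the two branches install different headers
and different SONAMEs:

    #include <gtk/gtk.h>

    /* Build with the flags from `pkg-config gtk+-2.0` or
     * `pkg-config gtk+-3.0`; both runtime libraries can coexist
     * on one system. */
    static GtkWidget *
    make_vertical_box (void)
    {
    #if GTK_CHECK_VERSION (3, 0, 0)
      /* the GTK 3 spelling */
      return gtk_box_new (GTK_ORIENTATION_VERTICAL, 6);
    #else
      /* the GTK 2 spelling */
      return gtk_vbox_new (FALSE, 6);
    #endif
    }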

> 1. Tiring, but with the most impact: keep Gtk3 as stable as
> possible while carefully adding new things

That has been the intention for GTK 3 since 2011, and before that, for
GTK 2 since 2002; but it seems as though every 6 months, we see
complaints that GTK has broken something. Maybe those complaints were
all caused by application developers relying on things that specifically
should not be relied on, but I think it's more likely that in at least
some of those cases, we're running into situations where a semantic
change genuinely did break applications. If we assume that to be true,
then either GTK reviewers need to be more strict about the changes they
will accept, or this approach isn't working and we need a new one.

(Even if application developers *are* relying on things they
specifically shouldn't have relied on, like CSS classes in theming prior
to 3.20, an end-user doesn't necessarily really care - they see a broken
application either way.)
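
To illustrate the sort of thing that broke at 3.20 (my sketch, not
code from any particular application): themes and CSS overrides used
to match widgets by type name, such as "GtkButton", whereas 3.20
switched to CSS element names, so rules written against the old names
silently stop applying.

    #include <gtk/gtk.h>

    static void
    apply_css_override (void)
    {
      GtkCssProvider *provider = gtk_css_provider_new ();

      /* "GtkButton { padding: 4px; }" matched widgets before
       * GTK 3.20; from 3.20 onwards the same rule has to use the
       * element name instead: */
      gtk_css_provider_load_from_data (provider,
                                       "button { padding: 4px; }",
                                       -1, NULL);

      gtk_style_context_add_provider_for_screen (
          gdk_screen_get_default (),
          GTK_STYLE_PROVIDER (provider),
          GTK_STYLE_PROVIDER_PRIORITY_APPLICATION);

      g_object_unref (provider);
    }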

Like many open source projects, the limiting factors for GTK development
include getting proposed changes reviewed, and having contributors make
the changes to a contribution that the reviewers have requested. If we
expect reviewers to be even more careful, and correspondingly expect
contributors to spend more time restructuring their changes to keep more
strictly compatible, that will just make it more expensive (measuring in
time, money or both) to add a new feature, fix a bug, or change a flawed
design.

For some context here, I used to work on Telepathy, where we tried very
hard to keep a stable D-Bus API and C API/ABI. We put so much effort
into it (both by not breaking the APIs we had, and by not adding new
APIs until we were sure we could keep them) that after a while our
contributors were getting put off by the amount of time that they and
the reviewers had to put into landing their contributions. For
commercial contributors, this translated directly into new features
costing more than they were comfortable with spending; for volunteer
contributors, even if the contributor has a lot of free time, a feature
taking what seems a disproportionate amount of time and effort is very
demotivating.

Even with all that, Telepathy *still* had subtle semantic breaks, like
the points at which our protocol implementations dropped support for
legacy APIs, causing older versions of clients like Empathy to lose
previously-working features.

> 2. Add experimental features through external libraries (libsexy
> and so on)

A series of tiny libraries is not a great way to build a coherent
platform, and each of those libraries needs to manage its API, ABI and
stability too. We've been here with libgnomewhatever, libsexy, libegg,
libunique and so on.

(There are also technical considerations here: widgets in GTK proper can
make use of internal interfaces that third-party widgets can't, and
linking a large number of tiny libraries has a measurable startup cost
for applications.)

> 3. Add experimental features behind compiler MACROS, like

Either applications don't use them, in which case they're pointless, or
applications do use them, in which case they're part of the ABI and we
should do a proper ABI transition (with SONAMEs, parallel-installable
runtime libraries, old applications continuing to use the old library,
and all the rest of it). It's not clear that this really helps us.
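
For concreteness, a guard along these lines (GTK_ENABLE_EXPERIMENTAL_FOO
and gtk_foo_new are invented names, purely for illustration) only hides
the declaration; the symbol still has to be exported by the shipped
library, so as soon as any application defines the macro and calls the
function, it is part of the ABI whether we like it or not:

    /* In a hypothetical installed header: */
    #ifdef GTK_ENABLE_EXPERIMENTAL_FOO
    /* invented experimental entry point, for illustration only */
    GtkWidget *gtk_foo_new (void);
    #endif

    /* In an application that opts in, before including the header: */
    #define GTK_ENABLE_EXPERIMENTAL_FOO
    #include <gtk/gtk.h>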

Simon McVittie
Collabora Ltd.
