Re: Sandboxed Gnome apps



On Wed, 2014-09-10 at 09:11 +0200, Alexander Larsson wrote:
On tis, 2014-09-09 at 15:06 +0200, Bastien Nocera wrote:
On Thu, 2014-09-04 at 19:05 +0200, Alexander Larsson wrote:
I guess by now most people have seen Lennart's latest post on how to
build Linux systems:


http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html

<snip>
So, what does Gnome need to provide here?

1. A platform definition
<snip>
2. A reference implementation
<snip>
3. A SDK and other tooling
<snip>
For 1), 2) and 3), and at the risk of causing a flamewar, couldn't we
rely on packages from a distribution to bundle those?

I could imagine rather trivially creating images from Fedora for
example, so that the GNOME project doesn't have to take care of tracking
NSS, glibc or Mesa (as you mentioned later).

Whichever platform we end up building from, we probably don't want to do
the building of those parts ourselves, given all the security,
trackability and reproducibility work that we would need to implement to
take care of this.

That would be one alternative for the platform. It kind of leaves us at
the mercy of an external project though. Not to mention that it may be
alienating to other distros.

There are different parts of this, there is the base things like glibc
and NSS that you mention above, and there is the "gnome" stuff like
glib, gtk+, etc. I *definitely* think we should do our own build of the
gnome stuff, as that is a core part of what we contribute to with our
runtime specification. However, the base part could easily come from
another source. In my current experiments I'm using OpenEmbedded for a
lot of the base, as this is what gnome-continuous is using, and I'm
reusing that.

4. IPC stability guarantees

   In theory we can put any kind of library in the runtime, as we rely
   on the global version of the runtime itself to keep applications
   running if the API or ABI changes. (Although we should of course try
   to minimize such breaks.) However, any kind of IPC mechanism we
   include in the runtime must meet very strict backwards compatibility
   requirements.

   Historically we have not done very well here. For instance, the GVfs
   dbus interfaces have generally been tightly coupled with the client
   libraries, such that you have to update both in lockstep. With an
   "app" based setup we have to take backwards compatibility much more  
   seriously, across the board. 

   This is not a code delivery, but rather a change in mentality that
   we as a project have to accept.

I don't understand the problem with gvfs. gvfs would be at the same
level as GIO or glib, and the clients would only see the public GIO API,
nothing outside it.

The runtime would bundle the gio library and the gvfs gio module, but
not the gvfs daemons. Rather, the gvfs daemons would be running in the
host. We want an older version of an app+runtime to work on a newer
host, so the older gvfs gio module must be able to talk to a newer
version of the gvfs daemons. Historically this has not been the case,
we've been doing incompatible breaks in the internal gvfs protocols
because we assumed that gvfs modules and gvfs daemons are always updated
in lock-step, but we can no longer assume this.
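To make the lock-step problem concrete, here is a minimal sketch of the kind of client-side version guard that would let an older gio module keep working against newer daemons. All names here are invented for illustration (real gvfs is C code talking D-Bus, and would read something like a protocol-version property over the bus); this is just the shape of the compatibility contract.

```python
# Hypothetical sketch: an old runtime's client guarding on a negotiated
# protocol version instead of assuming daemon and module update together.

class DaemonStub:
    """Stands in for a gvfs daemon running on the host; a real client
    would query a version property over D-Bus."""

    def __init__(self, protocol_version: int) -> None:
        self.protocol_version = protocol_version

    def mount(self, path: str, flags: int = 0) -> str:
        # The v1 entry point must stay stable: newer daemons may add
        # arguments with defaults, but never remove the old call shape.
        return f"mounted {path} (flags={flags})"


def client_mount(daemon: DaemonStub, path: str) -> str:
    """An older gio module only uses features up to the protocol
    version it was built against."""
    if daemon.protocol_version >= 2:
        return daemon.mount(path, flags=1)  # newer, optional call shape
    return daemon.mount(path)               # v1 baseline, kept forever
```

The point is that the daemon side carries the compatibility burden: every protocol revision the host daemons ship must keep serving the call shapes that older runtimes were built against.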

Or you could backport any ABI breaks to older versions of the library,
so all of your runtimes would be updated. When Apple or Google ship a
new version of their mobile OSes with support for older versions, I
doubt that the older versions are actually the original builds; rather,
they are libraries with the same behaviour made to run alongside the
newer frameworks.

That's assuming that the ABI changes don't change behaviour obviously.

<snip>
Hopefully I will have some initial cut of a runtime based on
gnome-continuous some time next week. This will give us a runtime, but
also a lot of stuff that can be used to make the SDK.

This will give us a runtime that's not easily reproducible or trackable.

Using gnome-continuous in an incremental fashion will give us a constant
source of testing runtimes for the next version. However, the final
released runtime needs to be done from a manifest that specifies exact
versions of every module and does a build from scratch. That should be
reasonably reproducible, or do you object to that too?

That's doable as long as:
- we keep copies of the git trees used to create the various runtimes
- we keep branches for each runtime version
- git trees don't use non-fast-forward merges

That also means tracking stable branches for each of the items in the
runtime, then picking fixes for each of them.
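For illustration only, a release manifest along the lines described above might pin exact revisions per module. The format, runtime name, URLs and commit ids below are all made up; the real thing would be whatever the build tooling ends up consuming:

```json
{
  "runtime": "org.gnome.Platform",
  "version": "3.16",
  "modules": [
    {
      "name": "glib",
      "git": "git://git.gnome.org/glib",
      "branch": "glib-2-44",
      "commit": "0000000000000000000000000000000000000000"
    },
    {
      "name": "gtk+",
      "git": "git://git.gnome.org/gtk+",
      "branch": "gtk-3-16",
      "commit": "0000000000000000000000000000000000000000"
    }
  ]
}
```

Rebuilding from such a manifest, together with archived copies of the pinned trees, is what would make a given runtime version reproducible by anyone, including distros.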

There are also lots of technical issues. For instance, how do we ship
Mesa and DRI drivers, which have dependencies on the kernel part of the
driver? How do we support e.g. the NVIDIA drivers in this? Do we need
some kind of overlay for the runtime image for local hardware drivers?

That's also something I mentioned to Lennart and David Airlie. I don't
think there are that many interdependencies between the kernel and Mesa
when it comes to normal usage, but the runtime providers updating Mesa
will be a requirement to having new hardware supported.

What are the exact dependencies here? Does a newer kernel work with
older drivers in general, or do things have to be very tightly synced?

There are "drivers" on both sides, in Mesa and in the kernel. Supporting
newer hardware will need new versions of both. Supporting older hardware
should work if the kernel is newer but Mesa isn't, or if Mesa was
updated, as long as you don't go older than the baseline for supporting
that hardware in both.

Well, this was a lot of text from my side with nothing practically
useful yet. I think it's good to get people started thinking about these
things though, and I'll keep people updated as I make progress actually
building something.

My advice is to base our reference image and SDK on an established
distro, as we do for the test ISO images where we scrub the Fedora
branding out. I don't particularly mind if it's Fedora, Debian or
something else; those who do the work will get to choose :)

While I think this would let us avoid a bunch of work, I think it may be
a very hard sell politically. Also, depending on what we ship in the
runtime, I think you're overestimating the amount of work that we have
to do.

I don't think I'm overestimating it one bit. You're more than welcome
to experiment with it, but I wouldn't want you to become a build monkey,
replicating work done in distros when there are plenty of other things
that could be worked on.

I also don't fancy somebody unqualified starting to cherry-pick glibc
or NSS "fixes". But below you say that you expect distros to ship
security updates. Would we create a runtime that won't be updated for
each GNOME version, security holes and all?

I think we should be reasonably minimal with regard to what is in there
that isn't produced by GNOME, so maintaining the build may not be such a
large burden. Of course, these are just feelings; I think we have to
start working on alternatives to see how it works out in practice.

I think one large question here is how we see the runtime being
delivered. Do we expect that all users download the runtime from
gnome.org, or do we want distros to ship a runtime? If it is the latter,
then each distro must be able to rebuild a compatible version of the
runtime so that they can do security updates and things like dri
hardware support updates.

My expectations would be that distros ship compatible versions. They
will have to, otherwise you end up with the same kind of license
compliance problems that are affecting a lot of hardware makers ("I got
that blob from somewhere, don't know where the sources are"). In the
best case, they'll use the same build system we are.

Bear in mind that we expect runtimes for the same framework and version
to be bug compatible. That means self-hosting.

I'll stop being the fearmonger and let you get on with it, and I hope
it's as straightforward as you think it is.

Cheers


