Re: Updating the OS data that ostree doesn't manage
- From: "Will Manley" <will@williammanley.net>
- To: "Davis Roman via ostree-list" <ostree-list@gnome.org>
- Subject: Re: Updating the OS data that ostree doesn't manage
- Date: Tue, 18 Feb 2020 17:25:39 +0000
Thanks for sharing. It sounds like a sensible design given the constraints you've described. I've made a
few comments below about how we handle our deployment process. Our use-case is a lot simpler than yours as
we have a lot of control over the devices, and the devices contain little state, but in the spirit of sharing
I thought I'd write it up:
On Thu, 13 Feb 2020, at 08:24, Daniel Drake wrote:
> ...
> I imagine other system integrators that use ostree may perform a
> higher degree of file system modifications at installation time
Our use-case is embedded. We want each of our devices to be as similar to each other as possible, to allow
them to be interchangeable. They receive their configuration from the network on boot. Our needs are a lot
more limited than yours: we don't need to be too careful about state, because we can always recreate it
later. A reboot is almost the same as a factory reset in our case.
Regarding managing /etc: We do have device-specific files in /etc (hostname, keys, certificates, machine-id),
but they are always new files, not modifications to files that are in our ostree images. This means that for
managing /etc the default ostree deploy 3-way merge is fine for us.
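As an aside, `ostree admin config-diff` is handy for checking what that merge will carry forward: it lists
the files in /etc that have diverged from the deployment's defaults.

    # Show local modifications to /etc relative to the default configuration
    ostree admin config-diff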
Regarding managing /var: This mostly consists of updating permissions, creating directories, etc. We rely
on systemd-tmpfiles, which runs at boot time, to perform these updates.
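For illustration, the kind of snippet we mean, shipped under /usr/lib/tmpfiles.d/ in the ostree image (the
paths and user here are made up):

    # /usr/lib/tmpfiles.d/example-app.conf (hypothetical)
    # Type  Path                     Mode  User  Group  Age  Argument
    d       /var/lib/example-app     0750  app   app    -    -
    d       /var/cache/example-app   0755  app   app    -    -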
Regarding managing /sysroot/ostree: We have a systemd unit called post-upgrade-cleanup.service which runs
at boot after a deploy. This allows us to perform housekeeping, including running `ostree admin cleanup`.
We implement this with a marker file: /sysroot/.ostree-cleaned. The marker is created by
post-upgrade-cleanup.service after it runs successfully, and its presence stops the unit from running again
until the next deploy, because the unit file includes:
ConditionPathExists=!/sysroot/.ostree-cleaned
We delete /sysroot/.ostree-cleaned as part of our deploy process.
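To make that concrete, the whole unit looks roughly like this (a simplified sketch, not our exact file):

    # /usr/lib/systemd/system/post-upgrade-cleanup.service (simplified sketch)
    [Unit]
    Description=Housekeeping after an ostree deploy
    ConditionPathExists=!/sysroot/.ostree-cleaned

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/ostree admin cleanup
    # Only runs if the cleanup succeeded, so a failed run is retried next boot
    ExecStart=/usr/bin/touch /sysroot/.ostree-cleaned

    [Install]
    WantedBy=multi-user.target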
Regarding managing other filesystems/partitioning: We use loopback mounts for user-data. This makes it
easier to make changes like adjusting the relative sizes of these filesystems without having to perform
online adjustments to /.
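A sketch of what one of these looks like as a systemd mount unit (hypothetical paths; note the unit
filename has to match the Where= path):

    # /usr/lib/systemd/system/var-lib-userdata.mount (hypothetical sketch)
    [Unit]
    Description=User data on a loopback image

    [Mount]
    What=/sysroot/userdata.img
    Where=/var/lib/userdata
    Type=ext4
    Options=loop

    [Install]
    WantedBy=local-fs.target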
We've not yet had to make any changes to our rootfs that require repartitioning or reformatting. I can't see
how that could be done in a safe, atomic way without Android-style A/B partitions.
> ..., as there is a class of changes that make sense as "sensible defaults" but
> probably don't want to be hardcoded in the ostree - e.g. installing
> some default flatpak remotes, preinstalling some flatpaks, setting up
> some site-specific networking config into /etc. Any changes made along
> those lines may need to be updated later. I collectively refer to all
> of these disk changes mentioned so far as "extra configuration".
I think it's worth splitting changes to /var and to /etc conceptually rather than considering them together.
It seems to me that they are quite different in the way they're handled by ostree. In particular there is a
separate /etc per deploy, which makes changes atomic there, while /var is shared, meaning that you have to
take a lot more care making modifications there.
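Concretely, ostree's on-disk layout reflects this split:

    /ostree/deploy/<os>/deploy/<checksum>.0/etc    # a separate /etc per deployment
    /ostree/deploy/<os>/var                        # a single /var shared by all deployments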
> The problem space is even wider for Endless, where we have multiple
> products built around a single ostree; the difference in extra
> configuration is one of the key distinguishing factors between each
> product.
I'd be interested to hear more about why you want to have a single ostree image for all your products.
Wouldn't it be less complex to do the merging of the configuration as the last step in your build process,
rather than on the client?
You mentioned "site-specific networking config" earlier. When you talk about different products, would a
different site-specific configuration constitute a different product by your definition?
> And the inability to update extra configuration after
> installation time has become a growing pain over the years. Through
> maintaining a fairly broad product over the years we've accumulated
> many details to tweak on existing installs, big and small, such as:
> - Adding collection ID to existing ostree/flatpak repos
> - Adding flathub remotes
> - Moving stuff from the core OS into flatpaks, requires the flatpak
> to be auto-installed on OS update to avoid loss of functionality
For things like this we like to follow the systemd convention: vendor configuration under /usr, which is
overridden by system-specific config under /etc. I don't know anything about flatpak, but I imagine you
could have `/usr/lib/flatpak/collections.d` containing a file per collection, which would be
overridden/invalidated by a file under `/etc/flatpak/collections.d`. This way the user can still delete
pre-installed collections, but new collections will show up naturally.
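To illustrate the convention (these flatpak paths are purely hypothetical, as above), the precedence rule
would be the same one systemd uses for units, where a same-named file under /etc takes priority over the
copy under /usr/lib:

    /usr/lib/flatpak/collections.d/flathub.conf    # vendor default, shipped in the ostree image
    /etc/flatpak/collections.d/flathub.conf        # same-named file overrides (or, if empty, disables) it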
> - Tweaking swap setup based on newer learnings
We would manage this with systemd .swap units stored on /usr, so they're applied at boot. As it is we don't
use swap :).
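If we did, a unit along these lines would do it (hypothetical path; again the unit filename must match
What=):

    # /usr/lib/systemd/system/var-swapfile.swap (hypothetical sketch)
    [Unit]
    Description=Swap file

    [Swap]
    What=/var/swapfile

    [Install]
    WantedBy=swap.target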
> - Fixing permissions of stuff in /var
We use systemd-tmpfiles for this, with the tmpfiles.d configuration stored under /usr so that it's included
in the ostree image.
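For permission fixes specifically, tmpfiles.d has types that adjust existing paths rather than create them
(paths hypothetical, matching the earlier example):

    # /usr/lib/tmpfiles.d/fix-perms.conf (hypothetical)
    # z adjusts the mode/ownership of an existing path; Z does so recursively
    z  /var/lib/example-app        0750  app  app  -  -
    Z  /var/lib/example-app/spool  0700  app  app  -  -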
> In the Endless case though, with multiple products built from a single
> ostree, we plan to
> ship the product-specific extra configuration data separately, but in
> a way that each bundle of such data is considered to be specific to
> and fully compatible with a certain ostree commit (I'm avoiding all
> difficulties around trying to support backwards and forwards
> compatibility along the dimension of extra configuration data vs
> ostree commit). And the tool used to apply the configuration data
> (i.e. Ansible) will be shipped in the ostree.
This is the same question as above: it seems like it could be a lot simpler to perform this merge as the
last step in your build process rather than on the client at deploy time.
> The OS upgrade process would then look something like:
> ...
> 5. chroot into the new deployment and run the configuration manager
> (i.e. Ansible) to apply all the extra configuration
I'd be interested in hearing what you see as the advantages and disadvantages of using chroot rather than
rebooting and making the changes at boot time. It seems to me that it's only "safe" to make changes to /etc,
as that forms part of the deploy, and not safe to make changes to /var, as that is part of the currently
running system.
One advantage of chroot I can see is that you can download additional data, like new flatpaks, before
rebooting, and use this data in some way to control the networking configuration for the next boot. This
might not be possible otherwise, if the lack of these changes would cause the device to fail to connect to
the network.
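For the boot-time route, systemd already has a hook designed for "run once after /usr changed":
ConditionNeedsUpdate=. Something along these lines would run your configuration manager once per new
deployment (the playbook path is made up):

    # /usr/lib/systemd/system/apply-extra-config.service (hypothetical sketch)
    [Unit]
    Description=Apply extra configuration after an OS update
    # True when /var/.updated is missing or older than /usr, i.e. after a new deployment
    ConditionNeedsUpdate=/var
    # systemd-update-done.service refreshes the stamp file once units like this have run
    Before=systemd-update-done.service

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/ansible-playbook -c local /usr/share/extra-config/site.yml

    [Install]
    WantedBy=multi-user.target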
> But the above covers the core idea from the ostree standpoint.
> Comments would be very welcome.
Thank you for taking the time to write this up; it was very interesting to me.
Thanks
Will