Re: Updating the OS data that ostree doesn't manage

On Thu, Feb 13, 2020, at 3:24 AM, Daniel Drake wrote:
This is typically
done by some kind of image builder or installation app, which will
handle those aspects immediately before or after deploying the ostree.
I'll refer to this as "installation time" here.

For Fedora CoreOS we pair Ignition <https://github.com/coreos/ignition> and OSTree together.  Ignition gives 
us a declarative way to do "firstboot provisioning" that works uniformly across bare metal *and* cloud 
environments (AWS/GCP/OpenStack/etc.).

And most interestingly, we are working on
https://github.com/coreos/fedora-coreos-tracker/issues/94
which supports switching the root filesystem on firstboot.
It's basically (in the initrd):

- move the ostree into RAM
- provision / with RAID/LUKS/whatever
- copy ostree back
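
A very rough shell sketch of those steps (purely illustrative, not the actual FCOS initrd code; device and
path names are made up) would be something like:

    # stash the deployed ostree in RAM
    mkdir -p /run/ostree-stash
    mount -t tmpfs tmpfs /run/ostree-stash
    cp -a /sysroot/ostree /run/ostree-stash/
    umount /sysroot
    # provision the root block device however you like (RAID/LUKS/mkfs/...),
    # then mount the result back on /sysroot, e.g.:
    mount /dev/disk/by-label/root /sysroot
    # copy the ostree back and clean up
    cp -a /run/ostree-stash/ostree /sysroot/
    umount /run/ostree-stash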

Now...


It's easy to handle these actions at installation time,
straightforward enough that you can do the whole thing fairly
comfortably in shell script. But what happens if you want to change
any of these aspects later? For example, tweak the swap partition,
adjust the filesystem flags, update the bootloader, etc. 

For Fedora CoreOS there isn't a story for any of that; the idea is that you reprovision. This is easier in cloud
environments, of course.

That said, even though OSTree makes in-place updates easier, I do think it's really a best practice for
everyone to keep backups of their OS configuration and data and to make sure they *can* fully reprovision
when they need to.
Related: https://diogomonica.com/2017/09/01/two-metrics-that-matter-for-host-security/

One nice thing about OSTree is that you can tar up /etc and /var, know you have (almost) everything, and
filter out what you don't need from there.
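
For example, something along these lines (the target path and exclusion list are just illustrative):

    # back up the mutable state to somewhere off the machine;
    # exactly what to exclude is site-specific
    tar --xattrs --one-file-system \
        --exclude='var/cache/*' --exclude='var/tmp/*' --exclude='var/log/journal/*' \
        -czf /mnt/backup/os-state.tar.gz -C / etc var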

While ostree
lets us update the main filesystem content on existing installations
with ease, other solutions are needed for post-install updates to
these other bits.

I checked a fresh install of Silverblue and actually it seems to be
impressively "pure" in that it doesn't do too much in addition to
ostree deploy other than the completely essential stuff, but there are
a handful of changes made like writing /etc/default/grub, locale,
timezone.

That's related to Anaconda.  For Fedora CoreOS all of that comes from Ignition.

(Also I am working on rebasing Silverblue on FCOS, see https://github.com/cgwalters/fedora-silverblue-config 
but that's a whole huge topic)

It's an implementation detail but in this case I am experimenting with
Ansible playbooks as the extra configuration format. Ansible makes it
nice to express changes along the lines of "make this specific change
if it hasn't already been made"; if you use Ansible well then your
playbooks have the nice property of being valid both for first-time
setup at installation time and also for upgrading those details on
existing installations later.

And hopefully you're using `-c local`?
Yeah, though one problem with Ansible to keep in mind is that if you use e.g.
https://docs.ansible.com/ansible/latest/modules/lineinfile_module.html
(or really any of the modules) to change something, and then later change the playbook *not* to do that,
the change stays in place unless you explicitly revert it in your playbook, so you get "state drift".
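
As a made-up example of both points (the file and value here are arbitrary):

    # run locally as root, no SSH involved
    ansible localhost -c local -b -m lineinfile \
      -a "path=/etc/default/grub regexp='^GRUB_TIMEOUT=' line='GRUB_TIMEOUT=2'"
    # deleting this step later does NOT restore the old GRUB_TIMEOUT on machines
    # that already ran it; you need an explicit reverting task, otherwise the
    # stale value lingers ("state drift")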

This also gets into https://github.com/coreos/rpm-ostree/issues/702


In the case of something like Silverblue where it seems like all users
fundamentally run the same single software product, the extra
configuration data could be shipped in the ostree itself.

Hmm, no?  Silverblue and other ostree-using projects from Fedora like FCOS come unconfigured and have to do
that configuration at install/firstboot time.

(Of course, if one is doing custom builds then one can embed configuration)

In the Endless case though, with multiple products built from a single
ostree, we plan to
ship the product-specific extra configuration data separately, but in
a way that each bundle of such data is considered to be specific to
and fully compatible with a certain ostree commit (I'm avoiding all
difficulties around trying to support backwards and forwards
compatibility along the dimension of extra configuration data vs
ostree commit). And the tool used to apply the configuration data
(i.e. Ansible) will be shipped in the ostree.

Hmm, interesting.
 
The OS upgrade process would then look something like:
 1. Pull latest ostree ref
 2. Pull corresponding configuration data ref
 3. Make a new ostree commit that combines the configuration data and
ostree into a single tree
 4. Deploy that new tree
 5. chroot into the new deployment and run the configuration manager
(i.e. Ansible) to apply all the extra configuration
 6. Mark the new deployment as active for next boot
 7. Reboot
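
A rough mapping of those steps onto ostree commands (all ref names and paths here are hypothetical) could
look like:

    ostree pull --repo=/ostree/repo example-remote os/stable                   # 1. base OS ref
    ostree pull --repo=/ostree/repo example-remote config/product-foo/stable   # 2. matching config ref
    ostree commit --repo=/ostree/repo -b deploy/product-foo \
        --tree=ref=os/stable \
        --tree=ref=config/product-foo/stable                                   # 3. combined tree
    ostree admin deploy deploy/product-foo                                     # 4. + 6. default for next boot
    # 5. (in practice you'd bind-mount /proc, /dev, etc. into the deployment first)
    chroot /ostree/deploy/<os>/deploy/<checksum>.0 \
        ansible-playbook -c local /usr/share/product-config/site.yml
    systemctl reboot                                                           # 7.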

I think libostree should have better first-class support for some of this; basically as I noted in the above 
rpm-ostree issue, one should be able to control the behavior of the resulting /etc.

Compared to the current "pull, deploy, reboot" path, the added
complexity here is a little worrying, but thinking more I presume it's
actually rather similar to the case of Silverblue updates where the
user has layered RPMs on top, which is presumably something like:
 1. Pull latest ostree ref
 2. Make a new ostree commit that combines the user's layered RPMs and
the base ostree into a single tree
 3. Deploy that new tree
 4. chroot into the new deployment and run the rpm post install scripts
 5. Mark the new deployment as active for next boot
 6. Reboot
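
From the user's side that whole flow is presumably hidden behind a couple of commands, e.g.:

    rpm-ostree install some-package   # layer an RPM on top of the base tree
    rpm-ostree upgrade                # pull a new base and re-apply the same layered packages
    rpm-ostree status                 # inspect the pending deployment
    systemctl reboot                  # boot into it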

So one thing to bear in mind about rpm-ostree is that it almost entirely reimplements librpm - using ostree 
and bubblewrap.  For example, rather than raw `chroot()` rpm-ostree uses bubblewrap to run scripts in the new 
root.

I am still proud of https://github.com/coreos/rpm-ostree/pull/888 =)
Also related https://github.com/coreos/rpm-ostree/pull/1099

Anyways, you are close but not quite: the %post scripts are run *before* committing, not after.

Most notably, rpm %post scripts don't have access to the real /var and /etc.
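
Roughly speaking (this is only an illustration of the idea, not rpm-ostree's actual bwrap invocation), each
script ends up running against the pending rootfs with throwaway /var and /tmp:

    bwrap --bind /path/to/pending-rootfs / \
          --proc /proc --dev /dev \
          --tmpfs /var --tmpfs /tmp \
          --unshare-pid --unshare-net \
          /bin/sh -c '/path/to/pkg-post-script.sh'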


I need to digest and think about your Ansible/ostree proposal a bit more to comment intelligently but 
hopefully some of the above references are useful in the interim.

And thanks a lot for posting this; I love seeing this type of discussion here!

