Re: ostree use cases and value



On Mon, Nov 8, 2021 at 10:35 AM Will Manley <ostree williammanley net> wrote:
Here's some random thoughts, may or may not be useful to you.

Thanks for the reply!  It took a lot of simmering in my head before a semi-fully-baked idea popped out. The original message below was me struggling to understand the distinction between an "OS" and an "application", both in the 'ostree' vs 'flatpak' sense and in the context of an embedded device where there are few built-in distinctions. At that point I had a woefully lacking understanding of flatpak.

Putting this at the top so anybody who has thoughts can see and potentially chime in. One thing I'm mulling over and would welcome thoughts on: bringing both 'flatpak' and 'ostree' into our base vs. just using 'ostree' and building our own flatpak-like structure for applications and runtimes. (We currently use systemd-nspawn for containerization, so flatpak's 'bubblewrap' isn't really needed or useful to us; we don't have a desktop use case or the unprivileged-container requirements that flatpak enables.)

Where I landed is a split between "OS", "runtimes", and "applications". Each A/B partition would have its own ostree repo. That repo would contain a 'base os' as one "ostree-style" commit and each app as its own separate "flatpak-style" branch (conceptually: merging the "OS" ostree repository and the flatpak repository into one repo). For the base os, the installed OS would also serve as the flatpak-style runtime for each application.
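
As a rough sketch of what I mean (the repo path and the os/base and app/fooapp branch names are just placeholders I made up, not an established convention):

    ostree init --repo=/mnt/a/ostree/repo --mode=bare

    # the base os goes in as one "ostree-style" commit...
    ostree commit --repo=/mnt/a/ostree/repo --branch=os/base \
        --subject="base os" /path/to/base-rootfs

    # ...and each application as its own "flatpak-style" branch in the same repo
    ostree commit --repo=/mnt/a/ostree/repo --branch=app/fooapp \
        --subject="fooapp" /path/to/fooapp-tree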

Next, the data partition would have another ostree repo for either "new" applications or "updates" to base-os applications. Our application runner would basically search each location and run the first version it finds (subject to policy via cryptographic signature checks and licensing). We'd also have the option of using the base os as the runtime here, or installing specific 'runtime' versions where needed.
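
Something like this is what I have in mind for the lookup, with the repo paths and the app/$NAME branch convention as assumptions, and the signature/licensing checks elided:

    #!/bin/sh
    # hypothetical app-runner lookup: prefer the data-partition repo (new apps /
    # updates), fall back to the base-os repo on the active partition
    APP="$1"
    for repo in /data/ostree/repo /sysroot/ostree/repo; do
        if ostree refs --repo="$repo" | grep -qx "app/$APP"; then
            echo "would run app/$APP from $repo"
            # e.g. ostree checkout --repo="$repo" "app/$APP" "/run/apps/$APP"
            exit 0
        fi
    done
    echo "app/$APP not found" >&2
    exit 1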

On Thu, 4 Nov 2021, at 18:45, Michael Brown wrote:
I am a designer for an embedded device and am investigating ostree. I have some questions that I have not been able to find answers for.

My device currently has an A/B partition update scheme (plus a third data partition), and it's mandatory that the currently active partition be read-only at the flash/hardware level. I'm trying to understand what value, if any, we could get from moving that partition to ostree, with a deployment created when we create the rootfs. If we do not make this partition writable (i.e. staying with squashfs), I'm not quite seeing obvious value in moving over to ostree. Is there something I'm missing?

Next, if we move to f2fs-on-loopback (we need compression and can't change the underlying ext filesystem at this point), we still can't write to the system while it is running.

One option would be to have both A and B partitions have ostree installed. Then (when booted into A) you'd update the B partition using `chroot ostree admin update`.  Whether that would buy you anything on top of what you've already got depends on your current pain points. Ostree may be faster than writing a whole partition, or maybe re-using ostree's delta compression could be useful for you.
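
Roughly, and with the mount point and partition label as placeholders (ostree admin subcommands also accept --sysroot, which may let you skip the chroot entirely), that could look like:

    mount /dev/disk/by-partlabel/rootfs-b /mnt/b   # placeholder partition label
    # assumes the B partition already contains an ostree sysroot with a configured origin
    ostree admin upgrade --sysroot=/mnt/b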

That's an interesting idea. One thing we've struggled with in the past is firmware upgrades requiring the $currentversion of the firmware to have detailed knowledge of how the "alternate" partition layout works. Sufficiently creative changes to the firmware layout often break the current update code, and I'd like firmware updates that rely as little as possible on one running image having knowledge of the other. (For example, if a customer replaces the entire alternate boot partition with homegrown firmware, things get tricky: we have no root of trust to safely run the unknown alternate-partition code from a 'trusted' image.)
 
  - On development systems (which we have an existing process to cryptographically unlock), we'd like to be able to let developers install updated packages/files

We use ostree for this on our own (embedded) systems. Our build system spits out an ostree commit that is then deployed to our devices. I've patched ostree so that updating an existing deployment is O(changes) rather than O(files in rootfs). This makes the develop-on-PC, deploy-to-device loop much faster, which is helpful for this use case. See https://github.com/ostreedev/ostree/pull/1408 .

This is relatively quick, but still requires a reboot. I'd also planned to add updates in place for dev, but haven't got round to that yet.

Reboots are fine. Right now many developers do things the slowest possible way: rebuild the entire firmware image, flash the entire firmware image, reboot (a process that can easily take an hour). The smarter developers ssh over and bind-mount new binaries.

To me, what seemed to make sense for solving these would be something like:
  - the best solution would be a way to have split repositories?

IIUC this is how flatpak works: the ostree repo for your rootfs and the repo for your flatpak applications are completely separate.

When I was asking this question I was asking from a flawed perspective.

I now believe that this whole "split repository" idea is unnecessary. I also believe that flatpak and ostree should be able to use the same underlying repository (at least for us; if needed we can hack in whatever we need).
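
One way to convince yourself of that is that plain ostree tooling can read a flatpak-populated repo directly; the repo path and refs below are illustrative:

    ostree refs --repo=/data/ostree/repo
    #   os/base
    #   app/org.example.FooApp/arm/stable
    #   runtime/org.example.Platform/arm/stable
    ostree log --repo=/data/ostree/repo os/base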
 
    - Is there a way to use symlinks for deployments?

I think deployments make sense for the rootfs, but I'm not sure that they could be applied to the applications. It would be fairly easy to implement yourself, though: check out the new application under /apps/appname/SHA/ and afterwards update a /apps/appname/latest symlink to point at the version you want to use. This is the approach GoboLinux takes, and maybe Nix does too? I'm not sure.
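
A minimal sketch of that, assuming a /apps layout and an app/fooapp ref (both made up for illustration):

    REPO=/data/ostree/repo                             # placeholder repo location
    SHA=$(ostree rev-parse --repo="$REPO" app/fooapp)
    ostree checkout --repo="$REPO" "$SHA" "/apps/fooapp/$SHA"
    ln -sfn "/apps/fooapp/$SHA" /apps/fooapp/latest    # switch the 'latest' pointer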

The deployment symlink is fine. I was actually asking about "symlinks" for the binaries themselves vs the current "hardlink" strategy. I've abandoned that path.
 
    - is there a way to use overlayfs? Base os as the lowest layer, a directory on the data partition as the upper/work layers. Then deployments could be on the overlayfs (see the sketch after this list).
      - solves the use case for a locked-down system by only allowing base-os daemons to run from the deployment on the read-only rootfs.
      - plugins can run from the overlayfs
      - if enabled, and there is an installed update for base-os daemons, those can be run from the overlayfs.
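
Expressed as a plain overlayfs mount, with every path below a placeholder for wherever the base-os checkout and the data partition actually live, the layering I have in mind is something like:

    mkdir -p /data/overlay/upper /data/overlay/work /run/merged
    mount -t overlay overlay \
        -o lowerdir=/sysroot/base-os,upperdir=/data/overlay/upper,workdir=/data/overlay/work \
        /run/merged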

`ostree admin unlock` sets up your rootfs with overlayfs.  It could be useful for dev use-cases, but probably not in production.  It's not something I've used myself as I always deploy new dev versions as ostree commits.
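
For reference, the dev-time usage is just the following; the --hotfix variant persists across reboots but is still not meant for production:

    ostree admin unlock            # transient writable overlayfs over /usr, gone after reboot
    ostree admin unlock --hotfix   # keeps the overlay across reboots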

This is awesome to know. I'll file it away. May come in useful.
 
Is any of the above viable or are there better solutions?

Next, I was investigating flatpak, which looks like it's built on top of ostree. I'm thinking that (conceptually) we can take the base os on our device and have it set up as basically just systemd and dbus, and run all daemons out of separate application deployments. Is this straightforward to do with base ostree or does flatpak bring value here that we should be looking at?
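
To make that concrete, here's a hypothetical sketch of launching one daemon out of its own application deployment on top of that minimal base; the repo path, ref, checkout location, and daemon binary are all placeholders, and signature checks are omitted:

    APP=fooapp
    mkdir -p /run/apps
    ostree checkout --repo=/data/ostree/repo "app/$APP" "/run/apps/$APP"
    systemd-nspawn --directory="/run/apps/$APP" --machine="$APP" /usr/sbin/foo-daemon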

I think much of what you're trying to achieve here is outside the scope of ostree. Ostree could be useful as a part of what you're trying to achieve; you could use it in your system, like flatpak does, as a mechanism for efficiently storing/transferring container images. Ostree itself won't help much with understanding which applications to launch in what circumstance.

Regarding the idea of a minimal base system of systemd and dbus with applications built on top: I'd recommend looking at CoreOS. It's the same idea, but with docker containers.  They also use minimal A/B partitions for the base system with everything else installable as docker containers.

This is basically the direction I'm headed. Took a while to understand the split, though.

Thanks!
Michael

