Re: ostree buildsystems, packages and pinning
- From: Will Manley <will williammanley net>
- To: ostree-list gnome org
- Subject: Re: ostree buildsystems, packages and pinning
- Date: Tue, 06 Nov 2018 20:41:37 +0000
On Mon, 5 Nov 2018, at 11:15, Colin Walters wrote:
> On Mon, Oct 29, 2018, at 1:51 PM, Will Manley wrote:
> > We build ostree images on x86 for deployment to an ARMv7 device. So
> > development happens on our developer laptops and it's deployed to the
> > device using ostree - even during development. We're not currently using
> > ostree admin unlock or the like.
> Are you doing this for CI too? If I were in your position I'd probably try
> to support at least doing an x86_64 build too and run through basic sanity
> tests in a VM as well.
Yes, we do this for CI. We used to have an x86_64 build, but we ditched it: we had to deploy and test
on the real device in CI anyway, so failures on x86 generally provided little additional signal, just
extra delay and noise. Maintaining x86 was also extra effort, given that our ARM device has specialized
GPU hardware that we take advantage of.
Our build and deploy process is fast (thanks to ninja and ostree respectively) and our hardware plentiful so
testing in a VM just doesn't provide any advantages.
> > To fix this we took a leaf from modern programming language package
> > managers. We use the lockfile concept as used by rust's cargo package
> > manager (cargo.lock[1]) or nodejs's npm (package-lock.json[2]).
> This makes a lot of sense.
> Your CI job then is a lot like e.g.:
> https://dependabot.com/
I hadn't come across dependabot before, but yes, it's just like this. We use Jenkins to do the update and
git push.
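As a concrete illustration - the file name, fields, and layout below are invented for this example, not our actual format - such a lockfile for debs might pin each package's exact version and the sha256 of the .deb it was resolved to:

```
# packages.lock (hypothetical): one pinned package per line
# name            version               sha256 of the .deb
libc6             2.24-11+deb9u4        <sha256>
busybox           1:1.22.0-19+b3        <sha256>
```

The update job then just re-resolves versions against the upstream repo, rewrites this file, and git pushes the diff for CI to test.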
> Although I think the first time I personally saw the "CI bot updating
> pinned data" pattern was in the context of the Cockpit project, which
> does it for fixed VM images it uses for testing; a recent example is:
> https://github.com/cockpit-project/cockpit/pull/10480
Interesting.
> Another thing that's strongly related to this though is that IMO,
> classic package metadata (dpkg/rpm) needs versioning. And it'd
> probably be nice if e.g. crates.io too had a version number one
> could reference in addition to the git sha1.
> Using that, one could then specify e.g. `apt/yum install --repoversion X.Y.Z`
> and also have reproducibility. The reason I really want this though
> is that for rpm-ostree on the client side, one quickly runs
> into the fact that ostree has a very nice git-like history model with
> clear checksums and versions, and rpm...has no such thing.
> https://github.com/projectatomic/rpm-ostree/issues/415
> Actually, in Fedora today the "pungi" tool does output versioned
> directories: https://kojipkgs.fedoraproject.org/compose/updates/
> But it's not an API today and nothing in the libdnf ecosystem understands
> how to parse it (there's no index other than the autogenerated
> HTML, as far as I know).
Yeah, I can see how versioning the metadata would make a lot of sense for the distros. I don't think it
would interest me though - we don't snapshot the whole set of metadata, only the bits that actually affect
our builds. This means we don't need to update our lockfiles when unrelated changes happen to the upstream
package repos.
> > I would like to extract each deb immediately after downloading into its
> > own ostree
> Yep, rpm-ostree does this, although sadly right now it only happens after
> downloading all of the packages - it's not interleaved yet.
We're using ninja, so the interleaving would come naturally to us, but I'd have to implement the extraction.
Incidentally, ostree integrates quite naturally with ninja: if you're creating a commit, you can use the ref
file as the target of the build rule, like this:
build ostree_repo/refs/heads/some_ref: ostree_rule
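Fleshing that out a little - the rule name, repo path, branch name, and rootfs directory here are all hypothetical - the build file might look like:

```ninja
# Hypothetical sketch of a ninja rule that commits a rootfs directory to
# an ostree repo. The ref file under refs/heads/ is the rule's output, so
# ninja's mtime comparison decides whether the commit needs to re-run.
rule ostree_rule
  command = ostree commit --repo=ostree_repo --branch=some_ref --tree=dir=rootfs

build ostree_repo/refs/heads/some_ref: ostree_rule rootfs.stamp
```

Because `ostree commit` rewrites refs/heads/some_ref on success, ninja sees the target as up to date until one of its inputs changes again.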
> > This would also save more disk space: we'd no longer need to
> > store the debs themselves, but could refer to the contents by ostree
> > ref, e.g. the ref dpkg/data/<sha256> might refer to the deb with that
> > sha256. The lockfile has these SHA256s recorded, so you'd know which
> > ostree refs to use.
> > This is of course a much larger step - you'd still need to handle the
> > metadata, preinst scripts, etc. under control.tar.gz, which might be a
> > little tricky, but multistrap manages it.
> Having a lot of experience with this I can say the benefit and cost
> are exactly that: it's a major leap from what apt/yum etc. do today,
> with some nice benefits, but one also ends up maintaining a
> separate parallel path. Which so far is definitely worth it, I think.
My motivation for taking it further would be to speed up our build process even more. Currently we only need
to do these expensive build steps when we change the lockfiles (due to updates or wanting to install new
packages), so the benefit is unlikely to be great enough to justify the cost for us. Curiosity might still
get the better of me, though :).
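To make the per-deb ostree ref idea quoted above concrete, here is a dry-run sketch: the dpkg/data/<sha256> ref naming comes from the quoted example, while the function name, repo path, and file locations are invented for illustration. It derives the ref for a downloaded deb and prints the import commands rather than running them:

```shell
#!/bin/sh
# Hypothetical dry-run sketch: derive the ostree ref dpkg/data/<sha256>
# for a downloaded .deb and print the import steps, without touching a
# real ostree repo.
set -eu

deb_to_ref() {
    # Name the ref after the sha256 of the .deb itself
    sha256=$(sha256sum "$1" | cut -d' ' -f1)
    echo "dpkg/data/$sha256"
}

# Stand-in file so the sketch is self-contained:
printf 'stand-in deb contents' > /tmp/example.deb
ref=$(deb_to_ref /tmp/example.deb)

echo "would run: dpkg-deb -x /tmp/example.deb \$tmpdir"
echo "would run: ostree commit --repo=ostree_repo --branch=$ref --tree=dir=\$tmpdir"
```

In a real version you'd run those two commands instead of echoing them, and look the sha256 up in the lockfile rather than hashing the file yourself.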
Thanks
Will