Re: Repo scalability issues and solutions

On Tue, 2020-09-15 at 11:43 -0600, Dan Nicholson wrote:
> On Tue, Sep 15, 2020 at 8:42 AM Alexander Larsson via ostree-list wrote:
> >
> > We used the `ostree-metadata` branch for p2p, and I've had nothing
> > but issues with it. The problem is that modifying a repo like that is
> > stateful and global. With a summary file it's just a simple download
> > and atomic mmap, which works well for any kind of parallel access to
> > a repo. But if you have multiple clients running against a repo and
> > they need to update the ostree-metadata ref on-disk, possibly with
> > different versions of the summary (in the p2p multi-peer case), you
> > run into all sorts of issues with atomicity, races, write permissions
> > in the repo, etc, etc.

> I don't see why there are any more issues with updating the
> ostree-metadata ref locally than any other ref or the summary file.
> Why is it different than multiple clients fetching and storing the
> summary file or any other commit to the local repo? Definitely the
> p2p code caused a lot of headaches and probably needs to be redesigned
> (preferably in the ostree project context), but I think you already
> yanked the p2p code from flatpak, right? And if you want, you could
> definitely just fetch the ostree-metadata commit object without writing
> to disk in exactly the same way you can with the summary file. There's
> nothing magical there except that the current pull code doesn't do that.

Well, fundamentally the ostree-metadata commit file is just one file
with a side-loaded signature. As such it could be handled exactly like
the summary file is, except that you'd have to first read the ref file
to find the current checksum for it. However, I thought the reason for
using a branch in the first place was that it allows us to reuse
existing code and features, like checksum+verification, deltas, etc. To
get at those features we have to use the regular way to pull the
branch, but then you run into these issues.

For example, one of the issues we had with ostree-metadata in the
flatpak p2p code is that in the multi-client p2p case each peer can
have a copy of the master ostree-metadata from different times, and
each would (hopefully) have the metadata that matches the subset of
refs it carries. However, we can really only have *one* copy of the
ostree-metadata in the local flatpak repo. So which one do we pull? The
latest? And if we ended up pulling the ref itself from another peer,
the metadata in the ostree-metadata we pulled may not match the commit
we pull. If we're just using summary-style files it's much easier to
just keep an mmapped copy of each summary in the transaction.

But yeah, these issues lead to me dropping multi-peer p2p from flatpak,
and we now just use a subset of ostree p2p (essentially the collection
id) and do offline side-loading and redirecting to a single local repo.

But, even if we ignore p2p there are issues. Take for example the
flatpak system repo. In the summary case each client can download the
summary via libostree, and to speed things up it uses
ostree_repo_set_cache_dir(repo, "~/.cache/flatpak/system-cache") to get
a writable location for the summary cache. This means consecutive
summary reads are fast, and the simplicity of the summary cache means
it's pretty robust in the parallel multi-client case.

However, in the ostree-metadata case we can't use
ostree_repo_set_cache_dir(), because we need a full pull operation with
write rights to the (root owned, system) repo. So, we have to do a
system-helper callout to update the local copy of the ostree-metadata
branch, and then resolve the ref locally. Not only is this very
cumbersome, but it also involves a great many steps (D-Bus callouts,
download summary, deltas, write objects, update ref file) that are non-
atomic and hard to reason about. 

I can imagine, for example, a client reading the ostree-metadata ref
file, then racing with another client that updates ostree-metadata to a
new commit, after which a prune gets in ahead and the old commit file
is gone by the time the first client reads it. Kinda far-fetched, but
it goes to show how vulnerable this complex setup is to various races.

I'm not saying any of this is unsolvable, but I want to make it clear
that "just reuse existing machinery to easily distribute the summary
file info" is not actually easy, or simplifying.

> That said, there are 2 major issues with this approach. A major
> benefit of using a commit for the metadata is to be able to make
> static deltas of the data to cut down the download size. However, the
> commit object (i.e., the commit metadata) is not delta'd, so you'd have
> to move the interesting information into a file object in the commit.
> That's not the end of the world, but if the list of deltas lives in
> the ostree-metadata commit object, then you don't know what delta to
> fetch. You'd have to do something like a commit-metadata-only fetch,
> get the list of deltas out of there and then pull the delta to get
> the commit contents.

Yeah, you have to do it like that. Although that is roughly the same
set of ops you would have to do in a solution not using commit objects
too. The main issue here is really not that you have to do this work,
it's more that a custom solution could be simplified to exactly the
minimum that is required for the "pull initial setup" stage, whereas
reusing the existing pull code for the pull setup doesn't actually make
things simpler.