Re: Initial indicators for a build tool and how to get more detailed ones
- From: Jim MacArthur <jim macarthur codethink co uk>
- To: buildstream-list gnome org
- Subject: Re: Initial indicators for a build tool and how to get more detailed ones
- Date: Mon, 16 Apr 2018 16:35:47 +0100
Hi Agustín.
To try and summarise your email, it looks to me like what you are asking
for is:
* A record of build successes and failures of the trunk over time
* Aggregate statistics of the time needed to build, as suggested -
median and standard deviation (over all builds, or the last 10, or the
last week, for example?)
* Separate timings for builds from scratch and builds which use existing
artifact caches (a rough sketch of computing both figures follows this
list)
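For illustration only, something like this could compute those figures
from a list of recorded build timings; the data layout and field names
are assumptions made up for the example, not anything BuildStream emits
today:

    import statistics

    # Each entry is one recorded build: wall-clock duration in seconds
    # and whether it started from an empty artifact cache (field names
    # invented for this example).
    builds = [
        {"duration": 3541.2, "from_scratch": True},
        {"duration": 412.7, "from_scratch": False},
        {"duration": 398.1, "from_scratch": False},
    ]

    def summarise(durations):
        # Median and sample standard deviation over the chosen window
        # (all builds, the last 10, the last week, ...)
        return {
            "median": statistics.median(durations),
            "stdev": statistics.stdev(durations) if len(durations) > 1 else 0.0,
        }

    scratch = [b["duration"] for b in builds if b["from_scratch"]]
    cached = [b["duration"] for b in builds if not b["from_scratch"]]
    print("From scratch:", summarise(scratch))
    print("With cache:  ", summarise(cached))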
BuildStream doesn't (so far as I know) recommend any particular style of
project management or continuous integration, so these would seem to be
recommendations outside the scope of BuildStream. Where we can do things
to support those activities, though, we should do. Building from
scratch, for example, can be done unofficially at the moment but we
don't have an explicit method for doing it.
With respect to records of trunk builds, ideally, if your source has a
single version (i.e. one repository) then there should be no build
failures on trunk. Branches which fail testing shouldn't be merged at
all. Things get more complicated if you have several repositories, or if
you have known failures in your test set. In the latter case, you'll
need more detail than simple pass/fail since you'll probably want to
record the progress of going from 100 failures to 5. Recording the time
between trunk builds is also possible, but that alone wouldn't seem to
get you any more information than is in the git commit log, assuming you
run tests when branches are pushed, as the current BuildStream CI does.
Perhaps you have a previous scenario in mind where these statistics
would have been useful - if so, can you share any details of it?
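On the git commit log point: a rough sketch like the one below (the
branch name and the reliance on merge commits are assumptions about your
workflow) would already recover the intervals between merges to trunk,
which is essentially the same information as the time between trunk
builds:

    import subprocess

    # Committer timestamps of merge commits on trunk, newest first
    out = subprocess.run(
        ["git", "log", "--merges", "--format=%ct", "master"],
        capture_output=True, text=True, check=True,
    )
    timestamps = [int(line) for line in out.stdout.split()]
    # Seconds elapsed between consecutive merges
    intervals = [newer - older for newer, older in zip(timestamps, timestamps[1:])]
    print("Seconds between merges to trunk:", intervals)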
So far for build time metrics, we've gone down the road of making a tool
which can be given various versions of the source and recreate the test
results and performance data, which is the main mode of operation for
the current 'benchmarks' repository. I would much rather keep historical
testing results, at least until we have strong evidence that
retroactively running benchmarks produces the same results, but this
hasn't been popular so far. Keeping history also requires people to keep
their own database of results, as GitLab's CI (for example) will not
store results indefinitely.
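As a very rough sketch of what such a database could look like (the
schema here is invented for illustration, not anything the benchmarks
repository defines), results could simply be appended to a local SQLite
file that outlives the CI pipeline:

    import sqlite3
    import time

    conn = sqlite3.connect("benchmark-results.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS results (
            commit_sha TEXT,
            test_name  TEXT,
            duration   REAL,
            recorded   INTEGER
        )
    """)

    def record(commit_sha, test_name, duration):
        # Append one measurement; rows are never overwritten, so the
        # history stays queryable after CI has discarded its artifacts.
        conn.execute(
            "INSERT INTO results VALUES (?, ?, ?, ?)",
            (commit_sha, test_name, duration, int(time.time())),
        )
        conn.commit()

    # Example usage with a made-up commit and test name
    record("abc123", "build-from-scratch", 3541.2)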
(**) Please, please, do trunk-based development.
Genuine question: Who doesn't? Many people press this point, but I've
clearly had a very lucky career and only seen teams who aim to merge to
trunk as soon as possible.