Re: Automated benchmarks of BuildStream



On Thu, 2018-03-22 at 09:22 +0000, Dominic Brown wrote:
On 27/02/18 18:15, Sam Thursfield wrote:
[...]

Hey,

To add to this, I have been working on converting the current JSON output
from the BuildStream benchmarks into something more human-readable.
Currently the JSON files look something like this:

{
     "start_timestamp": 1520986910.445044,
     "tests": [
         {
             "name": "Startup time",
             "results": [
                 {
                     "measurements": [
                         {
                             "max-rss-kb": 21496,
                             "total-time": 0.27
                         },
                         {
                             "max-rss-kb": 21596,
                             "total-time": 0.31
                         },
                         {
                             "max-rss-kb": 21584,
                             "total-time": 0.28
                         }
                     ],
                     "repeats": 3,
                     "version": "master"
                 },
                 ...
             ]
         }
     ]
}


This structure is repeated for each of the different BuildStream versions.
I have condensed it down to something that looks like this in CSV format:

Name,Version,Total Time,Max RSS
Startup time,master,0.2866666666666667,21558.666666666668
Startup time,1.1.0,0.19666666666666666,21368
Startup time,1.0.1,0.45,31501.333333333332

Attached: the results.csv

If you prefer, you can open the .csv in spreadsheet software
(e.g. LibreOffice Calc) and it should display as a nice table.

The results.csv currently shows, for each BuildStream version, the average
of the three repeated measurements of each test.
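
For anyone curious, the conversion is straightforward; here is a rough
sketch assuming the JSON structure shown above (the function names here are
my own, not anything from the benchmark tool):

```python
import csv
import json


def average_results(data):
    """Collapse each test's repeated measurements into one
    (name, version, mean total time, mean max RSS) row per version."""
    rows = []
    for test in data["tests"]:
        for result in test["results"]:
            ms = result["measurements"]
            rows.append((
                test["name"],
                result["version"],
                sum(m["total-time"] for m in ms) / len(ms),
                sum(m["max-rss-kb"] for m in ms) / len(ms),
            ))
    return rows


def write_csv(json_path, csv_path):
    """Read the benchmark JSON and write the condensed CSV."""
    with open(json_path) as f:
        rows = average_results(json.load(f))
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "Version", "Total Time", "Max RSS"])
        writer.writerows(rows)
```

Averaging over the "measurements" list is what produces the single
Total Time / Max RSS figure per version in the attached file.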

If anyone has any thoughts or changes they think should be made, please let
me know by replying to this email or pinging me on #buildstream; my IRC
nick is "dominic".

Hi,

  I am personally not so much concerned with the CSV format as with the
general approaches we've discussed multiple times (which, I have to say, I
fear are falling between the cracks a little; but that may only be because
I have not seen many progress reports).

That said, if people want to view this as CSV, your sample output could
*really* use some column alignment. Looking through a CSV file and trying
to spot an abnormality is something I've done for other benchmarks in the
past; it's not so uncommon, and it is near impossible without column
alignment.
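
For what it's worth, padding each cell to its column's widest entry is
enough to make the file scannable in a terminal; a quick sketch of such a
helper (again, my own code, not part of the benchmark suite):

```python
import csv
import io


def align_columns(csv_text):
    """Pad every cell to its column's widest entry so the CSV reads
    as an aligned table in a pager or terminal."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    return "\n".join(
        ", ".join(cell.ljust(w) for cell, w in zip(row, widths))
        for row in rows
    )
```

This keeps the output valid CSV (the extra padding is just trailing
whitespace inside each field), so spreadsheet software can still open it.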

Cheers,
    -Tristan


