Benchmarking BuildStream commands from user point of view

Hi Tristan,

I'm going to propose an alternative and, IMO, more productive course of
action. You obviously take performance very seriously, and so do I, so
let's do it properly.

To this end, I suggest that you do the following:

  o Collect some use cases where performance is critical; by this I mean
    use cases that are visible from the BuildStream CLI frontend.

    E.g. `bst build`, `bst checkout`, etc

    Also consider cases where `bst build` uses import elements vs
    compose elements, etc.

  o Create a benchmarks test suite based on these frontend toplevel
    cases, not based on internal operations.

    I would like to see graphs rendered from collected data points,
    using simulated projects:

      o What is the "per element" build time for a simulated project
        with only import elements.

        What does the graph look like when simulating 1 element vs
        500 elements vs 1000 elements vs 10,000 elements, etc.?

      o How does the above graph compare for an import element which
        stages 1 file, vs 10, 100, 10,000, 100,000 files?

    Further, we should re-run the whole benchmark suite for every
    publicly released BuildStream version over time. How does 1.0
    compare to 1.1, 1.2, etc.? Have we regressed or improved
    performance?

    Let's really know about it.
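As a sketch of what generating such a simulated project could look like, here is
a small Python harness, assuming BuildStream's usual layout (a `project.conf` at
the project root and one `.bst` file per element under `elements/`) and the
built-in `import` element kind with a `local` source. The function names and the
exact YAML details are illustrative assumptions, not an agreed design:

```python
import subprocess
import tempfile
import time
from pathlib import Path


def generate_project(root: Path, n_elements: int, files_per_element: int) -> list:
    """Write a throwaway BuildStream project of import elements under root.

    Returns the element names so a caller can pass them to `bst build`.
    NOTE: the project layout (project.conf, elements/, local sources) is
    an assumption about the standard BuildStream conventions.
    """
    (root / "project.conf").write_text("name: benchmark\n")
    (root / "elements").mkdir()
    names = []
    for i in range(n_elements):
        # Each element imports a local directory holding some plain files.
        src = root / "files" / "element-{}".format(i)
        src.mkdir(parents=True)
        for j in range(files_per_element):
            (src / "file-{}.txt".format(j)).write_text("{}-{}\n".format(i, j))
        name = "element-{}.bst".format(i)
        (root / "elements" / name).write_text(
            "kind: import\n"
            "sources:\n"
            "- kind: local\n"
            "  path: files/element-{}\n".format(i)
        )
        names.append(name)
    return names


def time_build(n_elements: int, files_per_element: int) -> float:
    """One data point: wall-clock seconds for `bst build` on a fresh project."""
    with tempfile.TemporaryDirectory() as tmp:
        root = Path(tmp)
        targets = generate_project(root, n_elements, files_per_element)
        start = time.monotonic()
        subprocess.run(["bst", "build"] + targets, cwd=str(root), check=True)
        return time.monotonic() - start
```

Sweeping `n_elements` over (1, 500, 1000, 10,000) and `files_per_element` over
(1, 10, 100, ...) with `time_build` would produce the (project size, seconds)
points needed to render the graphs described above.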

I like this proposal! It addresses my top concern - the responsiveness of bst
commands from the user's point of view.

YAML vs. JSON is a side-show for me. It only became important when I saw the
performance hit I would introduce for 'manifest.yaml' in issue 82.

Issue 82 is also a side-show for me; it is only a small part of the cost of
staging. By working on it I hoped to gain more of your attention on the
"staging performance" thread :)

I'll make benchmarking use cases from the frontend my next thing.

