[BuildStream] Adding benchmarking to buildstream CI



Hi,

I have been asked to put together a proposal for adding benchmarking to
BuildStream CI, and after some discussion I would like to put forward the
following.

The solution will not add anything to the BuildStream pipeline itself, but
will use a "benchmarking bot" to orchestrate benchmarking of requested MR
branches. The request process will require an extra MR label, which a user
will need to set when a particular MR branch needs to be benchmarked.

The benchmarking feedback will be provided through bot-added discussion
points on the relevant MR.

The "benchmarking bot" behavior would be as follows:

- The bot would parse open MRs looking for those which have been labeled
    "benchmarking". It would then cross-correlate these against pipelines
    that have finished and completed successfully.
- The bot would trigger a build request (via curl) to the benchmarking
    GitLab CI to carry out a benchmarking run on the MR branch with the
    oldest "benchmarking" request whose BuildStream pipeline has completed
    successfully (a rough sketch of this trigger step, and of the
    surrounding loop, is given after this list).
- The bot will log a new discussion point in the relevant BuildStream MR
    stating that benchmarking has started (with a link to the benchmarking
    CI page if possible).
- The bot will tag the benchmarking build request with build variables
    denoting:
    - the BuildStream MR
    - the triggering BuildStream pipeline id
    - the bot-added BuildStream discussion point reference
    - the BuildStream branch
    - the BuildStream sha
    - a timestamp of the original BuildStream pipeline start.
- The bot will wait for triggered benchmarking CI runs to complete and
    will annotate the relevant BuildStream MR with a link to the
    benchmarking results CI page.
- The bot will remove the benchmarking label from the relevant BuildStream
    MR to indicate completion and to prevent the MR from being
    automatically processed again.
- The bot will always process requests starting from the oldest pending
    pipeline, and it will use the variables contained within the
    benchmarking pipeline builds as a reference point to determine which
    MR needs testing next.
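
To make the trigger step concrete, below is a minimal sketch of what the
bot's build request could look like, using Python's requests library against
GitLab's standard pipeline trigger API. The project id, trigger token and
variable names (BST_MR_IID and friends) are placeholders invented for the
example, not a proposed convention:

import requests

GITLAB_API = "https://gitlab.com/api/v4"
BENCHMARK_PROJECT_ID = "12345"   # placeholder: benchmarking project id
TRIGGER_TOKEN = "xxxxxxxx"       # placeholder: pipeline trigger token

def trigger_benchmark_run(mr_iid, pipeline_id, discussion_id,
                          branch, sha, pipeline_started_at):
    """Kick off a benchmarking pipeline, tagged with the BuildStream
    details listed in the proposal above."""
    resp = requests.post(
        f"{GITLAB_API}/projects/{BENCHMARK_PROJECT_ID}/trigger/pipeline",
        data={
            "token": TRIGGER_TOKEN,
            "ref": "master",
            "variables[BST_MR_IID]": str(mr_iid),
            "variables[BST_PIPELINE_ID]": str(pipeline_id),
            "variables[BST_DISCUSSION_ID]": str(discussion_id),
            "variables[BST_BRANCH]": branch,
            "variables[BST_SHA]": sha,
            "variables[BST_PIPELINE_STARTED_AT]": pipeline_started_at,
        },
    )
    resp.raise_for_status()
    return resp.json()   # triggered pipeline details (id, status, ...)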
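
And a rough sketch of the surrounding bot loop: find the oldest open MR
labeled "benchmarking" whose pipeline has succeeded, add the "benchmarking
started" discussion point, and clear the label once the run is done. The
endpoints are GitLab's standard REST API; the token, project id and message
wording are again placeholders:

import requests

GITLAB_API = "https://gitlab.com/api/v4"
BST_PROJECT_ID = "67890"                   # placeholder: BuildStream project id
BOT_AUTH = {"PRIVATE-TOKEN": "yyyyyyyy"}   # placeholder: bot user API token

def next_benchmark_candidate():
    """Oldest open MR labeled 'benchmarking' with a successful pipeline."""
    mrs = requests.get(
        f"{GITLAB_API}/projects/{BST_PROJECT_ID}/merge_requests",
        headers=BOT_AUTH,
        params={"state": "opened", "labels": "benchmarking",
                "order_by": "created_at", "sort": "asc"},
    ).json()
    for mr in mrs:
        pipelines = requests.get(
            f"{GITLAB_API}/projects/{BST_PROJECT_ID}/pipelines",
            headers=BOT_AUTH,
            params={"ref": mr["source_branch"], "status": "success"},
        ).json()
        if pipelines:
            return mr, pipelines[0]
    return None, None

def note_benchmark_started(mr_iid, benchmark_url):
    """Log the 'benchmarking has started' discussion point on the MR."""
    resp = requests.post(
        f"{GITLAB_API}/projects/{BST_PROJECT_ID}"
        f"/merge_requests/{mr_iid}/discussions",
        headers=BOT_AUTH,
        data={"body": f"Benchmarking started: {benchmark_url}"},
    )
    resp.raise_for_status()
    return resp.json()["id"]   # discussion ref, passed on as BST_DISCUSSION_ID

def clear_benchmark_label(mr_iid):
    """Drop the label so the MR is not picked up again (remove_labels
    needs a reasonably recent GitLab; older versions would have to
    resubmit the full label list instead)."""
    requests.put(
        f"{GITLAB_API}/projects/{BST_PROJECT_ID}/merge_requests/{mr_iid}",
        headers=BOT_AUTH,
        data={"remove_labels": "benchmarking"},
    ).raise_for_status()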

This (as I understand it) will require the creation of a "benchmarking bot"
user (with suitable credentials) in the BuildStream and benchmarking
projects, with suitable group access and privileges to append new discussion
points to MRs.

It is expected that developers will pick up the generated discussion points
and review the results via the benchmarking GitLab CI, rather than having
the full results posted into the MR discussion.

These are my initial thoughts, please feel free to provide feedback.

Regards

Lachlan Mackenzie Software Consultant
Codethink Ltd
3rd Floor, Dale House, 35 Dale Street, MANCHESTER, M1 2Hf, United Kingdom
Telephone: +44 161 236 5575
http://www.codethink.co.uk/
We respect your privacy.   See https://www.codethink.co.uk/privacy.html

