A Software Heritage instance is created from scratch on grid5000, and a pre-defined workload is applied to it. Metrics are collected during the run and analyzed once it completes. The approach is very similar to [what was done in the initial synthetic benchmarks](https://git.easter-eggs.org/biceps/biceps).
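For illustration, a minimal write-workload driver could look like the sketch below. It assumes the `swh.objstorage` factory API (whose exact `add()` signature has varied across versions); the pathslicing backend, root path, object count, and object size are placeholders rather than the actual benchmark parameters.

```python
import hashlib
import os
import statistics
import time

from swh.objstorage.factory import get_objstorage

# Hypothetical target: a pathslicing backend on the node's local disk.
# The real benchmark would point at the object storage deployment under test.
objstorage = get_objstorage(
    "pathslicing", root="/srv/swh-bench", slicing="0:2/2:4/4:6"
)

latencies = []
for _ in range(10_000):  # placeholder workload: 10k random 4 KiB objects
    payload = os.urandom(4096)
    start = time.monotonic()
    # Pass the sha1 explicitly; some versions of the API compute it themselves.
    objstorage.add(payload, obj_id=hashlib.sha1(payload).digest())
    latencies.append(time.monotonic() - start)

print(f"median write latency: {statistics.median(latencies) * 1e3:.3f} ms")
```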
Tasks:
* Create the code and methodology to benchmark the object storage on grid5000 (a sketch of the metrics-analysis step follows this list)
* Run the benchmark on grid5000
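Once a run completes, the collected samples can be reduced to summary statistics. A possible sketch, assuming the workload driver dumped one latency sample (in seconds) per line to a CSV file, whose name here is hypothetical:

```python
import csv
import statistics

# Hypothetical input: one write-latency sample per row, produced during a run.
with open("write_latencies.csv") as f:
    samples = sorted(float(row[0]) for row in csv.reader(f))

p50 = statistics.median(samples)
p99 = samples[int(0.99 * (len(samples) - 1))]
throughput = len(samples) / sum(samples)  # closed-loop, single-client rate

print(f"p50={p50 * 1e3:.2f} ms  p99={p99 * 1e3:.2f} ms  ~{throughput:.0f} obj/s")
```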
Outcome:
* The code of the benchmark in a repository tree
* The documentation of the benchmark methodology in the Software Heritage developer documentation
* The results of the benchmark, in a repository tree tagged with the date of the run