- Write benchmarks measuring performance on datasets on the order of 100 GB and 10 million entries
- Include the benchmarks in the Python package
- Add functional tests for the benchmarks, to be run in CI
- Run the benchmarks on grid5000 hardware and archive the results
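The tasks above can be sketched as a minimal benchmark harness. This is only an illustrative sketch, not the actual benchmark suite: the function names (`generate_entries`, `run_benchmark`) and the plain-temporary-file backend are assumptions; a real benchmark would target the storage system under test, and the entry count would be a parameter so the CI functional test can use a tiny run while the full campaign on Grid'5000 uses ~10 million entries.

```python
import os
import tempfile
import time


def generate_entries(n, payload_size=100):
    """Yield n synthetic (key, payload) pairs; payload_size in bytes.

    Hypothetical helper: real benchmarks would generate data shaped
    like the actual workload.
    """
    for i in range(n):
        yield (f"entry-{i}".encode(), os.urandom(payload_size))


def run_benchmark(n_entries, payload_size=100):
    """Time ingesting n_entries synthetic entries into a scratch file.

    Returns (elapsed_seconds, total_bytes_written). Writing to a plain
    temporary file is a stand-in for the system under test.
    """
    start = time.perf_counter()
    total = 0
    with tempfile.TemporaryFile() as f:
        for key, payload in generate_entries(n_entries, payload_size):
            f.write(key)
            f.write(payload)
            total += len(key) + len(payload)
    return time.perf_counter() - start, total


if __name__ == "__main__":
    # CI functional test: a tiny run that only checks the harness works;
    # the full campaign would scale n_entries up to ~10_000_000.
    elapsed, nbytes = run_benchmark(1_000)
    print(f"1000 entries, {nbytes} bytes written in {elapsed:.3f}s")
```

Parameterizing the entry count is what lets the same code serve both purposes listed above: the CI test exercises the harness end to end in seconds, while the archived Grid'5000 runs use the full-scale parameters.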
Outcome:
- A repository tree containing (i) instructions to run the benchmarks, (ii) the software to run the benchmarks
- The grid5000 results added to the repository under a tag marking the date on which they were run