Software Heritage

Grid5000 for benchmarking
Closed, Migrated


It is possible to use resources from Grid5000 to run the benchmarks, subject to approval.

Part of a cluster (roughly less than 20% of it, for at most two hours or so) can be reserved on weekdays, which is convenient for short checks that the benchmarks work as expected.

An entire cluster can be reserved for a weekend (from Friday night to Monday morning, i.e. roughly 60 hours) to run the actual benchmark.

When the machines are allocated, a provisioning tool is used to install the desired operating system (Debian GNU/Linux for instance), and SSH root access is provided in return. The machines can then be configured with no restrictions.
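On Grid5000 this workflow goes through the OAR scheduler and Kadeploy. A minimal interactive session might look like the sketch below; the environment name and walltime are illustrative assumptions, not values taken from this task:

```shell
# On the site frontend: reserve one node in deploy mode for two hours.
oarsub -t deploy -l nodes=1,walltime=2:00:00 -I

# Reinstall the reserved node(s) with a Debian environment and copy our SSH key.
# "debian10-x64-min" is an assumed environment name; check the site catalog.
kadeploy3 -e debian10-x64-min -f $OAR_NODE_FILE -k

# Root login on the freshly deployed node.
ssh root@"$(head -1 $OAR_NODE_FILE)"
```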

Event Timeline

dachary changed the task status from Open to Work in Progress. Mar 9 2021, 3:12 PM
dachary triaged this task as Normal priority.
dachary created this task.
dachary created this object in space S1 Public.

Followed the instructions at to get an account. Waiting for approval.

Looking at the available hardware, here is what could be used:

  • for Read Storage, with the caveat that it has more horsepower than required and less disk. It nevertheless has a total of 40TB, and the upside of having 32 machines is that it should be possible to verify that an erasure coded pool with k=4,m=2 scales out.
  • for Write Storage, with the caveat that it has more horsepower than required and less disk. It does however have extra 2TB disks, on which it would be possible to store a global index with 50 billion entries to avoid regenerating it at the beginning of every run.
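The usable capacity behind the k=4,m=2 figure above can be sanity-checked with simple arithmetic; this sketch assumes the quoted ~40TB raw total and standard erasure coding overhead:

```python
# With an erasure coded pool using k data chunks and m coding chunks,
# the usable fraction of raw capacity is k / (k + m).
def usable_capacity_tb(raw_tb: float, k: int, m: int) -> float:
    """Usable capacity (TB) of an erasure coded pool over raw_tb of raw storage."""
    return raw_tb * k / (k + m)

# ~40TB raw across 32 machines with k=4,m=2 leaves roughly 26.7TB usable.
print(round(usable_capacity_tb(40, 4, 2), 1))  # 26.7
```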

I think your assessment is correct. Maybe gros in Nancy, with its 1TB SSD / machine, could be useful for the write storage benchmarking? I'm not sure how much it would matter that both read and write storage are on the same site.

Thanks for the feedback. has 1.6TB nvme, which seems better. It would be preferable to have a total of 4TB nvme available, to get closer to the target global index size (i.e. 40 bytes × 100 billion entries = 4TB). I'm told it is possible to donate hardware to Grid5000: if testing with the current configuration is not convincing enough, 4 more nvme pcie drives could be donated and installed in the machines. No idea how much delay to expect, but it's good to know it is possible.
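The target index size quoted above is a simple product; a quick check, using decimal terabytes:

```python
# Target global index size: 40 bytes per entry, 100 billion entries.
entry_bytes = 40
entries = 100_000_000_000
total_tb = entry_bytes * entries / 10**12  # decimal TB
print(total_tb)  # 4.0
```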

The account request was approved, I'll proceed with a minimal reservation to figure out how it is done.

Additional nvme drives for yeti should be something similar to but confirmation is needed that the machines actually have the required SFF-8639 connectors to plug them in.

Another option would be to use an SSD with a SAS / SATA interface. It is slower than an NVMe drive, but if the benchmark results are bad because of that difference, it means something is wrong in the design.

There is a mattermost channel dedicated to Grid5000, but one has to be invited to join; it is not open to the public.

With a little help from the mattermost channel, and after the account was approved, it was possible to boot a physical machine with Debian GNU/Linux installed from scratch and get root access to it.

root@dahu-1:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk 
├─sda1   8:1    0   3.7G  0 part [SWAP]
├─sda2   8:2    0  19.6G  0 part 
├─sda3   8:3    0  22.4G  0 part /
├─sda4   8:4    0     1K  0 part 
└─sda5   8:5    0   178G  0 part /tmp
sdb      8:16   0 447.1G  0 disk 
├─sdb1   8:17   0 111.8G  0 part 
├─sdb2   8:18   0 107.3G  0 part 
├─sdb3   8:19   0 107.3G  0 part 
└─sdb4   8:20   0 102.9G  0 part 
sdc      8:32   0   3.7T  0 disk 
└─sdc1   8:33   0   3.7T  0 part