Mar 8 2021
Should this be closed now? The documentation is at https://docs.softwareheritage.org/devel/swh-dataset/
Jan 5 2021
Can this task be closed since the subject was addressed in T2620?
Sep 22 2020
We've definitely improved on this (notably by using proper hostnames for the instance label on Prometheus metrics). I think we should make this task more actionable if we want to keep it open.
Sep 4 2020
Wikimedia is using NetBox as the source of truth for their infrastructure, and Puppet configures its facts from it. It's not exactly the same use case as ours, since we would like to have NetBox automatically provisioned (see the sketch after the links below).
Their documentation: https://wikitech.wikimedia.org/wiki/Netbox
A docker-compose setup is available to easily test NetBox: https://github.com/netbox-community/netbox-docker
This is the Puppet configuration used at Wikimedia: https://gerrit.wikimedia.org/r/c/operations/puppet/+/387880/
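As a rough illustration of the "automatically provisioned" direction, here is a minimal Python sketch that registers a device through the NetBox REST API. The URL, the token, and the numeric ids are placeholders, and the field names assume the NetBox v2 API; this is a sketch of the idea, not a tested integration.

```python
# Minimal sketch: provisioning a device in NetBox through its REST API.
# Assumptions: a NetBox instance on localhost:8000 (e.g. the netbox-docker
# one above), a placeholder API token, and pre-existing device type / role /
# site objects referenced by id.
import requests

NETBOX_URL = "http://localhost:8000"
HEADERS = {
    # Placeholder token; NetBox API tokens are created in its admin UI.
    "Authorization": "Token 0123456789abcdef0123456789abcdef01234567",
    "Accept": "application/json",
}

def create_device(name, device_type_id, role_id, site_id):
    """Register a device in the NetBox inventory and return the API answer."""
    resp = requests.post(
        f"{NETBOX_URL}/api/dcim/devices/",
        headers=HEADERS,
        json={
            "name": name,
            "device_type": device_type_id,
            "device_role": role_id,
            "site": site_id,
        },
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_device("worker01", device_type_id=1, role_id=1, site_id=1))
```

A provisioning script like this could be driven from whatever we already consider authoritative (Puppet facts, a hardware spreadsheet, ...), which is the inverse of the Wikimedia flow described above.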
Feb 11 2020
NetBox looks pretty nice as a full hardware/device inventory tool: https://netbox.readthedocs.io/en/stable/
Nov 27 2019
Puppet changes added in 17b2b3041212aca9e0a9a35c510885de7bb78230.
Ideally, the Debian package should now be added to the Software Heritage private repository.
Nov 25 2019
Instructions to create Debian packages have been added in D2352.
Nov 24 2019
AFAIU from last week's work, Munin is now gone
Nov 8 2019
No relevant problems have been reported with our Prometheus setup or usage. Closing.
Nov 6 2019
I do not see any missing pieces in the Grafana dashboards, so the Munin graph service/VM can be shut down.
Any chance we can close this now?
Jun 12 2019
The most recent update on the state of this task has shown a regression in the journal test coverage, which, per se, is not a big deal (just a few points). But it does raise the question of how, once we have reached whatever "minimum" coverage we are OK with, we monitor over time that there is no regression. For instance, I think code reviews should show reviewers how the submitted diff affects code coverage. Ideally, reviewers should be able to see whether it has a net positive or negative effect on coverage, and take that into account in their review decisions. (Which is not to say we should never accept diffs that decrease code coverage; there might be good reasons to do so. But it is a data point that would be useful for reviewers to see.)
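As a concrete example of surfacing that data point, here is a minimal Python sketch that compares the overall line rate of two Cobertura `coverage.xml` reports (the format pytest-cov emits with `--cov-report=xml`), one from the base branch and one from the submitted diff, and prints the delta for the reviewer. The file names and the report-only behaviour are assumptions, not an existing piece of our tooling.

```python
# Minimal sketch: report the coverage effect of a diff to reviewers.
# Assumes two Cobertura XML reports (as produced by pytest-cov's
# --cov-report=xml): one from the base branch, one from the diff.
import sys
import xml.etree.ElementTree as ET

def line_rate(path):
    # Cobertura reports carry the overall line coverage as a "line-rate"
    # attribute (a 0..1 float) on the root <coverage> element.
    return float(ET.parse(path).getroot().attrib["line-rate"])

def main():
    base, head = sys.argv[1], sys.argv[2]
    delta = line_rate(head) - line_rate(base)
    print(f"coverage delta: {delta:+.2%}")
    # The delta is only reported, never enforced: as argued above, a diff
    # that decreases coverage can still be acceptable; this is just a data
    # point for the reviewer.
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run from CI as e.g. `python coverage_delta.py base-coverage.xml head-coverage.xml` (hypothetical file names) so the number shows up next to the review.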
May 25 2019
only 3% to go in -lister and -core \o/