- User Since: Sep 6 2017, 1:06 PM (101 w, 3 d)
Fri, Aug 9
The y-axis scale was explicitly forced to begin at zero.
Removing that constraint allows the graphs to scale and fill their allocated vertical space.
When resizing the browser window, or when loading the page for the first time after pasting its address in the URL bar, it is obvious that the "Source files" data is used for all graphs.
Dedicated task created as T1949.
Wed, Aug 7
Most of the relevant commits use the 192.168.128.0/24 address space.
Fri, Aug 2
A bit too big to understand quickly but no real choice here.
Thu, Aug 1
Typo line 17:
- # Install so that terrafor actually sees the plugin
+ # Install so that terraform actually sees the plugin
Wed, Jul 31
Some route definitions look unnecessary and could be cleaned up in a second pass.
Wed, Jul 24
This technically looks good, but from a security point of view, why put the secret "private" and "provenance-index" directories in a publicly accessible location?
Fwiw, a manual connection to esnode1:9200 doesn't show this error.
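A manual check of that kind can be as simple as plain HTTP queries against the node, along these lines (sketch only; the exact request that surfaced the error is not recorded here):
# Basic health and index overview through the Elasticsearch HTTP API
curl -s 'http://esnode1:9200/_cluster/health?pretty'
curl -s 'http://esnode1:9200/_cat/indices?v'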
Depending on Prometheus for all data is not a hard requirement.
Removed the T1017 Kafka subtask; it really has no bearing on whether the Elasticsearch cluster is a true cluster or not.
Hardware is too old / starting to fall apart for other reasons.
It would be more cost-effective to replace it.
Wrote backup tools documentation in T1372.
No backup system changes wanted at this time.
Mon, Jul 22
Fwiw, I never got an answer from Dell on that topic.
Jul 18 2019
For the "March 2019 problem", the json output generated from the Prometheus API itself misses the more recent data points.
Prometheus data has been exported to a json file similar to the format produced by the Muni/RRD based toolchain.
Results are visible on https://www-dev.softwareheritage.org/archive/
(vs https://www.softwareheritage.org/archive/ for original graphs)
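Roughly, the export is a range query against the Prometheus HTTP API, along these lines (hostname, metric name, dates and step below are placeholders, not the exact ones used):
# Sketch: pull a counter over a time range as JSON from the Prometheus API
curl -s -G 'http://prometheus:9090/api/v1/query_range' \
  --data-urlencode 'query=sum(some_object_counter_total)' \
  --data-urlencode 'start=2018-12-01T00:00:00Z' \
  --data-urlencode 'end=2019-07-18T00:00:00Z' \
  --data-urlencode 'step=1d' \
  > export.json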
Jul 16 2019
Even though it is not necessarily obvious, the object counter has been stored in Prometheus since December 2018.
Grafanalib dashboards committed to swh-grafana-dashboards in rTGRAee5d3074bf58.
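For reference, grafanalib dashboards are Python sources rendered to Grafana JSON with the generate-dashboard tool shipped by the library; the file names below are illustrative, not necessarily the ones in the repository:
# Render a grafanalib dashboard definition to JSON (illustrative file names)
pip install grafanalib
generate-dashboard -o objects.json objects.dashboard.py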
Jul 8 2019
Backup done: a full copy of the main MongoDB databases is now present on banco.
Jul 3 2019
Approximately 85% of the dump data has been copied so far.
Jul 1 2019
Dumps of the six MongoDB databases have been created at the Paris office.
They are being copied to banco:/srv/storage/space/mongo_dumps at Rocquencourt.
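The dump-and-copy step amounts to something like the following for each database (the database name and local path are placeholders; the destination is the one above):
# Dump one MongoDB database locally, then copy it to banco
mongodump --db <database> --out /srv/dumps/<database>
rsync -avP /srv/dumps/ banco:/srv/storage/space/mongo_dumps/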
Jun 24 2019
The solution to this problem is to first identify the partition devices and then remove them:
dmsetup remove ssd-vm--100--disk--2p1
This behavior appears to be caused by partitions present on top of device-mapper devices.
These partitions are in turn used to create other dm devices, and these latter devices keep an open reference to the base one.
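The "identify" part can be done by listing the device-mapper dependency tree before removing the partition mapping, e.g.:
# Inspect the dm dependency tree to find partition mappings holding the base device
dmsetup ls --tree
# Remove the partition mapping so the base device no longer has an open reference
dmsetup remove ssd-vm--100--disk--2p1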
Jun 6 2019
The reason for this behavior is that Debian uses dynamic UIDs for most of its system users.
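As an illustration (made-up user name), a Debian system user simply gets the next free UID in the dynamic system range, so the same service can end up with different UIDs on different hosts:
# Create a system user and show the dynamically allocated UID
adduser --system --no-create-home demo-user
id demo-user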
Looks good for a first draft.
May 28 2019
Looks good to me.
Always using the FQDN belvedere.internal.softwareheritage.org would be more consistent though ;-)
May 14 2019
We will use VMs running on the orsay.softwareinternal.org hypervisor for now.
Apr 30 2019
Grafanalib dashboards added to https://grafana.softwareheritage.org/ via the new provisioning mechanism of Grafana 5.x.
Fully automated provisioning is still a work-in-progress.
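For reference, the Grafana 5.x file-based provisioning boils down to dropping a provider definition next to the generated dashboard JSON files; the paths below are the upstream defaults, not necessarily the ones used on grafana.softwareheritage.org:
# Minimal Grafana 5.x dashboard provisioning definition (default paths)
cat > /etc/grafana/provisioning/dashboards/swh.yaml <<'EOF'
apiVersion: 1
providers:
  - name: swh
    type: file
    options:
      path: /var/lib/grafana/dashboards
EOF
systemctl restart grafana-server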
Prometheus does not provide storage device statistics for Proxmox container-based hosts.
The data can be read from their parent machine dashboards though.
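Concretely, the container's disk usage can be queried through the parent's node_exporter metrics; the instance label, mountpoint and metric name (which depends on the node_exporter version) below are placeholders:
# Read filesystem usage for a container's storage from the parent node's metrics
curl -s -G 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=node_filesystem_avail_bytes{instance="hypervisor:9100",mountpoint="/rpool/data"}'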
Some disk space usage statistics with ~= one month of snapshots
Apr 25 2019
Grafanalib-based dashboards do not require special handling; the NFS filesystem on orangerie:/srv/softwareheritage is shown by default, for example.