- User Since
- Sep 6 2017, 1:06 PM (176 w, 4 d)
Nov 27 2019
Puppet changes added in 17b2b3041212aca9e0a9a35c510885de7bb78230.
Ideally the Debian package should now be added to the Software Heritage private repository.
Nov 26 2019
Nov 25 2019
Instructions to create Debian packages have been added in D2352.
Nov 22 2019
Nov 19 2019
Only the temperature data should be missing on non-physical machines.
This ticket should be fixed by d3ad9bda4c7b7fcc19c340c2b7ac559882d8f934.
What is the purpose of this task?
How is it different from T1974?
Initial commit pushed; comments added in follow-up commit 66b3e07ed9d9dbde2333cefe0e3375742dc76231.
Nov 18 2019
New work-in-progress dashboard visible here:
Nov 8 2019
Thanks for this review, changes have been incorporated in D2240.
Thanks for this review. Changes added to D2223.
No relevant problem has been reported with our dataset/usage of Prometheus. Closing.
Nov 6 2019
I do not see any missing piece in the Grafana dashboard; the Munin graph service/VM can be shut down.
Nov 5 2019
Oct 15 2019
Some work-in-progress Sphinxdoc documentation is visible in this Phabricator review: https://forge.softwareheritage.org/D2140.
Oct 14 2019
Oct 8 2019
Oct 7 2019
Sep 4 2019
Aug 29 2019
Softwareheritage: low-level storage
Aug 28 2019
ftigeot (François Tigeot) wrote:
ftigeot changed the task status from "Open" to "Work in Progress".
ftigeot added a comment.
Added a Gandi forwarded address to forward firstname.lastname@example.org mails to email@example.com.
Aug 27 2019
Some post-4.15 commits seem to fix this kind of issue.
Aug 26 2019
The changes look fine to me.
The important thing is to get the same output.
With regard to the data directory, share/swh-data seemed to be a logical place and compatible with the usual hier(7) filesystem hierarchy.
Aug 9 2019
The yaxis scale was explicitly forced to begin at zero.
Removing that constraint allows the graphs to scale and fill their allocated vertical space.
When resizing the browser window, or when first loading the page after pasting its name in the URL bar, it is obvious that the "Source files" data is used for all graphs.
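The y-axis fix described above can be illustrated with a Grafana graph panel's `yaxes` JSON; this panel fragment is a hypothetical example, not the actual dashboard JSON:

```json
{
  "yaxes": [
    {
      "format": "short",
      "min": null,
      "show": true
    },
    {
      "format": "short",
      "show": true
    }
  ]
}
```

Setting `min` to `0` pins the left axis at zero; setting it to `null` (or omitting the key) lets Grafana auto-scale the axis so the curve fills the panel's vertical space.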
Dedicated task created as T1949.
Aug 8 2019
Aug 7 2019
Most of the relevant commits use the 192.168.128.0/24 address space.
Aug 6 2019
Aug 5 2019
Aug 2 2019
A bit too big to review quickly, but there is no real alternative here.
Aug 1 2019
Typo line 17:
- # Install so that terrafor actually sees the plugin
+ # Install so that terraform actually sees the plugin
Jul 31 2019
Some route definitions look unnecessary and could be cleaned up in a second pass.
Jul 30 2019
Jul 29 2019
Jul 26 2019
Jul 24 2019
This technically looks good, but from a security point of view, why put the secret "private" and "provenance-index" directories in a publicly accessible location?
Fwiw, a manual connection to esnode1:9200 does not show this error.
Depending on Prometheus for all data is not a hard requirement.
Removed the T1017 Kafka subtask; it has no relation to whether the Elasticsearch cluster is a true cluster or not.
Hardware is too old / starting to fall apart for other reasons.
It would be more cost-effective to replace it.
Wrote backup tools documentation in T1372.
No backup system changes wanted at this time.
Jul 23 2019
Jul 22 2019
Fwiw, I never got an answer from Dell on that topic.
Jul 18 2019
For the "March 2019 problem", the JSON output generated from the Prometheus API itself is missing the most recent data points.
Prometheus data has been exported to a JSON file similar in format to the one produced by the Munin/RRD-based toolchain.
Results are visible on https://www-dev.softwareheritage.org/archive/
(vs https://www.softwareheritage.org/archive/ for original graphs)
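The export step above can be sketched as follows. The response shape matches the Prometheus HTTP API's range-query (`/api/v1/query_range`) matrix result; the metric name `swh_objects_total` and the `[timestamp_ms, value]` output shape are assumptions for illustration, not the actual export format used:

```python
import json

# A sample response in the shape returned by Prometheus's
# /api/v1/query_range endpoint (resultType "matrix").
# The metric name below is a made-up placeholder.
sample = json.loads("""
{
  "status": "success",
  "data": {
    "resultType": "matrix",
    "result": [
      {
        "metric": {"__name__": "swh_objects_total"},
        "values": [[1563235200, "123"], [1563238800, "456"]]
      }
    ]
  }
}
""")


def to_series(response):
    """Flatten a Prometheus range-query response into
    [timestamp_ms, value] pairs, a shape comparable to what
    RRD/Munin-based export tools produce."""
    series = []
    for result in response["data"]["result"]:
        # Prometheus returns timestamps in seconds and
        # sample values as strings.
        for ts, value in result["values"]:
            series.append([int(ts) * 1000, float(value)])
    return series


print(to_series(sample))
```

Keeping the output shape close to the old Munin/RRD export means the www frontend graphs can consume either source unchanged.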
Jul 16 2019
Even though it is not necessarily obvious, the object counter has been stored in Prometheus since December 2018.
Grafanalib dashboards have been pushed to swh-grafana-dashboards in rTGRAee5d3074bf58.
Jul 8 2019
Backup done, a full copy of the main MongoDB databases is now present on banco.
Jul 5 2019
Jul 3 2019
Approximately 85% of the dump data has been copied so far.
Jul 1 2019
Dumps of the six MongoDB databases have been created at the Paris office.
They are being copied to banco:/srv/storage/space/mongo_dumps at Rocquencourt.