- User Since
- Sep 6 2017, 1:06 PM
Some work-in-progress Sphinxdoc documentation is visible in this Phabricator review: https://forge.softwareheritage.org/D2140 .
Mon, Oct 14
Tue, Oct 8
Mon, Oct 7
Sep 4 2019
Aug 29 2019
Softwareheritage: low-level storage
Aug 28 2019
ftigeot (François Tigeot) wrote:
ftigeot changed the task status from "Open" to "Work in Progress".
ftigeot added a comment.
Added a Gandi forwarding address to forward firstname.lastname@example.org mails to email@example.com.
Aug 27 2019
Some post-4.15 commits seem to fix this kind of issue.
Aug 26 2019
The changes look fine to me.
The important thing is to get the same output.
With regard to the data directory, share/swh-data seemed to be a logical place and compatible with the usual hier(7) filesystem hierarchy.
Aug 9 2019
The yaxis scale was explicitly forced to begin at zero.
Removing that constraint allows the graphs to scale and fill their allocated vertical space.
When resizing the browser window, or when loading the page for the first time after pasting its address in the URL bar, it is obvious that the "Source files" data is used for all graphs.
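For reference, the fix amounts to dropping the forced minimum on the y-axis definitions. A minimal sketch of what that looks like on a Grafana graph panel fragment (field names follow the Grafana panel JSON model; the panel itself is hypothetical):

```python
def unpin_yaxis(panel):
    """Return a copy of the panel with the y-axis minimum unset,
    letting graphs scale to fill their allocated vertical space."""
    fixed = dict(panel)
    fixed["yaxes"] = [dict(axis, min=None) for axis in panel["yaxes"]]
    return fixed

# Hypothetical panel with the y-axis scale explicitly forced to zero.
panel = {
    "title": "Source files",
    "yaxes": [{"format": "short", "min": 0},
              {"format": "short", "min": 0}],
}

print(unpin_yaxis(panel)["yaxes"][0]["min"])  # None
```

With `min` set to `None` (null in the panel JSON), Grafana picks the axis range from the data instead of anchoring it at zero.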
Dedicated task created as T1949.
Aug 8 2019
Aug 7 2019
Most of the relevant commits use the 192.168.128.0/24 address space.
Aug 6 2019
Aug 5 2019
Aug 2 2019
A bit too big to understand quickly but no real choice here.
Aug 1 2019
Typo line 17:
- # Install so that terrafor actually sees the plugin
+ # Install so that terraform actually sees the plugin
Jul 31 2019
Some route definitions look unnecessary and could be cleaned up in a second pass.
Jul 30 2019
Jul 29 2019
Jul 26 2019
Jul 24 2019
This looks good technically, but from a security point of view, why put the secret "private" and "provenance-index" directories in a publicly accessible location?
Fwiw, a manual connection to esnode1:9200 doesn't show this error.
Depending on Prometheus for all data is not a hard requirement.
Removed T1017 Kafka subtask; it has no bearing on whether the Elasticsearch cluster is a true cluster or not.
Hardware is too old / starting to fall apart for other reasons.
It would be more cost-effective to replace it.
Wrote backup tools documentation in T1372.
No backup system changes wanted at this time.
Jul 23 2019
Jul 22 2019
Fwiw, I never got an answer from Dell on that topic.
Jul 18 2019
For the "March 2019 problem", the JSON output generated from the Prometheus API itself is missing the more recent data points.
Prometheus data has been exported to a JSON file in a format similar to the one produced by the Munin/RRD-based toolchain.
Results are visible on https://www-dev.softwareheritage.org/archive/
(vs https://www.softwareheritage.org/archive/ for original graphs)
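The export can be sketched as a small transform from the Prometheus HTTP API's `query_range` response format to the flat `[timestamp, value]` series the old RRD-based toolchain produced (the response shape follows the documented Prometheus API; the metric name is hypothetical):

```python
import json

def prometheus_to_series(response_text):
    """Flatten a Prometheus query_range JSON response into a plain
    [[timestamp, value], ...] series, as the RRD-based exporter did."""
    resp = json.loads(response_text)
    series = []
    for result in resp["data"]["result"]:
        # Prometheus returns each sample as a [unix_ts, "string_value"] pair.
        series.extend([ts, float(v)] for ts, v in result["values"])
    return series

# Example shaped like a Prometheus query_range API response.
sample = json.dumps({
    "status": "success",
    "data": {"resultType": "matrix",
             "result": [{"metric": {"__name__": "swh_objects_total"},
                         "values": [[1563235200, "1.2e9"],
                                    [1563321600, "1.3e9"]]}]},
})

print(prometheus_to_series(sample))
# [[1563235200, 1200000000.0], [1563321600, 1300000000.0]]
```

In the real export the response would come from a GET on `/api/v1/query_range` with `start`, `end` and `step` parameters; only the flattening step is shown here.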
Jul 16 2019
Even though it is not necessarily obvious, the object counter has been stored in Prometheus since December 2018.
Committed Grafanalib dashboards to swh-grafana-dashboards in rTGRAee5d3074bf58 .
Jul 8 2019
Backup done, a full copy of the main MongoDB databases is now present on banco.
Jul 5 2019
Jul 3 2019
Approximately 85% of the dump data has been copied so far.
Jul 1 2019
Dumps of the six MongoDB databases have been created at the Paris office.
They are being copied to banco:/srv/storage/space/mongo_dumps at Rocquencourt.
Jun 28 2019
Jun 27 2019
Jun 25 2019
Jun 24 2019
This behavior appears to be caused by partitions present on top of device-mapper devices.
These partitions are in turn used to create other dm devices, and these latter devices keep an open reference to the base one.
The solution to this problem is to first identify the partition devices and then remove them:
dmsetup remove ssd-vm--100--disk--2p1
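The stacking can also be detected programmatically: list each device-mapper device's dependencies and remove the devices sitting on top of the base one first. A minimal sketch of the identification step, assuming a dependency map already parsed from `dmsetup deps` output (device names are hypothetical):

```python
def devices_on_top_of(base, deps):
    """Given a {device: [lower-level dependencies]} map (as reported
    by `dmsetup deps`), return the dm devices holding a reference to
    `base`; these must be removed before the base device can be."""
    return sorted(dev for dev, lower in deps.items() if base in lower)

# Hypothetical dependency map: the partition device p1 sits on top of
# the VM disk device, keeping an open reference to it.
deps = {
    "ssd-vm--100--disk--2": ["sdb"],
    "ssd-vm--100--disk--2p1": ["ssd-vm--100--disk--2"],
}

print(devices_on_top_of("ssd-vm--100--disk--2", deps))
# ['ssd-vm--100--disk--2p1']
```

Each device returned here is a candidate for `dmsetup remove`; once they are gone, the base device no longer has open references and can be removed in turn.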
Jun 13 2019
Jun 11 2019
Jun 6 2019
The reason for this behavior is that Debian uses dynamic UIDs for most of its system users.
Looks good for a first draft.
May 29 2019
May 28 2019
Looks good to me.
Always using the FQDN belvedere.internal.softwareheritage.org would be more consistent though ;-)
May 22 2019
May 16 2019
May 14 2019
We will use VMs running on the orsay.softwareinternal.org hypervisor for now.