Sep 24 2020
I guess this can be closed now
Jun 4 2019
For the record, we do have a Docker-based deployment for this, but not a 'production' one (i.e. based on Debian packages). So I'm closing this task for now; the next task is to document this (T1782).
May 14 2019
So that took a few tries in Puppet, but adding new brokers to the Kafka deployment should now be seamless.
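(For context, a quick way to check that newly added brokers are actually visible to clients, assuming the confluent-kafka Python client; the broker address is illustrative. Clients only need one reachable bootstrap address and discover the rest of the cluster from metadata, which is what makes adding brokers transparent to existing producers and consumers.)

```
# Hypothetical sketch, not the actual deployment tooling.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "kafka1.internal:9092"})
metadata = admin.list_topics(timeout=10)  # fetches full cluster metadata
for broker_id, broker in sorted(metadata.brokers.items()):
    print(broker_id, broker)  # one BrokerMetadata entry per live broker
```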
Apr 12 2019
Closed by D1345
Mar 29 2019
Btw, I'll remove the @timed probe on get_storage, as it's just noisy on the graph ;)
(That concurs with what you said orally)
Some more granular probes inside the main storage implementation would be helpful, e.g. counting the actual number of objects inserted by each request.
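For illustration, here is a minimal sketch of such a probe, assuming a statsd-style metrics client; the decorator, metric name, and method shown are hypothetical stand-ins, not the actual swh-storage code:

```
# Hypothetical sketch: a counting probe alongside the existing @timed one.
import functools

from statsd import StatsClient  # generic statsd client, for illustration

statsd_client = StatsClient("localhost", 8125, prefix="swh_storage")

def count_objects(metric):
    """Report how many objects each request actually inserts."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, objects, *args, **kwargs):
            result = method(self, objects, *args, **kwargs)
            # Unlike @timed, this measures the payload of each request,
            # not just its duration.
            statsd_client.incr(metric, len(objects))
            return result
        return wrapper
    return decorator

class Storage:
    @count_objects("content_add.objects")
    def content_add(self, contents):
        ...  # actual insertion logic
```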
Mar 28 2019
The current state of these probes has been deployed in production (and the workers restarted).
This is now deployed on the main storage. Let's see how it holds up.
Mar 27 2019
In T1609#29868, @anlambert wrote: @olasd, urllib3 1.24.1 is available on stretch-backports [1], including the commit you mentioned.
[1] https://packages.debian.org/stretch-backports/python-urllib3
Mar 26 2019
https://grafana.softwareheritage.org/d/jScG7g6mk/objstorage-object-counts shows the data that we're currently able to collect.
Mar 25 2019
The intent is to start feeding Kafka with some "production-grade" data, in a more reliable fashion than the current PostgreSQL NOTIFY mechanism. This will also help us get a better sense of the shape of the data, making the analysis in T1602 more relevant.
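For illustration, a minimal sketch of what feeding Kafka from the storage side could look like, assuming the confluent-kafka Python client; the broker addresses, topic naming, and message schema are hypothetical, not the actual swh-journal format:

```
# Hypothetical sketch: publishing new objects to Kafka instead of relying
# on PostgreSQL NOTIFY, which drops events when no listener is connected.
import json

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "kafka1.internal:9092,kafka2.internal:9092",
    "acks": "all",  # wait for in-sync replicas before confirming a write
})

def publish_object(object_type, object_id, metadata):
    producer.produce(
        f"swh.journal.objects.{object_type}",
        key=object_id,
        value=json.dumps(metadata).encode("utf-8"),
    )

publish_object("content", "sha1_git:...", {"length": 42})
producer.flush()  # block until queued messages are acknowledged
```

Unlike NOTIFY, messages persist in the topic, so a consumer that was down can replay from its last committed offset.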
conflict policy -> update