Reopening as the release is not working on the stable branch
- Queries
- All Stories
- Search
- Advanced Search
- Transactions
- Transaction Logs
Apr 7 2021
Register the history endpoint only if the configuration is present
- Return a 404 if the requested file does not exist
- Use a fixture to configure the tests to make them easier to read
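The conditional registration and the 404 behavior can be sketched roughly like this; the `history_file_path` config key, the route path, and the helper names are hypothetical, not the actual swh-web API:

```python
import os


def create_routes(config):
    """Register the history endpoint only when its configuration is present.

    `config` and the "history_file_path" key are illustrative names.
    Returns a mapping of route -> handler.
    """
    routes = {}
    if "history_file_path" in config:
        path = config["history_file_path"]

        def get_history():
            # Return a 404 if the requested file does not exist.
            if not os.path.exists(path):
                return 404, b"Not found"
            with open(path, "rb") as f:
                return 200, f.read()

        routes["/history"] = get_history
    return routes
```

With no configuration, `create_routes({})` registers nothing, which matches the behavior described above.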
update according to the review feedback
fix typos in the commit message
Apr 6 2021
fix another typo
The requested change was made by the DSI. Everything is working well now.
In D5428#137737, @ardumont wrote: Why not a single backend class for both?
Because it's not the same backend implementation, so not the same concern/perimeter, as mentioned in the diff description.
The wrong network profile was requested for the staging gateway, so it seems it doesn't have complete internet access.
A mail was sent to the DSI to request unfiltered access.
Apr 2 2021
After solving the problems, the upgrade was pretty smooth. The firewall performs the following steps:
- upgrade to the latest minor version of the current major branch
- upgrade to the first minor version of the next major branch
- upgrade to the latest minor version of the new major branch
lgtm
Before starting the upgrade, we discovered 2 problems we had to fix:
- The backup had no internet access, which blocked the upgrade
- The master/backup switch was not working for 4 of the 8 VIPs
Apr 1 2021
Isn't the user used for the creation of the pgpass file?
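For reference, a libpq `.pgpass` entry does carry the username as its fourth field (`hostname:port:database:username:password`), and libpq ignores the file unless its permissions are 0600. A minimal sketch of writing such an entry; the helper name and all values are hypothetical:

```python
import os


def write_pgpass_entry(path, host, port, database, username, password):
    """Append a libpq .pgpass entry: hostname:port:database:username:password.

    libpq refuses to use the file unless its mode is 0600, so we
    tighten the permissions after writing.
    """
    line = f"{host}:{port}:{database}:{username}:{password}\n"
    with open(path, "a") as f:
        f.write(line)
    os.chmod(path, 0o600)
```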
In T3190#61917, @ardumont wrote: An improvement of the journal client is necessary to add support for this configuration, like for the producer:
Do you need such an improvement though? According to the code you linked, you could pass a
producer_config dict with that key and value.
The journal client supports dynamic configuration via kwargs, so there is no need to improve it.
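The kwargs passthrough mentioned above can be sketched like this; the class, defaults, and parameter names are hypothetical, not the actual swh.journal API:

```python
class JournalClient:
    """Sketch of a client whose constructor forwards any extra keyword
    argument straight into the underlying consumer configuration."""

    DEFAULTS = {"auto.offset.reset": "earliest"}

    def __init__(self, brokers, group_id, **kwargs):
        self.config = dict(self.DEFAULTS)
        self.config["bootstrap.servers"] = ",".join(brokers)
        self.config["group.id"] = group_id
        # Any extra keyword argument becomes a consumer config entry,
        # so no code change is needed to support a new setting.
        self.config.update(kwargs)
```

Dotted Kafka property names can be passed via dict unpacking, e.g. `JournalClient(brokers, group, **{"message.max.bytes": 500 * 1024 * 1024})`.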
It seems the problem is no longer present with a higher max message size (500 * 1024 * 1024).
for the record, increasing the property message.max.bytes to 100 * 1024 * 1024 in the consumer configuration does not solve the problem
The same problem occurred during the PoC; these messages were ignored by using this consumer configuration: "errors.tolerance": "all" [1].
I will try to find out whether there is a more elegant way to deal with this issue ;)
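Collected as a plain dict, the consumer settings discussed above look like this; the broker address and group id are hypothetical, and the numeric value is the 500 MiB size reported to work:

```python
# Consumer settings from the discussion above: a raised max message size
# and the errors.tolerance workaround used during the PoC to skip
# messages that fail instead of stopping the consumer.
consumer_config = {
    "bootstrap.servers": "kafka1:9092",   # hypothetical
    "group.id": "swh-journal-client",     # hypothetical
    "message.max.bytes": 500 * 1024 * 1024,
    "errors.tolerance": "all",            # PoC workaround: ignore bad messages
}
```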
Mar 31 2021
After talking with @rdicosmo, we finally chose to replace, on each server, the 4 2.4TB HDDs with 6 1.9TB SSDs, to be sure we will have good performance and enough space for the future.
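A quick raw-capacity check of the swap, using the numbers above (raw disk totals only, RAID overhead not accounted for):

```python
# Raw capacity per server, before/after the disk swap
hdd_total_tb = 4 * 2.4   # current: 4 x 2.4 TB HDDs
ssd_total_tb = 6 * 1.9   # proposed: 6 x 1.9 TB SSDs
extra_tb = ssd_total_tb - hdd_total_tb  # roughly 1.8 TB more raw space
```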
The quote will now be sent to the purchasing service according to the usual procedure [1]
Mar 30 2021
Final quotation sent for approval.
The details are:
3 PowerEdge R6515 (1U), each with:
- 10-disk enclosure
- BOSS controller with 2 240GB cards (for the system)
- 4 SAS 2.5" 10k 2.4TB disks
- SFP+ network card
- 2 SFP cables
- 2 power supplies with their cables
- iDRAC Enterprise
- Rack mount rails with cable management
lgtm
credentials sent by PM
- unprivileged user:
username=swh-douardda password=XXXXX
rebase
Mar 29 2021
Mar 26 2021
An improvement idea came to me during the refactoring: the script can be split up and integrated into the swh-counters codebase.
lgtm
Mar 25 2021
node counters1.internal.softwareheritage.org deployed by terraform. The inventory section is created accordingly [1].
The journal_client is running.
:)
thanks
The counters are now exposed through a /metrics endpoint and ingested by Prometheus.
They are tagged per environment, so we will be able to isolate the counters for each one: