I think just showing how many objects are already cooked would be a huge improvement: users would at least be able to observe progress happening. The current system only indicates 'processing', so it is not obvious whether it is actually doing something or whether the task died.
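To illustrate the suggestion, here is a minimal sketch of what a progress-aware status payload could look like. `CookingProgress`, its fields, and the rendered string are all hypothetical names invented for this example, not the vault's actual API:

```python
from dataclasses import dataclass


@dataclass
class CookingProgress:
    """Hypothetical status payload the vault backend could expose,
    extending the current bare 'processing' status with counts."""
    status: str
    objects_cooked: int = 0
    objects_total: int = 0

    def render(self) -> str:
        """Human-readable status line for the web UI."""
        if self.status == "processing" and self.objects_total:
            return (f"processing ({self.objects_cooked}/"
                    f"{self.objects_total} objects cooked)")
        return self.status
```

With counts available, the UI could show e.g. `processing (1200/4800 objects cooked)` instead of a bare `processing`.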
Nov 16 2021
Nov 15 2021
they were just surprisingly long tasks; T3727 should help with this
the cooking has completed now
(marking T2220 as dependency, because we need an up-to-date graph to show an ETA on objects loaded in the last ~year)
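A back-of-the-envelope sketch of how an ETA could be derived from such progress counts, assuming a roughly constant cooking rate (the object total would come from the compressed graph, hence the dependency on an up-to-date export). The function name and signature are illustrative only:

```python
def cooking_eta(objects_cooked: int, objects_total: int,
                elapsed_seconds: float):
    """Estimate remaining cooking time in seconds from progress so far.

    Assumes a steady rate; returns None while no objects have been
    cooked yet, since there is no rate information.
    """
    if objects_cooked <= 0:
        return None
    rate = objects_cooked / elapsed_seconds  # objects per second
    return (objects_total - objects_cooked) / rate
```

For instance, 100 objects cooked out of 200 in 50 seconds yields an ETA of another 50 seconds.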
Oct 1 2021
Sep 23 2021
Sep 22 2021
Sep 21 2021
Deployed in staging. @vlorentz validated it worked.
Sep 20 2021
Sep 17 2021
Sep 15 2021
Sep 12 2021
Sep 10 2021
We have an icinga plugin implementing some of this: https://forge.softwareheritage.org/source/swh-icinga-plugins/browse/master/swh/icinga_plugins/vault.py
This is implemented by the git-bare cooker, but it's not publicly available for now.
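The monitoring check boils down to polling the cooking status until it reaches a terminal state. A minimal, self-contained sketch of that loop follows; the status strings and the injectable `get_status`/`clock`/`sleep` callables are modeling assumptions (the real icinga plugin linked above talks to the web API over HTTP):

```python
import time


def wait_for_cooking(get_status, timeout=600, interval=10,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll a vault cooking task until it finishes or times out.

    get_status: callable returning a status string such as
    'new', 'processing', 'done' or 'failed' (assumed vocabulary).
    Returns the terminal status, or raises TimeoutError.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        status = get_status()
        if status in ("done", "failed"):
            return status
        sleep(interval)
    raise TimeoutError(f"cooking did not finish within {timeout}s")
```

Injecting the clock and sleep functions keeps the loop unit-testable without real waits, which is convenient for a plugin that must itself respect icinga's check timeout.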
Sep 8 2021
Sep 6 2021
Sep 3 2021
For git-bare cooking to work, upgrade swh.storage to the latest version (> v0.35) and
restart the storage service (done).
Sep 2 2021
Sep 1 2021
Dropping the subtask about the full packaging.
Packaging ok:
Package python3-swh.graph.client built [1]
- Add 'has debian packaging branch' tag to the repository
- Install hook to cascade the debian packaging build [1]
- Ensure the necessary CI jobs are installed to build the package (already there)
Local build successful [1]
Aug 31 2021
After discussing with the sysadm team, it's unnecessary to go as far as packaging py4j. What's required here for the vault is a python3-swh.vault.client package containing only the modules swh.vault.client and swh.vault.naive_client.
Aug 30 2021
Aug 27 2021
might be related to T3168 for the packaging part.
And I triggered re-cooking bundles that were requested in the last month.
Note that the cache invalidation is not completely done, though, as the objstorage used is an Azure one. Currently investigating how to clean that up.
- status.io: Open maintenance ticket to notify of the partial disruption in service
- vangogh: Stop puppet
- vangogh: Stop gunicorn-swh-vault
- vault db: Schema migration [1]
- Upgrade workers and webapp nodes with latest swh.vault and restart cooker service
- Start back gunicorn-swh-vault
- Try a cooking and check result -> ok
- Close maintenance ticket as everything is fine