|Status|Owner|Task|
|Work in Progress|vlorentz|T3096 Efficient and reliable download via the Vault|
|Work in Progress|None|T887 Vault: "snapshot" cooker|
|Work in Progress|vlorentz|T3504 Make the git-bare cooker publicly available|
|Resolved|vlorentz|T3505 Make the git-bare cooker available to the staff and beta-testers in the production webapp|
|Resolved|vlorentz|T3506 Get rid of the concept of vault "object_type"|
|Resolved|ardumont|T3507 prod: vault: Deploy v1.0.0|
|Resolved|ardumont|T3503 staging: vault: Deploy v1.0.0|
- status.io: Open maintenance ticket to notify of the partial disruption in service
- vangogh: Stop puppet
- vangogh: Stop gunicorn-swh-vault
- vault db: Schema migration 
- Upgrade workers and webapp nodes with latest swh.vault and restart cooker service
- Start gunicorn-swh-vault again
- Try a cooking and check result -> ok
- Close maintenance ticket as everything is fine
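The "try a cooking and check result" step can be sketched as a small polling loop. This is a hypothetical sketch: `fetch_status` stands in for whatever actually reads the cooking status (e.g. an HTTP GET against the vault API), and the status names mirror the vault's `task_status` values (`'new'` is the schema default; the others are assumptions).

```python
import time

def wait_for_cooking(fetch_status, timeout=600, poll_interval=5, sleep=time.sleep):
    """Poll a cooking until it finishes.

    `fetch_status` is any callable returning the current task status
    string; it is injected (rather than hardcoding an HTTP call) so the
    loop stays testable. Status names ('new', 'pending', 'done',
    'failed') are an assumption here.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "done":
            return True       # cooking succeeded, bundle is ready
        if status == "failed":
            return False      # cooking failed, nothing to fetch
        sleep(poll_interval)  # still 'new' or 'pending', wait and retry
    raise TimeoutError("cooking did not finish in time")
```

With the real service, `fetch_status` would be a closure querying the bundle's status endpoint and returning its status field.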
Note that the cache invalidation is not complete yet, as the objstorage used is an
Azure one.
truncate vault_batch cascade;
drop table vault_bundle cascade;
drop type cook_type;

create type bundle_type as enum ('flat', 'gitfast', 'git_bare');
comment on type bundle_type is 'Type of the requested bundle';

create table vault_bundle (
  id bigserial primary key,
  type bundle_type not null,
  swhid text not null,                                -- requested object ID
  task_id integer,                                    -- scheduler task id
  task_status cook_status not null default 'new',     -- status of the task
  sticky boolean not null default false,              -- bundle cannot expire
  ts_created timestamptz not null default now(),      -- timestamp of creation
  ts_done timestamptz,                                -- timestamp of the cooking result
  ts_last_access timestamptz not null default now(),  -- last access
  progress_msg text                                   -- progress message
);

create unique index vault_bundle_type_swhid on vault_bundle (type, swhid);
create index vault_bundle_task_id on vault_bundle (task_id);

insert into dbversion (version, release, description)
values (4, now(), 'Initial version');
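After running a migration like this one, a quick sanity check is to confirm the schema version it inserts into `dbversion` (version 4 here). A minimal sketch, with the DB-API cursor injected so it can be exercised without a live database; the helper name is hypothetical:

```python
def check_dbversion(cur, expected=4):
    """Return True if the vault database reports the expected schema version.

    `cur` is any DB-API cursor (e.g. a psycopg2 cursor connected to the
    vault db); injecting it keeps the check testable offline.
    """
    cur.execute("select max(version) from dbversion")
    (version,) = cur.fetchone()
    return version == expected
```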
Currently investigating how to invalidate the Azure objstorage cache.
The 'contents' container of the 'swhvaultstorage' storage account was found in the
Azure portal, deleted, and recreated.
A new cooking was run, and the container now holds only the newly cooked bundle.
So the cache is now fully invalidated as well.
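The delete-and-recreate done through the portal could also be scripted. A sketch only: the container name 'contents' and account 'swhvaultstorage' come from the log above, everything else is an assumption; the client is injected so the routine runs without Azure credentials.

```python
def recreate_container(client, name):
    """Invalidate the whole bundle cache by deleting and recreating its container.

    `client` is expected to offer delete_container/create_container, as
    azure.storage.blob.BlobServiceClient does. Note that in practice
    Azure may need some time after the delete before the same container
    name can be reused.
    """
    client.delete_container(name)  # drops every cached blob with it
    client.create_container(name)  # fresh, empty container

# Real usage would look like (requires credentials):
#   from azure.storage.blob import BlobServiceClient
#   client = BlobServiceClient.from_connection_string(conn_str)
#   recreate_container(client, "contents")
```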
All services are fine from my end.