In addition to retrying failed cookings, failed RPC calls are now also retried.
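A minimal sketch of what such an RPC retry could look like, using tenacity; the function name and the retried exception below are illustrative assumptions, not the actual swh-vault client code:

# Hedged sketch: retry a vault RPC call on transient connection errors.
# call_vault_rpc is an illustrative name, not part of swh-vault.
import requests
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

@retry(
    retry=retry_if_exception_type(requests.ConnectionError),
    wait=wait_exponential(multiplier=1, max=10),
    stop=stop_after_attempt(3),
    reraise=True,
)
def call_vault_rpc(session, url, **params):
    """Call a vault RPC endpoint, retrying on transient connection errors."""
    response = session.post(url, json=params)
    response.raise_for_status()
    return response.json()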
Dec 15 2020
Dec 14 2020
Dec 8 2020
Vault 0.5.0 packaged (includes a fix from @tenma about dropping the unused default configuration).
Vault configuration adapted on the puppet side to add the retry behavior.
Dec 7 2020
Nov 20 2020
Nov 3 2020
Nov 2 2020
Oct 30 2020
Oct 29 2020
Oct 28 2020
Oct 20 2020
This is unified now.
Oct 16 2020
Oct 15 2020
This was solved a long time ago ;)
May 11 2020
May 5 2020
Indeed, I forgot to adapt the vault after a refactoring of swh-core. I'll fix it.
I think the simplest way to fix this is to catch the RemoteException for 404 in the api_lookup function, restoring the previous behavior.
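A rough sketch of that fix, assuming RemoteException keeps a reference to the HTTP response (as the constructor call in the traceback below suggests); the import paths and the actual api_lookup signature in swh-web differ from this simplified version:

# Hedged sketch: turn a backend 404 (RemoteException) back into a
# NotFoundExc so the API layer can handle it as before.
from swh.core.api import RemoteException
from swh.web.common.exc import NotFoundExc

def api_lookup(lookup_fn, *args, notfound_msg="Object not found"):
    try:
        res = lookup_fn(*args)
    except RemoteException as e:
        # Restore the previous behavior: a 404 from the RPC backend
        # becomes a NotFoundExc instead of a generic 500.
        if e.response is not None and e.response.status_code == 404:
            raise NotFoundExc(notfound_msg)
        raise
    if res is None:
        raise NotFoundExc(notfound_msg)
    return res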
OK, I see what is wrong here. Based on the traceback below, this is due to changes in how 404 errors are handled server-side:
Traceback (most recent call last):
  File "/home/antoine/swh/swh-environment/swh-web/swh/web/api/apidoc.py", line 358, in documented_view
    response = f(request, **kwargs)
  File "/home/antoine/swh/swh-environment/swh-web/swh/web/api/views/vault.py", line 100, in api_vault_cook_directory
    res = _dispatch_cook_progress(request, "directory", obj_id)
  File "/home/antoine/swh/swh-environment/swh-web/swh/web/api/views/vault.py", line 28, in _dispatch_cook_progress
    request=request,
  File "/home/antoine/swh/swh-environment/swh-web/swh/web/api/views/utils.py", line 67, in api_lookup
    res = lookup_fn(*args)
  File "/home/antoine/swh/swh-environment/swh-web/swh/web/common/service.py", line 1112, in vault_progress
    raise e
  File "/home/antoine/swh/swh-environment/swh-web/swh/web/common/service.py", line 1109, in vault_progress
    return vault.progress(obj_type, obj_id)
  File "/home/antoine/swh/swh-environment/swh-vault/swh/vault/api/client.py", line 29, in progress
    return self.get("progress/{}/{}".format(obj_type, hex_id))
  File "/home/antoine/swh/swh-environment/swh-core/swh/core/api/__init__.py", line 294, in get
    return self._decode_response(response)
  File "/home/antoine/swh/swh-environment/swh-core/swh/core/api/__init__.py", line 352, in _decode_response
    self.raise_for_status(response)
  File "/home/antoine/swh/swh-environment/swh-core/swh/core/api/__init__.py", line 308, in raise_for_status
    raise RemoteException(payload="404 not found", response=response)
swh.core.api.RemoteException: 404 not found
May 1 2020
Feb 18 2020
Feb 14 2020
Ah, I think I understand the issue:
I wonder why requested objects sometimes don't show up on the vault status page. I think this might be because the "status page update" doesn't happen when the cooking was already requested in the past.
Feb 13 2020
Complete email:
Oct 7 2019
Pluggable compression has been implemented for all objstorage backends, which means we could:
- store the (compressed) bundles in an uncompressed objstorage on azure
- when a user requests the bundle:
  - generate a temporary URL (using BlobSharedAccessSignature.generate_blob), as sketched below
  - redirect to that temporary URL
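A sketch of what generating such a temporary download URL could look like with the azure-storage-blob 2.x API (the BlobSharedAccessSignature.generate_blob call mentioned above); account, container and blob names are placeholders, and newer SDK versions expose this as generate_blob_sas instead:

# Hedged sketch: build a time-limited (1h) read-only URL for a bundle blob.
from datetime import datetime, timedelta

from azure.storage.blob.models import BlobPermissions
from azure.storage.blob.sharedaccesssignature import BlobSharedAccessSignature

def temporary_bundle_url(account_name, account_key, container, blob_name):
    sas = BlobSharedAccessSignature(account_name, account_key)
    token = sas.generate_blob(
        container_name=container,
        blob_name=blob_name,
        permission=BlobPermissions.READ,
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    return (
        f"https://{account_name}.blob.core.windows.net/"
        f"{container}/{blob_name}?{token}"
    )

The webapp would then answer the bundle download request with an HTTP redirect to that URL instead of streaming the bytes itself.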
Sep 19 2019
Of course, the current bundles are double-compressed, which makes this... not great.
I wonder whether the best solution wouldn't be to just generate a redirect to a direct download URL from the azure bucket, using a temporary shared access signature.
The previous mail reply got truncated...
Hi Zack,
@sunweaver: the bundle is ready, in theory you should be able to obtain it like this:
$ wget https://archive.softwareheritage.org/api/1/vault/revision/85678b0d6c52d6fd0af50c8e493c74dd15a7115d/gitfast/raw/ -O 85678b0d6c52d6fd0af50c8e493c74dd15a7115d.gitfast.gz
$ git init
$ zcat 85678b0d6c52d6fd0af50c8e493c74dd15a7115d.gitfast.gz | git fast-import
I say "in theory" because (due to T885) downloads of large bundles are a bit flaky right now.
See T1964 for a concrete example where the lack of streaming causes problems (after the cooking, when the bundle is ready).
$ wget https://archive.softwareheritage.org/api/1/vault/revision/85678b0d6c52d6fd0af50c8e493c74dd15a7115d/gitfast/raw/
--2019-09-19 11:43:50--  https://archive.softwareheritage.org/api/1/vault/revision/85678b0d6c52d6fd0af50c8e493c74dd15a7115d/gitfast/raw/
Resolving archive.softwareheritage.org (archive.softwareheritage.org)... 128.93.193.31
Connecting to archive.softwareheritage.org (archive.softwareheritage.org)|128.93.193.31|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 539845226 (515M) [application/gzip]
Saving to: ‘index.html’
Sep 18 2019
I've bumped the limit to 1GB and sent a cook request again.
So, this is now actually working (@rdicosmo just tried it again) *but* the bundle stops being assembled shortly before finishing due to a maximum size limit of ~500 MB:
Aug 23 2019
In T1964#36396, @sunweaver wrote: @zack: Thanks for providing the tarball. However, I in fact need a tarred-up Git repo up to the moment the GPL was revoked, as I want to re-publish it via GitLab (or GitHub). Do you think it is possible to get a copy of that git repo any time soon?
@zack: the point about rocrail is that the GPL was revoked without the copyright holders' consent. And the most recent version isn't publicly available anymore these days.
@zack: Thanks for providing the tarball. However, I in fact need a tarred-up Git repo up to the moment the GPL was revoked, as I want to re-publish it via GitLab (or GitHub). Do you think it is possible to get a copy of that git repo any time soon?
@sunweaver: did you see my answer above?
Hi,
Hi,
I can totally reproduce this issue.
Jul 29 2019
Jul 26 2019
The issue came from the vault trying to retrieve the whole revision log in a single call to the storage API.
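For illustration only (not necessarily the actual fix): instead of one call covering the whole history, the log could be walked in fixed-size batches of revision_get() calls, along these lines:

# Hedged sketch: iterate over a revision log in batches, assuming the
# dict-based storage API of that time (revisions with "id" and "parents").
def iter_revision_log(storage, root_revision_id, batch_size=1000):
    seen = set()
    frontier = [root_revision_id]
    while frontier:
        batch, frontier = frontier[:batch_size], frontier[batch_size:]
        for revision in storage.revision_get(batch):
            if revision is None or revision["id"] in seen:
                continue
            seen.add(revision["id"])
            yield revision
            # Queue unvisited parents for a later batch.
            frontier.extend(
                parent for parent in revision["parents"] if parent not in seen
            )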
May 28 2019
Ack, I was going to update the task with your other comment; thanks for going the extra mile ;)
May 25 2019
my take: don't bother (see: T1716#32312)
In T1716#32249, @ardumont wrote: Webapp/cookers migrated to use the azure vault instance.
May 24 2019
Webapp/cookers migrated to use the azure vault instance.
May 23 2019
Heads up: I checked that the new vault's objstorage talks to azure blob storage just fine (credentials and all):
- used the docker environment with a vault plugged into the new azure blob storage
- loaded data in the docker env
- requested a cooking in the docker webapp
- checked that the cooking is ok (it is)
- checked that the download of the cooked bundle is ok (it is)
- checked the new blob in azure blob storage (it is there, and it is the same as the one from the local webapp).
May 22 2019
Rebase and plug to production branch
Adapt according to review:
Thanks for this change!
Piping octocatalog-diff into cat should drop the escape codes.
May 21 2019
So, as in D1495:
May 20 2019
May 17 2019
Created the db on prado on the secondary cluster (I normalized the user name from swhvault to swh-vault, same for the db name):
To be clear, that would mean moving the database on the secondary postgres cluster on prado.
May 16 2019
Considering the size of that database, and the fact that we don't have any provisions to automatically spin up a new database server, I think it would make more sense to repatriate it to our main postgres setup, rather than moving it to a new machine on azure.