Plus that documents it a bit in an automatic manner, so *thumbs up*.
Dec 1 2021
Nov 16 2021
Jul 29 2021
Jul 28 2021
Jul 27 2021
I decided to make the swh-graph container create the compressed graph itself before starting. That's the easiest way to use it AND to implement it IMO.
May 19 2021
Issue is now solved, closing this.
awesome, thanks :)
May 18 2021
and the build is green ;)
```
16:50:52 ============================= test session starts ==============================
16:50:52 platform linux -- Python 3.7.3, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
16:50:52 cachedir: .tox/py3/.pytest_cache
16:50:52 rootdir: /var/lib/jenkins/workspace/swh-docker-dev/docker
16:50:52 plugins: testinfra-6.3.0, testinfra-6.0.0
16:50:52 collected 7 items
16:50:52
16:50:52 tests/test_deposit.py .....                                              [ 71%]
16:53:13 tests/test_git_loader.py .                                               [ 85%]
```
thanks for having investigated that
So the docker_default network has not been removed since April 20, 2021; see the `docker network inspect docker_default` output below:
```
[
    {
        "Name": "docker_default",
        "Id": "a0145eb2f5b3f055bfafce7a88289346c4668f3a19f2544122267272cd96f7b3",
        "Created": "2021-04-20T02:57:51.141593622Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.28.0.0/16",
                    "Gateway": "172.28.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "docker",
            "com.docker.compose.version": "1.29.1"
        }
    }
]
```
May 12 2021
May 4 2021
If you face this issue, try restarting the containers using `docker-compose down` followed by `docker-compose up`.
Apr 8 2021
So either the vault is fine and the test needs to be improved, or the vault builds the tarball without taking the symlink case into account.
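The distinction at stake here (a symlink preserved as a symlink vs. dereferenced into a regular file) can be reproduced with the stdlib `tarfile` module. This is an illustrative sketch, not the vault's actual tarball-building code:

```python
import io
import os
import tarfile
import tempfile

# Build a directory containing a regular file and a symlink to it.
d = tempfile.mkdtemp()
with open(os.path.join(d, "target.txt"), "w") as f:
    f.write("hello")
os.symlink("target.txt", os.path.join(d, "link.txt"))

def member_types(dereference):
    """Tar the directory and report how each member was stored."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w", dereference=dereference) as tar:
        tar.add(d, arcname="root")
    buf.seek(0)
    with tarfile.open(fileobj=buf) as tar:
        return {
            m.name: "symlink" if m.issym() else "file" if m.isfile() else "dir"
            for m in tar.getmembers()
        }

print(member_types(dereference=False)["root/link.txt"])  # symlink
print(member_types(dereference=True)["root/link.txt"])   # file
```

Depending on which of the two behaviors the vault implements, a checksum-based test comparing the tarball against the original tree will either pass or fail on trees containing symlinks.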
Mar 29 2021
Mar 15 2021
@rendong951 Sorry, but we can't reproduce the issue on our end.
Feb 18 2021
Dec 31 2020
I also tried the command following your reply; the results are the same.
Dec 29 2020
It's not a DNS issue. The error is `Could not resolve hostname https:`. It shouldn't be trying to resolve `https:` at all; that's a URL scheme, not a domain name.
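A quick way to see how a malformed origin URL produces this symptom (the doubled-scheme URL below is a hypothetical example, not taken from the report): a URL parser treats everything between the scheme and the next `/` as the network location, so `https` lands in the hostname position and the SSH client then tries to resolve it.

```python
from urllib.parse import urlsplit

# Hypothetical malformed origin: an ssh:// prefix accidentally glued onto
# an https URL. The parser takes "https:" as the network location, so a
# client resolving the host will look up the name "https" and fail.
parts = urlsplit("ssh://https://example.com/repo.git")
print(parts.scheme)    # ssh
print(parts.hostname)  # https
```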
Thank you very much for your reply.
I followed your command.[1]
Dec 18 2020
Please wrap your stacktrace, excerpt of code, etc... with triple backquote before and after.
So this becomes more readable.
Thanks in advance.
Thank you very much for your reply.
I followed your guidance; now all the services are started, but when I save code in my Software Heritage platform, the status always fails. I checked the log:
```
swh-loader_1 | [2020-12-18 02:49:13,495: ERROR/ForkPoolWorker-1] Loading failure, updating to partial status
swh-loader_1 | Traceback (most recent call last):
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/dulwich/client.py", line 912, in fetch_pack
swh-loader_1 | refs, server_capabilities = read_pkt_refs(proto)
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/dulwich/client.py", line 215, in read_pkt_refs
swh-loader_1 | for pkt in proto.read_pkt_seq():
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/dulwich/protocol.py", line 277, in read_pkt_seq
swh-loader_1 | pkt = self.read_pkt_line()
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/dulwich/protocol.py", line 223, in read_pkt_line
swh-loader_1 | raise HangupException()
swh-loader_1 | dulwich.errors.HangupException: The remote server unexpectedly closed the connection.
swh-loader_1 |
swh-loader_1 | During handling of the above exception, another exception occurred:
swh-loader_1 |
swh-loader_1 | Traceback (most recent call last):
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/swh/loader/core/loader.py", line 318, in load
swh-loader_1 | more_data_to_fetch = self.fetch_data()
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/swh/loader/git/loader.py", line 239, in fetch_data
swh-loader_1 | self.origin.url, self.base_snapshot, do_progress
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/swh/loader/git/loader.py", line 174, in fetch_pack_from_origin
swh-loader_1 | progress=do_activity,
swh-loader_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/dulwich/client.py", line 914, in fetch_pack
swh-loader_1 | raise _remote_error_from_stderr(stderr)
swh-loader_1 | dulwich.errors.HangupException: ssh: Could not resolve hostname https: Temporary failure in name resolution
```
Dec 9 2020
Sep 28 2020
I have added the `--pull` option in the job configuration so the python image will always be pulled (if needed) during the build:
Sep 27 2020
It seems the python 3.7 image on the jenkins master is quite old:
```
root@thyssen:~# docker images | grep python
python    3.7    a4cc999cf2aa    16 months ago    929MB
```
I'm guessing it's because $VERSION_CODENAME is empty here: https://forge.softwareheritage.org/source/swh-environment/browse/master/docker/Dockerfile$3
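For reference, a minimal sketch of the kind of guard the Dockerfile could use (the helper name and the `buster` fallback are hypothetical, not taken from the actual Dockerfile): read `VERSION_CODENAME` from an os-release file and fall back to a pinned codename when the variable is absent, as it is on old base images.

```shell
# Hypothetical guard: source an os-release file and fall back to a pinned
# codename ("buster" is an assumed default) when VERSION_CODENAME is unset.
codename_from_os_release() (
  # Subshell body, so the sourced variables do not leak into the caller.
  . "$1"
  echo "${VERSION_CODENAME:-buster}"
)

# Simulate an os-release file that lacks VERSION_CODENAME.
tmp=$(mktemp)
printf 'ID=debian\nVERSION_ID=10\n' > "$tmp"
codename_from_os_release "$tmp"   # prints: buster
rm -f "$tmp"
```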
Sep 17 2020
possibly related to T2373.
Sep 16 2020
May 16 2020
It worked \m/ (that was it).
Hmph, should have changed the commit title ¯\_(ツ)_/¯
May 6 2020
Jan 27 2020
Probably doesn't anymore since we moved to confluent-kafka
Dec 11 2019
Resolved by D2424.
Dec 9 2019
Nov 26 2019
Closed by 9a98e8be06817055693671cfbe6917645171993e
Nov 20 2019
And it's fixed.
Ok, the stacktrace and the current indexer code (master) seem out of sync.
rDCIDXb2ff26454b37f3d332b90ff6a70a45b43dea6526 fixing that part of the journal client.
I failed to find the swh-docker-compose.logs file.
I finally found the reference of the node the build ran on though (because I keep forgetting that) [1]
Nov 19 2019
Nov 18 2019
Nov 12 2019
Indeed, the Dockerfile fix is much simpler. Let's hope they will fix the issue upstream (not sure regarding this comment).
We migrated away from doing that ages ago in favor of app factories, and I'd very much prefer we avoid reintroducing it: side effects, such as unconditionally reading configuration files when a module is imported, are very bad form.
Seems the simplest solution is to instantiate the WSGI application in the swh.*.api.server modules.
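To illustrate the pattern under discussion (the names and config layout below are hypothetical, not swh's actual API): an app factory defers configuration loading to an explicit call, instead of running it as a side effect of importing the module.

```python
# Hypothetical sketch of the app-factory pattern, stdlib only
# (the real swh services use their own RPC framework and config loading).

def load_config(path=None):
    # Stand-in for reading a config file: happens only when called,
    # never as an import-time side effect.
    return {"greeting": "hello from " + (path or "defaults")}

def make_app(config=None):
    """Build and return a WSGI application from an explicit config."""
    config = config or load_config()

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [config["greeting"].encode()]

    return app

# A server module can still expose a module-level `application` object for
# WSGI servers, but construction stays inside an explicit function call:
application = make_app(load_config())
```

Importing such a module is cheap and side-effect free until `make_app` is called, which is what makes the modules safely importable from tests and tooling.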
Ah, I guess that's the opposite then.
Works fine for me? Are you sure your docker image is up to date?
I was also looking at it.
Sep 6 2019
Sep 5 2019
Hm, I don't know how to close this issue, but it should be closed now.
Thx.
Thanks to this change (provided by @olasd) in the swh-docker-dev repository, tasks are executed:
```diff
diff --git a/conf/loader.yml b/conf/loader.yml
index 4a4fb54..0cc07e6 100644
--- a/conf/loader.yml
+++ b/conf/loader.yml
@@ -5,6 +5,7 @@ storage:
 celery:
   task_broker: amqp://guest:guest@amqp//
   task_modules:
+    - swh.loader.package.tasks
     - swh.loader.debian.tasks
     - swh.loader.dir.tasks
     - swh.loader.git.tasks
@@ -16,6 +17,7 @@ celery:
     - swh.deposit.loader.tasks
```
Jul 28 2019
May 22 2019
May 10 2019
Apr 13 2019
Apr 2 2019
Mar 25 2019
Let's call it done, even if the small dataset part has not been addressed.
Let's call it done, some minor parts may still need a bit of attention though.
Mar 17 2019
Mar 8 2019
Mar 6 2019
Closing this, as I had forgotten there is a configuration entry to force Django to serve the assets (https://forge.softwareheritage.org/rCDFD7b3213293ca1670a738d540cbba05e87e5cf6042).