Indeed, the Dockerfile fix is much simpler. Let's hope they fix the issue upstream (not sure regarding this comment).
Nov 12 2019
We migrated away from doing that ages ago in favor of app factories, and I'd very much prefer we avoid introducing it again: side effects, such as unconditionally reading configuration files when a module is imported, are very bad form.
Seems the simplest solution is to instantiate the WSGI application in the swh.*.api.server modules.
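The app-factory pattern discussed above can be sketched as follows. This is a minimal stdlib-only illustration, not swh's actual code; `make_app`, the `greeting` setting, and the `[app]` config section are all hypothetical names chosen for the example:

```python
import configparser


def make_app(config_path=None):
    """App factory: configuration is read only when the factory is
    called, never as an import-time side effect."""
    config = {"greeting": "hello"}
    if config_path is not None:
        parser = configparser.ConfigParser()
        parser.read(config_path)
        if "app" in parser:
            config.update(parser["app"])

    def app(environ, start_response):
        # Minimal WSGI callable closing over this instance's config.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [config["greeting"].encode()]

    return app


# The server module (e.g. an swh.*.api.server module) instantiates the
# application; merely importing the module that defines make_app() above
# reads no configuration at all.
application = make_app()
```

The point is that each importable module stays side-effect free, and only the entry point that actually serves requests decides when and from where configuration is loaded.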
Ah, I guess that's the opposite then.
Works fine for me? Are you sure your docker image is up to date?
I was also looking at it.
Sep 6 2019
Sep 5 2019
Hm, I don't know how to close this issue, but it should be closed now.
Thx.
Thanks to this change (provided by @olasd) in the swh-docker-dev repository, tasks are executed:
diff --git a/conf/loader.yml b/conf/loader.yml
index 4a4fb54..0cc07e6 100644
--- a/conf/loader.yml
+++ b/conf/loader.yml
@@ -5,6 +5,7 @@ storage:
 celery:
   task_broker: amqp://guest:guest@amqp//
   task_modules:
+    - swh.loader.package.tasks
     - swh.loader.debian.tasks
     - swh.loader.dir.tasks
     - swh.loader.git.tasks
@@ -16,6 +17,7 @@ celery:
     - swh.deposit.loader.tasks
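The reason the diff fixes task execution: a worker only knows about tasks defined in modules it has actually imported, and `task_modules` is the list of modules it imports at startup. A rough stdlib-only illustration of that mechanism (`load_task_modules` is a hypothetical helper, not swh or Celery code):

```python
import importlib


def load_task_modules(module_names):
    """Import each configured task module so the tasks it defines get
    registered with the worker. A module missing from the list (like
    swh.loader.package.tasks before the diff above) is never imported,
    so the worker cannot execute its tasks."""
    return [importlib.import_module(name) for name in module_names]
```

For illustration, calling it with stdlib module names such as `["json", "math"]` imports and returns those modules; in the real configuration the names are the swh task modules listed under `task_modules`.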
Jul 28 2019
May 22 2019
May 10 2019
Apr 13 2019
Apr 2 2019
Mar 25 2019
Let's call it done, even if the small dataset part has not been addressed.
Let's call it done; some minor parts may still need a bit of attention though.
Mar 17 2019
Mar 8 2019
Mar 6 2019
Closing this, as I forgot there is a configuration entry to force the serving of assets by Django (https://forge.softwareheritage.org/rCDFD7b3213293ca1670a738d540cbba05e87e5cf6042).
Feb 25 2019
Feb 20 2019
status:
- runner: fine (no crash, no restart)
- listener: fine (same)
- workers: fine (same)
Feb 19 2019
WIP, as the new version has been deployed (runner, listener, workers, etc.).
Let's see if the errors still occur.
I've done the upgrade on saatchi and restarted both listener and runner. I've removed the runner restart from the saatchi crontab.
I've pushed an updated kombu to our repository.
Feb 17 2019
A bunch of celery workers (loader*, lister*) indeed show a ConnectionResetError stack trace (not necessarily the same one):
Feb 16 2019
As per our pair-programming session yesterday, I think we can now reproduce this in production (with the runner at least).
Feb 5 2019
Jan 28 2019
Jan 24 2019
I confirm that, so far, I do not see ConnectionResetError: [Errno 104] Connection reset by peer or BrokenPipeError: [Errno 32] Broken pipe in the runner logs with kombu from git master.