Reopening as I'm still refactoring/cleaning up more modules.
- Queries
- All Stories
- Search
- Advanced Search
- Transactions
- Transaction Logs
Advanced Search
Jul 10 2020
Jul 9 2020
Jul 8 2020
Jul 7 2020
In the end, it's more dead code, since it's only exercised when the storage used is an in-memory instance.
That is no longer the case; tests now use a pg-storage instance.
Jul 6 2020
I expect this has been fixed now...
May 19 2020
Feb 6 2020
Jan 29 2020
Jan 22 2020
Oct 1 2019
Sep 10 2019
Sep 7 2019
Aug 22 2019
Jul 3 2019
May 23 2019
May 21 2019
Feb 16 2019
Heads up.
Nov 1 2018
Might be related to T1156.
Oct 20 2018
Oct 19 2018
Oct 15 2018
Oct 10 2018
(There is a verifier module which I don't use.)
Oct 4 2018
Oct 2 2018
As per the D409#8432 conclusion.
Oct 1 2018
cc: @douardda, just because we discussed this today :)
Sep 12 2018
Sep 11 2018
Aug 3 2018
The first pass was completed a while back.
Jul 26 2018
Jul 24 2018
I forgot to reference the task in commit rDLDHGdb2803207a2934da4665379c12224f9eb90e8995, which fixes the issue.
Jul 19 2018
to correct the revisions...
T1156 created for loading the hg origins once again
Thanks for spotting. We also need a separate task to correct the revisions that were already loaded in the archive. Can you please file it? (tag "archive content")
Jun 19 2018
Mar 21 2018
$ cat ~/.config/swh/kibana/query.yml
indexes:
- swh_workers-2018.03.*
Grunt, we are missing information again.
It was supposed to be fixed.
Why are no errors reported at all in the logs? (Or any logs at all, for that matter... even after removing all filters, they seem to stop around the 7th of March 2018.)
Current status, the queue is empty.
Mar 16 2018
Mar 14 2018
Finally, rescheduled using swh-scheduler.
Heading towards T986.
As in https://forge.softwareheritage.org/T879#16396, a limit of 2GiB on dump size was used to separate origins.
The current lists are stored at:
Mar 9 2018
Wrapping up:
- Loaders (swh-worker@swh_loader_{something}.service) are now part of a systemd slice that limits their memory usage (up to 90%); see the sketch after this list. [1]
- Loaders can now use a /tmp dedicated to their systemd service. That permits automatically cleaning that /tmp when the service restarts. This is activated for the svn, mercurial and deposit loaders. [2]
- Loaders of the same type can clean up after one another (if some were killed and did not have time to finish their jobs). [3]
- Relatedly, loaders now deal properly with a failure in the prepare phase (previously they neither cleaned up properly nor updated the visit status). [4]
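For reference, a minimal sketch of what such a slice + per-service /tmp setup can look like (the unit name, paths and exact directives are illustrative, not necessarily what is deployed):

  # swh-loaders.slice (illustrative name)
  [Unit]
  Description=Slice grouping the SWH loader services

  [Slice]
  # cap the memory of everything in the slice at 90% of physical RAM
  MemoryMax=90%

  # drop-in for a loader service, e.g.
  # swh-worker@swh_loader_svn.service.d/override.conf
  [Service]
  Slice=swh-loaders.slice
  # dedicated /tmp for the service, wiped automatically when it restarts
  PrivateTmp=true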
Mar 7 2018
The gist of this is:
- separate a prepare_origin_visit method from the prepare method (a sketch follows this list)
- prepare_origin_visit is an adapter method to set up origin/visit data (loader dependent, because loaders don't all take the same parameter structure...). This could fail (production issues), but only in extreme cases.
- prepare is a step that depends on the loader's logic, but is independent from the origin preparation (this can fail, and that's what this issue is all about).
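Roughly, the resulting structure looks like the sketch below. prepare_origin_visit and prepare are the methods discussed above; the class name, load(), update_visit_status() and cleanup() are illustrative placeholders, not the actual swh-loader-core API:

  class BaseLoader:  # illustrative name
      def prepare_origin_visit(self, **kwargs):
          """Set up origin/visit data. Loader dependent (parameter structures
          differ); should only fail in extreme production cases."""
          raise NotImplementedError

      def prepare(self, **kwargs):
          """Loader-specific preparation (e.g. cloning the remote repository).
          Independent from the origin/visit setup; this is the step that can
          legitimately fail."""
          raise NotImplementedError

      def load(self, **kwargs):
          # the origin/visit exist before anything that can realistically fail
          self.prepare_origin_visit(**kwargs)
          try:
              self.prepare(**kwargs)
              # ... fetch and store the actual data ...
          except Exception:
              # the visit already exists, so its status can be updated and
              # temporary resources released even on failure
              self.update_visit_status('partial')  # illustrative helper
              raise
          finally:
              self.cleanup()  # illustrative helper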
Feb 24 2018
Feb 23 2018
Status:
- [DONE] backup
- [IN-PROGRESS] Clean up
Feb 21 2018
Thanks for the heads up.
FWIW the backup has now completed.
It seems like the biggest problem is/was
Can we associate the name of the temporary storage directory for a load with that loader's pid, and then make every new loader instance check the existing temp storage dirs during init? If a storage directory exists for a process that no longer exists (because the process was killed), then it can be deleted.
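Something along these lines might work, assuming the temp directories live under a common root and embed the loader's pid in their name (the root path, naming scheme and function name below are illustrative):

  import os
  import re
  import shutil

  TMP_ROOT = "/tmp/swh-loaders"                # hypothetical common root
  DIR_PATTERN = re.compile(r"^loader-(\d+)$")  # hypothetical naming scheme

  def reap_stale_tmp_dirs(tmp_root=TMP_ROOT):
      """Delete temp dirs whose owning loader process no longer exists."""
      for entry in os.listdir(tmp_root):
          match = DIR_PATTERN.match(entry)
          if not match:
              continue
          pid = int(match.group(1))
          try:
              os.kill(pid, 0)            # signal 0: existence check only
          except ProcessLookupError:     # process is gone, dir is stale
              shutil.rmtree(os.path.join(tmp_root, entry), ignore_errors=True)
          except PermissionError:        # pid exists but belongs to another user
              pass

  # a new loader would call reap_stale_tmp_dirs() during init, then create
  # its own directory, e.g. os.path.join(TMP_ROOT, "loader-%d" % os.getpid())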
I worry that RAM is way more constrained than disk space is. It seems like the biggest problem is/was