OK, got it. We still want a patch for my first question, which was to make the timeout value configurable.
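As a minimal sketch of what "configurable timeout" could look like: a helper that reads the timeout from a loaded config mapping and falls back to a default. Note that both the `upload_timeout` key name and the default value are assumptions for illustration, not actual swh-deposit settings.

```python
# Hypothetical sketch: pick the upload timeout up from configuration,
# falling back to a default when the key is absent.
# "upload_timeout" and the 300s default are assumed names/values,
# not real swh-deposit configuration keys.
DEFAULT_UPLOAD_TIMEOUT = 300  # seconds (assumed default)

def get_upload_timeout(config: dict) -> int:
    """Return the configured upload timeout in seconds, or the default."""
    return int(config.get("upload_timeout", DEFAULT_UPLOAD_TIMEOUT))
```

With this in place, a user whose deployment needs long uploads would simply set `upload_timeout` in their config file instead of patching the code.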
Feb 24 2022
Jan 18 2022
Jan 17 2022
Er, yeah, the deposit isn't designed for archives this big. You should probably host your tarballs somewhere and point the archive loader at them instead.
Jan 15 2022
Another problem is the swh-deposit client: when I use the command below to upload a large archive (16 GB), it consumes more than 40 GB of memory. This is also a big problem for the client. I hope swh could automatically split large archives :)
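The memory blow-up described above is what happens when a client reads the whole archive into RAM before sending it. A streaming approach keeps memory flat regardless of archive size; here is a generic sketch (not swh-deposit's actual code) that processes a file in fixed-size chunks, shown here computing a checksum, but the same pattern applies to a chunked HTTP upload:

```python
# Generic streaming sketch, NOT swh-deposit's implementation: read the
# archive in fixed-size chunks so memory use stays at one chunk,
# however large the file is.
import hashlib

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per read keeps memory use flat

def iter_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield the file at `path` as fixed-size binary chunks."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

def checksum_streaming(path):
    """SHA-256 of the file, never holding more than one chunk in memory."""
    h = hashlib.sha256()
    for chunk in iter_chunks(path):
        h.update(chunk)
    return h.hexdigest()
```

A 16 GB tarball processed this way would peak at roughly 8 MiB of buffer memory rather than tens of gigabytes.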
Thanks for helping me open this issue. In my usage scenario, I need to upload packages perhaps greater than 10 GB to the deposit, which raises a timeout issue. I used this guide to deploy my environment (https://docs.softwareheritage.org/devel/getting-started.html#getting-started), and my server configuration is 16 cores / 64 GB RAM / 200 GB disk. I hope this timeout value can be changed via a configuration file, because the upload time depends on each user's deployment environment.
Dec 6 2021
Dec 3 2021
After some fighting to untangle the mess we had in the scheduling DBs:
- wrong task type used
- wrong data format in old entries
What a mess! The existing data in both staging and production are not in the expected
shape for the loader, hence the failing loads.
Nov 4 2021
Sep 23 2021
Sep 3 2021
May 27 2021
Your diff sounds quite sufficient.
May 26 2021
Or even make it the archive loader's default behavior (append previously seen branches
from earlier snapshots/visits). As discussed, I'm wondering whether an archive loader (gnu
or cran) would not benefit from always displaying previously seen branches (whether
they are still present in the main API listing we currently visit or not).
This could be implemented by adding a new option to the loader.
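The "append previously seen branches" idea above can be sketched as a simple merge of two branch mappings. This is illustrative only: real swh loaders build `Snapshot`/`SnapshotBranch` model objects, whereas here branches are plain dicts from branch name to target, and the function name is hypothetical.

```python
# Illustrative sketch (assumed names, plain dicts instead of swh model
# objects): keep every branch from the previous snapshot that the
# current listing no longer mentions, while letting current targets win.
def merge_with_previous_branches(previous: dict, current: dict) -> dict:
    """Return the current branches plus any branch only seen previously.

    Branches present in both mappings keep their *current* target.
    """
    merged = dict(previous)   # start from everything seen before
    merged.update(current)    # current listing overrides and extends
    return merged
```

For example, a release branch that disappeared from the upstream GNU or CRAN listing would still appear in the new snapshot, pointing at its previously archived target.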
May 4 2021
Apr 28 2021
Apr 23 2021
Now deployed in prod:
New swh.loader.core deployed in staging.
Apr 19 2021
Apr 15 2021
Apr 6 2021
If you remember the crash times (.zsh_history?), we could find a range of candidate SWHIDs...
The migration script has now run to completion (took around a week).
Mar 30 2021
I've deployed the extid schema changes on all storages, and I've started the migration script on getty.
Now it is. (The mercurial loader technically doesn't use ExtID internally, but it already passes nodeids, which are close enough.)
Mar 29 2021
Actually, this is only solved for package loaders.