After upgrading pergamon and debugging through Sentry and the CLI, the deposit icinga check is back on track.
Re-triggered the icinga checks there.
Feb 25 2022
- Fix loader.core debian build [1]
In T3976#79630, @olasd wrote: https://opam.ocaml.org/doc/FAQ.html#Why-does-opam-require-bwrap
"If needed, for special cases like unprivileged containers, sandboxing can be disabled on opam init with the --disable-sandboxing flag (only for non-initialised opam)".
I think that would make sense, as we never execute code from the opam root, we only read metadata files.
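For reference, the flag mentioned in the opam FAQ is passed at init time, and only works on a not-yet-initialised opam root; a minimal sketch:

```shell
# Initialise a fresh opam root with bwrap sandboxing disabled
# (valid only when the opam root has not been initialised yet).
opam init --disable-sandboxing --bare -y
```

Since we only read metadata files from the opam root and never execute code from it, disabling the sandbox here should be safe.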
Feb 24 2022
- Fix deposit debian build
Jan 18 2022
OK, got it. We still want a patch for my first question, which is making the timeout value configurable.
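A minimal sketch of what "making the timeout configurable" could look like, reading the value from the environment with a fallback default; the variable name and default are assumptions for illustration, not the actual swh-deposit configuration keys:

```python
import os

# Default timeout in seconds when nothing is configured
# (the value 120 is an illustrative assumption).
DEFAULT_TIMEOUT = 120

def get_timeout(env=os.environ):
    """Return the configured timeout, falling back to the default.

    SWH_DEPOSIT_TIMEOUT is a hypothetical variable name used here
    only to sketch the pattern of config-over-hardcoding.
    """
    raw = env.get("SWH_DEPOSIT_TIMEOUT")
    return int(raw) if raw else DEFAULT_TIMEOUT

print(get_timeout({}))                              # → 120
print(get_timeout({"SWH_DEPOSIT_TIMEOUT": "600"}))  # → 600
```

The same pattern applies if the value comes from a YAML config file instead of the environment.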
Jan 17 2022
Er yeah, the deposit isn't designed for archives this big. You should probably host your tarballs somewhere and point the archive loader at them instead.
Jan 15 2022
Another problem is the swh-deposit client: when I use the command below to upload a large archive (16 GB), it consumes more than 40 GB of memory, which is also a big problem for the client. I hope swh could automatically split large archives :)
Thanks for helping me open this issue. In my usage scenario, I need to upload packages perhaps greater than 10 GB to the deposit, which raises a timeout issue. I used this guide to deploy my environment (https://docs.softwareheritage.org/devel/getting-started.html#getting-started), and my server configuration is 16 cores / 64 GB RAM / 200 GB disk. I hope this timeout value could be changed in a configuration file, because the upload time depends on each user's deployment environment.
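The memory blow-up described above is the classic cost of reading a whole archive into memory before sending it. A hypothetical sketch of the alternative, streaming the file in fixed-size chunks so memory use stays bounded; the chunk size and function name are illustrative, not part of the swh-deposit client API:

```python
import io

# 8 MiB per chunk: memory use stays near the chunk size regardless
# of how large the archive is (illustrative value).
CHUNK_SIZE = 8 * 1024 * 1024

def iter_chunks(fileobj, chunk_size=CHUNK_SIZE):
    """Yield successive chunks of a file object until EOF."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# A 20-byte in-memory "archive" read in 8-byte chunks:
data = io.BytesIO(b"x" * 20)
sizes = [len(c) for c in iter_chunks(data, 8)]
print(sizes)  # → [8, 8, 4]
```

An HTTP client can send such a generator as a chunked request body, which avoids ever holding the 16 GB archive in memory at once.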
Dec 3 2021
After some fighting to untangle the mess we had in the scheduling DBs:
- wrong task type used
- wrong data format in old entries
What a mess! The existing data in both staging and production are not in the expected shape for the loader, hence the failing loads [1].