That sounds like a good idea, but I'd like to see how that renders first. Can you
please add a screenshot of the result? Thanks in advance.
Feed Advanced Search
Aug 22 2022
Aug 5 2022
ardumont added a parent task for T4371: Deploy swh-scrubber on all storage instances: T3841: regularly scrub all the data stores of swh.
ardumont committed rSPSITEeb7fb4a541c2: worker-large: Decrease concurrency to 5 tasks (authored by ardumont).
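(For context, the swh workers are Celery-based, so "concurrency" here is the Celery worker pool size. A minimal sketch of the setting involved, with placeholder app and broker names; the real value is managed through the rSPSITE puppet configuration, not set in code like this:)

```python
# Minimal sketch, assuming a Celery-based worker; the app name and broker URL
# are placeholders, not the actual swh deployment values.
from celery import Celery

app = Celery("swh_worker", broker="amqp://guest@localhost//")

# Cap the worker pool at 5 concurrent tasks, as in the commit above.
app.conf.worker_concurrency = 5
```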
ardumont committed rSPSITEb7f5673705e6: indexer: Increase origin intrinsic metadata journal client instances (authored by ardumont).
ardumont committed rSPSITE46a294ac097b: Rename indexer journal client group id to origin intrinsic metadata (authored by ardumont).
ardumont committed rSPSITE508fabb650c8: indexer: Ensure old configuration files are purged when not needed (authored by ardumont).
Increase large workers' memory
the fix is in D8198
ardumont renamed T4399: OIN -> SWH deposit connection from OID -> SWH deposit connection to OIN -> SWH deposit connection.
Blacken module
ardumont committed rDDEP640e662eb9a1: swh.deposit.config: Add types and coverage (authored by ardumont).
ardumont committed rSPSITEf66897270610: idx-journal-client: Fix indexer type so cli can start properly (authored by ardumont).
ardumont committed rSPSITEc0157ae8116e: content-indexer: Add twice content indexer instances (authored by ardumont).
ardumont committed rSPSITE44bfe087552f: indexer: allow multiple indexer journal client instances (authored by ardumont).
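(The journal-client commits above all scale consumption horizontally. The swh journal is Kafka-backed: consumers sharing a group id split a topic's partitions among themselves, so adding instances under one group id increases throughput, while renaming the group id starts a fresh consumer group. A minimal sketch of that mechanism, with illustrative broker, group, and topic names:)

```python
# Sketch of why multiple journal-client instances can share the load: Kafka
# consumers with the same group.id split the topic's partitions between them.
# Broker, group, and topic names below are illustrative, not production values.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",    # placeholder broker
    "group.id": "origin-intrinsic-metadata",  # shared by all instances
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["swh.journal.objects.origin_intrinsic_metadata"])  # illustrative topic

while True:  # each instance consumes its share of the partitions
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # ... process one journal message here ...
```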
Use correct range of commits for the rebase update
Fix docstring
Add missing coverage
ardumont accepted D8194: cassandra: Make origin_visit_status_get_random's interval consistent with postgresql.
not wrong there ;)
neat
ardumont published D8197: Fix crash of test_*_arbitrary when given objects with the same id for review.
ardumont published D8196: Fix crash of test_*_arbitrary when given objects with the same id for review.
Rebase
ardumont committed rSPSITE6384e5e8ce90: worker-large: Make the standard queue consumption a bit faster (authored by ardumont).
ardumont committed rDWAPPS708fb873223a: add-forge-now: Display forge url as link in moderation view (authored by ardumont).
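(A hedged illustration of the change: in Django, a raw URL becomes a safe clickable link via format_html, which escapes its arguments. The helper below is illustrative, not the actual swh-web code:)

```python
# Illustrative only: rendering a raw forge URL as a safe anchor tag in Django.
# This is not the actual swh-web implementation.
from django.utils.html import format_html

def forge_url_as_link(forge_url: str) -> str:
    # format_html escapes its arguments, so the URL cannot inject markup.
    return format_html('<a href="{}">{}</a>', forge_url, forge_url)
```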
On the deposit staging instance, create the user with the proper information [1].
Check everything is fine [2] (if it's not, amend directly in the db).
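(A hedged sketch of the check in [2], as run from a Django shell on the deposit instance; the DepositClient model path and field names are assumptions rather than the verified swh.deposit API:)

```python
# Hedged sketch: run from a Django shell on the deposit staging instance.
# The model path and field names are assumptions, not the verified swh.deposit API.
from swh.deposit.models import DepositClient  # assumed model path

client = DepositClient.objects.get(username="some-provider")  # placeholder username
print(client.provider_url)  # assumed field; eyeball it, amend in the db if wrong
```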
makes sense, thx.
Aug 4 2022
In T4400#89007, @rdicosmo wrote:
one question inline.
Adapt according to review, and also unify one missed add-forge-now view with this change
ardumont added inline comments to D8178: add-forge-now: Display forge url as link in moderation view.
ardumont changed the status of T4266: prod: Deploy swh-loader-core 3.5.0 on staging from Invalid to Resolved.
worth opening a dedicated forge issue
Done. T4423
open an upstream issue
ardumont raised the priority of T4423: Gogs pagination API breaks because of fatal repos from Low to Normal.
ardumont renamed T4387: Scrubber processes getting killed by OOM killer from scrubber process killed by OOM killer to Scrubber processes getting killed by OOM killer.
I've ended up dropping the ballooning for that node, since I've deployed twice as many services as before to scrub somerset as well [1].
production/scrubber1: Drop ballooning
ardumont moved T4371: Deploy swh-scrubber on all storage instances from in-progress to deployed/landed/monitoring on the System administration board.
Deployed both in staging and production [1]:
ardumont committed rSPSITE77bcb4a01d1c: scrubber/checker: Fix configuration filename to expected .yml (authored by ardumont).
ardumont committed rSPSITEf36f24609578: scrubber: Fix missing prefix Environment variable setup (authored by ardumont).
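(Both fixes above touch how the checker finds its configuration. A hedged sketch of the lookup involved, assuming swh-core's SWH_CONFIG_FILENAME convention; the path and keys below are placeholders, not the deployed setup:)

```python
# Illustrative sketch: resolve the checker's YAML config (note the .yml
# extension) from the service environment. Paths and keys are placeholders.
import os
import yaml

config_path = os.environ.get(
    "SWH_CONFIG_FILENAME", "/etc/softwareheritage/scrubber/checker.yml"
)
with open(config_path) as f:
    config = yaml.safe_load(f)
```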
ardumont changed the status of T4371: Deploy swh-scrubber on all storage instances from Open to Work in Progress.
ardumont updated the summary of D8181: scrubber: Make service parametric on the db instance to scrub.
ardumont committed rSPSITEaa64d3ee19d2: scrubber: Make service parametric on the db instance to scrub (authored by ardumont).
ardumont updated the summary of D8181: scrubber: Make service parametric on the db instance to scrub.
Activate service as well
ardumont added inline comments to D8181: scrubber: Make service parametric on the db instance to scrub.
ardumont added inline comments to D8181: scrubber: Make service parametric on the db instance to scrub.
ardumont updated the test plan for D8181: scrubber: Make service parametric on the db instance to scrub.
Adapt according to review
ardumont added inline comments to D8181: scrubber: Make service parametric on the db instance to scrub.
ardumont committed rDWAPPS3e5d552a3283: Drop ctags indexer references and hidden api using it (authored by ardumont).
Drop ctags indexer references and hidden api using it
ardumont moved T4371: Deploy swh-scrubber on all storage instances from Backlog to Weekly backlog on the System administration board.
Still happening:
```
[ +0.063819] Killed process 1216634 (swh) total-vm:262252kB, anon-rss:206288kB, file-rss:724kB, shmem-rss:0kB
[ +0.077492] oom_reaper: reaped process 1216634 (swh), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[Jul27 05:29] journalbeat invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
[ +0.000002] journalbeat cpuset=/ mems_allowed=0
[ +0.000023] CPU: 2 PID: 688 Comm: journalbeat Not tainted 4.19.0-21-amd64 #1 Debian 4.19.249-2
--
[ +0.000002] [1250623] 997 1250623 3349 70 65536 4 0 systemctl
[ +0.000001] Out of memory: Kill process 1199696 (swh) score 82 or sacrifice child
[ +0.063517] Killed process 1199696 (swh) total-vm:201224kB, anon-rss:47784kB, file-rss:0kB, shmem-rss:0kB
[ +0.066909] oom_reaper: reaped process 1199696 (swh), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[Jul27 06:12] kworker/0:0: page allocation failure: order:0, mode:0x6310ca(GFP_HIGHUSER_MOVABLE|__GFP_NORETRY|__GFP_NOMEMALLOC), nodemask=(null)
[ +0.000002] kworker/0:0 cpuset=/ mems_allowed=0
[ +0.000011] CPU: 0 PID: 1244099 Comm: kworker/0:0 Not tainted 4.19.0-21-amd64 #1 Debian 4.19.249-2
--
[ +0.000001] [1257717] 997 1257717 1369 35 49152 0 0 sudo
[ +0.000005] Out of memory: Kill process 1081929 (swh) score 136 or sacrifice child
[ +0.102631] Killed process 1081929 (swh) total-vm:303484kB, anon-rss:178744kB, file-rss:4kB, shmem-rss:0kB
[ +0.132669] oom_reaper: reaped process 1081929 (swh), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[Jul27 09:29] swh invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
[ +0.000002] swh cpuset=/ mems_allowed=0
[ +0.000010] CPU: 2 PID: 1209826 Comm: swh Not tainted 4.19.0-21-amd64 #1 Debian 4.19.249-2
--
[ +0.000001] [1264290] 0 1264290 3544 63 65536 0 0 check_journal
[ +0.000001] Out of memory: Kill process 1112652 (swh) score 70 or sacrifice child
[ +0.060327] Killed process 1112652 (swh) total-vm:195560kB, anon-rss:20828kB, file-rss:436kB, shmem-rss:0kB
[ +0.066241] oom_reaper: reaped process 1112652 (swh), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
```
ardumont committed rSPREf8662af5f80f: production/scrubber1: Bump ballooning to 2G (authored by ardumont).
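(Ballooning here is KVM memory ballooning on the Proxmox hypervisor. A hedged sketch of bumping the balloon target to 2G through the Proxmox API with the proxmoxer library; host, credentials, node, and vmid are placeholders, and the actual change above went through the rSPRE infra repository:)

```python
# Hedged sketch using the proxmoxer library; host, credentials, node, and vmid
# are placeholders. The real change was made in the rSPRE infra repository.
from proxmoxer import ProxmoxAPI

prox = ProxmoxAPI("proxmox.example.org", user="root@pam",
                  password="secret", verify_ssl=True)

# Set the balloon target to 2048 MiB (2G) for the scrubber1 VM.
prox.nodes("node1").qemu(100).config.put(balloon=2048)
```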
ardumont committed rSPSITE6471a0e90203: graphql: Fix icinga check string for graphql instance (authored by ardumont).
Let's try and see if that fixes it then.
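(The icinga check mentioned above boils down to an HTTP probe asserting an expected marker string in the response body. An illustrative stand-in, with placeholder URL and marker; the production setup presumably uses icinga's own http check rather than a script like this:)

```python
# Illustrative stand-in for an HTTP "check string" probe: fetch the endpoint
# and verify an expected marker in the body. URL and marker are placeholders.
import sys
import urllib.request

def check_http_string(url: str, expected: str) -> int:
    body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    if expected in body:
        print(f"OK - found {expected!r}")
        return 0  # icinga/nagios OK
    print(f"CRITICAL - {expected!r} not found")
    return 2  # icinga/nagios CRITICAL

if __name__ == "__main__":
    sys.exit(check_http_string("https://graphql.example.org/", "graphql"))
```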