lgtm, assuming those were installed in the swh-docs somewhere, right?
The task_acks_late option was disabled before the new scheduler migration.
It may be worth reactivating it for the high_priority_loaders, as they handle tasks created by users.
They are also monitored by the end to end probes.
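For illustration, a minimal sketch of what re-enabling late acknowledgement could look like. This assumes the configuration is expressed as a plain settings dict (the dict name is hypothetical); `task_acks_late` and `task_reject_on_worker_lost` are standard Celery setting names.

```python
# Hypothetical configuration sketch for the high-priority loader workers.
HIGH_PRIORITY_LOADER_CONFIG = {
    # Acknowledge the message only after the task has run, so a worker
    # crash mid-task makes the broker redeliver the loading task instead
    # of silently dropping a user-submitted request.
    "task_acks_late": True,
    # Also requeue tasks whose worker connection was lost abruptly
    # (killed, OOM, ...), which pairs naturally with late acks.
    "task_reject_on_worker_lost": True,
}
```

With late acks disabled (the current state), a message is acknowledged as soon as a worker receives it, so a crash during ingestion loses the task; the end-to-end probes would then flag the missing result.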
Drop fix doc commit from the diff
The 2nd commit about the docs should fix the master build btw!
I'll remove it from the diff and commit it first.
- Adapt according to suggestion (rephrase error message)
- avoid error message duplication
- Add test case around the new conditional
- Adapt according to link suggestion from the other diff (dedicated commit btw)
Mon, Jan 17
On belvedere: do the dump and send it to the admin db node:
root@belvedere:~# DUMP_NAME=/srv/softwareheritage/postgres/keycloak-20220117.sql.gz; time sudo -i -u postgres pg_dump --clean --if-exists keycloak | pigz -c - > $DUMP_NAME && scp -i .ssh/id_ed25519.borg $DUMP_NAME dali.internal.admin.swh.network:/srv/postgresql/14/main/
lgtm, but I'm not sure I understood everything (code-wise).
- Those tasks were updated with a "failed" status
- Their associated scheduler task ids were archived recently
- They have been rescheduled through the save code now cli
- Their ingestion is ongoing and their associated status should update once done
Fri, Jan 14
one suggestion inline.
Thu, Jan 13
Open the traffic flows in the firewall
- dns updated: dali and alias db1 (.internal.admin.swh.network) are ok
Dumped hedgedoc and restored it into the vagrant VM without issues.
Update with recent changes (bump shared_mem to 8G, add network authorization, ...)
Actually use the right folder (as configured by the puppet manifest):
root@dali:~# zfs destroy data/postgresql
root@dali:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   156K   193G    24K  /data
root@dali:~# zfs create -o mountpoint=/srv/postgresql/14/main -o atime=off -o relatime=on -o compression=lz4 data/postgresql
- build the vm dali (terraform apply)