Build is green
All Stories
Apr 15 2021
Fix typos and add a missing status check
Closing this issue, as the disk-replacement quota will be used up after T3243.
The disk on db1 looks stable for the moment. It will be removed from the ZFS pool if problems occur.
Related to T2117
The compression and transfer of the dump from the clearly-defined server to saam are in progress.
ssh clearly-defined.internal.staging.swh.network "pbzip2 -c -9 /srv/softwareheritage/clearlydefined/clearcode-dump-2021-01-05/clearcode_backup-20210105-2111.dump" | pv | cat > clearcode_backup-20210105-2111.dump.bz2
Once this is done, and if the compression ratio is good, the compressed archive will be added to the public annex in a dataset/clearly-defined directory.
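Whether the ratio is "good" can be checked with a quick size comparison once the transfer finishes. A minimal, self-contained sketch (it uses a throwaway local file instead of the real clearcode dump, plain bzip2 instead of pbzip2, and GNU stat):

```shell
# Self-contained sketch of a compression-ratio check; dump.orig stands in
# for the real dump, which is not available here.
printf 'some repeated data %.0s' $(seq 1 1000) > dump.orig
bzip2 -9 -c dump.orig > dump.bz2
orig=$(stat -c %s dump.orig)   # original size in bytes (GNU stat)
comp=$(stat -c %s dump.bz2)    # compressed size in bytes
echo "compressed to $(( comp * 100 / orig ))% of the original size"
```

On the real dump, substitute the actual file names from the transfer command above.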
According to https://intranet.softwareheritage.org/wiki/Outboarding:
- tg1999 unix account disabled
- TG1999 removed from the interns members in the forge; no tasks were assigned to the user
- He was not on the #swh-team channel
- He was not a member of the swh-team ml
- VPN certificate revoked
root@louvre:/etc/openvpn/keys# ./easyrsa revoke tg1999
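For completeness, with EasyRSA 3 a revocation only takes effect on the OpenVPN side once the CRL is regenerated and picked up by the server (via the `crl-verify` directive). A dry-run sketch of the remaining steps, where the `echo`s print the commands instead of running them, since no PKI is present here; the CRL target path and service name are assumptions:

```shell
# Dry run of the post-revocation steps (EasyRSA 3 assumed).
crl_cmd="./easyrsa gen-crl"                   # regenerates pki/crl.pem
echo "$crl_cmd"
echo "cp pki/crl.pem /etc/openvpn/crl.pem"    # target path is an assumption
echo "systemctl restart openvpn@server"       # service name is an assumption
```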
Build is green
rebase
don't forget to count committers too
Rebase
Let's go for it, then. Could you take this over?
In T2912#63245, @vsellier wrote:
This kind of journal client will be necessary in any case if we want to extend the usage of the counters to other scopes (metadata count, origins per forge, ...)
I saw a parmap origin which got scheduled (la la la ;)
In T3084#63278, @ardumont wrote:
Pushed, packaged, deployed.
The scheduler runner continues happily scheduling the existing tasks and some new tasks with priority:
Apr 15 13:12:51 saatchi swh[234257]: INFO:swh.scheduler.celery_backend.runner:Grabbed 2084 tasks load-git
Apr 15 13:12:54 saatchi swh[234257]: INFO:swh.scheduler.cli.admin.runner:Scheduled 4128 tasks
Apr 15 13:14:06 saatchi swh[234257]: INFO:swh.scheduler.celery_backend.runner:Grabbed 1 tasks load-pypi
Apr 15 13:14:06 saatchi swh[234257]: INFO:swh.scheduler.celery_backend.runner:Grabbed 1 tasks load-git (priority)
...
That task got done almost immediately...
So there you go ;)
Build is green
Rebase
Thanks, I will move the dump to a secure place where it will be backed up.
Email sent to the DSI to launch the replacement.
Looks good to me.
Build is green
Thanks for the typos ping ;)
fixed.
Fix sentence typos
Build is green
Adapt according to suggestion
lgtm, but I don't understand what this does 🤷
In preparation for the disk replacement, the disks' LEDs must be activated to make their locations identifiable:
- Ensure all the LEDs are off
root@storage1:~# ls /dev/sd* | grep -e "[a-z]$" | xargs -n1 -t -i{} ledctl normal={}
ledctl normal=/dev/sda
ledctl normal=/dev/sdb
ledctl normal=/dev/sdc
ledctl normal=/dev/sdd
ledctl normal=/dev/sde
ledctl normal=/dev/sdf
ledctl normal=/dev/sdg
ledctl normal=/dev/sdh
ledctl normal=/dev/sdi
ledctl normal=/dev/sdj
ledctl normal=/dev/sdk
ledctl normal=/dev/sdl
ledctl normal=/dev/sdm
ledctl normal=/dev/sdn
- Turn the locate lights on for the disks to replace
root@storage1:~# ledctl locate=/dev/sdb
root@storage1:~# ledctl locate=/dev/sdc
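After the replacement, the same pattern turns the locate LEDs back off (`ledctl locate_off=...`). A dry-run sketch over the two devices above, where `echo` prints the commands instead of running them since ledctl and the hardware are absent here:

```shell
# Dry run: build the ledctl commands that would reset the locate LEDs
# (drop the `echo` to actually run them on storage1).
for dev in /dev/sdb /dev/sdc; do
    cmd="ledctl locate_off=$dev"
    echo "$cmd"
done
```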
lgtm, but I don't understand what this does 🤷
Oh, and actually there is no reason for this test to be specific to the in-memory implementation. Add it to the regular test suite.