Feb 15 2022
The new lister did its work: 74 new origins were seen [1] [2].
Feb 14 2022
@Alphare fixed the sourceforge lister to correctly list bzr origins.
Tagged v2.6.4 with that fix, with the intent to deploy it on staging.
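For the record, a minimal sketch of that tagging step (the tag message below is illustrative, not the actual one):

git tag -a v2.6.4 -m "sourceforge lister: correctly list bzr origins"   # message is an assumption
git push origin v2.6.4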
Feb 11 2022
And deployed.
Next step: Actually feeding them some bzr origins.
We'll need to discuss some more with @Alphare.
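A rough sketch of what feeding one bzr origin to the scheduler could look like, assuming the loader's task type is registered as load-bzr (the task type name and the URL are assumptions):

# "load-bzr" and the URL below are illustrative
swh scheduler task add load-bzr url=https://example.org/some-bzr-repo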
Of course, the build was ok, but not the actual install [1].
Another build release ongoing to fix that... [2]
After some more fighting and backporting of dependencies [1] [2], the build is now happy on stable as well [3].
Feb 10 2022
Previous one fixed.
pypi-upload job failed for some reason.
Fixed.
Yes, thanks for the heads up.
D7130 has been committed, so a new version of the loader will need to be released for faster ingestion.
Feb 9 2022
- Prepare the necessary Debian metadata files to allow the CI package build [1]
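A rough sketch of the Debian metadata usually involved (this is the generic layout, given as an assumption, not the exact contents added here):

debian/changelog       # package versions and release history
debian/control         # source/binary package descriptions and build dependencies
debian/copyright       # licensing information
debian/rules           # dh-based build recipe
debian/source/format   # source package format, e.g. "3.0 (quilt)"

# local sanity check before letting the CI build the package
dpkg-buildpackage -us -uc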
Here we go, the first release is up [1].
The repository was configured correctly for the CI, but the hook was missing on the repository.
Fixed [1].
Let's do the first release.
Feb 8 2022
loader-bzr run in docker \o/:
- Make it run within docker
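A minimal sketch of such a run inside the swh docker environment (the compose service name and the repository URL are assumptions; the point is invoking the loader through the swh CLI):

# service name and origin URL are illustrative
docker compose exec swh-loader \
    swh loader run bzr https://example.org/some-bzr-repo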
Feb 7 2022
We got those bzr origins currently listed in the staging (and prod) infra [1].
Do they sound good enough as a dataset?
Jul 7 2021
This is the disk position according to a picture of the server taken by Christophe:
Jun 14 2021
A priori, zfs performed the replace action by itself, selecting a spare disk from the pool's spares.
We only needed to detach the failing one; it's done [1].
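For reference, a minimal sketch of that detach step (the pool name "data" and the placeholder device are assumptions):

zpool status data                  # confirm the spare has taken over
zpool detach data <failing-disk>   # drop the failing device from the pool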
Serial number:
root@storage1:~# smartctl -a /dev/disk/by-id/wwn-0x5000c500a23e4511 | grep -B1 "Serial Number"
Device Model: ST6000NM0115-1YZ110
Serial Number: ZAD0SDDK
May 6 2021
Actions performed:
- wwn-0x5000c500d5de652a(sdb) : new -> spare
- wwn-0x5000c500a22eed6f(sdh) : spare -> mirror
- wwn-0x5000c500d5dda886(sdc) : new -> mirror
The checks ran without detecting any bad blocks on the disks.
They can be added to the zfs pool again.
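A rough sketch of zpool commands matching those transitions (the pool name "data" and the exact command-to-transition mapping are assumptions):

zpool add data spare /dev/disk/by-id/wwn-0x5000c500d5de652a                                        # sdb: new -> spare
zpool replace data <failed-disk> /dev/disk/by-id/wwn-0x5000c500a22eed6f                            # sdh: spare -> mirror
zpool attach data /dev/disk/by-id/wwn-0x5000c500a22eed6f /dev/disk/by-id/wwn-0x5000c500d5dda886    # sdc: new -> mirror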
May 5 2021
A full badblocks test is launched on both disks:
root@storage1:~# badblocks -v -w -B -s -b 4096 /dev/sdb
root@storage1:~# badblocks -v -w -B -s -b 4096 /dev/sdc
The disks were replaced by Christophe.
Apparently, the LED of one of the disks is still on, so they need to be switched off:
root@storage1:~# ls /dev/sd* | grep -e "[a-z]$" | xargs -n1 -t -i{} ledctl normal={}
ledctl normal=/dev/sda
ledctl normal=/dev/sdb
ledctl normal=/dev/sdc
ledctl normal=/dev/sdd
ledctl normal=/dev/sde
ledctl normal=/dev/sdf
ledctl normal=/dev/sdg
ledctl normal=/dev/sdh
ledctl normal=/dev/sdi
ledctl normal=/dev/sdj
ledctl normal=/dev/sdk
ledctl normal=/dev/sdl
ledctl normal=/dev/sdm
ledctl normal=/dev/sdn