Apr 15 2021
The compression and transfer of the dump from the clearly-defined server to saam are in progress.
ssh clearly-defined.internal.staging.swh.network "pbzip2 -c -9 /srv/softwareheritage/clearlydefined/clearcode-dump-2021-01-05/clearcode_backup-20210105-2111.dump" | pv | cat > clearcode_backup-20210105-2111.dump.bz2
When it is done, and if the compression ratio is good, the compressed archive will be added to the public annex in a dataset/clearly-defined directory.
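If the public annex is the usual git-annex repository, adding the archive could look like the following (the checkout path is hypothetical, this is only a sketch):
cd /path/to/public-annex/dataset/clearly-defined
git annex add clearcode_backup-20210105-2111.dump.bz2
git commit -m "Add clearcode dump of 2021-01-05"
git annex sync --content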
According to https://intranet.softwareheritage.org/wiki/Outboarding:
- tg19999 unix account disabled
- TG1999 removed from the interns group members in the forge; no tasks were assigned to the user
- He was not on the #swh-team channel
- He was not a member of the swh-team ml
- VPN certificate revoked (CRL refresh sketched below)
root@louvre:/etc/openvpn/keys# ./easyrsa revoke tg1999
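For the revocation to actually take effect, the CRL usually has to be regenerated and picked up by the OpenVPN server; the exact paths and service name depend on the setup here, so this is only a sketch:
root@louvre:/etc/openvpn/keys# ./easyrsa gen-crl
root@louvre:/etc/openvpn/keys# cp pki/crl.pem /etc/openvpn/crl.pem  # destination assumed from the crl-verify directive
root@louvre:/etc/openvpn/keys# systemctl reload openvpn@server      # unit name is an assumption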
Thanks, I will move the dump to a secure place where it will be backed up.
Email sent to the DSI to launch the replacement.
In preparation for the disk replacement, their LEDs must be activated to make their location identifiable:
- Ensure all the LEDs are off
root@storage1:~# ls /dev/sd* | grep -e "[a-z]$" | xargs -n1 -t -i{} ledctl normal={}
ledctl normal=/dev/sda
ledctl normal=/dev/sdb
ledctl normal=/dev/sdc
ledctl normal=/dev/sdd
ledctl normal=/dev/sde
ledctl normal=/dev/sdf
ledctl normal=/dev/sdg
ledctl normal=/dev/sdh
ledctl normal=/dev/sdi
ledctl normal=/dev/sdj
ledctl normal=/dev/sdk
ledctl normal=/dev/sdl
ledctl normal=/dev/sdm
ledctl normal=/dev/sdn
- Light the locate LEDs of the 2 disks to replace (turning them back off is sketched just below)
root@storage1:~# ledctl locate=/dev/sdb
root@storage1:~# ledctl locate=/dev/sdc
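Once the disks have been swapped, the locate LEDs can be turned back off with the locate_off pattern (device names may of course differ after the replacement):
root@storage1:~# ledctl locate_off=/dev/sdb
root@storage1:~# ledctl locate_off=/dev/sdc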
I would really like to keep the author counter: how complex is it to add it?
The staging webapp [1] and webapp1 in production [2] are now configured to use swh-counters to display the historical values and the live object counts.
Deployment done on staging and production. The new counters are currently only activated on webapp1
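A quick way to sanity-check the deployed counters is the public stats endpoint (this only shows the live totals; the endpoint used for the historical values may differ):
curl -s https://archive.softwareheritage.org/api/1/stat/counters/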
fix a typo in the commit description
Apr 14 2021
- version 0.1.296 (swh2) released and deployed on all the webapps
lgtm
rename the object variable
rebase
take the review feedback into consideration
Apr 13 2021
swh-web v0.0.295 is released and deployed on staging and production.
The puppet script generating the aggregated data is updated and was run to refresh the data.
The webapp can be released with this diff now
Reference the right task
One additional point before releasing this: the puppet script doing the aggregation needs to be improved, as it only merges the data for the content graph:
https://forge.softwareheritage.org/source/puppet-swh-site/browse/production/site-modules/profile/files/stats_exporter/export_archive_counters.py$109
P1005 converts the data added by the webapp into JSON data that can be appended to the /usr/local/share/swh-data/history-counters.munin.json file.
This content can be added to the file before the change to the webapp is released. It will just add a few duplicate points to render, with no effect on the final rendering.
This is the result to add to the current historical data:
{"revision": [[1441065600000, 0], [1467331200000, 594305600], [1473811200000, 644628800], [1479945600000, 704845952], [1494374400000, 780882048], [1506384000000, 853277241], [1516752000000, 943061517], [1518480000000, 946216028], [1521936000000, 980390191], [1538611200000, 1126348335], [1548547200000, 1248389319], [1554681600000, 1293870115], [1561593600000, 1326776432], [1563926400000, 1358421267], [1569110400000, 1379380527], [1569715200000, 1385477933], [1577836800000, 1414420369], [1580947200000, 1428955761], [1586217600000, 1590436149], [1589673600000, 1717420203], [1590537600000, 1744034936]], "origin": [[1441065600000, 0], [1467331200000, 22777052], [1473811200000, 25258776], [1479945600000, 53488904], [1494374400000, 58257484], [1506384000000, 65546644], [1516752000000, 71814787], [1518480000000, 81655813], [1521936000000, 83797945], [1538611200000, 85202432], [1548547200000, 88288721], [1554681600000, 88297714], [1561593600000, 89301694], [1563926400000, 89601149], [1569110400000, 90231104], [1569715200000, 90487661], [1577836800000, 91400586], [1580947200000, 91512130], [1586217600000, 107875943], [1589673600000, 121172621], [1590537600000, 123781438]], "content": [[1441065600000, 0]]}
Apr 12 2021
The disks are removed from the ZFS pool. The replacement can now be done.
The mirror is removed from the pool:
root@storage1:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data  21.8T  2.50T  19.3T        -         -    20%    11%  1.00x  ONLINE  -
All 3 VMs are reconfigured. The IPs are released.
- puppet disabled on all 3 nodes (pergamon, moma, tate)
- old /etc/network/interfaces file backed up:
cp /etc/network/interfaces T3228-interfaces
- configuration changed on Proxmox to switch eth0 from vlan210 to vlan1300 (see the sketch after this list)
- applying puppet configuration
- restart
- Former vlan1300 interface removed on Proxmox
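On the Proxmox side, that kind of change can be done per VM with qm; the VM id and bridge name below are placeholders, and re-setting net0 from the CLI regenerates the MAC unless it is passed explicitly, so the web UI (editing the VLAN tag on the network device) is usually simpler:
root@<hypervisor>:~# qm set <vmid> --net0 virtio,bridge=vmbr0,tag=1300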
rebase
Are we planning to add a way to notify the mirrors of the takedown notices?
I'm just wondering whether it could be interesting to subscribe the staging environment to it, to ensure the content is also removed from it (and also flagged to avoid any further ingestion).
👍 thanks
Ticket opened on the Seagate site for the replacement of these 2 disks; the information will be transferred to the DSI for the packaging (as soon as the disks are removed from the pool).
storage disks will be replaced in T3243
The mirror-1 removal is in progress:
root@storage1:~# zpool remove data mirror-1
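The evacuation progress shows up in the status output while the data is copied off the removed vdev:
root@storage1:~# zpool status data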
There are 2 disks with errors that should now be replaced (SMART check sketched below):
- /dev/sdb (wwn-0x5000c500a23e3868), an old one
- /dev/sdc (wwn-0x5000c500a22f48c9), the disk just removed from the pool
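To confirm the errors before shipping the drives back, a quick SMART report on each of them (assuming smartmontools is installed):
root@storage1:~# smartctl -a /dev/sdb
root@storage1:~# smartctl -a /dev/sdc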
The failing disk was removed from the pool:
root@storage1:~# zpool detach data wwn-0x5000c500a22f48c9
The new failing drive is /dev/sdc
root@storage1:~# ls -al /dev/disk/by-id/ | grep wwn-0x5000c500a22f48c9
lrwxrwxrwx 1 root root  9 Apr 11 03:42 wwn-0x5000c500a22f48c9 -> ../../sdc
lrwxrwxrwx 1 root root 10 Mar 11 17:08 wwn-0x5000c500a22f48c9-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Mar 11 17:08 wwn-0x5000c500a22f48c9-part9 -> ../../sdc9
A script is regularly executed to close the oldest indexes (30 days): P1004
It should be added to puppet and scheduled as a cron job.
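Until the puppet change lands, a plain cron entry would be enough; the schedule and the script path for P1004 are placeholders here:
# /etc/cron.d/close-old-elasticsearch-indexes
0 2 * * * root /usr/local/bin/close_old_es_indexes.sh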