rebase
Apr 14 2021
Take the review feedback into consideration
Apr 13 2021
swh-web v0.0.295 is released and deployed on staging and production.
The puppet script generating the aggregated data was updated and run to refresh the data.
The webapp can now be released with this diff.
Reference the right task
One additional point before releasing this: the puppet script performing the aggregation needs to be improved, as it only merges the data for the content graph:
https://forge.softwareheritage.org/source/puppet-swh-site/browse/production/site-modules/profile/files/stats_exporter/export_archive_counters.py$109
P1005 converts the data added by the webapp into JSON that can be added to the /usr/local/share/swh-data/history-counters.munin.json file.
This content can be added to the file before the webapp change is released; it will just add a few duplicate points to render, with no effect on the final rendering.
This is the result to add to the current historical data:
{"revision": [[1441065600000, 0], [1467331200000, 594305600], [1473811200000, 644628800], [1479945600000, 704845952], [1494374400000, 780882048], [1506384000000, 853277241], [1516752000000, 943061517], [1518480000000, 946216028], [1521936000000, 980390191], [1538611200000, 1126348335], [1548547200000, 1248389319], [1554681600000, 1293870115], [1561593600000, 1326776432], [1563926400000, 1358421267], [1569110400000, 1379380527], [1569715200000, 1385477933], [1577836800000, 1414420369], [1580947200000, 1428955761], [1586217600000, 1590436149], [1589673600000, 1717420203], [1590537600000, 1744034936]], "origin": [[1441065600000, 0], [1467331200000, 22777052], [1473811200000, 25258776], [1479945600000, 53488904], [1494374400000, 58257484], [1506384000000, 65546644], [1516752000000, 71814787], [1518480000000, 81655813], [1521936000000, 83797945], [1538611200000, 85202432], [1548547200000, 88288721], [1554681600000, 88297714], [1561593600000, 89301694], [1563926400000, 89601149], [1569110400000, 90231104], [1569715200000, 90487661], [1577836800000, 91400586], [1580947200000, 91512130], [1586217600000, 107875943], [1589673600000, 121172621], [1590537600000, 123781438]], "content": [[1441065600000, 0]]}
Apr 12 2021
The disks are removed from the ZFS pool. The replacement can now be done.
The mirror is removed from the pool:
root@storage1:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data  21.8T  2.50T  19.3T        -         -    20%    11%  1.00x    ONLINE  -
All 3 VMs are reconfigured. The IPs are released.
- puppet disabled on all 3 nodes (pergamon, moma, tate)
- old /etc/network/interfaces file backed up:
cp /etc/network/interfaces T3228-interfaces
- configuration changed on Proxmox to switch eth0 from vlan210 to vlan1300
- puppet configuration applied
- restart
- former vlan1300 interface removed on Proxmox
rebase
Are we planning to add a way to notify the mirrors of the takedown notices?
I'm just wondering whether it could be interesting to subscribe the staging environment to it, to ensure the content is also removed from there (and also flagged to avoid any further ingestion).
👍 thanks
A ticket was opened on the Seagate site for the replacement of these 2 disks; the information will be transferred to the DSI for the packaging (as soon as the disks are removed from the pool).
Storage disks will be replaced in T3243.
The mirror-1 removal is in progress:
root@storage1:~# zpool remove data mirror-1
There are 2 disks with errors that should now be replaced:
- /dev/sdb (wwn-0x5000c500a23e3868): an old one
- /dev/sdc (wwn-0x5000c500a22f48c9): the disk just removed from the pool
The failing disk was removed from the pool:
root@storage1:~# zpool detach data wwn-0x5000c500a22f48c9
The new failing drive is /dev/sdc
root@storage1:~# ls -al /dev/disk/by-id/ | grep wwn-0x5000c500a22f48c9
lrwxrwxrwx 1 root root  9 Apr 11 03:42 wwn-0x5000c500a22f48c9 -> ../../sdc
lrwxrwxrwx 1 root root 10 Mar 11 17:08 wwn-0x5000c500a22f48c9-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Mar 11 17:08 wwn-0x5000c500a22f48c9-part9 -> ../../sdc9
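As a side note, the same symlink lookup can be done programmatically; a minimal sketch (not part of the task, using the WWN from the output above):

```python
#!/usr/bin/env python3
# Sketch: resolve a WWN identifier to its block device by following the
# /dev/disk/by-id symlink, as the ls/grep above does manually.
import os


def device_for_wwn(wwn: str) -> str:
    return os.path.realpath(os.path.join("/dev/disk/by-id", wwn))


if __name__ == "__main__":
    print(device_for_wwn("wwn-0x5000c500a22f48c9"))  # expected: /dev/sdc
```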
A script is regularly executed to close the oldest indexes (30 days): P1004
It should be added to puppet and scheduled in a cron job.
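Since P1004 is only referenced here, the following is a minimal sketch of what such a cleanup script could look like, assuming dated index names of the form prefix-YYYY.MM.DD and a local cluster endpoint (both assumptions); it could then be deployed by puppet and run daily from cron:

```python
#!/usr/bin/env python3
# Sketch (not the actual P1004): close elasticsearch indexes whose date
# suffix is older than 30 days. The index naming convention, cluster URL
# and retention period are assumptions for illustration.
from datetime import datetime, timedelta

from elasticsearch import Elasticsearch

ES_URL = "http://localhost:9200"  # hypothetical cluster endpoint
RETENTION = timedelta(days=30)


def close_old_indexes(es: Elasticsearch) -> None:
    cutoff = datetime.utcnow() - RETENTION
    for name in es.indices.get(index="*"):
        try:
            # assumed naming convention: <prefix>-YYYY.MM.DD
            day = datetime.strptime(name.rsplit("-", 1)[-1], "%Y.%m.%d")
        except ValueError:
            continue  # not a dated index, leave it alone
        if day < cutoff:
            es.indices.close(index=name)


if __name__ == "__main__":
    close_old_indexes(Elasticsearch(ES_URL))
```

The retention period and naming convention would of course need to match the real indexes before wiring this into puppet.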
Apr 11 2021
Apr 10 2021
Apr 9 2021
Everything is released correctly and deployed on staging
I finally found why the graphs look weird: https://forge.softwareheritage.org/source/swh-web/browse/master/swh/web/misc/urls.py$31
With a dirty patch on the server, it's way better:
Fix a typo in the commit message
The pipeline is deployed in staging.
It's working, but it seems the graphs need some initial values in staging to render correctly:
Add a filter to limit the metrics to the current environment