Fix pushed and deployed. Puppet agent unstuck [1].
Dec 14 2021
- Fix whatever is broken (puppet agent --test is failing after reboot for some reason)
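For reference, a minimal sketch of how to diagnose a stuck agent after a reboot (assuming the usual stale-lockfile case; the lock path depends on the puppet packaging and is not necessarily what is used here):

# check the service and see why a run fails, without applying anything
systemctl status puppet
puppet agent --test --noop
# if the agent claims a run is already in progress, clear the stale lock and retry
rm -f /var/lib/puppet/state/agent_catalog_run.lock
puppet agent --test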
The following minor postgresql upgrades will be performed during the upgrade (package-level steps are sketched after the list):
- somerset: postgresql 13.4 -> 13.5 [1]
A dump/restore is not required for those running 13.X.
- belvedere:
- 11.14-0 -> 11.14-1 (indexer db)
- 12.8-1 -> 12.9-1 [2] (other dbs)
A dump/restore is not required for those running 12.X.
- db1:
- 12.8-1 -> 12.9-1 [2]
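Minor upgrades within the same major version only need the package upgrade and a service restart; a rough sketch of the per-host steps (cluster name and package version are assumptions, adjust per host):

apt update
apt install --only-upgrade postgresql-13      # postgresql-12 / postgresql-11 on belvedere and db1
systemctl restart postgresql@13-main          # cluster name assumed to be "main"
sudo -u postgres psql -Atc 'SHOW server_version;'   # confirm the new minor version is running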
Dec 13 2021
"un coup ds l'eau" I failed the migration and reverted the node to the snapshot i took prior to the migration [1].
Upgrade done following the T3799 procedure.
After several tests in vagrant, the upgrade looks ok, even though I couldn't manage to get a complete local dns environment working.
And:
- migration done.
- inventory updated
- checks ok [1]
Checking through vagrant, everything is running smoothly.
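For reference, a rough sketch of the vagrant smoke test (the node name is a placeholder):

vagrant up <node> --provision                  # build the box and apply the manifests
vagrant ssh <node> -c "systemctl --failed"     # quick check that no unit is broken
vagrant destroy -f <node>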
Dec 10 2021
All the hypervisors are migrated and the services restored.
Same as above for pompidou.
The ceph packages also need to be updated on the proxmox nodes even if they are not part of the ceph cluster (per the output of pve6to7).
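For reference, a rough sketch of what that looks like on a node that only needs the ceph client packages (the no-subscription repository and the codename are assumptions, adjust to the node's actual setup):

echo "deb http://download.proxmox.com/debian/ceph-octopus buster main" > /etc/apt/sources.list.d/ceph.list
apt update && apt full-upgrade
ceph --version        # should now report an octopus (15.2.x) client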
I had forgotten to set a maintenance tag about this in grafana.
Fixed now.
Dec 9 2021
We (all sysadm \o/) followed the guide one step at a time.
Cluster is in HEALTH_OK [1]
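The health check itself boils down to the standard ceph status commands:

ceph -s               # overall status, expected HEALTH_OK
ceph health detail    # per-issue details if anything is flagged
ceph versions         # all daemons should report the same octopus release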
Output of the pve6to7 script on uffizi:
Preconditions checklist from the proxmox upgrade guide:
- Upgraded to the latest version of Proxmox VE 6.4 (check correct package repository configuration)
On all nodes:
root@pergamon:/etc/clustershell# clush -b -w @hypervisors "pveversion"
---------------
branly,pompidou,uffizi (3)
---------------
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.103-1-pve)
---------------
beaubourg
---------------
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.143-1-pve)
---------------
hypervisor3
---------------
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.128-1-pve)
- TODO Hyper-converged Ceph: upgrade the Ceph Nautilus cluster to Ceph 15.2 Octopus before you start the Proxmox VE upgrade to 7.0. Follow the guide Ceph Nautilus to Octopus
- (No backup server) Co-installed Proxmox Backup Server: see the Proxmox Backup Server 1.1 to 2.x upgrade how-to
- Reliable access to the node (through ssh, iKVM/IPMI or physical access)
- A healthy cluster
- Valid and tested backup of all VMs and CTs (in case something goes wrong)
- At least 4 GiB free disk space on the root mount point
- Check known upgrade issues
- From later in the doc: test with the pve6to7 migration checklist (as sketched below)
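For reference, the checklist run can be driven fleet-wide from pergamon the same way as the pveversion check above; a minimal sketch:

# --full also runs the longer checks (storage, ceph, etc.)
clush -b -w @hypervisors "pve6to7 --full"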
Dec 8 2021
I had forgotten 2 steps prior to closing this task:
- Vagrantfile updated accordingly (to make them use the debian 11 box url)
- inventory updated to mention the nodes are now running bullseye
> I need to check with the team how to properly do that (well, how we do it).
Well, just check all plugins and trigger update.
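If that is done from the CLI rather than the plugin manager UI, a possible sketch (the jenkins-cli invocation, URL, and credentials are placeholders, not necessarily what we use):

java -jar jenkins-cli.jar -s https://<jenkins-url>/ -auth <user>:<token> list-plugins      # plugins with an update show the newer version
java -jar jenkins-cli.jar -s https://<jenkins-url>/ -auth <user>:<token> install-plugin <plugin-id>
java -jar jenkins-cli.jar -s https://<jenkins-url>/ -auth <user>:<token> safe-restart      # restart once no build is running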
unbound was unhappy though [1].
There may remain plugin upgrades to do.
I need to check with the team how to properly do that (well, how we do it).
Jenkins is responding fine.
Thyssen:
- Put jenkins in "prepare for shutdown" mode, giving this task as the notice (using the previous link)
- Upgrade according to the plan (sketched below)
- Rebooted
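For reference, the dist-upgrade step itself typically boils down to the following (a generic buster to bullseye sketch, not the exact plan used here):

# switch the apt sources; the security suite naming changed in bullseye
sed -i 's/buster/bullseye/g; s|bullseye/updates|bullseye-security|' /etc/apt/sources.list
apt update
apt full-upgrade
reboot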
Dec 7 2021
Heads up: this UI should be pretty handy for preventing builds from happening prior to the actual dist-upgrade [1].
Upgrade of thyssen applied (which includes an upgrade of jenkins).
Thyssen is still running buster.
In T3770#75051, @ardumont wrote:
> As amended in the description, jenkins-debian1 is done.
> I'm waiting for the current swh-web release to come through to ensure there is no actual side-effect.
As amended in the description, jenkins-debian1 is done.
I'm waiting for the current swh-web release to come through to ensure there is no actual side-effect.
Vagrant tests in progress btw.
Manifest are incomplete so fixing them along the way.