Make sure the hypervisor3 SSDs are set up as Ceph OSDs
Description
Status | Assigned | Task
---|---|---
Migrated | gitlab-migration | T2501 Proxmox reliability improvements (Summer 2020)
Migrated | gitlab-migration | T2502 Migrate VM storage to ceph
Migrated | gitlab-migration | T2503 Set up hyperconverged ceph cluster
Migrated | gitlab-migration | T2508 Update Ceph pool to use the default CRUSH replication rule
Migrated | gitlab-migration | T2504 Add hypervisor3 ssds as Ceph OSDs
Migrated | gitlab-migration | T2505 Migrate hypervisor3 vms to (temporary) ceph pool
Migrated | gitlab-migration | T2507 Set up temporary (single-host) ceph pool
Migrated | gitlab-migration | T2506 Add new beaubourg SSDs as Ceph OSDs
Event Timeline
Remove "hypervisor3-ssd" local storage from proxmox config (done via proxmox gui)
Check that all logical volumes have been removed:

```
lvs
```
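If `lvs` still shows volumes in the `ssd` volume group, they have to be removed before the group can be dropped; a sketch (the LV name is illustrative):

```
# List only the logical volumes in the "ssd" volume group
lvs ssd

# Remove a leftover logical volume (illustrative name, adjust to the output above)
lvremove ssd/vm-100-disk-0
```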
Drop the volume group:

```
vgremove ssd
```
Remove the physical volume:

```
pvremove /dev/md0
```
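To confirm the LVM stack is fully gone before touching the RAID array, the standard LVM reporting commands should come back empty:

```
# No physical volumes or volume groups should be listed anymore
pvs
vgs
```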
Disable mdraid:

```
mdadm --manage /dev/md0 --stop
```
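Stopping the array does not prevent it from being reassembled at boot; explicitly zeroing the md superblocks is a common belt-and-braces step (the zap below wipes the disks anyway). A sketch assuming the array members were the whole disks /dev/sda through /dev/sdh:

```
# Wipe the md superblock from each former array member
for letter in {a..h}; do mdadm --zero-superblock /dev/sd$letter; done
```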
Uninstall mdadm (no longer needed on this host):

```
apt-get purge mdadm
```
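If mdadm were kept installed instead (e.g. for other arrays), the /dev/md0 definition would need to be dropped from the config and the initramfs rebuilt; a sketch, with an illustrative sed pattern:

```
# Remove the md0 array definition, then rebuild the initramfs
# so it stops trying to assemble the array at boot
sed -i '/md0/d' /etc/mdadm/mdadm.conf
update-initramfs -u
```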
Zap all drives:

```
for letter in {a..h}; do ceph-volume lvm zap /dev/sd$letter; done
```
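If zap fails because of leftover partition tables or LVM metadata, ceph-volume has a more aggressive mode:

```
# --destroy also removes partition tables and any LVM metadata on the device
for letter in {a..h}; do ceph-volume lvm zap --destroy /dev/sd$letter; done
```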
Create the Ceph OSDs:

```
for letter in {a..h}; do pveceph osd create /dev/sd$letter; done
```
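Once created, the new OSDs should appear under this host in the CRUSH tree and report up/in:

```
# Show all OSDs with their host placement and up/in status
ceph osd tree
```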
Wait for the placement groups to rebalance:

```
ceph -w
```
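`ceph -w` streams the cluster log until interrupted; for a point-in-time view of health, PG states, and per-OSD fill levels:

```
# One-shot cluster health and PG state summary
ceph -s

# Per-OSD utilisation once backfill/rebalance finishes
ceph osd df
```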