
Phantom device mapper volume usage in Proxmox: local storage is not available on target node
Closed, Migrated. Edits Locked.

Description

Migrating at least some of the beaubourg VMs to hypervisor3 fails with error messages like these:

2019-01-31 15:41:15 starting migration of VM 102 to node 'hypervisor3' (192.168.100.34)
2019-01-31 15:41:15 ERROR: Failed to sync data - storage 'beaubourg-ssd' is not available on node 'hypervisor3'
2019-01-31 15:41:15 aborting phase 1 - cleanup resources
2019-01-31 15:41:15 ERROR: migration aborted (duration 00:00:00): Failed to sync data - storage 'beaubourg-ssd' is not available on node 'hypervisor3'
TASK ERROR: migration aborted
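In Proxmox, a storage defined in /etc/pve/storage.cfg can be restricted to specific nodes with a "nodes" line; a storage pinned to beaubourg would produce exactly this "not available on node" error when migrating to another node. A minimal sketch of what such an entry looks like (the excerpt below is hypothetical, written to a temporary file for illustration, not the actual cluster config):

```shell
# Hypothetical excerpt of /etc/pve/storage.cfg showing a node-restricted
# LVM storage; the names are assumptions, not the real cluster config.
cat > /tmp/storage.cfg.sample <<'EOF'
lvm: beaubourg-ssd
        vgname ssd
        nodes beaubourg
        content images
EOF

# A storage with a "nodes" line is only available on the listed nodes,
# which is what the migration error complains about:
grep -A3 'beaubourg-ssd' /tmp/storage.cfg.sample
```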

Event Timeline

ftigeot triaged this task as High priority. Feb 1 2019, 10:40 AM
ftigeot created this task.

No mention of "beaubourg-ssd" is visible in the Proxmox virtual machine management interface.
All virtual disk backends are stored on Ceph.

The main VM disk is stored on a "vm-102-disk-1" volume (on Ceph).
There is also an inactive LVM volume on "beaubourg-ssd" formerly associated with this VM; it was used as the virtual disk backend before the disk was migrated to Ceph.

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/ssd/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                ssd
  LV UUID                JgyKXo-NN8e-Zg0A-EJf7-8xbE-zJIy-J8lxB1
  LV Write Access        read/write
  LV Creation host, time beaubourg, 2018-09-17 09:23:34 +0000
  LV Status              NOT available
  LV Size                40.00 GiB
  Current LE             10240
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

This virtual drive is not mentioned in either the Proxmox web interface or the vm-102 configuration files in /etc/pve on the Proxmox cluster nodes.
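A simple way to confirm the absence of any reference is to grep the VM's configuration file. The sketch below uses a hypothetical stand-in for /etc/pve/qemu-server/102.conf rather than the real file:

```shell
# Hypothetical vm-102 config standing in for /etc/pve/qemu-server/102.conf;
# the contents are illustrative, not the real file.
cat > /tmp/102.conf.sample <<'EOF'
bootdisk: scsi0
scsi0: ceph-pool:vm-102-disk-1,size=40G
memory: 4096
EOF

# grep exits non-zero when no line mentions the old storage:
grep 'beaubourg-ssd' /tmp/102.conf.sample || echo "no reference found"
```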

ftigeot changed the task status from Open to Work in Progress.Feb 1 2019, 10:55 AM

The previous drive is neither active nor open:

# lvs
  LV                VG  Attr       LSize
  vm-102-disk-0     ssd -wi------- 40.00g
  vm-102-disk-1     ssd -wi-ao---- 40.00g

For comparison, vm-102-disk-1 is the virtual drive backend actually used by the VM.
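The lvs "Attr" column encodes state positionally: per lvs(8), the fifth character is the activation state ("a" when active) and the sixth the device-open state ("o" when open). A quick shell illustration using the attribute string of the in-use volume:

```shell
# Decode the state bits of the lvs Attr string for vm-102-disk-1.
# Position 5 (index 4) is the activation state, position 6 (index 5)
# the device-open state; '-' in either means inactive / not open,
# as seen for vm-102-disk-0 above.
attr='-wi-ao----'
echo "active: ${attr:4:1}  open: ${attr:5:1}"
```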

Since the old drive is no longer needed, I decided to remove it:

# lvchange -a y ssd/vm-102-disk-0
# lvremove /dev/ssd/vm-102-disk-0

Removing this stale volume allowed the VM migration to complete.

Proxmox's internal state likely still referenced this volume even though it was no longer supposed to.