
Create elastic worker node up to rancher cluster registration
Closed, Public

Authored by ardumont on Apr 20 2022, 4:32 PM.

Details

Summary

As discussed with @vsellier, this uses a more recent template with zfs preinstalled (so the zfs kernel module is already loaded), which should let the installation complete without issues (a sketch of the corresponding module call follows below).

Related to T4144
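
For reference, a minimal sketch of what the node declaration behind module.elastic-worker0 could look like, assuming a local wrapper module around proxmox_vm_qemu; the module source path and variable names are hypothetical, only the values visible in the plan below come from this change:

module "elastic-worker0" {
  source = "./modules/node"   # assumed path of the wrapper module

  hostname    = "elastic-worker0"
  description = "elastic worker running in rancher cluster"
  hypervisor  = "uffizi"

  # More recent template with zfs preinstalled, so the zfs kernel module
  # is already loaded when the node boots
  template = "debian-bullseye-11.0-2021-09-09"

  cores  = 4
  memory = 4096

  network = {
    ip      = "192.168.130.130/24"
    gateway = "192.168.130.1"
    bridge  = "vmbr443"
  }
}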

Test Plan
$ terraform plan
module.vault.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/121]
module.search-esnode0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/130]
module.objstorage0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/102]
module.mirror-test.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/132]
module.scheduler0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/116]
module.webapp.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/119]
module.maven-exporter0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/122]
rancher2_cluster.test-rke: Refreshing state... [id=c-dqlbs]
module.scrubber0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/142]
module.rp0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/129]
module.deposit.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/120]
module.worker1.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/118]
module.worker0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/117]
module.worker2.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/112]
module.counters0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/138]
module.search0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/131]
module.worker3.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/137]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.elastic-worker0.proxmox_vm_qemu.node will be created
  + resource "proxmox_vm_qemu" "node" {
      + additional_wait           = 0
      + agent                     = 0
      + balloon                   = 1024
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = (known after apply)
      + ciuser                    = "root"
      + clone                     = "debian-bullseye-11.0-2021-09-09"
      + clone_wait                = 0
      + cores                     = 4
      + cpu                       = "kvm64"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + desc                      = "elastic worker running in rancher cluster"
      + force_create              = false
      + full_clone                = false
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.130.130/24,gw=192.168.130.1"
      + kvm                       = true
      + memory                    = 4096
      + name                      = "elastic-worker0"
      + nameserver                = "192.168.100.29"
      + numa                      = false
      + onboot                    = true
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = (known after apply)
      + searchdomain              = "internal.staging.swh.network"
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + ssh_user                  = "root"
      + sshkeys                   = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform"
      + tablet                    = true
      + target_node               = "uffizi"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup       = 0
          + cache        = "none"
          + file         = (known after apply)
          + format       = (known after apply)
          + iothread     = 0
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + media        = (known after apply)
          + replicate    = 0
          + size         = "50G"
          + slot         = (known after apply)
          + ssd          = 0
          + storage      = "proxmox"
          + storage_type = (known after apply)
          + type         = "virtio"
          + volume       = (known after apply)
        }

      + network {
          + bridge    = "vmbr443"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + elastic-worker0_summary = (known after apply)

│ Warning: Experimental feature "module_variable_optional_attrs" is active

│   on versions.tf line 3, in terraform:
│    3:   experiments      = [module_variable_optional_attrs]

│ Experimental features are subject to breaking changes in future minor or patch releases, based on feedback.

│ If you have feedback on the design of this feature, please open a GitHub issue to discuss it.

│ (and 17 more similar warnings elsewhere)


──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

The apply then runs all the way up to the docker-based rancher registration being happy \o/:

...
module.elastic-worker2.proxmox_vm_qemu.node (remote-exec): e426e72aec1e: Pull complete
module.elastic-worker2.proxmox_vm_qemu.node (remote-exec): Digest: sha256:7346bb39ca69a7e5fce0363f172783bc5958883779c50d798e31e2c2944a0154
module.elastic-worker2.proxmox_vm_qemu.node (remote-exec): Status: Downloaded newer image for rancher/rancher-agent:v2.6.4
module.elastic-worker2.proxmox_vm_qemu.node (remote-exec): d3345583ace979a83dc269fccc2542ab957fdf5563f55c869b1ed48cbde96e4b
module.elastic-worker2.proxmox_vm_qemu.node: Creation complete after 12m58s [id=uffizi/qemu/148]
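
That last provisioning step is the node registering itself against the test-rke rancher cluster through the dockerized rancher agent. A hedged sketch of how that step can be expressed, assuming the node module runs the registration command generated by the rancher2 provider in a remote-exec provisioner (the exact wiring in the module is an assumption, not necessarily what this diff does):

  provisioner "remote-exec" {
    inline = [
      # node_command is the "docker run ... rancher/rancher-agent:..." command
      # generated by the rancher2 provider for the test-rke cluster;
      # --worker registers the node with the worker role only.
      "${rancher2_cluster.test-rke.cluster_registration_token[0].node_command} --worker",
    ]
  }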

Diff Detail

Repository
rSPRE sysadm-provisioning
Branch
master
Lint
Lint Skipped
Unit
Unit Tests Skipped
Build Status
Buildable 28668
Build 44795: arc lint + arc unit

Event Timeline

ardumont edited the test plan for this revision.
ardumont added a subscriber: vsellier.

Update with corrected commit range

Update with the terraform apply result in the .tfstate

This revision is now accepted and ready to land. Apr 21 2022, 7:39 PM