
Add new production rancher worker with dedicated swh/loader role
Closed, Public

Authored by ardumont on Oct 11 2022, 3:43 PM.

Details

Summary

Related to T4618
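
For reference, a worker of this kind is typically added by instantiating the shared node module once more in sysadm-provisioning. The block below is only a sketch reconstructed from the terraform plan in the test plan: the module source path and variable names are assumptions, not the actual module interface, while the concrete values (target hypervisor, IP, CPU/memory sizing) are taken from the plan output.

  # Sketch only, not the actual sysadm-provisioning module interface: the
  # source path and variable names are assumptions; values mirror the plan
  # shown in the test plan below.
  module "rancher-node-production-worker05" {
    source      = "../modules/node"                  # assumed module path
    config      = local.config                       # assumed shared Proxmox/cloud-init settings
    hostname    = "rancher-node-production-worker05"
    description = "Generic worker node"
    hypervisor  = "hypervisor3"
    sockets     = 2
    cores       = 5
    memory      = 65536
    balloon     = 32768
    networks = [{
      id      = 0
      ip      = "192.168.100.125"
      gateway = "192.168.100.1"
      bridge  = "vmbr0"
      vlan    = -1
    }]
  }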

Test Plan

terraform plan:

  # module.rancher-node-production-worker05.proxmox_vm_qemu.node will be created
  + resource "proxmox_vm_qemu" "node" {
      + additional_wait           = 0
      + agent                     = 0
      + automatic_reboot          = true
      + balloon                   = 32768
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = (known after apply)
      + ciuser                    = "root"
      + clone                     = "debian-bullseye-11.4-zfs-2022-07-27"
      + clone_wait                = 0
      + cores                     = 5
      + cpu                       = "kvm64"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + desc                      = "Generic worker node"
      + force_create              = false
      + full_clone                = false
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.100.125/24,gw=192.168.100.1"
      + kvm                       = true
      + memory                    = 65536
      + name                      = "rancher-node-production-worker05"
      + nameserver                = "192.168.100.29"
      + numa                      = false
      + onboot                    = true
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = (known after apply)
      + searchdomain              = "internal.softwareheritage.org"
      + sockets                   = 2
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + ssh_user                  = "root"
      + sshkeys                   = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform"
      + tablet                    = true
      + target_node               = "hypervisor3"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup       = 0
          + cache        = "none"
          + file         = (known after apply)
          + format       = (known after apply)
          + iothread     = 0
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + media        = (known after apply)
          + replicate    = 0
          + size         = "20G"
          + slot         = (known after apply)
          + ssd          = 0
          + storage      = "proxmox"
          + storage_type = (known after apply)
          + type         = "virtio"
          + volume       = (known after apply)
        }
      + disk {
          + backup       = 0
          + cache        = "none"
          + file         = (known after apply)
          + format       = (known after apply)
          + iothread     = 0
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + media        = (known after apply)
          + replicate    = 0
          + size         = "20G"
          + slot         = (known after apply)
          + ssd          = 0
          + storage      = "scratch"
          + storage_type = (known after apply)
          + type         = "virtio"
          + volume       = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + rancher-node-production-worker05_summary    = <<-EOT

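The truncated block above is the per-node summary output added for the new worker. Wiring it up would look roughly like the sketch below; the output name comes from the plan, but the `summary` attribute name on the module is an assumption.

  # Sketch only: the module attribute name ("summary") is assumed.
  output "rancher-node-production-worker05_summary" {
    value = module.rancher-node-production-worker05.summary
  }
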
Diff Detail

Repository: rSPRE sysadm-provisioning
Branch: master
Lint: No Linters Available
Unit: No Unit Test Coverage
Build Status: Buildable 32217, Build 50460 (arc lint + arc unit)

Event Timeline

Fix uffizi hypervisor and scratch name

Decrease memory, increase scratch space, add label large-scratch-fs to node
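
As a rough illustration of where those knobs live, the fragment below sketches the corresponding module arguments. The variable names are hypothetical (not the real module interface), the memory and balloon figures are the ones visible in the plan, and the scratch size is a placeholder since the final increased value is not shown above.

  # Hypothetical variable names; only the "large-scratch-fs" label and the
  # memory/balloon values visible in the plan come from this revision.
  module "rancher-node-production-worker05" {
    # ... base settings as in the sketch under the summary ...
    memory            = 65536
    balloon           = 32768
    scratch_disk_size = "20G"                 # placeholder; increased by this update
    node_labels       = ["large-scratch-fs"]  # assumed to be applied at rancher agent registration
  }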

This revision is now accepted and ready to land. (Oct 11 2022, 4:09 PM)