Related to T4221
Details
- Reviewers
- None
- Group Reviewers
- System administrators
- Maniphest Tasks
- T4221: Create a kubernetes cluster for the deployment experiment
- Commits
- rSPREd62700b98e98: Refresh staging.tfstate after the new kubernetes cluster creation
- rSPRE529a2d63d236: refresh elastic-workers cluster after accidentally elastic-worker0 removal
- rSPRE0ff550d0f889: Declare the rancher cluster for the deployment's internship
- rSPREd43e6bb05d85: Ensure cloud-init is not running before starting puppet
- rSPRE062ec99d2101: Align worker0 and worker1 qemu arguments to match the real vms configuration
terraform plan output (hoping the worker0 and worker1 changes can be applied without impact...):
Terraform will perform the following actions: # rancher2_cluster.deployment_intership will be created + resource "rancher2_cluster" "deployment_intership" { + annotations = (known after apply) + ca_cert = (sensitive value) + cluster_registration_token = (known after apply) + default_pod_security_policy_template_id = (known after apply) + default_project_id = (known after apply) + description = "staging cluster for deployment test" + desired_agent_image = (known after apply) + desired_auth_image = (known after apply) + docker_root_dir = (known after apply) + driver = (known after apply) + enable_cluster_alerting = (known after apply) + enable_cluster_istio = (known after apply) + enable_cluster_monitoring = (known after apply) + enable_network_policy = (known after apply) + fleet_workspace_name = (known after apply) + id = (known after apply) + istio_enabled = (known after apply) + kube_config = (sensitive value) + labels = (known after apply) + name = "deployment-intership" + system_project_id = (known after apply) + windows_prefered_cluster = false + cluster_auth_endpoint { + ca_certs = (known after apply) + enabled = (known after apply) + fqdn = (known after apply) } + cluster_template_answers { + cluster_id = (known after apply) + project_id = (known after apply) + values = (known after apply) } + cluster_template_questions { + default = (known after apply) + required = (known after apply) + type = (known after apply) + variable = (known after apply) } + eks_config_v2 { + cloud_credential_id = (known after apply) + imported = (known after apply) + kms_key = (known after apply) + kubernetes_version = (known after apply) + logging_types = (known after apply) + name = (known after apply) + private_access = (known after apply) + public_access = (known after apply) + public_access_sources = (known after apply) + region = (known after apply) + secrets_encryption = (known after apply) + security_groups = (known after apply) + service_role = (known after apply) + subnets = (known after apply) + tags = (known after apply) + node_groups { + desired_size = (known after apply) + disk_size = (known after apply) + ec2_ssh_key = (known after apply) + gpu = (known after apply) + image_id = (known after apply) + instance_type = (known after apply) + labels = (known after apply) + max_size = (known after apply) + min_size = (known after apply) + name = (known after apply) + request_spot_instances = (known after apply) + resource_tags = (known after apply) + spot_instance_types = (known after apply) + subnets = (known after apply) + tags = (known after apply) + user_data = (known after apply) + version = (known after apply) + launch_template { + id = (known after apply) + name = (known after apply) + version = (known after apply) } } } + k3s_config { + version = (known after apply) + upgrade_strategy { + drain_server_nodes = (known after apply) + drain_worker_nodes = (known after apply) + server_concurrency = (known after apply) + worker_concurrency = (known after apply) } } + rke2_config { + version = (known after apply) + upgrade_strategy { + drain_server_nodes = (known after apply) + drain_worker_nodes = (known after apply) + server_concurrency = (known after apply) + worker_concurrency = (known after apply) } } + rke_config { + addon_job_timeout = (known after apply) + enable_cri_dockerd = false + ignore_docker_version = true + kubernetes_version = (known after apply) + prefix_path = (known after apply) + ssh_agent_auth = false + ssh_cert_path = (known after apply) + ssh_key_path = (known after apply) 
+ win_prefix_path = (known after apply) + authentication { + sans = (known after apply) + strategy = (known after apply) } + authorization { + mode = (known after apply) + options = (known after apply) } + bastion_host { + address = (known after apply) + port = (known after apply) + ssh_agent_auth = (known after apply) + ssh_key = (sensitive value) + ssh_key_path = (known after apply) + user = (known after apply) } + cloud_provider { + custom_cloud_provider = (known after apply) + name = (known after apply) + aws_cloud_provider { + global { + disable_security_group_ingress = (known after apply) + disable_strict_zone_check = (known after apply) + elb_security_group = (known after apply) + kubernetes_cluster_id = (known after apply) + kubernetes_cluster_tag = (known after apply) + role_arn = (known after apply) + route_table_id = (known after apply) + subnet_id = (known after apply) + vpc = (known after apply) + zone = (known after apply) } + service_override { + region = (known after apply) + service = (known after apply) + signing_method = (known after apply) + signing_name = (known after apply) + signing_region = (known after apply) + url = (known after apply) } } + azure_cloud_provider { + aad_client_cert_password = (sensitive value) + aad_client_cert_path = (known after apply) + aad_client_id = (sensitive value) + aad_client_secret = (sensitive value) + cloud = (known after apply) + cloud_provider_backoff = (known after apply) + cloud_provider_backoff_duration = (known after apply) + cloud_provider_backoff_exponent = (known after apply) + cloud_provider_backoff_jitter = (known after apply) + cloud_provider_backoff_retries = (known after apply) + cloud_provider_rate_limit = (known after apply) + cloud_provider_rate_limit_bucket = (known after apply) + cloud_provider_rate_limit_qps = (known after apply) + load_balancer_sku = (known after apply) + location = (known after apply) + maximum_load_balancer_rule_count = (known after apply) + primary_availability_set_name = (known after apply) + primary_scale_set_name = (known after apply) + resource_group = (known after apply) + route_table_name = (known after apply) + security_group_name = (known after apply) + subnet_name = (known after apply) + subscription_id = (sensitive value) + tenant_id = (sensitive value) + use_instance_metadata = (known after apply) + use_managed_identity_extension = (known after apply) + vm_type = (known after apply) + vnet_name = (known after apply) + vnet_resource_group = (known after apply) } + openstack_cloud_provider { + block_storage { + bs_version = (known after apply) + ignore_volume_az = (known after apply) + trust_device_path = (known after apply) } + global { + auth_url = (known after apply) + ca_file = (known after apply) + domain_id = (sensitive value) + domain_name = (known after apply) + password = (sensitive value) + region = (known after apply) + tenant_id = (sensitive value) + tenant_name = (known after apply) + trust_id = (sensitive value) + username = (sensitive value) } + load_balancer { + create_monitor = (known after apply) + floating_network_id = (known after apply) + lb_method = (known after apply) + lb_provider = (known after apply) + lb_version = (known after apply) + manage_security_groups = (known after apply) + monitor_delay = (known after apply) + monitor_max_retries = (known after apply) + monitor_timeout = (known after apply) + subnet_id = (known after apply) + use_octavia = (known after apply) } + metadata { + request_timeout = (known after apply) + search_order = (known after apply) 
} + route { + router_id = (known after apply) } } + vsphere_cloud_provider { + disk { + scsi_controller_type = (known after apply) } + global { + datacenters = (known after apply) + insecure_flag = (known after apply) + password = (sensitive value) + port = (known after apply) + soap_roundtrip_count = (known after apply) + user = (sensitive value) } + network { + public_network = (known after apply) } + virtual_center { + datacenters = (known after apply) + name = (known after apply) + password = (sensitive value) + port = (known after apply) + soap_roundtrip_count = (known after apply) + user = (sensitive value) } + workspace { + datacenter = (known after apply) + default_datastore = (known after apply) + folder = (known after apply) + resourcepool_path = (known after apply) + server = (known after apply) } } } + dns { + node_selector = (known after apply) + options = (known after apply) + provider = (known after apply) + reverse_cidrs = (known after apply) + upstream_nameservers = (known after apply) + linear_autoscaler_params { + cores_per_replica = (known after apply) + max = (known after apply) + min = (known after apply) + nodes_per_replica = (known after apply) + prevent_single_point_failure = (known after apply) } + nodelocal { + ip_address = (known after apply) + node_selector = (known after apply) } + tolerations { + effect = (known after apply) + key = (known after apply) + operator = (known after apply) + seconds = (known after apply) + value = (known after apply) } + update_strategy { + strategy = (known after apply) + rolling_update { + max_surge = (known after apply) + max_unavailable = (known after apply) } } } + ingress { + default_backend = (known after apply) + dns_policy = (known after apply) + extra_args = (known after apply) + http_port = (known after apply) + https_port = (known after apply) + network_mode = (known after apply) + node_selector = (known after apply) + options = (known after apply) + provider = (known after apply) + tolerations { + effect = (known after apply) + key = (known after apply) + operator = (known after apply) + seconds = (known after apply) + value = (known after apply) } + update_strategy { + strategy = (known after apply) + rolling_update { + max_unavailable = (known after apply) } } } + monitoring { + node_selector = (known after apply) + options = (known after apply) + provider = (known after apply) + replicas = (known after apply) + tolerations { + effect = (known after apply) + key = (known after apply) + operator = (known after apply) + seconds = (known after apply) + value = (known after apply) } + update_strategy { + strategy = (known after apply) + rolling_update { + max_surge = (known after apply) + max_unavailable = (known after apply) } } } + network { + mtu = 0 + options = (known after apply) + plugin = "canal" } + services { + etcd { + ca_cert = (known after apply) + cert = (sensitive value) + creation = (known after apply) + external_urls = (known after apply) + extra_args = (known after apply) + extra_binds = (known after apply) + extra_env = (known after apply) + gid = (known after apply) + image = (known after apply) + key = (sensitive value) + path = (known after apply) + retention = (known after apply) + snapshot = (known after apply) + uid = (known after apply) + backup_config { + enabled = (known after apply) + interval_hours = (known after apply) + retention = (known after apply) + safe_timestamp = (known after apply) + timeout = (known after apply) + s3_backup_config { + access_key = (sensitive value) + bucket_name = 
(known after apply) + custom_ca = (known after apply) + endpoint = (known after apply) + folder = (known after apply) + region = (known after apply) + secret_key = (sensitive value) } } } + kube_api { + admission_configuration = (known after apply) + always_pull_images = (known after apply) + extra_args = (known after apply) + extra_binds = (known after apply) + extra_env = (known after apply) + image = (known after apply) + pod_security_policy = (known after apply) + service_cluster_ip_range = (known after apply) + service_node_port_range = (known after apply) + audit_log { + enabled = (known after apply) + configuration { + format = (known after apply) + max_age = (known after apply) + max_backup = (known after apply) + max_size = (known after apply) + path = (known after apply) + policy = (known after apply) } } + event_rate_limit { + configuration = (known after apply) + enabled = (known after apply) } + secrets_encryption_config { + custom_config = (known after apply) + enabled = (known after apply) } } + kube_controller { + cluster_cidr = (known after apply) + extra_args = (known after apply) + extra_binds = (known after apply) + extra_env = (known after apply) + image = (known after apply) + service_cluster_ip_range = (known after apply) } + kubelet { + cluster_dns_server = (known after apply) + cluster_domain = (known after apply) + extra_args = (known after apply) + extra_binds = (known after apply) + extra_env = (known after apply) + fail_swap_on = (known after apply) + generate_serving_certificate = (known after apply) + image = (known after apply) + infra_container_image = (known after apply) } + kubeproxy { + extra_args = (known after apply) + extra_binds = (known after apply) + extra_env = (known after apply) + image = (known after apply) } + scheduler { + extra_args = (known after apply) + extra_binds = (known after apply) + extra_env = (known after apply) + image = (known after apply) } } + upgrade_strategy { + drain = (known after apply) + max_unavailable_controlplane = (known after apply) + max_unavailable_worker = (known after apply) + drain_input { + delete_local_data = (known after apply) + force = (known after apply) + grace_period = (known after apply) + ignore_daemon_sets = (known after apply) + timeout = (known after apply) } } } + scheduled_cluster_scan { + enabled = (known after apply) + scan_config { + cis_scan_config { + debug_master = (known after apply) + debug_worker = (known after apply) + override_benchmark_version = (known after apply) + override_skip = (known after apply) + profile = (known after apply) } } + schedule_config { + cron_schedule = (known after apply) + retention = (known after apply) } } } # module.rancher_node_internship0.proxmox_vm_qemu.node will be created + resource "proxmox_vm_qemu" "node" { + additional_wait = 0 + agent = 0 + balloon = 4096 + bios = "seabios" + boot = "c" + bootdisk = (known after apply) + ciuser = "root" + clone = "debian-bullseye-11.3-zfs-2022-04-21" + clone_wait = 0 + cores = 4 + cpu = "kvm64" + default_ipv4_address = (known after apply) + define_connection_info = true + desc = "Rancher node for the internship" + force_create = false + full_clone = false + guest_agent_ready_timeout = 100 + hotplug = "network,disk,usb" + id = (known after apply) + ipconfig0 = "ip=192.168.130.140/24,gw=192.168.130.1" + kvm = true + memory = 8192 + name = "rancher-node-intership0" + nameserver = "192.168.100.29" + numa = false + onboot = true + oncreate = true + os_type = "cloud-init" + preprovision = true + reboot_required = (known 
after apply) + scsihw = (known after apply) + searchdomain = "internal.staging.swh.network" + sockets = 1 + ssh_host = (known after apply) + ssh_port = (known after apply) + ssh_user = "root" + sshkeys = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform" + tablet = true + target_node = "uffizi" + unused_disk = (known after apply) + vcpus = 0 + vlan = -1 + vmid = 146 + disk { + backup = 0 + cache = "none" + file = (known after apply) + format = (known after apply) + iothread = 0 + mbps = 0 + mbps_rd = 0 + mbps_rd_max = 0 + mbps_wr = 0 + mbps_wr_max = 0 + media = (known after apply) + replicate = 0 + size = "20G" + slot = (known after apply) + ssd = 0 + storage = "proxmox" + storage_type = (known after apply) + type = "virtio" + volume = (known after apply) } + disk { + backup = 0 + cache = "none" + file = (known after apply) + format = (known after apply) + iothread = 0 + mbps = 0 + mbps_rd = 0 + mbps_rd_max = 0 + mbps_wr = 0 + mbps_wr_max = 0 + media = (known after apply) + replicate = 0 + size = "50G" + slot = (known after apply) + ssd = 0 + storage = "proxmox" + storage_type = (known after apply) + type = "virtio" + volume = (known after apply) } + network { + bridge = "vmbr443" + firewall = false + link_down = false + macaddr = (known after apply) + model = "virtio" + queues = (known after apply) + rate = (known after apply) + tag = -1 } } # module.rancher_node_internship1.proxmox_vm_qemu.node will be created + resource "proxmox_vm_qemu" "node" { + additional_wait = 0 + agent = 0 + balloon = 4096 + bios = "seabios" + boot = "c" + bootdisk = (known after apply) + ciuser = "root" + clone = "debian-bullseye-11.3-zfs-2022-04-21" + clone_wait = 0 + cores = 4 + cpu = "kvm64" + default_ipv4_address = (known after apply) + define_connection_info = true + desc = "Rancher node for the internship" + force_create = false + full_clone = false + guest_agent_ready_timeout = 100 + hotplug = "network,disk,usb" + id = (known after apply) + ipconfig0 = "ip=192.168.130.141/24,gw=192.168.130.1" + kvm = true + memory = 8192 + name = "rancher-node-intership1" + nameserver = "192.168.100.29" + numa = false + onboot = true + oncreate = true + os_type = "cloud-init" + preprovision = true + reboot_required = (known after apply) + scsihw = (known after apply) + searchdomain = "internal.staging.swh.network" + sockets = 1 + ssh_host = (known after apply) + ssh_port = (known after apply) + ssh_user = "root" + sshkeys = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform" + tablet = true + target_node = "uffizi" + unused_disk = (known after apply) + vcpus = 0 + vlan = -1 + vmid = 146 + disk { + backup = 0 + cache = "none" + file = (known after apply) + format = (known after apply) + iothread = 0 + mbps = 0 + mbps_rd = 0 + mbps_rd_max = 0 + mbps_wr = 0 + mbps_wr_max = 0 + media = (known after apply) + replicate = 0 + size = "20G" + slot = 
(known after apply) + ssd = 0 + storage = "proxmox" + storage_type = (known after apply) + type = "virtio" + volume = (known after apply) } + disk { + backup = 0 + cache = "none" + file = (known after apply) + format = (known after apply) + iothread = 0 + mbps = 0 + mbps_rd = 0 + mbps_rd_max = 0 + mbps_wr = 0 + mbps_wr_max = 0 + media = (known after apply) + replicate = 0 + size = "50G" + slot = (known after apply) + ssd = 0 + storage = "proxmox" + storage_type = (known after apply) + type = "virtio" + volume = (known after apply) } + network { + bridge = "vmbr443" + firewall = false + link_down = false + macaddr = (known after apply) + model = "virtio" + queues = (known after apply) + rate = (known after apply) + tag = -1 } } # module.rancher_node_internship2.proxmox_vm_qemu.node will be created + resource "proxmox_vm_qemu" "node" { + additional_wait = 0 + agent = 0 + balloon = 4096 + bios = "seabios" + boot = "c" + bootdisk = (known after apply) + ciuser = "root" + clone = "debian-bullseye-11.3-zfs-2022-04-21" + clone_wait = 0 + cores = 4 + cpu = "kvm64" + default_ipv4_address = (known after apply) + define_connection_info = true + desc = "Rancher node for the internship" + force_create = false + full_clone = false + guest_agent_ready_timeout = 100 + hotplug = "network,disk,usb" + id = (known after apply) + ipconfig0 = "ip=192.168.130.142/24,gw=192.168.130.1" + kvm = true + memory = 8192 + name = "rancher-node-intership2" + nameserver = "192.168.100.29" + numa = false + onboot = true + oncreate = true + os_type = "cloud-init" + preprovision = true + reboot_required = (known after apply) + scsihw = (known after apply) + searchdomain = "internal.staging.swh.network" + sockets = 1 + ssh_host = (known after apply) + ssh_port = (known after apply) + ssh_user = "root" + sshkeys = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform" + tablet = true + target_node = "uffizi" + unused_disk = (known after apply) + vcpus = 0 + vlan = -1 + vmid = 146 + disk { + backup = 0 + cache = "none" + file = (known after apply) + format = (known after apply) + iothread = 0 + mbps = 0 + mbps_rd = 0 + mbps_rd_max = 0 + mbps_wr = 0 + mbps_wr_max = 0 + media = (known after apply) + replicate = 0 + size = "20G" + slot = (known after apply) + ssd = 0 + storage = "proxmox" + storage_type = (known after apply) + type = "virtio" + volume = (known after apply) } + disk { + backup = 0 + cache = "none" + file = (known after apply) + format = (known after apply) + iothread = 0 + mbps = 0 + mbps_rd = 0 + mbps_rd_max = 0 + mbps_wr = 0 + mbps_wr_max = 0 + media = (known after apply) + replicate = 0 + size = "50G" + slot = (known after apply) + ssd = 0 + storage = "proxmox" + storage_type = (known after apply) + type = "virtio" + volume = (known after apply) } + network { + bridge = "vmbr443" + firewall = false + link_down = false + macaddr = (known after apply) + model = "virtio" + queues = (known after apply) + rate = (known after apply) + tag = -1 } } # module.worker0.proxmox_vm_qemu.node will be updated in-place ~ resource "proxmox_vm_qemu" "node" { - args = "-device virtio-rng-pci" -> null id = "pompidou/qemu/117" name = "worker0" # (42 unchanged attributes hidden) # (2 unchanged 
blocks hidden) } # module.worker1.proxmox_vm_qemu.node will be updated in-place ~ resource "proxmox_vm_qemu" "node" { - args = "-device virtio-rng-pci" -> null id = "pompidou/qemu/118" name = "worker1" # (42 unchanged attributes hidden) # (2 unchanged blocks hidden) } Plan: 4 to add, 2 to change, 0 to destroy. Changes to Outputs: + deployment_intership_cluster_command = (sensitive value) + deployment_intership_cluster_summary = (sensitive value) + rancher_node_internship0_summary = <<-EOT hostname: rancher-node-intership0 fqdn: rancher-node-intership0.internal.staging.swh.network network: ip=192.168.130.140/24,gw=192.168.130.1 EOT + rancher_node_internship1_summary = <<-EOT hostname: rancher-node-intership1 fqdn: rancher-node-intership1.internal.staging.swh.network network: ip=192.168.130.141/24,gw=192.168.130.1 EOT + rancher_node_internship2_summary = <<-EOT hostname: rancher-node-intership2 fqdn: rancher-node-intership2.internal.staging.swh.network network: ip=192.168.130.142/24,gw=192.168.130.1 EOT
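The two in-place updates on worker0 and worker1 only drop the extra `args = "-device virtio-rng-pci"` so the declared configuration matches the real VMs (commit rSPRE062ec99d2101). For the new resources, the plan corresponds to declarations along the following lines. This is only a heavily trimmed sketch: the real definitions live in rSPRE0ff550d0f889 and the proxmox node module, the literal values are taken from the plan above, and the module path and argument names are illustrative.

```
terraform {
  required_providers {
    rancher2 = {
      source = "rancher/rancher2"
    }
  }
}

# Registers the new cluster in Rancher; only the attributes explicitly set
# in the plan are shown, everything else is left at the provider defaults.
resource "rancher2_cluster" "deployment_intership" {
  name        = "deployment-intership"
  description = "staging cluster for deployment test"

  rke_config {
    network {
      plugin = "canal"
    }
  }
}

# One module instantiation per rancher node; the module wraps proxmox_vm_qemu.
module "rancher_node_internship0" {
  source = "../modules/node"   # illustrative path

  hostname    = "rancher-node-intership0"
  description = "Rancher node for the internship"
  cores       = 4
  memory      = 8192
  ip          = "192.168.130.140/24"
  gateway     = "192.168.130.1"
  bridge      = "vmbr443"
}
```

The sensitive `deployment_intership_cluster_command` output presumably exposes the node registration command generated by Rancher, which is what gets run on the three nodes once they exist.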
Diff Detail
- Repository
- rSPRE sysadm-provisioning
- Lint
- Automatic diff as part of commit; lint not applicable.
- Unit
- Automatic diff as part of commit; unit tests not applicable.
Event Timeline
- fix wrong references to the elastic worker cluster
- rename nodes from rancher-node-internX to rancher-node-internshipX
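In the plan above the three nodes are plain creations, so the rename only touches the declarations. If a rename like this ever has to be applied to VMs that already exist in staging.tfstate, Terraform 1.1+ can record it with a moved block so the machines are not destroyed and recreated; the addresses below are illustrative:

```
# Illustrative addresses: only needed for resources already tracked in the
# state whose module address changes.
moved {
  from = module.rancher_node_intern0
  to   = module.rancher_node_internship0
}
```

On older Terraform versions the equivalent is `terraform state mv module.rancher_node_intern0 module.rancher_node_internship0`.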
- fix the cloud-init / puppet race after the VMs start up (a sketch of the idea follows this list)
- remove the wrong vmid assigned to the new cluster nodes (the plan above hard-codes vmid = 146 on all three VMs)
- refresh the staging.tfstate file after applying the new configuration
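The actual fixes are in rSPREd43e6bb05d85 and the follow-up commits and are not shown here; the snippet below is only a minimal sketch of the idea, assuming the node module wraps proxmox_vm_qemu as shown in the plan, with illustrative connection and provisioner details:

```
resource "proxmox_vm_qemu" "node" {
  name        = "rancher-node-intership0"
  target_node = "uffizi"
  clone       = "debian-bullseye-11.3-zfs-2022-04-21"
  # ... remaining attributes as in the plan above ...

  # 0 lets the Telmate proxmox provider pick the next free VM id instead of
  # hard-coding the same vmid (146) on all three nodes.
  vmid = 0

  # Start puppet only once cloud-init has completely finished, so the two
  # tools do not modify the same files concurrently right after first boot.
  provisioner "remote-exec" {
    connection {
      type = "ssh"
      user = "root"
      host = self.default_ipv4_address
    }

    inline = [
      "cloud-init status --wait || true",
      "systemctl enable --now puppet",
    ]
  }
}
```

`cloud-init status --wait` blocks until cloud-init reports that it is done; the `|| true` keeps a degraded cloud-init run from aborting the whole provisioning.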