Related to T4135
Details
- Reviewers: None
- Group Reviewers: System administrators
- Maniphest Tasks: T4135: staging: Deploy graphql service
- Commits: rSPRE71e31c0fea42: Add new staging graphql-worker nodes
terraform plan:
terraform plan

module.worker1.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/118]
module.rp0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/129]
module.search-esnode0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/130]
module.objstorage0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/102]
module.scrubber0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/142]
rancher2_cluster.deployment_internship: Refreshing state... [id=c-fvnrx]
rancher2_cluster.staging-workers: Refreshing state... [id=c-bp26n]
module.search0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/131]
module.worker2.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/112]
module.maven-exporter0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/122]
module.webapp.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/119]
module.scheduler0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/116]
module.deposit.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/120]
module.mirror-test.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/132]
module.vault.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/121]
module.worker3.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/137]
module.worker0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/117]
module.counters0.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/138]
rancher2_catalog_v2.keda: Refreshing state... [id=c-bp26n.keda]
rancher2_app_v2.rancher-monitoring: Refreshing state... [id=c-bp26n.cattle-monitoring-system/rancher-monitoring]
rancher2_app_v2.keda: Refreshing state... [id=c-bp26n.kedacore/keda]
module.elastic-worker0.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/146]
module.elastic-worker1.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/147]
module.elastic-worker3.proxmox_vm_qemu.node: Refreshing state... [id=pompidou/qemu/149]
module.elastic-worker2.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/148]
module.rancher_node_internship2.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/152]
module.rancher_node_internship1.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/151]
module.rancher_node_internship0.proxmox_vm_qemu.node: Refreshing state... [id=uffizi/qemu/150]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # rancher2_cluster.cluster-graphql will be created
  + resource "rancher2_cluster" "cluster-graphql" {
      + annotations = (known after apply)
      + ca_cert = (sensitive value)
      + cluster_registration_token = (known after apply)
      + default_pod_security_policy_template_id = (known after apply)
      + default_project_id = (known after apply)
      + description = "graphql staging cluster"
      + desired_agent_image = (known after apply)
      + desired_auth_image = (known after apply)
      + docker_root_dir = (known after apply)
      + driver = (known after apply)
      + enable_cluster_alerting = (known after apply)
      + enable_cluster_istio = (known after apply)
      + enable_cluster_monitoring = (known after apply)
      + enable_network_policy = (known after apply)
      + fleet_workspace_name = (known after apply)
      + id = (known after apply)
      + istio_enabled = (known after apply)
      + kube_config = (sensitive value)
      + labels = (known after apply)
      + name = "cluster-graphql"
      + system_project_id = (known after apply)
      + windows_prefered_cluster = false

      + cluster_auth_endpoint {
          + ca_certs = (known after apply)
          + enabled = (known after apply)
          + fqdn = (known after apply)
        }

      + cluster_template_answers {
          + cluster_id = (known after apply)
          + project_id = (known after apply)
          + values = (known after apply)
        }

      + cluster_template_questions {
          + default = (known after apply)
          + required = (known after apply)
          + type = (known after apply)
          + variable = (known after apply)
        }

      + eks_config_v2 {
          + cloud_credential_id = (known after apply)
          + imported = (known after apply)
          + kms_key = (known after apply)
          + kubernetes_version = (known after apply)
          + logging_types = (known after apply)
          + name = (known after apply)
          + private_access = (known after apply)
          + public_access = (known after apply)
          + public_access_sources = (known after apply)
          + region = (known after apply)
          + secrets_encryption = (known after apply)
          + security_groups = (known after apply)
          + service_role = (known after apply)
          + subnets = (known after apply)
          + tags = (known after apply)

          + node_groups {
              + desired_size = (known after apply)
              + disk_size = (known after apply)
              + ec2_ssh_key = (known after apply)
              + gpu = (known after apply)
              + image_id = (known after apply)
              + instance_type = (known after apply)
              + labels = (known after apply)
              + max_size = (known after apply)
              + min_size = (known after apply)
              + name = (known after apply)
              + request_spot_instances = (known after apply)
              + resource_tags = (known after apply)
              + spot_instance_types = (known after apply)
              + subnets = (known after apply)
              + tags = (known after apply)
              + user_data = (known after apply)
              + version = (known after apply)

              + launch_template {
                  + id = (known after apply)
                  + name = (known after apply)
                  + version = (known after apply)
                }
            }
        }

      + k3s_config {
          + version = (known after apply)

          + upgrade_strategy {
              + drain_server_nodes = (known after apply)
              + drain_worker_nodes = (known after apply)
              + server_concurrency = (known after apply)
              + worker_concurrency = (known after apply)
            }
        }

      + rke2_config {
          + version = (known after apply)

          + upgrade_strategy {
              + drain_server_nodes = (known after apply)
              + drain_worker_nodes = (known after apply)
              + server_concurrency = (known after apply)
              + worker_concurrency = (known after apply)
            }
        }

      + rke_config {
          + addon_job_timeout = (known after apply)
          + enable_cri_dockerd = false
          + ignore_docker_version = true
          + kubernetes_version = (known after apply)
          + prefix_path = (known after apply)
          + ssh_agent_auth = false
          + ssh_cert_path = (known after apply)
          + ssh_key_path = (known after apply)
          + win_prefix_path = (known after apply)

          + authentication {
              + sans = (known after apply)
              + strategy = (known after apply)
            }

          + authorization {
              + mode = (known after apply)
              + options = (known after apply)
            }

          + bastion_host {
              + address = (known after apply)
              + port = (known after apply)
              + ssh_agent_auth = (known after apply)
              + ssh_key = (sensitive value)
              + ssh_key_path = (known after apply)
              + user = (known after apply)
            }

          + cloud_provider {
              + custom_cloud_provider = (known after apply)
              + name = (known after apply)

              + aws_cloud_provider {
                  + global {
                      + disable_security_group_ingress = (known after apply)
                      + disable_strict_zone_check = (known after apply)
                      + elb_security_group = (known after apply)
                      + kubernetes_cluster_id = (known after apply)
                      + kubernetes_cluster_tag = (known after apply)
                      + role_arn = (known after apply)
                      + route_table_id = (known after apply)
                      + subnet_id = (known after apply)
                      + vpc = (known after apply)
                      + zone = (known after apply)
                    }

                  + service_override {
                      + region = (known after apply)
                      + service = (known after apply)
                      + signing_method = (known after apply)
                      + signing_name = (known after apply)
                      + signing_region = (known after apply)
                      + url = (known after apply)
                    }
                }

              + azure_cloud_provider {
                  + aad_client_cert_password = (sensitive value)
                  + aad_client_cert_path = (known after apply)
                  + aad_client_id = (sensitive value)
                  + aad_client_secret = (sensitive value)
                  + cloud = (known after apply)
                  + cloud_provider_backoff = (known after apply)
                  + cloud_provider_backoff_duration = (known after apply)
                  + cloud_provider_backoff_exponent = (known after apply)
                  + cloud_provider_backoff_jitter = (known after apply)
                  + cloud_provider_backoff_retries = (known after apply)
                  + cloud_provider_rate_limit = (known after apply)
                  + cloud_provider_rate_limit_bucket = (known after apply)
                  + cloud_provider_rate_limit_qps = (known after apply)
                  + load_balancer_sku = (known after apply)
                  + location = (known after apply)
                  + maximum_load_balancer_rule_count = (known after apply)
                  + primary_availability_set_name = (known after apply)
                  + primary_scale_set_name = (known after apply)
                  + resource_group = (known after apply)
                  + route_table_name = (known after apply)
                  + security_group_name = (known after apply)
                  + subnet_name = (known after apply)
                  + subscription_id = (sensitive value)
                  + tenant_id = (sensitive value)
                  + use_instance_metadata = (known after apply)
                  + use_managed_identity_extension = (known after apply)
                  + vm_type = (known after apply)
                  + vnet_name = (known after apply)
                  + vnet_resource_group = (known after apply)
                }

              + openstack_cloud_provider {
                  + block_storage {
                      + bs_version = (known after apply)
                      + ignore_volume_az = (known after apply)
                      + trust_device_path = (known after apply)
                    }

                  + global {
                      + auth_url = (known after apply)
                      + ca_file = (known after apply)
                      + domain_id = (sensitive value)
                      + domain_name = (known after apply)
                      + password = (sensitive value)
                      + region = (known after apply)
                      + tenant_id = (sensitive value)
                      + tenant_name = (known after apply)
                      + trust_id = (sensitive value)
                      + username = (sensitive value)
                    }

                  + load_balancer {
                      + create_monitor = (known after apply)
                      + floating_network_id = (known after apply)
                      + lb_method = (known after apply)
                      + lb_provider = (known after apply)
                      + lb_version = (known after apply)
                      + manage_security_groups = (known after apply)
                      + monitor_delay = (known after apply)
                      + monitor_max_retries = (known after apply)
                      + monitor_timeout = (known after apply)
                      + subnet_id = (known after apply)
                      + use_octavia = (known after apply)
                    }

                  + metadata {
                      + request_timeout = (known after apply)
                      + search_order = (known after apply)
                    }

                  + route {
                      + router_id = (known after apply)
                    }
                }

              + vsphere_cloud_provider {
                  + disk {
                      + scsi_controller_type = (known after apply)
                    }

                  + global {
                      + datacenters = (known after apply)
                      + insecure_flag = (known after apply)
                      + password = (sensitive value)
                      + port = (known after apply)
                      + soap_roundtrip_count = (known after apply)
                      + user = (sensitive value)
                    }

                  + network {
                      + public_network = (known after apply)
                    }

                  + virtual_center {
                      + datacenters = (known after apply)
                      + name = (known after apply)
                      + password = (sensitive value)
                      + port = (known after apply)
                      + soap_roundtrip_count = (known after apply)
                      + user = (sensitive value)
                    }

                  + workspace {
                      + datacenter = (known after apply)
                      + default_datastore = (known after apply)
                      + folder = (known after apply)
                      + resourcepool_path = (known after apply)
                      + server = (known after apply)
                    }
                }
            }

          + dns {
              + node_selector = (known after apply)
              + options = (known after apply)
              + provider = (known after apply)
              + reverse_cidrs = (known after apply)
              + upstream_nameservers = (known after apply)

              + linear_autoscaler_params {
                  + cores_per_replica = (known after apply)
                  + max = (known after apply)
                  + min = (known after apply)
                  + nodes_per_replica = (known after apply)
                  + prevent_single_point_failure = (known after apply)
                }

              + nodelocal {
                  + ip_address = (known after apply)
                  + node_selector = (known after apply)
                }

              + tolerations {
                  + effect = (known after apply)
                  + key = (known after apply)
                  + operator = (known after apply)
                  + seconds = (known after apply)
                  + value = (known after apply)
                }

              + update_strategy {
                  + strategy = (known after apply)

                  + rolling_update {
                      + max_surge = (known after apply)
                      + max_unavailable = (known after apply)
                    }
                }
            }

          + ingress {
              + default_backend = (known after apply)
              + dns_policy = (known after apply)
              + extra_args = (known after apply)
              + http_port = (known after apply)
              + https_port = (known after apply)
              + network_mode = (known after apply)
              + node_selector = (known after apply)
              + options = (known after apply)
              + provider = (known after apply)

              + tolerations {
                  + effect = (known after apply)
                  + key = (known after apply)
                  + operator = (known after apply)
                  + seconds = (known after apply)
                  + value = (known after apply)
                }

              + update_strategy {
                  + strategy = (known after apply)

                  + rolling_update {
                      + max_unavailable = (known after apply)
                    }
                }
            }

          + monitoring {
              + node_selector = (known after apply)
              + options = (known after apply)
              + provider = (known after apply)
              + replicas = (known after apply)

              + tolerations {
                  + effect = (known after apply)
                  + key = (known after apply)
                  + operator = (known after apply)
                  + seconds = (known after apply)
                  + value = (known after apply)
                }

              + update_strategy {
                  + strategy = (known after apply)

                  + rolling_update {
                      + max_surge = (known after apply)
                      + max_unavailable = (known after apply)
                    }
                }
            }

          + network {
              + mtu = 0
              + options = (known after apply)
              + plugin = "canal"
            }

          + services {
              + etcd {
                  + ca_cert = (known after apply)
                  + cert = (sensitive value)
                  + creation = (known after apply)
                  + external_urls = (known after apply)
                  + extra_args = (known after apply)
                  + extra_binds = (known after apply)
                  + extra_env = (known after apply)
                  + gid = (known after apply)
                  + image = (known after apply)
                  + key = (sensitive value)
                  + path = (known after apply)
                  + retention = (known after apply)
                  + snapshot = (known after apply)
                  + uid = (known after apply)

                  + backup_config {
                      + enabled = (known after apply)
                      + interval_hours = (known after apply)
                      + retention = (known after apply)
                      + safe_timestamp = (known after apply)
                      + timeout = (known after apply)

                      + s3_backup_config {
                          + access_key = (sensitive value)
                          + bucket_name = (known after apply)
                          + custom_ca = (known after apply)
                          + endpoint = (known after apply)
                          + folder = (known after apply)
                          + region = (known after apply)
                          + secret_key = (sensitive value)
                        }
                    }
                }

              + kube_api {
                  + admission_configuration = (known after apply)
                  + always_pull_images = (known after apply)
                  + extra_args = (known after apply)
                  + extra_binds = (known after apply)
                  + extra_env = (known after apply)
                  + image = (known after apply)
                  + pod_security_policy = (known after apply)
                  + service_cluster_ip_range = (known after apply)
                  + service_node_port_range = (known after apply)

                  + audit_log {
                      + enabled = (known after apply)

                      + configuration {
                          + format = (known after apply)
                          + max_age = (known after apply)
                          + max_backup = (known after apply)
                          + max_size = (known after apply)
                          + path = (known after apply)
                          + policy = (known after apply)
                        }
                    }

                  + event_rate_limit {
                      + configuration = (known after apply)
                      + enabled = (known after apply)
                    }

                  + secrets_encryption_config {
                      + custom_config = (known after apply)
                      + enabled = (known after apply)
                    }
                }

              + kube_controller {
                  + cluster_cidr = (known after apply)
                  + extra_args = (known after apply)
                  + extra_binds = (known after apply)
                  + extra_env = (known after apply)
                  + image = (known after apply)
                  + service_cluster_ip_range = (known after apply)
                }

              + kubelet {
                  + cluster_dns_server = (known after apply)
                  + cluster_domain = (known after apply)
                  + extra_args = (known after apply)
                  + extra_binds = (known after apply)
                  + extra_env = (known after apply)
                  + fail_swap_on = (known after apply)
                  + generate_serving_certificate = (known after apply)
                  + image = (known after apply)
                  + infra_container_image = (known after apply)
                }

              + kubeproxy {
                  + extra_args = (known after apply)
                  + extra_binds = (known after apply)
                  + extra_env = (known after apply)
                  + image = (known after apply)
                }

              + scheduler {
                  + extra_args = (known after apply)
                  + extra_binds = (known after apply)
                  + extra_env = (known after apply)
                  + image = (known after apply)
                }
            }

          + upgrade_strategy {
              + drain = (known after apply)
              + max_unavailable_controlplane = (known after apply)
              + max_unavailable_worker = (known after apply)

              + drain_input {
                  + delete_local_data = (known after apply)
                  + force = (known after apply)
                  + grace_period = (known after apply)
                  + ignore_daemon_sets = (known after apply)
                  + timeout = (known after apply)
                }
            }
        }

      + scheduled_cluster_scan {
          + enabled = (known after apply)

          + scan_config {
              + cis_scan_config {
                  + debug_master = (known after apply)
                  + debug_worker = (known after apply)
                  + override_benchmark_version = (known after apply)
                  + override_skip = (known after apply)
                  + profile = (known after apply)
                }
            }

          + schedule_config {
              + cron_schedule = (known after apply)
              + retention = (known after apply)
            }
        }
    }

  # module.graphql-worker0.proxmox_vm_qemu.node will be created
  + resource "proxmox_vm_qemu" "node" {
      + additional_wait = 0
      + agent = 0
      + automatic_reboot = true
      + balloon = 4096
      + bios = "seabios"
      + boot = "c"
      + bootdisk = (known after apply)
      + ciuser = "root"
      + clone = "debian-bullseye-11.3-zfs-2022-04-21"
      + clone_wait = 0
      + cores = 4
      + cpu = "kvm64"
      + default_ipv4_address = (known after apply)
      + define_connection_info = true
      + desc = "elastic worker running in rancher cluster"
      + force_create = false
      + full_clone = false
      + guest_agent_ready_timeout = 100
      + hotplug = "network,disk,usb"
      + id = (known after apply)
      + ipconfig0 = "ip=192.168.130.150/24,gw=192.168.130.1"
      + kvm = true
      + memory = 8192
      + name = "graphql-worker0"
      + nameserver = "192.168.100.29"
      + numa = false
      + onboot = true
      + oncreate = true
      + os_type = "cloud-init"
      + preprovision = true
      + reboot_required = (known after apply)
      + scsihw = (known after apply)
      + searchdomain = "internal.staging.swh.network"
      + sockets = 1
      + ssh_host = (known after apply)
      + ssh_port = (known after apply)
      + ssh_user = "root"
      + sshkeys = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform"
      + tablet = true
      + target_node = "uffizi"
      + unused_disk = (known after apply)
      + vcpus = 0
      + vlan = -1
      + vmid = 146

      + disk {
          + backup = 0
          + cache = "none"
          + file = (known after apply)
          + format = (known after apply)
          + iothread = 0
          + mbps = 0
          + mbps_rd = 0
          + mbps_rd_max = 0
          + mbps_wr = 0
          + mbps_wr_max = 0
          + media = (known after apply)
          + replicate = 0
          + size = "20G"
          + slot = (known after apply)
          + ssd = 0
          + storage = "proxmox"
          + storage_type = (known after apply)
          + type = "virtio"
          + volume = (known after apply)
        }
      + disk {
          + backup = 0
          + cache = "none"
          + file = (known after apply)
          + format = (known after apply)
          + iothread = 0
          + mbps = 0
          + mbps_rd = 0
          + mbps_rd_max = 0
          + mbps_wr = 0
          + mbps_wr_max = 0
          + media = (known after apply)
          + replicate = 0
          + size = "50G"
          + slot = (known after apply)
          + ssd = 0
          + storage = "proxmox"
          + storage_type = (known after apply)
          + type = "virtio"
          + volume = (known after apply)
        }

      + network {
          + bridge = "vmbr443"
          + firewall = false
          + link_down = false
          + macaddr = (known after apply)
          + model = "virtio"
          + queues = (known after apply)
          + rate = (known after apply)
          + tag = -1
        }
    }

  # module.graphql-worker1.proxmox_vm_qemu.node will be created
  + resource "proxmox_vm_qemu" "node" {
      + additional_wait = 0
      + agent = 0
      + automatic_reboot = true
      + balloon = 4096
      + bios = "seabios"
      + boot = "c"
      + bootdisk = (known after apply)
      + ciuser = "root"
      + clone = "debian-bullseye-11.3-zfs-2022-04-21"
      + clone_wait = 0
      + cores = 4
      + cpu = "kvm64"
      + default_ipv4_address = (known after apply)
      + define_connection_info = true
      + desc = "graphql worker running in rancher cluster"
      + force_create = false
      + full_clone = false
      + guest_agent_ready_timeout = 100
      + hotplug = "network,disk,usb"
      + id = (known after apply)
      + ipconfig0 = "ip=192.168.130.151/24,gw=192.168.130.1"
      + kvm = true
      + memory = 8192
      + name = "graphql-worker1"
      + nameserver = "192.168.100.29"
      + numa = false
      + onboot = true
      + oncreate = true
      + os_type = "cloud-init"
      + preprovision = true
      + reboot_required = (known after apply)
      + scsihw = (known after apply)
      + searchdomain = "internal.staging.swh.network"
      + sockets = 1
      + ssh_host = (known after apply)
      + ssh_port = (known after apply)
      + ssh_user = "root"
      + sshkeys = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform"
      + tablet = true
      + target_node = "uffizi"
      + unused_disk = (known after apply)
      + vcpus = 0
      + vlan = -1
      + vmid = (known after apply)

      + disk {
          + backup = 0
          + cache = "none"
          + file = (known after apply)
          + format = (known after apply)
          + iothread = 0
          + mbps = 0
          + mbps_rd = 0
          + mbps_rd_max = 0
          + mbps_wr = 0
          + mbps_wr_max = 0
          + media = (known after apply)
          + replicate = 0
          + size = "20G"
          + slot = (known after apply)
          + ssd = 0
          + storage = "proxmox"
          + storage_type = (known after apply)
          + type = "virtio"
          + volume = (known after apply)
        }
      + disk {
          + backup = 0
          + cache = "none"
          + file = (known after apply)
          + format = (known after apply)
          + iothread = 0
          + mbps = 0
          + mbps_rd = 0
          + mbps_rd_max = 0
          + mbps_wr = 0
          + mbps_wr_max = 0
          + media = (known after apply)
          + replicate = 0
          + size = "50G"
          + slot = (known after apply)
          + ssd = 0
          + storage = "proxmox"
          + storage_type = (known after apply)
          + type = "virtio"
          + volume = (known after apply)
        }

      + network {
          + bridge = "vmbr443"
          + firewall = false
          + link_down = false
          + macaddr = (known after apply)
          + model = "virtio"
          + queues = (known after apply)
          + rate = (known after apply)
          + tag = -1
        }
    }

  # module.graphql-worker2.proxmox_vm_qemu.node will be created
  + resource "proxmox_vm_qemu" "node" {
      + additional_wait = 0
      + agent = 0
      + automatic_reboot = true
      + balloon = 4096
      + bios = "seabios"
      + boot = "c"
      + bootdisk = (known after apply)
      + ciuser = "root"
      + clone = "debian-bullseye-11.3-zfs-2022-04-21"
      + clone_wait = 0
      + cores = 4
      + cpu = "kvm64"
      + default_ipv4_address = (known after apply)
      + define_connection_info = true
      + desc = "graphql worker running in rancher cluster"
      + force_create = false
      + full_clone = false
      + guest_agent_ready_timeout = 100
      + hotplug = "network,disk,usb"
      + id = (known after apply)
      + ipconfig0 = "ip=192.168.130.152/24,gw=192.168.130.1"
      + kvm = true
      + memory = 8192
      + name = "graphql-worker2"
      + nameserver = "192.168.100.29"
      + numa = false
      + onboot = true
      + oncreate = true
      + os_type = "cloud-init"
      + preprovision = true
      + reboot_required = (known after apply)
      + scsihw = (known after apply)
      + searchdomain = "internal.staging.swh.network"
      + sockets = 1
      + ssh_host = (known after apply)
      + ssh_port = (known after apply)
      + ssh_user = "root"
      + sshkeys = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform"
      + tablet = true
      + target_node = "uffizi"
      + unused_disk = (known after apply)
      + vcpus = 0
      + vlan = -1
      + vmid = (known after apply)

      + disk {
          + backup = 0
          + cache = "none"
          + file = (known after apply)
          + format = (known after apply)
          + iothread = 0
          + mbps = 0
          + mbps_rd = 0
          + mbps_rd_max = 0
          + mbps_wr = 0
          + mbps_wr_max = 0
          + media = (known after apply)
          + replicate = 0
          + size = "20G"
          + slot = (known after apply)
          + ssd = 0
          + storage = "proxmox"
          + storage_type = (known after apply)
          + type = "virtio"
          + volume = (known after apply)
        }
      + disk {
          + backup = 0
          + cache = "none"
          + file = (known after apply)
          + format = (known after apply)
          + iothread = 0
          + mbps = 0
          + mbps_rd = 0
          + mbps_rd_max = 0
          + mbps_wr = 0
          + mbps_wr_max = 0
          + media = (known after apply)
          + replicate = 0
          + size = "50G"
          + slot = (known after apply)
          + ssd = 0
          + storage = "proxmox"
          + storage_type = (known after apply)
          + type = "virtio"
          + volume = (known after apply)
        }

      + network {
          + bridge = "vmbr443"
          + firewall = false
          + link_down = false
          + macaddr = (known after apply)
          + model = "virtio"
          + queues = (known after apply)
          + rate = (known after apply)
          + tag = -1
        }
    }

Plan: 4 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + graphql-worker0_summary = <<-EOT
        hostname: graphql-worker0
        fqdn: graphql-worker0.internal.staging.swh.network
        network: ip=192.168.130.150/24,gw=192.168.130.1
    EOT
  + graphql-worker1_summary = <<-EOT
        hostname: graphql-worker1
        fqdn: graphql-worker1.internal.staging.swh.network
        network: ip=192.168.130.151/24,gw=192.168.130.1
    EOT
  + graphql-worker2_summary = <<-EOT
        hostname: graphql-worker2
        fqdn: graphql-worker2.internal.staging.swh.network
        network: ip=192.168.130.152/24,gw=192.168.130.1
    EOT
  - rancher2_cluster_command = (sensitive value)
  + rancher2_cluster_graphql_command = (sensitive value)
  + rancher2_cluster_graphql_summary = (sensitive value)
  + rancher2_cluster_staging_worker_command = (sensitive value)
  + rancher2_cluster_staging_workers_summary = (sensitive value)
  - rancher2_cluster_summary = (sensitive value)

╷
│ Warning: Experimental feature "module_variable_optional_attrs" is active
│
│ on versions.tf line 3, in terraform:
│ 3: experiments = [module_variable_optional_attrs]
│
│ Experimental features are subject to breaking changes in future minor or patch releases, based on feedback.
│
│ If you have feedback on the design of this feature, please open a GitHub issue to discuss it.
│
│ (and 26 more similar warnings elsewhere)
╵
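For context: the plan above adds one rancher2_cluster plus three instantiations of the shared Proxmox node module in proxmox/terraform/staging/cluster-graphql.tf. A minimal sketch of what one of those module blocks might look like, reconstructed from the planned values (the module source path and variable names here are assumptions, not copied from the actual file):

  # Hypothetical reconstruction, not the actual cluster-graphql.tf.
  module "graphql-worker0" {
    source      = "../modules/node"   # assumed path to the shared VM module
    hostname    = "graphql-worker0"
    description = "graphql worker running in rancher cluster"
    hypervisor  = "uffizi"            # target_node in the plan above
    sockets     = 1
    cores       = 4
    memory      = 8192                # MiB; balloon floor at 4096 in the plan
    networks = [{
      ip      = "192.168.130.150"
      gateway = "192.168.130.1"
      bridge  = "vmbr443"
    }]
  }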
Diff Detail
- Repository: rSPRE sysadm-provisioning
- Branch: master
- Lint: No Linters Available
- Unit: No Unit Test Coverage
- Build Status: Buildable 30522, Build 47721: arc lint + arc unit
Event Timeline
Inline comment on proxmox/terraform/staging/cluster-graphql.tf, line 44:
> one small typo and boom, elastic-worker0 got squashed ¯\_(ツ)_/¯.
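Judging from the plan above, the slip was most likely in the block copied from elastic-worker0: graphql-worker0 was planned with desc = "elastic worker running in rancher cluster" and a hardcoded vmid = 146, which is elastic-worker0's existing VM id (uffizi/qemu/146), so applying it overwrote that VM. A sketch of the kind of fix, inferred from the plan rather than taken from the actual diff:

  # before: values left over from the copied elastic-worker0 block
  vmid = 146                                          # collides with uffizi/qemu/146
  desc = "elastic worker running in rancher cluster"

  # after: no hardcoded vmid, so the next free id gets assigned
  # (graphql-worker1/2 already show vmid = (known after apply) in the plan)
  desc = "graphql worker running in rancher cluster"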
Cluster created, along with the first node... it took 1h though... slow.
...
module.graphql-worker0.proxmox_vm_qemu.node: Creation complete after 1h0m40s [id=uffizi/qemu/162]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

counters0_summary = <<EOT
hostname: counters0
fqdn: counters0.internal.staging.swh.network
network: ip=192.168.130.95/24,gw=192.168.130.1
EOT
deployment_internship_cluster_command = <sensitive>
deployment_internship_cluster_summary = <sensitive>
deposit_summary = <<EOT
hostname: deposit
fqdn: deposit.internal.staging.swh.network
network: ip=192.168.130.31/24,gw=192.168.130.1
EOT
elastic-worker0_summary = <<EOT
hostname: elastic-worker0
fqdn: elastic-worker0.internal.staging.swh.network
network: ip=192.168.130.130/24,gw=192.168.130.1
EOT
elastic-worker1_summary = <<EOT
hostname: elastic-worker1
fqdn: elastic-worker1.internal.staging.swh.network
network: ip=192.168.130.131/24,gw=192.168.130.1
EOT
elastic-worker2_summary = <<EOT
hostname: elastic-worker2
fqdn: elastic-worker2.internal.staging.swh.network
network: ip=192.168.130.132/24,gw=192.168.130.1
EOT
elastic-worker3_summary = <<EOT
hostname: elastic-worker3
fqdn: elastic-worker3.internal.staging.swh.network
network: ip=192.168.130.133/24,gw=192.168.130.1
EOT
graphql-worker0_summary = <<EOT
hostname: graphql-worker0
fqdn: graphql-worker0.internal.staging.swh.network
network: ip=192.168.130.150/24,gw=192.168.130.1
EOT
maven-exporter0_summary = <<EOT
hostname: maven-exporter0
fqdn: maven-exporter0.internal.staging.swh.network
network: ip=192.168.130.70/24,gw=192.168.130.1
EOT
mirror-tests_summary = <<EOT
hostname: mirror-test
fqdn: mirror-test.internal.staging.swh.network
network: ip=192.168.130.160/24,gw=192.168.130.1
EOT
objstorage0_summary = <<EOT
hostname: objstorage0
fqdn: objstorage0.internal.staging.swh.network
network: ip=192.168.130.110/24,gw=192.168.130.1
EOT
rancher2_cluster_graphql_command = <sensitive>
rancher2_cluster_graphql_summary = <sensitive>
rancher2_cluster_staging_worker_command = <sensitive>
rancher2_cluster_staging_workers_summary = <sensitive>
rancher_node_internship0_summary = <<EOT
hostname: rancher-node-intership0
fqdn: rancher-node-intership0.internal.staging.swh.network
network: ip=192.168.130.140/24,gw=192.168.130.1
EOT
rancher_node_internship1_summary = <<EOT
hostname: rancher-node-intership1
fqdn: rancher-node-intership1.internal.staging.swh.network
network: ip=192.168.130.141/24,gw=192.168.130.1
EOT
rancher_node_internship2_summary = <<EOT
hostname: rancher-node-intership2
fqdn: rancher-node-intership2.internal.staging.swh.network
network: ip=192.168.130.142/24,gw=192.168.130.1
EOT
rp0_summary = <<EOT
hostname: rp0
fqdn: rp0.internal.staging.swh.network
network: ip=192.168.130.20/24,gw=192.168.130.1
EOT
scheduler0_summary = <<EOT
hostname: scheduler0
fqdn: scheduler0.internal.staging.swh.network
network: ip=192.168.130.50/24,gw=192.168.130.1
EOT
scrubber0_summary = <<EOT
hostname: scrubber0
fqdn: scrubber0.internal.staging.swh.network
network: ip=192.168.130.120/24,gw=192.168.130.1
EOT
search-esnode0_summary = <<EOT
hostname: search-esnode0
fqdn: search-esnode0.internal.staging.swh.network
network: ip=192.168.130.80/24,gw=192.168.130.1
EOT
search0_summary = <<EOT
hostname: search0
fqdn: search0.internal.staging.swh.network
network: ip=192.168.130.90/24,gw=192.168.130.1
EOT
vault_summary = <<EOT
hostname: vault
fqdn: vault.internal.staging.swh.network
network: ip=192.168.130.60/24,gw=192.168.130.1
EOT
webapp_summary = <<EOT
hostname: webapp
fqdn: webapp.internal.staging.swh.network
network: ip=192.168.130.30/24,gw=192.168.130.1
EOT
worker0_summary = <<EOT
hostname: worker0
fqdn: worker0.internal.staging.swh.network
network: ip=192.168.130.100/24,gw=192.168.130.1
EOT
worker1_summary = <<EOT
hostname: worker1
fqdn: worker1.internal.staging.swh.network
network: ip=192.168.130.101/24,gw=192.168.130.1
EOT
worker2_summary = <<EOT
hostname: worker2
fqdn: worker2.internal.staging.swh.network
network: ip=192.168.130.102/24,gw=192.168.130.1
EOT
worker3_summary = <<EOT
hostname: worker3
fqdn: worker3.internal.staging.swh.network
network: ip=192.168.130.103/24,gw=192.168.130.1
EOT
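The two remaining nodes were applied afterwards; if that second run was scoped rather than a full apply (a guess, the exact invocation is not shown), Terraform's -target flag would do it:

  terraform apply -target=module.graphql-worker1 -target=module.graphql-worker2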
done
module.graphql-worker2.proxmox_vm_qemu.node: Creation complete after 1h15m36s [id=uffizi/qemu/164]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

...
graphql-worker0_summary = <<EOT
hostname: graphql-worker0
fqdn: graphql-worker0.internal.staging.swh.network
network: ip=192.168.130.150/24,gw=192.168.130.1
EOT
graphql-worker1_summary = <<EOT
hostname: graphql-worker1
fqdn: graphql-worker1.internal.staging.swh.network
network: ip=192.168.130.151/24,gw=192.168.130.1
EOT
graphql-worker2_summary = <<EOT
hostname: graphql-worker2
fqdn: graphql-worker2.internal.staging.swh.network
network: ip=192.168.130.152/24,gw=192.168.130.1
...
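For later reference, any of these connection summaries can be read back from the state at any time without rerunning an apply, e.g.:

  terraform output graphql-worker2_summary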