D1762.id6054.diff
diff --git a/proxmox/terraform/README.md b/proxmox/terraform/README.md
new file mode 100644
--- /dev/null
+++ b/proxmox/terraform/README.md
@@ -0,0 +1,64 @@
+# What
+
+Terraform lets us transparently declare our infrastructure as code. Using a
+(so far non-official) provider plugin, we can provision vms the same way on
+our rocq infra (proxmox).
+
+# The road so far
+
+## Prepare workstation
+
+See `prepare-workstation.md`.
+
+## setup.sh
+
+Create a `setup.sh` file holding the PM_{USER,PASS} information:
+
+```
+export PM_USER=<swh-login>@pam
+export PM_PASS=<swh-login-pass>
+```
+
+Source it in your current shell session:
+
+```
+source setup.sh
+```
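+
+To quickly check that the credentials are usable, one option (a sketch,
+assuming the standard proxmox `access/ticket` API endpoint) is to request an
+authentication ticket directly:
+
+```
+curl -sk --data-urlencode "username=$PM_USER" \
+     --data-urlencode "password=$PM_PASS" \
+     https://orsay.internal.softwareheritage.org:8006/api2/json/access/ticket
+```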
+
+## Provision new vm
+
+```
+terraform init
+terraform apply
+```
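+
+If only one of the declared vms needs (re)provisioning, terraform's standard
+`-target` option can restrict the run to that resource, e.g. (using one of
+the resource names from `staging.tf`):
+
+```
+terraform apply -target=proxmox_vm_qemu.storage
+```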
+
+# Details
+
+The provisioning bootstraps the vms declared in the `.tf` files. It clones a
+base template (`template-debian-9`, `template-debian-10`) installed on the
+hypervisor. Instructions are detailed in the `init-template.md` file.
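+
+To see which templates are already available, list the vms on the hypervisor
+(as root); with the naming used by `init-template.sh` this is for instance:
+
+```
+qm list | grep template-debian
+```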
+
+# Init
+
+This initializes your local copy with the necessary provider plugins:
+
+```
+terraform init
+```
+
+# Plan changes
+
+Reads all `*.tf` files present in the folder, then computes a
+differential plan:
+
+```
+terraform plan
+```
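+
+The plan can also be saved and applied verbatim later, which guarantees that
+exactly the reviewed plan is executed:
+
+```
+terraform plan -out=staging.plan
+terraform apply staging.plan
+```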
+
+# Apply changes
+
+Proposes to apply the plan to the infrastructure (interactively):
+
+```
+terraform apply
+```
diff --git a/proxmox/terraform/init-template.md b/proxmox/terraform/init-template.md
new file mode 100644
--- /dev/null
+++ b/proxmox/terraform/init-template.md
@@ -0,0 +1,169 @@
+The following documentation explains the steps needed to initialize a
+template vm.
+
+Expectations:
+
+- hypervisor: orsay (could be beaubourg, hypervisor3)
+- `/usr/bin/qm` available on the hypervisor
+
+Prepare vm template
+===================
+
+Connect to the hypervisor orsay (`ssh orsay`).
+
+Then, as root, retrieve the openstack images:
+
+```
+mkdir debian-10
+wget -O debian-10/debian-10-openstack-amd64.qcow2 \
+ https://cdimage.debian.org/cdimage/openstack/current/debian-10.0.1-20190708-openstack-amd64.qcow2
+wget -O debian-10/debian-10-openstack-amd64.qcow2.index \
+ https://cdimage.debian.org/cdimage/openstack/current/debian-10.0.1-20190708-openstack-amd64.qcow2.index
+mkdir debian-9
+wget -O debian-9/debian-9-openstack-amd64.qcow2 \
+ https://cloud.debian.org/images/cloud/OpenStack/current-9/debian-9-openstack-amd64.qcow2
+wget -O debian-9/debian-9-openstack-amd64.qcow2.index \
+ https://cloud.debian.org/images/cloud/OpenStack/current-9/debian-9-openstack-amd64.qcow2.index
+```
+
+Note:
+
+- Not shown here, but you should check the hashes of what you
+  retrieved from the internet (see the sketch below).
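+
+For instance, assuming the checksum files published alongside the images, a
+manual comparison along these lines:
+
+```
+# fetch the published checksums
+wget -O debian-10/SHA512SUMS \
+  https://cdimage.debian.org/cdimage/openstack/current/SHA512SUMS
+# compute the local hash and compare it with the published one
+sha512sum debian-10/debian-10-openstack-amd64.qcow2
+grep openstack-amd64.qcow2 debian-10/SHA512SUMS
+```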
+
+Create vm
+---------
+
+```
+chmod +x init-template.sh
+./init-template.sh 9
+```
+
+This creates a basic vm with a default login/password of root/test so we can
+connect to it.
+
+Note: implementation-wise, this uses a cloud-init-ready openstack debian
+image [1].
+
+[1] https://cdimage.debian.org/cdimage/openstack/
+
+Check image is working
+----------------------
+
+The rationale is to:
+
+- boot the vm
+- check some basic information (kernel, distribution, connectivity,
+  release, etc.)
+- slightly adapt the vm (dns resolver, ip, upgrades, etc.)
+
+### Start vm
+
+```
+qm start 9000
+```
+
+### Checks
+
+Log in through the console web-ui:
+
+- accessible from <https://orsay.internal.softwareheritage.org:8006/>
+- view `datacenter`
+- unfold the hypervisor `orsay` menu
+- select the vm `9000`
+- click the `console` menu
+- log in with the root/test credentials
+
+Checks (for example with the commands shown below):
+
+- linux kernel version
+- debian release
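+
+For example, from the console:
+
+```
+uname -a                 # linux kernel version
+cat /etc/debian_version  # debian release
+```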
+
+### Adaptations
+
+Update grub's timeout to 0 for a faster boot (as root):
+```
+sed -i 's/GRUB_TIMEOUT=5/GRUB_TIMEOUT=0/' /etc/default/grub
+update-grub
+```
+
+Then, add some expected defaults:
+```
+apt update
+apt upgrade -y
+apt install -y puppet
+systemctl stop puppet; systemctl disable puppet.service
+mkdir -p /etc/facter/facts.d
+echo location=sesi_rocquencourt_staging > /etc/facter/facts.d/location.txt
+
+# for stretch (debian-9)
+# we need a newer version of the facter package
+# because we use syntax from that newer version
+cat > /etc/apt/sources.list.d/backports.list <<EOF
+deb https://deb.debian.org/debian stretch-backports main
+EOF
+apt update
+apt install -y -t stretch-backports facter
+# for stretch, we also need a newer cloud-init: the current stable
+# version (7.9) is too old and silently fails to set some cloud-init
+# configuration
+cat > /etc/apt/sources.list.d/buster.list <<EOF
+deb https://deb.debian.org/debian buster main
+EOF
+apt update
+apt install -y cloud-init
+rm /etc/apt/sources.list.d/{buster,backports}.list
+userdel debian  # remove uid 1000 which conflicts with our puppet setup
+```
+- etc...
+
+### Remove cloud-init setup from vm
+
+```
+# stop vm
+qm stop 9000
+# remove cloud-init setup
+qm set 9000 --delete ciuser,cipassword,ipconfig0,nameserver,sshkeys
+```
+
+Template the image
+------------------
+
+When the vm is ready, we can use it as a base template for future
+clones:
+
+```
+qm template 9000
+```
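+
+To double check, the vm configuration should now be flagged as a template
+(assuming `qm config` reports the `template` flag):
+
+```
+qm config 9000 | grep template
+```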
+
+Clone image
+===========
+
+This is a tryout, referenced here to demonstrate a shortcoming. It is not
+necessary to do this, as it will be taken care of by proxmox.
+
+Sadly, only full clones work:
+
+```
+qm clone 9000 666 --name debian-10-tryout --full true
+```
+
+As in: fully clone template "9000" into a new vm with id "666",
+named "debian-10-tryout".
+
+Note (partial clone does not work):
+
+```
+root@orsay:/home/ardumont/proxmox# qm clone 9000 666 --name buster-tryout
+Linked clone feature is not supported for drive 'virtio0'
+```
+
+Note:
+
+- tested with all drive types: ide, sata, scsi, virtio
+- the only setup that worked was without a disk (but then no more os...)
+
+Source
+======
+
+<https://orsay.internal.softwareheritage.org:8006/pve-docs/chapter-qm.html#qm_cloud_init>
diff --git a/proxmox/terraform/init-template.sh b/proxmox/terraform/init-template.sh
new file mode 100644
--- /dev/null
+++ b/proxmox/terraform/init-template.sh
@@ -0,0 +1,37 @@
+#!/usr/bin/env bash
+
+set -x
+set -e
+
+VERSION=${1-"9"}
+NAME="template-debian-${VERSION}"
+IMG="debian-$VERSION/debian-$VERSION-openstack-amd64.qcow2"
+
+VM_ID="${VERSION}000"
+VM_DISK="vm-$VM_ID-disk-0"
+
+# create vm
+qm create $VM_ID --memory 4096 --net0 virtio,bridge=vmbr0 --name "$NAME"
+# import disk to orsay-ssd-2018 (lots of space there)
+qm importdisk $VM_ID $IMG orsay-ssd-2018 --format qcow2
+# finally attach the new disk to the VM as virtio drive
+qm set $VM_ID --scsihw virtio-scsi-pci --virtio0 "orsay-ssd-2018:$VM_DISK"
+# resizing the disk to add 30G (image size is 2G) would increase the clone
+# time, so we skip it:
+# qm resize 9000 virtio0 +30G
+# configure a cdrom drive which is used to pass the cloud-init data
+# to the vm
+qm set $VM_ID --ide2 orsay-ssd-2018:cloudinit
+# boot from disk only
+qm set $VM_ID --boot c --bootdisk virtio0
+# add a serial console (needed by cloud-init, otherwise it won't work)
+qm set $VM_ID --serial0 socket
+# set the number of sockets/cores
+qm set $VM_ID --sockets 2 --cores 1
+
+# cloud init temporary setup
+qm set $VM_ID --ciuser root
+qm set $VM_ID --ipconfig0 "ip=192.168.100.125/24,gw=192.168.100.1"
+qm set $VM_ID --nameserver "192.168.100.29"
+
+SSH_KEY_PUB="$HOME/.ssh/proxmox-ssh-key.pub"
+[ -f "$SSH_KEY_PUB" ] && qm set $VM_ID --sshkeys "$SSH_KEY_PUB"
diff --git a/proxmox/terraform/prepare-workstation.md b/proxmox/terraform/prepare-workstation.md
new file mode 100644
--- /dev/null
+++ b/proxmox/terraform/prepare-workstation.md
@@ -0,0 +1,26 @@
+This describes the tooling required for the rest of this documentation to
+work.
+
+# terraform-provider-proxmox
+
+Go module to install:
+
+```
+git clone https://github.com/Telmate/terraform-provider-proxmox
+cd terraform-provider-proxmox
+
+# compile terraform proxmox provider
+export GOPATH=`pwd`
+make setup
+make
+make install
+
+# install so that terraform actually sees the plugin
+mkdir -p ~/.terraform.d/plugins/linux_amd64
+cp -v ./bin/* ~/.terraform.d/plugins/linux_amd64/
+```
+
+At the end of this, `terraform init` within /proxmox/terraform/ should now
+work.
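+
+To verify the plugin is picked up, check that the provider binary is in place
+and that initialization succeeds:
+
+```
+ls ~/.terraform.d/plugins/linux_amd64/
+# then, from proxmox/terraform/ in this repository:
+terraform init
+```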
+
+Doc: https://github.com/Telmate/terraform-provider-proxmox/blob/master/README.md
+
diff --git a/proxmox/terraform/staging.tf b/proxmox/terraform/staging.tf
new file mode 100644
--- /dev/null
+++ b/proxmox/terraform/staging.tf
@@ -0,0 +1,170 @@
+# Keyword use:
+# - provider: Define the provider(s)
+# - data: Retrieve data information to be used within the file
+# - resource: Define resource and create/update
+
+provider "proxmox" {
+ pm_tls_insecure = true
+ pm_api_url = "https://orsay.internal.softwareheritage.org:8006/api2/json"
+ # in a shell (see README): source ./setup.sh
+}
+
+# `pass search terraform-proxmox` in credential store
+variable "ssh_key_data" {
+ type = "string"
+ default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVKCfpeIMg7GS3Pk03ZAcBWAeDZ+AvWk2k/pPY0z8MJ3YAbqZkRtSK7yaDgJV6Gro7nn/TxdJLo2jEzzWvlC8d8AEzhZPy5Z/qfVVjqBTBM4H5+e+TItAHFfaY5+0WvIahxcfsfaq70MWfpJhszAah3ThJ4mqzYaw+dkr42+a7Gx3Ygpb/m2dpnFnxvXdcuAJYStmHKU5AWGWWM+Fm50/fdMqUfNd8MbKhkJt5ihXQmZWMOt7ls4N8i5NZWnS9YSWow8X/ENOEqCRN9TyRkc+pPS0w9DNi0BCsWvSRJOkyvQ6caEnKWlNoywCmM1AlIQD3k4RUgRWe0vqg/UKPpH3Z root@terraform"
+}
+
+variable "user_admin" {
+ type = "string"
+ default = "root"
+}
+
+variable "domain" {
+ type = "string"
+ default = "internal.staging.swh.network"
+}
+
+variable "puppet_environment" {
+ type = "string"
+ default = "new_staging"
+}
+
+variable "puppet_master" {
+ type = "string"
+ default = "pergamon.internal.softwareheritage.org"
+}
+
+variable "dns" {
+ type = "string"
+ default = "192.168.100.29"
+}
+
+variable "gateway_ip" {
+ type = "string"
+ default = "192.168.128.1"
+}
+
+resource "proxmox_vm_qemu" "gateway" {
+ name = "gateway"
+ desc = "staging gateway node"
+ # hypervisor onto which to create the vm
+ target_node = "orsay"
+ # See init-template.md to see the template vm bootstrap
+ clone = "template-debian-9"
+ # linux kernel 2.6 and newer
+ qemu_os = "l26"
+ # generic setup
+ sockets = 1
+ cores = 1
+ memory = 1024
+ # boot machine when the hypervisor starts
+ onboot = true
+ #### cloud-init setup
+ # set some os-specific information depending on os_type (possible values:
+ # ubuntu, centos, cloud-init). Keep this as cloud-init
+ os_type = "cloud-init"
+ # ciuser - User name to change ssh keys and password for instead of the
+ # image’s configured default user.
+ ciuser = "${var.user_admin}"
+ ssh_user = "${var.user_admin}"
+ # searchdomain - Sets DNS search domains for a container.
+ searchdomain = "${var.domain}"
+ # nameserver - Sets DNS server IP address for a container.
+ nameserver = "${var.dns}"
+ # sshkeys - public ssh keys, one per line
+ sshkeys = "${var.ssh_key_data}"
+ # FIXME: When T1872 lands, this will need to be updated
+ # ipconfig0 - [gw=<GatewayIPv4>][,ip=<IPv4Format/CIDR>]
+ # ip to communicate for now with the prod network through louvre
+ ipconfig0 = "ip=192.168.100.125/24,gw=192.168.100.1"
+ # vms from the staging network will use this vm as gateway
+ ipconfig1 = "ip=${var.gateway_ip}/24"
+ disk {
+ id = 0
+ type = "virtio"
+ storage = "orsay-ssd-2018"
+ storage_type = "ssd"
+ size = "20G"
+ }
+ network {
+ id = 0
+ model = "virtio"
+ bridge = "vmbr0"
+ macaddr = "6E:ED:EF:EB:3C:AA"
+ }
+ network {
+ id = 1
+ model = "virtio"
+ bridge = "vmbr0"
+ macaddr = "FE:95:CC:A5:EB:43"
+ }
+ # At the end of the provisioning, delegate the software setup to puppet
+ provisioner "remote-exec" {
+ inline = [
+ "sysctl -w net.ipv4.ip_forward=1",
+ # make it persistent
+ "sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf",
+ # add route to louvre (the persistence part is done through puppet)
+ "iptables -t nat -A POSTROUTING -s 192.168.128.0/24 -o eth0 -j MASQUERADE",
+ "sed -i 's/127.0.1.1/${var.gateway_ip}/g' /etc/hosts",
+ "puppet agent --server ${var.puppet_master} --environment=${var.puppet_environment} --waitforcert 60 --test || echo 'Node provisionned!'",
+ ]
+ }
+}
+
+resource "proxmox_vm_qemu" "storage" {
+ name = "storage0"
+ desc = "swh storage node"
+ # hypervisor onto which to create the vm
+ target_node = "orsay"
+ # See init-template.md to see the template vm bootstrap
+ clone = "template-debian-9"
+ # linux kernel 2.6 and newer
+ qemu_os = "l26"
+ # generic setup
+ sockets = 1
+ cores = 2
+ memory = 8192
+ # boot machine when the hypervisor starts
+ onboot = true
+ #### cloud-init setup
+ # set some os-specific information depending on os_type (possible values:
+ # ubuntu, centos, cloud-init). Keep this as cloud-init
+ os_type = "cloud-init"
+ # ciuser - User name to change ssh keys and password for instead of the
+ # image’s configured default user.
+ ciuser = "${var.user_admin}"
+ # searchdomain - Sets DNS search domains for a container.
+ searchdomain = "${var.domain}"
+ # nameserver - Sets DNS server IP address for a container.
+ nameserver = "${var.dns}"
+ # sshkeys - public ssh keys, one per line
+ sshkeys = "${var.ssh_key_data}"
+ # ipconfig0 - [gw=<GatewayIPv4>][,ip=<IPv4Format/CIDR>]
+ ipconfig0 = "ip=192.168.128.2/24,gw=${var.gateway_ip}"
+ ssh_user = "${var.user_admin}"
+ disk {
+ id = 0
+ type = "virtio"
+ storage = "orsay-ssd-2018"
+ storage_type = "ssd"
+ size = "32G"
+ }
+ network {
+ id = 0
+ model = "virtio"
+ bridge = "vmbr0"
+ macaddr = "CA:73:7F:ED:F9:01"
+ }
+
+ # At the end of the provisioning, delegate the software setup to puppet
+ provisioner "remote-exec" {
+ inline = [
+ "sed -i 's/127.0.1.1/192.168.128.2/g' /etc/hosts",
+ "puppet agent --server ${var.puppet_master} --environment=${var.puppet_environment} --waitforcert 60 --test || echo 'Node provisionned!'",
+ ]
+ }
+ # this dependency has to be stated explicitly, as there is no way to introspect the gateway's ip
+ depends_on = ["proxmox_vm_qemu.gateway"]
+}
Attached to D1762: Provision staging vms through terraform (up to the first puppet run)