diff --git a/README.md b/README.md
index dc3ee98..4adb364 100644
--- a/README.md
+++ b/README.md
@@ -1,170 +1,304 @@
Software Heritage Puppet environment
====================================

This repository contains the metadata for Software Heritage's puppet
infrastructure git repositories. The repositories are managed using
[myrepos][1] (see the .mrconfig file) and the `mr` command.

[1]: http://myrepos.branchable.com/

As our .mrconfig file contains "untrusted" checkout commands (to set up the
upstream repositories of our mirrors of third-party modules), you need to add
the .mrconfig file to your ~/.mrtrust file:

    readlink -f .mrconfig >> ~/.mrtrust

You can then check out the repositories using `mr up`.

For periodic updates after the initial setup, you can use the `bin/update`
helper:

    cd puppet-environment
    bin/update

Module Layout
-------------

We use dynamic environments, and the roles/profiles puppet workflow, as
demonstrated by the following series of articles:

- http://garylarizza.com/blog/2014/02/17/puppet-workflow-part-1/
- http://garylarizza.com/blog/2014/02/17/puppet-workflow-part-2/
- http://garylarizza.com/blog/2014/02/18/puppet-workflow-part-3/
- http://garylarizza.com/blog/2014/03/07/puppet-workflow-part-3b/
- http://garylarizza.com/blog/2014/10/24/puppet-workflows-4-using-hiera-in-anger/

Our main manifests live in the `swh-site` repository. Each branch of that
repository corresponds to an environment in the puppet workflow presented
above. The repository contains the Puppetfile referencing all the modules,
the main manifest file `manifests/site.pp`, and the hiera `data` directory,
as well as the two "site-modules" for `profile`s and `role`s.

Our setup mirrors the git repositories of third-party Puppet modules on the
Software Heritage git server, so that deployment does not rely on third-party
*hosting* services. We add an upstream remote to the repositories through our
mr configuration.

Deployment
----------

Deployment happens on the `pergamon.softwareheritage.org` server, using our
custom deployment script:

    sudo /etc/puppet/environments/production/deploy.sh

This updates the dynamic environments according to the contents of the
branches of the git repository, and of the Puppetfile inside them. For each
third-party module, we pin the module definition in the Puppetfile to a
specific tag or revision. Our deploy script also fetches private repositories
and merges them with the public r10k setup.

Local puppet manifest diffing with octocatalog-diff
---------------------------------------------------

puppet-environment contains the whole scaffolding needed to use
[octocatalog-diff][2] on our manifests. This allows for quick(er) local
iterations while developing complex puppet manifests.

[2]: https://github.com/github/octocatalog-diff

### Dependencies

You need the following packages installed on your machine:

    r10k
    octocatalog-diff
    puppet

### Running

The `bin/octocatalog-diff` script allows diffing the manifests between two
environments (that is, between two branches of the *swh-site* repository). By
default, it diffs between production and staging.

Default usage:

    bin/octocatalog-diff pergamon

### Limitations

Our setup for octocatalog-diff doesn't support exported resources, so you
won't see your fancy icinga checks there.

Integration of third party puppet modules
-----------------------------------------

We mirror external repositories to our own forge, to avoid having external
dependencies in our deployment. In the `swh-site/Puppetfile`, we pin the
installation of those modules to the highest version that works with our
current puppet/facter versions, using the `:ref` specifier.
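For illustration, a pinned module entry in the `Puppetfile` looks something
like the following sketch (the module name, mirror URL and tag are
illustrative, not an actual entry):

    # pin our mirror of a third-party module to a specific release tag
    mod 'nginx',
        :git => 'https://forge.softwareheritage.org/source/puppet-puppet-nginx.git',
        :ref => 'v1.1.0'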
### Adding a new external puppet module

In the *puppet-environment* repository, the `bin/import-puppet-module` script
takes care of the following tasks:

* Getting metadata from the Puppet forge for the module (description,
  upstream git URL)
* Cloning the repository
* Creating a mirror repository on the Software Heritage forge, with the
  proper permissions and metadata (notably the "Sync to GitHub" flag)
* Pushing the clone to the forge
* Updating the `.mrconfig` and `.gitignore` files to register the new
  repository

To be able to use the script, you need to:

* Be a member of the System Administrators Phabricator group
* Have your Arcanist API key set up
* Have a pair of python dependencies installed: `python3-phabricator` and
  `python3-requests` (pull them from testing if needed)

Example usage to pull the `elastic/elasticsearch` module:

    bin/import-puppet-module elastic-elasticsearch
    git diff  # review changes
    git add .mrconfig .gitignore
    git commit -m "Add the elastic/elasticsearch module"
    git push

Once the module is added, you need to register it in `swh-site/Puppetfile`.
You should also check in the module metadata whether any dependencies need
importing as well, which you should do using the same procedure.

### Updating external puppet modules

There are two sides to this coin:

#### Updating our git clones of external puppet modules

The *puppet-environment* `.mrconfig` file has a pullup command which does the
right thing. To update all clones:

    mr -j4 pullup

#### Upgrading external puppet modules

Upgrading external puppet modules happens manually. In the
*puppet-environment* repository, the `bin/check-module-updates` script
compares the Puppetfile and the local clones, and lists the available updates
(it depends on ruby and `r10k`).

On a staging branch of the *swh-site* repository, update the `:ref` value for
the module in the `Puppetfile` to the latest tag. You can then run
`octocatalog-diff` on a few relevant servers and look for changes, as in the
session sketched below.
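A typical upgrade session might look like the following sketch (the module
name, tag, output format and server name are illustrative):

    you@localhost$ bin/check-module-updates          # in puppet-environment
    nginx: v1.1.0 -> v1.2.0
    you@localhost$ $EDITOR Puppetfile                # in swh-site, on a staging branch: bump :ref to v1.2.0
    you@localhost$ git commit -m "Bump the nginx module to v1.2.0" Puppetfile
    you@localhost$ bin/octocatalog-diff webapp0      # review the resulting catalog changes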
Deploy workflow
---------------

### Semi-automated

    you@localhost$ # hack on puppet Git repo
    you@localhost$ rake validate
    you@localhost$ git commit
    you@localhost$ git push
    you@localhost$ cd puppet-environment
    you@localhost$ bin/deploy-on machine1 machine2...

Remember to pass `--apt` to `bin/deploy-on` if freshly uploaded Software
Heritage packages are to be deployed. Also, `bin/deploy-on --help` is your
friend.

### Manual

    you@localhost$ # hack on puppet Git repo
    you@localhost$ rake validate
    you@localhost$ git commit
    you@localhost$ git push
    you@pergamon$ sudo swh-puppet-master-deploy
    you@machine$ sudo apt-get update    # if a new or updated version of a Debian package needs deploying
    you@machine$ sudo swh-puppet-test   # to test/review changes
    you@machine$ sudo swh-puppet-apply  # to apply
+
+Local tests with vagrant
+------------------------
+
+> **_NOTE_**: The vagrant configuration uses a public image generated with
+> packer. See the dedicated readme[1] in the packer directory for more
+> information on how to manage this image.
+
+### Setup
+
+Vagrant and Virtualbox must be installed. On a debian based environment:
+
+```
+# 2020-09-17 vagrant is not working with virtualbox 6.1
+apt install vagrant virtualbox-6.0
+# An additional plugin must be installed to manage the virtualbox addons in the vms
+vagrant plugin install vagrant-vbguest
+```
+
+### Usage
+
+#### Prepare the puppet environment
+
+The puppet directory structure needs to be prepared before starting a vm.
+This can be done with the ``bin/prepare-vagrant-conf`` script. The script
+must be re-run after each new commit, to refresh the code applied on the vms.
+
+The working directory is ``/tmp/puppet``.
+
+```
+bin/prepare-vagrant-conf [-b branch]
+```
+
+The ``-b`` parameter creates an additional puppet environment based on the
+branch of the same name. This allows testing changes on feature branches
+(the ``environment`` variable in the Vagrantfile has to be updated
+accordingly), as in the sketch below.
+
+**_NOTE_**: This command only uses the **committed files**. Pending changes
+will not be included in the configuration.
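+For example, to test a hypothetical feature branch named `my-feature` (the
+branch name is illustrative):
+
+```
+# build an extra puppet environment from the feature branch
+bin/prepare-vagrant-conf -b my-feature
+# then set environment="my-feature" in the Vagrantfile before provisioning
+```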
+(**to be confirmed**) By convention, the vagrant names are prefixed by the
+environment for the core archive servers:
+* staging-webapp0
+* staging-worker0
+* staging-db0
+* production-worker0
+* production-db0
+* ...
+
+#### Status
+
+The status of all the vms declared in the Vagrantfile can be checked with:
+
+```
+vagrant status
+```
+
+Example:
+```
+$ vagrant status
+Current machine states:
+
+staging-webapp            running (virtualbox)
+staging-worker0           running (virtualbox)
+prod-worker01             not created (virtualbox)
+test                      poweroff (virtualbox)
+
+This environment represents multiple VMs. The VMs are all listed
+above with their current state. For more information about a specific
+VM, run `vagrant status NAME`.
+```
+
+#### Start a vm
+
+```
+vagrant up <name>
+```
+
+For example, to start the staging worker0:
+
+```
+$ # update the configuration
+$ bin/prepare-vagrant-conf
+$ # start the vm
+$ vagrant up staging-worker0
+```
+
+The first time a vm is launched, vagrant applies the puppet configuration to
+initialize the server.
+
+#### Apply the puppet configuration
+
+To speed up the tests, a script can be used to synchronize the changes made
+in the working directories to the puppet configuration directories used by
+vagrant. This avoids having to commit each change before being able to test
+it.
+
+In one terminal:
+```
+bin/prepare-vagrant-conf
+bin/watch-vagrant-conf
+```
+
+In another terminal:
+```
+vagrant provision <name>
+```
+
+This way, the configuration is always kept up to date with the local
+directories.
+
+> **_NOTE_**: This works for simple changes to the swh-site and data
+> configurations. For other changes, like declaring a new puppet module,
+> ``prepare-vagrant-conf`` must be re-run to completely rebuild the
+> configuration.
+
+#### Connect to a vm
+
+A shell can be opened on a running vm with:
+
+```
+vagrant ssh <name>
+```
+
+#### Stop a vm
+
+```
+vagrant halt <name>
+```
+
+#### Update the configuration
+
+If the Vagrantfile is updated to change a property of a server, like the
+memory, cpu or network configuration, the vm has to be reloaded:
+
+```
+vagrant reload <name>
+```
+
+#### Cleanup
+
+To recover disk space, the vms can be destroyed with:
+
+```
+vagrant destroy <name>
+```
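+A complete local test session on the staging worker therefore looks something
+like this (a recap of the commands above; the vm name comes from the
+Vagrantfile):
+
+```
+bin/prepare-vagrant-conf
+vagrant up staging-worker0
+# hack on the manifests, commit, then re-apply:
+bin/prepare-vagrant-conf
+vagrant provision staging-worker0
+# when done:
+vagrant halt staging-worker0
+```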
=> "vagrant", + "location" => "vagrant" + } + end + end +end + +################ +## ADMIN +################ +Vagrant.configure("2") do |config| + config.vm.define :"admin-inventory" do |config| + + # config.ssh.insert_key = false + + config.vm.box = $global_debian10_box + config.vm.box_url = $global_debian10_box_url + config.vm.box_check_update = false + config.vm.hostname = "inventory.internal.softwareheritage.org" + config.vm.network :private_network, ip: "10.168.101.5", netmask: "255.255.255.0" + + config.vm.synced_folder "/tmp/puppet/", "/tmp/puppet", type: 'nfs' + + config.vm.provider "virtualbox" do |vb| + vb.gui = false + vb.check_guest_additions = false + vb.linked_clone = true + vb.customize ["modifyvm", :id, "--memory", "512", "--name", "admin-inventory", "--cpus", "2", "--vram", "256"] + end + config.vm.provision "puppet" do |puppet| + puppet.environment_path = "/tmp/puppet/environments" + puppet.environment = "#{environment}" + puppet.hiera_config_path = "#{puppet.environment_path}/#{puppet.environment}/hiera-vagrant.yaml" + puppet.manifest_file = "site.pp" + puppet.manifests_path = "swh-site/manifests" + puppet.options = "--verbose" + # puppet.options = "--verbose --debug" + # puppet.options = "--verbose --debug --trace" + puppet.facter = { + "vagrant_testing" => "1", + "testing" => "vagrant", + "location" => "vagrant" + } + end + end +end + +################ +## PRODUCTION +################ +Vagrant.configure("2") do |config| + config.vm.define :"prod-worker01" do |config| + + config.vm.box = $global_debian10_box + config.vm.box_url = $global_debian10_box_url + config.vm.hostname = "worker01.softwareheritage.org" + config.vm.network :private_network, ip: "10.168.100.21", netmask: "255.255.255.0" + + config.vm.synced_folder "/tmp/puppet/", "/tmp/puppet", type: 'nfs' + + config.vm.provider "virtualbox" do |vb| + vb.gui = false + vb.check_guest_additions = false + vb.linked_clone = true + vb.customize ["modifyvm", :id, "--memory", "4096", "--name", "worker01", "--cpus", "2", "--vram", "256"] + end + + config.vm.provision "puppet" do |puppet| + puppet.environment_path = "/tmp/puppet/environments" + puppet.environment = "#{environment}" + puppet.hiera_config_path = "#{puppet.environment_path}/#{puppet.environment}/hiera-vagrant.yaml" + puppet.manifest_file = "site.pp" + puppet.manifests_path = "swh-site/manifests" + puppet.options = "--verbose" + # puppet.options = "--verbose --debug" + # puppet.options = "--verbose --debug --trace" + puppet.facter = { + "vagrant_testing" => "1", + "testing" => "vagrant", + "location" => "vagrant" + } + end + end +end + +################ +## MISC +################ +Vagrant.configure("2") do |config| + config.vm.define :test do |config| + + config.ssh.insert_key = false + + config.vm.box = $global_debian10_box + config.vm.box_url = $global_debian10_box_url + config.vm.box_check_update = false + config.vm.hostname = "test.softwareheritage.org" + config.vm.network :private_network, ip: "10.168.100.30", netmask: "255.255.255.0" + config.vm.network :private_network, ip: "10.168.101.30", netmask: "255.255.255.0" + config.vm.network "forwarded_port", guest: 10030, host: 22 + + config.vm.synced_folder "/tmp/puppet/", "/tmp/puppet", type: 'nfs' + + config.vm.provider "virtualbox" do |vb| + vb.gui = false + vb.check_guest_additions = false + vb.linked_clone = true + vb.customize ["modifyvm", :id, "--memory", "512", "--name", "test", "--cpus", "2", "--vram", "256"] + end + config.vm.provision "puppet" do |puppet| + puppet.environment_path = 
"/tmp/puppet/environments" + puppet.environment = "vagrant" + puppet.hiera_config_path = "#{puppet.environment_path}/#{puppet.environment}/hiera-vagrant.yaml" + puppet.manifest_file = "site.pp" + puppet.manifests_path = "swh-site/manifests" + puppet.options = "--verbose" + # puppet.options = "--verbose --debug" + # puppet.options = "--verbose --debug --trace" + puppet.facter = { + "vagrant_testing" => "1", + "testing" => "vagrant", + "location" => "vagrant" + } + end + end +end diff --git a/bin/prepare-vagrant-conf b/bin/prepare-vagrant-conf new file mode 100755 index 0000000..fbbb965 --- /dev/null +++ b/bin/prepare-vagrant-conf @@ -0,0 +1,60 @@ +#!/bin/bash + +set -eu + +PUPPET_ENV=$(readlink -f $(dirname $0)/..) +OCD_BASE="${PUPPET_ENV}/octocatalog-diff" +FACTS_DIR="${OCD_BASE}/facts" + +function usage { + echo "usage: $0 [-b/--branch branch]" +} + +CLEAN_TMPDIR=true +USE_REMOTE_REPOS=false +FROM=production +TO=staging +HOSTS=() +OCTOCATALOG_DIFF_ARGS= +R10K_ARGS= +CLEAN_TMPDIR=false +UNCOMMITTED=false + +while (( "$#" )); do + case "$1" in + -b|--branch) + BRANCH=$2 + shift + ;; + -u|-uncommited) + UNCOMMITTED=true + ;; + *) + echo u + ;; + esac + shift +done + +TMPDIR=/tmp/puppet +mkdir -p /tmp/puppet +rm -rf /tmp/puppet/environments/ + +function template { + sed -e "s#@PUPPET_ENV@#${PUPPET_ENV}#g" -e "s#@TMPDIR@#${TMPDIR}#g" $1 > $2 +} + +# R10k config +template $OCD_BASE/r10k.yaml $TMPDIR/r10k.yaml +# override git configuration to clone from local repositories instead of the forge +template $OCD_BASE/gitconfig $TMPDIR/.gitconfig + +HOME=$TMPDIR r10k deploy environment -c $TMPDIR/r10k.yaml --puppetfile $R10K_ARGS production staging ${BRANCH} + +unknown_branch=false +git clone $PUPPET_ENV/private/swh-private-data-censored $TMPDIR/environments/production/data/private +git clone $PUPPET_ENV/private/swh-private-data-censored $TMPDIR/environments/staging/data/private + +if [ ! -z "${BRANCH}" ]; then + git clone $PUPPET_ENV/private/swh-private-data-censored $TMPDIR/environments/$BRANCH/data/private +fi diff --git a/bin/watch-vagrant-conf b/bin/watch-vagrant-conf new file mode 100755 index 0000000..c80bd15 --- /dev/null +++ b/bin/watch-vagrant-conf @@ -0,0 +1,30 @@ +#!/bin/bash + +set -e + +PUPPET_ENV=$(readlink -f $(dirname $0)/..) + +RSYNC_PARAMS="-aP" + +EXCLUDES=".git .vagrant" +EXCLUDES_PARAMS="" + +for ex in ${EXCLUDES}; do + EXCLUDES_PARAMS="${EXCLUDES_PARAMS} --exclude $ex" +done + +function sync_puppet_conf { + rsync $RSYNC_PARAMS $EXCLUDES_PARAMS swh-site/data/ /tmp/puppet/environments/vagrant/data + rsync $RSYNC_PARAMS $EXCLUDES_PARAMS swh-site/site-modules/ /tmp/puppet/environments/vagrant/site-modules/ + rsync $RSYNC_PARAMS $EXCLUDES_PARAMS private/swh-private-data-censored/ /tmp/puppet/environments/vagrant/data/private +} + +# Initial sync +sync_puppet_conf + +while true; do + inotifywait -q -r -e modify -e moved_to -e moved_from -e move -e create $EXCLUDES_PARAMS ${PUPPET_ENV} + echo Update detecting, synchronizing.... + sleep .5 + sync_puppet_conf +done diff --git a/packer/README.md b/packer/README.md index de1fc24..64412da 100644 --- a/packer/README.md +++ b/packer/README.md @@ -1,144 +1,143 @@ Packer usage ============ packer[1] is used to generate the virtualbox[2] images used to locally simulate the different servers and to test the puppet configuration and the service deployments. Setup ----- Packer and virtualbox tools are needed to create the base image. -An additional plugin must be installed to manage the virtualbox addons in the vms. 
On debian(10):

```
-apt install vagrant virtualbox-6.0 # 2020-09-17 vagrant is not working with virtualbox 6.1
+apt install packer virtualbox-6.0 # 2020-09-17 vagrant is not working with virtualbox 6.1
 vagrant plugin install vagrant-vbguest
```

Generate a new virtualbox image
-------------------------------

### Configuration description

For the debian buster image, these files are used:

* `debian_buster.json`: the configuration entrypoint, describing the tasks
  packer will execute to generate the image
* `http/buster-preseed.cfg`: the debian preseed file used to drive the
  installation. Debian loads it through an http server started by packer
  during the build.
* `scripts/post-install.sh`: post-install script containing the tasks needed
  to make the vms ready to apply the puppet configuration (install puppet,
  manage vagrant's user key, ...)

### Build the image

To build an image, run this command in the current directory:

```
packer build <template file>
```

For example, to build or rebuild the debian buster image:

```
packer build debian_buster.json
```

**WARNING**: virtualbox opens the vm's console during the build. Don't
interact with it, to avoid interfering with the packer execution.

This command executes the following process:

* Create a new VM in virtualbox and boot it with the iso image defined in the
  ``iso_image`` parameter
* Simulate keyboard interactions to enter the ``boot_command``, which
  basically tells debian to start the installation based on the
  ``buster-preseed.cfg`` file
* Call one or several provisioners after the installation to fine-tune the
  result. For our needs, only the ``scripts/post-install.sh`` script is
  executed
* Package the image into a vbox file directly usable by virtualbox, and place
  it in the ``builds`` directory

### Publish the image

The image must be published on the public annex site[3] to be usable by
people other than the person who built it. The images are published in the
``/isos/virtualbox/debian``[4] directory.

The ``git-annex`` usage is documented on the intranet[5].
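A publication session might look something like the following sketch (the
paths and box name are illustrative; refer to the intranet documentation for
the actual procedure):

```
cd annex-public/isos/virtualbox/debian
cp ~/packer/builds/swh-debian-10.5-amd64-<date>.box .
git annex add swh-debian-10.5-amd64-<date>.box
git commit -m "Add the debian 10.5 vagrant box"
git annex sync --content
```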
Once the new image is published, the ``Vagrantfile``[6] can be updated to
declare it (``$global_debian10_box`` and ``$global_debian10_box_url``
properties).

[1]: https://www.packer.io
[2]: https://www.virtualbox.org
[3]: https://annex.softwareheritage.org/public
[4]: https://forge.softwareheritage.org/source/annex-public/browse/master/isos/virtualbox/debian/
[5]: https://intranet.softwareheritage.org/wiki/Git_annex
[6]: https://forge.softwareheritage.org/source/puppet-environment/browse/master/Vagrantfile

Annex
-----

### Generate a preseed file

To prepare the installation of a new debian version, it can be useful to
generate a preseed file from a reference installation:

* install the new version on a vm
* execute the following commands:

```
apt update
apt install curl debconf debconf-utils
debconf-get-selections --installer > /tmp/preseed.cfg
debconf-get-selections >> /tmp/preseed.cfg
```

The generated preseed file must then be adapted to specify the user passwords
and the partitioning, which are apparently not included. For buster, the
following lines were added:

```
d-i pkgsel/include string puppet openssh-server apt-transport-https

# Whether to upgrade packages after debootstrap.
# Allowed values: none, safe-upgrade, full-upgrade
d-i pkgsel/upgrade select full-upgrade

# Root password, either in clear text
d-i passwd/root-password password rootroot
d-i passwd/root-password-again password rootroot

# To create a normal user account.
d-i passwd/username string vagrant
# Normal user's password, either in clear text
d-i passwd/user-password password vagrant
d-i passwd/user-password-again password vagrant
d-i passwd/user-fullname string Vagrant
# Create the first user with the specified UID instead of the default.
d-i passwd/user-uid string 999
# The user account will be added to some standard initial groups. To
# override that, use this.
d-i passwd/user-default-groups string audio cdrom video sudo

### Partitioning
d-i partman-auto/init_automatically_partition select biggest_free
#d-i partman-auto/disk string /dev/vda
d-i partman-auto/method string lvm
# Keep some space on the lvm volume to play with snapshots
d-i partman-auto-lvm/guided_size string 90%
# If one of the disks that are going to be automatically partitioned
# contains an old LVM configuration, the user will normally receive a
# warning. This can be preseeded away...
d-i partman-lvm/device_remove_lvm boolean true
# The same applies to pre-existing software RAID array:
d-i partman-md/device_remove_md boolean true
# And the same goes for the confirmation to write the lvm partitions.
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto-lvm/no_boot boolean true
d-i partman-auto/choose_recipe select atomic
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true

d-i apt-setup/cdrom/set-first boolean false
d-i apt-setup/cdrom/set-next boolean false
d-i apt-setup/cdrom/set-failed boolean false
d-i mirror/country string manual
d-i mirror/http/hostname string http.fr.debian.org
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string
d-i apt-setup/use_mirror boolean false

popularity-contest popularity-contest/participate boolean false
```

It's important that the vagrant user doesn't have a UID of 1000, as puppet
will try to create a user with a uid/gid of 1000/1000.