Jan 8 2023
Oct 19 2022
Oct 3 2022
Sep 14 2022
Jul 4 2022
May 2 2022
Mar 3 2022
Feb 25 2022
Apr 16 2021
Apr 9 2021
Mar 18 2021
Sep 28 2020
From there it was just a matter of clicking through all hosts; they're now using the dedicated vlan.
I've added a bridge vmbr443 to all hypervisors.
I wanted to rename the bridges on the proxmox hosts to something clearer (like vmbr-staging) but it turns out that proxmox only supports bridges named /vmbr\d+/. Ugh.
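That naming constraint can be checked mechanically. A minimal shell sketch (the helper name is hypothetical, not part of any Proxmox tooling) accepting only names matching Proxmox's /vmbr\d+/ pattern:

```shell
is_valid_pve_bridge() {
  # Proxmox accepts only bridge names of the form vmbr<digits>.
  printf '%s\n' "$1" | grep -qE '^vmbr[0-9]+$'
}

is_valid_pve_bridge vmbr443 && echo "vmbr443: accepted"
is_valid_pve_bridge vmbr-staging || echo "vmbr-staging: rejected"
```

So a descriptive name like vmbr-staging is out; the number in vmbr443 has to carry the meaning (here, the staging VLAN id).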
Sep 23 2020
Sep 2 2020
Sep 1 2020
Jul 21 2020
Jun 9 2020
(the VLAN id for the staging vlan is 443).
In T1872#44251, @rdicosmo wrote: Is this now done? If that's the case this ticket should be closed.
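For reference, on a Debian/Proxmox hypervisor the vmbr443 bridge over VLAN 443 might be declared along these lines in /etc/network/interfaces (a sketch only; the physical interface name eno1 is an assumption, not from this ticket):

```
auto eno1.443
iface eno1.443 inet manual

auto vmbr443
iface vmbr443 inet manual
    bridge-ports eno1.443
    bridge-stp off
    bridge-fd 0
```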
May 28 2020
May 9 2020
Is this now done? If that's the case this ticket should be closed.
Apr 24 2020
Feb 4 2020
Jan 22 2020
Sep 10 2019
A priori, everything is done on sesi's side, as requested by @olasd; excerpt from the last ticket entries:
Sep 6 2019
sesi asked for some more details about the hardware to propagate that vlan too.
I just replied, so hopefully, that should converge soon.
Aug 8 2019
Aug 7 2019
Most of the relevant commits use the 192.168.128.0/24 address space.
Aug 1 2019
An email asking for more details or clarification has been received.
P485 is the draft for the answer.
I'll rework the git history and split the diffs in multiple smaller ones.
Jul 31 2019
we really want this first staging vm to live in a dedicated /24 instead of prod's one (192.168.100.0/24),
- init-template: Update documentation about some more needed steps
- prepare-workstation: Fix instructions
- gateway: Instantiate the staging gateway
- storage0: Use the gateway for that node
- staging: Work around the puppet agent non standard exit code
- storage: Push dependency on gateway provisioning
Jul 30 2019
- staging: Add gateway node
- staging: Update macaddress to new one
Jul 29 2019
we really do not want this hardcoded MAC address,
Looks globally OK, but as discussed IRL:
- we really do not want this hardcoded MAC address,
- we really want this first staging vm to live in a dedicated /24 instead of prod's one (192.168.100.0/24),
- it would be nice to check whether these declared resources can be "templatized"; having to copy/paste this whole resource declaration for each and every VM we want to instantiate is the promise of a nightmare...
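The subnet separation point is easy to sanity-check: two /24 networks overlap only if their first three octets match. A small shell sketch (the helper name is hypothetical) comparing the staging space the later commits settled on (192.168.128.0/24) against prod's 192.168.100.0/24:

```shell
same_slash24() {
  # Two /24 networks overlap iff their first three octets are identical.
  [ "${1%.*}" = "${2%.*}" ]
}

same_slash24 192.168.128.0 192.168.100.0 && echo "overlap" || echo "disjoint"
```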
Jul 27 2019
- Rename storage.tf to staging.tf as this describes staging vms
- staging: Actually apply the first puppet run
Jul 26 2019
- storage: Update /etc/hosts and do not use the --certname flag
- storage.tf: Expose the hostname as variable for reuse
- storage.tf: Use sed to alter /etc/hosts
Jul 25 2019
- storage: Refactor into variables what will change
- storage: Delegate to puppet the provisioning
- storage: Refactor into variables what will change
Heads up:
terraform apply
ssh root@192.168.100.125 "puppet agent --server pergamon.internal.softwareheritage.org --test --noop --environment=new_staging --waitforcert 60 --certname storage0.internal.staging.swh.network"
- init-template: Update necessary steps for debian-9
- Use template-debian-9 as default to match production
- Docs: Improve phrasing sentence
Fix issues with debian 9 by using ssh connection to template
Jul 24 2019
Remove code not supposed to be committed
Remove in-progress work from diff
- storage: Update default hardware
- storage0: Drop the swh- prefix which is redundant in fqdn form
- init-template: Add default instructions adaptations to template
- Add work around current limitation in api-proxmox-go
- storage.tf: Format according to terraform conventions
- Reference workstation preparation so that it's reproducible
- variables: Use the actual root key referenced in the password store
- Add FIXME about the new vlan
- Update documentation to explain how to reproduce
Plug to production branch
This technically looks good, but from a security point of view, why put the secret "private" and "provenance-index" directories in a publicly accessible location?
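One quick way to audit that kind of exposure is to list world-readable entries under the served directory. A hedged shell sketch using a throwaway tree (none of these paths come from the actual deployment):

```shell
# Throwaway demo tree standing in for a web root; paths are hypothetical.
root=$(mktemp -d)
mkdir -p "$root/private"
chmod 750 "$root/private"        # secrets: no world access
touch "$root/index.html"
chmod 644 "$root/index.html"     # public content: world-readable

# List world-readable entries; secret directories should not appear.
find "$root" -maxdepth 1 -perm -o+r
```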