Details
- Reviewers: olasd
- Group Reviewers: Reviewers
- Maniphest Tasks: T2606: Test puppet configuration in a local vagrant environment
Diff Detail
- Repository: rSENV Puppet Environment
- Branch: master
- Lint: No Linters Available
- Unit: No Unit Test Coverage
- Build Status: Buildable 15908, Build 24483: arc lint + arc unit
Event Timeline
With the way we deploy puppet changes, environments map *exactly* to branches of the swh-site repository, so this should really be `--from-environment $FROM --to-environment $TO` rather than adding another argument to the script.
And I think that's where the change in D4147 really breaks down (and what made me feel that we couldn't do it in the first place): when you want to test a change to the variables specific to an environment, in practice, as you're doing that in a new branch, you would have to create a new file in the data/environments/ hierarchy every time (since branching out means creating a new environment), and then you'd need to fold the changes back into the target environment's file before merging.
In our current deployment, the "location" variable is a bit redundant with the "environment" one but, in effect, that's because we're overloading the "puppet meaning" of the term environment with our own "deployment environment".
- puppet environments exist solely to run different versions of the puppet code; we could rename them to default and devel rather than production and staging, and have all hosts on the default puppet environment (regardless of the intent of their deployment), unless we really want to test an upgrade of the puppet code on them.
- locations are the different areas in which we're deploying code, each with its own specifics.
Now, we could split the location variable/fact into two:
- a subnet variable (values: azure_euwest, sesi_rocquencourt_prod, sesi_rocquencourt_staging, vagrant), which would set the "network-related" settings such as DNS resolvers, NTP servers, which subnet is the "local" one, etc.
- a deployment_mode (deployment_zone? deployment?) variable (values: production, staging), setting the default values for which hosts should be contacted for what purpose.
We could have overrides for certain subnets in certain deployment modes (e.g. to point production hosts on azure to local mirrors of things).
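As a sketch of what such an override could look like (the file path follows the proposed per-deployment-per-subnet layout; the key name and mirror URL are purely hypothetical), production hosts on Azure could be pointed at a local mirror like this:

```yaml
# Hypothetical facts for a production worker on Azure:
#   subnet: azure_euwest
#   deployment: production

# deployments/production/azure_euwest.yaml -- only applies to hosts that
# are both in the production deployment and on the azure_euwest subnet
# (key name and mirror URL are made up for this example):
debian::mirror_url: "http://mirror.euwest.azure.internal/debian/"
```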
I'm envisioning the following hiera hierarchy:
```yaml
hierarchy:
  - path: "private/hostname/%{trusted.certname}.yaml"
    name: "Per hostname private credentials"
  - path: "hostname/%{trusted.certname}.yaml"
    name: "Per hostname configuration"
  - path: "private/deployments/%{::deployment}/%{::subnet}.yaml"
    name: "Per deployment and subnet credentials"
  - path: "deployments/%{::deployment}/%{::subnet}.yaml"
    name: "Per deployment and subnet configuration"
  - path: "private/deployments/%{::deployment}/common.yaml"
    name: "Per deployment private credentials"
  - path: "deployments/%{::deployment}/common.yaml"
    name: "Per deployment common configuration"
  - path: "private/subnets/%{::subnet}.yaml"
    name: "Per subnet private credentials"
  - path: "subnets/%{::subnet}.yaml"
    name: "Per subnet configuration"
  - path: "private/common.yaml"
    name: "Common private credentials"
  - path: "common/*.yaml"
    name: "Common configuration"
```
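For a concrete (entirely hypothetical) example of how a lookup would resolve through that hierarchy, assume a key defined at two levels:

```yaml
# deployments/staging/common.yaml -- deployment-wide value
# (key name and hostnames are made up for this example):
swh::deploy::db_host: "db.internal.staging.swh.network"

# common/defaults.yaml -- fallback for all hosts:
swh::deploy::db_host: "db.internal.softwareheritage.org"
```

Since hiera walks the hierarchy top to bottom and, for plain lookups, returns the first value found, staging hosts would get the deployment-level value while all other hosts fall through to the common default.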
I'll submit a diff to implement this (superseding D4147, then).
Awesome, thanks for the explanation.
I think it's getting clearer in my head now.
> that's because we're overloading the "puppet meaning" of the term environment with our own "deployment environment".
Yes, that's most probably the source of my confusion so far.
@vsellier D4149#103128 ^ heads up, I think you'd like this clarification + the plan to improve the situation ;)