The test phase is complete. OPNsense seems to have reached consensus, with no blocking points.
Let's start the real implementation now.
- Queries
- All Stories
- Search
- Advanced Search
- Transactions
- Transaction Logs
Advanced Search
Oct 19 2020
formatting (fat finger)
formatting
formatting
rollback the network configuration commit (should be a new diff)
poc network configuration in markdown
Oct 16 2020
Oct 15 2020
There is a Proxmox builder [1] for Packer. I will give it a try to check whether we can benefit from the work done for Vagrant on Puppet and have a common base between the real VMs and the local VMs used for testing.
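As a rough sketch, a Packer JSON template using the Proxmox builder could look like the following. All values (URL, node name, ISO path, template name) are placeholders, and a real template would also need boot/provisioning settings:

```json
{
  "builders": [
    {
      "type": "proxmox",
      "proxmox_url": "https://proxmox.example.org:8006/api2/json",
      "username": "packer@pve",
      "password": "secret",
      "node": "pve1",
      "iso_file": "local:iso/debian-10.6.0-amd64-netinst.iso",
      "ssh_username": "root",
      "template_name": "debian-10-base"
    }
  ]
}
```

If this works, the same base image definition could feed both the Proxmox templates and the local test boxes.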
👍 it looks synchronized
Oct 14 2020
fix the wrong status change embedded with the previous comment
A Prometheus exporter is available as an additional plugin.
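Assuming the plugin exposes a standard node-exporter-style endpoint (port and hostname below are assumptions, not tested values), scraping it would be a one-stanza addition to the Prometheus configuration:

```yaml
# Sketch: scrape the firewall's exporter plugin.
# Target hostname and port 9100 are assumptions.
scrape_configs:
  - job_name: opnsense
    static_configs:
      - targets: ['firewall.example.org:9100']
```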
The OpenVPN configuration supports a certificate authority and the CSR handling currently managed manually on louvre.
- IPSec / Azure configuration
I was not able to test the git backup plugin: it seems it's not yet released, and it doesn't appear in the installable plugin list.
The commit for version 1.0 was made 6 days ago: https://github.com/opnsense/plugins/commit/87c4c96fe1d1dc881f72f91ee67b6a84c9dea42a
I also tested with the development version of pfsense, but it does not appear there either.
HA was quite simple to configure using the documentation [1] and an additional blog post that helps with the NAT section, which is not very explicit in the official documentation [2].
It's recommended to have a dedicated network link between the two firewalls for the synchronization. In my tests, I configured the sync on the admin network (VLAN442). It works, but it's not the optimal configuration.
Oct 13 2020
Well, I'm setting this problem aside for the moment, as there is nothing special configured for the interface on VLAN1300 and I have no idea what the source of the problem could be. Perhaps the "illumination" will come later...
Having the WAN gateway declared on VLAN1330 works well.
Changing the default gateway to 128.93.166.62 forces declaring an additional route for the VPN connections (192.168.101.0/24 => gw 192.168.100.1).
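For reference, that extra route would be declared as a static route in the firewall configuration; the FreeBSD command-line equivalent (OPNsense and pfSense are FreeBSD-based) would be something like:

```
# Route VPN client traffic via the internal gateway
# (same networks as described above).
route add -net 192.168.101.0/24 192.168.100.1
```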
PFSense and OPNsense were tested.
Oct 12 2020
@olasd I looked at the swh-docs repository to store the sources of the diagrams as you suggested, but I'm not sure it's the best place for them, as the goal is not to display them on the doc site.
LGTM (not tested)
Thanks, it's really great.
I tested the qemutest VM locally and converted the staging-webapp and staging-deposit VMs; everything looks good.
The VirtualBox and libvirt networks (with the same IP range) can't coexist, but after a cleanup on the VirtualBox side, everything works as expected.
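The cleanup I did amounted to removing the stale VirtualBox host-only network; roughly (the interface name is an example, check the list output first):

```
# Inspect the configured host-only networks, then remove the
# one that overlaps with the libvirt network's IP range.
VBoxManage list hostonlyifs
VBoxManage hostonlyif remove vboxnet0
```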
Oct 8 2020
Thanks, no changes are detected by terraform after this diff
Oct 7 2020
rebase
Link to a diff, not a task
fix a typo in the commit message
lgtm. With this, we will be able to update the staging environment without impacting the rest of the infra.
Oct 6 2020
In D4165#103265, @ardumont wrote: looks good to me.
I don't see Vagrant in there, but I gather that's what you said about making the network part in Vagrant a noop or something.
WDYT about adding a variable like profile::network::[activated|managed|whatever] to enable or disable the network profile application? It would avoid introducing Vagrant specifics into the manifests.
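The idea sketched in Puppet, with the caveat that the class and parameter names are only the proposal above, not anything that exists yet:

```puppet
# Sketch of the proposed guard; "activated" is the hypothetical flag,
# to be overridden to false in the Vagrant hiera data.
class profile::network (
  Boolean $activated = true,
) {
  if $activated {
    # ...existing network configuration resources...
  }
}
```

This keeps the Vagrant-specific behavior in hiera data rather than in the manifests themselves.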
In D4150#102910, @ardumont wrote: If I'm understanding this correctly, this will allow us to generate self-signed certificates when we want to create a service in our stack that needs a certificate.
Just generate it with the script within (generate-certificate) and commit it into this repository.
Then trigger the vagrant provision <vm-with-desired-service> again.
Then everything should run smoothly within that provision step.
correct?
Yes, exactly. Only the Icinga part remains, to remove the last errors during provisioning. I still don't have a simple way to do it, as it uses a certificate named after the VM's FQDN, which would have to be generated after the VM creation if we want it automated.
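For reference, the self-signed generation step in that workflow boils down to something like the following. This is a sketch of what such a generate-certificate helper could do, not its actual contents; the FQDN and the openssl options are assumptions:

```shell
#!/bin/sh
# Sketch: generate a self-signed certificate named after a VM's FQDN.
# The FQDN below is a placeholder; key size and validity are assumptions.
set -e
fqdn="vm0.internal.example.org"
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "${fqdn}.key" -out "${fqdn}.crt" \
  -days 365 -subj "/CN=${fqdn}"
```

The resulting .key/.crt pair would then be committed and picked up on the next provision run.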
Another question: I'm just wondering whether it should be named netbox-vagrant instead of netbox, given what we have in defaults.yaml [1].
Good remark. There must be a mistake somewhere in this property override, because when I provision the VM locally it looks for netbox. I will remove this declaration: it's not necessary, and it will allow removing a Vagrant-specific property from the defaults.yaml file.
Refactor virtualbox images declarations
Remove useless empty lines
Oct 5 2020
I failed to execute mount in the container without the privileged option, so I finally configured the swh-fuse job with this option.
In fact, after further tests, only the device and the --privileged option are necessary, as running in privileged mode completely disables seccomp.
I made some tests locally; adding the options --privileged, --device /dev/fuse and --cap-add SYS_ADMIN works:
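The invocation tested was along these lines (the image name and the mount command inside the container are placeholders, not the actual job definition):

```
# Sketch of the local test: expose the fuse device and grant the
# capabilities needed for mount inside the container.
docker run --rm \
  --privileged \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  swh-fuse-test \
  sh -c 'swh fs mount /mnt/swhfs'
```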
Oct 2 2020
The service is up and running at https://inventory.internal.softwareheritage.org
I will add the admin password to the credentials.