
Install a new VPN endpoint at Rocquencourt
Closed, Migrated

Description

There is currently only one VPN endpoint at Rocquencourt, on louvre.

Louvre recently experienced a catastrophic failure and had to be rebooted, shutting down all existing network connections.
It will also soon be moved to a different physical bay, once again shutting down connections for all users.

This can be avoided by creating a different VPN endpoint in Rocquencourt, on a different host.

Event Timeline

ftigeot triaged this task as Normal priority. Feb 12 2019, 1:12 PM
ftigeot created this task.

Louvre has gone down more than once before. Some of these events are documented in T1173.

Thanks for recording this task; I'll use the opportunity to document the reasoning behind the current internal networking setup, to try and make sure nothing is forgotten before migrating it.

There's more to the networking single point of failure than just the VPN (which one, btw?), and I think adapting the configuration to avoid a single point of failure will be more work than just moving the VPN configuration to a VM.

The "internal" network is currently split across three logical networks:

  • SESI VLAN 440 (network 192.168.100.0/24) connects the "internal" interface of all the machines at Rocquencourt (bare metal, VM, containers);
  • Azure Virtual Network swh-vnet (network 192.168.200.0/22) connects our machines on Azure;
  • OpenVPN (network 192.168.101.0/24), connecting some external resources (two containers in Bologna, orangerie and orangeriedev), as well as roaming or static staff machines to one another.

The current expectation throughout the infrastructure is that those three networks can communicate with each other directly. The way we currently do that is to have a common router with a leg on each of the networks.

The machine currently acting as this router is louvre, which is:

  • the default gateway (192.168.100.1) for the VLAN 440 network (also set up to do MASQUERADEing towards the public internet; see the sketch after this list)
  • the OpenVPN server
  • the local endpoint for an IPSEC tunnel allowing communication between VLAN 440, OpenVPN and the Azure Virtual Network
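For reference, the MASQUERADE part is a single netfilter rule along these lines (an illustrative sketch only; eth0 stands in for louvre's public interface):

# NAT traffic from the internal VLAN 440 network towards the public internet
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE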

To make the network communication across all machines work seamlessly, a few classes of configurations are required.

  • Rocquencourt machines that are set up as purely internal (no public IP address), such as containers, some virtual machines, and the Elasticsearch and Ceph nodes, have the easiest config: a static network configuration with the default route pointing at 192.168.100.1. Since that address is the common router, this lets these machines reach all networks, as well as the outside world, with no extra config.
  • Rocquencourt machines with a public IP address (workers, VMs with external services, some hypervisors) are set up with two interfaces, one on the public VLAN (untagged on the physical network) and one on the private VLAN (id=440 on the physical network):
    1. On the main routing table:
      • The default route (reaching the outside world) is set via the public interface.
      • We add explicit routes for the two VPN networks via the internal interface gateway.
    2. By default, Linux sends packets from any source IP address through the default gateway; to avoid that and make sure bidirectional communication across all three internal networks succeeds, we:
      1. Define an extra routing table private, in /etc/iproute2/rt_tables
      2. Set an ip rule to make packets outgoing on the private interface use the private routing table
      3. Set a default route on the private table to go through 192.168.100.1.

Both Rocquencourt configurations, in simple cases (i.e. not for hypervisors with bridges and VLAN trunking and bonding on top of one another), are handled by the profile::network puppet manifest.
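For illustration, on a dual-homed machine this boils down to commands along these lines (a sketch only; eth0/eth1, the public gateway 192.0.2.1 and the host address 192.168.100.42 are placeholders, not any actual host's configuration):

# main table: default route via the public side,
# plus explicit routes to the two VPN networks via the internal gateway
ip route add default via 192.0.2.1 dev eth0
ip route add 192.168.101.0/24 via 192.168.100.1 dev eth1
ip route add 192.168.200.0/22 via 192.168.100.1 dev eth1

# extra "private" routing table for traffic leaving the internal interface
echo "100 private" >> /etc/iproute2/rt_tables
ip rule add from 192.168.100.42 lookup private
ip route add 192.168.100.0/24 dev eth1 table private
ip route add default via 192.168.100.1 dev eth1 table private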

  • The Azure routing matrix is, quite frankly, black magic to my eyes. The setup is as follows:
    • We have defined a virtual network, swh-vnet, to use the range 192.168.200.0/22.
    • All virtual machines have at least one network interface:
      1. we attach it to the swh-vnet network on VM setup
      2. an "IP configuration" is added so they get a ("Dynamic" but morally never changing) IP address in the range 192.168.200.0/22 through a DHCP server (managed by Azure infra).
        1. if public IP addressing is enabled on the "IP configuration", nothing changes in the output of the DHCP server: the VM only sees the private VNet address. However, bidirectional NAT happens at the edge of the network so that traffic to the public IP arrives at the VM, and traffic exiting the VM to the internet appears as coming from the public IP address.
        2. if public IP addressing is disabled, the machine still gets to talk to the outside world through a (built-in) NAT gateway.
    • To get communication to/from swh-vnet, we have set up a "Virtual network gateway" swh-vnet-gateway
      • https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways
      • "Site-to-Site" configuration uses IPSEC / IKEv2 on the backend
      • azure-side configuration:
        1. declare louvre.softwareheritage.org as "Local network gateway", setting up the networks that are available behind it (192.168.100.0/24 and 192.168.101.0/24)
        2. declare a "Connection" between swh-vnet-gateway and louvre.softwareheritage.org (set the shared secret)
      • louvre-side configuration
        1. strongswan as IPSEC client
        2. Tunnel configuration in /etc/ipsec.conf (a sketch follows after this list)
        3. configuration skimmed from a random blog post, confirmed by comparing with the Cisco config provided by azure
      • On the azure VM side, the routing happens automatically within their routing fabric
      • On the louvre side, IPSEC routing happens directly within the kernel (the routes aren't visible through ip route, only ipsec status shows the state of the tunnels)
  • The OpenVPN clients get their routes to the three networks in two ways:
    • they can communicate with each other with the client-to-client directive enabled in the OpenVPN config
    • routes to the other networks are pushed explicitly (push "route xxx yyy" directives; see the snippet after this list)
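To make this a bit more concrete, the louvre-side tunnel definition in /etc/ipsec.conf looks roughly like the following. This is a hedged sketch, not the actual file: the connection name, the Azure gateway address 203.0.113.10 and the key handling are placeholders.

conn azure-swh-vnet
    keyexchange=ikev2
    # pre-shared key, matching the Azure "Connection" shared secret, stored in /etc/ipsec.secrets
    authby=secret
    # louvre's public address
    left=%defaultroute
    # networks announced as reachable behind louvre
    leftsubnet=192.168.100.0/24,192.168.101.0/24
    # public IP of the Azure "Virtual network gateway" (placeholder)
    right=203.0.113.10
    rightsubnet=192.168.200.0/22
    auto=start

Likewise, the OpenVPN route pushing corresponds to server directives along these lines (an illustrative excerpt, not the full louvre configuration):

# VPN client address pool
server 192.168.101.0 255.255.255.0
# let roaming clients talk to each other
client-to-client
# VLAN 440
push "route 192.168.100.0 255.255.255.0"
# Azure swh-vnet (192.168.200.0/22)
push "route 192.168.200.0 255.255.252.0"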

So, to sum up, the current SPOF is actually a nicely tangled ball of, in order of priority from (IMO) most to least important:

  • gateway to the internet for internal-only machines at Rocquencourt
  • router between the private networks
  • IPSEC network-to-network gateway for Azure
  • OpenVPN gateway for roaming clients and odd external machines that don't warrant a network-to-network connection

All these bits also warrant a bunch of firewalling rules on the SESI side, as well as with the partners where we use OpenVPN, which took quite a while to get right and can't be changed in a pinch.

I honestly have no idea from which side to start pulling this string to untangle it, but at least now the breakdown of the setup is written down somewhere.

Thanks @olasd for this piece of information.

One remark: I don't want to move the OpenVPN to a VM, I want (at least) two OpenVPN endpoints on two different physical machines. Indeed, we might want the same redundancy for other networking paths currently going through louvre. This also raises the question (already mentioned by @ftigeot IINM) of using a dedicated network-appliance-like machine as the central router in Rocquencourt (I used Lanner-brand appliances without problems in my previous job, or it might also be a simple 1U Dell machine).

One question: why are there so many machines with a public IPv4? I understand this for louvre, since it's the current main router (doing MASQ etc.), but I don't really get why pergamon, tate, moma, banco or beaubourg have public IPs. I'd much prefer a VIP-like frontend (e.g. with varnish or nginx acting as a reverse proxy, potentially doing load balancing, caching, TLS handshakes and so on).
This frontend machine would not even need a public IP; doing DNAT on the front router(s) could be enough, I think.

Or something like:

                               +-+
           pub ip1             |p|
                 +-- swh-fw1 --+r|
                /              |i|
Internet -- SESI               |v|
                \              |a|
                 +-- swh-fw2 --+t|
           pub ip2             |e|
                               +-+

I'm slightly reordering what you wrote here, sorry!

One remark: I don't want to move the OpenVPN to a VM, I want (at least) two OpenVPN endpoints on two different physical machines.

Please help me understand the concrete reasoning behind OpenVPN needing such a high availability setup. While it's an important tool for backend admin work (e.g. deployment of new versions), an OpenVPN downtime will not prevent most of the team from working. The only bit of public-facing infrastructure really depending on OpenVPN is the vault, and it's been designed to gracefully handle unavailability, on purpose.

I think we should spend time identifying the team's day-to-day tasks that depend on OpenVPN, and thinking of alternative ways of achieving them. For instance, one thing I can think of right off the bat is access to production logs through Kibana, which we should be able to put behind a (public) reverse proxy with authentication.

One question: why are there so many machines with a public IPv4? I understand this for louvre, since it's the current main router (doing MASQ etc.), but I don't really get why pergamon, tate, moma, banco or beaubourg have public IPs.

There are two distinct answers to this question:

  • workers each have a separate public IP address to avoid a potential upstream blacklist blocking all of them at once. They also have a mostly free-for-all whitelist on the SESI edge firewall.
  • machines with public services have a public IP address because it is (or, I guess, was) the solution with the least overhead to spin up the infra at the time.

Our infra grew organically around a single hypervisor (louvre) with a large disk array directly attached, and we've never really done any refactoring to get away from that. "Load balancing" across our public IP addresses could happen at the network border just as well as on the machines separately, as is done currently.

I'd much prefer a VIP-like frontend (e.g. with varnish or nginx acting as a reverse proxy, potentially doing load balancing, caching, TLS handshakes and so on). This frontend machine would not even need a public IP; doing DNAT on the front router(s) could be enough, I think.

Sure, I don't see any reason not to gradually move towards that. For instance, that's how Jenkins has been set up: jenkins.softwareheritage.org points to pergamon's public IP, which reverse-proxies to the service on the dedicated VM.
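As a sketch, assuming a hypothetical internal Jenkins VM reachable at 192.168.100.50, the reverse-proxy part is an nginx vhost along these lines (illustrative only, not pergamon's actual configuration; TLS termination would live in the same vhost):

server {
    listen 80;
    server_name jenkins.softwareheritage.org;

    location / {
        # hypothetical internal address of the Jenkins VM
        proxy_pass http://192.168.100.50:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}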

Or something like:

                               +-+
           pub ip1             |p|
                 +-- swh-fw1 --+r|
                /              |i|
Internet -- SESI               |v|
                \              |a|
                 +-- swh-fw2 --+t|
           pub ip2             |e|
                               +-+

Indeed, we might want the same redundancy for other networking paths currently going through louvre.
This also raises the question (already mentioned by @ftigeot IINM) of using a dedicated network-appliance-like machine as the central router in Rocquencourt (I used Lanner-brand appliances without problems in my previous job, or it might also be a simple 1U Dell machine).

I'm not questioning that the status quo needs to be improved here. We just need to keep in mind that adding complexity (be that different hardware, software high availability, ...) to the network setup means more moving parts, more maintenance overhead, more things to learn, more things to monitor.

My highest priority concern on the current setup is that everything is cobbled together by hand without much consistency or monitoring, and that only my head used to contain a coherent view of the full setup (now, it's my head and this ticket, yay).

All in all, I would really like our internal networking to move towards something that is:

  • declarative: we should have a somewhat self-documenting configuration for the topology of the network.
  • generic and consistent: we should be able to route across disparate networks, e.g. from different cloud providers, as well as on-premises networks and a VPN for admin tasks.
  • reliable and resilient: there must be some sort of mesh routing between our networks, so that things keep working when a link fails; networks must support having several edge routers
zack removed ftigeot as the assignee of this task. Sep 8 2020, 8:59 AM
vsellier changed the task status from Open to Work in Progress. May 27 2021, 11:00 AM
vsellier claimed this task.
vsellier moved this task from Backlog to in-progress on the System administration board.

The OPNsense firewall configuration was finalized, based on the initial configuration olasd had previously done on the OPNsense firewalls.

The new VPN uses the network range 192.168.102.0/23 (192.168.102.1 -> 192.168.103.254).

What was changed:

  • On VLAN440: packets from the VPN were routed to the default gateway (louvre). To avoid that, a new NAT rule was declared: on VLAN440, NAT packets from the VPN so they appear as issued from 192.168.100.130 (the OPNsense gateway on the VLAN).
  • On VLAN440: a new rule allowing traffic from the gateway was declared, as these packets are not issued from the VLAN itself (Firewall / Rules / VLAN440 rule 28).
  • To allow managing the firewall from the VPN, the rule blocking packets to the firewall was disabled (Firewall / Rules / OpenVPN rule 15).

I have run some tests with the certificates: it works with newly generated certificates, but also with the old certificates imported into the OPNsense user's profile.
This means we can perform the migration by just updating the public address of louvre.softwareheritage.org, even if it could be interesting to take the opportunity to change the public server name to something more functional like 'vpn.softwareheritage.org'.

The IPsec VPN also needs to be tested for the Azure nodes.

Currently, we have the old OpenVPN and IPsec running in parallel with the new OPNsense VPNs.

Some subnets and routes had to be explicitly declared to make this configuration work. For example, only the users connected to the OPNsense OpenVPN are routed through the OPNsense IPsec tunnel when they connect to an Azure server.

The final goal is to completely decommission louvre.
To limit the possible impact on some servers, like the desktops in the INRIA offices, and to eventually use a more common IP for the firewalls' gateway VIP on VLAN440 (currently 192.168.100.130), we will reassign the current network ranges and the IP of louvre to the firewalls.

The final target will be something like:

The detailed plan to perform the migration as smoothly as possible will follow in a forthcoming comment.

  • new dns entry vpn.softwareheritage.org created:
vpn	A	3600	128.93.166.2
  • The address change of the firewalls is in preparation with D5906
  • main VLAN440 IP changes, to regroup the firewalls at the beginning of the range and have similar IPs on each VLAN:
    • pushkin: 192.168.100.128 -> 192.168.100.2
    • glyptotek: 192.168.100.129 -> 192.168.100.3
  • next step is to try to import the current certificate revocation list of louvre

The certificates revoked by louvre can be imported into OPNsense and revoked in an internal CRL.
This is simpler than importing louvre's current CRL, as an imported CRL needs to be managed externally and its raw content pasted into the UI.

This was tested with a test certificate:

  • created on louvre
  • the user can connect to the OPNsense VPN
  • the certificate is imported into OPNsense with an empty key (certificate only, no private key)
  • the certificate is added to the revocation list
  • the user can no longer connect to the VPN.

I will import the 24 revoked certificates of louvre:

root@louvre:/etc/openvpn/keys/pki/revoked/certs_by_serial# ls -l | wc -l
24
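For reference, the already-revoked serial numbers can also be listed directly from louvre's existing CRL (the crl.pem path is assumed from the PKI layout above):

openssl crl -in /etc/openvpn/keys/pki/crl.pem -noout -text | grep 'Serial Number'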

All the revoked certificates are imported into the OPNsense CRL.
I will also import the valid certificates, so we have them at hand in case a revocation is needed.

  • louvre's certificates have all been created on OPNsense
  • I performed some tests to access the firewall through a serial console. SSH access can be done via the serial console of one of the servers exposed on the IPMI network:
# ipmitool -I lanplus -H swh-ceph-mon1-adm.inria.fr -U XXX -P XXX sol activate
[SOL Session operational.  Use ~? for help]

pompidou login: vsellier
Password: 
Last login: Wed Jun 23 08:59:19 UTC 2021 on ttyS1
Linux pompidou 5.4.103-1-pve #1 SMP PVE 5.4.103-1 (Sun, 07 Mar 2021 15:55:09 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
vsellier@pompidou ~ % ssh root@192.168.100.2
Password:
Last login: Wed Jun 23 09:08:40 2021 from 192.168.100.170
----------------------------------------------
|      Hello, this is OPNsense 21.1          |         @@@@@@@@@@@@@@@
|                                            |        @@@@         @@@@
| Website:	https://opnsense.org/        |         @@@\\\   ///@@@
| Handbook:	https://docs.opnsense.org/   |       ))))))))   ((((((((
| Forums:	https://forum.opnsense.org/  |         @@@///   \\\@@@
| Code:		https://github.com/opnsense  |        @@@@         @@@@
| Twitter:	https://twitter.com/opnsense |         @@@@@@@@@@@@@@@
----------------------------------------------

*** pushkin.internal.softwareheritage.org: OPNsense 21.1.5 (amd64/OpenSSL) ***

 IPSecAzure (ipsec1) -> v4: 10.111.1.1/30
 VLAN440 (vtnet1) -> v4: 192.168.100.2/24
 VLAN442 (vtnet3) -> v4: 192.168.50.2/24
 VLAN443 (vtnet2) -> v4: 192.168.130.2/24
 VLAN1300 (vtnet0) -> v4: 128.93.166.3/26

 HTTPS: SHA256 1C 8A B5 F7 09 0B 1E 41 CA 6D 8D 87 36 C3 83 ED
               8E 2B 1D 8D B4 EC C8 E9 7F 4F A2 E7 DE 80 C2 C9
 SSH:   SHA256 6x6jbjeoxPL2oTOtDbgl4I677M2yu4IAQAPR+sTcd/g (ECDSA)
 SSH:   SHA256 aXd7x/nuO8DWRGN/8Wdn1hpv5XLStxFKhJMKTsvvOTs (ED25519)
 SSH:   SHA256 nE1LasZBoeo6gjyP0zLCPqSns2GW4IFTLZ1IGo/yW5o (RSA)

  0) Logout                              7) Ping host
  1) Assign interfaces                   8) Shell
  2) Set interface IP address            9) pfTop
  3) Reset the root password            10) Firewall log
  4) Reset to factory defaults          11) Reload all services
  5) Power off system                   12) Update from console
  6) Reboot system                      13) Restore a backup

Enter an option:

So it will be possible to shut down the main firewall and fall back to the second one in case of a configuration mistake.
The VM can also be shut down in Proxmox if needed.

Regarding the recurrent disconnections of the Azure VPN, it seems the only difference is the reauth=no option, activated on louvre but not on OPNsense.
The option was activated on OPNsense too; the connection should no longer be killed on a key renegotiation (if I have understood correctly ;))

From the ipsec documentation: https://wiki.strongswan.org/projects/strongswan/wiki/ConnSection

reauth = yes | no

whether rekeying of an IKE_SA should also reauthenticate the peer. In IKEv1, reauthentication is always done.
In IKEv2, a value of no rekeys without uninstalling the IPsec SAs, a value of yes (the default)
creates a new IKE_SA from scratch and tries to recreate all IPsec SAs.
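In plain strongswan terms (OPNsense generates the equivalent configuration from its GUI), this amounts to adding the option to the connection definition, roughly (reusing the placeholder connection name from the sketch earlier in this task):

conn azure-swh-vnet
    # ... existing tunnel settings ...
    # on IKE_SA rekeying, keep the IPsec SAs instead of re-authenticating from scratch
    reauth=no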

Let's see if it's better

It seems better: there has been at worst one error per day since the update.

The VPN documentation[1] was updated to explain how to manage CSR signing and certificate revocation with the firewall.

[1] https://wiki.softwareheritage.org/wiki/VPN

vsellier moved this task from in-progress to done on the System administration board.