Software Heritage

Test rancher pros/cons
Closed, Migrated · Edits Locked

Description

If we choose to go with Kubernetes, we will probably need to manage several clusters to isolate the concerns.
For example, we will need at least a production and a staging cluster, plus some sandboxes for developers and tests.

Rancher is supposed to provide an easy way to centralize cluster management, but it needs to be tested to see what is possible.
The first step is to deploy a basic installation and play with it, but the goal is to answer the following points:

  • pros/cons compared to a manual installation
  • automation capabilities (is there any integration with Terraform or Puppet?)
  • operation cost (cost of an upgrade?)
  • ...

Event Timeline

vsellier changed the task status from Open to Work in Progress.May 11 2021, 9:30 AM
vsellier triaged this task as Normal priority.
vsellier created this task.
vsellier moved this task from Backlog to in-progress on the System administration board.

The basic installation with helm is simple for a single-server installation: https://rancher.com/docs/rancher/v2.5/en/installation/install-rancher-on-k8s/#install-the-rancher-helm-chart
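For reference, the install from the linked documentation boils down to a few commands. This is only a sketch: the hostname is a placeholder, and it assumes cert-manager is already installed in the cluster for the default self-signed TLS setup.

```shell
# Add the Rancher Helm repository (the "latest" channel)
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest

# Rancher expects to run in the cattle-system namespace
kubectl create namespace cattle-system

# Minimal install; replace the hostname with the real DNS entry of the master
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.org
```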

The manual creation of a new Kubernetes cluster is done via the Rancher interface. It provides a docker command to execute on each node of the cluster, which simply launches and configures Kubernetes on that node.
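The command shown by the interface has roughly the following shape (the agent version, server URL, token and checksum are generated by Rancher for each cluster; the role flags depend on what is selected in the UI):

```shell
# Registers this machine as a node of the new cluster; all <...> values
# are placeholders filled in by the Rancher UI when the cluster is created.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://<rancher-master> \
  --token <registration-token> \
  --ca-checksum <checksum> \
  --etcd --controlplane --worker
```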


Cluster upgrades are very easy, as they are only a pull/restart of the new versions of the Rancher images. They can be done without service interruption; only the scheduling of new pods is disabled during the upgrade. The upgrade can be launched via the interface.
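When Rancher itself is installed with helm as above, upgrading the Rancher server should amount to a standard chart upgrade (sketch, assuming the release is named `rancher` in `cattle-system`):

```shell
# Fetch the new chart versions, then roll the Rancher deployment forward;
# existing --set values can be reused with --reuse-values if needed.
helm repo update
helm upgrade rancher rancher-latest/rancher --namespace cattle-system
```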

I have an issue displaying the cluster status, but I think it's due to a DNS issue, as the Rancher master has no real DNS entry.

For the automation, the terraform provider looks promising: https://registry.terraform.io/providers/rancher/rancher2/latest/docs
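A minimal end-to-end sketch of what driving Rancher from Terraform could look like with that provider (the API URL and token are placeholders, and the empty `rke_config` block simply takes the provider defaults; this has not been validated here):

```shell
# Write a minimal Terraform configuration using the rancher2 provider,
# then let Terraform create a cluster through the Rancher API.
cat > main.tf <<'EOF'
terraform {
  required_providers {
    rancher2 = {
      source = "rancher/rancher2"
    }
  }
}

provider "rancher2" {
  api_url   = "https://rancher.example.org/v3"  # placeholder
  token_key = var.rancher_token                 # API token created in the UI
}

variable "rancher_token" {
  type      = string
  sensitive = true
}

resource "rancher2_cluster" "staging" {
  name        = "staging"
  description = "Staging cluster managed by Terraform"
  rke_config {}  # default RKE configuration
}
EOF

terraform init
terraform apply
```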

With a master declared in the DNS, everything seems to work well.
When the docker command is launched on a node, its status is correctly detected and the node is configured after a couple of minutes.
The cluster explorer is also working now.

It's now possible to test the Terraform provisioning.

This comment was removed by vsellier.
vsellier moved this task from in-progress to done on the System administration board.

I think the issue can be closed.
The pros are:

  • it simplifies cluster management (creation, configuration and, most of all, Kubernetes upgrades)
  • it centralizes the global view of the clusters and what is running on them
  • OSS and transparent policy

The cons are:

  • The underlying network overlay is not working on Debian (10 and 11)

It is possible to start with standalone k3s clusters, for example, until we have too many clusters to manage.

It seems the network issue is fixed in version 2.6.3, which is quite good news.

swhworker@poc-rancher:~$ ./test-network.sh 
=> Start network overlay test
poc-rancher-sw0 can reach poc-rancher-sw0
poc-rancher-sw0 can reach poc-rancher-sw1
poc-rancher-sw1 can reach poc-rancher-sw0
poc-rancher-sw1 can reach poc-rancher-sw1
=> End network overlay test
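The script itself is not shown in the task; a hypothetical sketch producing this kind of output would run a pod on every node (e.g. an `overlaytest` DaemonSet of busybox pods, which is assumed to be already deployed here) and check that each pod can ping every other pod over the overlay network:

```shell
#!/bin/sh
# Hypothetical sketch of a test-network.sh: for every (pod, target pod)
# pair of the overlaytest DaemonSet, ping the target pod IP from the pod
# and report reachability per node.
echo "=> Start network overlay test"
kubectl get pods -l app=overlaytest \
  -o jsonpath='{range .items[*]}{@.metadata.name} {@.spec.nodeName}{"\n"}{end}' |
while read -r pod node; do
  kubectl get pods -l app=overlaytest \
    -o jsonpath='{range .items[*]}{@.status.podIP} {@.spec.nodeName}{"\n"}{end}' |
  while read -r ip target; do
    if kubectl exec "$pod" -- ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
      echo "$node can reach $target"
    else
      echo "$node cannot reach $target"
    fi
  done
done
echo "=> End network overlay test"
```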

The poc-rancher-swX nodes are still on buster; I will run the same test on bullseye.