Software Heritage

[cassandra] create etcd / controlplane servers
Closed, Resolved · Public


A good practice for production kubernetes clusters is to separate etcd and controlplane services from the worker nodes.

In our case it will also allow us to have homogeneous worker nodes and to keep the kubernetes overhead off the cassandra servers themselves.

3 VMs with 4 vcpus and 8 GB of RAM each, default disks
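With RKE-style provisioning, the split described above could be expressed as a cluster.yml sketch like the following (hostnames and the ssh user are placeholders, not the real inventory):

```yaml
# hypothetical sketch: dedicated etcd/controlplane nodes, cassandra
# servers as pure workers (all names are placeholders)
nodes:
  - address: mgmt1.example.internal
    user: admin
    role: [etcd, controlplane]
  - address: mgmt2.example.internal
    user: admin
    role: [etcd, controlplane]
  - address: mgmt3.example.internal
    user: admin
    role: [etcd, controlplane]
  - address: cassandra01.example.internal
    user: admin
    role: [worker]
```

Rancher custom clusters express the same topology through role flags on each node's registration command rather than a static node list.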

Event Timeline

vsellier changed the task status from Open to Work in Progress. Jul 7 2022, 11:56 AM
vsellier moved this task from Backlog to in-progress on the System administration board.

The management nodes were correctly created, but it seems rancher is having some issues registering them in the cluster.

I couldn't figure out yet why this is failing.
I found nothing weird in the logs (even with the TRACE level activated).

vsellier moved this task from in-progress to done on the System administration board.

Finally, the cluster is up.
I'm not sure what unstuck the node registration, but I suspect a node with all the roles is needed to bootstrap the cluster.
I tried this initially and it didn't work, but I'm not sure what state the cluster was in at that point.
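For a Rancher custom cluster, bootstrapping with an all-roles node amounts to passing all three role flags to the node registration command that Rancher displays. A sketch of its shape (the server URL, token, checksum, and agent version are placeholders; the real command is copied from the Rancher UI):

```shell
# placeholder values throughout -- copy the actual command from Rancher
docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.6.x \
  --server https://rancher.example.internal \
  --token <registration-token> --ca-checksum <checksum> \
  --etcd --controlplane --worker
```

Subsequent nodes would then be registered with only the flags matching their intended role (e.g. just `--worker` for the cassandra servers).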

The cassandra nodes were initialized and configured with these commands:

mkdir -p /etc/facter/facts.d
# fact values were elided from the original log; the heredocs are
# closed here so the snippet is valid shell
cat >/etc/facter/facts.d/deployment.txt <<EOF
EOF
cat >/etc/facter/facts.d/subnet.txt <<EOF
EOF
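Facter reads external facts from key=value text files under /etc/facter/facts.d; the actual values were elided above, so the sketch below only illustrates the expected file format, with a placeholder value and writing to /tmp to stay side-effect free:

```shell
# illustrate the key=value external-fact format facter expects;
# "staging" is a placeholder, not the real deployment value
mkdir -p /tmp/facts.d
cat >/tmp/facts.d/deployment.txt <<EOF
deployment=staging
EOF
grep -q '^deployment=staging$' /tmp/facts.d/deployment.txt && echo "fact file format ok"
```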

echo "" >> /etc/hosts

echo "nameserver" >/etc/resolv.conf 

apt update; apt install -y facter puppet gnupg
apt -y dist-upgrade

apt autoremove

puppet agent --test --fqdn cassandra0[0-6] --server

lvcreate -L100G --name docker $(hostname)-vg

systemctl stop docker docker.socket
rm -rf /var/lib/docker

zpool create docker /dev/$(hostname)-vg/docker
zfs set compression=zstd atime=off relatime=on docker
zfs create -o mountpoint=/var/lib/docker docker/lib
systemctl restart docker
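Once docker is restarted, the setup can be sanity-checked; these verification commands are an assumption about how one would confirm it, not part of the original log:

```shell
# confirm the docker dataset is compressed and mounted where docker
# expects its data directory
zfs get -H -o value compression docker
zfs list -o name,mountpoint docker/lib
docker info --format '{{.DockerRootDir}}'
```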

+ the docker command given by rancher to register the node as a worker