Aug 8 2022
Jul 13 2022
Unfortunately, the operator test is a failure due to the lack of configuration options:
- non-blocker: the init containers are OOMKilled during startup; this can be worked around by editing the cassandra statefulset created by the operator to extend the limits
- blocker: it's not possible to configure the commitlog_directory explicitly; it defaults to /var/lib/cassandra/commitlog
- it's not easy to propagate the host mounts to use the 2 mountpoints /srv/cassandra and /srv/cassandra/commitlog without tweaking the kernel / rancher configuration
- it's not possible to add a second volume to the pod description created by the operator
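The limits workaround mentioned above could look roughly like the following patch applied to the operator-created statefulset; the object name, container name, and memory value are assumptions, not the operator's actual defaults:

```yaml
# patch.yaml -- hypothetical sketch, applied with something like:
#   kubectl -n cassandra patch statefulset <statefulset-name> --patch-file patch.yaml
# The statefulset/container names below are placeholders.
spec:
  template:
    spec:
      initContainers:
        - name: server-config-init   # name is an assumption
          resources:
            limits:
              memory: 512Mi          # extended from the operator default to avoid the OOMKill
```

Note that the operator may reconcile such a manual edit away, which is part of why the lack of first-class configuration is a problem.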
Jul 12 2022
declare the extra volume binding (no impact on the tfstate)
The mountpoint needs to be declared on the kubelet container to be reachable by the pods:
```diff
--- /tmp/cluster-orig.yaml	2022-07-12 11:27:27.169509573 +0200
+++ /tmp/cluster.yaml	2022-07-12 11:26:54.865395186 +0200
@@ -58,6 +58,8 @@
     service_node_port_range: 30000-32767
   kube-controller: {}
   kubelet:
+    extra_binds:
+      - '/srv/prometheus:/srv/prometheus'
     fail_swap_on: false
     generate_serving_certificate: false
   kubeproxy: {}
```
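Assuming this cluster is managed with RKE (as the cluster.yaml structure suggests), the change above would typically be applied by re-running the provisioner against the edited config, which recreates the kubelet containers with the new bind mount:

```shell
# sketch: re-apply the RKE cluster configuration; the config path is a placeholder
rke up --config /tmp/cluster.yaml
```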
Jul 11 2022
add tfstate file
Configure the data directory:
```
root@pergamon:~# clush -b -w @cassandra-mgmt hostname
---------------
rancher-node-cassandra1
---------------
rancher-node-cassandra1
---------------
rancher-node-cassandra2
---------------
rancher-node-cassandra2
---------------
rancher-node-cassandra3
---------------
rancher-node-cassandra3
```
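For reference, the data directory configuration in cassandra.yaml could be sketched as follows, assuming the /srv/cassandra mount layout mentioned above; the exact paths are assumptions:

```yaml
# cassandra.yaml sketch -- these are real Cassandra configuration keys,
# but the paths below assume the /srv/cassandra mountpoints from this task
data_file_directories:
  - /srv/cassandra/data
commitlog_directory: /srv/cassandra/commitlog
```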
Finally, the cluster is up.
I'm not sure what unstuck the node registration, but I suspect a node with all the roles is needed to bootstrap the cluster.
I tried this initially and it didn't work, but I'm not sure what state the cluster was in at the time.
update dns configuration to use pergamon directly
Jul 7 2022
The management nodes were correctly created, but it seems rancher is having some issues registering them in the cluster.
The kubernetes upgrade was launched through the Azure portal (it's also possible to trigger it with the az command line).
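For reference, the az equivalent would look roughly like this; the resource group and cluster name are placeholders:

```shell
# sketch: upgrade the AKS control plane and node pools via the Azure CLI
# (resource group and cluster name below are hypothetical)
az aks upgrade \
  --resource-group my-resource-group \
  --name my-aks-cluster \
  --kubernetes-version 1.22.6
```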
Everything looks fine:
- A new node with version 1.22.6 was created
```
kubectl get pods -o wide; echo; kubectl get nodes -o wide
NAME                               READY   STATUS    RESTARTS      AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
debian                             1/1     Running   1 (23m ago)   27m   10.244.0.63   aks-default-36212332-vmss000000   <none>           <none>
rancher-59f4c74c6f-5vlq6           1/1     Running   0             91m   10.244.0.59   aks-default-36212332-vmss000000   <none>           <none>
rancher-59f4c74c6f-92txx           1/1     Running   0             90m   10.244.0.60   aks-default-36212332-vmss000000   <none>           <none>
rancher-59f4c74c6f-cfshs           1/1     Running   0             91m   10.244.0.58   aks-default-36212332-vmss000000   <none>           <none>
rancher-webhook-6958cfcddf-2gjwn   1/1     Running   0             85d   10.244.0.26   aks-default-36212332-vmss000000   <none>           <none>
```
rebase
I've no idea whether the cpu/memory/disk specs are large enough; I didn't find the information in the thanos documentation.
Jul 5 2022
Please also merge this into the staging branch and notify the sysadm irc room when it's pushed; we will need to deploy it manually to clean up the previous services.
Jun 30 2022
rebase
fix the readme name
Jun 29 2022
It seems the rancher cluster can be updated to any version:
from https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/:
> Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
It's also confirmed by the suse rke compatibility matrix: https://www.suse.com/assets/EN-Rancherv2.6.4-150422-0151-56.pdf
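The docs quoted above describe installing Rancher on an existing Kubernetes cluster via its Helm chart; a minimal sketch, where the hostname is a placeholder:

```shell
# sketch, following the Rancher-on-k8s install docs; hostname is a placeholder
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.org
```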
Jun 28 2022
It will be solved by D7890.