Jul 11 2022
update dns configuration to use pergamon directly
Jul 7 2022
The management nodes were correctly created, but it seems Rancher is having some issue registering them in the cluster.
The Kubernetes upgrade was launched through the Azure portal (it's also possible to trigger it with the az command line)
Everything looks fine:
- A new node with version 1.22.6 was triggered
kubectl get pods -o wide; echo; kubectl get nodes -o wide
NAME                               READY   STATUS    RESTARTS      AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
debian                             1/1     Running   1 (23m ago)   27m   10.244.0.63   aks-default-36212332-vmss000000   <none>           <none>
rancher-59f4c74c6f-5vlq6           1/1     Running   0             91m   10.244.0.59   aks-default-36212332-vmss000000   <none>           <none>
rancher-59f4c74c6f-92txx           1/1     Running   0             90m   10.244.0.60   aks-default-36212332-vmss000000   <none>           <none>
rancher-59f4c74c6f-cfshs           1/1     Running   0             91m   10.244.0.58   aks-default-36212332-vmss000000   <none>           <none>
rancher-webhook-6958cfcddf-2gjwn   1/1     Running   0             85d   10.244.0.26   aks-default-36212332-vmss000000   <none>           <none>
rebase
I have no idea whether the cpu/memory/disk specs are large enough; I didn't find the info in the Thanos documentation.
Jul 5 2022
Please also merge this into the staging branch and notify the sysadm IRC room when it's pushed; we will need to deploy it manually to clean up the previous services.
Jun 30 2022
rebase
fix the readme name
Jun 29 2022
It seems the Rancher cluster can be updated to any version:
from https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/:
Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher’s Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
It's also confirmed by the suse rke compatibility matrix: https://www.suse.com/assets/EN-Rancherv2.6.4-150422-0151-56.pdf
Jun 28 2022
It will be solved by D7890
make mypy happy
Jun 27 2022
add missing parenthesis
update according to the reviews
- simplify the cache management
- fix the doc strings
Jun 24 2022
looks good (for what it's worth)
It seems confirmed that the issue is on the Python side of the current implementation, so I'm eager to see D7890 land ;)
Jun 22 2022
I reverse-engineered the py4j communication protocol, so next time it hangs we should be able to tell whether the issue is on the gateway server side or on the Python side:
- Create a named pipe
mkfifo /tmp/test
chmod a+w /tmp/test
tail -F /tmp/test
- Query the graph
ss -ltp | grep java
<get the port number>
telnet localhost <port number>
c
o0
get_handler
s/tmp/test
e
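The telnet session above just speaks py4j's plain-text protocol by hand; the same probe can be scripted. A minimal sketch, assuming the object id o0, the method name get_handler, and the string argument from the session above (the port still has to be looked up with ss as described; probe_gateway is a hypothetical helper, not part of the actual tooling):

```python
import socket


def build_py4j_call(target: str, method: str, *args: str) -> str:
    """Frame a py4j call the way the manual telnet session does:
    'c', the target object id, the method name, one line per
    argument, terminated by 'e'."""
    lines = ["c", target, method, *args, "e"]
    return "\n".join(lines) + "\n"


def probe_gateway(port: int) -> str:
    """Send the same call as the telnet session and return the raw reply.

    If the reply comes back, the gateway server side is alive and the
    hang is more likely on the Python side.
    """
    payload = build_py4j_call("o0", "get_handler", "s/tmp/test")
    with socket.create_connection(("localhost", port)) as sock:
        sock.sendall(payload.encode("utf-8"))
        return sock.recv(4096).decode("utf-8")
```
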
Looks like something is wrong in the operator state management.
From what I found on the internet, it could be related to the cert-manager version, but that should already be fixed. For example: https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/issues/315
(The current cert-manager version in the cluster is 1.8.0)
Jun 20 2022
Jun 17 2022
A dozen clients running on provenance-client01 are using the multiplexer configuration.
It seems to work correctly.
Jun 16 2022
rebase
Update according to the reviews
- Add and fix license headers
- Ensure the _revisions_count variable was computed before returning its value
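The second bullet describes a compute-before-return guard on a cached value. A minimal sketch of that pattern, with an illustrative class and data (the names besides _revisions_count are hypothetical, not the actual implementation):

```python
class RevisionsProvenance:
    """Illustrative holder that computes its revision count lazily."""

    def __init__(self, revisions):
        self._revisions = revisions
        self._revisions_count = None  # not computed yet

    def revisions_count(self) -> int:
        # Ensure _revisions_count was computed before returning it,
        # so callers never see the uninitialized None sentinel.
        if self._revisions_count is None:
            self._revisions_count = len(self._revisions)
        return self._revisions_count
```
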
Jun 15 2022
\o/ well done
I've deliberately created the diff with the 3 commits inside; I just forgot to update the title ;)