May 18 2022
May 17 2022
- rebase
- move the parameters after 'icinga_plugins'
- add an environment parameter
Here are the results of the queries.
You can directly paste the JSON in the search profiler to see the result.
(Be careful, some of them are quite large.)
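For reference, a query can also be profiled outside of Kibana by adding "profile": true to the search body; the index name and query below are only placeholders, not one of the attached queries:

curl -s -H 'Content-Type: application/json' http://localhost:9200/some-index/_search -d '{"profile": true, "query": {"match": {"url": "gitlab"}}}'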
May 16 2022
The file /var/lib/journalbeat/registry looks corrupted:
on worker10.euwest:
root@worker10:/var/lib/journalbeat# cat registry
<?xml version="1.0" encoding="utf-8"?>
<GoalState xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="goalstate10.xsd">
  <Version>2012-11-30</Version>
  <Incarnation>1</Incarnation>
  <Machine>
    <ExpectedState>Started</ExpectedState>
    <StopRolesDeadlineHint>3
on worker09.euwest:
root@worker09:/var/lib/journalbeat# cat registry
update_time: 2022-05-16T07:11:29.680690647Z
journal_entries:
- path: LOCAL_SYSTEM_JOURNAL
  cursor: s=1b5676c17e22450b80579b9caf065703;i=659f65c;b=97b0842367c749299a4a12ec839f1c3b;m=5b66c4ba4c0;t=5df1bbb86b72f;x=8e43c09dfc1a706e
  realtime_timestamp: 1652685086832431
  monotonic_timestamp: 6281059083456
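If the corrupted registry on worker10 needs to be reset, a possible (untested) approach, assuming the unit is simply called journalbeat and accepting that the cursor position will be lost:

systemctl stop journalbeat
mv /var/lib/journalbeat/registry /var/lib/journalbeat/registry.corrupted
systemctl start journalbeat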
May 13 2022
LGTM, just 2 non-blocking questions inline
May 12 2022
May 11 2022
Credentials created following https://docs.softwareheritage.org/sysadm/mirror-operations/onboard.html#how-to-create-the-objstorage-credentials
May 10 2022
great, thanks
- rebase
- update according to the review feedback
May 9 2022
May 6 2022
The cluster is declared and the node is provisioning.
- fix the cloud-init / puppet concurrency after the VMs startup
- remove the wrong vmid assigned to the new cluster nodes
- refresh the staging.tfstate file after applying the new configuration (sketch below)
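As a sketch only (the working directory is an assumption), the state refresh could look like:

cd <staging terraform directory>
terraform refresh    # or, on recent terraform versions: terraform apply -refresh-only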
May 5 2022
May 4 2022
Align worker0 and worker1 qemu arguments to match the real VM configuration
- fix wrong references to the elastic worker cluster
- rename nodes from rancher-node-internX to rancher-node-internshipX
The backup is in sync; everything is back to normal.
Now let's restart the synchronization:
root@backup01:~# zfs destroy -r data/sync/dali/postgresql
root@backup01:~# systemctl reset-failed syncoid-dali-postgresql.service
root@backup01:~# systemctl restart syncoid-dali-postgresql.service
and the same for data/sync/dali/postgresql_wall
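i.e., assuming the wal dataset has a matching syncoid unit (unit name not checked here), something like:

zfs destroy -r data/sync/dali/postgresql_wall
systemctl reset-failed syncoid-dali-postgresql_wall.service
systemctl restart syncoid-dali-postgresql_wall.service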
The disk needed to be detached and reattached in order to be resized.
It seems zfs didn't detect the pool after the reboot.
A reimport did the trick:
root@backup01:~# zpool import data
root@backup01:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data   255G   188G  66.9G        -         -     9%    73%  1.00x  ONLINE  -
(The pool is still correctly detected after a reboot)
The disk is sized at 200G; according to the Azure portal, it can be resized to 256G without any additional cost:
"Enter the size of the disk you would like to create. You will be charged the same rate for your provisioned disk, regardless of how much of the disk space is being used. For example, a 200 GiB disk is provisioned on a 256 GiB disk, so you would be billed for the 256 GiB provisioned."
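For the record, the resize and the pool expansion could be done roughly like this (resource group, disk name and device are placeholders, and the disk has to be detached or the VM deallocated first, as noted above):

az disk update --resource-group <rg> --name <data-disk> --size-gb 256
zpool set autoexpand=on data
zpool online -e data /dev/sdX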
May 3 2022
- rebase
- test the prometheus exporter file creation
May 2 2022
Apr 21 2022
Good news, it looks like there are no more issues with the inter-node communication with rancher 2.6.4 and bullseye.
gg ;)
Apr 20 2022
Rebase
Thanks, we will look later at how to have a better zpool initialization.
There are no occurrences of this error in the logs and the consumers don't have any lag, so yes, I guess it is.
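For context, if these are the kafka journal consumers, the lag can be double-checked with the standard kafka tooling (broker and group names below are placeholders):

kafka-consumer-groups.sh --bootstrap-server broker1:9092 --describe --group <consumer-group>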