And now giverny refuses to boot again. Pausing this part for now, as I cannot investigate further yet.
Nov 19 2021
Nov 18 2021
- journalbeat and filebeat are migrated on all the nodes
- after the lag recovery and the fix of the closed indexes script, everything looks good
- Checked whether mongo services are still running; there are some:
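The actual check used that day is not recorded here; a minimal sketch of the idea, assuming `pgrep` is available on the nodes:

```shell
# Sketch: list any leftover mongod processes on a node.
# (The exact command used in the log is not recorded; this is an assumption.)
check_mongo() {
    # pgrep -a prints the pid and full command line of matching processes
    if pgrep -a mongod; then
        echo "WARNING - mongod processes still running"
    else
        echo "OK - no mongod processes running"
    fi
}
check_mongo
```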
Nov 17 2021
- The 3 esnodes are updated to version 7.15.2:
  for each node:
    puppet agent --disable
  for each node:
    apt update
    apt dist-upgrade
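The per-node steps above could be scripted roughly as follows. This is only a sketch: the node names and the ssh access pattern are assumptions, and the loop prints the commands (dry run) instead of executing them, so it can be reviewed first.

```shell
# Dry-run sketch of the per-node upgrade loop described above.
# To execute for real, replace the echo with: ssh "$1" "${@:2}" (assumption:
# ssh access as root to the nodes).
run() { echo "would run: $*"; }

# First pass: disable puppet everywhere so it does not interfere mid-upgrade.
for node in esnode1 esnode2 esnode3; do
    run "$node" puppet agent --disable
done

# Second pass: upgrade the packages on each node.
for node in esnode1 esnode2 esnode3; do
    run "$node" apt update
    run "$node" apt dist-upgrade
done
```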
Nov 10 2021
For the record, the upgrade of esnode[1-3] to bullseye is ok (in vagrant).
The upgrade is done without errors, puppet is green. A reinstall from scratch is also working well without warning.
The diffs preparing the migration of filebeat and journalbeat are ready. If everything looks good after the review, the upgrade will be performed at the beginning of week 46.
Nov 9 2021
Everything looks good with logstash 1:7.15.1
The monitoring of the logstash errors is still working as previously:
root@logstash0:/usr/lib/nagios/plugins/swh# ./check_logstash_errors.sh
OK - No errors detected
after closing the current system index:
root@logstash0:/usr/lib/nagios/plugins/swh# ./check_logstash_errors.sh
CRITICAL - Logstash has detected some errors in outputs errors=9 non_retryable_errors=13
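The contents of check_logstash_errors.sh are not shown in the log; a minimal sketch of the same idea, assuming the counters come from the Logstash node stats API (the `bulk_requests.with_errors` and `documents.non_retryable_failures` fields reported for elasticsearch outputs, normally fetched from http://localhost:9600/_node/stats):

```shell
# Sketch in the spirit of check_logstash_errors.sh (the real script is not
# recorded in the log). Takes the node stats JSON as an argument; field names
# are from the elasticsearch output plugin stats, the parsing is a crude grep.
check_errors() {
    stats_json=$1
    errors=$(echo "$stats_json" | grep -o '"with_errors":[0-9]*' | head -1 | cut -d: -f2)
    non_retryable=$(echo "$stats_json" | grep -o '"non_retryable_failures":[0-9]*' | head -1 | cut -d: -f2)
    if [ "${errors:-0}" -gt 0 ] || [ "${non_retryable:-0}" -gt 0 ]; then
        echo "CRITICAL - Logstash has detected some errors in outputs errors=$errors non_retryable_errors=$non_retryable"
        return 2
    fi
    echo "OK - No errors detected"
}

# Example with inline sample data (not real cluster output):
check_errors '{"bulk_requests":{"with_errors":0},"documents":{"non_retryable_failures":0}}'
# prints: OK - No errors detected
```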
To upgrade kibana, upgrading the version looks enough. The migration is automatically done and all the configured elements are still available:
root@esnode1:~# curl -s http://10.168.100.61:9200/_cat/indices\?v=true\&s=index | grep kibana
health status index                            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana-event-log-7.15.1-000001  24Wb0rfUQuqab3Iody3Hrg 1   1   1          0            12.1kb     6kb       <-------- new index
green  open   .kibana-event-log-7.8.0-000001   6IjHICQVS2uX8qBekJLWsw 1   1   2          0            21.4kb     10.7kb
green  open   .kibana_2                        Oh9O6uB1R0-oNPbnhTM8kw 1   1   1928       3            1.5mb      788.4kb
green  open   .kibana_7.15.1_001               5fyk6NMUSE-3P6uhx-HSeg 1   1   1110       35           5.3mb      2.6mb     <-------- new index (automatically migrated from .kibana_2)
green  open   .kibana_task_manager_1           vINZFVqCSJiDHHFMdYGwTA 1   1   5          0            32kb       16kb
green  open   .kibana_task_manager_7.15.1_001  pYeR_zFdTZO_jqxYS1DB9g 1   1   16         369          527kb      277.5kb   <-------- new index
root@esnode1:~# curl -s http://10.168.100.61:9200/_cat/aliases\?v=true\&s=index | grep kibana
alias                        index                            filter routing.index routing.search is_write_index
.kibana-event-log-7.15.1     .kibana-event-log-7.15.1-000001  -      -             -              true
.kibana-event-log-7.8.0      .kibana-event-log-7.8.0-000001   -      -             -              true
.kibana                      .kibana_7.15.1_001               -      -             -              -
.kibana_7.15.1               .kibana_7.15.1_001               -      -             -              -
.kibana_task_manager         .kibana_task_manager_7.15.1_001  -      -             -              -
.kibana_task_manager_7.15.1  .kibana_task_manager_7.15.1_001  -      -             -              -
Nov 8 2021
The migration of ES can be performed with:
- elasticsearch migration
In order to validate the kibana upgrade, the kibana configuration can be copied locally with these commands:
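The commands themselves are not recorded above. One possible approach (an assumption, not necessarily what was used) is Elasticsearch's reindex-from-remote API, pointing the local vagrant cluster at the production .kibana index; the remote host must also be listed in the local cluster's reindex.remote.whitelist setting:

```
POST /_reindex
{
  "source": {
    "remote": { "host": "http://10.168.100.61:9200" },
    "index": ".kibana"
  },
  "dest": { "index": ".kibana" }
}
```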
The preparation of the migration through the vagrant environment is in progress.
Thanks for the info.
For the record, the entry point of the upgrade process: https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elastic-stack.html
Nov 5 2021
FWIW the main blocker for upgrading journalbeat is a change in the target mapping, which will need some adaptations in our log routing (between systemlogs and swh_workers), as well as, well, an updated mapping on the target indexes!