These credentials are used to connect through the public access.
I initially tried to turn the script into a management script (add/list/remove),
but fell into a rabbit hole, so I eventually gave up.
Differential D6986
kafka: add a script to create the kafka credentials

Authored by vsellier on Jan 20 2022, 10:54 AM
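The body of the created scripts is truncated in the catalog diff below (only the first few characters of the `content` parameter are shown). As a rough sketch only: a credential-creation script like this one typically wraps the stock `kafka-configs.sh` tool to register SCRAM credentials via ZooKeeper. The host name, port, mechanism, and user/password below are assumptions, not the actual script content; the command line is built and echoed here rather than executed so the sketch stays side-effect free:

```shell
#!/bin/bash
# Hypothetical sketch of a create_kafka_users script; the real script body is
# truncated in the diff, so the ZooKeeper port, SCRAM mechanism, and example
# user are assumptions based on the standard Kafka tooling.
set -e

zookeepers=kafka1.internal.softwareheritage.org:2181  # assumed host:port

# Build the kafka-configs.sh invocation that creates (or updates) SCRAM
# credentials for a user. Echoed instead of executed for inspection.
scram_cmd() {
    local user="$1" password="$2"
    echo kafka-configs.sh --zookeeper "$zookeepers" --alter \
        --add-config "SCRAM-SHA-256=[password=${password}]" \
        --entity-type users --entity-name "$user"
}

scram_cmd swh-reader example-password
```

In a real script the command would be run directly (without the `echo`), once per user to provision.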
diff origin/production/getty.internal.softwareheritage.org current/getty.internal.softwareheritage.org
*******************************************
+ File[/etc/default/prometheus-kafka-consumer-group-exporter/rocquencourt_staging] =>
   parameters =>
     "content": "# prometheus-kafka-consumer-group-exporter (For cluster rocquenc...
     "ensure": "present",
     "group": "root",
     "mode": "0644",
     "notify": "Service[prometheus-kafka-consumer-group-exporter@rocquencourt_sta...
     "owner": "root"
*******************************************
+ File[/usr/local/sbin/create_kafka_users_rocquencourt.sh] =>
   parameters =>
     "content": "#!/bin/bash\n\nset -e\n\nzookeepers=kafka1.internal.softwareheri...
     "ensure": "present",
     "group": "root",
     "mode": "0700",
     "owner": "root"
*******************************************
+ File[/usr/local/sbin/create_kafka_users_rocquencourt_staging.sh] =>
   parameters =>
     "content": "#!/bin/bash\n\nset -e\n\nzookeepers=journal1.internal.staging.sw...
     "ensure": "present",
     "group": "root",
     "mode": "0700",
     "owner": "root"
*******************************************
+ Profile::Prometheus::Export_scrape_config[kafka-consumer-group-rocquencourt_staging] =>
   parameters =>
     "job": "kafka-consumer-group",
     "labels": {
       "cluster": "rocquencourt_staging"
     },
     "target": "192.168.100.102:9209"
*******************************************
+ Service[prometheus-kafka-consumer-group-exporter@rocquencourt_staging] =>
   parameters =>
     "enable": true,
     "ensure": "running"
*******************************************
*** End octocatalog-diff on getty.internal.softwareheritage.org
diff origin/production/storage1.internal.staging.swh.network current/storage1.internal.staging.swh.network
*******************************************
- File[/etc/default/prometheus-kafka-consumer-group-exporter/rocquencourt_staging]
*******************************************
- File[/etc/default/prometheus-kafka-consumer-group-exporter]
*******************************************
- Package[prometheus-kafka-consumer-group-exporter]
*******************************************
- Profile::Prometheus::Export_scrape_config[kafka-consumer-group-rocquencourt_staging]
*******************************************
- Service[prometheus-kafka-consumer-group-exporter@rocquencourt_staging]
*******************************************
*** End octocatalog-diff on storage1.internal.staging.swh.network

kafka1:
*** Running octocatalog-diff on host kafka1.internal.softwareheritage.org
I, [2022-01-20T19:18:01.505916 #2246748]  INFO -- : Catalogs compiled for kafka1.internal.softwareheritage.org
I, [2022-01-20T19:18:01.804223 #2246748]  INFO -- : Diffs computed for kafka1.internal.softwareheritage.org
I, [2022-01-20T19:18:01.804265 #2246748]  INFO -- : No differences
*** End octocatalog-diff on kafka1.internal.softwareheritage.org
Event Timeline

Comment:
Could we create this script, for each kafka cluster, on the kafka management host rather than on the individual brokers, so that all the management lives in a single place? We should also generate the full broker _list_ rather than point at a single broker, which may or may not be down.

Comment:
Yes, it could; that would be cleaner. I will update it that way.
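Generating the full broker list, as suggested in the review, could be sketched in plain bash like this. The broker host names, domain, and port are assumptions for illustration, not the actual cluster inventory:

```shell
#!/bin/bash
# Sketch of building a full broker list instead of hard-coding one broker,
# per the review suggestion. Host names, domain, and port are assumptions.
set -e

brokers=(kafka1 kafka2 kafka3 kafka4)
domain=internal.softwareheritage.org
port=9092

broker_list=""
for b in "${brokers[@]}"; do
    broker_list+="${b}.${domain}:${port},"
done
broker_list="${broker_list%,}"  # drop the trailing comma

echo "$broker_list"
```

The resulting comma-separated list can then be handed to the Kafka client tools (e.g. via `--bootstrap-server`), so a single unavailable broker does not break the script.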