
kafka: add a script to create the kafka credentials
ClosedPublic

Authored by vsellier on Jan 20 2022, 10:54 AM.

Details

Summary

These credentials are used to connect to Kafka through the public access.

I initially tried to convert the script into a management script (add/list/remove),
but I fell into a rabbit hole, so I finally gave up.
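As a rough illustration, a credential-creation script of this kind can look like the sketch below. This is an assumption, not the actual `create_kafka_users_*` scripts (whose content is truncated in the catalog diff in the test plan): the zookeeper address, user name, and SCRAM mechanism are placeholders, and the `kafka-configs.sh` call is echoed rather than executed so the sketch is side-effect free.

```shell
#!/bin/bash
set -e

# Hypothetical zookeeper address; the real one is truncated in the
# catalog diff.
zookeepers=zookeeper.example.org:2181

# Create a SCRAM credential for one user. kafka-configs.sh (Kafka 2.x)
# can store SCRAM credentials directly via zookeeper; the command is
# echoed here instead of run, to keep the sketch side-effect free.
create_kafka_user() {
    local username=$1
    local password
    password=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
    echo kafka-configs.sh --zookeeper "$zookeepers" --alter \
        --add-config "SCRAM-SHA-256=[password=${password}]" \
        --entity-type users --entity-name "$username"
}

create_kafka_user swh-example-reader
```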

Test Plan
  • getty
diff origin/production/getty.internal.softwareheritage.org current/getty.internal.softwareheritage.org
*******************************************
+ File[/etc/default/prometheus-kafka-consumer-group-exporter/rocquencourt_staging] =>
   parameters =>
     "content": "# prometheus-kafka-consumer-group-exporter (For cluster rocquenc...
     "ensure": "present",
     "group": "root",
     "mode": "0644",
     "notify": "Service[prometheus-kafka-consumer-group-exporter@rocquencourt_sta...
     "owner": "root"
*******************************************
+ File[/usr/local/sbin/create_kafka_users_rocquencourt.sh] =>
   parameters =>
     "content": "#!/bin/bash\n\nset -e\n\nzookeepers=kafka1.internal.softwareheri...
     "ensure": "present",
     "group": "root",
     "mode": "0700",
     "owner": "root"
*******************************************
+ File[/usr/local/sbin/create_kafka_users_rocquencourt_staging.sh] =>
   parameters =>
     "content": "#!/bin/bash\n\nset -e\n\nzookeepers=journal1.internal.staging.sw...
     "ensure": "present",
     "group": "root",
     "mode": "0700",
     "owner": "root"
*******************************************
+ Profile::Prometheus::Export_scrape_config[kafka-consumer-group-rocquencourt_staging] =>
   parameters =>
     "job": "kafka-consumer-group",
     "labels": {
       "cluster": "rocquencourt_staging"
     },
     "target": "192.168.100.102:9209"
*******************************************
+ Service[prometheus-kafka-consumer-group-exporter@rocquencourt_staging] =>
   parameters =>
     "enable": true,
     "ensure": "running"
*******************************************
*** End octocatalog-diff on getty.internal.softwareheritage.org
  • storage1
diff origin/production/storage1.internal.staging.swh.network current/storage1.internal.staging.swh.network
*******************************************
- File[/etc/default/prometheus-kafka-consumer-group-exporter/rocquencourt_staging]
*******************************************
- File[/etc/default/prometheus-kafka-consumer-group-exporter]
*******************************************
- Package[prometheus-kafka-consumer-group-exporter]
*******************************************
- Profile::Prometheus::Export_scrape_config[kafka-consumer-group-rocquencourt_staging]
*******************************************
- Service[prometheus-kafka-consumer-group-exporter@rocquencourt_staging]
*******************************************
*** End octocatalog-diff on storage1.internal.staging.swh.network

  • kafka1

*** Running octocatalog-diff on host kafka1.internal.softwareheritage.org
I, [2022-01-20T19:18:01.505916 #2246748]  INFO -- : Catalogs compiled for kafka1.internal.softwareheritage.org
I, [2022-01-20T19:18:01.804223 #2246748]  INFO -- : Diffs computed for kafka1.internal.softwareheritage.org
I, [2022-01-20T19:18:01.804265 #2246748]  INFO -- : No differences
*** End octocatalog-diff on kafka1.internal.softwareheritage.org

Diff Detail

Repository
rSPSITE puppet-swh-site
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

Could we create this script, for each kafka cluster, on the kafka management host rather than on individual brokers, so we can have all the management in a single place?

We should generate the full broker _list_ rather than point at a single broker which may or may not be down.

Yes, it could; it will be cleaner. I will update it this way.
I took a shortcut, assuming we would know which broker is up, to avoid having to build the connection string :)
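Building the full connection string is cheap in bash. A minimal sketch (hostnames and port are hypothetical, not taken from the actual scripts):

```shell
#!/bin/bash
set -e

# Join broker hostnames into the "host:port,host:port,..." form that
# Kafka clients accept as a bootstrap-server list, so a single down
# broker does not break connections.
build_bootstrap_servers() {
    local port=$1; shift
    local out=""
    local broker
    for broker in "$@"; do
        # Prepend a comma only when $out is already non-empty.
        out="${out:+$out,}${broker}:${port}"
    done
    printf '%s\n' "$out"
}

brokers=(kafka1.example.org kafka2.example.org kafka3.example.org)
build_bootstrap_servers 9092 "${brokers[@]}"
```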

  • Install the scripts for all the environments on getty, the journal orchestrator;
  • as the cluster configurations are now global, this impacts the consumer-group exporter. It makes sense to move it from storage1 to getty to also centralize this part (FW rules will need to be adapted accordingly).
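For reference, the `Export_scrape_config` resource in the getty diff roughly corresponds to a Prometheus scrape entry of the following shape. The exact rendering produced by the puppet profile is an assumption; only the job name, label, and target values come from the octocatalog-diff output above.

```yaml
# Hypothetical rendering of the exported scrape config.
- job_name: kafka-consumer-group
  static_configs:
    - targets:
        - 192.168.100.102:9209
      labels:
        cluster: rocquencourt_staging
```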
vsellier edited the test plan for this revision.

Add the puppet header to the script.

ardumont added a subscriber: ardumont.

lgtm

one typo on getty.

data/subnets/vagrant.yaml:150
This revision is now accepted and ready to land. Jan 21 2022, 11:06 AM

fix the typo on the getty hostname