diff --git a/docker/README.rst b/docker/README.rst
--- a/docker/README.rst
+++ b/docker/README.rst
@@ -627,7 +627,10 @@
 This can be used like::
 
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.storage-mirror.yml up -d
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.storage-mirror.yml \
+      up -d
   [...]
 
 Compared to the original compose file, this will:
@@ -658,16 +661,16 @@
 
 ::
 
-  (swh)$ docker-compose \
-      -f docker-compose.yml \
-      -f docker-compose.storage-mirror.yml \
-      -f docker-compose.storage-mirror.override.yml \
-      run \
-      swh-journal-backfiller \
-      snapshot \
-      --start-object 000000 \
-      --end-object 000001 \
-      --dry-run
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.storage-mirror.yml \
+      -f docker-compose.storage-mirror.override.yml \
+      run \
+      swh-journal-backfiller \
+      snapshot \
+      --start-object 000000 \
+      --end-object 000001 \
+      --dry-run
 
 Cassandra
 ^^^^^^^^^
@@ -677,7 +680,10 @@
 This can be used like::
 
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.cassandra.yml up -d
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.cassandra.yml \
+      up -d
   [...]
 
@@ -693,7 +699,10 @@
 Instead, you can enable swh-search, which is based on ElasticSearch and much
 more efficient, like this::
 
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.search.yml up -d
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.search.yml \
+      up -d
   [...]
 
 Efficient counters
@@ -710,7 +719,10 @@
 So we have an alternative based on Redis' HyperLogLog feature, which you can
 test with::
 
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.counters.yml up -d
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.counters.yml \
+      up -d
   [...]
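The "Efficient counters" hunk above mentions Redis' HyperLogLog feature. As a rough illustration of why that approach is cheap (a toy estimator under simplified assumptions, not the actual swh-counters or Redis implementation):

```python
import hashlib

def hll_estimate(items, p=10):
    """Toy HyperLogLog: approximate the number of *distinct* items
    using 2**p small registers instead of remembering every item."""
    m = 1 << p
    registers = [0] * m
    for item in items:
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
        idx = h >> (64 - p)                      # first p bits pick a register
        rest = h & ((1 << (64 - p)) - 1)         # remaining 64 - p bits
        rank = (64 - p) - rest.bit_length() + 1  # leading zeros + 1
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)             # bias-correction constant
    return round(alpha * m * m / sum(2.0 ** -r for r in registers))

# Duplicates do not inflate the estimate: only register maxima are kept.
urls = [f"https://example.com/repo-{i}" for i in range(20000)]
estimate = hll_estimate(urls + urls)  # 40000 items, 20000 distinct
```

Memory stays at `2**p` registers regardless of how many objects are counted, at the cost of a few percent of error — the trade-off the paragraph above alludes to.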
@@ -725,7 +737,9 @@
 You can use it with::
 
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.graph.yml up -d
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.graph.yml up -d
 
 On the first start, it will run some precomputation based on all objects
 already in your local SWH instance; so it may take a long time if you loaded many
@@ -740,9 +754,18 @@
 Then, you need to explicitly request recomputing the graph before restarts if
 you want to update it::
 
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.graph.yml run swh-graph update
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.graph.yml stop swh-graph
-  ~/swh-environment/docker$ docker-compose -f docker-compose.yml -f docker-compose.graph.yml up swh-graph -d
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.graph.yml \
+      run swh-graph update
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.graph.yml \
+      stop swh-graph
+  ~/swh-environment/docker$ docker-compose \
+      -f docker-compose.yml \
+      -f docker-compose.graph.yml \
+      up -d swh-graph
 
 
 Keycloak
@@ -764,6 +787,50 @@
 
 at http://localhost:8025/.
 
 
+Kafka
+^^^^^
+
+Consuming topics from the host
+""""""""""""""""""""""""""""""
+
+As mentioned above, topics from the Kafka server running in the docker-compose
+environment can be consumed from the host by using ``127.0.0.1:5092`` as the
+broker URL.
+
+Resetting offsets
+"""""""""""""""""
+
+It is also possible to reset a consumer group's offsets using the following
+command::
+
+  ~/swh-environment/docker$ docker-compose \
+      run kafka kafka-consumer-groups.sh \
+      --bootstrap-server kafka:9092 \
+      --group <group name> \
+      --all-topics \
+      --reset-offsets --to-earliest --execute
+  [...]
+
+You can use ``--topic <topic name>`` instead of ``--all-topics`` to specify a
+topic.
+
+Getting information on consumers
+""""""""""""""""""""""""""""""""
+
+You can get information on consumer groups::
+
+  ~/swh-environment/docker$ docker-compose \
+      run kafka kafka-consumer-groups.sh \
+      --bootstrap-server kafka:9092 \
+      --describe --members --all-groups
+  [...]
+
+Or the stored offsets for all groups (or a given one)::
+
+  ~/swh-environment/docker$ docker-compose \
+      run kafka kafka-consumer-groups.sh \
+      --bootstrap-server kafka:9092 \
+      --describe --offsets --all-groups
+  [...]
+
+
 Using Sentry
 ------------
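The ``--describe --offsets`` output added above pairs, for each partition, the group's committed offset (``CURRENT-OFFSET``) with the head of the log (``LOG-END-OFFSET``); their difference is the group's lag. The arithmetic can be sketched as follows (a hypothetical helper for illustration, not part of the Kafka tooling; it assumes partitions start at offset 0):

```python
def consumer_lag(committed, log_end):
    """Per-partition lag of a consumer group: messages sitting between
    the committed offset (CURRENT-OFFSET) and the head of the log
    (LOG-END-OFFSET). A partition with no committed offset counts from
    0, i.e. we assume nothing has been truncated from the log."""
    return {part: end - committed.get(part, 0)
            for part, end in log_end.items()}

lag = consumer_lag({0: 120, 1: 80}, {0: 125, 1: 80, 2: 10})
# partition 0 trails by 5, partition 1 is caught up,
# partition 2 was never consumed at all
```

This is the same number the tool prints in its ``LAG`` column; watching it shrink is the usual way to check that a mirror or backfill consumer is keeping up.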