swh-indexer-journal-client_1 | WARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.
swh-indexer-journal-client_1 | You should consider upgrading via the '/srv/softwareheritage/venv/bin/python3 -m pip install --upgrade pip' command.
swh-indexer-journal-client_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/swh/journal/client.py", line 37, in get_journal_client
swh-indexer-journal-client_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/swh/journal/client.py", line 174, in __init__
swh-indexer-journal-client_1 | for topic in self.consumer.list_topics(timeout=10).topics.keys()
swh-indexer-journal-client_1 | cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
swh-indexer-journal-client_1 | Using pip from /srv/softwareheritage/venv/bin/pip
swh-vault_1 | kombu 4.6.11
swh-vault_1 | launchpadlib 1.10.13
swh-vault_1 | lazr.restfulclient 0.14.3
swh-vault_1 | lazr.uri 1.0.5
swh-vault_1 | lxml 4.5.2
swh-vault_1 | MarkupSafe 1.1.1
swh-vault_1 | msgpack 1.0.0
swh-vault_1 | multidict 4.7.6
swh-vault_1 | mypy-extensions 0.4.3
swh-vault_1 | oauthlib 3.1.0
swh-vault_1 | patool 1.12
swh-vault_1 | pbr 5.5.0
swh-vault_1 | pika 1.1.0
swh-vault_1 | pip 20.2.2
swh-vault_1 | pkginfo 1.5.0.1
swh-vault_1 | prometheus-client 0.8.0
swh-vault_1 | psutil 5.7.2
swh-vault_1 | psycopg2 2.8.5
swh-vault_1 | pyasn1 0.4.8
swh-vault_1 | pybadges 2.2.1
swh-vault_1 | pycparser 2.20
swh-vault_1 | Pygments 2.6.1
swh-vault_1 | PyLD 2.0.3
swh-vault_1 | python-dateutil 2.8.1
swh-vault_1 | python-debian 0.1.37
swh-vault_1 | python-hglib 2.6.1
swh-vault_1 | python-jose 3.2.0
swh-vault_1 | python-keycloak 0.22.0
swh-vault_1 | python-magic 0.4.18
swh-vault_1 | python-memcached 1.59
swh-vault_1 | python-mimeparse 1.6.0
swh-vault_1 | pytz 2020.1
swh-vault_1 | PyYAML 5.3.1
swh-vault_1 | requests 2.24.0
swh-vault_1 | retrying 1.3.3
swh-vault_1 | rsa 4.6
swh-vault_1 | SecretStorage 3.1.2
swh-vault_1 | sentry-sdk 0.17.3
swh-vault_1 | setuptools 50.1.0
swh-vault_1 | six 1.15.0
swh-vault_1 | sortedcontainers 2.2.2
swh-vault_1 | soupsieve 2.0.1
swh-vault_1 | SQLAlchemy 1.3.19
swh-vault_1 | sqlitedict 1.6.0
swh-vault_1 | sqlparse 0.3.1
swh-vault_1 | subvertpy 0.10.1
swh-vault_1 | swh.core 0.2.3
swh-vault_1 | swh.deposit 0.0.90
swh-vault_1 | swh.indexer 0.2.1
swh-vault_1 | swh.journal 0.4.2
swh-vault_1 | swh.lister 0.1.2
swh-vault_1 | swh.loader.core 0.9.1
swh-vault_1 | swh.loader.git 0.3.6
swh-vault_1 | swh.loader.mercurial 0.0.33
swh-vault_1 | swh.loader.svn 0.3.3
swh-vault_1 | swh.model 0.6.6
swh-vault_1 | swh.objstorage 0.1.1
swh-vault_1 | swh.scheduler 0.5.2
swh-vault_1 | swh.search 0.2.2
swh-vault_1 | swh.storage 0.13.3
swh-vault_1 | swh.vault 0.0.34
swh-vault_1 | swh.web 0.0.254
swh-vault_1 | tenacity 6.2.0
swh-vault_1 | testresources 2.0.1
swh-vault_1 | typing-extensions 3.7.4.3
swh-vault_1 | urllib3 1.25.10
swh-vault_1 | vcversioner 2.16.0.0
swh-vault_1 | vine 1.3.0
swh-vault_1 | wadllib 1.3.4
swh-vault_1 | Werkzeug 1.0.1
swh-vault_1 | wheel 0.35.1
swh-vault_1 | wrapt 1.12.1
swh-vault_1 | xmltodict 0.12.0
swh-vault_1 | yarl 1.5.1
swh-vault_1 | zipp 3.1.0
swh-vault_1 | WARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.
swh-vault_1 | You should consider upgrading via the '/srv/softwareheritage/venv/bin/python3 -m pip install --upgrade pip' command.
swh-vault_1 | Waiting for postgresql to start
swh-vault_1 | wait-for-it: waiting for swh-vault-db:5432 without a timeout
swh-vault_1 | wait-for-it: swh-vault-db:5432 is available after 0 seconds
swh-web_1 | Using pip from /srv/softwareheritage/venv/bin/pip
swh-web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1032, in _send_output
swh-web_1 | self.send(msg)
swh-web_1 | File "/usr/local/lib/python3.7/http/client.py", line 972, in send
swh-web_1 | self.connect()
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/urllib3/connection.py", line 187, in connect
swh-web_1 | conn = self._new_conn()
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/urllib3/connection.py", line 172, in _new_conn
swh-web_1 | self, "Failed to establish a new connection: %s" % e
swh-web_1 | urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7efeeadfa250>: Failed to establish a new connection: [Errno 111] Connection refused
swh-web_1 |
swh-web_1 | During handling of the above exception, another exception occurred:
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment
swh-web_1 | raise MaxRetryError(_pool, url, error or ResponseError(cause))
swh-web_1 | urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='swh-storage', port=5002): Max retries exceeded with url: /stat/counters (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efeeadfa250>: Failed to establish a new connection: [Errno 111] Connection refused'))
swh-web_1 |
swh-web_1 | During handling of the above exception, another exception occurred:
swh-web_1 | requests.exceptions.ConnectionError: HTTPConnectionPool(host='swh-storage', port=5002): Max retries exceeded with url: /stat/counters (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efeeadfa250>: Failed to establish a new connection: [Errno 111] Connection refused'))
swh-web_1 |
swh-web_1 | During handling of the above exception, another exception occurred:
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 90, in response_for_exception
swh-web_1 | File "/srv/softwareheritage/venv/lib/python3.7/site-packages/swh/core/api/__init__.py", line 260, in raw_verb
swh-web_1 | raise self.api_exception(e)
swh-web_1 | swh.storage.exc.StorageAPIError: An unexpected error occurred in the api backend: HTTPConnectionPool(host='swh-storage', port=5002): Max retries exceeded with url: /stat/counters (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efeeadfa250>: Failed to establish a new connection: [Errno 111] Connection refused'))
swh-scheduler-listener_1 | WARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.
swh-scheduler-listener_1 | You should consider upgrading via the '/srv/softwareheritage/venv/bin/python3 -m pip install --upgrade pip' command.
swh-scheduler-listener_1 | Waiting for postgresql to start
swh-scheduler-listener_1 | wait-for-it: waiting for swh-scheduler-db:5432 without a timeout
swh-scheduler-listener_1 | wait-for-it: swh-scheduler-db:5432 is available after 0 seconds
swh-scheduler-listener_1 | Using pip from /srv/softwareheritage/venv/bin/pip
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for epoch_end"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Make epoch_end the same as epoch"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Move region to single row"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch from annotation table"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_epoch_end_epoch on annotation table"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch_epoch_end from annotation table"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for alert_id on annotation table"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/etc/grafana/provisioning/plugins error="open /etc/grafana/provisioning/plugins: no such file or directory"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=eror msg="Can't read alert notification provisioning files from directory" logger=provisioning.notifiers path=/etc/grafana/provisioning/notifiers error="open /etc/grafana/provisioning/notifiers: no such file or directory"
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=warn msg="[Deprecated] the dashboard provisioning config is outdated. please upgrade" logger=provisioning.dashboard filename=/etc/grafana/provisioning/dashboards/all.yaml
grafana_1 | t=2020-10-16T12:13:30+0000 lvl=warn msg="[Deprecated] The folder property is deprecated. Please use path instead." logger=provisioning.dashboard type=file name=default
grafana_1 | t=2020-10-16T12:42:13+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/etc/grafana/provisioning/plugins error="open /etc/grafana/provisioning/plugins: no such file or directory"
grafana_1 | t=2020-10-16T12:42:13+0000 lvl=eror msg="Can't read alert notification provisioning files from directory" logger=provisioning.notifiers path=/etc/grafana/provisioning/notifiers error="open /etc/grafana/provisioning/notifiers: no such file or directory"
grafana_1 | t=2020-10-16T12:42:13+0000 lvl=warn msg="[Deprecated] the dashboard provisioning config is outdated. please upgrade" logger=provisioning.dashboard filename=/etc/grafana/provisioning/dashboards/all.yaml
grafana_1 | t=2020-10-16T12:42:13+0000 lvl=warn msg="[Deprecated] The folder property is deprecated. Please use path instead." logger=provisioning.dashboard type=file name=default
kafka-manager_1 | 2020-10-16 12:13:36,593 - [INFO] o.a.z.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka-manager_1 | 2020-10-16 12:13:36,598 - [INFO] o.a.z.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka-manager_1 | 2020-10-16 12:13:36,648 - [INFO] o.a.z.ClientCnxn - Opening socket connection to server zookeeper/172.20.0.13:2181. Will not attempt to authenticate using SASL (unknown error)
kafka-manager_1 | 2020-10-16 12:42:20,586 - [INFO] o.a.z.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka-manager_1 | 2020-10-16 12:42:20,596 - [INFO] o.a.z.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka-manager_1 | 2020-10-16 12:42:20,644 - [INFO] o.a.z.ClientCnxn - Opening socket connection to server zookeeper/172.20.0.8:2181. Will not attempt to authenticate using SASL (unknown error)
swh-deposit_1 | Apply all migrations: auth, contenttypes, deposit, sessions
swh-deposit_1 | Running migrations:
swh-deposit_1 | Applying contenttypes.0001_initial... OK
swh-deposit_1 | Applying contenttypes.0002_remove_content_type_name... OK
swh-deposit_1 | Applying auth.0001_initial... OK
swh-deposit_1 | Applying auth.0002_alter_permission_name_max_length... OK
swh-deposit_1 | Applying auth.0003_alter_user_email_max_length... OK
swh-deposit_1 | Applying auth.0004_alter_user_username_opts... OK
swh-deposit_1 | Applying auth.0005_alter_user_last_login_null... OK
swh-deposit_1 | Applying auth.0006_require_contenttypes_0002... OK
swh-deposit_1 | Applying auth.0007_alter_validators_add_error_messages... OK
swh-deposit_1 | Applying auth.0008_alter_user_username_max_length... OK
swh-deposit_1 | Applying auth.0009_alter_user_last_name_max_length... OK
swh-deposit_1 | Applying auth.0010_alter_group_name_max_length... OK
swh-deposit_1 | Applying auth.0011_update_proxy_permissions... OK
swh-deposit_1 | Applying deposit.0001_initial... OK
swh-deposit_1 | Applying deposit.0002_depositrequest_archive... OK
swh-deposit_1 | Applying deposit.0003_temporaryarchive... OK
swh-deposit_1 | Applying deposit.0004_delete_temporaryarchive... OK
swh-deposit_1 | Applying deposit.0005_auto_20171019_1436... OK
swh-deposit_1 | Applying deposit.0006_depositclient_url... OK
swh-deposit_1 | Applying deposit.0007_auto_20171129_1609... OK
swh-deposit_1 | Applying deposit.0008_auto_20171130_1513... OK
swh-deposit_1 | Applying deposit.0009_deposit_parent... OK
swh-deposit_1 | Applying deposit.0010_auto_20180110_0953... OK
swh-deposit_1 | Applying deposit.0011_auto_20180115_1510... OK
swh-deposit_1 | Applying deposit.0012_deposit_status_detail... OK
swh-deposit_1 | Applying deposit.0013_depositrequest_raw_metadata... OK
swh-deposit_1 | Applying deposit.0014_auto_20180720_1221... OK
swh-deposit_1 | Applying deposit.0015_depositrequest_typemigration... OK
swh-deposit_1 | Applying deposit.0016_auto_20190507_1408... OK
swh-deposit_1 | Applying deposit.0017_auto_20190925_0906... OK
swh-deposit_1 | Applying deposit.0018_migrate_swhids... OK
swh-deposit_1 | Applying deposit.0019_auto_20200519_1035... OK
swh-deposit_1 | Applying sessions.0001_initial... OK
swh-deposit_1 | + swh-deposit admin user exists test
swh-deposit_1 | User test does not exist.
swh-deposit_1 | + swh-deposit admin user create --username test --password test --provider-url https://softwareheritage.org --domain softwareheritage.org
swh-deposit_1 | collection: test
swh-deposit_1 | Create new collection test
swh-deposit_1 | Collection test created
swh-deposit_1 | Create new user test
swh-deposit_1 | Information registered for user {'id': 1, 'collections': [1], 'username': 'test', 'domain': 'softwareheritage.org', 'provider_url': 'https://softwareheritage.org'}
kafka_1 | Excluding KAFKA_JMX_OPTS from broker config
kafka_1 | [Configuring] 'advertised.port' in '/opt/kafka/config/server.properties'
kafka_1 | Excluding KAFKA_HOME from broker config
kafka_1 | [Configuring] 'advertised.host.name' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'message.max.bytes' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'port' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'advertised.listeners' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'log4j.logger.kafka.authorizer.logger' in '/opt/kafka/config/log4j.properties'
kafka_1 | Excluding KAFKA_VERSION from broker config
kafka_1 | [Configuring] 'listeners' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
kafka_1 | [2020-10-16 12:13:30,820] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka_1 | [2020-10-16 12:13:31,949] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka_1 | [2020-10-16 12:13:32,078] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka_1 | [2020-10-16 12:13:32,106] INFO starting (kafka.server.KafkaServer)
kafka_1 | [2020-10-16 12:13:32,108] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka_1 | [2020-10-16 12:13:32,147] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka_1 | [2020-10-16 12:13:32,157] INFO Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,157] INFO Client environment:host.name=b619b7cc1466 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,157] INFO Client environment:java.version=1.8.0_212 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,158] INFO Client environment:java.vendor=IcedTea (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,158] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,158] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.6.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.6.0.jar:/opt/kafka/bin/../libs/connect-file-2.6.0.jar:/opt/kafka/bin/../libs/connect-json-2.6.0.jar:/opt/kafka/bin/../libs/connect-mirror-2.6.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.6.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.6.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.6.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-core-2.10.2.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.2.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.13-2.10.2.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.28.jar:/opt/kafka/bin/../libs/jersey-common-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/kafka/bin/../libs/jersey-server-2.28.jar:/opt/kafka/bin/../libs/jetty-client-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-http-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-io-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-security-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-server-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-util-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.6.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.13-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.6.0.jar:/opt/kafka/bin/../libs/kafka-tools-2.6.0.jar:/opt/kafka/bin/../libs/kafka_2.13-2.6.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.13-2.6.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.50.Final.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.13-2.1.6.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/opt/kafka/bin/../libs/scala-library-2.13.2.jar:/opt/kafka/bin/../libs/scala-logging_2.13-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.13.2.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/kafka/bin/../libs/zookeeper-3.5.8.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.8.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.4-7.jar (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,160] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,160] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,160] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,161] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,161] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,161] INFO Client environment:os.version=5.7.0-0.bpo.2-amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,161] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,161] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,161] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,161] INFO Client environment:os.memory.free=976MB (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,162] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,162] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2020-10-16 12:13:32,175] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka_1 | [2020-10-16 12:13:32,185] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2020-10-16 12:13:32,193] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka_1 | [2020-10-16 12:13:32,200] INFO Opening socket connection to server zookeeper/172.20.0.13:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2020-10-16 12:13:32,247] INFO Session establishment complete on server zookeeper/172.20.0.13:2181, sessionid = 0x10027a5e91a0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[36mkafka_1 |[0m [2020-10-16 12:13:32,470] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[36mkafka_1 |[0m [2020-10-16 12:13:33,218] INFO Cluster ID= pmAE1HTIScGPF2-yN3FlXw (kafka.server.KafkaServer)
[36mkafka_1 |[0m [2020-10-16 12:13:33,228] WARN No meta.properties file under dir /kafka/kafka-logs-b619b7cc1466/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[36mkafka_1 |[0m [2020-10-16 12:13:33,322] INFO KafkaConfig values:
[36mkafka_1 |[0m [2020-10-16 12:13:33,389] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:33,390] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:33,395] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:33,423] INFO Log directory /kafka/kafka-logs-b619b7cc1466 not found, creating it. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:13:33,448] INFO Loading logs from log dirs ArraySeq(/kafka/kafka-logs-b619b7cc1466)(kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:13:33,451] INFO Attempting recovery for all logs in /kafka/kafka-logs-b619b7cc1466 since no clean shutdown file was found (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:13:33,457] INFO Loaded 0 logs in 9ms. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:13:33,477] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:13:33,481] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:13:34,439] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[36mkafka_1 |[0m [2020-10-16 12:13:34,569] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT)(kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:13:34,611] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:34,616] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:34,647] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:34,647] INFO [ExpirationReaper-1001-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:34,709] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[36mkafka_1 |[0m [2020-10-16 12:13:34,756] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[36mkafka_1 |[0m [2020-10-16 12:13:34,783] INFO Stat of the created znode at /brokers/ids/1001 is: 25,25,1602850414773,1602850414773,1,0,0,72101187571810304,182,0,25 (kafka.zk.KafkaZkClient)
[36mkafka_1 |[0m [2020-10-16 12:13:34,784] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: PLAINTEXT://kafka:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
[36mkafka_1 |[0m [2020-10-16 12:13:35,040] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:35,041] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:35,043] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:35,060] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[36mkafka_1 |[0m [2020-10-16 12:13:35,097] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:13:35,098] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:13:35,114] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[36mkafka_1 |[0m [2020-10-16 12:13:35,119] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 15 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[36mkafka_1 |[0m [2020-10-16 12:13:35,147] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:13:35,188] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:13:35,310] INFO [ExpirationReaper-1001-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:13:35,527] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[36mkafka_1 |[0m [2020-10-16 12:13:35,568] INFO [SocketServer brokerId=1001] Starting socket server acceptors and processors (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:13:35,616] INFO [SocketServer brokerId=1001] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:13:35,851] INFO [SocketServer brokerId=1001] Started socket server acceptors and processors (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:13:35,856] INFO Kafka version: 2.6.0 (org.apache.kafka.common.utils.AppInfoParser)
[36mkafka_1 |[0m [2020-10-16 12:13:35,872] INFO Kafka commitId: 62abe01bee039651 (org.apache.kafka.common.utils.AppInfoParser)
[36mkafka_1 |[0m [2020-10-16 12:13:35,873] INFO Kafka startTimeMs: 1602850415852 (org.apache.kafka.common.utils.AppInfoParser)
[36mkafka_1 |[0m [2020-10-16 12:13:35,874] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
[36mkafka_1 |[0m [2020-10-16 12:23:35,098] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[36mkafka_1 |[0m [2020-10-16 12:33:35,098] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,473] INFO Terminating process due to signal SIGTERM (org.apache.kafka.common.utils.LoggingSignalHandler)
[36mkafka_1 |[0m [2020-10-16 12:34:33,474] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[36mkafka_1 |[0m [2020-10-16 12:34:33,490] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[36mkafka_1 |[0m [2020-10-16 12:34:33,491] INFO [/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[36mkafka_1 |[0m [2020-10-16 12:34:33,491] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[36mkafka_1 |[0m [2020-10-16 12:34:33,491] INFO [SocketServer brokerId=1001] Stopping socket server request processors (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:34:33,496] INFO [SocketServer brokerId=1001] Stopped socket server request processors (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:34:33,496] INFO [data-plane Kafka Request Handler on Broker 1001], shutting down (kafka.server.KafkaRequestHandlerPool)
[36mkafka_1 |[0m [2020-10-16 12:34:33,498] INFO [data-plane Kafka Request Handler on Broker 1001], shut down completely (kafka.server.KafkaRequestHandlerPool)
[36mkafka_1 |[0m [2020-10-16 12:34:33,499] INFO [ExpirationReaper-1001-AlterAcls]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,625] INFO [ExpirationReaper-1001-AlterAcls]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,625] INFO [ExpirationReaper-1001-AlterAcls]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,627] INFO [KafkaApi-1001] Shutdown complete. (kafka.server.KafkaApis)
[36mkafka_1 |[0m [2020-10-16 12:34:33,629] INFO [ExpirationReaper-1001-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,655] INFO [ExpirationReaper-1001-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,655] INFO [ExpirationReaper-1001-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,658] INFO [TransactionCoordinator id=1001] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:34:33,660] INFO [ProducerId Manager 1001]: Shutdown complete: last producerId assigned 0 (kafka.coordinator.transaction.ProducerIdManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,661] INFO [Transaction State Manager 1001]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,661] INFO [Transaction Marker Channel Manager 1001]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,663] INFO [TransactionCoordinator id=1001] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:34:33,664] INFO [GroupCoordinator 1001]: Shutting down. (kafka.coordinator.group.GroupCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:34:33,666] INFO [ExpirationReaper-1001-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,825] INFO [ExpirationReaper-1001-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,825] INFO [ExpirationReaper-1001-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,826] INFO [ExpirationReaper-1001-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,873] INFO [ExpirationReaper-1001-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,873] INFO [ExpirationReaper-1001-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:33,875] INFO [GroupCoordinator 1001]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:34:33,879] INFO [ReplicaManager broker=1001] Shutting down (kafka.server.ReplicaManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,880] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler)
[36mkafka_1 |[0m [2020-10-16 12:34:33,881] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[36mkafka_1 |[0m [2020-10-16 12:34:33,881] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler)
[36mkafka_1 |[0m [2020-10-16 12:34:33,883] INFO [ReplicaFetcherManager on broker 1001] shutting down (kafka.server.ReplicaFetcherManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,888] INFO [ReplicaFetcherManager on broker 1001] shutdown completed (kafka.server.ReplicaFetcherManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,889] INFO [ReplicaAlterLogDirsManager on broker 1001] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,891] INFO [ReplicaAlterLogDirsManager on broker 1001] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[36mkafka_1 |[0m [2020-10-16 12:34:33,891] INFO [ExpirationReaper-1001-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,027] INFO [ExpirationReaper-1001-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,028] INFO [ExpirationReaper-1001-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,029] INFO [ExpirationReaper-1001-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,032] INFO [ExpirationReaper-1001-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,032] INFO [ExpirationReaper-1001-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,033] INFO [ExpirationReaper-1001-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,230] INFO [ExpirationReaper-1001-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,230] INFO [ExpirationReaper-1001-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,232] INFO [ExpirationReaper-1001-ElectLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,273] INFO [ExpirationReaper-1001-ElectLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,273] INFO [ExpirationReaper-1001-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,290] INFO [ReplicaManager broker=1001] Shut down completely (kafka.server.ReplicaManager)
[36mkafka_1 |[0m [2020-10-16 12:34:34,292] INFO Shutting down. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:34:34,315] INFO Shutdown complete. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:34:34,327] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[36mkafka_1 |[0m [2020-10-16 12:34:34,432] INFO Session: 0x10027a5e91a0000 closed (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,432] INFO EventThread shut down for session: 0x10027a5e91a0000 (org.apache.zookeeper.ClientCnxn)
[36mkafka_1 |[0m [2020-10-16 12:34:34,434] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[36mkafka_1 |[0m [2020-10-16 12:34:34,434] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,616] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,616] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,616] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,638] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,638] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:34,639] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:35,616] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:35,616] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:34:35,617] INFO [SocketServer brokerId=1001] Shutting down socket server (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:34:35,628] INFO [SocketServer brokerId=1001] Shutdown completed (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:34:35,633] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[36mkafka_1 |[0m Excluding KAFKA_JMX_OPTS from broker config
[36mkafka_1 |[0m [Configuring] 'advertised.port' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m Excluding KAFKA_HOME from broker config
[36mkafka_1 |[0m [Configuring] 'advertised.host.name' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [Configuring] 'message.max.bytes' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [Configuring] 'port' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [Configuring] 'advertised.listeners' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [Configuring] 'log4j.logger.kafka.authorizer.logger' in '/opt/kafka/config/log4j.properties'
[36mkafka_1 |[0m Excluding KAFKA_VERSION from broker config
[36mkafka_1 |[0m [Configuring] 'listeners' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
[36mkafka_1 |[0m [2020-10-16 12:42:15,004] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[36mkafka_1 |[0m [2020-10-16 12:42:15,988] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[36mkafka_1 |[0m [2020-10-16 12:42:16,199] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[36mkafka_1 |[0m [2020-10-16 12:42:16,263] INFO starting (kafka.server.KafkaServer)
[36mkafka_1 |[0m [2020-10-16 12:42:16,264] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
[36mkafka_1 |[0m [2020-10-16 12:42:16,313] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[36mkafka_1 |[0m [2020-10-16 12:42:16,323] INFO Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,323] INFO Client environment:host.name=b619b7cc1466 (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,323] INFO Client environment:java.version=1.8.0_212 (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,323] INFO Client environment:java.vendor=IcedTea (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,323] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,323] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.6.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.6.0.jar:/opt/kafka/bin/../libs/connect-file-2.6.0.jar:/opt/kafka/bin/../libs/connect-json-2.6.0.jar:/opt/kafka/bin/../libs/connect-mirror-2.6.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.6.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.6.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.6.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-core-2.10.2.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.2.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.13-2.10.2.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.28.jar:/opt/kafka/bin/../libs/jersey-common-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/kafka/bin/../libs/jersey-server-2.28.jar:/opt/kafka/bin/../libs/jetty-client-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-http-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-io-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-security-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-server-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-util-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.6.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.13-2.6.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.6.0.jar:/opt/kafka/bin/../libs/kafka-tools-2.6.0.jar:/opt/kafka/bin/../libs/kafka_2.13-2.6.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.13-2.6.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.50.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.50.Final.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.13-2.1.6.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/opt/kafka/bin/../libs/scala-library-2.13.2.jar:/opt/kafka/bin/../libs/scala-logging_2.13-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.13.2.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/kafka/bin/../libs/zookeeper-3.5.8.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.8.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.4-7.jar (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,324] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,324] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:os.version=5.7.0-0.bpo.2-amd64 (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:os.memory.free=976MB (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,325] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[36mkafka_1 |[0m [2020-10-16 12:42:16,346] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[36mkafka_1 |[0m [2020-10-16 12:42:16,361] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[36mkafka_1 |[0m [2020-10-16 12:42:16,379] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[36mkafka_1 |[0m [2020-10-16 12:42:16,614] INFO Opening socket connection to server zookeeper/172.20.0.8:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[36mkafka_1 |[0m [2020-10-16 12:42:16,726] INFO Session establishment complete on server zookeeper/172.20.0.8:2181, sessionid = 0x10027c03a580000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[36mkafka_1 |[0m [2020-10-16 12:42:16,730] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[36mkafka_1 |[0m [2020-10-16 12:42:17,540] INFO Cluster ID = pmAE1HTIScGPF2-yN3FlXw (kafka.server.KafkaServer)
[36mkafka_1 |[0m [2020-10-16 12:42:17,710] INFO KafkaConfig values:
[32mprometheus_1 |[0m level=info ts=2020-10-16T12:13:27.685Z caller=main.go:308 msg="No time or size retention was set so using the default time retention" duration=15d
[32mprometheus_1 |[0m level=info ts=2020-10-16T12:34:11.325Z caller=main.go:767 msg="See you next time!"
[32mprometheus_1 |[0m level=info ts=2020-10-16T12:42:12.585Z caller=main.go:308 msg="No time or size retention was set so using the default time retention" duration=15d
[36mkafka_1 |[0m [2020-10-16 12:42:17,773] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:17,788] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:17,788] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:17,828] INFO Loading logs from log dirs ArraySeq(/kafka/kafka-logs-b619b7cc1466) (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:42:17,832] INFO Skipping recovery for all logs in /kafka/kafka-logs-b619b7cc1466 since clean shutdown file was found (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:42:17,846] INFO Loaded 0 logs in 18ms. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:42:17,879] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:42:17,886] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[36mkafka_1 |[0m [2020-10-16 12:42:18,667] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[36mkafka_1 |[0m [2020-10-16 12:42:18,721] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:42:18,764] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:18,765] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:18,766] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:18,779] INFO [ExpirationReaper-1001-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:18,849] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[36mkafka_1 |[0m [2020-10-16 12:42:18,968] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[36mkafka_1 |[0m [2020-10-16 12:42:19,006] INFO Stat of the created znode at /brokers/ids/1001 is: 51,51,1602852138994,1602852138994,1,0,0,72101300603977728,182,0,51 (kafka.zk.KafkaZkClient)
[36mkafka_1 |[0m [2020-10-16 12:42:19,008] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: PLAINTEXT://kafka:9092, czxid (broker epoch): 51 (kafka.zk.KafkaZkClient)
[36mkafka_1 |[0m [2020-10-16 12:42:19,327] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:19,341] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:19,358] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:19,470] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:42:19,476] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:42:19,517] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 27 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[36mkafka_1 |[0m [2020-10-16 12:42:19,554] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:1000,blockEndProducerId:1999) by writing to Zk with path version 2 (kafka.coordinator.transaction.ProducerIdManager)
[36mkafka_1 |[0m [2020-10-16 12:42:19,596] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:42:19,609] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[36mkafka_1 |[0m [2020-10-16 12:42:19,898] INFO [ExpirationReaper-1001-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[36mkafka_1 |[0m [2020-10-16 12:42:20,120] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[36mkafka_1 |[0m [2020-10-16 12:42:20,172] INFO [SocketServer brokerId=1001] Starting socket server acceptors and processors (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:42:20,197] INFO [SocketServer brokerId=1001] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:42:20,230] INFO [SocketServer brokerId=1001] Started socket server acceptors and processors (kafka.network.SocketServer)
[36mkafka_1 |[0m [2020-10-16 12:42:20,252] INFO Kafka version: 2.6.0 (org.apache.kafka.common.utils.AppInfoParser)
[36mkafka_1 |[0m [2020-10-16 12:42:20,252] INFO Kafka commitId: 62abe01bee039651 (org.apache.kafka.common.utils.AppInfoParser)
[36mkafka_1 |[0m [2020-10-16 12:42:20,252] INFO Kafka startTimeMs: 1602852140231 (org.apache.kafka.common.utils.AppInfoParser)
[36mkafka_1 |[0m [2020-10-16 12:42:20,254] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
[36mkafka_1 |[0m [2020-10-16 12:52:19,471] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[36mkafka_1 |[0m [2020-10-16 13:02:19,472] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[36mkafka_1 |[0m [2020-10-16 13:12:19,472] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[35mswh-scheduler_1 |[0m Using pip from /srv/softwareheritage/venv/bin/pip
[34mswh-scheduler-db_1 |[0m You can change this by editing pg_hba.conf or using the option -A, or
[34mswh-scheduler-db_1 |[0m --auth-local and --auth-host, the next time you run initdb.
[34mswh-scheduler-db_1 |[0m waiting for server to start....2020-10-16 12:13:27.319 UTC [45] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
[34mswh-scheduler-db_1 |[0m 2020-10-16 12:13:27.324 UTC [45] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[34mswh-scheduler-db_1 |[0m 2020-10-16 12:13:27.368 UTC [46] LOG: database system was shut down at 2020-10-16 12:13:26 UTC
[34mswh-scheduler-db_1 |[0m 2020-10-16 12:13:27.395 UTC [45] LOG: database system is ready to accept connections
[32;1mswh-vault-db_1 |[0m You can change this by editing pg_hba.conf or using the option -A, or
[32;1mswh-vault-db_1 |[0m --auth-local and --auth-host, the next time you run initdb.
[32;1mswh-vault-db_1 |[0m waiting for server to start....2020-10-16 12:13:27.852 UTC [46] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
[32;1mswh-vault-db_1 |[0m 2020-10-16 12:13:27.855 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[32;1mswh-vault-db_1 |[0m 2020-10-16 12:13:27.882 UTC [47] LOG: database system was shut down at 2020-10-16 12:13:27 UTC
[32;1mswh-vault-db_1 |[0m 2020-10-16 12:13:27.888 UTC [46] LOG: database system is ready to accept connections
swh-storage-db_1             | The default database encoding has accordingly been set to "UTF8".
swh-vault-db_1               | 2020-10-16 12:14:30.513 UTC [117] FATAL: database "swh-vault" does not exist
swh-deposit-db_1             | The default database encoding has accordingly been set to "UTF8".
swh-deposit-db_1             | The default text search configuration will be set to "english".
amqp_1                       | ## ##
swh-idx-storage-db_1         | The database cluster will be initialized with locale "en_US.utf8".
swh-objstorage_1             | swh.deposit 0.0.90
zookeeper_1                  | 2020-10-16 12:13:27,493 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
swh-scheduler-db_1           | 2020-10-16 12:17:25.818 UTC [735] FATAL: database "swh-scheduler" does not exist
nginx_1                      | 10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
swh-storage-db_1             | The default text search configuration will be set to "english".
swh-vault-db_1               | 2020-10-16 12:14:31.572 UTC [118] FATAL: database "swh-vault" does not exist
swh-vault-db_1               | 2020-10-16 12:14:32.613 UTC [119] FATAL: database "swh-vault" does not exist
swh-deposit-db_1             |
amqp_1                       | ########## Logs: tty
swh-idx-storage-db_1         | The default database encoding has accordingly been set to "UTF8".
swh-objstorage_1             | swh.indexer 0.2.1
zookeeper_1                  | 2020-10-16 12:13:27,493 [myid:] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
swh-scheduler-db_1           | 2020-10-16 12:17:26.360 UTC [736] FATAL: database "swh-scheduler" does not exist
nginx_1                      | 10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
swh-storage-db_1             |
swh-listers-db_1             | The database cluster will be initialized with locale "en_US.utf8".
swh-vault-db_1               | 2020-10-16 12:14:33.667 UTC [120] FATAL: database "swh-vault" does not exist
swh-deposit-db_1             | Data page checksums are disabled.
amqp_1                       | ###### ## tty
swh-idx-storage-db_1         | The default text search configuration will be set to "english".
swh-objstorage_1             | swh.journal 0.4.2
zookeeper_1                  | 2020-10-16 12:13:27,499 [myid:] - WARN [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running in standalone mode
swh-scheduler-db_1           | 2020-10-16 12:17:26.885 UTC [737] FATAL: database "swh-scheduler" does not exist
zookeeper_1                  | 2020-10-16 12:13:27,590 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
swh-idx-storage-db_1         | creating subdirectories ... ok
swh-scheduler-db_1           | 2020-10-16 12:17:28.471 UTC [742] FATAL: database "swh-scheduler" does not exist
swh-vault-db_1               | 2020-10-16 12:14:55.844 UTC [141] FATAL: database "swh-vault" does not exist
swh-deposit-db_1             | waiting for server to start....2020-10-16 12:13:27.919 UTC [46] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
swh-idx-storage-db_1         | You can change this by editing pg_hba.conf or using the option -A, or
swh-objstorage_1             | wrapt 1.12.1
zookeeper_1                  | 2020-10-16 12:13:27,669 [myid:] - INFO [main:ZooKeeperServer@845] - minSessionTimeout set to -1
swh-scheduler-db_1           | 2020-10-16 12:17:34.392 UTC [758] FATAL: database "swh-scheduler" does not exist
swh-storage-db_1             |
swh-listers-db_1             |
amqp_1                       |
swh-vault-db_1               | 2020-10-16 12:14:57.982 UTC [143] FATAL: database "swh-vault" does not exist
swh-deposit-db_1             | 2020-10-16 12:13:27.968 UTC [47] LOG: database system was shut down at 2020-10-16 12:13:27 UTC
nginx_1                      | /docker-entrypoint.sh: Configuration complete; ready for start up
swh-idx-storage-db_1         | --auth-local and --auth-host, the next time you run initdb.
swh-objstorage_1             | xmltodict 0.12.0
swh-objstorage_1             | yarl 1.5.1
swh-scheduler-db_1           | 2020-10-16 12:17:34.394 UTC [759] FATAL: database "swh-scheduler" does not exist
swh-storage-db_1             | waiting for server to start....2020-10-16 12:13:26.590 UTC [47] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
swh-listers-db_1             | Success. You can now start the database server using:
swh-vault-db_1               | 2020-10-16 12:14:59.030 UTC [144] FATAL: database "swh-vault" does not exist
swh-deposit-db_1             | 2020-10-16 12:13:27.975 UTC [46] LOG: database system is ready to accept connections
nginx_1                      | 2020/10/16 12:42:12 [notice] 1#1: using the "epoll" event method
swh-idx-storage-db_1         | waiting for server to start....2020-10-16 12:13:26.249 UTC [45] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
zookeeper_1                  | 2020-10-16 12:13:27,669 [myid:] - INFO [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
swh-objstorage_1             | zipp 3.1.0
swh-scheduler-db_1           | 2020-10-16 12:17:34.866 UTC [760] FATAL: database "swh-scheduler" does not exist
swh-storage-db_1             | 2020-10-16 12:13:26.592 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
swh-listers-db_1             |
amqp_1                       | Enabling free disk space monitoring
swh-idx-storage-db_1         | 2020-10-16 12:13:26.250 UTC [45] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
zookeeper_1                  | 2020-10-16 12:13:27,689 [myid:] - INFO [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
swh-objstorage_1             | WARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.
swh-scheduler-db_1           | 2020-10-16 12:17:35.428 UTC [761] FATAL: database "swh-scheduler" does not exist
swh-storage-db_1             | 2020-10-16 12:13:26.616 UTC [48] LOG: database system was shut down at 2020-10-16 12:13:26 UTC
nginx_1                      | 2020/10/16 12:42:12 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
swh-idx-storage-db_1         | 2020-10-16 12:13:26.449 UTC [46] LOG: database system was shut down at 2020-10-16 12:13:25 UTC
zookeeper_1                  | 2020-10-16 12:13:27,706 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
swh-objstorage_1             | You should consider upgrading via the '/srv/softwareheritage/venv/bin/python3 -m pip install --upgrade pip' command.
swh-scheduler-db_1           | 2020-10-16 12:17:35.428 UTC [762] FATAL: database "swh-scheduler" does not exist
swh-storage-db_1             | 2020-10-16 12:13:26.620 UTC [47] LOG: database system is ready to accept connections
swh-listers-db_1             |
swh-vault-db_1               | 2020-10-16 12:15:01.128 UTC [146] FATAL: database "swh-vault" does not exist
amqp_1                       | Disk free limit set to 50MB
swh-deposit-db_1             | CREATE DATABASE
nginx_1                      | 2020/10/16 12:42:12 [notice] 1#1: OS: Linux 5.7.0-0.bpo.2-amd64
swh-idx-storage-db_1         | 2020-10-16 12:13:26.454 UTC [45] LOG: database system is ready to accept connections
zookeeper_1                  | 2020-10-16 12:13:32,224 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.20.0.18:46188
swh-objstorage_1             | Starting the swh-objstorage API server
swh-scheduler-db_1           | 2020-10-16 12:17:35.933 UTC [763] FATAL: database "swh-scheduler" does not exist
swh-storage-db_1             | done
swh-listers-db_1             | waiting for server to start....2020-10-16 12:13:27.762 UTC [45] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
swh-vault-db_1               | 2020-10-16 12:15:02.182 UTC [147] FATAL: database "swh-vault" does not exist
zookeeper_1                  | 2020-10-16 12:13:32,230 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.20.0.18:46188
swh-objstorage_1             | [2020-10-16 12:42:15 +0000] [1] [DEBUG] Current configuration:
swh-scheduler-db_1           | 2020-10-16 12:17:36.490 UTC [764] FATAL: database "swh-scheduler" does not exist
swh-storage-db_1             | server started
swh-listers-db_1             | 2020-10-16 12:13:27.766 UTC [45] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
swh-vault-db_1               | 2020-10-16 12:15:03.222 UTC [148] FATAL: database "swh-vault" does not exist
swh-vault-db_1               | 2020-10-16 12:15:12.838 UTC [157] FATAL: database "swh-vault" does not exist
zookeeper_1                  | 2020-10-16 12:13:36,725 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.20.0.17:44384
swh-storage-db_1             | PostgreSQL init process complete; ready for start up.
swh-scheduler-db_1           | 2020-10-16 12:17:40.178 UTC [775] FATAL: database "swh-scheduler" does not exist
swh-listers-db_1             | waiting for server to shut down....2020-10-16 12:13:28.057 UTC [45] LOG: aborting any active transactions
swh-deposit-db_1             | PostgreSQL init process complete; ready for start up.
amqp_1                       | exited: stopped
swh-vault-db_1               | 2020-10-16 12:15:17.058 UTC [161] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | server stopped
zookeeper_1                  | 2020-10-16 12:34:10,772 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /172.20.0.17:44384 which had sessionid 0x10027a5e91a0001
swh-storage-db_1             | 2020-10-16 12:13:26.886 UTC [1] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
swh-scheduler-db_1           | 2020-10-16 12:17:40.736 UTC [777] FATAL: database "swh-scheduler" does not exist
swh-listers-db_1             | 2020-10-16 12:13:28.059 UTC [47] LOG: shutting down
swh-deposit-db_1             | 2020-10-16 12:13:28.354 UTC [1] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
amqp_1                       |
swh-vault-db_1               | 2020-10-16 12:15:19.176 UTC [163] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | PostgreSQL init process complete; ready for start up.
zookeeper_1                  | 2020-10-16 12:34:34,331 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /172.20.0.18:46188 which had sessionid 0x10027a5e91a0000
zookeeper_1                  | 2020-10-16 12:42:12,411 [myid:] - WARN [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running in standalone mode
swh-storage-db_1             | 2020-10-16 12:13:31.458 UTC [64] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:43.371 UTC [784] FATAL: database "swh-scheduler" does not exist
swh-listers-db_1             | 2020-10-16 12:13:28.164 UTC [1] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
swh-deposit-db_1             | 2020-10-16 12:34:22.297 UTC [1] LOG: received fast shutdown request
zookeeper_1                  | 2020-10-16 12:42:12,425 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
swh-storage-db_1             | 2020-10-16 12:13:36.972 UTC [69] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:44.960 UTC [789] FATAL: database "swh-scheduler" does not exist
swh-listers-db_1             | 2020-10-16 12:13:28.221 UTC [1] LOG: database system is ready to accept connections
swh-deposit-db_1             |
amqp_1                       |
swh-vault-db_1               | 2020-10-16 12:15:31.891 UTC [175] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | 2020-10-16 12:13:31.432 UTC [64] FATAL: database "swh-indexers" does not exist
swh-vault-db_1               | 2020-10-16 12:15:48.709 UTC [191] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | 2020-10-16 12:13:42.220 UTC [80] FATAL: database "swh-indexers" does not exist
swh-objstorage_1             | logconfig: None
zookeeper_1                  | 2020-10-16 12:42:12,429 [myid:] - INFO [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
swh-storage-db_1             | 2020-10-16 12:13:54.827 UTC [86] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:51.314 UTC [806] FATAL: database "swh-scheduler" does not exist
amqp_1                       | Adding vhost '/'
swh-vault-db_1               | 2020-10-16 12:15:49.758 UTC [192] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | 2020-10-16 12:13:42.339 UTC [81] FATAL: database "swh-indexers" does not exist
swh-objstorage_1             | logconfig_dict: {}
zookeeper_1                  | 2020-10-16 12:42:12,434 [myid:] - INFO [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
swh-storage-db_1             | 2020-10-16 12:13:55.893 UTC [87] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:51.322 UTC [807] FATAL: database "swh-scheduler" does not exist
amqp_1                       |
swh-vault-db_1               | 2020-10-16 12:15:50.807 UTC [193] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | 2020-10-16 12:13:43.257 UTC [82] FATAL: database "swh-indexers" does not exist
swh-vault-db_1               | 2020-10-16 12:15:51.874 UTC [194] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | 2020-10-16 12:13:43.377 UTC [83] FATAL: database "swh-indexers" does not exist
swh-objstorage_1             | syslog: False
zookeeper_1                  | 2020-10-16 12:42:16,703 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.20.0.16:37452
swh-storage-db_1             | 2020-10-16 12:13:57.972 UTC [89] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:51.834 UTC [808] FATAL: database "swh-scheduler" does not exist
amqp_1                       | Creating user 'guest'
swh-vault-db_1               | 2020-10-16 12:15:52.921 UTC [195] FATAL: database "swh-vault" does not exist
swh-idx-storage-db_1         | 2020-10-16 12:13:44.317 UTC [84] FATAL: database "swh-indexers" does not exist
swh-objstorage_1             | syslog_prefix: None
zookeeper_1                  | 2020-10-16 12:42:16,716 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.20.0.16:37452
swh-storage-db_1             | 2020-10-16 12:13:59.016 UTC [90] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:52.372 UTC [809] FATAL: database "swh-scheduler" does not exist
amqp_1                       |
swh-vault-db_1               | 2020-10-16 12:15:53.968 UTC [196] FATAL: database "swh-vault" does not exist
swh-objstorage_1             | syslog_facility: user
zookeeper_1                  | 2020-10-16 12:42:16,717 [myid:] - INFO [SyncThread:0:FileTxnLog@213] - Creating new log file: log.25
swh-storage-db_1             | 2020-10-16 12:14:00.076 UTC [91] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:52.375 UTC [810] FATAL: database "swh-scheduler" does not exist
swh-idx-storage-db_1         | 2020-10-16 12:13:52.942 UTC [101] FATAL: database "swh-indexers" does not exist
swh-objstorage_1             | pre_exec: <function PreExec.pre_exec at 0x7f5f35d119e0>
swh-vault-db_1               | 2020-10-16 12:16:11.964 UTC [213] FATAL: database "swh-vault" does not exist
swh-storage-db_1             | 2020-10-16 12:14:17.950 UTC [108] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:58.187 UTC [826] FATAL: database "swh-scheduler" does not exist
zookeeper_1                  | 2020-10-16 12:42:20,668 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.20.0.15:46976
amqp_1                       |
swh-idx-storage-db_1         | 2020-10-16 12:13:53.785 UTC [102] FATAL: database "swh-indexers" does not exist
swh-objstorage_1             | pre_request: <function PreRequest.pre_request at 0x7f5f35d11b00>
swh-vault-db_1               | 2020-10-16 12:16:13.020 UTC [214] FATAL: database "swh-vault" does not exist
swh-storage-db_1             | 2020-10-16 12:14:19.011 UTC [109] FATAL: database "swh-storage" does not exist
swh-scheduler-db_1           | 2020-10-16 12:17:58.659 UTC [827] FATAL: database "swh-scheduler" does not exist
zookeeper_1                  | 2020-10-16 12:42:20,671 [myid:] - INFO [SyncThread:0:ZooKeeperServer@694] - Established session 0x10027c03a580001 with negotiated timeout 40000 for client /172.20.0.15:46976