# mirror stack deployment tests

This is a set of tests for the deployment of a full Software Heritage mirror
stack. As of today, only docker swarm based deployment tests are available.
## docker swarm deployment tests

This test uses
[pytest-testinfra](https://github.com/pytest-dev/pytest-testinfra) to
orchestrate the deployment and the checks that are made against the replicated
Software Heritage Archive.

The idea of this test is:

- a test dataset is built by loading a few origins in a dedicated swh instance
  (using swh-environment/docker),
- the gathered objects are pushed in a dedicated set of kafka
  topics on swh's staging kafka broker (swh.test.objects),
- expected statistics for each origin are also computed and pushed in the
  swh.test.objects.stats topic; these statistics are simply the total number,
  for each origin, of objects of each type (content, directory, revision,
  snapshot, release) reachable from that origin.
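The per-origin statistics can be pictured as a simple reachability count. The
sketch below is purely illustrative (the real tooling lives in
swh-environment, and the SWHID-like keys and graph shape here are made up); it
only shows the counting idea:

```python
from collections import Counter, deque

# Hypothetical miniature object graph: swhid -> (object type, successors).
# A real origin graph would come from the archive, not a literal dict.
GRAPH = {
    "snp:1": ("snapshot", ["rev:1"]),
    "rev:1": ("revision", ["dir:1", "rev:2"]),
    "rev:2": ("revision", ["dir:1"]),
    "dir:1": ("directory", ["cnt:1", "cnt:2"]),
    "cnt:1": ("content", []),
    "cnt:2": ("content", []),
}

def origin_stats(graph, root):
    """Breadth-first walk from the origin's snapshot, counting each
    reachable object exactly once, grouped by object type."""
    seen, todo = set(), deque([root])
    counts = Counter()
    while todo:
        swhid = todo.popleft()
        if swhid in seen:
            continue
        seen.add(swhid)
        obj_type, successors = graph[swhid]
        counts[obj_type] += 1
        todo.extend(successors)
    return dict(counts)
```

For the toy graph above, `origin_stats(GRAPH, "snp:1")` counts 1 snapshot,
2 revisions, 1 directory and 2 contents; the test compares such per-origin
counts computed on the mirror against the expected ones from kafka.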
Then, the test scenario is the following:

1. copy all docker config files and resolve template ones in a temp dir
   (especially conf/graph-replayer.yml.test and conf/content-replayer.yml.test;
   see the mirror_stack fixture in conftest.py),
2. create and deploy a docker stack from the mirror.yml compose file from the
   tmp dir; note that replayer services are not started at this point (their
   replication factor is set to 0 in mirror.yml),
3. wait for all the services to be up,
4. scale the content-replayer service to 1, and wait for the service to be up,
5. scale the content-replayer service to 4, and wait for the services to be up,
6. wait for the content replaying to be done (test replayer services are
   configured with stop_on_eof=true),
7. scale the content-replayer service back to 0,
8. repeat steps 4-7 for the graph-replayer,
9. retrieve the expected stats for each origin from the dedicated
   swh.test.objects.stats topic on kafka,
10. compute these stats from the replicated archive; note that this step also
    checks content object hashes from the replicated objstorage,
11. compare the computed stats with the expected ones,
12. spawn a vault (flat) cooking for each origin (latest snapshot's master),
13. wait for the tgz artifacts to be generated by the vault-workers,
14. download the resulting artifacts and make a few checks on their content.
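The scale-then-wait pattern of steps 4-8 can be sketched as follows. This is a
hedged illustration, not the fixture's actual code (which lives in
conftest.py); the helper names are invented, and using
`docker service scale --detach=false` to block until convergence is one
possible approach:

```python
import subprocess

def replicas_ok(replicas: str) -> bool:
    """Return True when a 'REPLICAS' value from `docker service ls`,
    e.g. '4/4', shows the service fully converged and non-empty."""
    running, wanted = replicas.split("/")
    return running == wanted and int(wanted) > 0

def scale_and_wait(service: str, n: int, timeout: int = 300) -> None:
    """Scale a swarm service and block until docker reports convergence.

    With --detach=false, `docker service scale` only returns once the
    requested replica count is reached (or fails), so no polling loop
    is needed here.
    """
    subprocess.run(
        ["docker", "service", "scale", "--detach=false", f"{service}={n}"],
        check=True,
        timeout=timeout,
    )
```

A polling alternative would repeatedly run `docker service ls` and feed the
REPLICAS column through `replicas_ok` until it returns True.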
Obviously, this test heavily depends on the content of the `swh.test.objects`
topics on kafka, thus some tooling is required to manage said test dataset.
These tools are not part of this repo, but will be provided in the
swh-environment git repo (they use the development docker environment).
### Running the test

The test is written using pytest-testinfra, thus relies on the pytest
execution tool.

Note that for this test run:

- docker swarm must be enabled,
- it will use dedicated test kafka topics on the staging kafka broker hosted
  by Software Heritage (see the Journal TLS endpoint listed on
  https://docs.softwareheritage.org/sysadm/network-architecture/service-urls.html#public-urls),
- it requires a few environment variables to be set before running the test,
  namely:
  - `SWH_MIRROR_TEST_KAFKA_USERNAME`: login used to access the kafka broker,
  - `SWH_MIRROR_TEST_KAFKA_PASSWORD`: password used to access the kafka broker,
  - `SWH_MIRROR_TEST_KAFKA_BROKER`: URL of the kafka broker (should be the
    one described above),
  - `SWH_MIRROR_TEST_OBJSTORAGE_URL`: the URL of the source object storage
    used for the content replication; it would typically include access
    credentials, e.g. `https://login:password@objstorage.softwareheritage.org/`,
  - `SWH_IMAGE_TAG`: the docker image tag to be tested.

  You can copy the template `env/tests.env.template` to `env/tests.env` to
  set them.
- the `softwareheritage/base`, `softwareheritage/web`,
  `softwareheritage/replayer` and `softwareheritage/test` images must be built
  with the proper image tag (`$SWH_IMAGE_TAG`). See the
  `../images/build_images.sh` script to rebuild images if need be.
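A resulting `env/tests.env` could look like the sketch below. This is an
assumption about the file's shape (the actual template is
`env/tests.env.template`); the variable names come from the list above, the
non-secret values mirror the staging setup shown further down, and the
passwords are placeholders to replace with real credentials:

```shell
# Illustrative env/tests.env -- placeholder secrets, replace before use.
export SWH_MIRROR_TEST_KAFKA_USERNAME='mirror-test-ro'
export SWH_MIRROR_TEST_KAFKA_PASSWORD='changeme'   # placeholder
export SWH_MIRROR_TEST_KAFKA_BROKER='broker1.journal.staging.swh.network:9093'
export SWH_MIRROR_TEST_OBJSTORAGE_URL='https://login:changeme@objstorage.softwareheritage.org/'
export SWH_IMAGE_TAG='20220805-185133'             # must match the built images
```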
Assuming you have a properly set up environment:
```
# check the docker swarm cluster is ok
~/swh-mirror$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
w6uzfpxayyc8l9ksfud7dlq9p * libra Ready Active Leader 20.10.5+dfsg1
# check images
~/swh-mirror$ echo $SWH_IMAGE_TAG
20220805-185133
~/swh-mirror$ docker image ls -f reference="softwareheritage/*:$SWH_IMAGE_TAG"
REPOSITORY TAG IMAGE ID CREATED SIZE
softwareheritage/replayer 20220805-185133 da2d12d57a65 5 days ago 223MB
softwareheritage/test 20220805-185133 cb4449867d3a 5 days ago 682MB
softwareheritage/web 20220805-185133 66c54d5c2611 5 days ago 364MB
softwareheritage/base 20220805-185133 528010e1fc9c 5 days ago 682MB
# check environment variables are set
~/swh-mirror$ env | grep SWH_MIRROR_TEST
SWH_MIRROR_TEST_KAFKA_PASSWORD=<xxx>
SWH_MIRROR_TEST_KAFKA_BROKER=broker1.journal.staging.swh.network:9093
SWH_MIRROR_TEST_KAFKA_USERNAME=mirror-test-ro
SWH_MIRROR_TEST_OBJSTORAGE_URL=https://<login>:<pwd>@objstorage.softwareheritage.org/
```
you should be able to execute the test:
```
~/swh-mirror$ pytest
============================== test session starts ==============================
platform linux -- Python 3.9.2, pytest-6.2.5, py-1.9.0, pluggy-1.0.0
rootdir: /home/ddouard/swh/swh-docker
plugins: django-4.5.2, dash-1.18.1, django-test-migrations-1.2.0, forked-1.4.0, redis-2.4.0, requests-mock-1.9.3, Faker-4.18.0, asyncio-0.18.1, xdist-2.1.0, hypothesis-6.4.3, testinfra-6.8.0, postgresql-3.1.3, flask-1.1.0, mock-3.7.0, swh.journal-1.0.1.dev10+gdb9d202, swh.core-2.13
asyncio: mode=legacy
collected 1 item
tests/test_graph_replayer.py . [100%]
=============================== warnings summary ================================
../../.virtualenvs/swh/lib/python3.9/site-packages/pytest_asyncio/plugin.py:191
/home/ddouard/.virtualenvs/swh/lib/python3.9/site-packages/pytest_asyncio/plugin.py:191: DeprecationWarning: The 'asyncio_mode' default value will change to 'strict' in future, please explicitly use 'asyncio_mode=strict' or 'asyncio_mode=auto' in pytest configuration file.
config.issue_config_time_warning(LEGACY_MODE, stacklevel=2)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
=================== 1 passed, 1 warning in 923.19s (0:15:23) ====================
```
Note the test takes quite some time to execute, so be patient.
## Troubleshooting
### Watch out for stale services
If something goes wrong, you might want to check whether you have any stale
docker services set up:

```
docker service ls
```

If you want to shut them all down, you can use:

```
docker service rm $(docker service ls --format '{{.Name}}')
```
### I want a shell!

To run a shell in an image in the swarm context, use the following:

```
docker run --network=swhtest_mirror0_swh-mirror -ti \
   --env-file env/common-python.env \
   --env STATSD_TAGS="role:content-replayer,hostname:${HOSTNAME}" \
   -v /tmp/pytest-of-lunar/pytest-current/mirrorcurrent/conf/content-replayer.yml:/etc/softwareheritage/config.yml \
   softwareheritage/replayer:20220915-163058 shell
```
### Some containers are never started

If you notice that some containers stay at 0 replicas in `docker service ls`,
it probably means the placement rules for these services, as described in
`mirror.yml`, cannot be fulfilled by the nodes currently part of the swarm.

Most likely, you are missing the labels locating the volumes needed by the
containers. You might want to run:

```
docker node update $HOSTNAME \
   --label-add org.softwareheritage.mirror.monitoring=true \
   --label-add org.softwareheritage.mirror.volumes.objstorage=true \
   --label-add org.softwareheritage.mirror.volumes.redis=true \
   --label-add org.softwareheritage.mirror.volumes.storage-db=true \
   --label-add org.softwareheritage.mirror.volumes.web-db=true
```
### SWH services keep restarting

If SWH services keep restarting, look at the service logs, but don't forget to
also look at the logs of the docker service itself (using
`journalctl -u docker.service`, for example).

If you see:

```
error="task: non-zero exit (124)"
```

it means that `wait-for-it` has reached its timeout. You should double-check
the network configuration, including the firewall.
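Exit status 124 is the convention used by `timeout(1)` when the wrapped
command is killed for running too long, which is why it surfaces when
`wait-for-it` gives up. You can reproduce the status locally:

```shell
# timeout(1) kills sleep after 1 second and exits with status 124,
# the same code reported in the docker task error above.
timeout 1 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```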
### Failure while checking the Vault service

If the test fails with the following exception:
~~~
> assert isinstance(tarfilecontent, bytes)
E assert False
E + where False = isinstance({'exception': 'NotFoundExc', 'reason': 'Cooked archive for swh:1:dir:c1695cab57e5bfe64ea4b0900c4575bf7240483d not found.', 'traceback': 'Traceback (most recent call last):\n File "/usr/lib/python3/dist-packages/rest_framework/views.py", line 492, in dispatch\n response = handler(request, *args, **kwargs)\n File "/usr/lib/python3/dist-packages/rest_framework/decorators.py", line 54, in handler\n return func(*args, **kwargs)\n File "/usr/lib/python3/dist-pac→
…/swh-mirror/tests/test_graph_replayer.py:423: AssertionError
~~~
It is most likely due to a stale database. Remove the vault volume using:

```
docker volume rm swhtest_mirror0_vault-db
```

In general, the test has been designed to be run on empty volumes.
