diff --git a/PKG-INFO b/PKG-INFO index a7c5453c..055e0312 100644 --- a/PKG-INFO +++ b/PKG-INFO @@ -1,218 +1,218 @@ Metadata-Version: 2.1 Name: swh.storage -Version: 0.11.0 +Version: 0.11.1 Summary: Software Heritage storage manager Home-page: https://forge.softwareheritage.org/diffusion/DSTO/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/ Description: swh-storage =========== Abstraction layer over the archive, allowing to access all stored source code artifacts as well as their metadata. See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details. ## Quick start ### Dependencies Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a cassandra server. #### Debian-like host ``` $ sudo apt install libpq-dev postgresql-11 cassandra ``` #### Non Debian-like host The tests expects the path to `cassandra` to either be unspecified, it is then looked up at `/usr/sbin/cassandra`, either specified through the environment variable `SWH_CASSANDRA_BIN`. Optionally, you can avoid running the cassandra tests. ``` (swh) :~/swh-storage$ tox -- -m 'not cassandra' ``` ### Installation It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for a more details on how to setup a working environment. You can install the package directly from [pypi](https://pypi.org/p/swh.storage): ``` (swh) :~$ pip install swh.storage [...] ``` Or from sources: ``` (swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git [...] (swh) :~$ cd swh-storage (swh) :~/swh-storage$ pip install . [...] ``` Then you can check it's properly installed: ``` (swh) :~$ swh storage --help Usage: swh storage [OPTIONS] COMMAND [ARGS]... Software Heritage Storage tools. Options: -h, --help Show this message and exit. Commands: rpc-serve Software Heritage Storage RPC server. ``` ## Tests The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/). ``` (swh) :~$ pip install tox ``` ### tox From the sources directory, simply use tox: ``` (swh) :~/swh-storage$ tox [...] ========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ========== _______________________________ summary ________________________________ flake8: commands succeeded py3: commands succeeded congratulations :) ``` ## Development The storage server can be locally started. It requires a configuration file and a running Postgresql database. ### Sample configuration A typical configuration `storage.yml` file is: ``` storage: cls: local args: db: "dbname=softwareheritage-dev user= password=" objstorage: cls: pathslicing args: root: /tmp/swh-storage/ slicing: 0:2/2:4/4:6 ``` which means, this uses: - a local storage instance whose db connection is to `softwareheritage-dev` local instance, - the objstorage uses a local objstorage instance whose: - `root` path is /tmp/swh-storage, - slicing scheme is `0:2/2:4/4:6`. 
This means that the content identifier (sha1) determines where the raw content is stored on disk: the first directory level is named after its first 2 hex characters, the second level after the next 2, the third level after the next 2, and the file holding the raw content is named after the complete hash. For example: 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c Note that the `root` path should exist on disk before starting the server. ### Starting the storage server If the python package has been properly installed (e.g. in a virtual env), you should be able to use the command: ``` (swh) :~/swh-storage$ swh storage rpc-serve storage.yml ``` This runs a local swh-storage API on port 5002. ``` (swh) :~/swh-storage$ curl http://127.0.0.1:5002 Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

``` ### And then what? In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc...), you can define a remote storage with this snippet of yaml configuration. ``` storage: cls: remote args: url: http://localhost:5002/ ``` You could directly define a local storage with the following snippet: ``` storage: cls: local args: db: service=swh-dev objstorage: cls: pathslicing args: root: /home/storage/swh-storage/ slicing: 0:2/2:4/4:6 ``` Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing Provides-Extra: schemata Provides-Extra: journal diff --git a/docs/extrinsic-metadata-specification.rst b/docs/extrinsic-metadata-specification.rst index d82bb55a..b3d1a84c 100644 --- a/docs/extrinsic-metadata-specification.rst +++ b/docs/extrinsic-metadata-specification.rst @@ -1,251 +1,251 @@ :orphan: .. _extrinsic-metadata-specification: Extrinsic metadata specification ================================ :term:`Extrinsic metadata` is information about software that is not part of the source code itself but still closely related to the software. Typical sources for extrinsic metadata are: the hosting place of a repository, which can offer metadata via its web view or API; external registries like collaborative curation initiatives; and out-of-band information available at source code archival time. Since they are not part of the source code, a dedicated mechanism to fetch and store them is needed. This specification assumes the reader is familiar with Software Heritage's :ref:`architecture` and :ref:`data-model`. Metadata sources ---------------- Authorities ^^^^^^^^^^^ Metadata authorities are entities that provide metadata about an :term:`origin`. Metadata authorities include: code hosting places, :term:`deposit` submitters, and registries (eg. Wikidata). An authority is uniquely defined by these properties: * its type, representing the kind of authority, which is one of these values: - * `deposit`, for metadata pushed to Software Heritage at the same time + * `deposit_client`, for metadata pushed to Software Heritage at the same time as a software artifact * `forge`, for metadata pulled from the same source as the one hosting the software artifacts (which includes package managers) * `registry`, for metadata pulled from a third-party * its URL, which unambiguously identifies an instance of the authority type. 
Examples: =============== ================================= type url =============== ================================= -deposit https://hal.archives-ouvertes.fr/ -deposit https://hal.inria.fr/ -deposit https://software.intel.com/ +deposit_client https://hal.archives-ouvertes.fr/ +deposit_client https://hal.inria.fr/ +deposit_client https://software.intel.com/ forge https://gitlab.com/ forge https://gitlab.inria.fr/ forge https://0xacab.org/ forge https://github.com/ registry https://www.wikidata.org/ registry https://swmath.org/ registry https://ascl.net/ =============== ================================= Metadata fetchers ^^^^^^^^^^^^^^^^^ Metadata fetchers are software components used to fetch metadata from a metadata authority, and ingest them into the Software Heritage archive. A metadata fetcher is uniquely defined by these properties: * its type * its version Examples: * :term:`loaders `, which may either discover metadata as a side-effect of loading source code, or be dedicated to fetching metadata. * :term:`listers `, which may discover metadata as a side-effect of discovering origins. * :term:`deposit` submitters, which push metadata to SWH from a third-party; usually at the same time as a :term:`software artifact` * crawlers, which fetch metadata from an authority in a way that is none of the above (eg. by querying a specific API of the origin's forge). Storage API ----------- Authorities and metadata fetchers ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The :term:`storage` API offers these endpoints to manipulate metadata authorities and metadata fetchers: * ``metadata_authority_add(type, url, metadata)`` which adds a new metadata authority to the storage. * ``metadata_authority_get(type, url)`` which looks up a known authority (there is at most one) and if it is known, returns a dictionary with keys ``type``, ``url``, and ``metadata``. * ``metadata_fetcher_add(name, version, metadata)`` which adds a new metadata fetcher to the storage. * ``metadata_fetcher_get(name, version)`` which looks up a known fetcher (there is at most one) and if it is known, returns a dictionary with keys ``name``, ``version``, and ``metadata``. These `metadata` fields contain JSON-encodable dictionaries with information about the authority/fetcher, in a format specific to each authority/fetcher. With authority, the `metadata` field is reserved for information describing and qualifying the authority. With fetchers, the `metadata` field is reserved for configuration metadata and other technical usage. Origin metadata ^^^^^^^^^^^^^^^ Extrinsic metadata are stored in SWH's :term:`storage database`. The storage API offers three endpoints to manipulate origin metadata: * Adding metadata:: origin_metadata_add(origin_url, discovery_date, authority, fetcher, format, metadata) which adds a new `metadata` byte string obtained from a given authority and associated to the origin. `discovery_date` is a Python datetime. `authority` must be a dict containing keys `type` and `url`, and `fetcher` a dict containing keys `name` and `version`. The authority and fetcher must be known to the storage before using this endpoint. `format` is a text field indicating the format of the content of the `metadata` byte string. 
* Getting latest metadata:: origin_metadata_get_latest(origin_url, authority) where `authority` must be a dict containing keys `type` and `url`, which returns a dictionary corresponding to the latest metadata entry added for this origin, in the format:: { 'origin_url': ..., 'authority': {'type': ..., 'url': ...}, 'fetcher': {'name': ..., 'version': ...}, 'discovery_date': ..., 'format': '...', 'metadata': b'...' } * Getting all metadata:: origin_metadata_get(origin_url, authority, page_token, limit) where `authority` must be a dict containing keys `type` and `url`, which returns a dictionary with keys: * `next_page_token`, which is an opaque token to be used as `page_token` for retrieving the next page. If absent, there are no more pages to gather. * `results`: list of dictionaries, one for each metadata item deposited, corresponding to the given origin and obtained from the specified authority. Each of these dictionaries is in the following format:: { 'authority': {'type': ..., 'url': ...}, 'fetcher': {'name': ..., 'version': ...}, 'discovery_date': ..., 'format': '...', 'metadata': b'...' } The parameters ``page_token`` and ``limit`` are used for pagination based on an arbitrary order. An initial query to ``origin_metadata_get`` must set ``page_token`` to ``None``, and further queries must use the value from the previous query's ``next_page_token`` to get the next page of results. ``metadata`` is a byte string (possibly encoded using Base64). Its format is specific to each authority, and is treated as an opaque value by the storage. Unifying these various formats into a common language is outside the scope of this specification. Artifact metadata ^^^^^^^^^^^^^^^^^ In addition to origin metadata, the storage database stores metadata on all software artifacts supported by the data model. This works similarly to origin metadata, with one major difference: extrinsic metadata can be given on a specific artifact within a specified context (for example: a directory in a specific revision from a specific visit on a specific origin), which will be stored along with the metadata itself. For example, two origins may develop the same file independently; the information about authorship, licensing or even description of the same artifact may then vary from one context to the other. This is why it is important to qualify the metadata with the complete context for which it is intended, if any.
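For illustration only (this example is not part of the specification, and every value in it is made up), a fully-qualified ``context`` for metadata about a content object might look like the following, using the per-type keys defined below::

    {
        'origin': 'https://example.org/project.git',
        'visit': 42,
        'snapshot': 'swh:1:snp:0000000000000000000000000000000000000000',
        'revision': 'swh:1:rev:0000000000000000000000000000000000000000',
        'directory': 'swh:1:dir:0000000000000000000000000000000000000000',
        'path': b'src/main.c',
    }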
For each artifact type ``<object type>``, there are two endpoints to manipulate metadata associated with artifacts of that type: * Adding metadata:: <object type>_metadata_add(id, context, discovery_date, authority, fetcher, format, metadata) * Getting all metadata:: <object type>_metadata_get(id, authority, after, page_token, limit) defined similarly to ``origin_metadata_add`` and ``origin_metadata_get``, but where ``id`` is a core SWHID (with type matching ``<object type>``), and with an extra ``context`` (argument when adding metadata, and dictionary key when getting them) that is a dictionary with keys depending on the artifact type ``<object type>``: * for ``snapshot``: ``origin`` (a URL) and ``visit`` (an integer) * for ``release``: those above, plus ``snapshot`` (the core SWHID of a snapshot) * for ``revision``: all those above, plus ``release`` (the core SWHID of a release) * for ``directory``: all those above, plus ``revision`` (the core SWHID of a revision) and ``path`` (a byte string), representing the path to this directory from the root of the ``revision`` * for ``content``: all those above, plus ``directory`` (the core SWHID of a directory) All keys are optional, but should be provided whenever possible. The dictionary may be empty, if metadata is fully independent of context. In all cases, ``visit`` should only be provided if ``origin`` is (as visit ids are only unique with respect to an origin). diff --git a/swh.storage.egg-info/PKG-INFO b/swh.storage.egg-info/PKG-INFO index a7c5453c..055e0312 100644 --- a/swh.storage.egg-info/PKG-INFO +++ b/swh.storage.egg-info/PKG-INFO @@ -1,218 +1,218 @@ Metadata-Version: 2.1 Name: swh.storage -Version: 0.11.0 +Version: 0.11.1 Summary: Software Heritage storage manager Home-page: https://forge.softwareheritage.org/diffusion/DSTO/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/ Description: swh-storage =========== Abstraction layer over the archive, allowing to access all stored source code artifacts as well as their metadata. See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details. ## Quick start ### Dependencies Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a cassandra server. #### Debian-like host ``` $ sudo apt install libpq-dev postgresql-11 cassandra ``` #### Non Debian-like host The tests expects the path to `cassandra` to either be unspecified, it is then looked up at `/usr/sbin/cassandra`, either specified through the environment variable `SWH_CASSANDRA_BIN`. Optionally, you can avoid running the cassandra tests. ``` (swh) :~/swh-storage$ tox -- -m 'not cassandra' ``` ### Installation It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for a more details on how to setup a working environment. You can install the package directly from [pypi](https://pypi.org/p/swh.storage): ``` (swh) :~$ pip install swh.storage [...]
``` Or from sources: ``` (swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git [...] (swh) :~$ cd swh-storage (swh) :~/swh-storage$ pip install . [...] ``` Then you can check it's properly installed: ``` (swh) :~$ swh storage --help Usage: swh storage [OPTIONS] COMMAND [ARGS]... Software Heritage Storage tools. Options: -h, --help Show this message and exit. Commands: rpc-serve Software Heritage Storage RPC server. ``` ## Tests The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/). ``` (swh) :~$ pip install tox ``` ### tox From the sources directory, simply use tox: ``` (swh) :~/swh-storage$ tox [...] ========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ========== _______________________________ summary ________________________________ flake8: commands succeeded py3: commands succeeded congratulations :) ``` ## Development The storage server can be locally started. It requires a configuration file and a running Postgresql database. ### Sample configuration A typical configuration `storage.yml` file is: ``` storage: cls: local args: db: "dbname=softwareheritage-dev user= password=" objstorage: cls: pathslicing args: root: /tmp/swh-storage/ slicing: 0:2/2:4/4:6 ``` which means, this uses: - a local storage instance whose db connection is to `softwareheritage-dev` local instance, - the objstorage uses a local objstorage instance whose: - `root` path is /tmp/swh-storage, - slicing scheme is `0:2/2:4/4:6`. This means that the identifier of the content (sha1) which will be stored on disk at first level with the first 2 hex characters, the second level with the next 2 hex characters and the third level with the next 2 hex characters. And finally the complete hash file holding the raw content. For example: 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c Note that the `root` path should exist on disk before starting the server. ### Starting the storage server If the python package has been properly installed (e.g. in a virtual env), you should be able to use the command: ``` (swh) :~/swh-storage$ swh storage rpc-serve storage.yml ``` This runs a local swh-storage api at 5002 port. ``` (swh) :~/swh-storage$ curl http://127.0.0.1:5002 Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

``` ### And then what? In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc...), you can define a remote storage with this snippet of yaml configuration. ``` storage: cls: remote args: url: http://localhost:5002/ ``` You could directly define a local storage with the following snippet: ``` storage: cls: local args: db: service=swh-dev objstorage: cls: pathslicing args: root: /home/storage/swh-storage/ slicing: 0:2/2:4/4:6 ``` Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing Provides-Extra: schemata Provides-Extra: journal diff --git a/swh.storage.egg-info/SOURCES.txt b/swh.storage.egg-info/SOURCES.txt index 60cd54e9..6edbaf6c 100644 --- a/swh.storage.egg-info/SOURCES.txt +++ b/swh.storage.egg-info/SOURCES.txt @@ -1,293 +1,292 @@ .gitignore .pre-commit-config.yaml AUTHORS CODE_OF_CONDUCT.md CONTRIBUTORS LICENSE MANIFEST.in Makefile Makefile.local README.md conftest.py mypy.ini pyproject.toml pytest.ini requirements-swh-journal.txt requirements-swh.txt requirements-test.txt requirements.txt setup.cfg setup.py tox.ini version.txt ./requirements-swh-journal.txt ./requirements-swh.txt ./requirements-test.txt ./requirements.txt bin/swh-storage-add-dir docs/.gitignore docs/Makefile docs/Makefile.local docs/archive-copies.rst docs/conf.py docs/extrinsic-metadata-specification.rst docs/index.rst docs/sql-storage.rst docs/_static/.placeholder docs/_templates/.placeholder docs/images/.gitignore docs/images/Makefile docs/images/swh-archive-copies.dia sql/.gitignore sql/Makefile sql/TODO sql/clusters.dot sql/bin/db-upgrade sql/bin/dot_add_content sql/doc/json sql/doc/json/.gitignore sql/doc/json/Makefile sql/doc/json/entity.lister_metadata.schema.json sql/doc/json/entity.metadata.schema.json sql/doc/json/entity_history.lister_metadata.schema.json sql/doc/json/entity_history.metadata.schema.json sql/doc/json/fetch_history.result.schema.json sql/doc/json/list_history.result.schema.json sql/doc/json/listable_entity.list_params.schema.json sql/doc/json/origin_visit.metadata.json sql/doc/json/tool.tool_configuration.schema.json sql/json/.gitignore sql/json/Makefile sql/json/entity.lister_metadata.schema.json sql/json/entity.metadata.schema.json sql/json/entity_history.lister_metadata.schema.json sql/json/entity_history.metadata.schema.json sql/json/fetch_history.result.schema.json sql/json/list_history.result.schema.json sql/json/listable_entity.list_params.schema.json sql/json/origin_visit.metadata.json sql/json/tool.tool_configuration.schema.json sql/upgrades/015.sql sql/upgrades/016.sql sql/upgrades/017.sql sql/upgrades/018.sql sql/upgrades/019.sql sql/upgrades/020.sql sql/upgrades/021.sql sql/upgrades/022.sql sql/upgrades/023.sql sql/upgrades/024.sql sql/upgrades/025.sql sql/upgrades/026.sql sql/upgrades/027.sql sql/upgrades/028.sql sql/upgrades/029.sql sql/upgrades/030.sql sql/upgrades/032.sql sql/upgrades/033.sql sql/upgrades/034.sql sql/upgrades/035.sql sql/upgrades/036.sql sql/upgrades/037.sql sql/upgrades/038.sql sql/upgrades/039.sql sql/upgrades/040.sql sql/upgrades/041.sql sql/upgrades/042.sql sql/upgrades/043.sql sql/upgrades/044.sql sql/upgrades/045.sql 
sql/upgrades/046.sql sql/upgrades/047.sql sql/upgrades/048.sql sql/upgrades/049.sql sql/upgrades/050.sql sql/upgrades/051.sql sql/upgrades/052.sql sql/upgrades/053.sql sql/upgrades/054.sql sql/upgrades/055.sql sql/upgrades/056.sql sql/upgrades/057.sql sql/upgrades/058.sql sql/upgrades/059.sql sql/upgrades/060.sql sql/upgrades/061.sql sql/upgrades/062.sql sql/upgrades/063.sql sql/upgrades/064.sql sql/upgrades/065.sql sql/upgrades/066.sql sql/upgrades/067.sql sql/upgrades/068.sql sql/upgrades/069.sql sql/upgrades/070.sql sql/upgrades/071.sql sql/upgrades/072.sql sql/upgrades/073.sql sql/upgrades/074.sql sql/upgrades/075.sql sql/upgrades/076.sql sql/upgrades/077.sql sql/upgrades/078.sql sql/upgrades/079.sql sql/upgrades/080.sql sql/upgrades/081.sql sql/upgrades/082.sql sql/upgrades/083.sql sql/upgrades/084.sql sql/upgrades/085.sql sql/upgrades/086.sql sql/upgrades/087.sql sql/upgrades/088.sql sql/upgrades/089.sql sql/upgrades/090.sql sql/upgrades/091.sql sql/upgrades/092.sql sql/upgrades/093.sql sql/upgrades/094.sql sql/upgrades/095.sql sql/upgrades/096.sql sql/upgrades/097.sql sql/upgrades/098.sql sql/upgrades/099.sql sql/upgrades/100.sql sql/upgrades/101.sql sql/upgrades/102.sql sql/upgrades/103.sql sql/upgrades/104.sql sql/upgrades/105.sql sql/upgrades/106.sql sql/upgrades/107.sql sql/upgrades/108.sql sql/upgrades/109.sql sql/upgrades/110.sql sql/upgrades/111.sql sql/upgrades/112.sql sql/upgrades/113.sql sql/upgrades/114.sql sql/upgrades/115.sql sql/upgrades/116.sql sql/upgrades/117.sql sql/upgrades/118.sql sql/upgrades/119.sql sql/upgrades/120.sql sql/upgrades/121.sql sql/upgrades/122.sql sql/upgrades/123.sql sql/upgrades/124.sql sql/upgrades/125.sql sql/upgrades/126.sql sql/upgrades/127.sql sql/upgrades/128.sql sql/upgrades/129.sql sql/upgrades/130.sql sql/upgrades/131.sql sql/upgrades/132.sql sql/upgrades/133.sql sql/upgrades/134.sql sql/upgrades/135.sql sql/upgrades/136.sql sql/upgrades/137.sql sql/upgrades/138.sql sql/upgrades/139.sql sql/upgrades/140.sql sql/upgrades/141.sql sql/upgrades/142.sql sql/upgrades/143.sql sql/upgrades/144.sql sql/upgrades/145.sql sql/upgrades/146.sql sql/upgrades/147.sql sql/upgrades/148.sql sql/upgrades/149.sql sql/upgrades/150.sql sql/upgrades/151.sql sql/upgrades/152.sql sql/upgrades/153.sql sql/upgrades/154.sql sql/upgrades/155.sql sql/upgrades/156.sql sql/upgrades/157.sql sql/upgrades/158.sql swh/__init__.py swh.storage.egg-info/PKG-INFO swh.storage.egg-info/SOURCES.txt swh.storage.egg-info/dependency_links.txt swh.storage.egg-info/entry_points.txt swh.storage.egg-info/requires.txt swh.storage.egg-info/top_level.txt swh/storage/__init__.py swh/storage/backfill.py swh/storage/buffer.py swh/storage/cli.py swh/storage/common.py swh/storage/converters.py swh/storage/db.py swh/storage/exc.py swh/storage/filter.py swh/storage/fixer.py swh/storage/in_memory.py swh/storage/interface.py swh/storage/metrics.py swh/storage/objstorage.py swh/storage/py.typed swh/storage/pytest_plugin.py swh/storage/replay.py swh/storage/retry.py swh/storage/storage.py swh/storage/utils.py swh/storage/validate.py swh/storage/writer.py swh/storage/algos/__init__.py swh/storage/algos/diff.py swh/storage/algos/dir_iterators.py swh/storage/algos/origin.py swh/storage/algos/revisions_walker.py swh/storage/algos/snapshot.py swh/storage/api/__init__.py swh/storage/api/client.py swh/storage/api/serializers.py swh/storage/api/server.py swh/storage/cassandra/__init__.py swh/storage/cassandra/common.py swh/storage/cassandra/converters.py swh/storage/cassandra/cql.py 
swh/storage/cassandra/schema.py swh/storage/cassandra/storage.py swh/storage/sql/10-swh-init.sql swh/storage/sql/20-swh-enums.sql swh/storage/sql/30-swh-schema.sql swh/storage/sql/40-swh-func.sql swh/storage/sql/60-swh-indexes.sql swh/storage/tests/__init__.py swh/storage/tests/conftest.py -swh/storage/tests/generate_data_test.py swh/storage/tests/storage_data.py swh/storage/tests/test_api_client.py swh/storage/tests/test_backfill.py swh/storage/tests/test_buffer.py swh/storage/tests/test_cassandra.py swh/storage/tests/test_cassandra_converters.py swh/storage/tests/test_cli.py swh/storage/tests/test_converters.py swh/storage/tests/test_exception.py swh/storage/tests/test_filter.py swh/storage/tests/test_in_memory.py swh/storage/tests/test_init.py swh/storage/tests/test_kafka_writer.py swh/storage/tests/test_metrics.py swh/storage/tests/test_pytest_plugin.py swh/storage/tests/test_replay.py swh/storage/tests/test_retry.py swh/storage/tests/test_revision_bw_compat.py swh/storage/tests/test_server.py swh/storage/tests/test_storage.py swh/storage/tests/test_utils.py swh/storage/tests/algos/__init__.py swh/storage/tests/algos/test_diff.py swh/storage/tests/algos/test_dir_iterator.py swh/storage/tests/algos/test_origin.py swh/storage/tests/algos/test_revisions_walker.py swh/storage/tests/algos/test_snapshot.py swh/storage/tests/data/storage.yml \ No newline at end of file diff --git a/swh/storage/interface.py b/swh/storage/interface.py index cd5c214b..ffe1ef6e 100644 --- a/swh/storage/interface.py +++ b/swh/storage/interface.py @@ -1,1293 +1,1293 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import datetime from typing import Any, Dict, Iterable, List, Optional, Union from swh.core.api import remote_api_endpoint from swh.model.identifiers import SWHID from swh.model.model import ( Content, Directory, Origin, OriginVisit, OriginVisitStatus, Revision, Release, Snapshot, SkippedContent, MetadataAuthority, MetadataAuthorityType, MetadataFetcher, MetadataTargetType, RawExtrinsicMetadata, ) def deprecated(f): f.deprecated_endpoint = True return f class StorageInterface: @remote_api_endpoint("check_config") def check_config(self, *, check_write): """Check that the storage is configured and ready to go.""" ... @remote_api_endpoint("content/add") def content_add(self, content: Iterable[Content]) -> Dict: """Add content blobs to the storage Args: contents (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - data (bytes): the actual content - length (int): content length - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden Raises: The following exceptions can occur: - HashCollision in case of collision - Any other exceptions raise by the db In case of errors, some of the content may have been stored in the DB and in the objstorage. Since additions to both idempotent, that should not be a problem. Returns: Summary dict with the following keys and associated values: content:add: New contents added content:add:bytes: Sum of the contents' length data """ ... @remote_api_endpoint("content/update") def content_update(self, content, keys=[]): """Update content blobs to the storage. Does nothing for unknown contents or skipped ones. 
Args: content (iterable): iterable of dictionaries representing individual pieces of content to update. Each dictionary has the following keys: - data (bytes): the actual content - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent keys (list): List of keys (str) whose values needs an update, e.g., new hash column """ ... @remote_api_endpoint("content/add_metadata") def content_add_metadata(self, content: Iterable[Content]) -> Dict: """Add content metadata to the storage (like `content_add`, but without inserting to the objstorage). Args: content (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent - reason (str): if status = absent, the reason why - origin (int): if status = absent, the origin we saw the content in - ctime (datetime): time of insertion in the archive Returns: Summary dict with the following key and associated values: content:add: New contents added skipped_content:add: New skipped contents (no data) added """ ... @remote_api_endpoint("content/data") def content_get(self, content): """Retrieve in bulk contents and their data. This generator yields exactly as many items than provided sha1 identifiers, but callers should not assume this will always be true. It may also yield `None` values in case an object was not found. Args: content: iterables of sha1 Yields: Dict[str, bytes]: Generates streams of contents as dict with their raw data: - sha1 (bytes): content id - data (bytes): content's raw data Raises: ValueError in case of too much contents are required. cf. BULK_BLOCK_CONTENT_LEN_MAX """ ... @deprecated @remote_api_endpoint("content/range") def content_get_range(self, start, end, limit=1000): """Retrieve contents within range [start, end] bound by limit. Note that this function may return more than one blob per hash. The limit is enforced with multiplicity (ie. two blobs with the same hash will count twice toward the limit). Args: **start** (bytes): Starting identifier range (expected smaller than end) **end** (bytes): Ending identifier range (expected larger than start) **limit** (int): Limit result (default to 1000) Returns: a dict with keys: - contents [dict]: iterable of contents in between the range. - next (bytes): There remains content in the range starting from this next sha1 """ ... @remote_api_endpoint("content/partition") def content_get_partition( self, partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None, ): """Splits contents into nb_partitions, and returns one of these based on partition_id (which must be in [0, nb_partitions-1]) There is no guarantee on how the partitioning is done, or the result order. Args: partition_id (int): index of the partition to fetch nb_partitions (int): total number of partitions to split into limit (int): Limit result (default to 1000) page_token (Optional[str]): opaque token used for pagination. Returns: a dict with keys: - contents (List[dict]): iterable of contents in the partition. - **next_page_token** (Optional[str]): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. """ ... 
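    # Illustrative usage sketch (not part of this interface): iterating over one
    # partition of the contents, following the pagination contract documented
    # above. ``storage`` is assumed to be any object implementing
    # StorageInterface, and ``handle`` is a hypothetical callback.
    #
    #     page_token = None
    #     while True:
    #         page = storage.content_get_partition(
    #             partition_id=0, nb_partitions=16, page_token=page_token
    #         )
    #         for content in page["contents"]:
    #             handle(content)
    #         page_token = page.get("next_page_token")
    #         if page_token is None:
    #             break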
@remote_api_endpoint("content/metadata") def content_get_metadata(self, contents: List[bytes]) -> Dict[bytes, List[Dict]]: """Retrieve content metadata in bulk Args: content: iterable of content identifiers (sha1) Returns: a dict with keys the content's sha1 and the associated value either the existing content's metadata or None if the content does not exist. """ ... @remote_api_endpoint("content/missing") def content_missing(self, content, key_hash="sha1"): """List content missing from storage Args: content ([dict]): iterable of dictionaries whose keys are either 'length' or an item of :data:`swh.model.hashutil.ALGORITHMS`; mapped to the corresponding checksum (or length). key_hash (str): name of the column to use as hash id result (default: 'sha1') Returns: iterable ([bytes]): missing content ids (as per the key_hash column) Raises: TODO: an exception when we get a hash collision. """ ... @remote_api_endpoint("content/missing/sha1") def content_missing_per_sha1(self, contents): """List content missing from storage based only on sha1. Args: contents: Iterable of sha1 to check for absence. Returns: iterable: missing ids Raises: TODO: an exception when we get a hash collision. """ ... @remote_api_endpoint("content/missing/sha1_git") def content_missing_per_sha1_git(self, contents): """List content missing from storage based only on sha1_git. Args: contents (Iterable): An iterable of content id (sha1_git) Yields: missing contents sha1_git """ ... @remote_api_endpoint("content/present") def content_find(self, content): """Find a content hash in db. Args: content: a dictionary representing one content hash, mapping checksum algorithm names (see swh.model.hashutil.ALGORITHMS) to checksum values Returns: a triplet (sha1, sha1_git, sha256) if the content exist or None otherwise. Raises: ValueError: in case the key of the dictionary is not sha1, sha1_git nor sha256. """ ... @remote_api_endpoint("content/get_random") def content_get_random(self): """Finds a random content id. Returns: a sha1_git """ ... @remote_api_endpoint("content/skipped/add") def skipped_content_add(self, content: Iterable[SkippedContent]) -> Dict: """Add contents to the skipped_content list, which contains (partial) information about content missing from the archive. Args: contents (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - length (Optional[int]): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum; each is optional - status (str): must be "absent" - reason (str): the reason why the content is absent - origin (int): if status = absent, the origin we saw the content in Raises: The following exceptions can occur: - HashCollision in case of collision - Any other exceptions raise by the backend In case of errors, some content may have been stored in the DB and in the objstorage. Since additions to both idempotent, that should not be a problem. Returns: Summary dict with the following key and associated values: skipped_content:add: New skipped contents (no data) added """ ... @remote_api_endpoint("content/skipped/missing") def skipped_content_missing(self, contents): """List skipped_content missing from storage Args: content: iterable of dictionaries containing the data for each checksum algorithm. Returns: iterable: missing signatures """ ... 
@remote_api_endpoint("directory/add") def directory_add(self, directories: Iterable[Directory]) -> Dict: """Add directories to the storage Args: directories (iterable): iterable of dictionaries representing the individual directories to add. Each dict has the following keys: - id (sha1_git): the id of the directory to add - entries (list): list of dicts for each entry in the directory. Each dict has the following keys: - name (bytes) - type (one of 'file', 'dir', 'rev'): type of the directory entry (file, directory, revision) - target (sha1_git): id of the object pointed at by the directory entry - perms (int): entry permissions Returns: Summary dict of keys with associated count as values: directory:add: Number of directories actually added """ ... @remote_api_endpoint("directory/missing") def directory_missing(self, directories): """List directories missing from storage Args: directories (iterable): an iterable of directory ids Yields: missing directory ids """ ... @remote_api_endpoint("directory/ls") def directory_ls(self, directory, recursive=False): """Get entries for one directory. Args: - directory: the directory to list entries from. - recursive: if flag on, this list recursively from this directory. Returns: List of entries for such directory. If `recursive=True`, names in the path of a dir/file not at the root are concatenated with a slash (`/`). """ ... @remote_api_endpoint("directory/path") def directory_entry_get_by_path(self, directory, paths): """Get the directory entry (either file or dir) from directory with path. Args: - directory: sha1 of the top level directory - paths: path to lookup from the top level directory. From left (top) to right (bottom). Returns: The corresponding directory entry if found, None otherwise. """ ... @remote_api_endpoint("directory/get_random") def directory_get_random(self): """Finds a random directory id. Returns: a sha1_git """ ... @remote_api_endpoint("revision/add") def revision_add(self, revisions: Iterable[Revision]) -> Dict: """Add revisions to the storage Args: revisions (Iterable[dict]): iterable of dictionaries representing the individual revisions to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the revision to add - **date** (:class:`dict`): date the revision was written - **committer_date** (:class:`dict`): date the revision got added to the origin - **type** (one of 'git', 'tar'): type of the revision added - **directory** (:class:`sha1_git`): the directory the revision points at - **message** (:class:`bytes`): the message associated with the revision - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **committer** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **metadata** (:class:`jsonb`): extra information as dictionary - **synthetic** (:class:`bool`): revision's nature (tarball, directory creates synthetic revision`) - **parents** (:class:`list[sha1_git]`): the parents of this revision date dictionaries have the form defined in :mod:`swh.model`. Returns: Summary dict of keys with associated count as values revision:add: New objects actually stored in db """ ... @remote_api_endpoint("revision/missing") def revision_missing(self, revisions): """List revisions missing from storage Args: revisions (iterable): revision ids Yields: missing revision ids """ ... 
@remote_api_endpoint("revision") def revision_get(self, revisions): """Get all revisions from storage Args: revisions: an iterable of revision ids Returns: iterable: an iterable of revisions as dictionaries (or None if the revision doesn't exist) """ ... @remote_api_endpoint("revision/log") def revision_log(self, revisions, limit=None): """Fetch revision entry from the given root revisions. Args: revisions: array of root revision to lookup limit: limitation on the output result. Default to None. Yields: List of revision log from such revisions root. """ ... @remote_api_endpoint("revision/shortlog") def revision_shortlog(self, revisions, limit=None): """Fetch the shortlog for the given revisions Args: revisions: list of root revisions to lookup limit: depth limitation for the output Yields: a list of (id, parents) tuples. """ ... @remote_api_endpoint("revision/get_random") def revision_get_random(self): """Finds a random revision id. Returns: a sha1_git """ ... @remote_api_endpoint("release/add") def release_add(self, releases: Iterable[Release]) -> Dict: """Add releases to the storage Args: releases (Iterable[dict]): iterable of dictionaries representing the individual releases to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the release to add - **revision** (:class:`sha1_git`): id of the revision the release points to - **date** (:class:`dict`): the date the release was made - **name** (:class:`bytes`): the name of the release - **comment** (:class:`bytes`): the comment associated with the release - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email the date dictionary has the form defined in :mod:`swh.model`. Returns: Summary dict of keys with associated count as values release:add: New objects contents actually stored in db """ ... @remote_api_endpoint("release/missing") def release_missing(self, releases): """List releases missing from storage Args: releases: an iterable of release ids Returns: a list of missing release ids """ ... @remote_api_endpoint("release") def release_get(self, releases): """Given a list of sha1, return the releases's information Args: releases: list of sha1s Yields: dicts with the same keys as those given to `release_add` (or ``None`` if a release does not exist) """ ... @remote_api_endpoint("release/get_random") def release_get_random(self): """Finds a random release id. Returns: a sha1_git """ ... @remote_api_endpoint("snapshot/add") def snapshot_add(self, snapshots: Iterable[Snapshot]) -> Dict: """Add snapshots to the storage. Args: snapshot ([dict]): the snapshots to add, containing the following keys: - **id** (:class:`bytes`): id of the snapshot - **branches** (:class:`dict`): branches the snapshot contains, mapping the branch name (:class:`bytes`) to the branch target, itself a :class:`dict` (or ``None`` if the branch points to an unknown object) - **target_type** (:class:`str`): one of ``content``, ``directory``, ``revision``, ``release``, ``snapshot``, ``alias`` - **target** (:class:`bytes`): identifier of the target (currently a ``sha1_git`` for all object kinds, or the name of the target branch for aliases) Raises: ValueError: if the origin or visit id does not exist. Returns: Summary dict of keys with associated count as values snapshot:add: Count of object actually stored in db """ ... 
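    # Illustrative sketch (not part of this interface): shape of a single
    # snapshot as described in the docstring above; all values are made up.
    #
    #     {
    #         "id": b"<20-byte sha1_git of the snapshot>",
    #         "branches": {
    #             b"refs/heads/master": {
    #                 "target": b"<20-byte sha1_git of a revision>",
    #                 "target_type": "revision",
    #             },
    #             b"HEAD": {
    #                 "target": b"refs/heads/master",  # alias targets name a branch
    #                 "target_type": "alias",
    #             },
    #         },
    #     }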
@remote_api_endpoint("snapshot/missing") def snapshot_missing(self, snapshots): """List snapshots missing from storage Args: snapshots (iterable): an iterable of snapshot ids Yields: missing snapshot ids """ ... @remote_api_endpoint("snapshot") def snapshot_get(self, snapshot_id): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: a dict with three keys: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ ... @remote_api_endpoint("snapshot/by_origin_visit") def snapshot_get_by_origin_visit(self, origin, visit): """Get the content, possibly partial, of a snapshot for the given origin visit The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: origin (int): the origin identifier visit (int): the visit identifier Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ ... @remote_api_endpoint("snapshot/count_branches") def snapshot_count_branches(self, snapshot_id): """Count the number of branches in the snapshot with the given id Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: A dict whose keys are the target types of branches and values their corresponding amount """ ... @remote_api_endpoint("snapshot/get_branches") def snapshot_get_branches( self, snapshot_id, branches_from=b"", branches_count=1000, target_types=None ): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. Args: snapshot_id (bytes): identifier of the snapshot branches_from (bytes): optional parameter used to skip branches whose name is lesser than it before returning them branches_count (int): optional parameter used to restrain the amount of returned branches target_types (list): optional parameter used to filter the target types of branch to return (possible values that can be contained in that list are `'content', 'directory', 'revision', 'release', 'snapshot', 'alias'`) Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than `branches_count` branches after `branches_from` included. """ ... @remote_api_endpoint("snapshot/get_random") def snapshot_get_random(self): """Finds a random snapshot id. 
Returns: a sha1_git """ ... @remote_api_endpoint("origin/visit/add") def origin_visit_add(self, visits: Iterable[OriginVisit]) -> Iterable[OriginVisit]: """Add visits to storage. If the visits have no id, they will be created and assigned one. The resulted visits are visits with their visit id set. Args: visits: Iterable of OriginVisit objects to add Raises: StorageArgumentException if some origin visit reference unknown origins Returns: Iterable[OriginVisit] stored """ ... @remote_api_endpoint("origin/visit_status/add") def origin_visit_status_add( self, visit_statuses: Iterable[OriginVisitStatus], ) -> None: """Add origin visit statuses. If there is already a status for the same origin and visit id at the same date, the new one will be either dropped or will replace the existing one (it is unspecified which one of these two behaviors happens). Args: visit_statuses: origin visit statuses to add Raises: StorageArgumentException if the origin of the visit status is unknown """ ... @remote_api_endpoint("origin/visit/get") def origin_visit_get( self, origin: str, last_visit: Optional[int] = None, limit: Optional[int] = None, order: str = "asc", ) -> Iterable[Dict[str, Any]]: """Retrieve all the origin's visit's information. Args: origin: The visited origin last_visit: Starting point from which listing the next visits Default to None limit: Number of results to return from the last visit. Default to None order: Order on visit id fields to list origin visits (default to asc) Yields: List of visits. """ ... @remote_api_endpoint("origin/visit/find_by_date") def origin_visit_find_by_date( self, origin: str, visit_date: datetime.datetime ) -> Optional[Dict[str, Any]]: """Retrieves the origin visit whose date is closest to the provided timestamp. In case of a tie, the visit with largest id is selected. Args: origin: origin (URL) visit_date: expected visit date Returns: A visit """ ... @remote_api_endpoint("origin/visit/getby") def origin_visit_get_by(self, origin: str, visit: int) -> Optional[Dict[str, Any]]: """Retrieve origin visit's information. Args: origin: origin (URL) visit: visit id Returns: The information on that particular (origin, visit) or None if it does not exist """ ... @remote_api_endpoint("origin/visit/get_latest") def origin_visit_get_latest( self, origin: str, type: Optional[str] = None, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[Dict[str, Any]]: """Get the latest origin visit for the given origin, optionally looking only for those with one of the given allowed_statuses or for those with a snapshot. Args: origin: origin URL type: Optional visit type to filter on (e.g git, tar, dsc, svn, hg, npm, pypi, ...) allowed_statuses: list of visit statuses considered to find the latest visit. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. require_snapshot: If True, only a visit with a snapshot will be returned. Returns: dict: a dict with the following keys: - **origin**: the URL of the origin - **visit**: origin visit id - **type**: type of loader used for the visit - **date**: timestamp of such visit - **status**: Visit's new status - **metadata**: Data associated to the visit - **snapshot** (Optional[sha1_git]): identifier of the snapshot associated to the visit """ ... 
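    # Illustrative usage sketch (not part of this interface): fetching the
    # snapshot of the latest full visit of an origin, per the docstring above.
    # ``storage`` is assumed to implement StorageInterface; the URL is made up.
    #
    #     visit = storage.origin_visit_get_latest(
    #         "https://example.org/project.git",
    #         allowed_statuses=["full"],
    #         require_snapshot=True,
    #     )
    #     snapshot_id = visit["snapshot"] if visit else None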
@remote_api_endpoint("origin/visit_status/get_latest") def origin_visit_status_get_latest( self, origin_url: str, visit: int, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[OriginVisitStatus]: """Get the latest origin visit status for the given origin visit, optionally looking only for those with one of the given allowed_statuses or with a snapshot. Args: origin: origin URL allowed_statuses: list of visit statuses considered to find the latest visit. Possible values are {created, ongoing, partial, full}. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. require_snapshot: If True, only a visit with a snapshot will be returned. Returns: The OriginVisitStatus matching the criteria """ ... @remote_api_endpoint("origin/visit/get_random") def origin_visit_get_random(self, type: str) -> Optional[Dict[str, Any]]: """Randomly select one successful origin visit with made in the last 3 months. Returns: dict representing an origin visit, in the same format as :py:meth:`origin_visit_get`. """ ... @remote_api_endpoint("object/find_by_sha1_git") def object_find_by_sha1_git(self, ids): """Return the objects found with the given ids. Args: ids: a generator of sha1_gits Returns: dict: a mapping from id to the list of objects found. Each object found is itself a dict with keys: - sha1_git: the input id - type: the type of object found """ ... @remote_api_endpoint("origin/get") def origin_get(self, origins): """Return origins, either all identified by their ids or all identified by tuples (type, url). If the url is given and the type is omitted, one of the origins with that url is returned. Args: origin: a list of dictionaries representing the individual origins to find. These dicts have the key url: - url (bytes): the url the origin points to Returns: dict: the origin dictionary with the keys: - id: origin's id - url: origin's url Raises: ValueError: if the url or the id don't exist. """ ... @remote_api_endpoint("origin/get_sha1") def origin_get_by_sha1(self, sha1s): """Return origins, identified by the sha1 of their URLs. Args: sha1s (list[bytes]): a list of sha1s Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`, or None if an origin matching the sha1 is not found. """ ... @deprecated @remote_api_endpoint("origin/get_range") def origin_get_range(self, origin_from=1, origin_count=100): """Retrieve ``origin_count`` origins whose ids are greater or equal than ``origin_from``. Origins are sorted by id before retrieving them. Args: origin_from (int): the minimum id of origins to retrieve origin_count (int): the maximum number of origins to retrieve Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`. """ ... @remote_api_endpoint("origin/list") def origin_list(self, page_token: Optional[str] = None, limit: int = 100) -> dict: """Returns the list of origins Args: page_token: opaque token used for pagination. limit: the maximum number of results to return Returns: dict: dict with the following keys: - **next_page_token** (str, optional): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. - **origins** (List[dict]): list of origins, as returned by `origin_get`. """ ... 
@remote_api_endpoint("origin/search") def origin_search( self, url_pattern, offset=0, limit=50, regexp=False, with_visit=False ): """Search for origins whose urls contain a provided string pattern or match a provided regular expression. The search is performed in a case insensitive way. Args: url_pattern (str): the string pattern to search for in origin urls offset (int): number of found origins to skip before returning results limit (int): the maximum number of found origins to return regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`. """ ... @deprecated @remote_api_endpoint("origin/count") def origin_count(self, url_pattern, regexp=False, with_visit=False): """Count origins whose urls contain a provided string pattern or match a provided regular expression. The pattern search in origin urls is performed in a case insensitive way. Args: url_pattern (str): the string pattern to search for in origin urls regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Returns: int: The number of origins matching the search criterion. """ ... @remote_api_endpoint("origin/add_multi") def origin_add(self, origins: Iterable[Origin]) -> Dict[str, int]: """Add origins to the storage Args: origins: list of dictionaries representing the individual origins, with the following keys: - type: the origin type ('git', 'svn', 'deb', ...) - url (bytes): the url the origin points to Returns: Summary dict of keys with associated count as values origin:add: Count of object actually stored in db """ ... @deprecated @remote_api_endpoint("origin/add") def origin_add_one(self, origin: Origin) -> str: """Add origin to the storage Args: origin: dictionary representing the individual origin to add. This dict has the following keys: - type (FIXME: enum TBD): the origin type ('git', 'wget', ...) - url (bytes): the url the origin points to Returns: the id of the added origin, or of the identical one that already exists. """ ... def stat_counters(self): """compute statistics about the number of tuples in various tables Returns: dict: a dictionary mapping textual labels (e.g., content) to integer values (e.g., the number of tuples in table content) """ ... def refresh_stat_counters(self): """Recomputes the statistics for `stat_counters`.""" ... @remote_api_endpoint("object_metadata/add") def object_metadata_add(self, metadata: Iterable[RawExtrinsicMetadata],) -> None: """Add extrinsic metadata on objects (contents, directories, ...). The authority and fetcher must be known to the storage before using this endpoint. If there is already metadata for the same object, authority, fetcher, and at the same date; the new one will be either dropped or will replace the existing one (it is unspecified which one of these two behaviors happens). Args: metadata: iterable of RawExtrinsicMetadata objects to be inserted. """ ... 
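    # Illustrative call-order sketch (not part of this interface): the authority
    # and fetcher referenced by RawExtrinsicMetadata objects must already be
    # registered when object_metadata_add() is called. ``storage``,
    # ``authority``, ``fetcher`` and ``metadata_objects`` are assumed to exist.
    #
    #     storage.metadata_authority_add([authority])    # MetadataAuthority objects
    #     storage.metadata_fetcher_add([fetcher])        # MetadataFetcher objects
    #     storage.object_metadata_add(metadata_objects)  # RawExtrinsicMetadata objects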
@remote_api_endpoint("object_metadata/get") def object_metadata_get( self, object_type: MetadataTargetType, id: Union[str, SWHID], authority: MetadataAuthority, after: Optional[datetime.datetime] = None, page_token: Optional[bytes] = None, limit: int = 1000, ) -> Dict[str, Union[Optional[bytes], List[RawExtrinsicMetadata]]]: """Retrieve list of all object_metadata entries for the id Args: object_type: one of the values of swh.model.model.MetadataTargetType id: an URL if object_type is 'origin', else a core SWHID authority: a dict containing keys `type` and `url`. after: minimum discovery_date for a result to be returned page_token: opaque token, used to get the next page of results limit: maximum number of results to be returned Returns: dict with keys `next_page_token` and `results`. `next_page_token` is an opaque token that is used to get the next page of results, or `None` if there are no more results. `results` is a list of RawExtrinsicMetadata objects: """ ... @remote_api_endpoint("metadata_fetcher/add") def metadata_fetcher_add(self, fetchers: Iterable[MetadataFetcher],) -> None: """Add new metadata fetchers to the storage. Their `name` and `version` together are unique identifiers of this fetcher; and `metadata` is an arbitrary dict of JSONable data with information about this fetcher, which must not be `None` (but may be empty). Args: fetchers: iterable of MetadataFetcher to be inserted """ ... @remote_api_endpoint("metadata_fetcher/get") def metadata_fetcher_get( self, name: str, version: str ) -> Optional[MetadataFetcher]: """Retrieve information about a fetcher Args: name: the name of the fetcher version: version of the fetcher Returns: a MetadataFetcher object (with a non-None metadata field) if it is known, else None. """ ... @remote_api_endpoint("metadata_authority/add") def metadata_authority_add(self, authorities: Iterable[MetadataAuthority]) -> None: """Add new metadata authorities to the storage. Their `type` and `url` together are unique identifiers of this authority; and `metadata` is an arbitrary dict of JSONable data with information about this authority, which must not be `None` (but may be empty). Args: authorities: iterable of MetadataAuthority to be inserted """ ... @remote_api_endpoint("metadata_authority/get") def metadata_authority_get( self, type: MetadataAuthorityType, url: str ) -> Optional[MetadataAuthority]: """Retrieve information about an authority Args: - type: one of "deposit", "forge", or "registry" + type: one of "deposit_client", "forge", or "registry" url: unique URI identifying the authority Returns: a MetadataAuthority object (with a non-None metadata field) if it is known, else None. """ ... @deprecated @remote_api_endpoint("algos/diff_directories") def diff_directories(self, from_dir, to_dir, track_renaming=False): """Compute the list of file changes introduced between two arbitrary directories (insertion / deletion / modification / renaming of files). Args: from_dir (bytes): identifier of the directory to compare from to_dir (bytes): identifier of the directory to compare to track_renaming (bool): whether or not to track files renaming Returns: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories` for more details). """ ... @deprecated @remote_api_endpoint("algos/diff_revisions") def diff_revisions(self, from_rev, to_rev, track_renaming=False): """Compute the list of file changes introduced between two arbitrary revisions (insertion / deletion / modification / renaming of files). 
Args: from_rev (bytes): identifier of the revision to compare from to_rev (bytes): identifier of the revision to compare to track_renaming (bool): whether or not to track files renaming Returns: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories` for more details). """ ... @deprecated @remote_api_endpoint("algos/diff_revision") def diff_revision(self, revision, track_renaming=False): """Compute the list of file changes introduced by a specific revision (insertion / deletion / modification / renaming of files) by comparing it against its first parent. Args: revision (bytes): identifier of the revision from which to compute the list of files changes track_renaming (bool): whether or not to track files renaming Returns: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories` for more details). """ ... @remote_api_endpoint("clear/buffer") def clear_buffers(self, object_types: Optional[Iterable[str]] = None) -> None: """For backend storages (pg, storage, in-memory), this is a noop operation. For proxy storages (especially filter, buffer), this is an operation which cleans internal state. """ @remote_api_endpoint("flush") def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: """For backend storages (pg, storage, in-memory), this is expected to be a noop operation. For proxy storages (especially buffer), this is expected to trigger actual writes to the backend. """ ... diff --git a/swh/storage/pytest_plugin.py b/swh/storage/pytest_plugin.py index 59760941..74682f64 100644 --- a/swh/storage/pytest_plugin.py +++ b/swh/storage/pytest_plugin.py @@ -1,264 +1,264 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import glob from os import path, environ from typing import Dict, Iterable, Union import pytest import swh.storage from pytest_postgresql import factories from pytest_postgresql.janitor import DatabaseJanitor, psycopg2, Version from swh.core.utils import numfile_sortkey as sortkey from swh.model.model import ( BaseModel, Content, Directory, MetadataAuthority, MetadataFetcher, Origin, OriginVisit, Person, RawExtrinsicMetadata, Release, Revision, SkippedContent, Snapshot, ) from swh.storage import get_storage from swh.storage.tests.storage_data import data SQL_DIR = path.join(path.dirname(swh.storage.__file__), "sql") environ["LC_ALL"] = "C.UTF-8" DUMP_FILES = path.join(SQL_DIR, "*.sql") @pytest.fixture def swh_storage_backend_config(postgresql_proc, swh_storage_postgresql): """Basic pg storage configuration with no journal collaborator (to avoid pulling optional dependency on clients of this fixture) """ yield { "cls": "local", "db": "postgresql://{user}@{host}:{port}/{dbname}".format( host=postgresql_proc.host, port=postgresql_proc.port, user="postgres", dbname="tests", ), "objstorage": {"cls": "memory", "args": {}}, } @pytest.fixture def swh_storage(swh_storage_backend_config): return get_storage(**swh_storage_backend_config) # the postgres_fact factory fixture below is mostly a copy of the code # from pytest-postgresql. We need a custom version here to be able to # specify our version of the DBJanitor we use. def postgresql_fact(process_fixture_name, db_name=None, dump_files=DUMP_FILES): @pytest.fixture def postgresql_factory(request): """ Fixture factory for PostgreSQL. 
:param FixtureRequest request: fixture request object :rtype: psycopg2.connection :returns: postgresql client """ config = factories.get_config(request) if not psycopg2: raise ImportError("No module named psycopg2. Please install it.") proc_fixture = request.getfixturevalue(process_fixture_name) # _, config = try_import('psycopg2', request) pg_host = proc_fixture.host pg_port = proc_fixture.port pg_user = proc_fixture.user pg_options = proc_fixture.options pg_db = db_name or config["dbname"] with SwhDatabaseJanitor( pg_user, pg_host, pg_port, pg_db, proc_fixture.version, dump_files=dump_files, ): connection = psycopg2.connect( dbname=pg_db, user=pg_user, host=pg_host, port=pg_port, options=pg_options, ) yield connection connection.close() return postgresql_factory swh_storage_postgresql = postgresql_fact("postgresql_proc") # This version of the DatabaseJanitor implement a different setup/teardown # behavior than than the stock one: instead of dropping, creating and # initializing the database for each test, it create and initialize the db only # once, then it truncate the tables. This is needed to have acceptable test # performances. class SwhDatabaseJanitor(DatabaseJanitor): def __init__( self, user: str, host: str, port: str, db_name: str, version: Union[str, float, Version], dump_files: str = DUMP_FILES, ) -> None: super().__init__(user, host, port, db_name, version) self.dump_files = sorted(glob.glob(dump_files), key=sortkey) def db_setup(self): with psycopg2.connect( dbname=self.db_name, user=self.user, host=self.host, port=self.port, ) as cnx: with cnx.cursor() as cur: for fname in self.dump_files: with open(fname) as fobj: sql = fobj.read().replace("concurrently", "").strip() if sql: cur.execute(sql) cnx.commit() def db_reset(self): with psycopg2.connect( dbname=self.db_name, user=self.user, host=self.host, port=self.port, ) as cnx: with cnx.cursor() as cur: cur.execute( "SELECT table_name FROM information_schema.tables " "WHERE table_schema = %s", ("public",), ) tables = set(table for (table,) in cur.fetchall()) for table in tables: cur.execute("truncate table %s cascade" % table) cur.execute( "SELECT sequence_name FROM information_schema.sequences " "WHERE sequence_schema = %s", ("public",), ) seqs = set(seq for (seq,) in cur.fetchall()) for seq in seqs: cur.execute("ALTER SEQUENCE %s RESTART;" % seq) cnx.commit() def init(self): with self.cursor() as cur: cur.execute( "SELECT COUNT(1) FROM pg_database WHERE datname=%s;", (self.db_name,) ) db_exists = cur.fetchone()[0] == 1 if db_exists: cur.execute( "UPDATE pg_database SET datallowconn=true " "WHERE datname = %s;", (self.db_name,), ) if db_exists: self.db_reset() else: with self.cursor() as cur: cur.execute('CREATE DATABASE "{}";'.format(self.db_name)) self.db_setup() def drop(self): pid_column = "pid" with self.cursor() as cur: cur.execute( "UPDATE pg_database SET datallowconn=false " "WHERE datname = %s;", (self.db_name,), ) cur.execute( "SELECT pg_terminate_backend(pg_stat_activity.{})" "FROM pg_stat_activity " "WHERE pg_stat_activity.datname = %s;".format(pid_column), (self.db_name,), ) @pytest.fixture def sample_data() -> Dict: """Pre-defined sample storage object data to manipulate Returns: Dict of data (keys: content, directory, revision, release, person, origin) """ return { "content": [data.cont, data.cont2], "content_metadata": [data.cont3], "skipped_content": [data.skipped_cont, data.skipped_cont2], "person": [data.person], - "directory": [data.dir2, data.dir], + "directory": [data.dir2, data.dir, data.dir3, 
data.dir4], "revision": [data.revision, data.revision2, data.revision3], "release": [data.release, data.release2, data.release3], "snapshot": [data.snapshot, data.empty_snapshot, data.complete_snapshot], "origin": [data.origin, data.origin2], "origin_visit": [data.origin_visit, data.origin_visit2, data.origin_visit3], "fetcher": [data.metadata_fetcher.to_dict()], "authority": [data.metadata_authority.to_dict()], "origin_metadata": [ data.origin_metadata.to_dict(), data.origin_metadata2.to_dict(), ], } # FIXME: Add the metadata keys when we can (right now, we cannot as the data model # changed but not the endpoints yet) OBJECT_FACTORY = { "content": Content.from_dict, "content_metadata": Content.from_dict, "skipped_content": SkippedContent.from_dict, "person": Person.from_dict, "directory": Directory.from_dict, "revision": Revision.from_dict, "release": Release.from_dict, "snapshot": Snapshot.from_dict, "origin": Origin.from_dict, "origin_visit": OriginVisit.from_dict, "fetcher": MetadataFetcher.from_dict, "authority": MetadataAuthority.from_dict, "origin_metadata": RawExtrinsicMetadata.from_dict, } @pytest.fixture def sample_data_model(sample_data) -> Dict[str, Iterable[BaseModel]]: """Pre-defined sample storage object model to manipulate Returns: Dict of data (keys: content, directory, revision, release, person, origin, ...) values list of object data model with the corresponding types """ return { object_type: [convert_fn(obj) for obj in sample_data[object_type]] for object_type, convert_fn in OBJECT_FACTORY.items() } diff --git a/swh/storage/sql/30-swh-schema.sql b/swh/storage/sql/30-swh-schema.sql index d267d380..bb7a8044 100644 --- a/swh/storage/sql/30-swh-schema.sql +++ b/swh/storage/sql/30-swh-schema.sql @@ -1,499 +1,499 @@ --- --- SQL implementation of the Software Heritage data model --- -- schema versions create table dbversion ( version int primary key, release timestamptz, description text ); comment on table dbversion is 'Details of current db version'; comment on column dbversion.version is 'SQL schema version'; comment on column dbversion.release is 'Version deployment timestamp'; comment on column dbversion.description is 'Release description'; -- latest schema version insert into dbversion(version, release, description) values(158, now(), 'Work In Progress'); -- a SHA1 checksum create domain sha1 as bytea check (length(value) = 20); -- a Git object ID, i.e., a Git-style salted SHA1 checksum create domain sha1_git as bytea check (length(value) = 20); -- a SHA256 checksum create domain sha256 as bytea check (length(value) = 32); -- a blake2 checksum create domain blake2s256 as bytea check (length(value) = 32); -- UNIX path (absolute, relative, individual path component, etc.) create domain unix_path as bytea; -- a set of UNIX-like access permissions, as manipulated by, e.g., chmod create domain file_perms as int; -- an SWHID create domain swhid as text check (value ~ '^swh:[0-9]+:.*'); -- Checksums about actual file content. Note that the content itself is not -- stored in the DB, but on external (key-value) storage. A single checksum is -- used as key there, but the other can be used to verify that we do not inject -- content collisions not knowingly. create table content ( sha1 sha1 not null, sha1_git sha1_git not null, sha256 sha256 not null, blake2s256 blake2s256 not null, length bigint not null, ctime timestamptz not null default now(), -- creation time, i.e. 
time of (first) injection into the storage status content_status not null default 'visible', object_id bigserial ); comment on table content is 'Checksums of file content which is actually stored externally'; comment on column content.sha1 is 'Content sha1 hash'; comment on column content.sha1_git is 'Git object sha1 hash'; comment on column content.sha256 is 'Content Sha256 hash'; comment on column content.blake2s256 is 'Content blake2s hash'; comment on column content.length is 'Content length'; comment on column content.ctime is 'First seen time'; comment on column content.status is 'Content status (absent, visible, hidden)'; comment on column content.object_id is 'Content identifier'; -- An origin is a place, identified by an URL, where software source code -- artifacts can be found. We support different kinds of origins, e.g., git and -- other VCS repositories, web pages that list tarballs URLs (e.g., -- http://www.kernel.org), indirect tarball URLs (e.g., -- http://www.example.org/latest.tar.gz), etc. The key feature of an origin is -- that it can be *fetched* from (wget, git clone, svn checkout, etc.) to -- retrieve all the contained software. create table origin ( id bigserial not null, url text not null ); comment on column origin.id is 'Artifact origin id'; comment on column origin.url is 'URL of origin'; -- Content blobs observed somewhere, but not ingested into the archive for -- whatever reason. This table is separate from the content table as we might -- not have the sha1 checksum of skipped contents (for instance when we inject -- git repositories, objects that are too big will be skipped here, and we will -- only know their sha1_git). 'reason' contains the reason the content was -- skipped. origin is a nullable column allowing to find out which origin -- contains that skipped content. create table skipped_content ( sha1 sha1, sha1_git sha1_git, sha256 sha256, blake2s256 blake2s256, length bigint not null, ctime timestamptz not null default now(), status content_status not null default 'absent', reason text not null, origin bigint, object_id bigserial ); comment on table skipped_content is 'Content blobs observed, but not ingested in the archive'; comment on column skipped_content.sha1 is 'Skipped content sha1 hash'; comment on column skipped_content.sha1_git is 'Git object sha1 hash'; comment on column skipped_content.sha256 is 'Skipped content sha256 hash'; comment on column skipped_content.blake2s256 is 'Skipped content blake2s hash'; comment on column skipped_content.length is 'Skipped content length'; comment on column skipped_content.ctime is 'First seen time'; comment on column skipped_content.status is 'Skipped content status (absent, visible, hidden)'; comment on column skipped_content.reason is 'Reason for skipping'; comment on column skipped_content.origin is 'Origin table identifier'; comment on column skipped_content.object_id is 'Skipped content identifier'; -- A file-system directory. A directory is a list of directory entries (see -- tables: directory_entry_{dir,file}). -- -- To list the contents of a directory: -- 1. list the contained directory_entry_dir using array dir_entries -- 2. list the contained directory_entry_file using array file_entries -- 3. list the contained directory_entry_rev using array rev_entries -- 4. 
UNION -- -- Synonyms/mappings: -- * git: tree create table directory ( id sha1_git not null, dir_entries bigint[], -- sub-directories, reference directory_entry_dir file_entries bigint[], -- contained files, reference directory_entry_file rev_entries bigint[], -- mounted revisions, reference directory_entry_rev object_id bigserial -- short object identifier ); comment on table directory is 'Contents of a directory, synonymous to tree (git)'; comment on column directory.id is 'Git object sha1 hash'; comment on column directory.dir_entries is 'Sub-directories, reference directory_entry_dir'; comment on column directory.file_entries is 'Contained files, reference directory_entry_file'; comment on column directory.rev_entries is 'Mounted revisions, reference directory_entry_rev'; comment on column directory.object_id is 'Short object identifier'; -- A directory entry pointing to a (sub-)directory. create table directory_entry_dir ( id bigserial, target sha1_git not null, -- id of target directory name unix_path not null, -- path name, relative to containing dir perms file_perms not null -- unix-like permissions ); comment on table directory_entry_dir is 'Directory entry for directory'; comment on column directory_entry_dir.id is 'Directory identifier'; comment on column directory_entry_dir.target is 'Target directory identifier'; comment on column directory_entry_dir.name is 'Path name, relative to containing directory'; comment on column directory_entry_dir.perms is 'Unix-like permissions'; -- A directory entry pointing to a file content. create table directory_entry_file ( id bigserial, target sha1_git not null, -- id of target file name unix_path not null, -- path name, relative to containing dir perms file_perms not null -- unix-like permissions ); comment on table directory_entry_file is 'Directory entry for file'; comment on column directory_entry_file.id is 'File identifier'; comment on column directory_entry_file.target is 'Target file identifier'; comment on column directory_entry_file.name is 'Path name, relative to containing directory'; comment on column directory_entry_file.perms is 'Unix-like permissions'; -- A directory entry pointing to a revision. create table directory_entry_rev ( id bigserial, target sha1_git not null, -- id of target revision name unix_path not null, -- path name, relative to containing dir perms file_perms not null -- unix-like permissions ); comment on table directory_entry_rev is 'Directory entry for revision'; comment on column directory_entry_dir.id is 'Revision identifier'; comment on column directory_entry_dir.target is 'Target revision in identifier'; comment on column directory_entry_dir.name is 'Path name, relative to containing directory'; comment on column directory_entry_dir.perms is 'Unix-like permissions'; -- A person referenced by some source code artifacts, e.g., a VCS revision or -- release metadata. create table person ( id bigserial, name bytea, -- advisory: not null if we managed to parse a name email bytea, -- advisory: not null if we managed to parse an email fullname bytea not null -- freeform specification; what is actually used in the checksums -- will usually be of the form 'name ' ); comment on table person is 'Person referenced in code artifact release metadata'; comment on column person.id is 'Person identifier'; comment on column person.name is 'Name'; comment on column person.email is 'Email'; comment on column person.fullname is 'Full name (raw name)'; -- The state of a source code tree at a specific point in time. 
-- -- Synonyms/mappings: -- * git / subversion / etc: commit -- * tarball: a specific tarball -- -- Revisions are organized as DAGs. Each revision points to 0, 1, or more (in -- case of merges) parent revisions. Each revision points to a directory, i.e., -- a file-system tree containing files and directories. create table revision ( id sha1_git not null, date timestamptz, date_offset smallint, committer_date timestamptz, committer_date_offset smallint, type revision_type not null, directory sha1_git, -- source code 'root' directory message bytea, author bigint, committer bigint, synthetic boolean not null default false, -- true iff revision has been created by Software Heritage metadata jsonb, -- extra metadata (tarball checksums, extra commit information, etc...) object_id bigserial, date_neg_utc_offset boolean, committer_date_neg_utc_offset boolean, extra_headers bytea[][] -- extra headers (used in hash computation) ); comment on table revision is 'A revision represents the state of a source code tree at a specific point in time'; comment on column revision.id is 'Git-style SHA1 commit identifier'; comment on column revision.date is 'Author timestamp as UNIX epoch'; comment on column revision.date_offset is 'Author timestamp timezone, as minute offsets from UTC'; comment on column revision.date_neg_utc_offset is 'True indicates a -0 UTC offset on author timestamp'; comment on column revision.committer_date is 'Committer timestamp as UNIX epoch'; comment on column revision.committer_date_offset is 'Committer timestamp timezone, as minute offsets from UTC'; comment on column revision.committer_date_neg_utc_offset is 'True indicates a -0 UTC offset on committer timestamp'; comment on column revision.type is 'Type of revision'; comment on column revision.directory is 'Directory identifier'; comment on column revision.message is 'Commit message'; comment on column revision.author is 'Author identity'; comment on column revision.committer is 'Committer identity'; comment on column revision.synthetic is 'True iff revision has been synthesized by Software Heritage'; comment on column revision.metadata is 'Extra revision metadata'; comment on column revision.object_id is 'Non-intrinsic, sequential object identifier'; comment on column revision.extra_headers is 'Extra revision headers; used in revision hash computation'; -- either this table or the sha1_git[] column on the revision table create table revision_history ( id sha1_git not null, parent_id sha1_git not null, parent_rank int not null default 0 -- parent position in merge commits, 0-based ); comment on table revision_history is 'Sequence of revision history with parent and position in history'; comment on column revision_history.id is 'Revision history git object sha1 checksum'; comment on column revision_history.parent_id is 'Parent revision git object identifier'; comment on column revision_history.parent_rank is 'Parent position in merge commits, 0-based'; -- Crawling history of software origins visited by Software Heritage. Each -- visit is a 3-way mapping between a software origin, a timestamp, and a -- snapshot object capturing the full-state of the origin at visit time. 
create table origin_visit ( origin bigint not null, visit bigint not null, date timestamptz not null, type text not null ); comment on column origin_visit.origin is 'Visited origin'; comment on column origin_visit.visit is 'Sequential visit number for the origin'; comment on column origin_visit.date is 'Visit timestamp'; comment on column origin_visit.type is 'Type of loader that did the visit (hg, git, ...)'; -- Crawling history of software origin visits by Software Heritage. Each -- visit see its history change through new origin visit status updates create table origin_visit_status ( origin bigint not null, visit bigint not null, date timestamptz not null, status origin_visit_state not null, metadata jsonb, snapshot sha1_git ); comment on column origin_visit_status.origin is 'Origin concerned by the visit update'; comment on column origin_visit_status.visit is 'Visit concerned by the visit update'; comment on column origin_visit_status.date is 'Visit update timestamp'; comment on column origin_visit_status.status is 'Visit status (ongoing, failed, full)'; comment on column origin_visit_status.metadata is 'Optional origin visit metadata'; comment on column origin_visit_status.snapshot is 'Optional, possibly partial, snapshot of the origin visit. It can be partial.'; -- A snapshot represents the entire state of a software origin as crawled by -- Software Heritage. This table is a simple mapping between (public) intrinsic -- snapshot identifiers and (private) numeric sequential identifiers. create table snapshot ( object_id bigserial not null, -- PK internal object identifier id sha1_git not null -- snapshot intrinsic identifier ); comment on table snapshot is 'State of a software origin as crawled by Software Heritage'; comment on column snapshot.object_id is 'Internal object identifier'; comment on column snapshot.id is 'Intrinsic snapshot identifier'; -- Each snapshot associate "branch" names to other objects in the Software -- Heritage Merkle DAG. This table describes branches as mappings between names -- and target typed objects. create table snapshot_branch ( object_id bigserial not null, -- PK internal object identifier name bytea not null, -- branch name, e.g., "master" or "feature/drag-n-drop" target bytea, -- target object identifier, e.g., a revision identifier target_type snapshot_target -- target object type, e.g., "revision" ); comment on table snapshot_branch is 'Associates branches with objects in Heritage Merkle DAG'; comment on column snapshot_branch.object_id is 'Internal object identifier'; comment on column snapshot_branch.name is 'Branch name'; comment on column snapshot_branch.target is 'Target object identifier'; comment on column snapshot_branch.target_type is 'Target object type'; -- Mapping between snapshots and their branches. create table snapshot_branches ( snapshot_id bigint not null, -- snapshot identifier, ref. snapshot.object_id branch_id bigint not null -- branch identifier, ref. snapshot_branch.object_id ); comment on table snapshot_branches is 'Mapping between snapshot and their branches'; comment on column snapshot_branches.snapshot_id is 'Snapshot identifier'; comment on column snapshot_branches.branch_id is 'Branch identifier'; -- A "memorable" point in time in the development history of a software -- project. 
-- -- Synonyms/mappings: -- * git: tag (of the annotated kind, otherwise they are just references) -- * tarball: the release version number create table release ( id sha1_git not null, target sha1_git, date timestamptz, date_offset smallint, name bytea, comment bytea, author bigint, synthetic boolean not null default false, -- true iff release has been created by Software Heritage object_id bigserial, target_type object_type not null, date_neg_utc_offset boolean ); comment on table release is 'Details of a software release, synonymous with a tag (git) or version number (tarball)'; comment on column release.id is 'Release git identifier'; comment on column release.target is 'Target git identifier'; comment on column release.date is 'Release timestamp'; comment on column release.date_offset is 'Timestamp offset from UTC'; comment on column release.name is 'Name'; comment on column release.comment is 'Comment'; comment on column release.author is 'Author'; comment on column release.synthetic is 'Indicates if created by Software Heritage'; comment on column release.object_id is 'Object identifier'; comment on column release.target_type is 'Object type (''content'', ''directory'', ''revision'', ''release'', ''snapshot'')'; comment on column release.date_neg_utc_offset is 'True indicates -0 UTC offset for release timestamp'; -- Tools create table metadata_fetcher ( id serial not null, name text not null, version text not null, metadata jsonb not null ); comment on table metadata_fetcher is 'Tools used to retrieve metadata'; comment on column metadata_fetcher.id is 'Internal identifier of the fetcher'; comment on column metadata_fetcher.name is 'Fetcher name'; comment on column metadata_fetcher.version is 'Fetcher version'; comment on column metadata_fetcher.metadata is 'Extra information about the fetcher'; create table metadata_authority ( id serial not null, type text not null, url text not null, metadata jsonb not null ); comment on table metadata_authority is 'Metadata authority information'; comment on column metadata_authority.id is 'Internal identifier of the authority'; -comment on column metadata_authority.type is 'Type of authority (deposit/forge/registry)'; +comment on column metadata_authority.type is 'Type of authority (deposit_client/forge/registry)'; comment on column metadata_authority.url is 'Authority''s uri'; comment on column metadata_authority.metadata is 'Other metadata about authority'; -- Extrinsic metadata on a DAG objects and origins. 
create table object_metadata ( type text not null, id text not null, -- metadata source authority_id bigint not null, fetcher_id bigint not null, discovery_date timestamptz not null, -- metadata itself format text not null, metadata bytea not null, -- context origin text, visit bigint, snapshot swhid, release swhid, revision swhid, path bytea, directory swhid ); comment on table object_metadata is 'keeps all metadata found concerning an object'; comment on column object_metadata.type is 'the type of object (content/directory/revision/release/snapshot/origin) the metadata is on'; comment on column object_metadata.id is 'the SWHID or origin URL for which the metadata was found'; comment on column object_metadata.discovery_date is 'the date of retrieval'; comment on column object_metadata.authority_id is 'the metadata provider: github, openhub, deposit, etc.'; comment on column object_metadata.fetcher_id is 'the tool used for extracting metadata: loaders, crawlers, etc.'; comment on column object_metadata.format is 'name of the format of metadata, used by readers to interpret it.'; comment on column object_metadata.metadata is 'original metadata in opaque format'; -- Keep a cache of object counts create table object_counts ( object_type text, -- table for which we're counting objects (PK) value bigint, -- count of objects in the table last_update timestamptz, -- last update for the object count in this table single_update boolean -- whether we update this table standalone (true) or through bucketed counts (false) ); comment on table object_counts is 'Cache of object counts'; comment on column object_counts.object_type is 'Object type (''content'', ''directory'', ''revision'', ''release'', ''snapshot'')'; comment on column object_counts.value is 'Count of objects in the table'; comment on column object_counts.last_update is 'Last update for object count'; comment on column object_counts.single_update is 'standalone (true) or bucketed counts (false)'; create table object_counts_bucketed ( line serial not null, -- PK object_type text not null, -- table for which we're counting objects identifier text not null, -- identifier across which we're bucketing objects bucket_start bytea, -- lower bound (inclusive) for the bucket bucket_end bytea, -- upper bound (exclusive) for the bucket value bigint, -- count of objects in the bucket last_update timestamptz -- last update for the object count in this bucket ); comment on table object_counts_bucketed is 'Bucketed count for objects ordered by type'; comment on column object_counts_bucketed.line is 'Auto incremented idenitfier value'; comment on column object_counts_bucketed.object_type is 'Object type (''content'', ''directory'', ''revision'', ''release'', ''snapshot'')'; comment on column object_counts_bucketed.identifier is 'Common identifier for bucketed objects'; comment on column object_counts_bucketed.bucket_start is 'Lower bound (inclusive) for the bucket'; comment on column object_counts_bucketed.bucket_end is 'Upper bound (exclusive) for the bucket'; comment on column object_counts_bucketed.value is 'Count of objects in the bucket'; comment on column object_counts_bucketed.last_update is 'Last update for the object count in this bucket'; diff --git a/swh/storage/tests/conftest.py b/swh/storage/tests/conftest.py index d96e7453..7cda0b3b 100644 --- a/swh/storage/tests/conftest.py +++ b/swh/storage/tests/conftest.py @@ -1,65 +1,69 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this 
distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import pytest import multiprocessing.util from hypothesis import settings try: import pytest_cov.embed except ImportError: pytest_cov = None +from typing import Iterable + +from swh.model.model import BaseContent from swh.model.tests.generate_testdata import gen_contents, gen_origins from swh.storage import get_storage +from swh.storage.interface import StorageInterface # define tests profile. Full documentation is at: # https://hypothesis.readthedocs.io/en/latest/settings.html#settings-profiles settings.register_profile("fast", max_examples=5, deadline=5000) settings.register_profile("slow", max_examples=20, deadline=5000) if pytest_cov is not None: # pytest_cov + multiprocessing can cause a segmentation fault when starting # the child process ; so we're # removing pytest-coverage's hook that runs when a child process starts. # This means code run in child processes won't be counted in the coverage # report, but this is not an issue because the only code that runs only in # child processes is the RPC server. for (key, value) in multiprocessing.util._afterfork_registry.items(): if value is pytest_cov.embed.multiprocessing_start: del multiprocessing.util._afterfork_registry[key] break else: assert False, "missing pytest_cov.embed.multiprocessing_start?" @pytest.fixture def swh_storage_backend_config(swh_storage_backend_config): """storage should test with its journal writer collaborator on """ yield {**swh_storage_backend_config, "journal_writer": {"cls": "memory",}} @pytest.fixture def swh_storage(swh_storage_backend_config): return get_storage(cls="validate", storage=swh_storage_backend_config) @pytest.fixture -def swh_contents(swh_storage): - contents = gen_contents(n=20) - swh_storage.content_add([c for c in contents if c["status"] != "absent"]) - swh_storage.skipped_content_add([c for c in contents if c["status"] == "absent"]) +def swh_contents(swh_storage: StorageInterface) -> Iterable[BaseContent]: + contents = [BaseContent.from_dict(c) for c in gen_contents(n=20)] + swh_storage.content_add([c for c in contents if c.status != "absent"]) + swh_storage.skipped_content_add([c for c in contents if c.status == "absent"]) return contents @pytest.fixture def swh_origins(swh_storage): origins = gen_origins(n=100) swh_storage.origin_add(origins) return origins diff --git a/swh/storage/tests/generate_data_test.py b/swh/storage/tests/generate_data_test.py deleted file mode 100644 index 32cb642e..00000000 --- a/swh/storage/tests/generate_data_test.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (C) 2018 The Software Heritage developers -# See the AUTHORS file at the top-level directory of this distribution -# License: GNU General Public License version 3, or any later version -# See top-level LICENSE file for more information - -from hypothesis.strategies import binary, composite, sets - -from swh.model.hashutil import MultiHash - - -def gen_raw_content(): - """Generate raw content binary. - - """ - return binary(min_size=20, max_size=100) - - -@composite -def gen_contents(draw, *, min_size=0, max_size=100): - """Generate valid and consistent content. - - Context: Test purposes - - Args: - **draw**: Used by hypothesis to generate data - **min_size** (int): Minimal number of elements to generate - (default: 0) - **max_size** (int): Maximal number of elements to generate - (default: 100) - - Returns: - [dict] representing contents. 
The list's size is between - [min_size:max_size]. - - """ - raw_contents = draw(sets(gen_raw_content(), min_size=min_size, max_size=max_size)) - - contents = [] - for raw_content in raw_contents: - contents.append( - { - "data": raw_content, - "length": len(raw_content), - "status": "visible", - **MultiHash.from_data(raw_content).digest(), - } - ) - - return contents diff --git a/swh/storage/tests/storage_data.py b/swh/storage/tests/storage_data.py index 93957dc6..3c8e2337 100644 --- a/swh/storage/tests/storage_data.py +++ b/swh/storage/tests/storage_data.py @@ -1,597 +1,597 @@ # Copyright (C) 2015-2019 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import datetime import attr from swh.model.hashutil import hash_to_bytes, hash_to_hex from swh.model import from_disk from swh.model.identifiers import parse_swhid from swh.model.model import ( MetadataAuthority, MetadataAuthorityType, MetadataFetcher, RawExtrinsicMetadata, MetadataTargetType, ) class StorageData: def __getattr__(self, key): try: v = globals()[key] except KeyError as e: raise AttributeError(e.args[0]) if hasattr(v, "copy"): return v.copy() return v data = StorageData() cont = { "data": b"42\n", "length": 3, "sha1": hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4689"), "sha1_git": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), "sha256": hash_to_bytes( "673650f936cb3b0a2f93ce09d81be10748b1b203c19e8176b4eefc1964a0cf3a" ), "blake2s256": hash_to_bytes( "d5fe1939576527e42cfd76a9455a2432fe7f56669564577dd93c4280e76d661d" ), "status": "visible", } cont2 = { "data": b"4242\n", "length": 5, "sha1": hash_to_bytes("61c2b3a30496d329e21af70dd2d7e097046d07b7"), "sha1_git": hash_to_bytes("36fade77193cb6d2bd826161a0979d64c28ab4fa"), "sha256": hash_to_bytes( "859f0b154fdb2d630f45e1ecae4a862915435e663248bb8461d914696fc047cd" ), "blake2s256": hash_to_bytes( "849c20fad132b7c2d62c15de310adfe87be94a379941bed295e8141c6219810d" ), "status": "visible", } cont3 = { "data": b"424242\n", "length": 7, "sha1": hash_to_bytes("3e21cc4942a4234c9e5edd8a9cacd1670fe59f13"), "sha1_git": hash_to_bytes("c932c7649c6dfa4b82327d121215116909eb3bea"), "sha256": hash_to_bytes( "92fb72daf8c6818288a35137b72155f507e5de8d892712ab96277aaed8cf8a36" ), "blake2s256": hash_to_bytes( "76d0346f44e5a27f6bafdd9c2befd304aff83780f93121d801ab6a1d4769db11" ), "status": "visible", "ctime": "2019-12-01 00:00:00Z", } contents = (cont, cont2, cont3) missing_cont = { - "data": b"missing\n", "length": 8, "sha1": hash_to_bytes("f9c24e2abb82063a3ba2c44efd2d3c797f28ac90"), "sha1_git": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8919"), "sha256": hash_to_bytes( "6bbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" ), "blake2s256": hash_to_bytes( "306856b8fd879edb7b6f1aeaaf8db9bbecc993cd7f776c333ac3a782fa5c6eba" ), + "reason": "Content too long", "status": "absent", } skipped_cont = { "length": 1024 * 1024 * 200, "sha1_git": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8920"), "sha1": hash_to_bytes("43e45d56f88993aae6a0198013efa80716fd8920"), "sha256": hash_to_bytes( "7bbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" ), "blake2s256": hash_to_bytes( "ade18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" ), "reason": "Content too long", "status": "absent", "origin": "file:///dev/zero", } skipped_cont2 = { "length": 1024 * 1024 * 300, "sha1_git": 
hash_to_bytes("44e45d56f88993aae6a0198013efa80716fd8921"), "sha1": hash_to_bytes("54e45d56f88993aae6a0198013efa80716fd8920"), "sha256": hash_to_bytes( "8cbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" ), "blake2s256": hash_to_bytes( "9ce18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" ), "reason": "Content too long", "status": "absent", } skipped_contents = (skipped_cont, skipped_cont2) dir = { "id": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), "entries": ( { "name": b"foo", "type": "file", "target": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), # cont "perms": from_disk.DentryPerms.content, }, { "name": b"bar\xc3", "type": "dir", "target": b"12345678901234567890", "perms": from_disk.DentryPerms.directory, }, ), } dir2 = { "id": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), "entries": ( { "name": b"oof", "type": "file", "target": hash_to_bytes( # cont2 "36fade77193cb6d2bd826161a0979d64c28ab4fa" ), "perms": from_disk.DentryPerms.content, }, ), } dir3 = { "id": hash_to_bytes("4ea8c6b2f54445e5dd1a9d5bb2afd875d66f3150"), "entries": ( { "name": b"foo", "type": "file", "target": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), # cont "perms": from_disk.DentryPerms.content, }, { "name": b"subdir", "type": "dir", "target": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir "perms": from_disk.DentryPerms.directory, }, { "name": b"hello", "type": "file", "target": b"12345678901234567890", "perms": from_disk.DentryPerms.content, }, ), } dir4 = { "id": hash_to_bytes("377aa5fcd944fbabf502dbfda55cd14d33c8c3c6"), "entries": ( { "name": b"subdir1", "type": "dir", "target": hash_to_bytes("4ea8c6b2f54445e5dd1a9d5bb2afd875d66f3150"), # dir3 "perms": from_disk.DentryPerms.directory, }, ), } directories = (dir, dir2, dir3, dir4) minus_offset = datetime.timezone(datetime.timedelta(minutes=-120)) plus_offset = datetime.timezone(datetime.timedelta(minutes=120)) revision = { "id": hash_to_bytes("066b1b62dbfa033362092af468bf6cfabec230e7"), "message": b"hello", "author": { "name": b"Nicolas Dandrimont", "email": b"nicolas@example.com", "fullname": b"Nicolas Dandrimont ", }, "date": { "timestamp": {"seconds": 1234567890, "microseconds": 0}, "offset": 120, "negative_utc": False, }, "committer": { "name": b"St\xc3fano Zacchiroli", "email": b"stefano@example.com", "fullname": b"St\xc3fano Zacchiroli ", }, "committer_date": { "timestamp": {"seconds": 1123456789, "microseconds": 0}, "offset": 0, "negative_utc": True, }, "parents": (b"01234567890123456789", b"23434512345123456789"), "type": "git", "directory": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir "metadata": { "checksums": {"sha1": "tarball-sha1", "sha256": "tarball-sha256",}, "signed-off-by": "some-dude", }, "extra_headers": ( (b"gpgsig", b"test123"), (b"mergetag", b"foo\\bar"), (b"mergetag", b"\x22\xaf\x89\x80\x01\x00"), ), "synthetic": True, } revision2 = { "id": hash_to_bytes("df7a6f6a99671fb7f7343641aff983a314ef6161"), "message": b"hello again", "author": { "name": b"Roberto Dicosmo", "email": b"roberto@example.com", "fullname": b"Roberto Dicosmo ", }, "date": { "timestamp": {"seconds": 1234567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "committer": { "name": b"tony", "email": b"ar@dumont.fr", "fullname": b"tony ", }, "committer_date": { "timestamp": {"seconds": 1123456789, "microseconds": 0}, "offset": 0, "negative_utc": False, }, "parents": (b"01234567890123456789",), "type": "git", "directory": 
hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), # dir2 "metadata": None, "extra_headers": (), "synthetic": False, } revision3 = { "id": hash_to_bytes("2cbd7bb22c653bbb23a29657852a50a01b591d46"), "message": b"a simple revision with no parents this time", "author": { "name": b"Roberto Dicosmo", "email": b"roberto@example.com", "fullname": b"Roberto Dicosmo ", }, "date": { "timestamp": {"seconds": 1234567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "committer": { "name": b"tony", "email": b"ar@dumont.fr", "fullname": b"tony ", }, "committer_date": { "timestamp": {"seconds": 1127351742, "microseconds": 0}, "offset": 0, "negative_utc": False, }, "parents": (), "type": "git", "directory": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), # dir2 "metadata": None, "extra_headers": (), "synthetic": True, } revision4 = { "id": hash_to_bytes("88cd5126fc958ed70089d5340441a1c2477bcc20"), "message": b"parent of self.revision2", "author": { "name": b"me", "email": b"me@soft.heri", "fullname": b"me ", }, "date": { "timestamp": {"seconds": 1244567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "committer": { "name": b"committer-dude", "email": b"committer@dude.com", "fullname": b"committer-dude ", }, "committer_date": { "timestamp": {"seconds": 1244567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "parents": ( hash_to_bytes("2cbd7bb22c653bbb23a29657852a50a01b591d46"), ), # revision3 "type": "git", "directory": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir "metadata": None, "extra_headers": (), "synthetic": False, } revisions = (revision, revision2, revision3, revision4) origin = { "url": "file:///dev/null", } origin2 = { "url": "file:///dev/zero", } origins = (origin, origin2) metadata_authority = MetadataAuthority( - type=MetadataAuthorityType.DEPOSIT, + type=MetadataAuthorityType.DEPOSIT_CLIENT, url="http://hal.inria.example.com/", metadata={"location": "France"}, ) metadata_authority2 = MetadataAuthority( type=MetadataAuthorityType.REGISTRY, url="http://wikidata.example.com/", metadata={}, ) metadata_fetcher = MetadataFetcher( name="swh-deposit", version="0.0.1", metadata={"sword_version": "2"}, ) metadata_fetcher2 = MetadataFetcher(name="swh-example", version="0.0.1", metadata={},) date_visit1 = datetime.datetime(2015, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) type_visit1 = "git" date_visit2 = datetime.datetime(2017, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) type_visit2 = "hg" date_visit3 = datetime.datetime(2018, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) type_visit3 = "deb" origin_visit = { "origin": origin["url"], "visit": 1, "date": date_visit1, "type": type_visit1, } origin_visit2 = { "origin": origin["url"], "visit": 2, "date": date_visit2, "type": type_visit1, } origin_visit3 = { "origin": origin2["url"], "visit": 1, "date": date_visit1, "type": type_visit2, } origin_visits = [origin_visit, origin_visit2, origin_visit3] release = { "id": hash_to_bytes("a673e617fcc6234e29b2cad06b8245f96c415c61"), "name": b"v0.0.1", "author": { "name": b"olasd", "email": b"nic@olasd.fr", "fullname": b"olasd ", }, "date": { "timestamp": {"seconds": 1234567890, "microseconds": 0}, "offset": 42, "negative_utc": False, }, "target": b"43210987654321098765", "target_type": "revision", "message": b"synthetic release", "synthetic": True, } release2 = { "id": hash_to_bytes("6902bd4c82b7d19a421d224aedab2b74197e420d"), "name": b"v0.0.2", "author": { "name": b"tony", "email": b"ar@dumont.fr", 
"fullname": b"tony ", }, "date": { "timestamp": {"seconds": 1634366813, "microseconds": 0}, "offset": -120, "negative_utc": False, }, "target": b"432109\xa9765432\xc309\x00765", "target_type": "revision", "message": b"v0.0.2\nMisc performance improvements + bug fixes", "synthetic": False, } release3 = { "id": hash_to_bytes("3e9050196aa288264f2a9d279d6abab8b158448b"), "name": b"v0.0.2", "author": { "name": b"tony", "email": b"tony@ardumont.fr", "fullname": b"tony ", }, "date": { "timestamp": {"seconds": 1634336813, "microseconds": 0}, "offset": 0, "negative_utc": False, }, "target": hash_to_bytes("df7a6f6a99671fb7f7343641aff983a314ef6161"), "target_type": "revision", "message": b"yet another synthetic release", "synthetic": True, } releases = (release, release2, release3) snapshot = { "id": hash_to_bytes("409ee1ff3f10d166714bc90581debfd0446dda57"), "branches": { b"master": { "target": hash_to_bytes("066b1b62dbfa033362092af468bf6cfabec230e7"), "target_type": "revision", }, }, } empty_snapshot = { "id": hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"), "branches": {}, } complete_snapshot = { "id": hash_to_bytes("a56ce2d81c190023bb99a3a36279307522cb85f6"), "branches": { b"directory": { "target": hash_to_bytes("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"), "target_type": "directory", }, b"directory2": { "target": hash_to_bytes("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"), "target_type": "directory", }, b"content": { "target": hash_to_bytes("fe95a46679d128ff167b7c55df5d02356c5a1ae1"), "target_type": "content", }, b"alias": {"target": b"revision", "target_type": "alias",}, b"revision": { "target": hash_to_bytes("aafb16d69fd30ff58afdd69036a26047f3aebdc6"), "target_type": "revision", }, b"release": { "target": hash_to_bytes("7045404f3d1c54e6473c71bbb716529fbad4be24"), "target_type": "release", }, b"snapshot": { "target": hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"), "target_type": "snapshot", }, b"dangling": None, }, } snapshots = (snapshot, empty_snapshot, complete_snapshot) content_metadata = RawExtrinsicMetadata( type=MetadataTargetType.CONTENT, id=parse_swhid(f"swh:1:cnt:{hash_to_hex(cont['sha1_git'])}"), origin=origin["url"], discovery_date=datetime.datetime( 2015, 1, 1, 21, 0, 0, tzinfo=datetime.timezone.utc ), authority=attr.evolve(metadata_authority, metadata=None), fetcher=attr.evolve(metadata_fetcher, metadata=None), format="json", metadata=b'{"foo": "bar"}', ) content_metadata2 = RawExtrinsicMetadata( type=MetadataTargetType.CONTENT, id=parse_swhid(f"swh:1:cnt:{hash_to_hex(cont['sha1_git'])}"), origin=origin2["url"], discovery_date=datetime.datetime( 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc ), authority=attr.evolve(metadata_authority, metadata=None), fetcher=attr.evolve(metadata_fetcher, metadata=None), format="yaml", metadata=b"foo: bar", ) content_metadata3 = RawExtrinsicMetadata( type=MetadataTargetType.CONTENT, id=parse_swhid(f"swh:1:cnt:{hash_to_hex(cont['sha1_git'])}"), discovery_date=datetime.datetime( 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc ), authority=attr.evolve(metadata_authority2, metadata=None), fetcher=attr.evolve(metadata_fetcher2, metadata=None), format="yaml", metadata=b"foo: bar", origin=origin["url"], visit=42, snapshot=parse_swhid(f"swh:1:snp:{hash_to_hex(snapshot['id'])}"), release=parse_swhid(f"swh:1:rel:{hash_to_hex(release['id'])}"), revision=parse_swhid(f"swh:1:rev:{hash_to_hex(revision['id'])}"), directory=parse_swhid(f"swh:1:dir:{hash_to_hex(dir['id'])}"), path=b"/foo/bar", ) origin_metadata = RawExtrinsicMetadata( 
type=MetadataTargetType.ORIGIN, id=origin["url"], discovery_date=datetime.datetime( 2015, 1, 1, 21, 0, 0, tzinfo=datetime.timezone.utc ), authority=attr.evolve(metadata_authority, metadata=None), fetcher=attr.evolve(metadata_fetcher, metadata=None), format="json", metadata=b'{"foo": "bar"}', ) origin_metadata2 = RawExtrinsicMetadata( type=MetadataTargetType.ORIGIN, id=origin["url"], discovery_date=datetime.datetime( 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc ), authority=attr.evolve(metadata_authority, metadata=None), fetcher=attr.evolve(metadata_fetcher, metadata=None), format="yaml", metadata=b"foo: bar", ) origin_metadata3 = RawExtrinsicMetadata( type=MetadataTargetType.ORIGIN, id=origin["url"], discovery_date=datetime.datetime( 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc ), authority=attr.evolve(metadata_authority2, metadata=None), fetcher=attr.evolve(metadata_fetcher2, metadata=None), format="yaml", metadata=b"foo: bar", ) person = { "name": b"John Doe", "email": b"john.doe@institute.org", "fullname": b"John Doe ", } objects = { "content": contents, "skipped_content": skipped_contents, "directory": directories, "revision": revisions, "origin": origins, "release": releases, "snapshot": snapshots, } diff --git a/swh/storage/tests/test_storage.py b/swh/storage/tests/test_storage.py index 0870c159..879b4e0f 100644 --- a/swh/storage/tests/test_storage.py +++ b/swh/storage/tests/test_storage.py @@ -1,4275 +1,4272 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import copy import datetime import inspect import itertools import math import queue import random import threading from collections import defaultdict from contextlib import contextmanager from datetime import timedelta from unittest.mock import Mock import attr import psycopg2 import pytest from hypothesis import given, strategies, settings, HealthCheck from typing import ClassVar, Optional from swh.model import from_disk, identifiers from swh.model.hashutil import hash_to_bytes from swh.model.identifiers import SWHID from swh.model.model import ( Content, Directory, Origin, OriginVisit, OriginVisitStatus, Release, Revision, + SkippedContent, Snapshot, MetadataTargetType, ) from swh.model.hypothesis_strategies import objects from swh.storage import get_storage from swh.storage.converters import origin_url_to_sha1 as sha1 from swh.storage.exc import HashCollision, StorageArgumentException from swh.storage.interface import StorageInterface from swh.storage.utils import content_hex_hashes, now from .storage_data import data @contextmanager def db_transaction(storage): with storage.db() as db: with db.transaction() as cur: yield db, cur def normalize_entity(entity): entity = copy.deepcopy(entity) for key in ("date", "committer_date"): if key in entity: entity[key] = identifiers.normalize_timestamp(entity[key]) return entity def transform_entries(dir_, *, prefix=b""): - for ent in dir_["entries"]: + for ent in dir_.entries: yield { - "dir_id": dir_["id"], - "type": ent["type"], - "target": ent["target"], - "name": prefix + ent["name"], - "perms": ent["perms"], + "dir_id": dir_.id, + "type": ent.type, + "target": ent.target, + "name": prefix + ent.name, + "perms": ent.perms, "status": None, "sha1": None, "sha1_git": None, "sha256": None, "length": None, } def cmpdir(directory): return (directory["type"], 
directory["dir_id"]) def short_revision(revision): return [revision["id"], revision["parents"]] def assert_contents_ok( expected_contents, actual_contents, keys_to_check={"sha1", "data"} ): """Assert that a given list of contents matches on a given set of keys. """ for k in keys_to_check: expected_list = set([c.get(k) for c in expected_contents]) actual_list = set([c.get(k) for c in actual_contents]) assert actual_list == expected_list, k def round_to_milliseconds(date): """Round datetime to milliseconds before insertion, so equality doesn't fail after a round-trip through a DB (eg. Cassandra) """ return date.replace(microsecond=(date.microsecond // 1000) * 1000) def test_round_to_milliseconds(): date = now() for (ms, expected_ms) in [(0, 0), (1000, 1000), (555555, 555000), (999500, 999000)]: date = date.replace(microsecond=ms) actual_date = round_to_milliseconds(date) assert actual_date.microsecond == expected_ms class LazyContent(Content): def with_data(self): return Content.from_dict({**self.to_dict(), "data": data.cont["data"]}) class TestStorage: """Main class for Storage testing. This class is used as-is to test local storage (see TestLocalStorage below) and remote storage (see TestRemoteStorage in test_remote_storage.py. We need to have the two classes inherit from this base class separately to avoid nosetests running the tests from the base class twice. """ maxDiff = None # type: ClassVar[Optional[int]] def test_types(self, swh_storage_backend_config): """Checks all methods of StorageInterface are implemented by this backend, and that they have the same signature.""" # Create an instance of the protocol (which cannot be instantiated # directly, so this creates a subclass, then instantiates it) interface = type("_", (StorageInterface,), {})() storage = get_storage(**swh_storage_backend_config) assert "content_add" in dir(interface) missing_methods = [] for meth_name in dir(interface): if meth_name.startswith("_"): continue interface_meth = getattr(interface, meth_name) try: concrete_meth = getattr(storage, meth_name) except AttributeError: if not getattr(interface_meth, "deprecated_endpoint", False): # The backend is missing a (non-deprecated) endpoint missing_methods.append(meth_name) continue expected_signature = inspect.signature(interface_meth) actual_signature = inspect.signature(concrete_meth) assert expected_signature == actual_signature, meth_name assert missing_methods == [] def test_check_config(self, swh_storage): assert swh_storage.check_config(check_write=True) assert swh_storage.check_config(check_write=False) def test_content_add(self, swh_storage, sample_data_model): cont = sample_data_model["content"][0] insertion_start_time = now() actual_result = swh_storage.content_add([cont]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert list(swh_storage.content_get([cont.sha1])) == [ {"sha1": cont.sha1, "data": cont.data} ] expected_cont = attr.evolve(cont, data=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert obj == expected_cont swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 def test_content_add_from_generator(self, swh_storage, sample_data_model): cont = sample_data_model["content"][0] def _cnt_gen(): yield cont actual_result = swh_storage.content_add(_cnt_gen()) assert 
actual_result == { "content:add": 1, "content:add:bytes": cont.length, } swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 def test_content_add_from_lazy_content(self, swh_storage, sample_data_model): cont = sample_data_model["content"][0] lazy_content = LazyContent.from_dict(cont.to_dict()) insertion_start_time = now() # bypass the validation proxy for now, to directly put a dict actual_result = swh_storage.storage.content_add([lazy_content]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } # the fact that we retrieve the content object from the storage with # the correct 'data' field ensures it has been 'called' assert list(swh_storage.content_get([cont.sha1])) == [ {"sha1": cont.sha1, "data": cont.data} ] expected_cont = attr.evolve(lazy_content, data=None, ctime=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert attr.evolve(obj, ctime=None).to_dict() == expected_cont.to_dict() swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 def test_content_add_validation(self, swh_storage, sample_data_model): cont = sample_data_model["content"][0].to_dict() with pytest.raises(StorageArgumentException, match="status"): swh_storage.content_add([{**cont, "status": "absent"}]) with pytest.raises(StorageArgumentException, match="status"): swh_storage.content_add([{**cont, "status": "foobar"}]) with pytest.raises(StorageArgumentException, match="(?i)length"): swh_storage.content_add([{**cont, "length": -2}]) with pytest.raises(StorageArgumentException, match="reason"): swh_storage.content_add([{**cont, "reason": "foobar"}]) def test_skipped_content_add_validation(self, swh_storage, sample_data_model): cont = attr.evolve(sample_data_model["content"][0], data=None).to_dict() with pytest.raises(StorageArgumentException, match="status"): swh_storage.skipped_content_add([{**cont, "status": "visible"}]) with pytest.raises(StorageArgumentException, match="reason") as cm: swh_storage.skipped_content_add([{**cont, "status": "absent"}]) if type(cm.value) == psycopg2.IntegrityError: assert cm.exception.pgcode == psycopg2.errorcodes.NOT_NULL_VIOLATION def test_content_get_missing(self, swh_storage, sample_data_model): cont, cont2 = sample_data_model["content"][:2] swh_storage.content_add([cont]) # Query a single missing content results = list(swh_storage.content_get([cont2.sha1])) assert results == [None] # Check content_get does not abort after finding a missing content results = list(swh_storage.content_get([cont.sha1, cont2.sha1])) assert results == [{"sha1": cont.sha1, "data": cont.data}, None] # Check content_get does not discard found countent when it finds # a missing content. 
results = list(swh_storage.content_get([cont2.sha1, cont.sha1])) assert results == [None, {"sha1": cont.sha1, "data": cont.data}] def test_content_add_different_input(self, swh_storage, sample_data_model): cont, cont2 = sample_data_model["content"][:2] actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 2, "content:add:bytes": cont.length + cont2.length, } def test_content_add_twice(self, swh_storage, sample_data_model): cont, cont2 = sample_data_model["content"][:2] actual_result = swh_storage.content_add([cont]) assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert len(swh_storage.journal_writer.journal.objects) == 1 actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 1, "content:add:bytes": cont2.length, } assert 2 <= len(swh_storage.journal_writer.journal.objects) <= 3 assert len(swh_storage.content_find(cont.to_dict())) == 1 assert len(swh_storage.content_find(cont2.to_dict())) == 1 def test_content_add_collision(self, swh_storage, sample_data_model): cont1 = sample_data_model["content"][0] # create (corrupted) content with same sha1{,_git} but != sha256 sha256_array = bytearray(cont1.sha256) sha256_array[0] += 1 cont1b = attr.evolve(cont1, sha256=bytes(sha256_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git", "blake2s256"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] def test_content_add_duplicate(self, swh_storage, sample_data_model): cont = sample_data_model["content"][0] swh_storage.content_add([cont, cont]) assert list(swh_storage.content_get([cont.sha1])) == [ {"sha1": cont.sha1, "data": cont.data} ] def test_content_update(self, swh_storage, sample_data_model): cont1 = sample_data_model["content"][0] if hasattr(swh_storage, "storage"): swh_storage.journal_writer.journal = None # TODO, not supported swh_storage.content_add([cont1]) # alter the sha1_git for example cont1b = attr.evolve( cont1, sha1_git=hash_to_bytes("3a60a5275d0333bf13468e8b3dcab90f4046e654") ) swh_storage.content_update([cont1b.to_dict()], keys=["sha1_git"]) results = swh_storage.content_get_metadata([cont1.sha1]) expected_content = attr.evolve(cont1b, data=None).to_dict() del expected_content["ctime"] assert tuple(results[cont1.sha1]) == (expected_content,) def test_content_add_metadata(self, swh_storage, sample_data_model): cont = attr.evolve(sample_data_model["content"][0], data=None, ctime=now()) actual_result = swh_storage.content_add_metadata([cont]) assert actual_result == { "content:add": 1, } expected_cont = cont.to_dict() del expected_cont["ctime"] assert tuple(swh_storage.content_get_metadata([cont.sha1])[cont.sha1]) == ( expected_cont, ) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: obj = attr.evolve(obj, ctime=None) assert obj == cont def test_content_add_metadata_different_input(self, swh_storage, sample_data_model): contents = sample_data_model["content"][:2] cont = attr.evolve(contents[0], data=None, ctime=now()) cont2 = attr.evolve(contents[1], data=None, ctime=now()) actual_result = 
swh_storage.content_add_metadata([cont, cont2]) assert actual_result == { "content:add": 2, } def test_content_add_metadata_collision(self, swh_storage, sample_data_model): cont1 = attr.evolve(sample_data_model["content"][0], data=None, ctime=now()) # create (corrupted) content with same sha1{,_git} but != sha256 sha1_git_array = bytearray(cont1.sha256) sha1_git_array[0] += 1 cont1b = attr.evolve(cont1, sha256=bytes(sha1_git_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add_metadata([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git", "blake2s256"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] - def test_skipped_content_add(self, swh_storage): - cont = data.skipped_cont - cont2 = data.skipped_cont2 - cont2["blake2s256"] = None + def test_skipped_content_add(self, swh_storage, sample_data_model): + contents = sample_data_model["skipped_content"][:2] + cont = contents[0] + cont2 = attr.evolve(contents[1], blake2s256=None) - missing = list(swh_storage.skipped_content_missing([cont, cont2])) + contents_dict = [c.to_dict() for c in [cont, cont2]] - assert missing == [ - { - "sha1": cont["sha1"], - "sha1_git": cont["sha1_git"], - "blake2s256": cont["blake2s256"], - "sha256": cont["sha256"], - }, - { - "sha1": cont2["sha1"], - "sha1_git": cont2["sha1_git"], - "blake2s256": cont2["blake2s256"], - "sha256": cont2["sha256"], - }, - ] + missing = list(swh_storage.skipped_content_missing(contents_dict)) + + assert missing == [cont.hashes(), cont2.hashes()] actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} - missing = list(swh_storage.skipped_content_missing([cont, cont2])) - + missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] - def test_skipped_content_add_missing_hashes(self, swh_storage): - cont = data.skipped_cont - cont2 = data.skipped_cont2 - cont["sha1_git"] = cont2["sha1_git"] = None - - missing = list(swh_storage.skipped_content_missing([cont, cont2])) + def test_skipped_content_add_missing_hashes(self, swh_storage, sample_data_model): + cont, cont2 = [ + attr.evolve(c, sha1_git=None) + for c in sample_data_model["skipped_content"][:2] + ] + contents_dict = [c.to_dict() for c in [cont, cont2]] + missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} - missing = list(swh_storage.skipped_content_missing([cont, cont2])) - + missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] - def test_skipped_content_missing_partial_hash(self, swh_storage): - cont = data.skipped_cont - cont2 = cont.copy() - cont2["sha1_git"] = None - - missing = list(swh_storage.skipped_content_missing([cont, cont2])) + def test_skipped_content_missing_partial_hash(self, swh_storage, sample_data_model): + cont = sample_data_model["skipped_content"][0] + cont2 = attr.evolve(cont, sha1_git=None) + contents_dict = [c.to_dict() for c in [cont, cont2]] + missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 
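# A minimal sketch of the "check what is missing, then add" round-trip that the
# skipped-content tests above exercise. `storage` and `skipped` (a list of
# SkippedContent model objects not yet present in the archive) are assumed
# inputs, not fixtures from this module.
def _skipped_content_roundtrip_sketch(storage, skipped):
    hashes = [c.to_dict() for c in skipped]
    # skipped_content_missing reports the hash dicts of unknown entries
    missing_before = list(storage.skipped_content_missing(hashes))
    summary = storage.skipped_content_add(skipped)
    # the summary counts how many entries were actually inserted
    assert summary["skipped_content:add"] <= len(skipped)
    # after the add, nothing should be reported as missing any more
    assert list(storage.skipped_content_missing(hashes)) == []
    return missing_before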
actual_result = swh_storage.skipped_content_add([cont]) assert actual_result.pop("skipped_content:add") == 1 assert actual_result == {} - missing = list(swh_storage.skipped_content_missing([cont, cont2])) - - assert missing == [ - { - "sha1": cont2["sha1"], - "sha1_git": cont2["sha1_git"], - "blake2s256": cont2["blake2s256"], - "sha256": cont2["sha256"], - } - ] + missing = list(swh_storage.skipped_content_missing(contents_dict)) + assert missing == [cont2.hashes()] @pytest.mark.property_based @settings(deadline=None) # this test is very slow @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) def test_content_missing(self, swh_storage, algos): algos |= {"sha1"} - cont2 = data.cont2 - missing_cont = data.missing_cont - swh_storage.content_add([cont2]) - test_contents = [cont2] + cont = Content.from_dict(data.cont2) + missing_cont = SkippedContent.from_dict(data.missing_cont) + swh_storage.content_add([cont]) + + test_contents = [cont.to_dict()] missing_per_hash = defaultdict(list) for i in range(256): - test_content = missing_cont.copy() + test_content = missing_cont.to_dict() for hash in algos: test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, key_hash=hash) ) == set(missing_per_hash[hash]) @pytest.mark.property_based @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) def test_content_missing_unknown_algo(self, swh_storage, algos): algos |= {"sha1"} - cont2 = data.cont2 - missing_cont = data.missing_cont - swh_storage.content_add([cont2]) - test_contents = [cont2] + cont = Content.from_dict(data.cont2) + missing_cont = SkippedContent.from_dict(data.missing_cont) + swh_storage.content_add([cont]) + + test_contents = [cont.to_dict()] missing_per_hash = defaultdict(list) for i in range(16): - test_content = missing_cont.copy() + test_content = missing_cont.to_dict() for hash in algos: test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_content["nonexisting_algo"] = b"\x00" test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, key_hash=hash) ) == set(missing_per_hash[hash]) - def test_content_missing_per_sha1(self, swh_storage): + def test_content_missing_per_sha1(self, swh_storage, sample_data_model): # given - cont2 = data.cont2 - missing_cont = data.missing_cont - swh_storage.content_add([cont2]) + cont = sample_data_model["content"][0] + missing_cont = sample_data_model["skipped_content"][0] + swh_storage.content_add([cont]) + # when - gen = swh_storage.content_missing_per_sha1( - [cont2["sha1"], missing_cont["sha1"]] - ) + gen = swh_storage.content_missing_per_sha1([cont.sha1, missing_cont.sha1]) # then - assert list(gen) == [missing_cont["sha1"]] + assert list(gen) == [missing_cont.sha1] - def test_content_missing_per_sha1_git(self, swh_storage): - cont = data.cont - cont2 = data.cont2 - missing_cont = data.missing_cont + def test_content_missing_per_sha1_git(self, swh_storage, sample_data_model): + cont, cont2 = sample_data_model["content"][:2] + missing_cont = sample_data_model["skipped_content"][0] 
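# A short sketch of the content_missing queries driven by the property-based
# tests above. `storage` is an assumed storage instance and `contents` an
# assumed list of content dicts carrying the usual hash keys.
def _content_missing_sketch(storage, contents):
    # by default, missing entries are reported by their sha1...
    missing_sha1s = set(storage.content_missing(contents))
    # ...but any supported hash column can be used as the key instead
    missing_sha1_gits = set(storage.content_missing(contents, key_hash="sha1_git"))
    return missing_sha1s, missing_sha1_gits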
swh_storage.content_add([cont, cont2]) - contents = [cont["sha1_git"], cont2["sha1_git"], missing_cont["sha1_git"]] + contents = [cont.sha1_git, cont2.sha1_git, missing_cont.sha1_git] missing_contents = swh_storage.content_missing_per_sha1_git(contents) - assert list(missing_contents) == [missing_cont["sha1_git"]] + assert list(missing_contents) == [missing_cont.sha1_git] def test_content_get_partition(self, swh_storage, swh_contents): """content_get_partition paginates results if limit exceeded""" - expected_contents = [c for c in swh_contents if c["status"] != "absent"] + expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] actual_contents = [] for i in range(16): actual_result = swh_storage.content_get_partition(i, 16) assert actual_result["next_page_token"] is None actual_contents.extend(actual_result["contents"]) assert_contents_ok(expected_contents, actual_contents, ["sha1"]) def test_content_get_partition_full(self, swh_storage, swh_contents): """content_get_partition for a single partition returns all available contents""" - expected_contents = [c for c in swh_contents if c["status"] != "absent"] + expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] actual_result = swh_storage.content_get_partition(0, 1) assert actual_result["next_page_token"] is None actual_contents = actual_result["contents"] assert_contents_ok(expected_contents, actual_contents, ["sha1"]) def test_content_get_partition_empty(self, swh_storage, swh_contents): """content_get_partition when at least one of the partitions is empty""" expected_contents = { - cont["sha1"] for cont in swh_contents if cont["status"] != "absent" + cont.sha1 for cont in swh_contents if cont.status != "absent" } # nb_partitions = smallest power of 2 such that at least one of # the partitions is empty nb_partitions = 1 << math.floor(math.log2(len(swh_contents)) + 1) seen_sha1s = [] for i in range(nb_partitions): actual_result = swh_storage.content_get_partition( i, nb_partitions, limit=len(swh_contents) + 1 ) for cont in actual_result["contents"]: seen_sha1s.append(cont["sha1"]) # Limit is higher than the max number of results assert actual_result["next_page_token"] is None assert set(seen_sha1s) == expected_contents def test_content_get_partition_limit_none(self, swh_storage): """content_get_partition call with wrong limit input should fail""" with pytest.raises(StorageArgumentException) as e: swh_storage.content_get_partition(1, 16, limit=None) assert e.value.args == ("limit should not be None",) def test_generate_content_get_partition_pagination(self, swh_storage, swh_contents): """content_get_partition returns contents within range provided""" - expected_contents = [c for c in swh_contents if c["status"] != "absent"] + expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] # retrieve contents actual_contents = [] for i in range(4): page_token = None while True: actual_result = swh_storage.content_get_partition( i, 4, limit=3, page_token=page_token ) actual_contents.extend(actual_result["contents"]) page_token = actual_result["next_page_token"] if page_token is None: break assert_contents_ok(expected_contents, actual_contents, ["sha1"]) - def test_content_get_metadata(self, swh_storage): - cont1 = data.cont - cont2 = data.cont2 + def test_content_get_metadata(self, swh_storage, sample_data_model): + cont1, cont2 = sample_data_model["content"][:2] swh_storage.content_add([cont1, cont2]) - actual_md = swh_storage.content_get_metadata([cont1["sha1"], 
cont2["sha1"]]) + actual_md = swh_storage.content_get_metadata([cont1.sha1, cont2.sha1]) - # we only retrieve the metadata - cont1.pop("data") - cont2.pop("data") + # we only retrieve the metadata so no data nor ctime within + expected_cont1, expected_cont2 = [ + attr.evolve(c, data=None).to_dict() for c in [cont1, cont2] + ] + expected_cont1.pop("ctime") + expected_cont2.pop("ctime") - assert tuple(actual_md[cont1["sha1"]]) == (cont1,) - assert tuple(actual_md[cont2["sha1"]]) == (cont2,) + assert tuple(actual_md[cont1.sha1]) == (expected_cont1,) + assert tuple(actual_md[cont2.sha1]) == (expected_cont2,) assert len(actual_md.keys()) == 2 - def test_content_get_metadata_missing_sha1(self, swh_storage): - cont1 = data.cont - cont2 = data.cont2 - missing_cont = data.missing_cont + def test_content_get_metadata_missing_sha1(self, swh_storage, sample_data_model): + cont1, cont2 = sample_data_model["content"][:2] + missing_cont = sample_data_model["skipped_content"][0] swh_storage.content_add([cont1, cont2]) - actual_contents = swh_storage.content_get_metadata([missing_cont["sha1"]]) + actual_contents = swh_storage.content_get_metadata([missing_cont.sha1]) assert len(actual_contents) == 1 - assert tuple(actual_contents[missing_cont["sha1"]]) == () + assert tuple(actual_contents[missing_cont.sha1]) == () - def test_content_get_random(self, swh_storage): - swh_storage.content_add([data.cont, data.cont2, data.cont3]) + def test_content_get_random(self, swh_storage, sample_data_model): + cont, cont2 = sample_data_model["content"][:2] + cont3 = sample_data_model["content_metadata"][0] + swh_storage.content_add([cont, cont2, cont3]) assert swh_storage.content_get_random() in { - data.cont["sha1_git"], - data.cont2["sha1_git"], - data.cont3["sha1_git"], + cont.sha1_git, + cont2.sha1_git, + cont3.sha1_git, } - def test_directory_add(self, swh_storage): - init_missing = list(swh_storage.directory_missing([data.dir["id"]])) - assert [data.dir["id"]] == init_missing + def test_directory_add(self, swh_storage, sample_data_model): + directory = sample_data_model["directory"][1] + + init_missing = list(swh_storage.directory_missing([directory.id])) + assert [directory.id] == init_missing - actual_result = swh_storage.directory_add([data.dir]) + actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", Directory.from_dict(data.dir)) ] - actual_data = list(swh_storage.directory_ls(data.dir["id"])) - expected_data = list(transform_entries(data.dir)) + actual_data = list(swh_storage.directory_ls(directory.id)) + expected_data = list(transform_entries(directory)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) - after_missing = list(swh_storage.directory_missing([data.dir["id"]])) + after_missing = list(swh_storage.directory_missing([directory.id])) assert after_missing == [] swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["directory"] == 1 - def test_directory_add_from_generator(self, swh_storage): + def test_directory_add_from_generator(self, swh_storage, sample_data_model): + directory = sample_data_model["directory"][1] + def _dir_gen(): - yield data.dir + yield directory actual_result = swh_storage.directory_add(directories=_dir_gen()) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("directory", Directory.from_dict(data.dir)) + ("directory", directory) ] 
swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["directory"] == 1 - def test_directory_add_validation(self, swh_storage): - dir_ = copy.deepcopy(data.dir) + def test_directory_add_validation(self, swh_storage, sample_data_model): + directory = sample_data_model["directory"][1] + dir_ = directory.to_dict() dir_["entries"][0]["type"] = "foobar" with pytest.raises(StorageArgumentException, match="type.*foobar"): swh_storage.directory_add([dir_]) - dir_ = copy.deepcopy(data.dir) + dir_ = directory.to_dict() del dir_["entries"][0]["target"] with pytest.raises(StorageArgumentException, match="target") as cm: swh_storage.directory_add([dir_]) if type(cm.value) == psycopg2.IntegrityError: assert cm.value.pgcode == psycopg2.errorcodes.NOT_NULL_VIOLATION - def test_directory_add_twice(self, swh_storage): - actual_result = swh_storage.directory_add([data.dir]) + def test_directory_add_twice(self, swh_storage, sample_data_model): + directory = sample_data_model["directory"][1] + + actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("directory", Directory.from_dict(data.dir)) + ("directory", directory) ] - actual_result = swh_storage.directory_add([data.dir]) + actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 0} assert list(swh_storage.journal_writer.journal.objects) == [ - ("directory", Directory.from_dict(data.dir)) + ("directory", directory) ] - def test_directory_get_recursive(self, swh_storage): - init_missing = list(swh_storage.directory_missing([data.dir["id"]])) - assert init_missing == [data.dir["id"]] + def test_directory_get_recursive(self, swh_storage, sample_data_model): + dir1, dir2, dir3 = sample_data_model["directory"][:3] + + init_missing = list(swh_storage.directory_missing([dir1.id])) + assert init_missing == [dir1.id] - actual_result = swh_storage.directory_add([data.dir, data.dir2, data.dir3]) + actual_result = swh_storage.directory_add([dir1, dir2, dir3]) assert actual_result == {"directory:add": 3} assert list(swh_storage.journal_writer.journal.objects) == [ - ("directory", Directory.from_dict(data.dir)), - ("directory", Directory.from_dict(data.dir2)), - ("directory", Directory.from_dict(data.dir3)), + ("directory", dir1), + ("directory", dir2), + ("directory", dir3), ] # List directory containing a file and an unknown subdirectory - actual_data = list(swh_storage.directory_ls(data.dir["id"], recursive=True)) - expected_data = list(transform_entries(data.dir)) + actual_data = list(swh_storage.directory_ls(dir1.id, recursive=True)) + expected_data = list(transform_entries(dir1)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory containing a file and an unknown subdirectory - actual_data = list(swh_storage.directory_ls(data.dir2["id"], recursive=True)) - expected_data = list(transform_entries(data.dir2)) + actual_data = list(swh_storage.directory_ls(dir2.id, recursive=True)) + expected_data = list(transform_entries(dir2)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory containing a known subdirectory, entries should # be both those of the directory and of the subdir - actual_data = list(swh_storage.directory_ls(data.dir3["id"], recursive=True)) + actual_data = list(swh_storage.directory_ls(dir3.id, recursive=True)) expected_data = list( itertools.chain( - transform_entries(data.dir3), - transform_entries(data.dir, 
prefix=b"subdir/"), + transform_entries(dir3), transform_entries(dir2, prefix=b"subdir/"), ) ) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) - def test_directory_get_non_recursive(self, swh_storage): - init_missing = list(swh_storage.directory_missing([data.dir["id"]])) - assert init_missing == [data.dir["id"]] + def test_directory_get_non_recursive(self, swh_storage, sample_data_model): + dir1, dir2, dir3 = sample_data_model["directory"][:3] - actual_result = swh_storage.directory_add([data.dir, data.dir2, data.dir3]) + init_missing = list(swh_storage.directory_missing([dir1.id])) + assert init_missing == [dir1.id] + + actual_result = swh_storage.directory_add([dir1, dir2, dir3]) assert actual_result == {"directory:add": 3} assert list(swh_storage.journal_writer.journal.objects) == [ - ("directory", Directory.from_dict(data.dir)), - ("directory", Directory.from_dict(data.dir2)), - ("directory", Directory.from_dict(data.dir3)), + ("directory", dir1), + ("directory", dir2), + ("directory", dir3), ] # List directory containing a file and an unknown subdirectory - actual_data = list(swh_storage.directory_ls(data.dir["id"])) - expected_data = list(transform_entries(data.dir)) + actual_data = list(swh_storage.directory_ls(dir1.id)) + expected_data = list(transform_entries(dir1)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory contaiining a single file - actual_data = list(swh_storage.directory_ls(data.dir2["id"])) - expected_data = list(transform_entries(data.dir2)) + actual_data = list(swh_storage.directory_ls(dir2.id)) + expected_data = list(transform_entries(dir2)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory containing a known subdirectory, entries should # only be those of the parent directory, not of the subdir - actual_data = list(swh_storage.directory_ls(data.dir3["id"])) - expected_data = list(transform_entries(data.dir3)) + actual_data = list(swh_storage.directory_ls(dir3.id)) + expected_data = list(transform_entries(dir3)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) - def test_directory_entry_get_by_path(self, swh_storage): + def test_directory_entry_get_by_path(self, swh_storage, sample_data_model): + cont = sample_data_model["content"][0] + dir1, dir2, dir3, dir4 = sample_data_model["directory"][:4] + # given - init_missing = list(swh_storage.directory_missing([data.dir3["id"]])) - assert [data.dir3["id"]] == init_missing + init_missing = list(swh_storage.directory_missing([dir3.id])) + assert init_missing == [dir3.id] - actual_result = swh_storage.directory_add([data.dir3, data.dir4]) + actual_result = swh_storage.directory_add([dir3, dir4]) assert actual_result == {"directory:add": 2} expected_entries = [ { - "dir_id": data.dir3["id"], + "dir_id": dir3.id, "name": b"foo", "type": "file", - "target": data.cont["sha1_git"], + "target": cont.sha1_git, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.content, "length": None, }, { - "dir_id": data.dir3["id"], + "dir_id": dir3.id, "name": b"subdir", "type": "dir", - "target": data.dir["id"], + "target": dir2.id, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.directory, "length": None, }, { - "dir_id": data.dir3["id"], + "dir_id": dir3.id, "name": b"hello", "type": "file", "target": b"12345678901234567890", "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": 
from_disk.DentryPerms.content, "length": None, }, ] # when (all must be found here) - for entry, expected_entry in zip(data.dir3["entries"], expected_entries): + for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( - data.dir3["id"], [entry["name"]] + dir3.id, [entry.name] ) assert actual_entry == expected_entry # same, but deeper - for entry, expected_entry in zip(data.dir3["entries"], expected_entries): + for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( - data.dir4["id"], [b"subdir1", entry["name"]] + dir4.id, [b"subdir1", entry.name] ) expected_entry = expected_entry.copy() expected_entry["name"] = b"subdir1/" + expected_entry["name"] assert actual_entry == expected_entry # when (nothing should be found here since data.dir is not persisted.) - for entry in data.dir["entries"]: + for entry in dir2.entries: actual_entry = swh_storage.directory_entry_get_by_path( - data.dir["id"], [entry["name"]] + dir2.id, [entry.name] ) assert actual_entry is None - def test_directory_get_random(self, swh_storage): - swh_storage.directory_add([data.dir, data.dir2, data.dir3]) + def test_directory_get_random(self, swh_storage, sample_data_model): + dir1, dir2, dir3 = sample_data_model["directory"][:3] + swh_storage.directory_add([dir1, dir2, dir3]) assert swh_storage.directory_get_random() in { - data.dir["id"], - data.dir2["id"], - data.dir3["id"], + dir1.id, + dir2.id, + dir3.id, } def test_revision_add(self, swh_storage): init_missing = swh_storage.revision_missing([data.revision["id"]]) assert list(init_missing) == [data.revision["id"]] actual_result = swh_storage.revision_add([data.revision]) assert actual_result == {"revision:add": 1} end_missing = swh_storage.revision_missing([data.revision["id"]]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", Revision.from_dict(data.revision)) ] # already there so nothing added actual_result = swh_storage.revision_add([data.revision]) assert actual_result == {"revision:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["revision"] == 1 def test_revision_add_from_generator(self, swh_storage): def _rev_gen(): yield data.revision actual_result = swh_storage.revision_add(_rev_gen()) assert actual_result == {"revision:add": 1} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["revision"] == 1 def test_revision_add_validation(self, swh_storage): rev = copy.deepcopy(data.revision) rev["date"]["offset"] = 2 ** 16 with pytest.raises(StorageArgumentException, match="offset") as cm: swh_storage.revision_add([rev]) if type(cm.value) == psycopg2.DataError: assert cm.value.pgcode == psycopg2.errorcodes.NUMERIC_VALUE_OUT_OF_RANGE rev = copy.deepcopy(data.revision) rev["committer_date"]["offset"] = 2 ** 16 with pytest.raises(StorageArgumentException, match="offset") as cm: swh_storage.revision_add([rev]) if type(cm.value) == psycopg2.DataError: assert cm.value.pgcode == psycopg2.errorcodes.NUMERIC_VALUE_OUT_OF_RANGE rev = copy.deepcopy(data.revision) rev["type"] = "foobar" with pytest.raises(StorageArgumentException, match="(?i)type") as cm: swh_storage.revision_add([rev]) if type(cm.value) == psycopg2.DataError: assert cm.value.pgcode == psycopg2.errorcodes.INVALID_TEXT_REPRESENTATION def test_revision_add_twice(self, swh_storage): actual_result = swh_storage.revision_add([data.revision]) assert actual_result == {"revision:add": 1} 
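# The *_add endpoints are idempotent, which the *_add_twice tests around here
# rely on: re-adding a known object is a no-op reported as a zero count. A
# minimal sketch, assuming a `storage` instance and a Revision model object
# `revision` that is not yet stored.
def _revision_add_idempotency_sketch(storage, revision):
    first = storage.revision_add([revision])
    assert first == {"revision:add": 1}
    second = storage.revision_add([revision])
    assert second == {"revision:add": 0}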
assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", Revision.from_dict(data.revision)) ] actual_result = swh_storage.revision_add([data.revision, data.revision2]) assert actual_result == {"revision:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", Revision.from_dict(data.revision)), ("revision", Revision.from_dict(data.revision2)), ] def test_revision_add_name_clash(self, swh_storage): revision1 = data.revision revision2 = data.revision2 revision1["author"] = { "fullname": b"John Doe ", "name": b"John Doe", "email": b"john.doe@example.com", } revision2["author"] = { "fullname": b"John Doe ", "name": b"John Doe ", "email": b"john.doe@example.com ", } actual_result = swh_storage.revision_add([revision1, revision2]) assert actual_result == {"revision:add": 2} def test_revision_get_order(self, swh_storage): add_result = swh_storage.revision_add([data.revision, data.revision2]) assert add_result == {"revision:add": 2} # order 1 res1 = swh_storage.revision_get([data.revision["id"], data.revision2["id"]]) assert list(res1) == [data.revision, data.revision2] # order 2 res2 = swh_storage.revision_get([data.revision2["id"], data.revision["id"]]) assert list(res2) == [data.revision2, data.revision] def test_revision_log(self, swh_storage): # given # data.revision4 -is-child-of-> data.revision3 swh_storage.revision_add([data.revision3, data.revision4]) # when actual_results = list(swh_storage.revision_log([data.revision4["id"]])) # hack: ids generated for actual_result in actual_results: if "id" in actual_result["author"]: del actual_result["author"]["id"] if "id" in actual_result["committer"]: del actual_result["committer"]["id"] assert len(actual_results) == 2 # rev4 -child-> rev3 assert actual_results[0] == normalize_entity(data.revision4) assert actual_results[1] == normalize_entity(data.revision3) assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", Revision.from_dict(data.revision3)), ("revision", Revision.from_dict(data.revision4)), ] def test_revision_log_with_limit(self, swh_storage): # given # data.revision4 -is-child-of-> data.revision3 swh_storage.revision_add([data.revision3, data.revision4]) actual_results = list(swh_storage.revision_log([data.revision4["id"]], 1)) # hack: ids generated for actual_result in actual_results: if "id" in actual_result["author"]: del actual_result["author"]["id"] if "id" in actual_result["committer"]: del actual_result["committer"]["id"] assert len(actual_results) == 1 assert actual_results[0] == data.revision4 def test_revision_log_unknown_revision(self, swh_storage): rev_log = list(swh_storage.revision_log([data.revision["id"]])) assert rev_log == [] def test_revision_shortlog(self, swh_storage): # given # data.revision4 -is-child-of-> data.revision3 swh_storage.revision_add([data.revision3, data.revision4]) # when actual_results = list(swh_storage.revision_shortlog([data.revision4["id"]])) assert len(actual_results) == 2 # rev4 -child-> rev3 assert list(actual_results[0]) == short_revision(data.revision4) assert list(actual_results[1]) == short_revision(data.revision3) def test_revision_shortlog_with_limit(self, swh_storage): # given # data.revision4 -is-child-of-> data.revision3 swh_storage.revision_add([data.revision3, data.revision4]) actual_results = list(swh_storage.revision_shortlog([data.revision4["id"]], 1)) assert len(actual_results) == 1 assert list(actual_results[0]) == short_revision(data.revision4) def test_revision_get(self, swh_storage): 
swh_storage.revision_add([data.revision]) actual_revisions = list( swh_storage.revision_get([data.revision["id"], data.revision2["id"]]) ) # when if "id" in actual_revisions[0]["author"]: del actual_revisions[0]["author"]["id"] # hack: ids are generated if "id" in actual_revisions[0]["committer"]: del actual_revisions[0]["committer"]["id"] assert len(actual_revisions) == 2 assert actual_revisions[0] == normalize_entity(data.revision) assert actual_revisions[1] is None def test_revision_get_no_parents(self, swh_storage): swh_storage.revision_add([data.revision3]) get = list(swh_storage.revision_get([data.revision3["id"]])) assert len(get) == 1 assert get[0]["parents"] == () # no parents on this one def test_revision_get_random(self, swh_storage): swh_storage.revision_add([data.revision, data.revision2, data.revision3]) assert swh_storage.revision_get_random() in { data.revision["id"], data.revision2["id"], data.revision3["id"], } def test_release_add(self, swh_storage): init_missing = swh_storage.release_missing( [data.release["id"], data.release2["id"]] ) assert [data.release["id"], data.release2["id"]] == list(init_missing) actual_result = swh_storage.release_add([data.release, data.release2]) assert actual_result == {"release:add": 2} end_missing = swh_storage.release_missing( [data.release["id"], data.release2["id"]] ) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("release", Release.from_dict(data.release)), ("release", Release.from_dict(data.release2)), ] # already present so nothing added actual_result = swh_storage.release_add([data.release, data.release2]) assert actual_result == {"release:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["release"] == 2 def test_release_add_from_generator(self, swh_storage): def _rel_gen(): yield data.release yield data.release2 actual_result = swh_storage.release_add(_rel_gen()) assert actual_result == {"release:add": 2} assert list(swh_storage.journal_writer.journal.objects) == [ ("release", Release.from_dict(data.release)), ("release", Release.from_dict(data.release2)), ] swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["release"] == 2 def test_release_add_no_author_date(self, swh_storage): release = data.release release["author"] = None release["date"] = None actual_result = swh_storage.release_add([release]) assert actual_result == {"release:add": 1} end_missing = swh_storage.release_missing([data.release["id"]]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("release", Release.from_dict(release)) ] def test_release_add_validation(self, swh_storage): rel = copy.deepcopy(data.release) rel["date"]["offset"] = 2 ** 16 with pytest.raises(StorageArgumentException, match="offset") as cm: swh_storage.release_add([rel]) if type(cm.value) == psycopg2.DataError: assert cm.value.pgcode == psycopg2.errorcodes.NUMERIC_VALUE_OUT_OF_RANGE rel = copy.deepcopy(data.release) rel["author"] = None with pytest.raises(StorageArgumentException, match="date") as cm: swh_storage.release_add([rel]) if type(cm.value) == psycopg2.IntegrityError: assert cm.value.pgcode == psycopg2.errorcodes.CHECK_VIOLATION def test_release_add_validation_type(self, swh_storage): rel = copy.deepcopy(data.release) rel["date"]["offset"] = "toto" with pytest.raises(StorageArgumentException): swh_storage.release_add([rel]) def test_release_add_twice(self, swh_storage): actual_result = swh_storage.release_add([data.release]) assert actual_result == 
{"release:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("release", Release.from_dict(data.release)) ] actual_result = swh_storage.release_add( [data.release, data.release2, data.release, data.release2] ) assert actual_result == {"release:add": 1} assert set(swh_storage.journal_writer.journal.objects) == set( [ ("release", Release.from_dict(data.release)), ("release", Release.from_dict(data.release2)), ] ) def test_release_add_name_clash(self, swh_storage): release1 = data.release.copy() release2 = data.release2.copy() release1["author"] = { "fullname": b"John Doe ", "name": b"John Doe", "email": b"john.doe@example.com", } release2["author"] = { "fullname": b"John Doe ", "name": b"John Doe ", "email": b"john.doe@example.com ", } actual_result = swh_storage.release_add([release1, release2]) assert actual_result == {"release:add": 2} def test_release_get(self, swh_storage): # given swh_storage.release_add([data.release, data.release2]) # when actual_releases = list( swh_storage.release_get([data.release["id"], data.release2["id"]]) ) # then for actual_release in actual_releases: if "id" in actual_release["author"]: del actual_release["author"]["id"] # hack: ids are generated assert [normalize_entity(data.release), normalize_entity(data.release2)] == [ actual_releases[0], actual_releases[1], ] unknown_releases = list(swh_storage.release_get([data.release3["id"]])) assert unknown_releases[0] is None def test_release_get_order(self, swh_storage): add_result = swh_storage.release_add([data.release, data.release2]) assert add_result == {"release:add": 2} # order 1 res1 = swh_storage.release_get([data.release["id"], data.release2["id"]]) assert list(res1) == [data.release, data.release2] # order 2 res2 = swh_storage.release_get([data.release2["id"], data.release["id"]]) assert list(res2) == [data.release2, data.release] def test_release_get_random(self, swh_storage): swh_storage.release_add([data.release, data.release2, data.release3]) assert swh_storage.release_get_random() in { data.release["id"], data.release2["id"], data.release3["id"], } def test_origin_add_one(self, swh_storage): origin0 = swh_storage.origin_get(data.origin) assert origin0 is None id = swh_storage.origin_add_one(data.origin) actual_origin = swh_storage.origin_get({"url": data.origin["url"]}) assert actual_origin["url"] == data.origin["url"] id2 = swh_storage.origin_add_one(data.origin) assert id == id2 def test_origin_add(self, swh_storage): origin0 = swh_storage.origin_get([data.origin])[0] assert origin0 is None stats = swh_storage.origin_add([data.origin, data.origin2]) assert stats == {"origin:add": 2} actual_origin = swh_storage.origin_get([{"url": data.origin["url"],}])[0] assert actual_origin["url"] == data.origin["url"] actual_origin2 = swh_storage.origin_get([{"url": data.origin2["url"],}])[0] assert actual_origin2["url"] == data.origin2["url"] assert set(swh_storage.journal_writer.journal.objects) == set( [ ("origin", Origin.from_dict(actual_origin)), ("origin", Origin.from_dict(actual_origin2)), ] ) swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["origin"] == 2 def test_origin_add_from_generator(self, swh_storage): def _ori_gen(): yield data.origin yield data.origin2 stats = swh_storage.origin_add(_ori_gen()) assert stats == {"origin:add": 2} actual_origin = swh_storage.origin_get([{"url": data.origin["url"],}])[0] assert actual_origin["url"] == data.origin["url"] actual_origin2 = swh_storage.origin_get([{"url": data.origin2["url"],}])[0] assert actual_origin2["url"] 
== data.origin2["url"] if "id" in actual_origin: del actual_origin["id"] del actual_origin2["id"] assert set(swh_storage.journal_writer.journal.objects) == set( [ ("origin", Origin.from_dict(actual_origin)), ("origin", Origin.from_dict(actual_origin2)), ] ) swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["origin"] == 2 def test_origin_add_twice(self, swh_storage): add1 = swh_storage.origin_add([data.origin, data.origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [ ("origin", Origin.from_dict(data.origin)), ("origin", Origin.from_dict(data.origin2)), ] ) assert add1 == {"origin:add": 2} add2 = swh_storage.origin_add([data.origin, data.origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [ ("origin", Origin.from_dict(data.origin)), ("origin", Origin.from_dict(data.origin2)), ] ) assert add2 == {"origin:add": 0} def test_origin_add_validation(self, swh_storage): """Incorrect formatted origin should fail the validation """ with pytest.raises(StorageArgumentException, match="url"): swh_storage.origin_add([{}]) with pytest.raises( StorageArgumentException, match="unexpected keyword argument" ): swh_storage.origin_add([{"ul": "mistyped url key"}]) def test_origin_get_legacy(self, swh_storage): assert swh_storage.origin_get(data.origin) is None swh_storage.origin_add_one(data.origin) actual_origin0 = swh_storage.origin_get({"url": data.origin["url"]}) assert actual_origin0["url"] == data.origin["url"] def test_origin_get(self, swh_storage): assert swh_storage.origin_get(data.origin) is None assert swh_storage.origin_get([data.origin]) == [None] swh_storage.origin_add_one(data.origin) actual_origin0 = swh_storage.origin_get([{"url": data.origin["url"]}]) assert len(actual_origin0) == 1 assert actual_origin0[0]["url"] == data.origin["url"] actual_origins = swh_storage.origin_get( [{"url": data.origin["url"]}, {"url": "not://exists"}] ) assert actual_origins == [{"url": data.origin["url"]}, None] def _generate_random_visits(self, nb_visits=100, start=0, end=7): """Generate random visits within the last 2 months (to avoid computations) """ visits = [] today = now() for weeks in range(nb_visits, 0, -1): hours = random.randint(0, 24) minutes = random.randint(0, 60) seconds = random.randint(0, 60) days = random.randint(0, 28) weeks = random.randint(start, end) date_visit = today - timedelta( weeks=weeks, hours=hours, minutes=minutes, seconds=seconds, days=days ) visits.append(date_visit) return visits def test_origin_visit_get_all(self, swh_storage): origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(origin) visits = swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=data.date_visit1, type=data.type_visit1, ), OriginVisit( origin=origin.url, date=data.date_visit2, type=data.type_visit2, ), OriginVisit( origin=origin.url, date=data.date_visit2, type=data.type_visit2, ), ] ) ov1, ov2, ov3 = [ {**v.to_dict(), "status": "created", "snapshot": None, "metadata": None,} for v in visits ] # order asc, no pagination, no limit all_visits = list(swh_storage.origin_visit_get(origin.url)) assert all_visits == [ov1, ov2, ov3] # order asc, no pagination, limit all_visits2 = list(swh_storage.origin_visit_get(origin.url, limit=2)) assert all_visits2 == [ov1, ov2] # order asc, pagination, no limit all_visits3 = list( swh_storage.origin_visit_get(origin.url, last_visit=ov1["visit"]) ) assert all_visits3 == [ov2, ov3] # order asc, pagination, limit all_visits4 = list( swh_storage.origin_visit_get(origin.url, 
last_visit=ov2["visit"], limit=1) ) assert all_visits4 == [ov3] # order desc, no pagination, no limit all_visits5 = list(swh_storage.origin_visit_get(origin.url, order="desc")) assert all_visits5 == [ov3, ov2, ov1] # order desc, no pagination, limit all_visits6 = list( swh_storage.origin_visit_get(origin.url, limit=2, order="desc") ) assert all_visits6 == [ov3, ov2] # order desc, pagination, no limit all_visits7 = list( swh_storage.origin_visit_get( origin.url, last_visit=ov3["visit"], order="desc" ) ) assert all_visits7 == [ov2, ov1] # order desc, pagination, limit all_visits8 = list( swh_storage.origin_visit_get( origin.url, last_visit=ov3["visit"], order="desc", limit=1 ) ) assert all_visits8 == [ov2] def test_origin_visit_get__unknown_origin(self, swh_storage): assert [] == list(swh_storage.origin_visit_get("foo")) def test_origin_visit_get_random(self, swh_storage): swh_storage.origin_add(data.origins) # Add some random visits within the selection range visits = self._generate_random_visits() visit_type = "git" # Add visits to those origins for origin in data.origins: origin_url = origin["url"] for date_visit in visits: visit = swh_storage.origin_visit_add( [OriginVisit(origin=origin_url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) swh_storage.refresh_stat_counters() stats = swh_storage.stat_counters() assert stats["origin"] == len(data.origins) assert stats["origin_visit"] == len(data.origins) * len(visits) random_origin_visit = swh_storage.origin_visit_get_random(visit_type) assert random_origin_visit assert random_origin_visit["origin"] is not None original_urls = [o["url"] for o in data.origins] assert random_origin_visit["origin"] in original_urls def test_origin_visit_get_random_nothing_found(self, swh_storage): swh_storage.origin_add(data.origins) visit_type = "hg" # Add some visits outside of the random generation selection so nothing # will be found by the random selection visits = self._generate_random_visits(nb_visits=3, start=13, end=24) for origin in data.origins: origin_url = origin["url"] for date_visit in visits: visit = swh_storage.origin_visit_add( [OriginVisit(origin=origin_url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) random_origin_visit = swh_storage.origin_visit_get_random(visit_type) assert random_origin_visit is None def test_origin_get_by_sha1(self, swh_storage): assert swh_storage.origin_get(data.origin) is None swh_storage.origin_add_one(data.origin) origins = list(swh_storage.origin_get_by_sha1([sha1(data.origin["url"])])) assert len(origins) == 1 assert origins[0]["url"] == data.origin["url"] def test_origin_get_by_sha1_not_found(self, swh_storage): assert swh_storage.origin_get(data.origin) is None origins = list(swh_storage.origin_get_by_sha1([sha1(data.origin["url"])])) assert len(origins) == 1 assert origins[0] is None def test_origin_search_single_result(self, swh_storage): found_origins = list(swh_storage.origin_search(data.origin["url"])) assert len(found_origins) == 0 found_origins = list(swh_storage.origin_search(data.origin["url"], regexp=True)) assert len(found_origins) == 0 swh_storage.origin_add_one(data.origin) origin_data = {"url": data.origin["url"]} found_origins = list(swh_storage.origin_search(data.origin["url"])) assert len(found_origins) == 1 if "id" in 
found_origins[0]: del found_origins[0]["id"] assert found_origins[0] == origin_data found_origins = list( swh_storage.origin_search("." + data.origin["url"][1:-1] + ".", regexp=True) ) assert len(found_origins) == 1 if "id" in found_origins[0]: del found_origins[0]["id"] assert found_origins[0] == origin_data swh_storage.origin_add_one(data.origin2) origin2_data = {"url": data.origin2["url"]} found_origins = list(swh_storage.origin_search(data.origin2["url"])) assert len(found_origins) == 1 if "id" in found_origins[0]: del found_origins[0]["id"] assert found_origins[0] == origin2_data found_origins = list( swh_storage.origin_search( "." + data.origin2["url"][1:-1] + ".", regexp=True ) ) assert len(found_origins) == 1 if "id" in found_origins[0]: del found_origins[0]["id"] assert found_origins[0] == origin2_data def test_origin_search_no_regexp(self, swh_storage): swh_storage.origin_add_one(data.origin) swh_storage.origin_add_one(data.origin2) origin = swh_storage.origin_get({"url": data.origin["url"]}) origin2 = swh_storage.origin_get({"url": data.origin2["url"]}) # no pagination found_origins = list(swh_storage.origin_search("/")) assert len(found_origins) == 2 # offset=0 found_origins0 = list(swh_storage.origin_search("/", offset=0, limit=1)) # noqa assert len(found_origins0) == 1 assert found_origins0[0] in [origin, origin2] # offset=1 found_origins1 = list(swh_storage.origin_search("/", offset=1, limit=1)) # noqa assert len(found_origins1) == 1 assert found_origins1[0] in [origin, origin2] # check both origins were returned assert found_origins0 != found_origins1 def test_origin_search_regexp_substring(self, swh_storage): swh_storage.origin_add_one(data.origin) swh_storage.origin_add_one(data.origin2) origin = swh_storage.origin_get({"url": data.origin["url"]}) origin2 = swh_storage.origin_get({"url": data.origin2["url"]}) # no pagination found_origins = list(swh_storage.origin_search("/", regexp=True)) assert len(found_origins) == 2 # offset=0 found_origins0 = list( swh_storage.origin_search("/", offset=0, limit=1, regexp=True) ) # noqa assert len(found_origins0) == 1 assert found_origins0[0] in [origin, origin2] # offset=1 found_origins1 = list( swh_storage.origin_search("/", offset=1, limit=1, regexp=True) ) # noqa assert len(found_origins1) == 1 assert found_origins1[0] in [origin, origin2] # check both origins were returned assert found_origins0 != found_origins1 def test_origin_search_regexp_fullstring(self, swh_storage): swh_storage.origin_add_one(data.origin) swh_storage.origin_add_one(data.origin2) origin = swh_storage.origin_get({"url": data.origin["url"]}) origin2 = swh_storage.origin_get({"url": data.origin2["url"]}) # no pagination found_origins = list(swh_storage.origin_search(".*/.*", regexp=True)) assert len(found_origins) == 2 # offset=0 found_origins0 = list( swh_storage.origin_search(".*/.*", offset=0, limit=1, regexp=True) ) # noqa assert len(found_origins0) == 1 assert found_origins0[0] in [origin, origin2] # offset=1 found_origins1 = list( swh_storage.origin_search(".*/.*", offset=1, limit=1, regexp=True) ) # noqa assert len(found_origins1) == 1 assert found_origins1[0] in [origin, origin2] # check both origins were returned assert found_origins0 != found_origins1 def test_origin_visit_add(self, swh_storage): origin1 = Origin.from_dict(data.origin2) swh_storage.origin_add_one(origin1) date_visit = now() date_visit2 = date_visit + datetime.timedelta(minutes=1) date_visit = round_to_milliseconds(date_visit) date_visit2 = round_to_milliseconds(date_visit2) visit1 = 
OriginVisit( origin=origin1.url, date=date_visit, type=data.type_visit1, ) visit2 = OriginVisit( origin=origin1.url, date=date_visit2, type=data.type_visit2, ) # add once ov1, ov2 = swh_storage.origin_visit_add([visit1, visit2]) # then again (will be ignored as they already exist) origin_visit1, origin_visit2 = swh_storage.origin_visit_add([ov1, ov2]) assert ov1 == origin_visit1 assert ov2 == origin_visit2 ovs1 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, date=date_visit, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin1.url, visit=ov2.visit, date=date_visit2, status="created", snapshot=None, ) actual_origin_visits = list(swh_storage.origin_visit_get(origin1.url)) expected_visits = [ {**ovs1.to_dict(), "type": ov1.type}, {**ovs2.to_dict(), "type": ov2.type}, ] assert len(expected_visits) == len(actual_origin_visits) for visit in expected_visits: assert visit in actual_origin_visits actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = list( [("origin", origin1)] + [("origin_visit", visit) for visit in [ov1, ov2]] * 2 + [("origin_visit_status", ovs) for ovs in [ovs1, ovs2]] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_add_validation(self, swh_storage): """Unknown origin when adding visits should raise""" visit = OriginVisit( origin="something-unknown", date=now(), type=data.type_visit1, ) with pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_add([visit]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects def test_origin_visit_status_add_validation(self, swh_storage): """Wrong origin_visit_status input should raise storage argument error""" date_visit = now() visit_status1 = OriginVisitStatus( origin="unknown-origin-url", visit=10, date=date_visit, status="full", snapshot=None, ) with pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_status_add([visit_status1]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects def test_origin_visit_status_add(self, swh_storage): """Correct origin visit statuses should add a new visit status """ origin1 = Origin.from_dict(data.origin2) origin2 = Origin(url="new-origin") swh_storage.origin_add([origin1, origin2]) ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=data.date_visit1, type=data.type_visit1, ), OriginVisit( origin=origin2.url, date=data.date_visit2, type=data.type_visit2, ), ] ) ovs1 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, date=data.date_visit1, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin2.url, visit=ov2.visit, date=data.date_visit2, status="created", snapshot=None, ) snapshot_id = data.snapshot["id"] date_visit_now = now() visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, status="full", snapshot=snapshot_id, ) date_visit_now = now() visit_status2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=date_visit_now, status="ongoing", snapshot=None, metadata={"intrinsic": "something"}, ) swh_storage.origin_visit_status_add([visit_status1, visit_status2]) origin_visit1 = swh_storage.origin_visit_get_latest( origin1.url, require_snapshot=True ) assert origin_visit1 assert origin_visit1["status"] == "full" assert origin_visit1["snapshot"] == snapshot_id origin_visit2 = swh_storage.origin_visit_get_latest( origin2.url, require_snapshot=False ) assert origin2.url != origin1.url assert 
origin_visit2 assert origin_visit2["status"] == "ongoing" assert origin_visit2["snapshot"] is None assert origin_visit2["metadata"] == {"intrinsic": "something"} actual_objects = list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1, origin2] expected_visits = [ov1, ov2] expected_visit_statuses = [ovs1, ovs2, visit_status1, visit_status2] expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_status_add_twice(self, swh_storage): """Correct origin visit statuses should add a new visit status """ origin1 = Origin.from_dict(data.origin2) swh_storage.origin_add([origin1]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=data.date_visit1, type=data.type_visit1, ), ] )[0] ovs1 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, date=data.date_visit1, status="created", snapshot=None, ) snapshot_id = data.snapshot["id"] date_visit_now = now() visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, status="full", snapshot=snapshot_id, ) swh_storage.origin_visit_status_add([visit_status1]) # second call will ignore existing entries (will send to storage though) swh_storage.origin_visit_status_add([visit_status1]) origin_visits = list(swh_storage.origin_visit_get(ov1.origin)) assert len(origin_visits) == 1 origin_visit1 = origin_visits[0] assert origin_visit1 assert origin_visit1["status"] == "full" assert origin_visit1["snapshot"] == snapshot_id actual_objects = list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1] expected_visits = [ov1] expected_visit_statuses = [ovs1, visit_status1, visit_status1] # write twice in the journal expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_find_by_date(self, swh_storage): # given origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(data.origin) visit1 = OriginVisit( origin=origin.url, date=data.date_visit2, type=data.type_visit1, ) visit2 = OriginVisit( origin=origin.url, date=data.date_visit3, type=data.type_visit2, ) visit3 = OriginVisit( origin=origin.url, date=data.date_visit2, type=data.type_visit3, ) ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) ovs1 = OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=data.date_visit2, status="ongoing", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin.url, visit=ov2.visit, date=data.date_visit3, status="ongoing", snapshot=None, ) ovs3 = OriginVisitStatus( origin=origin.url, visit=ov3.visit, date=data.date_visit2, status="ongoing", snapshot=None, ) swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3]) # Simple case visit = swh_storage.origin_visit_find_by_date(origin.url, data.date_visit3) assert visit["visit"] == ov2.visit # There are two visits at the same date, the latest must be returned visit = swh_storage.origin_visit_find_by_date(origin.url, data.date_visit2) assert visit["visit"] == ov3.visit def test_origin_visit_find_by_date__unknown_origin(self, swh_storage): swh_storage.origin_visit_find_by_date("foo", data.date_visit2) def test_origin_visit_get_by(self, swh_storage): origin_url = swh_storage.origin_add_one(data.origin) 
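# A condensed sketch of the visit lifecycle that the surrounding tests drive:
# register an origin, open a visit, then record a status for it. `storage`,
# `origin` (an Origin model object), `visit_date` and `visit_type` are assumed
# inputs; OriginVisit, OriginVisitStatus and now() come from this module's
# existing imports.
def _visit_lifecycle_sketch(storage, origin, visit_date, visit_type):
    storage.origin_add([origin])
    visit = storage.origin_visit_add(
        [OriginVisit(origin=origin.url, date=visit_date, type=visit_type)]
    )[0]
    storage.origin_visit_status_add(
        [
            OriginVisitStatus(
                origin=origin.url,
                visit=visit.visit,
                date=now(),
                status="full",
                snapshot=None,
            )
        ]
    )
    return storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"])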
origin_url2 = swh_storage.origin_add_one(data.origin2) visit = OriginVisit( origin=origin_url, date=data.date_visit2, type=data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([data.snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=data.snapshot["id"], ) ] ) # Add some other {origin, visit} entries visit2 = OriginVisit( origin=origin_url, date=data.date_visit3, type=data.type_visit3, ) visit3 = OriginVisit( origin=origin_url2, date=data.date_visit3, type=data.type_visit3, ) swh_storage.origin_visit_add([visit2, visit3]) # when visit1_metadata = { "contents": 42, "directories": 22, } swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="full", snapshot=data.snapshot["id"], metadata=visit1_metadata, ) ] ) expected_origin_visit = origin_visit1.to_dict() expected_origin_visit.update( { "origin": origin_url, "visit": origin_visit1.visit, "date": data.date_visit2, "type": data.type_visit2, "metadata": visit1_metadata, "status": "full", "snapshot": data.snapshot["id"], } ) # when actual_origin_visit1 = swh_storage.origin_visit_get_by( origin_url, origin_visit1.visit ) # then assert actual_origin_visit1 == expected_origin_visit def test_origin_visit_get_by__unknown_origin(self, swh_storage): assert swh_storage.origin_visit_get_by("foo", 10) is None def test_origin_visit_get_by_no_result(self, swh_storage): swh_storage.origin_add([data.origin]) actual_origin_visit = swh_storage.origin_visit_get_by(data.origin["url"], 999) assert actual_origin_visit is None def test_origin_visit_get_latest_none(self, swh_storage): """Origin visit get latest on unknown objects should return nothing """ # unknown origin so no result assert swh_storage.origin_visit_get_latest("unknown-origin") is None # unknown type origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(origin) assert swh_storage.origin_visit_get_latest(origin.url, type="unknown") is None def test_origin_visit_get_latest_filter_type(self, swh_storage): """Filtering origin visit get latest with filter type should be ok """ origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(origin) visit1 = OriginVisit( origin=origin.url, date=data.date_visit1, type=data.type_visit1, ) visit2 = OriginVisit( origin=origin.url, date=data.date_visit2, type=data.type_visit2, ) # Add a visit with the same date as the previous one visit3 = OriginVisit( origin=origin.url, date=data.date_visit2, type=data.type_visit2, ) assert data.type_visit1 != data.type_visit2 assert data.date_visit1 < data.date_visit2 ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) origin_visit1 = swh_storage.origin_visit_get_by(origin.url, ov1.visit) origin_visit3 = swh_storage.origin_visit_get_by(origin.url, ov3.visit) assert data.type_visit1 != data.type_visit2 # Check type filter is ok actual_ov1 = swh_storage.origin_visit_get_latest( origin.url, type=data.type_visit1, ) assert actual_ov1 == origin_visit1 actual_ov3 = swh_storage.origin_visit_get_latest( origin.url, type=data.type_visit2, ) assert actual_ov3 == origin_visit3 new_type = "npm" assert new_type not in [data.type_visit1, data.type_visit2] assert ( swh_storage.origin_visit_get_latest( origin.url, type=new_type, # no visit matching that type ) is None ) def test_origin_visit_get_latest(self, swh_storage): origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(origin) 
        visit1 = OriginVisit(
            origin=origin.url, date=data.date_visit1, type=data.type_visit1,
        )
        visit2 = OriginVisit(
            origin=origin.url, date=data.date_visit2, type=data.type_visit2,
        )
        # Add a visit with the same date as the previous one
        visit3 = OriginVisit(
            origin=origin.url, date=data.date_visit2, type=data.type_visit2,
        )
        ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3])
        origin_visit1 = swh_storage.origin_visit_get_by(origin.url, ov1.visit)
        origin_visit2 = swh_storage.origin_visit_get_by(origin.url, ov2.visit)
        origin_visit3 = swh_storage.origin_visit_get_by(origin.url, ov3.visit)

        # Two visits, both with no snapshot
        assert origin_visit3 == swh_storage.origin_visit_get_latest(origin.url)
        assert (
            swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True)
            is None
        )

        # Add snapshot to visit1; require_snapshot=True makes it return visit1,
        # while require_snapshot=False still returns visit3 (the latest visit)
        complete_snapshot = Snapshot.from_dict(data.complete_snapshot)
        swh_storage.snapshot_add([complete_snapshot])
        swh_storage.origin_visit_status_add(
            [
                OriginVisitStatus(
                    origin=origin.url,
                    visit=ov1.visit,
                    date=now(),
                    status="ongoing",
                    snapshot=complete_snapshot.id,
                )
            ]
        )
        actual_visit = swh_storage.origin_visit_get_latest(
            origin.url, require_snapshot=True
        )
        assert actual_visit == {
            **origin_visit1,
            "snapshot": complete_snapshot.id,
            "status": "ongoing",  # visit1 now has status "ongoing"
        }
        assert origin_visit3 == swh_storage.origin_visit_get_latest(origin.url)

        # Status filter: no visit has status "full" yet, so no visit is returned
        assert (
            swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"])
            is None
        )

        # Mark the first visit as completed and check status filter again
        swh_storage.origin_visit_status_add(
            [
                OriginVisitStatus(
                    origin=origin.url,
                    visit=ov1.visit,
                    date=now(),
                    status="full",
                    snapshot=complete_snapshot.id,
                )
            ]
        )
        assert {
            **origin_visit1,
            "snapshot": complete_snapshot.id,
            "status": "full",
        } == swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"])
        assert origin_visit3 == swh_storage.origin_visit_get_latest(origin.url)

        # Add snapshot to visit2 and check that the new snapshot is returned
        empty_snapshot = Snapshot.from_dict(data.empty_snapshot)
        swh_storage.snapshot_add([empty_snapshot])
        swh_storage.origin_visit_status_add(
            [
                OriginVisitStatus(
                    origin=origin.url,
                    visit=ov2.visit,
                    date=now(),
                    status="ongoing",
                    snapshot=empty_snapshot.id,
                )
            ]
        )
        assert {
            **origin_visit2,
            "snapshot": empty_snapshot.id,
            "status": "ongoing",
        } == swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True)
        assert origin_visit3 == swh_storage.origin_visit_get_latest(origin.url)

        # Check that the status filter is still working
        assert {
            **origin_visit1,
            "snapshot": complete_snapshot.id,
            "status": "full",
        } == swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"])

        # Add snapshot to visit3 (same date as visit2)
        swh_storage.origin_visit_status_add(
            [
                OriginVisitStatus(
                    origin=origin.url,
                    visit=ov3.visit,
                    date=now(),
                    status="ongoing",
                    snapshot=complete_snapshot.id,
                )
            ]
        )
        assert {
            **origin_visit1,
            "snapshot": complete_snapshot.id,
            "status": "full",
        } == swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"])
        assert {
            **origin_visit1,
            "snapshot": complete_snapshot.id,
            "status": "full",
        } == swh_storage.origin_visit_get_latest(
            origin.url, allowed_statuses=["full"], require_snapshot=True
        )
        assert {
            **origin_visit3,
            "snapshot": complete_snapshot.id,
            "status": "ongoing",
        } == swh_storage.origin_visit_get_latest(origin.url)
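        # Note: origin_visit_get_latest() picks the most recent visit for the
        # origin; the two filters exercised here and below appear to combine:
        # allowed_statuses keeps only visits whose latest status is in the list,
        # and require_snapshot keeps only visits whose latest status references
        # a snapshot.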
        assert {
            **origin_visit3,
            "snapshot": complete_snapshot.id,
            "status": "ongoing",
        } == swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True)

    def test_origin_visit_status_get_latest(self, swh_storage):
        origin1 = Origin.from_dict(data.origin)
        swh_storage.origin_add_one(data.origin)

        # to have some reference visits
        ov1, ov2 = swh_storage.origin_visit_add(
            [
                OriginVisit(
                    origin=origin1.url, date=data.date_visit1, type=data.type_visit1,
                ),
                OriginVisit(
                    origin=origin1.url, date=data.date_visit2, type=data.type_visit2,
                ),
            ]
        )
        snapshot = Snapshot.from_dict(data.complete_snapshot)
        swh_storage.snapshot_add([snapshot])

        date_now = now()
        date_now = round_to_milliseconds(date_now)
        assert data.date_visit1 < data.date_visit2
        assert data.date_visit2 < date_now

        ovs1 = OriginVisitStatus(
            origin=origin1.url,
            visit=ov1.visit,
            date=data.date_visit1,
            status="partial",
            snapshot=None,
        )
        ovs2 = OriginVisitStatus(
            origin=origin1.url,
            visit=ov1.visit,
            date=data.date_visit2,
            status="ongoing",
            snapshot=None,
        )
        ovs3 = OriginVisitStatus(
            origin=origin1.url,
            visit=ov2.visit,
            date=data.date_visit2 + datetime.timedelta(minutes=1),  # to not be ignored
            status="ongoing",
            snapshot=None,
        )
        ovs4 = OriginVisitStatus(
            origin=origin1.url,
            visit=ov2.visit,
            date=date_now,
            status="full",
            snapshot=snapshot.id,
            metadata={"something": "wicked"},
        )

        swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3, ovs4])

        # unknown origin so no result
        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            "unknown-origin", ov1.visit
        )
        assert actual_origin_visit is None

        # unknown visit so no result
        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            ov1.origin, ov1.visit + 10
        )
        assert actual_origin_visit is None

        # Two visit statuses for visit 1, both with no snapshot: take the most recent
        actual_origin_visit2 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov1.visit
        )
        assert isinstance(actual_origin_visit2, OriginVisitStatus)
        assert actual_origin_visit2 == ovs2
        assert ovs2.origin == origin1.url
        assert ovs2.visit == ov1.visit

        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov1.visit, require_snapshot=True
        )
        # there is no visit status with a snapshot yet for that visit
        assert actual_origin_visit is None

        actual_origin_visit2 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov1.visit, allowed_statuses=["partial", "ongoing"]
        )
        # the most recent status matching the filter is elected (ovs2, "ongoing")
        assert actual_origin_visit2 == ovs2
        assert actual_origin_visit2.status == "ongoing"

        actual_origin_visit4 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov2.visit, require_snapshot=True
        )
        assert actual_origin_visit4 == ovs4
        assert actual_origin_visit4.snapshot == snapshot.id

        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov2.visit, require_snapshot=True, allowed_statuses=["ongoing"]
        )
        # nothing matches, so no result
        assert actual_origin_visit is None

        # restricting to "ongoing" skips ovs4 ("full") and returns ovs3
        actual_origin_visit3 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov2.visit, allowed_statuses=["ongoing"]
        )
        assert actual_origin_visit3 == ovs3

    def test_person_fullname_unicity(self, swh_storage):
        # given (person injection through revisions for example)
        revision = data.revision

        # create a revision with the same committer fullname but without name and email
        revision2 = copy.deepcopy(data.revision2)
        revision2["committer"] = dict(revision["committer"])
        revision2["committer"]["email"] = None
        revision2["committer"]["name"] = None

        swh_storage.revision_add([revision])
        swh_storage.revision_add([revision2])
# when getting added revisions revisions = list(swh_storage.revision_get([revision["id"], revision2["id"]])) # then # check committers are the same assert revisions[0]["committer"] == revisions[1]["committer"] def test_snapshot_add_get_empty(self, swh_storage): origin_url = swh_storage.origin_add_one(data.origin) ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin_url, date=data.date_visit1, type=data.type_visit1, ) ] )[0] actual_result = swh_storage.snapshot_add([data.empty_snapshot]) assert actual_result == {"snapshot:add": 1} date_now = now() swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=ov1.visit, date=date_now, status="full", snapshot=data.empty_snapshot["id"], ) ] ) by_id = swh_storage.snapshot_get(data.empty_snapshot["id"]) assert by_id == {**data.empty_snapshot, "next_branch": None} by_ov = swh_storage.snapshot_get_by_origin_visit(origin_url, ov1.visit) assert by_ov == {**data.empty_snapshot, "next_branch": None} ovs1 = OriginVisitStatus.from_dict( { "origin": origin_url, "date": data.date_visit1, "visit": ov1.visit, "status": "created", "snapshot": None, "metadata": None, } ) ovs2 = OriginVisitStatus.from_dict( { "origin": origin_url, "date": date_now, "visit": ov1.visit, "status": "full", "metadata": None, "snapshot": data.empty_snapshot["id"], } ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("origin", Origin.from_dict(data.origin)), ("origin_visit", ov1), ("origin_visit_status", ovs1,), ("snapshot", Snapshot.from_dict(data.empty_snapshot)), ("origin_visit_status", ovs2,), ] for obj in expected_objects: assert obj in actual_objects def test_snapshot_add_get_complete(self, swh_storage): origin_url = data.origin["url"] origin_url = swh_storage.origin_add_one(data.origin) visit = OriginVisit( origin=origin_url, date=data.date_visit1, type=data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] visit_id = origin_visit1.visit actual_result = swh_storage.snapshot_add([data.complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=data.complete_snapshot["id"], ) ] ) assert actual_result == {"snapshot:add": 1} by_id = swh_storage.snapshot_get(data.complete_snapshot["id"]) assert by_id == {**data.complete_snapshot, "next_branch": None} by_ov = swh_storage.snapshot_get_by_origin_visit(origin_url, visit_id) assert by_ov == {**data.complete_snapshot, "next_branch": None} def test_snapshot_add_many(self, swh_storage): actual_result = swh_storage.snapshot_add( [data.snapshot, data.complete_snapshot] ) assert actual_result == {"snapshot:add": 2} assert { **data.complete_snapshot, "next_branch": None, } == swh_storage.snapshot_get(data.complete_snapshot["id"]) assert {**data.snapshot, "next_branch": None} == swh_storage.snapshot_get( data.snapshot["id"] ) swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["snapshot"] == 2 def test_snapshot_add_many_from_generator(self, swh_storage): def _snp_gen(): yield data.snapshot yield data.complete_snapshot actual_result = swh_storage.snapshot_add(_snp_gen()) assert actual_result == {"snapshot:add": 2} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["snapshot"] == 2 def test_snapshot_add_many_incremental(self, swh_storage): actual_result = swh_storage.snapshot_add([data.complete_snapshot]) assert actual_result == {"snapshot:add": 1} actual_result2 = swh_storage.snapshot_add( [data.snapshot, 
data.complete_snapshot] ) assert actual_result2 == {"snapshot:add": 1} assert { **data.complete_snapshot, "next_branch": None, } == swh_storage.snapshot_get(data.complete_snapshot["id"]) assert {**data.snapshot, "next_branch": None} == swh_storage.snapshot_get( data.snapshot["id"] ) def test_snapshot_add_twice(self, swh_storage): actual_result = swh_storage.snapshot_add([data.empty_snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("snapshot", Snapshot.from_dict(data.empty_snapshot)) ] actual_result = swh_storage.snapshot_add([data.snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("snapshot", Snapshot.from_dict(data.empty_snapshot)), ("snapshot", Snapshot.from_dict(data.snapshot)), ] def test_snapshot_add_validation(self, swh_storage): snap = copy.deepcopy(data.snapshot) snap["branches"][b"foo"] = {"target_type": "revision"} with pytest.raises(StorageArgumentException, match="target"): swh_storage.snapshot_add([snap]) snap = copy.deepcopy(data.snapshot) snap["branches"][b"foo"] = {"target": b"\x42" * 20} with pytest.raises(StorageArgumentException, match="target_type"): swh_storage.snapshot_add([snap]) def test_snapshot_add_count_branches(self, swh_storage): actual_result = swh_storage.snapshot_add([data.complete_snapshot]) assert actual_result == {"snapshot:add": 1} snp_id = data.complete_snapshot["id"] snp_size = swh_storage.snapshot_count_branches(snp_id) expected_snp_size = { "alias": 1, "content": 1, "directory": 2, "release": 1, "revision": 1, "snapshot": 1, None: 1, } assert snp_size == expected_snp_size def test_snapshot_add_get_paginated(self, swh_storage): swh_storage.snapshot_add([data.complete_snapshot]) snp_id = data.complete_snapshot["id"] branches = data.complete_snapshot["branches"] branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches(snp_id, branches_from=b"release") rel_idx = branch_names.index(b"release") expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in branch_names[rel_idx:]}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches(snp_id, branches_count=1) expected_snapshot = { "id": snp_id, "branches": {branch_names[0]: branches[branch_names[0]],}, "next_branch": b"content", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, branches_from=b"directory", branches_count=3 ) dir_idx = branch_names.index(b"directory") expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in branch_names[dir_idx : dir_idx + 3] }, "next_branch": branch_names[dir_idx + 3], } assert snapshot == expected_snapshot def test_snapshot_add_get_filtered(self, swh_storage): origin_url = swh_storage.origin_add_one(data.origin) visit = OriginVisit( origin=origin_url, date=data.date_visit1, type=data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([data.complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=data.complete_snapshot["id"], ) ] ) snp_id = data.complete_snapshot["id"] branches = data.complete_snapshot["branches"] snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["release", "revision"] ) expected_snapshot = { "id": snp_id, "branches": { 
name: tgt for name, tgt in branches.items() if tgt and tgt["target_type"] in ["release", "revision"] }, "next_branch": None, } assert snapshot == expected_snapshot snapshot = swh_storage.snapshot_get_branches(snp_id, target_types=["alias"]) expected_snapshot = { "id": snp_id, "branches": { name: tgt for name, tgt in branches.items() if tgt and tgt["target_type"] == "alias" }, "next_branch": None, } assert snapshot == expected_snapshot def test_snapshot_add_get_filtered_and_paginated(self, swh_storage): swh_storage.snapshot_add([data.complete_snapshot]) snp_id = data.complete_snapshot["id"] branches = data.complete_snapshot["branches"] branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2" ) expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in (b"directory2", b"release")}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=1 ) expected_snapshot = { "id": snp_id, "branches": {b"directory": branches[b"directory"]}, "next_branch": b"directory2", } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=2 ) expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in (b"directory", b"directory2") }, "next_branch": b"release", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2", branches_count=1, ) dir_idx = branch_names.index(b"directory2") expected_snapshot = { "id": snp_id, "branches": {branch_names[dir_idx]: branches[branch_names[dir_idx]],}, "next_branch": b"release", } assert snapshot == expected_snapshot def test_snapshot_add_get_branch_by_type(self, swh_storage): snapshot = copy.deepcopy(data.complete_snapshot) alias1 = b"alias1" alias2 = b"alias2" target1 = random.choice(list(snapshot["branches"].keys())) target2 = random.choice(list(snapshot["branches"].keys())) snapshot["branches"][alias2] = { "target": target2, "target_type": "alias", } snapshot["branches"][alias1] = { "target": target1, "target_type": "alias", } swh_storage.snapshot_add([snapshot]) branches = swh_storage.snapshot_get_branches( snapshot["id"], target_types=["alias"], branches_from=alias1, branches_count=1, )["branches"] assert len(branches) == 1 assert alias1 in branches def test_snapshot_add_get(self, swh_storage): origin_url = swh_storage.origin_add_one(data.origin) visit = OriginVisit( origin=origin_url, date=data.date_visit1, type=data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] visit_id = origin_visit1.visit swh_storage.snapshot_add([data.snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=data.snapshot["id"], ) ] ) by_id = swh_storage.snapshot_get(data.snapshot["id"]) assert by_id == {**data.snapshot, "next_branch": None} by_ov = swh_storage.snapshot_get_by_origin_visit(origin_url, visit_id) assert by_ov == {**data.snapshot, "next_branch": None} origin_visit_info = swh_storage.origin_visit_get_by(origin_url, visit_id) assert origin_visit_info["snapshot"] == data.snapshot["id"] def test_snapshot_add_twice__by_origin_visit(self, 
swh_storage): origin_url = swh_storage.origin_add_one(data.origin) ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin_url, date=data.date_visit1, type=data.type_visit1, ) ] )[0] swh_storage.snapshot_add([data.snapshot]) date_now2 = now() swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=ov1.visit, date=date_now2, status="ongoing", snapshot=data.snapshot["id"], ) ] ) by_ov1 = swh_storage.snapshot_get_by_origin_visit(origin_url, ov1.visit) assert by_ov1 == {**data.snapshot, "next_branch": None} ov2 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin_url, date=data.date_visit2, type=data.type_visit2, ) ] )[0] swh_storage.snapshot_add([data.snapshot]) date_now4 = now() swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=ov2.visit, date=date_now4, status="ongoing", snapshot=data.snapshot["id"], ) ] ) by_ov2 = swh_storage.snapshot_get_by_origin_visit(origin_url, ov2.visit) assert by_ov2 == {**data.snapshot, "next_branch": None} ovs1 = OriginVisitStatus.from_dict( { "origin": origin_url, "date": data.date_visit1, "visit": ov1.visit, "status": "created", "metadata": None, "snapshot": None, } ) ovs2 = OriginVisitStatus.from_dict( { "origin": origin_url, "date": date_now2, "visit": ov1.visit, "status": "ongoing", "metadata": None, "snapshot": data.snapshot["id"], } ) ovs3 = OriginVisitStatus.from_dict( { "origin": origin_url, "date": data.date_visit2, "visit": ov2.visit, "status": "created", "metadata": None, "snapshot": None, } ) ovs4 = OriginVisitStatus.from_dict( { "origin": origin_url, "date": date_now4, "visit": ov2.visit, "status": "ongoing", "metadata": None, "snapshot": data.snapshot["id"], } ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("origin", Origin.from_dict(data.origin)), ("origin_visit", ov1), ("origin_visit_status", ovs1), ("snapshot", Snapshot.from_dict(data.snapshot)), ("origin_visit_status", ovs2), ("origin_visit", ov2), ("origin_visit_status", ovs3), ("origin_visit_status", ovs4), ] for obj in expected_objects: assert obj in actual_objects def test_snapshot_get_random(self, swh_storage): swh_storage.snapshot_add( [data.snapshot, data.empty_snapshot, data.complete_snapshot] ) assert swh_storage.snapshot_get_random() in { data.snapshot["id"], data.empty_snapshot["id"], data.complete_snapshot["id"], } def test_snapshot_missing(self, swh_storage): snap = data.snapshot missing_snap = data.empty_snapshot snapshots = [snap["id"], missing_snap["id"]] swh_storage.snapshot_add([snap]) missing_snapshots = swh_storage.snapshot_missing(snapshots) assert list(missing_snapshots) == [missing_snap["id"]] def test_stat_counters(self, swh_storage): expected_keys = ["content", "directory", "origin", "revision"] # Initially, all counters are 0 swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: assert counters[key] == 0 # Add a content. Only the content counter should increase. swh_storage.content_add([data.cont]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: if key != "content": assert counters[key] == 0 assert counters["content"] == 1 # Add other objects. Check their counter increased as well. 
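        # Note: stat_counters() appears to read materialized counters, hence the
        # refresh_stat_counters() call below before asserting on the new values.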
origin_url = swh_storage.origin_add_one(data.origin2) visit = OriginVisit( origin=origin_url, date=data.date_visit2, type=data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([data.snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=data.snapshot["id"], ) ] ) swh_storage.directory_add([data.dir]) swh_storage.revision_add([data.revision]) swh_storage.release_add([data.release]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert counters["content"] == 1 assert counters["directory"] == 1 assert counters["snapshot"] == 1 assert counters["origin"] == 1 assert counters["origin_visit"] == 1 assert counters["revision"] == 1 assert counters["release"] == 1 assert counters["snapshot"] == 1 if "person" in counters: assert counters["person"] == 3 def test_content_find_ctime(self, swh_storage): cont = data.cont.copy() del cont["data"] ctime = now() cont["ctime"] = ctime swh_storage.content_add_metadata([cont]) actually_present = swh_storage.content_find({"sha1": cont["sha1"]}) # check ctime up to one second dt = actually_present[0]["ctime"] - ctime assert abs(dt.total_seconds()) <= 1 del actually_present[0]["ctime"] assert actually_present[0] == { "sha1": cont["sha1"], "sha256": cont["sha256"], "sha1_git": cont["sha1_git"], "blake2s256": cont["blake2s256"], "length": cont["length"], "status": "visible", } def test_content_find_with_present_content(self, swh_storage): # 1. with something to find cont = data.cont swh_storage.content_add([cont, data.cont2]) actually_present = swh_storage.content_find({"sha1": cont["sha1"]}) assert 1 == len(actually_present) actually_present[0].pop("ctime") assert actually_present[0] == { "sha1": cont["sha1"], "sha256": cont["sha256"], "sha1_git": cont["sha1_git"], "blake2s256": cont["blake2s256"], "length": cont["length"], "status": "visible", } # 2. with something to find actually_present = swh_storage.content_find({"sha1_git": cont["sha1_git"]}) assert 1 == len(actually_present) actually_present[0].pop("ctime") assert actually_present[0] == { "sha1": cont["sha1"], "sha256": cont["sha256"], "sha1_git": cont["sha1_git"], "blake2s256": cont["blake2s256"], "length": cont["length"], "status": "visible", } # 3. with something to find actually_present = swh_storage.content_find({"sha256": cont["sha256"]}) assert 1 == len(actually_present) actually_present[0].pop("ctime") assert actually_present[0] == { "sha1": cont["sha1"], "sha256": cont["sha256"], "sha1_git": cont["sha1_git"], "blake2s256": cont["blake2s256"], "length": cont["length"], "status": "visible", } # 4. with something to find actually_present = swh_storage.content_find( { "sha1": cont["sha1"], "sha1_git": cont["sha1_git"], "sha256": cont["sha256"], "blake2s256": cont["blake2s256"], } ) assert 1 == len(actually_present) actually_present[0].pop("ctime") assert actually_present[0] == { "sha1": cont["sha1"], "sha256": cont["sha256"], "sha1_git": cont["sha1_git"], "blake2s256": cont["blake2s256"], "length": cont["length"], "status": "visible", } def test_content_find_with_non_present_content(self, swh_storage): # 1. with something that does not exist missing_cont = data.missing_cont actually_present = swh_storage.content_find({"sha1": missing_cont["sha1"]}) assert actually_present == [] # 2. 
with something that does not exist actually_present = swh_storage.content_find( {"sha1_git": missing_cont["sha1_git"]} ) assert actually_present == [] # 3. with something that does not exist actually_present = swh_storage.content_find({"sha256": missing_cont["sha256"]}) assert actually_present == [] def test_content_find_with_duplicate_input(self, swh_storage): cont1 = data.cont duplicate_cont = cont1.copy() # Create fake data with colliding sha256 and blake2s256 sha1_array = bytearray(duplicate_cont["sha1"]) sha1_array[0] += 1 duplicate_cont["sha1"] = bytes(sha1_array) sha1git_array = bytearray(duplicate_cont["sha1_git"]) sha1git_array[0] += 1 duplicate_cont["sha1_git"] = bytes(sha1git_array) # Inject the data swh_storage.content_add([cont1, duplicate_cont]) finder = { "blake2s256": duplicate_cont["blake2s256"], "sha256": duplicate_cont["sha256"], } actual_result = list(swh_storage.content_find(finder)) cont1.pop("data") duplicate_cont.pop("data") actual_result[0].pop("ctime") actual_result[1].pop("ctime") expected_result = [cont1, duplicate_cont] for result in expected_result: assert result in actual_result def test_content_find_with_duplicate_sha256(self, swh_storage): cont1 = data.cont duplicate_cont = cont1.copy() # Create fake data with colliding sha256 for hashalgo in ("sha1", "sha1_git", "blake2s256"): value = bytearray(duplicate_cont[hashalgo]) value[0] += 1 duplicate_cont[hashalgo] = bytes(value) swh_storage.content_add([cont1, duplicate_cont]) finder = {"sha256": duplicate_cont["sha256"]} actual_result = list(swh_storage.content_find(finder)) assert len(actual_result) == 2 cont1.pop("data") duplicate_cont.pop("data") actual_result[0].pop("ctime") actual_result[1].pop("ctime") expected_result = [cont1, duplicate_cont] assert expected_result == sorted(actual_result, key=lambda x: x["sha1"]) # Find with both sha256 and blake2s256 finder = { "sha256": duplicate_cont["sha256"], "blake2s256": duplicate_cont["blake2s256"], } actual_result = list(swh_storage.content_find(finder)) assert len(actual_result) == 1 actual_result[0].pop("ctime") expected_result = [duplicate_cont] assert actual_result[0] == duplicate_cont def test_content_find_with_duplicate_blake2s256(self, swh_storage): cont1 = data.cont duplicate_cont = cont1.copy() # Create fake data with colliding sha256 and blake2s256 sha1_array = bytearray(duplicate_cont["sha1"]) sha1_array[0] += 1 duplicate_cont["sha1"] = bytes(sha1_array) sha1git_array = bytearray(duplicate_cont["sha1_git"]) sha1git_array[0] += 1 duplicate_cont["sha1_git"] = bytes(sha1git_array) sha256_array = bytearray(duplicate_cont["sha256"]) sha256_array[0] += 1 duplicate_cont["sha256"] = bytes(sha256_array) swh_storage.content_add([cont1, duplicate_cont]) finder = {"blake2s256": duplicate_cont["blake2s256"]} actual_result = list(swh_storage.content_find(finder)) cont1.pop("data") duplicate_cont.pop("data") actual_result[0].pop("ctime") actual_result[1].pop("ctime") expected_result = [cont1, duplicate_cont] for result in expected_result: assert result in actual_result # Find with both sha256 and blake2s256 finder = { "sha256": duplicate_cont["sha256"], "blake2s256": duplicate_cont["blake2s256"], } actual_result = list(swh_storage.content_find(finder)) actual_result[0].pop("ctime") expected_result = [duplicate_cont] assert expected_result == actual_result def test_content_find_bad_input(self, swh_storage): # 1. with bad input with pytest.raises(StorageArgumentException): swh_storage.content_find({}) # empty is bad # 2. 
with bad input with pytest.raises(StorageArgumentException): swh_storage.content_find({"unknown-sha1": "something"}) # not the right key def test_object_find_by_sha1_git(self, swh_storage): sha1_gits = [b"00000000000000000000"] expected = { b"00000000000000000000": [], } swh_storage.content_add([data.cont]) sha1_gits.append(data.cont["sha1_git"]) expected[data.cont["sha1_git"]] = [ {"sha1_git": data.cont["sha1_git"], "type": "content",} ] swh_storage.directory_add([data.dir]) sha1_gits.append(data.dir["id"]) expected[data.dir["id"]] = [{"sha1_git": data.dir["id"], "type": "directory",}] swh_storage.revision_add([data.revision]) sha1_gits.append(data.revision["id"]) expected[data.revision["id"]] = [ {"sha1_git": data.revision["id"], "type": "revision",} ] swh_storage.release_add([data.release]) sha1_gits.append(data.release["id"]) expected[data.release["id"]] = [ {"sha1_git": data.release["id"], "type": "release",} ] ret = swh_storage.object_find_by_sha1_git(sha1_gits) assert expected == ret def test_metadata_fetcher_add_get(self, swh_storage): actual_fetcher = swh_storage.metadata_fetcher_get( data.metadata_fetcher.name, data.metadata_fetcher.version ) assert actual_fetcher is None # does not exist swh_storage.metadata_fetcher_add([data.metadata_fetcher]) res = swh_storage.metadata_fetcher_get( data.metadata_fetcher.name, data.metadata_fetcher.version ) assert res == data.metadata_fetcher def test_metadata_authority_add_get(self, swh_storage): actual_authority = swh_storage.metadata_authority_get( data.metadata_authority.type, data.metadata_authority.url ) assert actual_authority is None # does not exist swh_storage.metadata_authority_add([data.metadata_authority]) res = swh_storage.metadata_authority_get( data.metadata_authority.type, data.metadata_authority.url ) assert res == data.metadata_authority def test_content_metadata_add(self, swh_storage): content = data.cont fetcher = data.metadata_fetcher authority = data.metadata_authority content_swhid = SWHID( object_type="content", object_id=hash_to_bytes(content["sha1_git"]) ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority ) assert result["next_page_token"] is None assert [data.content_metadata, data.content_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) def test_content_metadata_add_duplicate(self, swh_storage): """Duplicates should be silently updated.""" content = data.cont fetcher = data.metadata_fetcher authority = data.metadata_authority content_swhid = SWHID( object_type="content", object_id=hash_to_bytes(content["sha1_git"]) ) new_content_metadata2 = attr.evolve( data.content_metadata2, format="new-format", metadata=b"new-metadata", ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) swh_storage.object_metadata_add([new_content_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority ) assert result["next_page_token"] is None expected_results1 = (data.content_metadata, new_content_metadata2) expected_results2 = (data.content_metadata, data.content_metadata2) assert tuple(sorted(result["results"], key=lambda x: x.discovery_date,)) in ( expected_results1, # cassandra expected_results2, # postgresql ) def 
test_content_metadata_get(self, swh_storage): authority = data.metadata_authority fetcher = data.metadata_fetcher authority2 = data.metadata_authority2 fetcher2 = data.metadata_fetcher2 content1_swhid = SWHID( object_type="content", object_id=hash_to_bytes(data.cont["sha1_git"]) ) content2_swhid = SWHID( object_type="content", object_id=hash_to_bytes(data.cont2["sha1_git"]) ) content1_metadata1 = data.content_metadata content1_metadata2 = data.content_metadata2 content1_metadata3 = data.content_metadata3 content2_metadata = attr.evolve(data.content_metadata2, id=content2_swhid) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.object_metadata_add( [ content1_metadata1, content1_metadata2, content1_metadata3, content2_metadata, ] ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content1_swhid, authority ) assert result["next_page_token"] is None assert [content1_metadata1, content1_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content1_swhid, authority2 ) assert result["next_page_token"] is None assert [content1_metadata3] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content2_swhid, authority ) assert result["next_page_token"] is None assert [content2_metadata] == list(result["results"],) def test_content_metadata_get_after(self, swh_storage): content = data.cont fetcher = data.metadata_fetcher authority = data.metadata_authority content_swhid = SWHID( object_type="content", object_id=hash_to_bytes(content["sha1_git"]) ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, after=data.content_metadata.discovery_date - timedelta(seconds=1), ) assert result["next_page_token"] is None assert [data.content_metadata, data.content_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, after=data.content_metadata.discovery_date, ) assert result["next_page_token"] is None assert [data.content_metadata2] == result["results"] result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, after=data.content_metadata2.discovery_date, ) assert result["next_page_token"] is None assert [] == result["results"] def test_content_metadata_get_paginate(self, swh_storage): content = data.cont fetcher = data.metadata_fetcher authority = data.metadata_authority content_swhid = SWHID( object_type="content", object_id=hash_to_bytes(content["sha1_git"]) ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1 ) assert result["next_page_token"] is not None assert [data.content_metadata] == result["results"] result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1, page_token=result["next_page_token"], ) assert 
result["next_page_token"] is None assert [data.content_metadata2] == result["results"] def test_content_metadata_get_paginate_same_date(self, swh_storage): content = data.cont fetcher1 = data.metadata_fetcher fetcher2 = data.metadata_fetcher2 authority = data.metadata_authority content_swhid = SWHID( object_type="content", object_id=hash_to_bytes(content["sha1_git"]) ) swh_storage.metadata_fetcher_add([fetcher1, fetcher2]) swh_storage.metadata_authority_add([authority]) content_metadata2 = attr.evolve( data.content_metadata2, discovery_date=data.content_metadata2.discovery_date, fetcher=attr.evolve(fetcher2, metadata=None), ) swh_storage.object_metadata_add([data.content_metadata, content_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1 ) assert result["next_page_token"] is not None assert [data.content_metadata] == result["results"] result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1, page_token=result["next_page_token"], ) assert result["next_page_token"] is None assert [content_metadata2] == result["results"] def test_content_metadata_get__invalid_id(self, swh_storage): fetcher = data.metadata_fetcher authority = data.metadata_authority swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) with pytest.raises(StorageArgumentException, match="SWHID"): swh_storage.object_metadata_get( MetadataTargetType.CONTENT, data.origin["url"], authority ) def test_origin_metadata_add(self, swh_storage): origin = data.origin fetcher = data.metadata_fetcher authority = data.metadata_authority assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin["url"], authority ) assert result["next_page_token"] is None assert [data.origin_metadata, data.origin_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date) ) def test_origin_metadata_add_duplicate(self, swh_storage): """Duplicates should be silently updated.""" origin = data.origin fetcher = data.metadata_fetcher authority = data.metadata_authority assert swh_storage.origin_add([origin]) == {"origin:add": 1} new_origin_metadata2 = attr.evolve( data.origin_metadata2, format="new-format", metadata=b"new-metadata", ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) swh_storage.object_metadata_add([new_origin_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin["url"], authority ) assert result["next_page_token"] is None # which of the two behavior happens is backend-specific. 
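        # Note: as the expected tuples below encode, the cassandra backend
        # apparently ends up with the re-added (updated) metadata, while the
        # postgresql backend keeps the row that was inserted first.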
expected_results1 = (data.origin_metadata, new_origin_metadata2) expected_results2 = (data.origin_metadata, data.origin_metadata2) assert tuple(sorted(result["results"], key=lambda x: x.discovery_date,)) in ( expected_results1, # cassandra expected_results2, # postgresql ) def test_origin_metadata_get(self, swh_storage): authority = data.metadata_authority fetcher = data.metadata_fetcher authority2 = data.metadata_authority2 fetcher2 = data.metadata_fetcher2 origin_url1 = data.origin["url"] origin_url2 = data.origin2["url"] assert swh_storage.origin_add([data.origin, data.origin2]) == {"origin:add": 2} origin1_metadata1 = data.origin_metadata origin1_metadata2 = data.origin_metadata2 origin1_metadata3 = data.origin_metadata3 origin2_metadata = attr.evolve(data.origin_metadata2, id=origin_url2) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.object_metadata_add( [origin1_metadata1, origin1_metadata2, origin1_metadata3, origin2_metadata] ) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin_url1, authority ) assert result["next_page_token"] is None assert [origin1_metadata1, origin1_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin_url1, authority2 ) assert result["next_page_token"] is None assert [origin1_metadata3] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin_url2, authority ) assert result["next_page_token"] is None assert [origin2_metadata] == list(result["results"],) def test_origin_metadata_get_after(self, swh_storage): origin = data.origin fetcher = data.metadata_fetcher authority = data.metadata_authority assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin["url"], authority, after=data.origin_metadata.discovery_date - timedelta(seconds=1), ) assert result["next_page_token"] is None assert [data.origin_metadata, data.origin_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin["url"], authority, after=data.origin_metadata.discovery_date, ) assert result["next_page_token"] is None assert [data.origin_metadata2] == result["results"] result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin["url"], authority, after=data.origin_metadata2.discovery_date, ) assert result["next_page_token"] is None assert [] == result["results"] def test_origin_metadata_get_paginate(self, swh_storage): origin = data.origin fetcher = data.metadata_fetcher authority = data.metadata_authority assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin["url"], authority ) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, origin["url"], authority, limit=1 ) assert result["next_page_token"] is not None assert [data.origin_metadata] == result["results"] result = 
swh_storage.object_metadata_get(
            MetadataTargetType.ORIGIN,
            origin["url"],
            authority,
            limit=1,
            page_token=result["next_page_token"],
        )
        assert result["next_page_token"] is None
        assert [data.origin_metadata2] == result["results"]

    def test_origin_metadata_get_paginate_same_date(self, swh_storage):
        origin = data.origin
        fetcher1 = data.metadata_fetcher
        fetcher2 = data.metadata_fetcher2
        authority = data.metadata_authority
        assert swh_storage.origin_add([origin]) == {"origin:add": 1}

        swh_storage.metadata_fetcher_add([fetcher1])
        swh_storage.metadata_fetcher_add([fetcher2])
        swh_storage.metadata_authority_add([authority])

        origin_metadata2 = attr.evolve(
            data.origin_metadata2,
            discovery_date=data.origin_metadata2.discovery_date,
            fetcher=attr.evolve(fetcher2, metadata=None),
        )

        swh_storage.object_metadata_add([data.origin_metadata, origin_metadata2])

        result = swh_storage.object_metadata_get(
            MetadataTargetType.ORIGIN, origin["url"], authority, limit=1
        )
        assert result["next_page_token"] is not None
        assert [data.origin_metadata] == result["results"]

        result = swh_storage.object_metadata_get(
            MetadataTargetType.ORIGIN,
            origin["url"],
            authority,
            limit=1,
            page_token=result["next_page_token"],
        )
        assert result["next_page_token"] is None
        assert [origin_metadata2] == result["results"]

    def test_origin_metadata_add_missing_authority(self, swh_storage):
        origin = data.origin
        fetcher = data.metadata_fetcher
        assert swh_storage.origin_add([origin]) == {"origin:add": 1}

        swh_storage.metadata_fetcher_add([fetcher])

        with pytest.raises(StorageArgumentException, match="authority"):
            swh_storage.object_metadata_add(
                [data.origin_metadata, data.origin_metadata2]
            )

    def test_origin_metadata_add_missing_fetcher(self, swh_storage):
        origin = data.origin
        authority = data.metadata_authority
        assert swh_storage.origin_add([origin]) == {"origin:add": 1}

        swh_storage.metadata_authority_add([authority])

        with pytest.raises(StorageArgumentException, match="fetcher"):
            swh_storage.object_metadata_add(
                [data.origin_metadata, data.origin_metadata2]
            )

    def test_origin_metadata_get__invalid_id_type(self, swh_storage):
        origin = data.origin
        fetcher = data.metadata_fetcher
        authority = data.metadata_authority
        assert swh_storage.origin_add([origin]) == {"origin:add": 1}

        swh_storage.metadata_fetcher_add([fetcher])
        swh_storage.metadata_authority_add([authority])

        swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2])

        with pytest.raises(StorageArgumentException, match="SWHID"):
            swh_storage.object_metadata_get(
                MetadataTargetType.ORIGIN, data.content_metadata.id, authority,
            )


class TestStorageGeneratedData:
    def test_generate_content_get(self, swh_storage, swh_contents):
-        contents_with_data = [c for c in swh_contents if c["status"] != "absent"]
+        contents_with_data = [c.to_dict() for c in swh_contents if c.status != "absent"]
        # input the list of sha1s we want from storage
        get_sha1s = [c["sha1"] for c in contents_with_data]

        # retrieve contents
        actual_contents = list(swh_storage.content_get(get_sha1s))
        assert None not in actual_contents
        assert_contents_ok(contents_with_data, actual_contents)

    def test_generate_content_get_metadata(self, swh_storage, swh_contents):
        # input the list of sha1s we want from storage
-        expected_contents = [c for c in swh_contents if c["status"] != "absent"]
+        expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"]
        get_sha1s = [c["sha1"] for c in expected_contents]

        # retrieve contents
        meta_contents = swh_storage.content_get_metadata(get_sha1s)

        assert len(list(meta_contents)) == len(get_sha1s)

        actual_contents = []
        for contents in meta_contents.values():
            actual_contents.extend(contents)

        keys_to_check = {"length", "status", "sha1", "sha1_git", "sha256", "blake2s256"}

        assert_contents_ok(
            expected_contents, actual_contents, keys_to_check=keys_to_check
        )

    def test_generate_content_get_range(self, swh_storage, swh_contents):
        """content_get_range returns complete range"""
-        present_contents = [c for c in swh_contents if c["status"] != "absent"]
+        present_contents = [c.to_dict() for c in swh_contents if c.status != "absent"]

-        get_sha1s = sorted([c["sha1"] for c in swh_contents if c["status"] != "absent"])
+        get_sha1s = sorted([c.sha1 for c in swh_contents if c.status != "absent"])
        start = get_sha1s[2]
        end = get_sha1s[-2]
        actual_result = swh_storage.content_get_range(start, end)

        assert actual_result["next"] is None

        actual_contents = actual_result["contents"]
        expected_contents = [c for c in present_contents if start <= c["sha1"] <= end]
        if expected_contents:
            assert_contents_ok(expected_contents, actual_contents, ["sha1"])
        else:
            assert actual_contents == []

    def test_generate_content_get_range_full(self, swh_storage, swh_contents):
        """content_get_range for a full range returns all available contents"""
-        present_contents = [c for c in swh_contents if c["status"] != "absent"]
+        present_contents = [c.to_dict() for c in swh_contents if c.status != "absent"]

        start = b"0" * 40
        end = b"f" * 40
        actual_result = swh_storage.content_get_range(start, end)

        assert actual_result["next"] is None

        actual_contents = actual_result["contents"]
        expected_contents = [c for c in present_contents if start <= c["sha1"] <= end]
        if expected_contents:
            assert_contents_ok(expected_contents, actual_contents, ["sha1"])
        else:
            assert actual_contents == []

    def test_generate_content_get_range_empty(self, swh_storage, swh_contents):
        """content_get_range for an empty range returns nothing"""
        start = b"0" * 40
        end = b"f" * 40
        actual_result = swh_storage.content_get_range(end, start)

        assert actual_result["next"] is None
        assert len(actual_result["contents"]) == 0

    def test_generate_content_get_range_limit_none(self, swh_storage):
        """content_get_range call with wrong limit input should fail"""
        with pytest.raises(StorageArgumentException) as e:
            swh_storage.content_get_range(start=None, end=None, limit=None)

        assert e.value.args == ("limit should not be None",)

    def test_generate_content_get_range_no_limit(self, swh_storage, swh_contents):
        """content_get_range returns contents within range provided"""
        # input the list of sha1s we want from storage
-        get_sha1s = sorted([c["sha1"] for c in swh_contents if c["status"] != "absent"])
+        get_sha1s = sorted([c.sha1 for c in swh_contents if c.status != "absent"])
        start = get_sha1s[0]
        end = get_sha1s[-1]

        # retrieve contents
        actual_result = swh_storage.content_get_range(start, end)

        actual_contents = actual_result["contents"]
        assert actual_result["next"] is None
        assert len(actual_contents) == len(get_sha1s)

-        expected_contents = [c for c in swh_contents if c["status"] != "absent"]
+        expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"]
        assert_contents_ok(expected_contents, actual_contents, ["sha1"])

    def test_generate_content_get_range_limit(self, swh_storage, swh_contents):
        """content_get_range paginates results if limit exceeded"""
-        contents_map = {c["sha1"]: c for c in swh_contents}
+        contents_map = {c.sha1: c.to_dict() for c in swh_contents}

        # input the list of sha1s we want from storage
-        get_sha1s = sorted([c["sha1"] for c in swh_contents if c["status"] != "absent"])
+        get_sha1s = sorted([c.sha1 for c in swh_contents if c.status != "absent"])
        start = get_sha1s[0]
        end = get_sha1s[-1]

        # retrieve contents limited to n-1 results
        limited_results = len(get_sha1s) - 1
        actual_result = swh_storage.content_get_range(start, end, limit=limited_results)

        actual_contents = actual_result["contents"]
        assert actual_result["next"] == get_sha1s[-1]
        assert len(actual_contents) == limited_results

        expected_contents = [contents_map[sha1] for sha1 in get_sha1s[:-1]]
        assert_contents_ok(expected_contents, actual_contents, ["sha1"])

        # retrieve next part
        actual_results2 = swh_storage.content_get_range(start=end, end=end)
        assert actual_results2["next"] is None
        actual_contents2 = actual_results2["contents"]
        assert len(actual_contents2) == 1

        assert_contents_ok([contents_map[get_sha1s[-1]]], actual_contents2, ["sha1"])

    def test_origin_get_range_from_zero(self, swh_storage, swh_origins):
        actual_origins = list(
            swh_storage.origin_get_range(origin_from=0, origin_count=0)
        )
        assert len(actual_origins) == 0

        actual_origins = list(
            swh_storage.origin_get_range(origin_from=0, origin_count=1)
        )
        assert len(actual_origins) == 1
        assert actual_origins[0]["id"] == 1
        assert actual_origins[0]["url"] == swh_origins[0]["url"]

    @pytest.mark.parametrize(
        "origin_from,origin_count",
        [(1, 1), (1, 10), (1, 20), (1, 101), (11, 0), (11, 10), (91, 11)],
    )
    def test_origin_get_range(
        self, swh_storage, swh_origins, origin_from, origin_count
    ):
        actual_origins = list(
            swh_storage.origin_get_range(
                origin_from=origin_from, origin_count=origin_count
            )
        )

        origins_with_id = list(enumerate(swh_origins, start=1))
        expected_origins = [
            {"url": origin["url"], "id": origin_id,}
            for (origin_id, origin) in origins_with_id[
                origin_from - 1 : origin_from + origin_count - 1
            ]
        ]

        assert actual_origins == expected_origins

    @pytest.mark.parametrize("limit", [1, 7, 10, 100, 1000])
    def test_origin_list(self, swh_storage, swh_origins, limit):
        returned_origins = []

        page_token = None
        i = 0
        while True:
            result = swh_storage.origin_list(page_token=page_token, limit=limit)
            assert len(result["origins"]) <= limit

            returned_origins.extend(origin["url"] for origin in result["origins"])

            i += 1
            page_token = result.get("next_page_token")

            if page_token is None:
                assert i * limit >= len(swh_origins)
                break
            else:
                assert len(result["origins"]) == limit

        expected_origins = [origin["url"] for origin in swh_origins]
        assert sorted(returned_origins) == sorted(expected_origins)

    ORIGINS = [
        "https://github.com/user1/repo1",
        "https://github.com/user2/repo1",
        "https://github.com/user3/repo1",
        "https://gitlab.com/user1/repo1",
        "https://gitlab.com/user2/repo1",
        "https://forge.softwareheritage.org/source/repo1",
    ]

    def test_origin_count(self, swh_storage):
        swh_storage.origin_add([{"url": url} for url in self.ORIGINS])

        assert swh_storage.origin_count("github") == 3
        assert swh_storage.origin_count("gitlab") == 2
        assert swh_storage.origin_count(".*user.*", regexp=True) == 5
        assert swh_storage.origin_count(".*user.*", regexp=False) == 0
        assert swh_storage.origin_count(".*user1.*", regexp=True) == 2
        assert swh_storage.origin_count(".*user1.*", regexp=False) == 0

    def test_origin_count_with_visit_no_visits(self, swh_storage):
        swh_storage.origin_add([{"url": url} for url in self.ORIGINS])

        # none of them have visits, so with_visit=True => 0
        assert swh_storage.origin_count("github", with_visit=True) == 0
        assert swh_storage.origin_count("gitlab", with_visit=True) == 0
        assert swh_storage.origin_count(".*user.*", regexp=True, with_visit=True) == 0
        assert
swh_storage.origin_count(".*user.*", regexp=False, with_visit=True) == 0 assert swh_storage.origin_count(".*user1.*", regexp=True, with_visit=True) == 0 assert swh_storage.origin_count(".*user1.*", regexp=False, with_visit=True) == 0 def test_origin_count_with_visit_with_visits_no_snapshot(self, swh_storage): swh_storage.origin_add([{"url": url} for url in self.ORIGINS]) origin_url = "https://github.com/user1/repo1" visit = OriginVisit(origin=origin_url, date=now(), type="git",) swh_storage.origin_visit_add([visit]) assert swh_storage.origin_count("github", with_visit=False) == 3 # it has a visit, but no snapshot, so with_visit=True => 0 assert swh_storage.origin_count("github", with_visit=True) == 0 assert swh_storage.origin_count("gitlab", with_visit=False) == 2 # these gitlab origins have no visit assert swh_storage.origin_count("gitlab", with_visit=True) == 0 assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=False) == 1 ) assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=True) == 0 ) assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 0 def test_origin_count_with_visit_with_visits_and_snapshot(self, swh_storage): swh_storage.origin_add([{"url": url} for url in self.ORIGINS]) swh_storage.snapshot_add([data.snapshot]) origin_url = "https://github.com/user1/repo1" visit = OriginVisit(origin=origin_url, date=now(), type="git",) visit = swh_storage.origin_visit_add([visit])[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=visit.visit, date=now(), status="ongoing", snapshot=data.snapshot["id"], ) ] ) assert swh_storage.origin_count("github", with_visit=False) == 3 # github/user1 has a visit and a snapshot, so with_visit=True => 1 assert swh_storage.origin_count("github", with_visit=True) == 1 assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=False) == 1 ) assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=True) == 1 ) assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 1 @settings(suppress_health_check=[HealthCheck.too_slow]) @given(strategies.lists(objects(), max_size=2)) def test_add_arbitrary(self, swh_storage, objects): for (obj_type, obj) in objects: obj = obj.to_dict() if obj_type == "origin_visit": origin_url = obj.pop("origin") swh_storage.origin_add_one({"url": origin_url}) if "visit" in obj: del obj["visit"] visit = OriginVisit( origin=origin_url, date=obj["date"], type=obj["type"], ) swh_storage.origin_visit_add([visit]) else: if obj_type == "content" and obj["status"] == "absent": obj_type = "skipped_content" method = getattr(swh_storage, obj_type + "_add") try: method([obj]) except HashCollision: pass @pytest.mark.db class TestLocalStorage: """Test the local storage""" # This test is only relevant on the local storage, with an actual # objstorage raising an exception def test_content_add_objstorage_exception(self, swh_storage): swh_storage.objstorage.content_add = Mock( side_effect=Exception("mocked broken objstorage") ) with pytest.raises(Exception) as e: swh_storage.content_add([data.cont]) assert e.value.args == ("mocked broken objstorage",) missing = list(swh_storage.content_missing([data.cont])) assert missing == [data.cont["sha1"]] @pytest.mark.db class TestStorageRaceConditions: @pytest.mark.xfail def test_content_add_race(self, swh_storage): results = queue.Queue() def thread(): try: with db_transaction(swh_storage) as (db, cur): ret = swh_storage.content_add([data.cont], db=db, cur=cur) 
results.put((threading.get_ident(), "data", ret)) except Exception as e: results.put((threading.get_ident(), "exc", e)) t1 = threading.Thread(target=thread) t2 = threading.Thread(target=thread) t1.start() # this avoids the race condition # import time # time.sleep(1) t2.start() t1.join() t2.join() r1 = results.get(block=False) r2 = results.get(block=False) with pytest.raises(queue.Empty): results.get(block=False) assert r1[0] != r2[0] assert r1[1] == "data", "Got exception %r in Thread%s" % (r1[2], r1[0]) assert r2[1] == "data", "Got exception %r in Thread%s" % (r2[2], r2[0]) @pytest.mark.db class TestPgStorage: """This class is dedicated for the rare case where the schema needs to be altered dynamically. Otherwise, the tests could be blocking when ran altogether. """ def test_content_update_with_new_cols(self, swh_storage): swh_storage.journal_writer.journal = None # TODO, not supported with db_transaction(swh_storage) as (_, cur): cur.execute( """alter table content add column test text default null, add column test2 text default null""" ) cont = copy.deepcopy(data.cont2) swh_storage.content_add([cont]) cont["test"] = "value-1" cont["test2"] = "value-2" swh_storage.content_update([cont], keys=["test", "test2"]) with db_transaction(swh_storage) as (_, cur): cur.execute( """SELECT sha1, sha1_git, sha256, length, status, test, test2 FROM content WHERE sha1 = %s""", (cont["sha1"],), ) datum = cur.fetchone() assert datum == ( cont["sha1"], cont["sha1_git"], cont["sha256"], cont["length"], "visible", cont["test"], cont["test2"], ) with db_transaction(swh_storage) as (_, cur): cur.execute( """alter table content drop column test, drop column test2""" ) def test_content_add_db(self, swh_storage): cont = data.cont actual_result = swh_storage.content_add([cont]) assert actual_result == { "content:add": 1, "content:add:bytes": cont["length"], } if hasattr(swh_storage, "objstorage"): assert cont["sha1"] in swh_storage.objstorage.objstorage with db_transaction(swh_storage) as (_, cur): cur.execute( "SELECT sha1, sha1_git, sha256, length, status" " FROM content WHERE sha1 = %s", (cont["sha1"],), ) datum = cur.fetchone() assert datum == ( cont["sha1"], cont["sha1_git"], cont["sha256"], cont["length"], "visible", ) expected_cont = cont.copy() del expected_cont["data"] contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: obj_d = obj.to_dict() del obj_d["ctime"] assert obj_d == expected_cont def test_content_add_metadata_db(self, swh_storage): cont = data.cont del cont["data"] cont["ctime"] = now() actual_result = swh_storage.content_add_metadata([cont]) assert actual_result == { "content:add": 1, } if hasattr(swh_storage, "objstorage"): assert cont["sha1"] not in swh_storage.objstorage.objstorage with db_transaction(swh_storage) as (_, cur): cur.execute( "SELECT sha1, sha1_git, sha256, length, status" " FROM content WHERE sha1 = %s", (cont["sha1"],), ) datum = cur.fetchone() assert datum == ( cont["sha1"], cont["sha1_git"], cont["sha256"], cont["length"], "visible", ) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: obj_d = obj.to_dict() assert obj_d == cont def test_skipped_content_add_db(self, swh_storage): cont = data.skipped_cont cont2 = data.skipped_cont2 cont2["blake2s256"] = None actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= 
actual_result.pop("skipped_content:add") <= 3
        assert actual_result == {}

        with db_transaction(swh_storage) as (_, cur):
            cur.execute(
                "SELECT sha1, sha1_git, sha256, blake2s256, "
                "length, status, reason "
                "FROM skipped_content ORDER BY sha1_git"
            )

            dbdata = cur.fetchall()

        assert len(dbdata) == 2
        assert dbdata[0] == (
            cont["sha1"],
            cont["sha1_git"],
            cont["sha256"],
            cont["blake2s256"],
            cont["length"],
            "absent",
            "Content too long",
        )

        assert dbdata[1] == (
            cont2["sha1"],
            cont2["sha1_git"],
            cont2["sha256"],
            cont2["blake2s256"],
            cont2["length"],
            "absent",
            "Content too long",
        )

    def test_clear_buffers(self, swh_storage):
        """Calling clear_buffers on real storage does nothing
        """
        assert swh_storage.clear_buffers() is None

    def test_flush(self, swh_storage):
        """Calling flush on real storage does nothing
        """
        assert swh_storage.flush() == {}
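

# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the test suite): the origin/visit/status
# flow that most tests above exercise, written against the same storage API.
# The backend name "memory" and the example URL are assumptions; adapt them
# to your own configuration.
# ---------------------------------------------------------------------------
def _example_visit_flow():
    from datetime import datetime, timezone

    from swh.model.model import Origin, OriginVisit, OriginVisitStatus
    from swh.storage import get_storage

    storage = get_storage("memory")  # assumed: in-memory backend, no extra arguments

    origin = Origin(url="https://example.org/repo.git")  # hypothetical origin
    storage.origin_add_one(origin)

    # Register a visit; the backend assigns its visit id.
    visit = storage.origin_visit_add(
        [OriginVisit(origin=origin.url, date=datetime.now(timezone.utc), type="git")]
    )[0]

    # Record the outcome of the visit (no snapshot in this minimal sketch).
    storage.origin_visit_status_add(
        [
            OriginVisitStatus(
                origin=origin.url,
                visit=visit.visit,
                date=datetime.now(timezone.utc),
                status="full",
                snapshot=None,
            )
        ]
    )

    # Read back the latest completed visit, as the tests above do.
    return storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"])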