diff --git a/PKG-INFO b/PKG-INFO index 055e0312..6b060bf7 100644 --- a/PKG-INFO +++ b/PKG-INFO @@ -1,218 +1,218 @@ Metadata-Version: 2.1 Name: swh.storage -Version: 0.11.1 +Version: 0.11.2 Summary: Software Heritage storage manager Home-page: https://forge.softwareheritage.org/diffusion/DSTO/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/ Description: swh-storage =========== Abstraction layer over the archive, allowing access to all stored source code artifacts as well as their metadata. See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details. ## Quick start ### Dependencies Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a Cassandra server. #### Debian-like host ``` $ sudo apt install libpq-dev postgresql-11 cassandra ``` #### Non Debian-like host The tests expect the path to `cassandra` either to be unspecified, in which case it is looked up at `/usr/sbin/cassandra`, or to be specified through the environment variable `SWH_CASSANDRA_BIN`. Optionally, you can avoid running the Cassandra tests: ``` (swh) :~/swh-storage$ tox -- -m 'not cassandra' ``` ### Installation It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for more details on how to set up a working environment. You can install the package directly from [pypi](https://pypi.org/p/swh.storage): ``` (swh) :~$ pip install swh.storage [...] ``` Or from sources: ``` (swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git [...] (swh) :~$ cd swh-storage (swh) :~/swh-storage$ pip install . [...] ``` Then you can check it's properly installed: ``` (swh) :~$ swh storage --help Usage: swh storage [OPTIONS] COMMAND [ARGS]... Software Heritage Storage tools. Options: -h, --help Show this message and exit. Commands: rpc-serve Software Heritage Storage RPC server. ``` ## Tests The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/). ``` (swh) :~$ pip install tox ``` ### tox From the sources directory, simply use tox: ``` (swh) :~/swh-storage$ tox [...] ========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ========== _______________________________ summary ________________________________ flake8: commands succeeded py3: commands succeeded congratulations :) ``` ## Development The storage server can be locally started. It requires a configuration file and a running Postgresql database. ### Sample configuration A typical configuration `storage.yml` file is: ``` storage: cls: local args: db: "dbname=softwareheritage-dev user= password=" objstorage: cls: pathslicing args: root: /tmp/swh-storage/ slicing: 0:2/2:4/4:6 ``` This configuration uses: - a local storage instance whose db connection is to the local `softwareheritage-dev` instance, - a local objstorage instance whose: - `root` path is /tmp/swh-storage, - slicing scheme is `0:2/2:4/4:6`.
This means that the content identifier (sha1) is used to build the on-disk path: the first level directory is named after the first 2 hex characters, the second level after the next 2 hex characters, and the third level after the next 2 hex characters; the file holding the raw content is named after the complete hash. For example: 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c Note that the `root` path should exist on disk before starting the server. ### Starting the storage server If the Python package has been properly installed (e.g. in a virtualenv), you should be able to use the command: ``` (swh) :~/swh-storage$ swh storage rpc-serve storage.yml ``` This runs a local swh-storage API on port 5002. ``` (swh) :~/swh-storage$ curl http://127.0.0.1:5002 Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

``` ### And then what? In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc.), you can define a remote storage with this snippet of YAML configuration. ``` storage: cls: remote args: url: http://localhost:5002/ ``` You could directly define a local storage with the following snippet: ``` storage: cls: local args: db: service=swh-dev objstorage: cls: pathslicing args: root: /home/storage/swh-storage/ slicing: 0:2/2:4/4:6 ``` Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing Provides-Extra: schemata Provides-Extra: journal diff --git a/sql/upgrades/157.sql b/sql/upgrades/157.sql index 6245e5dd..5e79fe18 100644 --- a/sql/upgrades/157.sql +++ b/sql/upgrades/157.sql @@ -1,68 +1,66 @@ -- SWH DB schema upgrade -- from_version: 156 -- to_version: 157 -- description: Add extrinsic artifact metadata -- latest schema version insert into dbversion(version, release, description) values(157, now(), 'Work In Progress'); create domain swhid as text check (value ~ '^swh:[0-9]+:.*'); -- Extrinsic metadata on a DAG objects and origins. create table object_metadata ( type text not null, id text not null, -- metadata source authority_id bigint not null, fetcher_id bigint not null, discovery_date timestamptz not null, -- metadata itself format text not null, metadata bytea not null, -- context origin text, visit bigint, snapshot swhid, release swhid, revision swhid, path bytea, directory swhid ); comment on table object_metadata is 'keeps all metadata found concerning an object'; comment on column object_metadata.type is 'the type of object (content/directory/revision/release/snapshot/origin) the metadata is on'; comment on column object_metadata.id is 'the SWHID or origin URL for which the metadata was found'; comment on column object_metadata.discovery_date is 'the date of retrieval'; comment on column object_metadata.authority_id is 'the metadata provider: github, openhub, deposit, etc.'; comment on column object_metadata.fetcher_id is 'the tool used for extracting metadata: loaders, crawlers, etc.'; comment on column object_metadata.format is 'name of the format of metadata, used by readers to interpret it.'; comment on column object_metadata.metadata is 'original metadata in opaque format'; -- migrate data from origin_metadata -insert into object_metadata(type, id, authority_id, fetcher_id, discovery_date, format, metadata, - origin) -select 'origin', id, authority_id, fetcher_id, discovery_date, format, metadata, - (select url from origin o where o.id = om.origin_id) -from origin_metadata om; +insert into object_metadata(type, id, authority_id, fetcher_id, discovery_date, format, metadata) + select 'origin', (select url from origin o where o.id = om.origin_id), authority_id, fetcher_id, discovery_date, format, metadata + from origin_metadata om; create unique index object_metadata_content_authority_date_fetcher on object_metadata(id, authority_id, discovery_date, fetcher_id); alter table object_metadata add constraint object_metadata_authority_fkey foreign key (authority_id) references metadata_authority(id) not valid; alter table object_metadata validate constraint
object_metadata_authority_fkey; alter table object_metadata add constraint object_metadata_fetcher_fkey foreign key (fetcher_id) references metadata_fetcher(id) not valid; alter table object_metadata validate constraint object_metadata_fetcher_fkey; diff --git a/swh.storage.egg-info/PKG-INFO b/swh.storage.egg-info/PKG-INFO index 055e0312..6b060bf7 100644 --- a/swh.storage.egg-info/PKG-INFO +++ b/swh.storage.egg-info/PKG-INFO @@ -1,218 +1,218 @@ Metadata-Version: 2.1 Name: swh.storage -Version: 0.11.1 +Version: 0.11.2 Summary: Software Heritage storage manager Home-page: https://forge.softwareheritage.org/diffusion/DSTO/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/ Description: swh-storage =========== Abstraction layer over the archive, allowing to access all stored source code artifacts as well as their metadata. See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details. ## Quick start ### Dependencies Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a cassandra server. #### Debian-like host ``` $ sudo apt install libpq-dev postgresql-11 cassandra ``` #### Non Debian-like host The tests expects the path to `cassandra` to either be unspecified, it is then looked up at `/usr/sbin/cassandra`, either specified through the environment variable `SWH_CASSANDRA_BIN`. Optionally, you can avoid running the cassandra tests. ``` (swh) :~/swh-storage$ tox -- -m 'not cassandra' ``` ### Installation It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for a more details on how to setup a working environment. You can install the package directly from [pypi](https://pypi.org/p/swh.storage): ``` (swh) :~$ pip install swh.storage [...] ``` Or from sources: ``` (swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git [...] (swh) :~$ cd swh-storage (swh) :~/swh-storage$ pip install . [...] ``` Then you can check it's properly installed: ``` (swh) :~$ swh storage --help Usage: swh storage [OPTIONS] COMMAND [ARGS]... Software Heritage Storage tools. Options: -h, --help Show this message and exit. Commands: rpc-serve Software Heritage Storage RPC server. ``` ## Tests The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/). ``` (swh) :~$ pip install tox ``` ### tox From the sources directory, simply use tox: ``` (swh) :~/swh-storage$ tox [...] ========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ========== _______________________________ summary ________________________________ flake8: commands succeeded py3: commands succeeded congratulations :) ``` ## Development The storage server can be locally started. It requires a configuration file and a running Postgresql database. 
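Besides going through the RPC server, the storage API can also be exercised directly from Python. Below is a minimal sketch (not part of this diff) based on the `get_storage()` factory from `swh/storage/__init__.py`; it assumes the `memory` backend so that no Postgresql or Cassandra instance is needed, and the origin URL is made up.

```python
# Minimal sketch: instantiate a storage backend programmatically.
# Swap cls="memory" for the "local" or "remote" configuration shown
# below to target a real database or a running RPC server.
from swh.model.model import Origin
from swh.storage import get_storage

storage = get_storage(cls="memory")

# Sanity check, then write an origin and read it back.
assert storage.check_config(check_write=True)
storage.origin_add([Origin(url="https://example.org/repo.git")])
print(storage.origin_get({"url": "https://example.org/repo.git"}))
```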
### Sample configuration A typical configuration `storage.yml` file is: ``` storage: cls: local args: db: "dbname=softwareheritage-dev user= password=" objstorage: cls: pathslicing args: root: /tmp/swh-storage/ slicing: 0:2/2:4/4:6 ``` This configuration uses: - a local storage instance whose db connection is to the local `softwareheritage-dev` instance, - a local objstorage instance whose: - `root` path is /tmp/swh-storage, - slicing scheme is `0:2/2:4/4:6`. This means that the content identifier (sha1) is used to build the on-disk path: the first level directory is named after the first 2 hex characters, the second level after the next 2 hex characters, and the third level after the next 2 hex characters; the file holding the raw content is named after the complete hash. For example: 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c Note that the `root` path should exist on disk before starting the server. ### Starting the storage server If the Python package has been properly installed (e.g. in a virtualenv), you should be able to use the command: ``` (swh) :~/swh-storage$ swh storage rpc-serve storage.yml ``` This runs a local swh-storage API on port 5002. ``` (swh) :~/swh-storage$ curl http://127.0.0.1:5002 Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

``` ### And then what? In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc...), you can define a remote storage with this snippet of yaml configuration. ``` storage: cls: remote args: url: http://localhost:5002/ ``` You could directly define a local storage with the following snippet: ``` storage: cls: local args: db: service=swh-dev objstorage: cls: pathslicing args: root: /home/storage/swh-storage/ slicing: 0:2/2:4/4:6 ``` Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing Provides-Extra: schemata Provides-Extra: journal diff --git a/swh.storage.egg-info/SOURCES.txt b/swh.storage.egg-info/SOURCES.txt index 6edbaf6c..e3628cc7 100644 --- a/swh.storage.egg-info/SOURCES.txt +++ b/swh.storage.egg-info/SOURCES.txt @@ -1,292 +1,292 @@ .gitignore .pre-commit-config.yaml AUTHORS CODE_OF_CONDUCT.md CONTRIBUTORS LICENSE MANIFEST.in Makefile Makefile.local README.md conftest.py mypy.ini pyproject.toml pytest.ini requirements-swh-journal.txt requirements-swh.txt requirements-test.txt requirements.txt setup.cfg setup.py tox.ini version.txt ./requirements-swh-journal.txt ./requirements-swh.txt ./requirements-test.txt ./requirements.txt bin/swh-storage-add-dir docs/.gitignore docs/Makefile docs/Makefile.local docs/archive-copies.rst docs/conf.py docs/extrinsic-metadata-specification.rst docs/index.rst docs/sql-storage.rst docs/_static/.placeholder docs/_templates/.placeholder docs/images/.gitignore docs/images/Makefile docs/images/swh-archive-copies.dia sql/.gitignore sql/Makefile sql/TODO sql/clusters.dot sql/bin/db-upgrade sql/bin/dot_add_content sql/doc/json sql/doc/json/.gitignore sql/doc/json/Makefile sql/doc/json/entity.lister_metadata.schema.json sql/doc/json/entity.metadata.schema.json sql/doc/json/entity_history.lister_metadata.schema.json sql/doc/json/entity_history.metadata.schema.json sql/doc/json/fetch_history.result.schema.json sql/doc/json/list_history.result.schema.json sql/doc/json/listable_entity.list_params.schema.json sql/doc/json/origin_visit.metadata.json sql/doc/json/tool.tool_configuration.schema.json sql/json/.gitignore sql/json/Makefile sql/json/entity.lister_metadata.schema.json sql/json/entity.metadata.schema.json sql/json/entity_history.lister_metadata.schema.json sql/json/entity_history.metadata.schema.json sql/json/fetch_history.result.schema.json sql/json/list_history.result.schema.json sql/json/listable_entity.list_params.schema.json sql/json/origin_visit.metadata.json sql/json/tool.tool_configuration.schema.json sql/upgrades/015.sql sql/upgrades/016.sql sql/upgrades/017.sql sql/upgrades/018.sql sql/upgrades/019.sql sql/upgrades/020.sql sql/upgrades/021.sql sql/upgrades/022.sql sql/upgrades/023.sql sql/upgrades/024.sql sql/upgrades/025.sql sql/upgrades/026.sql sql/upgrades/027.sql sql/upgrades/028.sql sql/upgrades/029.sql sql/upgrades/030.sql sql/upgrades/032.sql sql/upgrades/033.sql sql/upgrades/034.sql sql/upgrades/035.sql sql/upgrades/036.sql sql/upgrades/037.sql sql/upgrades/038.sql sql/upgrades/039.sql sql/upgrades/040.sql sql/upgrades/041.sql sql/upgrades/042.sql sql/upgrades/043.sql sql/upgrades/044.sql sql/upgrades/045.sql 
sql/upgrades/046.sql sql/upgrades/047.sql sql/upgrades/048.sql sql/upgrades/049.sql sql/upgrades/050.sql sql/upgrades/051.sql sql/upgrades/052.sql sql/upgrades/053.sql sql/upgrades/054.sql sql/upgrades/055.sql sql/upgrades/056.sql sql/upgrades/057.sql sql/upgrades/058.sql sql/upgrades/059.sql sql/upgrades/060.sql sql/upgrades/061.sql sql/upgrades/062.sql sql/upgrades/063.sql sql/upgrades/064.sql sql/upgrades/065.sql sql/upgrades/066.sql sql/upgrades/067.sql sql/upgrades/068.sql sql/upgrades/069.sql sql/upgrades/070.sql sql/upgrades/071.sql sql/upgrades/072.sql sql/upgrades/073.sql sql/upgrades/074.sql sql/upgrades/075.sql sql/upgrades/076.sql sql/upgrades/077.sql sql/upgrades/078.sql sql/upgrades/079.sql sql/upgrades/080.sql sql/upgrades/081.sql sql/upgrades/082.sql sql/upgrades/083.sql sql/upgrades/084.sql sql/upgrades/085.sql sql/upgrades/086.sql sql/upgrades/087.sql sql/upgrades/088.sql sql/upgrades/089.sql sql/upgrades/090.sql sql/upgrades/091.sql sql/upgrades/092.sql sql/upgrades/093.sql sql/upgrades/094.sql sql/upgrades/095.sql sql/upgrades/096.sql sql/upgrades/097.sql sql/upgrades/098.sql sql/upgrades/099.sql sql/upgrades/100.sql sql/upgrades/101.sql sql/upgrades/102.sql sql/upgrades/103.sql sql/upgrades/104.sql sql/upgrades/105.sql sql/upgrades/106.sql sql/upgrades/107.sql sql/upgrades/108.sql sql/upgrades/109.sql sql/upgrades/110.sql sql/upgrades/111.sql sql/upgrades/112.sql sql/upgrades/113.sql sql/upgrades/114.sql sql/upgrades/115.sql sql/upgrades/116.sql sql/upgrades/117.sql sql/upgrades/118.sql sql/upgrades/119.sql sql/upgrades/120.sql sql/upgrades/121.sql sql/upgrades/122.sql sql/upgrades/123.sql sql/upgrades/124.sql sql/upgrades/125.sql sql/upgrades/126.sql sql/upgrades/127.sql sql/upgrades/128.sql sql/upgrades/129.sql sql/upgrades/130.sql sql/upgrades/131.sql sql/upgrades/132.sql sql/upgrades/133.sql sql/upgrades/134.sql sql/upgrades/135.sql sql/upgrades/136.sql sql/upgrades/137.sql sql/upgrades/138.sql sql/upgrades/139.sql sql/upgrades/140.sql sql/upgrades/141.sql sql/upgrades/142.sql sql/upgrades/143.sql sql/upgrades/144.sql sql/upgrades/145.sql sql/upgrades/146.sql sql/upgrades/147.sql sql/upgrades/148.sql sql/upgrades/149.sql sql/upgrades/150.sql sql/upgrades/151.sql sql/upgrades/152.sql sql/upgrades/153.sql sql/upgrades/154.sql sql/upgrades/155.sql sql/upgrades/156.sql sql/upgrades/157.sql sql/upgrades/158.sql swh/__init__.py swh.storage.egg-info/PKG-INFO swh.storage.egg-info/SOURCES.txt swh.storage.egg-info/dependency_links.txt swh.storage.egg-info/entry_points.txt swh.storage.egg-info/requires.txt swh.storage.egg-info/top_level.txt swh/storage/__init__.py swh/storage/backfill.py swh/storage/buffer.py swh/storage/cli.py swh/storage/common.py swh/storage/converters.py swh/storage/db.py swh/storage/exc.py swh/storage/filter.py swh/storage/fixer.py swh/storage/in_memory.py swh/storage/interface.py swh/storage/metrics.py swh/storage/objstorage.py swh/storage/py.typed swh/storage/pytest_plugin.py swh/storage/replay.py swh/storage/retry.py swh/storage/storage.py swh/storage/utils.py -swh/storage/validate.py swh/storage/writer.py swh/storage/algos/__init__.py swh/storage/algos/diff.py swh/storage/algos/dir_iterators.py swh/storage/algos/origin.py swh/storage/algos/revisions_walker.py swh/storage/algos/snapshot.py swh/storage/api/__init__.py swh/storage/api/client.py swh/storage/api/serializers.py swh/storage/api/server.py swh/storage/cassandra/__init__.py swh/storage/cassandra/common.py swh/storage/cassandra/converters.py swh/storage/cassandra/cql.py 
swh/storage/cassandra/schema.py swh/storage/cassandra/storage.py swh/storage/sql/10-swh-init.sql swh/storage/sql/20-swh-enums.sql swh/storage/sql/30-swh-schema.sql swh/storage/sql/40-swh-func.sql swh/storage/sql/60-swh-indexes.sql swh/storage/tests/__init__.py swh/storage/tests/conftest.py swh/storage/tests/storage_data.py swh/storage/tests/test_api_client.py swh/storage/tests/test_backfill.py swh/storage/tests/test_buffer.py swh/storage/tests/test_cassandra.py swh/storage/tests/test_cassandra_converters.py swh/storage/tests/test_cli.py swh/storage/tests/test_converters.py swh/storage/tests/test_exception.py swh/storage/tests/test_filter.py swh/storage/tests/test_in_memory.py swh/storage/tests/test_init.py swh/storage/tests/test_kafka_writer.py swh/storage/tests/test_metrics.py swh/storage/tests/test_pytest_plugin.py swh/storage/tests/test_replay.py swh/storage/tests/test_retry.py swh/storage/tests/test_revision_bw_compat.py swh/storage/tests/test_server.py swh/storage/tests/test_storage.py +swh/storage/tests/test_storage_data.py swh/storage/tests/test_utils.py swh/storage/tests/algos/__init__.py swh/storage/tests/algos/test_diff.py swh/storage/tests/algos/test_dir_iterator.py swh/storage/tests/algos/test_origin.py swh/storage/tests/algos/test_revisions_walker.py swh/storage/tests/algos/test_snapshot.py swh/storage/tests/data/storage.yml \ No newline at end of file diff --git a/swh/storage/__init__.py b/swh/storage/__init__.py index 757b615d..7d82a4ab 100644 --- a/swh/storage/__init__.py +++ b/swh/storage/__init__.py @@ -1,104 +1,101 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import warnings STORAGE_IMPLEMENTATION = { "pipeline", "local", "remote", "memory", "filter", "buffer", "retry", - "validate", "cassandra", } def get_storage(cls, **kwargs): """Get a storage object of class `storage_class` with arguments `storage_args`. Args: storage (dict): dictionary with keys: - cls (str): storage's class, either local, remote, memory, filter, buffer - args (dict): dictionary with keys Returns: an instance of swh.storage.Storage or compatible class Raises: ValueError if passed an unknown storage class. """ if cls not in STORAGE_IMPLEMENTATION: raise ValueError( "Unknown storage class `%s`. Supported: %s" % (cls, ", ".join(STORAGE_IMPLEMENTATION)) ) if "args" in kwargs: warnings.warn( 'Explicit "args" key is deprecated, use keys directly instead.', DeprecationWarning, ) kwargs = kwargs["args"] if cls == "pipeline": return get_storage_pipeline(**kwargs) if cls == "remote": from .api.client import RemoteStorage as Storage elif cls == "local": from .storage import Storage elif cls == "cassandra": from .cassandra import CassandraStorage as Storage elif cls == "memory": from .in_memory import InMemoryStorage as Storage elif cls == "filter": from .filter import FilteringProxyStorage as Storage elif cls == "buffer": from .buffer import BufferingProxyStorage as Storage elif cls == "retry": from .retry import RetryingProxyStorage as Storage - elif cls == "validate": - from .validate import ValidatingProxyStorage as Storage return Storage(**kwargs) def get_storage_pipeline(steps): """Recursively get a storage object that may use other storage objects as backends. Args: steps (List[dict]): List of dicts that may be used as kwargs for `get_storage`. 
Returns: an instance of swh.storage.Storage or compatible class Raises: ValueError if passed an unknown storage class. """ storage_config = None for step in reversed(steps): if "args" in step: warnings.warn( 'Explicit "args" key is deprecated, use keys directly ' "instead.", DeprecationWarning, ) step = { "cls": step["cls"], **step["args"], } if storage_config: step["storage"] = storage_config storage_config = step return get_storage(**storage_config) diff --git a/swh/storage/cassandra/storage.py b/swh/storage/cassandra/storage.py index c8a3a980..41008892 100644 --- a/swh/storage/cassandra/storage.py +++ b/swh/storage/cassandra/storage.py @@ -1,1202 +1,1189 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import datetime import itertools import json import random import re from typing import Any, Dict, List, Iterable, Optional, Union import attr -from deprecated import deprecated from swh.core.api.serializers import msgpack_loads, msgpack_dumps from swh.model.identifiers import parse_swhid, SWHID from swh.model.hashutil import DEFAULT_ALGORITHMS from swh.model.model import ( Revision, Release, Directory, DirectoryEntry, Content, SkippedContent, OriginVisit, OriginVisitStatus, Snapshot, Origin, MetadataAuthority, MetadataAuthorityType, MetadataFetcher, MetadataTargetType, RawExtrinsicMetadata, ) from swh.storage.objstorage import ObjStorage from swh.storage.writer import JournalWriter from swh.storage.utils import map_optional, now from ..exc import StorageArgumentException, HashCollision from .common import TOKEN_BEGIN, TOKEN_END from .converters import ( revision_to_db, revision_from_db, release_to_db, release_from_db, row_to_visit_status, ) from .cql import CqlRunner from .schema import HASH_ALGORITHMS # Max block size of contents to return BULK_BLOCK_CONTENT_LEN_MAX = 10000 class CassandraStorage: def __init__(self, hosts, keyspace, objstorage, port=9042, journal_writer=None): self._cql_runner = CqlRunner(hosts, keyspace, port) self.journal_writer = JournalWriter(journal_writer) self.objstorage = ObjStorage(objstorage) def check_config(self, *, check_write): self._cql_runner.check_read() return True def _content_get_from_hash(self, algo, hash_) -> Iterable: """From the name of a hash algorithm and a value of that hash, looks up the "hash -> token" secondary table (content_by_{algo}) to get tokens. Then, looks up the main table (content) to get all contents with that token, and filters out contents whose hash doesn't match.""" found_tokens = self._cql_runner.content_get_tokens_from_single_hash(algo, hash_) for token in found_tokens: # Query the main table ('content'). res = self._cql_runner.content_get_from_token(token) for row in res: # re-check the the hash (in case of murmur3 collision) if getattr(row, algo) == hash_: yield row def _content_add(self, contents: List[Content], with_data: bool) -> Dict: # Filter-out content already in the database. contents = [ c for c in contents if not self._cql_runner.content_get_from_pk(c.to_dict()) ] self.journal_writer.content_add(contents) if with_data: # First insert to the objstorage, if the endpoint is # `content_add` (as opposed to `content_add_metadata`). 
# TODO: this should probably be done in concurrently to inserting # in index tables (but still before the main table; so an entry is # only added to the main table after everything else was # successfully inserted. summary = self.objstorage.content_add( c for c in contents if c.status != "absent" ) content_add_bytes = summary["content:add:bytes"] content_add = 0 for content in contents: content_add += 1 # Check for sha1 or sha1_git collisions. This test is not atomic # with the insertion, so it won't detect a collision if both # contents are inserted at the same time, but it's good enough. # # The proper way to do it would probably be a BATCH, but this # would be inefficient because of the number of partitions we # need to affect (len(HASH_ALGORITHMS)+1, which is currently 5) for algo in {"sha1", "sha1_git"}: collisions = [] # Get tokens of 'content' rows with the same value for # sha1/sha1_git rows = self._content_get_from_hash(algo, content.get_hash(algo)) for row in rows: if getattr(row, algo) != content.get_hash(algo): # collision of token(partition key), ignore this # row continue for algo in HASH_ALGORITHMS: if getattr(row, algo) != content.get_hash(algo): # This hash didn't match; discard the row. collisions.append( {algo: getattr(row, algo) for algo in HASH_ALGORITHMS} ) if collisions: collisions.append(content.hashes()) raise HashCollision(algo, content.get_hash(algo), collisions) (token, insertion_finalizer) = self._cql_runner.content_add_prepare(content) # Then add to index tables for algo in HASH_ALGORITHMS: self._cql_runner.content_index_add_one(algo, content, token) # Then to the main table insertion_finalizer() summary = { "content:add": content_add, } if with_data: summary["content:add:bytes"] = content_add_bytes return summary def content_add(self, content: Iterable[Content]) -> Dict: contents = [attr.evolve(c, ctime=now()) for c in content] return self._content_add(list(contents), with_data=True) def content_update(self, content, keys=[]): raise NotImplementedError( "content_update is not supported by the Cassandra backend" ) def content_add_metadata(self, content: Iterable[Content]) -> Dict: return self._content_add(list(content), with_data=False) def content_get(self, content): if len(content) > BULK_BLOCK_CONTENT_LEN_MAX: raise StorageArgumentException( "Sending at most %s contents." % BULK_BLOCK_CONTENT_LEN_MAX ) yield from self.objstorage.content_get(content) def content_get_partition( self, partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None, ): if limit is None: raise StorageArgumentException("limit should not be None") # Compute start and end of the range of tokens covered by the # requested partition partition_size = (TOKEN_END - TOKEN_BEGIN) // nb_partitions range_start = TOKEN_BEGIN + partition_id * partition_size range_end = TOKEN_BEGIN + (partition_id + 1) * partition_size # offset the range start according to the `page_token`. 
if page_token is not None: if not (range_start <= int(page_token) <= range_end): raise StorageArgumentException("Invalid page_token.") range_start = int(page_token) # Get the first rows of the range rows = self._cql_runner.content_get_token_range(range_start, range_end, limit) rows = list(rows) if len(rows) == limit: next_page_token: Optional[str] = str(rows[-1].tok + 1) else: next_page_token = None return { "contents": [row._asdict() for row in rows if row.status != "absent"], "next_page_token": next_page_token, } def content_get_metadata(self, contents: List[bytes]) -> Dict[bytes, List[Dict]]: result: Dict[bytes, List[Dict]] = {sha1: [] for sha1 in contents} for sha1 in contents: # Get all (sha1, sha1_git, sha256, blake2s256) whose sha1 # matches the argument, from the index table ('content_by_sha1') for row in self._content_get_from_hash("sha1", sha1): content_metadata = row._asdict() content_metadata.pop("ctime") result[content_metadata["sha1"]].append(content_metadata) return result def content_find(self, content): # Find an algorithm that is common to all the requested contents. # It will be used to do an initial filtering efficiently. filter_algos = list(set(content).intersection(HASH_ALGORITHMS)) if not filter_algos: raise StorageArgumentException( "content keys must contain at least one of: " "%s" % ", ".join(sorted(HASH_ALGORITHMS)) ) common_algo = filter_algos[0] results = [] rows = self._content_get_from_hash(common_algo, content[common_algo]) for row in rows: # Re-check all the hashes, in case of collisions (either of the # hash of the partition key, or the hashes in it) for algo in HASH_ALGORITHMS: if content.get(algo) and getattr(row, algo) != content[algo]: # This hash didn't match; discard the row. break else: # All hashes match, keep this row. results.append( { **row._asdict(), "ctime": row.ctime.replace(tzinfo=datetime.timezone.utc), } ) return results def content_missing(self, content, key_hash="sha1"): for cont in content: res = self.content_find(cont) if not res: yield cont[key_hash] if any(c["status"] == "missing" for c in res): yield cont[key_hash] def content_missing_per_sha1(self, contents): return self.content_missing([{"sha1": c for c in contents}]) def content_missing_per_sha1_git(self, contents): return self.content_missing( [{"sha1_git": c for c in contents}], key_hash="sha1_git" ) def content_get_random(self): return self._cql_runner.content_get_random().sha1_git def _skipped_content_get_from_hash(self, algo, hash_) -> Iterable: """From the name of a hash algorithm and a value of that hash, looks up the "hash -> token" secondary table (skipped_content_by_{algo}) to get tokens. Then, looks up the main table (content) to get all contents with that token, and filters out contents whose hash doesn't match.""" found_tokens = self._cql_runner.skipped_content_get_tokens_from_single_hash( algo, hash_ ) for token in found_tokens: # Query the main table ('content'). res = self._cql_runner.skipped_content_get_from_token(token) for row in res: # re-check the the hash (in case of murmur3 collision) if getattr(row, algo) == hash_: yield row def _skipped_content_add(self, contents: Iterable[SkippedContent]) -> Dict: # Filter-out content already in the database. 
contents = [ c for c in contents if not self._cql_runner.skipped_content_get_from_pk(c.to_dict()) ] self.journal_writer.skipped_content_add(contents) for content in contents: # Compute token of the row in the main table (token, insertion_finalizer) = self._cql_runner.skipped_content_add_prepare( content ) # Then add to index tables for algo in HASH_ALGORITHMS: self._cql_runner.skipped_content_index_add_one(algo, content, token) # Then to the main table insertion_finalizer() return {"skipped_content:add": len(contents)} def skipped_content_add(self, content: Iterable[SkippedContent]) -> Dict: contents = [attr.evolve(c, ctime=now()) for c in content] return self._skipped_content_add(contents) def skipped_content_missing(self, contents): for content in contents: if not self._cql_runner.skipped_content_get_from_pk(content): yield {algo: content[algo] for algo in DEFAULT_ALGORITHMS} def directory_add(self, directories: Iterable[Directory]) -> Dict: directories = list(directories) # Filter out directories that are already inserted. missing = self.directory_missing([dir_.id for dir_ in directories]) directories = [dir_ for dir_ in directories if dir_.id in missing] self.journal_writer.directory_add(directories) for directory in directories: # Add directory entries to the 'directory_entry' table for entry in directory.entries: self._cql_runner.directory_entry_add_one( {**entry.to_dict(), "directory_id": directory.id} ) # Add the directory *after* adding all the entries, so someone # calling snapshot_get_branch in the meantime won't end up # with half the entries. self._cql_runner.directory_add_one(directory.id) return {"directory:add": len(missing)} def directory_missing(self, directories): return self._cql_runner.directory_missing(directories) def _join_dentry_to_content(self, dentry): keys = ( "status", "sha1", "sha1_git", "sha256", "length", ) ret = dict.fromkeys(keys) ret.update(dentry.to_dict()) if ret["type"] == "file": content = self.content_find({"sha1_git": ret["target"]}) if content: content = content[0] for key in keys: ret[key] = content[key] return ret def _directory_ls(self, directory_id, recursive, prefix=b""): if self.directory_missing([directory_id]): return rows = list(self._cql_runner.directory_entry_get([directory_id])) for row in rows: # Build and yield the directory entry dict entry = row._asdict() del entry["directory_id"] entry = DirectoryEntry.from_dict(entry) ret = self._join_dentry_to_content(entry) ret["name"] = prefix + ret["name"] ret["dir_id"] = directory_id yield ret if recursive and ret["type"] == "dir": yield from self._directory_ls( ret["target"], True, prefix + ret["name"] + b"/" ) def directory_entry_get_by_path(self, directory, paths): return self._directory_entry_get_by_path(directory, paths, b"") def _directory_entry_get_by_path(self, directory, paths, prefix): if not paths: return contents = list(self.directory_ls(directory)) if not contents: return def _get_entry(entries, name): """Finds the entry with the requested name, prepends the prefix (to get its full path), and returns it. 
If no entry has that name, returns None.""" for entry in entries: if entry["name"] == name: entry = entry.copy() entry["name"] = prefix + entry["name"] return entry first_item = _get_entry(contents, paths[0]) if len(paths) == 1: return first_item if not first_item or first_item["type"] != "dir": return return self._directory_entry_get_by_path( first_item["target"], paths[1:], prefix + paths[0] + b"/" ) def directory_ls(self, directory, recursive=False): yield from self._directory_ls(directory, recursive) def directory_get_random(self): return self._cql_runner.directory_get_random().id def revision_add(self, revisions: Iterable[Revision]) -> Dict: revisions = list(revisions) # Filter-out revisions already in the database missing = self.revision_missing([rev.id for rev in revisions]) revisions = [rev for rev in revisions if rev.id in missing] self.journal_writer.revision_add(revisions) for revision in revisions: revobject = revision_to_db(revision) if revobject: # Add parents first for (rank, parent) in enumerate(revobject["parents"]): self._cql_runner.revision_parent_add_one( revobject["id"], rank, parent ) # Then write the main revision row. # Writing this after all parents were written ensures that # read endpoints don't return a partial view while writing # the parents self._cql_runner.revision_add_one(revobject) return {"revision:add": len(revisions)} def revision_missing(self, revisions): return self._cql_runner.revision_missing(revisions) def revision_get(self, revisions): rows = self._cql_runner.revision_get(revisions) revs = {} for row in rows: # TODO: use a single query to get all parents? # (it might have lower latency, but requires more code and more # bandwidth, because revision id would be part of each returned # row) parent_rows = self._cql_runner.revision_parent_get(row.id) # parent_rank is the clustering key, so results are already # sorted by rank. parents = tuple(row.parent_id for row in parent_rows) rev = revision_from_db(row, parents=parents) revs[rev.id] = rev.to_dict() for rev_id in revisions: yield revs.get(rev_id) def _get_parent_revs(self, rev_ids, seen, limit, short): if limit and len(seen) >= limit: return rev_ids = [id_ for id_ in rev_ids if id_ not in seen] if not rev_ids: return seen |= set(rev_ids) # We need this query, even if short=True, to return consistent # results (ie. not return only a subset of a revision's parents # if it is being written) if short: rows = self._cql_runner.revision_get_ids(rev_ids) else: rows = self._cql_runner.revision_get(rev_ids) for row in rows: # TODO: use a single query to get all parents? # (it might have less latency, but requires less code and more # bandwidth (because revision id would be part of each returned # row) parent_rows = self._cql_runner.revision_parent_get(row.id) # parent_rank is the clustering key, so results are already # sorted by rank. 
parents = tuple(row.parent_id for row in parent_rows) if short: yield (row.id, parents) else: rev = revision_from_db(row, parents=parents) yield rev.to_dict() yield from self._get_parent_revs(parents, seen, limit, short) def revision_log(self, revisions, limit=None): seen = set() yield from self._get_parent_revs(revisions, seen, limit, False) def revision_shortlog(self, revisions, limit=None): seen = set() yield from self._get_parent_revs(revisions, seen, limit, True) def revision_get_random(self): return self._cql_runner.revision_get_random().id def release_add(self, releases: Iterable[Release]) -> Dict: to_add = [] for rel in releases: if rel not in to_add: to_add.append(rel) missing = set(self.release_missing([rel.id for rel in to_add])) to_add = [rel for rel in to_add if rel.id in missing] self.journal_writer.release_add(to_add) for release in to_add: if release: self._cql_runner.release_add_one(release_to_db(release)) return {"release:add": len(to_add)} def release_missing(self, releases): return self._cql_runner.release_missing(releases) def release_get(self, releases): rows = self._cql_runner.release_get(releases) rels = {} for row in rows: release = release_from_db(row) rels[row.id] = release.to_dict() for rel_id in releases: yield rels.get(rel_id) def release_get_random(self): return self._cql_runner.release_get_random().id def snapshot_add(self, snapshots: Iterable[Snapshot]) -> Dict: + snapshots = list(snapshots) missing = self._cql_runner.snapshot_missing([snp.id for snp in snapshots]) snapshots = [snp for snp in snapshots if snp.id in missing] for snapshot in snapshots: self.journal_writer.snapshot_add([snapshot]) # Add branches for (branch_name, branch) in snapshot.branches.items(): if branch is None: target_type = None target = None else: target_type = branch.target_type.value target = branch.target self._cql_runner.snapshot_branch_add_one( { "snapshot_id": snapshot.id, "name": branch_name, "target_type": target_type, "target": target, } ) # Add the snapshot *after* adding all the branches, so someone # calling snapshot_get_branch in the meantime won't end up # with half the branches. self._cql_runner.snapshot_add_one(snapshot.id) return {"snapshot:add": len(snapshots)} def snapshot_missing(self, snapshots): return self._cql_runner.snapshot_missing(snapshots) def snapshot_get(self, snapshot_id): return self.snapshot_get_branches(snapshot_id) def snapshot_get_by_origin_visit(self, origin, visit): try: visit = self.origin_visit_get_by(origin, visit) except IndexError: return None return self.snapshot_get(visit["snapshot"]) def snapshot_count_branches(self, snapshot_id): if self._cql_runner.snapshot_missing([snapshot_id]): # Makes sure we don't fetch branches for a snapshot that is # being added. return None rows = list(self._cql_runner.snapshot_count_branches(snapshot_id)) assert len(rows) == 1 (nb_none, counts) = rows[0].counts counts = dict(counts) if nb_none: counts[None] = nb_none return counts def snapshot_get_branches( self, snapshot_id, branches_from=b"", branches_count=1000, target_types=None ): if self._cql_runner.snapshot_missing([snapshot_id]): # Makes sure we don't fetch branches for a snapshot that is # being added. 
return None branches = [] while len(branches) < branches_count + 1: new_branches = list( self._cql_runner.snapshot_branch_get( snapshot_id, branches_from, branches_count + 1 ) ) if not new_branches: break branches_from = new_branches[-1].name new_branches_filtered = new_branches # Filter by target_type if target_types: new_branches_filtered = [ branch for branch in new_branches_filtered if branch.target is not None and branch.target_type in target_types ] branches.extend(new_branches_filtered) if len(new_branches) < branches_count + 1: break if len(branches) > branches_count: last_branch = branches.pop(-1).name else: last_branch = None branches = { branch.name: {"target": branch.target, "target_type": branch.target_type,} if branch.target else None for branch in branches } return { "id": snapshot_id, "branches": branches, "next_branch": last_branch, } def snapshot_get_random(self): return self._cql_runner.snapshot_get_random().id def object_find_by_sha1_git(self, ids): results = {id_: [] for id_ in ids} missing_ids = set(ids) # Mind the order, revision is the most likely one for a given ID, # so we check revisions first. queries = [ ("revision", self._cql_runner.revision_missing), ("release", self._cql_runner.release_missing), ("content", self._cql_runner.content_missing_by_sha1_git), ("directory", self._cql_runner.directory_missing), ] for (object_type, query_fn) in queries: found_ids = missing_ids - set(query_fn(missing_ids)) for sha1_git in found_ids: results[sha1_git].append( {"sha1_git": sha1_git, "type": object_type,} ) missing_ids.remove(sha1_git) if not missing_ids: # We found everything, skipping the next queries. break return results def origin_get(self, origins): if isinstance(origins, dict): # Old API return_single = True origins = [origins] else: return_single = False if any("id" in origin for origin in origins): raise StorageArgumentException("Origin ids are not supported.") results = [self.origin_get_one(origin) for origin in origins] if return_single: assert len(results) == 1 return results[0] else: return results def origin_get_one(self, origin: Dict[str, Any]) -> Optional[Dict[str, Any]]: if "id" in origin: raise StorageArgumentException("Origin ids are not supported.") if "url" not in origin: raise StorageArgumentException("Missing origin url") rows = self._cql_runner.origin_get_by_url(origin["url"]) rows = list(rows) if rows: assert len(rows) == 1 result = rows[0]._asdict() return { "url": result["url"], } else: return None def origin_get_by_sha1(self, sha1s): results = [] for sha1 in sha1s: rows = self._cql_runner.origin_get_by_sha1(sha1) if rows: results.append({"url": rows.one().url}) else: results.append(None) return results def origin_list(self, page_token: Optional[str] = None, limit: int = 100) -> dict: # Compute what token to begin the listing from start_token = TOKEN_BEGIN if page_token: start_token = int(page_token) if not (TOKEN_BEGIN <= start_token <= TOKEN_END): raise StorageArgumentException("Invalid page_token.") rows = self._cql_runner.origin_list(start_token, limit) rows = list(rows) if len(rows) == limit: next_page_token: Optional[str] = str(rows[-1].tok + 1) else: next_page_token = None return { "origins": [{"url": row.url} for row in rows], "next_page_token": next_page_token, } def origin_search( self, url_pattern, offset=0, limit=50, regexp=False, with_visit=False ): # TODO: remove this endpoint, swh-search should be used instead. 
origins = self._cql_runner.origin_iter_all() if regexp: pat = re.compile(url_pattern) origins = [orig for orig in origins if pat.search(orig.url)] else: origins = [orig for orig in origins if url_pattern in orig.url] if with_visit: origins = [orig for orig in origins if orig.next_visit_id > 1] return [{"url": orig.url,} for orig in origins[offset : offset + limit]] def origin_add(self, origins: Iterable[Origin]) -> Dict[str, int]: + origins = list(origins) known_origins = [ Origin.from_dict(d) for d in self.origin_get([origin.to_dict() for origin in origins]) if d is not None ] to_add = [origin for origin in origins if origin not in known_origins] self.journal_writer.origin_add(to_add) for origin in to_add: self._cql_runner.origin_add_one(origin) return {"origin:add": len(to_add)} - @deprecated("Use origin_add([origin]) instead") - def origin_add_one(self, origin: Origin) -> str: - known_origin = self.origin_get_one(origin.to_dict()) - - if known_origin: - origin_url = known_origin["url"] - else: - self.journal_writer.origin_add([origin]) - - self._cql_runner.origin_add_one(origin) - origin_url = origin.url - - return origin_url - def origin_visit_add(self, visits: Iterable[OriginVisit]) -> Iterable[OriginVisit]: for visit in visits: origin = self.origin_get({"url": visit.origin}) if not origin: # Cannot add a visit without an origin raise StorageArgumentException("Unknown origin %s", visit.origin) all_visits = [] nb_visits = 0 for visit in visits: nb_visits += 1 if not visit.visit: visit_id = self._cql_runner.origin_generate_unique_visit_id( visit.origin ) visit = attr.evolve(visit, visit=visit_id) self.journal_writer.origin_visit_add([visit]) self._cql_runner.origin_visit_add_one(visit) assert visit.visit is not None all_visits.append(visit) self._origin_visit_status_add( OriginVisitStatus( origin=visit.origin, visit=visit.visit, date=visit.date, status="created", snapshot=None, ) ) return all_visits def _origin_visit_status_add(self, visit_status: OriginVisitStatus) -> None: """Add an origin visit status""" self.journal_writer.origin_visit_status_add([visit_status]) self._cql_runner.origin_visit_status_add_one(visit_status) def origin_visit_status_add( self, visit_statuses: Iterable[OriginVisitStatus] ) -> None: # First round to check existence (fail early if any is ko) for visit_status in visit_statuses: origin_url = self.origin_get({"url": visit_status.origin}) if not origin_url: raise StorageArgumentException(f"Unknown origin {visit_status.origin}") for visit_status in visit_statuses: self._origin_visit_status_add(visit_status) def _origin_visit_apply_last_status(self, visit: Dict[str, Any]) -> Dict[str, Any]: """Retrieve the latest visit status information for the origin visit. Then merge it with the visit and return it. """ row = self._cql_runner.origin_visit_status_get_latest( visit["origin"], visit["visit"] ) assert row is not None visit_status = row_to_visit_status(row) return { # default to the values in visit **visit, # override with the last update **visit_status.to_dict(), # visit['origin'] is the URL (via a join), while # visit_status['origin'] is only an id. "origin": visit["origin"], # but keep the date of the creation of the origin visit "date": visit["date"], } def _origin_visit_get_updated(self, origin: str, visit_id: int) -> Dict[str, Any]: """Retrieve origin visit and latest origin visit status and merge them into an origin visit. 
""" row_visit = self._cql_runner.origin_visit_get_one(origin, visit_id) assert row_visit is not None visit = self._format_origin_visit_row(row_visit) return self._origin_visit_apply_last_status(visit) @staticmethod def _format_origin_visit_row(visit): return { **visit._asdict(), "origin": visit.origin, "date": visit.date.replace(tzinfo=datetime.timezone.utc), } def origin_visit_get( self, origin: str, last_visit: Optional[int] = None, limit: Optional[int] = None, order: str = "asc", ) -> Iterable[Dict[str, Any]]: rows = self._cql_runner.origin_visit_get(origin, last_visit, limit, order) for row in rows: visit = self._format_origin_visit_row(row) yield self._origin_visit_apply_last_status(visit) def origin_visit_find_by_date( self, origin: str, visit_date: datetime.datetime ) -> Optional[Dict[str, Any]]: # Iterator over all the visits of the origin # This should be ok for now, as there aren't too many visits # per origin. rows = list(self._cql_runner.origin_visit_get_all(origin)) def key(visit): dt = visit.date.replace(tzinfo=datetime.timezone.utc) - visit_date return (abs(dt), -visit.visit) if rows: row = min(rows, key=key) visit = self._format_origin_visit_row(row) return self._origin_visit_apply_last_status(visit) return None def origin_visit_get_by(self, origin: str, visit: int) -> Optional[Dict[str, Any]]: row = self._cql_runner.origin_visit_get_one(origin, visit) if row: visit_ = self._format_origin_visit_row(row) return self._origin_visit_apply_last_status(visit_) return None def origin_visit_get_latest( self, origin: str, type: Optional[str] = None, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[Dict[str, Any]]: # TODO: Do not fetch all visits rows = self._cql_runner.origin_visit_get_all(origin) latest_visit = None for row in rows: visit = self._format_origin_visit_row(row) updated_visit = self._origin_visit_apply_last_status(visit) if type is not None and updated_visit["type"] != type: continue if allowed_statuses and updated_visit["status"] not in allowed_statuses: continue if require_snapshot and updated_visit["snapshot"] is None: continue # updated_visit is a candidate if latest_visit is not None: if updated_visit["date"] < latest_visit["date"]: continue if updated_visit["visit"] < latest_visit["visit"]: continue latest_visit = updated_visit return latest_visit def origin_visit_status_get_latest( self, origin_url: str, visit: int, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[OriginVisitStatus]: rows = self._cql_runner.origin_visit_status_get( origin_url, visit, allowed_statuses, require_snapshot ) # filtering is done python side as we cannot do it server side if allowed_statuses: rows = [row for row in rows if row.status in allowed_statuses] if require_snapshot: rows = [row for row in rows if row.snapshot is not None] if not rows: return None return row_to_visit_status(rows[0]) def origin_visit_get_random(self, type: str) -> Optional[Dict[str, Any]]: back_in_the_day = now() - datetime.timedelta(weeks=12) # 3 months back # Random position to start iteration at start_token = random.randint(TOKEN_BEGIN, TOKEN_END) # Iterator over all visits, ordered by token(origins) then visit_id rows = self._cql_runner.origin_visit_iter(start_token) for row in rows: visit = self._format_origin_visit_row(row) visit_status = self._origin_visit_apply_last_status(visit) if ( visit_status["date"] > back_in_the_day and visit_status["status"] == "full" ): return visit_status else: return None def 
stat_counters(self): rows = self._cql_runner.stat_counters() keys = ( "content", "directory", "origin", "origin_visit", "release", "revision", "skipped_content", "snapshot", ) stats = {key: 0 for key in keys} stats.update({row.object_type: row.count for row in rows}) return stats def refresh_stat_counters(self): pass def object_metadata_add(self, metadata: Iterable[RawExtrinsicMetadata]) -> None: for metadata_entry in metadata: if not self._cql_runner.metadata_authority_get( metadata_entry.authority.type.value, metadata_entry.authority.url ): raise StorageArgumentException( f"Unknown authority {metadata_entry.authority}" ) if not self._cql_runner.metadata_fetcher_get( metadata_entry.fetcher.name, metadata_entry.fetcher.version ): raise StorageArgumentException( f"Unknown fetcher {metadata_entry.fetcher}" ) try: self._cql_runner.object_metadata_add( type=metadata_entry.type.value, id=str(metadata_entry.id), authority_type=metadata_entry.authority.type.value, authority_url=metadata_entry.authority.url, discovery_date=metadata_entry.discovery_date, fetcher_name=metadata_entry.fetcher.name, fetcher_version=metadata_entry.fetcher.version, format=metadata_entry.format, metadata=metadata_entry.metadata, origin=metadata_entry.origin, visit=metadata_entry.visit, snapshot=map_optional(str, metadata_entry.snapshot), release=map_optional(str, metadata_entry.release), revision=map_optional(str, metadata_entry.revision), path=metadata_entry.path, directory=map_optional(str, metadata_entry.directory), ) except TypeError as e: raise StorageArgumentException(*e.args) def object_metadata_get( self, object_type: MetadataTargetType, id: Union[str, SWHID], authority: MetadataAuthority, after: Optional[datetime.datetime] = None, page_token: Optional[bytes] = None, limit: int = 1000, ) -> Dict[str, Union[Optional[bytes], List[RawExtrinsicMetadata]]]: if object_type == MetadataTargetType.ORIGIN: if isinstance(id, SWHID): raise StorageArgumentException( f"object_metadata_get called with object_type='origin', but " f"provided id is an SWHID: {id!r}" ) else: if not isinstance(id, SWHID): raise StorageArgumentException( f"object_metadata_get called with object_type!='origin', but " f"provided id is not an SWHID: {id!r}" ) if page_token is not None: (after_date, after_fetcher_name, after_fetcher_url) = msgpack_loads( page_token ) if after and after_date < after: raise StorageArgumentException( "page_token is inconsistent with the value of 'after'." 
) entries = self._cql_runner.object_metadata_get_after_date_and_fetcher( str(id), authority.type.value, authority.url, after_date, after_fetcher_name, after_fetcher_url, ) elif after is not None: entries = self._cql_runner.object_metadata_get_after_date( str(id), authority.type.value, authority.url, after ) else: entries = self._cql_runner.object_metadata_get( str(id), authority.type.value, authority.url ) if limit: entries = itertools.islice(entries, 0, limit + 1) results = [] for entry in entries: discovery_date = entry.discovery_date.replace(tzinfo=datetime.timezone.utc) assert str(id) == entry.id result = RawExtrinsicMetadata( type=MetadataTargetType(entry.type), id=id, authority=MetadataAuthority( type=MetadataAuthorityType(entry.authority_type), url=entry.authority_url, ), fetcher=MetadataFetcher( name=entry.fetcher_name, version=entry.fetcher_version, ), discovery_date=discovery_date, format=entry.format, metadata=entry.metadata, origin=entry.origin, visit=entry.visit, snapshot=map_optional(parse_swhid, entry.snapshot), release=map_optional(parse_swhid, entry.release), revision=map_optional(parse_swhid, entry.revision), path=entry.path, directory=map_optional(parse_swhid, entry.directory), ) results.append(result) if len(results) > limit: results.pop() assert len(results) == limit last_result = results[-1] next_page_token: Optional[bytes] = msgpack_dumps( ( last_result.discovery_date, last_result.fetcher.name, last_result.fetcher.version, ) ) else: next_page_token = None return { "next_page_token": next_page_token, "results": results, } def metadata_fetcher_add(self, fetchers: Iterable[MetadataFetcher]) -> None: for fetcher in fetchers: self._cql_runner.metadata_fetcher_add( fetcher.name, fetcher.version, json.dumps(map_optional(dict, fetcher.metadata)), ) def metadata_fetcher_get( self, name: str, version: str ) -> Optional[MetadataFetcher]: fetcher = self._cql_runner.metadata_fetcher_get(name, version) if fetcher: return MetadataFetcher( name=fetcher.name, version=fetcher.version, metadata=json.loads(fetcher.metadata), ) else: return None def metadata_authority_add(self, authorities: Iterable[MetadataAuthority]) -> None: for authority in authorities: self._cql_runner.metadata_authority_add( authority.url, authority.type.value, json.dumps(map_optional(dict, authority.metadata)), ) def metadata_authority_get( self, type: MetadataAuthorityType, url: str ) -> Optional[MetadataAuthority]: authority = self._cql_runner.metadata_authority_get(type.value, url) if authority: return MetadataAuthority( type=MetadataAuthorityType(authority.type), url=authority.url, metadata=json.loads(authority.metadata), ) else: return None def clear_buffers(self, object_types: Optional[Iterable[str]] = None) -> None: """Do nothing """ return None def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: return {} diff --git a/swh/storage/in_memory.py b/swh/storage/in_memory.py index e1849989..894db8ff 100644 --- a/swh/storage/in_memory.py +++ b/swh/storage/in_memory.py @@ -1,1229 +1,1226 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import re import bisect import collections import copy import datetime import itertools import random from collections import defaultdict from datetime import timedelta from typing import ( Any, Callable, Dict, Generic, Hashable, Iterable, Iterator, List, 
Optional, Tuple, TypeVar, Union, ) import attr -from deprecated import deprecated - from swh.core.api.serializers import msgpack_loads, msgpack_dumps from swh.model.identifiers import SWHID from swh.model.model import ( BaseContent, Content, SkippedContent, Directory, Revision, Release, Snapshot, OriginVisit, OriginVisitStatus, Origin, SHA1_SIZE, MetadataAuthority, MetadataAuthorityType, MetadataFetcher, MetadataTargetType, RawExtrinsicMetadata, ) from swh.model.hashutil import DEFAULT_ALGORITHMS, hash_to_bytes, hash_to_hex from swh.storage.objstorage import ObjStorage from swh.storage.utils import now from .converters import origin_url_to_sha1 from .exc import StorageArgumentException, HashCollision from .utils import get_partition_bounds_bytes from .writer import JournalWriter # Max block size of contents to return BULK_BLOCK_CONTENT_LEN_MAX = 10000 SortedListItem = TypeVar("SortedListItem") SortedListKey = TypeVar("SortedListKey") FetcherKey = Tuple[str, str] class SortedList(collections.UserList, Generic[SortedListKey, SortedListItem]): data: List[Tuple[SortedListKey, SortedListItem]] # https://github.com/python/mypy/issues/708 # key: Callable[[SortedListItem], SortedListKey] def __init__( self, data: List[SortedListItem] = None, key: Optional[Callable[[SortedListItem], SortedListKey]] = None, ): if key is None: def key(item): return item assert key is not None # for mypy super().__init__(sorted((key(x), x) for x in data or [])) self.key: Callable[[SortedListItem], SortedListKey] = key def add(self, item: SortedListItem): k = self.key(item) bisect.insort(self.data, (k, item)) def __iter__(self) -> Iterator[SortedListItem]: for (k, item) in self.data: yield item def iter_from(self, start_key: Any) -> Iterator[SortedListItem]: """Returns an iterator over all the elements whose key is greater or equal to `start_key`. 
(This is an efficient equivalent to: `(x for x in L if key(x) >= start_key)`) """ from_index = bisect.bisect_left(self.data, (start_key,)) for (k, item) in itertools.islice(self.data, from_index, None): yield item def iter_after(self, start_key: Any) -> Iterator[SortedListItem]: """Same as iter_from, but using a strict inequality.""" it = self.iter_from(start_key) for item in it: if self.key(item) > start_key: # type: ignore yield item break yield from it class InMemoryStorage: def __init__(self, journal_writer=None): self.reset() self.journal_writer = JournalWriter(journal_writer) def reset(self): self._contents = {} self._content_indexes = defaultdict(lambda: defaultdict(set)) self._skipped_contents = {} self._skipped_content_indexes = defaultdict(lambda: defaultdict(set)) self._directories = {} self._revisions = {} self._releases = {} self._snapshots = {} self._origins = {} self._origins_by_id = [] self._origins_by_sha1 = {} self._origin_visits = {} self._origin_visit_statuses: Dict[Tuple[str, int], List[OriginVisitStatus]] = {} self._persons = {} # {object_type: {id: {authority: [metadata]}}} self._object_metadata: Dict[ MetadataTargetType, Dict[ Union[str, SWHID], Dict[ Hashable, SortedList[ Tuple[datetime.datetime, FetcherKey], RawExtrinsicMetadata ], ], ], ] = defaultdict( lambda: defaultdict( lambda: defaultdict( lambda: SortedList( key=lambda x: ( x.discovery_date, self._metadata_fetcher_key(x.fetcher), ) ) ) ) ) # noqa self._metadata_fetchers: Dict[FetcherKey, MetadataFetcher] = {} self._metadata_authorities: Dict[Hashable, MetadataAuthority] = {} self._objects = defaultdict(list) self._sorted_sha1s = SortedList[bytes, bytes]() self.objstorage = ObjStorage({"cls": "memory", "args": {}}) def check_config(self, *, check_write): return True def _content_add(self, contents: Iterable[Content], with_data: bool) -> Dict: self.journal_writer.content_add(contents) content_add = 0 if with_data: summary = self.objstorage.content_add( c for c in contents if c.status != "absent" ) content_add_bytes = summary["content:add:bytes"] for content in contents: key = self._content_key(content) if key in self._contents: continue for algorithm in DEFAULT_ALGORITHMS: hash_ = content.get_hash(algorithm) if hash_ in self._content_indexes[algorithm] and ( algorithm not in {"blake2s256", "sha256"} ): colliding_content_hashes = [] # Add the already stored contents for content_hashes_set in self._content_indexes[algorithm][hash_]: hashes = dict(content_hashes_set) colliding_content_hashes.append(hashes) # Add the new colliding content colliding_content_hashes.append(content.hashes()) raise HashCollision(algorithm, hash_, colliding_content_hashes) for algorithm in DEFAULT_ALGORITHMS: hash_ = content.get_hash(algorithm) self._content_indexes[algorithm][hash_].add(key) self._objects[content.sha1_git].append(("content", content.sha1)) self._contents[key] = content self._sorted_sha1s.add(content.sha1) self._contents[key] = attr.evolve(self._contents[key], data=None) content_add += 1 summary = { "content:add": content_add, } if with_data: summary["content:add:bytes"] = content_add_bytes return summary def content_add(self, content: Iterable[Content]) -> Dict: content = [attr.evolve(c, ctime=now()) for c in content] return self._content_add(content, with_data=True) def content_update(self, content, keys=[]): self.journal_writer.content_update(content) for cont_update in content: cont_update = cont_update.copy() sha1 = cont_update.pop("sha1") for old_key in self._content_indexes["sha1"][sha1]: old_cont = 
self._contents.pop(old_key) for algorithm in DEFAULT_ALGORITHMS: hash_ = old_cont.get_hash(algorithm) self._content_indexes[algorithm][hash_].remove(old_key) new_cont = attr.evolve(old_cont, **cont_update) new_key = self._content_key(new_cont) self._contents[new_key] = new_cont for algorithm in DEFAULT_ALGORITHMS: hash_ = new_cont.get_hash(algorithm) self._content_indexes[algorithm][hash_].add(new_key) def content_add_metadata(self, content: Iterable[Content]) -> Dict: return self._content_add(content, with_data=False) def content_get(self, content): # FIXME: Make this method support slicing the `data`. if len(content) > BULK_BLOCK_CONTENT_LEN_MAX: raise StorageArgumentException( "Sending at most %s contents." % BULK_BLOCK_CONTENT_LEN_MAX ) yield from self.objstorage.content_get(content) def content_get_range(self, start, end, limit=1000): if limit is None: raise StorageArgumentException("limit should not be None") sha1s = ( (sha1, content_key) for sha1 in self._sorted_sha1s.iter_from(start) for content_key in self._content_indexes["sha1"][sha1] ) matched = [] next_content = None for sha1, key in sha1s: if sha1 > end: break if len(matched) >= limit: next_content = sha1 break matched.append(self._contents[key].to_dict()) return { "contents": matched, "next": next_content, } def content_get_partition( self, partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None, ): if limit is None: raise StorageArgumentException("limit should not be None") (start, end) = get_partition_bounds_bytes( partition_id, nb_partitions, SHA1_SIZE ) if page_token: start = hash_to_bytes(page_token) if end is None: end = b"\xff" * SHA1_SIZE result = self.content_get_range(start, end, limit) result2 = { "contents": result["contents"], "next_page_token": None, } if result["next"]: result2["next_page_token"] = hash_to_hex(result["next"]) return result2 def content_get_metadata(self, contents: List[bytes]) -> Dict[bytes, List[Dict]]: result: Dict = {sha1: [] for sha1 in contents} for sha1 in contents: if sha1 in self._content_indexes["sha1"]: objs = self._content_indexes["sha1"][sha1] # only 1 element as content_add_metadata would have raised a # hash collision otherwise for key in objs: d = self._contents[key].to_dict() del d["ctime"] if "data" in d: del d["data"] result[sha1].append(d) return result def content_find(self, content): if not set(content).intersection(DEFAULT_ALGORITHMS): raise StorageArgumentException( "content keys must contain at least one of: %s" % ", ".join(sorted(DEFAULT_ALGORITHMS)) ) found = [] for algo in DEFAULT_ALGORITHMS: hash = content.get(algo) if hash and hash in self._content_indexes[algo]: found.append(self._content_indexes[algo][hash]) if not found: return [] keys = list(set.intersection(*found)) return [self._contents[key].to_dict() for key in keys] def content_missing(self, content, key_hash="sha1"): for cont in content: for (algo, hash_) in cont.items(): if algo not in DEFAULT_ALGORITHMS: continue if hash_ not in self._content_indexes.get(algo, []): yield cont[key_hash] break else: for result in self.content_find(cont): if result["status"] == "missing": yield cont[key_hash] def content_missing_per_sha1(self, contents): for content in contents: if content not in self._content_indexes["sha1"]: yield content def content_missing_per_sha1_git(self, contents): for content in contents: if content not in self._content_indexes["sha1_git"]: yield content def content_get_random(self): return random.choice(list(self._content_indexes["sha1_git"])) def 
_skipped_content_add(self, contents: List[SkippedContent]) -> Dict: self.journal_writer.skipped_content_add(contents) summary = {"skipped_content:add": 0} missing_contents = self.skipped_content_missing([c.hashes() for c in contents]) missing = {self._content_key(c) for c in missing_contents} contents = [c for c in contents if self._content_key(c) in missing] for content in contents: key = self._content_key(content) for algo in DEFAULT_ALGORITHMS: if content.get_hash(algo): self._skipped_content_indexes[algo][content.get_hash(algo)].add(key) self._skipped_contents[key] = content summary["skipped_content:add"] += 1 return summary def skipped_content_add(self, content: Iterable[SkippedContent]) -> Dict: content = [attr.evolve(c, ctime=now()) for c in content] return self._skipped_content_add(content) def skipped_content_missing(self, contents): for content in contents: matches = list(self._skipped_contents.values()) for (algorithm, key) in self._content_key(content): if algorithm == "blake2s256": continue # Filter out skipped contents with the same hash matches = [ match for match in matches if match.get_hash(algorithm) == key ] # if none of the contents match if not matches: yield {algo: content[algo] for algo in DEFAULT_ALGORITHMS} def directory_add(self, directories: Iterable[Directory]) -> Dict: directories = [dir_ for dir_ in directories if dir_.id not in self._directories] self.journal_writer.directory_add(directories) count = 0 for directory in directories: count += 1 self._directories[directory.id] = directory self._objects[directory.id].append(("directory", directory.id)) return {"directory:add": count} def directory_missing(self, directories): for id in directories: if id not in self._directories: yield id def _join_dentry_to_content(self, dentry): keys = ( "status", "sha1", "sha1_git", "sha256", "length", ) ret = dict.fromkeys(keys) ret.update(dentry) if ret["type"] == "file": # TODO: Make it able to handle more than one content content = self.content_find({"sha1_git": ret["target"]}) if content: content = content[0] for key in keys: ret[key] = content[key] return ret def _directory_ls(self, directory_id, recursive, prefix=b""): if directory_id in self._directories: for entry in self._directories[directory_id].entries: ret = self._join_dentry_to_content(entry.to_dict()) ret["name"] = prefix + ret["name"] ret["dir_id"] = directory_id yield ret if recursive and ret["type"] == "dir": yield from self._directory_ls( ret["target"], True, prefix + ret["name"] + b"/" ) def directory_ls(self, directory, recursive=False): yield from self._directory_ls(directory, recursive) def directory_entry_get_by_path(self, directory, paths): return self._directory_entry_get_by_path(directory, paths, b"") def directory_get_random(self): if not self._directories: return None return random.choice(list(self._directories)) def _directory_entry_get_by_path(self, directory, paths, prefix): if not paths: return contents = list(self.directory_ls(directory)) if not contents: return def _get_entry(entries, name): for entry in entries: if entry["name"] == name: entry = entry.copy() entry["name"] = prefix + entry["name"] return entry first_item = _get_entry(contents, paths[0]) if len(paths) == 1: return first_item if not first_item or first_item["type"] != "dir": return return self._directory_entry_get_by_path( first_item["target"], paths[1:], prefix + paths[0] + b"/" ) def revision_add(self, revisions: Iterable[Revision]) -> Dict: revisions = [rev for rev in revisions if rev.id not in self._revisions] 
self.journal_writer.revision_add(revisions) count = 0 for revision in revisions: revision = attr.evolve( revision, committer=self._person_add(revision.committer), author=self._person_add(revision.author), ) self._revisions[revision.id] = revision self._objects[revision.id].append(("revision", revision.id)) count += 1 return {"revision:add": count} def revision_missing(self, revisions): for id in revisions: if id not in self._revisions: yield id def revision_get(self, revisions): for id in revisions: if id in self._revisions: yield self._revisions.get(id).to_dict() else: yield None def _get_parent_revs(self, rev_id, seen, limit): if limit and len(seen) >= limit: return if rev_id in seen or rev_id not in self._revisions: return seen.add(rev_id) yield self._revisions[rev_id].to_dict() for parent in self._revisions[rev_id].parents: yield from self._get_parent_revs(parent, seen, limit) def revision_log(self, revisions, limit=None): seen = set() for rev_id in revisions: yield from self._get_parent_revs(rev_id, seen, limit) def revision_shortlog(self, revisions, limit=None): yield from ( (rev["id"], rev["parents"]) for rev in self.revision_log(revisions, limit) ) def revision_get_random(self): return random.choice(list(self._revisions)) def release_add(self, releases: Iterable[Release]) -> Dict: to_add = [] for rel in releases: if rel.id not in self._releases and rel not in to_add: to_add.append(rel) self.journal_writer.release_add(to_add) for rel in to_add: if rel.author: self._person_add(rel.author) self._objects[rel.id].append(("release", rel.id)) self._releases[rel.id] = rel return {"release:add": len(to_add)} def release_missing(self, releases): yield from (rel for rel in releases if rel not in self._releases) def release_get(self, releases): for rel_id in releases: if rel_id in self._releases: yield self._releases[rel_id].to_dict() else: yield None def release_get_random(self): return random.choice(list(self._releases)) def snapshot_add(self, snapshots: Iterable[Snapshot]) -> Dict: count = 0 snapshots = (snap for snap in snapshots if snap.id not in self._snapshots) for snapshot in snapshots: self.journal_writer.snapshot_add([snapshot]) self._snapshots[snapshot.id] = snapshot self._objects[snapshot.id].append(("snapshot", snapshot.id)) count += 1 return {"snapshot:add": count} def snapshot_missing(self, snapshots): for id in snapshots: if id not in self._snapshots: yield id def snapshot_get(self, snapshot_id): return self.snapshot_get_branches(snapshot_id) def snapshot_get_by_origin_visit(self, origin, visit): origin_url = self._get_origin_url(origin) if not origin_url: return if origin_url not in self._origins or visit > len( self._origin_visits[origin_url] ): return None visit = self._origin_visit_get_updated(origin_url, visit) snapshot_id = visit["snapshot"] if snapshot_id: return self.snapshot_get(snapshot_id) else: return None def snapshot_count_branches(self, snapshot_id): snapshot = self._snapshots[snapshot_id] return collections.Counter( branch.target_type.value if branch else None for branch in snapshot.branches.values() ) def snapshot_get_branches( self, snapshot_id, branches_from=b"", branches_count=1000, target_types=None ): snapshot = self._snapshots.get(snapshot_id) if snapshot is None: return None sorted_branches = sorted(snapshot.branches.items()) sorted_branch_names = [k for (k, v) in sorted_branches] from_index = bisect.bisect_left(sorted_branch_names, branches_from) if target_types: next_branch = None branches = {} for (branch_name, branch) in sorted_branches: if 
branch_name in sorted_branch_names[from_index:]: if branch and branch.target_type.value in target_types: if len(branches) < branches_count: branches[branch_name] = branch else: next_branch = branch_name break else: # As there is no 'target_types', we can do that much faster to_index = from_index + branches_count returned_branch_names = frozenset(sorted_branch_names[from_index:to_index]) branches = dict( (branch_name, branch) for (branch_name, branch) in snapshot.branches.items() if branch_name in returned_branch_names ) if to_index >= len(sorted_branch_names): next_branch = None else: next_branch = sorted_branch_names[to_index] branches = { name: branch.to_dict() if branch else None for (name, branch) in branches.items() } return { "id": snapshot_id, "branches": branches, "next_branch": next_branch, } def snapshot_get_random(self): return random.choice(list(self._snapshots)) def object_find_by_sha1_git(self, ids): ret = {} for id_ in ids: objs = self._objects.get(id_, []) ret[id_] = [{"sha1_git": id_, "type": obj[0],} for obj in objs] return ret def _convert_origin(self, t): if t is None: return None return t.to_dict() def origin_get(self, origins): if isinstance(origins, dict): # Old API return_single = True origins = [origins] else: return_single = False # Sanity check to be error-compatible with the pgsql backend if any("id" in origin for origin in origins) and not all( "id" in origin for origin in origins ): raise StorageArgumentException( 'Either all origins or none at all should have an "id".' ) if any("url" in origin for origin in origins) and not all( "url" in origin for origin in origins ): raise StorageArgumentException( "Either all origins or none at all should have " 'an "url" key.' ) results = [] for origin in origins: result = None if "url" in origin: if origin["url"] in self._origins: result = self._origins[origin["url"]] else: raise StorageArgumentException("Origin must have an url.") results.append(self._convert_origin(result)) if return_single: assert len(results) == 1 return results[0] else: return results def origin_get_by_sha1(self, sha1s): return [self._convert_origin(self._origins_by_sha1.get(sha1)) for sha1 in sha1s] def origin_get_range(self, origin_from=1, origin_count=100): origin_from = max(origin_from, 1) if origin_from <= len(self._origins_by_id): max_idx = origin_from + origin_count - 1 if max_idx > len(self._origins_by_id): max_idx = len(self._origins_by_id) for idx in range(origin_from - 1, max_idx): origin = self._convert_origin(self._origins[self._origins_by_id[idx]]) yield {"id": idx + 1, **origin} def origin_list(self, page_token: Optional[str] = None, limit: int = 100) -> dict: origin_urls = sorted(self._origins) if page_token: from_ = bisect.bisect_left(origin_urls, page_token) else: from_ = 0 result = { "origins": [ {"url": origin_url} for origin_url in origin_urls[from_ : from_ + limit] ] } if from_ + limit < len(origin_urls): result["next_page_token"] = origin_urls[from_ + limit] return result def origin_search( self, url_pattern, offset=0, limit=50, regexp=False, with_visit=False ): origins = map(self._convert_origin, self._origins.values()) if regexp: pat = re.compile(url_pattern) origins = [orig for orig in origins if pat.search(orig["url"])] else: origins = [orig for orig in origins if url_pattern in orig["url"]] if with_visit: filtered_origins = [] for orig in origins: visits = ( self._origin_visit_get_updated(ov.origin, ov.visit) for ov in self._origin_visits[orig["url"]] ) for ov in visits: snapshot = ov["snapshot"] if snapshot and snapshot 
in self._snapshots: filtered_origins.append(orig) break else: filtered_origins = origins return filtered_origins[offset : offset + limit] def origin_count(self, url_pattern, regexp=False, with_visit=False): return len( self.origin_search( url_pattern, regexp=regexp, with_visit=with_visit, limit=len(self._origins), ) ) def origin_add(self, origins: Iterable[Origin]) -> Dict[str, int]: origins = list(origins) added = 0 for origin in origins: if origin.url not in self._origins: self.origin_add_one(origin) added += 1 return {"origin:add": added} - @deprecated("Use origin_add([origin]) instead") def origin_add_one(self, origin: Origin) -> str: if origin.url not in self._origins: self.journal_writer.origin_add([origin]) # generate an origin_id because it is needed by origin_get_range. # TODO: remove this when we remove origin_get_range origin_id = len(self._origins) + 1 self._origins_by_id.append(origin.url) assert len(self._origins_by_id) == origin_id self._origins[origin.url] = origin self._origins_by_sha1[origin_url_to_sha1(origin.url)] = origin self._origin_visits[origin.url] = [] self._objects[origin.url].append(("origin", origin.url)) return origin.url def origin_visit_add(self, visits: Iterable[OriginVisit]) -> Iterable[OriginVisit]: for visit in visits: origin = self.origin_get({"url": visit.origin}) if not origin: # Cannot add a visit without an origin raise StorageArgumentException("Unknown origin %s", visit.origin) all_visits = [] for visit in visits: origin_url = visit.origin if origin_url in self._origins: origin = self._origins[origin_url] if visit.visit: self.journal_writer.origin_visit_add([visit]) while len(self._origin_visits[origin_url]) < visit.visit: self._origin_visits[origin_url].append(None) self._origin_visits[origin_url][visit.visit - 1] = visit else: # visit ids are in the range [1, +inf[ visit_id = len(self._origin_visits[origin_url]) + 1 visit = attr.evolve(visit, visit=visit_id) self.journal_writer.origin_visit_add([visit]) self._origin_visits[origin_url].append(visit) visit_key = (origin_url, visit.visit) self._objects[visit_key].append(("origin_visit", None)) assert visit.visit is not None self._origin_visit_status_add_one( OriginVisitStatus( origin=visit.origin, visit=visit.visit, date=visit.date, status="created", snapshot=None, ) ) all_visits.append(visit) return all_visits def _origin_visit_status_add_one(self, visit_status: OriginVisitStatus) -> None: """Add an origin visit status without checks. If already present, do nothing. 
""" self.journal_writer.origin_visit_status_add([visit_status]) visit_key = (visit_status.origin, visit_status.visit) self._origin_visit_statuses.setdefault(visit_key, []) visit_statuses = self._origin_visit_statuses[visit_key] if visit_status not in visit_statuses: visit_statuses.append(visit_status) def origin_visit_status_add( self, visit_statuses: Iterable[OriginVisitStatus], ) -> None: # First round to check existence (fail early if any is ko) for visit_status in visit_statuses: origin_url = self.origin_get({"url": visit_status.origin}) if not origin_url: raise StorageArgumentException(f"Unknown origin {visit_status.origin}") for visit_status in visit_statuses: self._origin_visit_status_add_one(visit_status) def _origin_visit_get_updated(self, origin: str, visit_id: int) -> Dict[str, Any]: """Merge origin visit and latest origin visit status """ assert visit_id >= 1 visit = self._origin_visits[origin][visit_id - 1] assert visit is not None visit_key = (origin, visit_id) visit_update = max(self._origin_visit_statuses[visit_key], key=lambda v: v.date) return { # default to the values in visit **visit.to_dict(), # override with the last update **visit_update.to_dict(), # but keep the date of the creation of the origin visit "date": visit.date, } def origin_visit_get( self, origin: str, last_visit: Optional[int] = None, limit: Optional[int] = None, order: str = "asc", ) -> Iterable[Dict[str, Any]]: order = order.lower() assert order in ["asc", "desc"] origin_url = self._get_origin_url(origin) if origin_url in self._origin_visits: visits = self._origin_visits[origin_url] visits = sorted(visits, key=lambda v: v.visit, reverse=(order == "desc")) if last_visit is not None: if order == "asc": visits = [v for v in visits if v.visit > last_visit] else: visits = [v for v in visits if v.visit < last_visit] if limit is not None: visits = visits[:limit] for visit in visits: if not visit: continue visit_id = visit.visit visit_update = self._origin_visit_get_updated(origin_url, visit_id) assert visit_update is not None yield visit_update def origin_visit_find_by_date( self, origin: str, visit_date: datetime.datetime ) -> Optional[Dict[str, Any]]: origin_url = self._get_origin_url(origin) if origin_url in self._origin_visits: visits = self._origin_visits[origin_url] visit = min(visits, key=lambda v: (abs(v.date - visit_date), -v.visit)) visit_update = self._origin_visit_get_updated(origin, visit.visit) assert visit_update is not None return visit_update return None def origin_visit_get_by(self, origin: str, visit: int) -> Optional[Dict[str, Any]]: origin_url = self._get_origin_url(origin) if origin_url in self._origin_visits and visit <= len( self._origin_visits[origin_url] ): visit_update = self._origin_visit_get_updated(origin_url, visit) assert visit_update is not None return visit_update return None def origin_visit_get_latest( self, origin: str, type: Optional[str] = None, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[Dict[str, Any]]: ori = self._origins.get(origin) if not ori: return None visits = self._origin_visits[ori.url] visits = [ self._origin_visit_get_updated(visit.origin, visit.visit) for visit in visits if visit is not None ] if type is not None: visits = [visit for visit in visits if visit["type"] == type] if allowed_statuses is not None: visits = [visit for visit in visits if visit["status"] in allowed_statuses] if require_snapshot: visits = [visit for visit in visits if visit["snapshot"]] visit = max(visits, key=lambda v: (v["date"], 
v["visit"]), default=None) if visit is None: return None return visit def origin_visit_status_get_latest( self, origin_url: str, visit: int, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[OriginVisitStatus]: ori = self._origins.get(origin_url) if not ori: return None visit_key = (origin_url, visit) visits = self._origin_visit_statuses.get(visit_key) if not visits: return None if allowed_statuses is not None: visits = [visit for visit in visits if visit.status in allowed_statuses] if require_snapshot: visits = [visit for visit in visits if visit.snapshot] visit_status = max(visits, key=lambda v: (v.date, v.visit), default=None) return visit_status def _select_random_origin_visit_by_type(self, type: str) -> str: while True: url = random.choice(list(self._origin_visits.keys())) random_origin_visits = self._origin_visits[url] if random_origin_visits[0].type == type: return url def origin_visit_get_random(self, type: str) -> Optional[Dict[str, Any]]: url = self._select_random_origin_visit_by_type(type) random_origin_visits = copy.deepcopy(self._origin_visits[url]) random_origin_visits.reverse() back_in_the_day = now() - timedelta(weeks=12) # 3 months back # This should be enough for tests for visit in random_origin_visits: updated_visit = self._origin_visit_get_updated(url, visit.visit) assert updated_visit is not None if ( updated_visit["date"] > back_in_the_day and updated_visit["status"] == "full" ): return updated_visit else: return None def stat_counters(self): keys = ( "content", "directory", "origin", "origin_visit", "person", "release", "revision", "skipped_content", "snapshot", ) stats = {key: 0 for key in keys} stats.update( collections.Counter( obj_type for (obj_type, obj_id) in itertools.chain(*self._objects.values()) ) ) return stats def refresh_stat_counters(self): pass def object_metadata_add(self, metadata: Iterable[RawExtrinsicMetadata],) -> None: for metadata_entry in metadata: authority_key = self._metadata_authority_key(metadata_entry.authority) if authority_key not in self._metadata_authorities: raise StorageArgumentException( f"Unknown authority {metadata_entry.authority}" ) fetcher_key = self._metadata_fetcher_key(metadata_entry.fetcher) if fetcher_key not in self._metadata_fetchers: raise StorageArgumentException( f"Unknown fetcher {metadata_entry.fetcher}" ) object_metadata_list = self._object_metadata[metadata_entry.type][ metadata_entry.id ][authority_key] for existing_object_metadata in object_metadata_list: if ( self._metadata_fetcher_key(existing_object_metadata.fetcher) == fetcher_key and existing_object_metadata.discovery_date == metadata_entry.discovery_date ): # Duplicate of an existing one; ignore it. 
break else: object_metadata_list.add(metadata_entry) def object_metadata_get( self, object_type: MetadataTargetType, id: Union[str, SWHID], authority: MetadataAuthority, after: Optional[datetime.datetime] = None, page_token: Optional[bytes] = None, limit: int = 1000, ) -> Dict[str, Union[Optional[bytes], List[RawExtrinsicMetadata]]]: authority_key = self._metadata_authority_key(authority) if object_type == MetadataTargetType.ORIGIN: if isinstance(id, SWHID): raise StorageArgumentException( f"object_metadata_get called with object_type='origin', but " f"provided id is an SWHID: {id!r}" ) else: if not isinstance(id, SWHID): raise StorageArgumentException( f"object_metadata_get called with object_type!='origin', but " f"provided id is not an SWHID: {id!r}" ) if page_token is not None: (after_time, after_fetcher) = msgpack_loads(page_token) after_fetcher = tuple(after_fetcher) if after is not None and after > after_time: raise StorageArgumentException( "page_token is inconsistent with the value of 'after'." ) entries = self._object_metadata[object_type][id][authority_key].iter_after( (after_time, after_fetcher) ) elif after is not None: entries = self._object_metadata[object_type][id][authority_key].iter_from( (after,) ) entries = (entry for entry in entries if entry.discovery_date > after) else: entries = iter(self._object_metadata[object_type][id][authority_key]) if limit: entries = itertools.islice(entries, 0, limit + 1) results = [] for entry in entries: entry_authority = self._metadata_authorities[ self._metadata_authority_key(entry.authority) ] entry_fetcher = self._metadata_fetchers[ self._metadata_fetcher_key(entry.fetcher) ] if after: assert entry.discovery_date > after results.append( attr.evolve( entry, authority=attr.evolve(entry_authority, metadata=None), fetcher=attr.evolve(entry_fetcher, metadata=None), ) ) if len(results) > limit: results.pop() assert len(results) == limit last_result = results[-1] next_page_token: Optional[bytes] = msgpack_dumps( ( last_result.discovery_date, self._metadata_fetcher_key(last_result.fetcher), ) ) else: next_page_token = None return { "next_page_token": next_page_token, "results": results, } def metadata_fetcher_add(self, fetchers: Iterable[MetadataFetcher]) -> None: for fetcher in fetchers: if fetcher.metadata is None: raise StorageArgumentException( "MetadataFetcher.metadata may not be None in metadata_fetcher_add." ) key = self._metadata_fetcher_key(fetcher) if key not in self._metadata_fetchers: self._metadata_fetchers[key] = fetcher def metadata_fetcher_get( self, name: str, version: str ) -> Optional[MetadataFetcher]: return self._metadata_fetchers.get( self._metadata_fetcher_key(MetadataFetcher(name=name, version=version)) ) def metadata_authority_add(self, authorities: Iterable[MetadataAuthority]) -> None: for authority in authorities: if authority.metadata is None: raise StorageArgumentException( "MetadataAuthority.metadata may not be None in " "metadata_authority_add." 
) key = self._metadata_authority_key(authority) self._metadata_authorities[key] = authority def metadata_authority_get( self, type: MetadataAuthorityType, url: str ) -> Optional[MetadataAuthority]: return self._metadata_authorities.get( self._metadata_authority_key(MetadataAuthority(type=type, url=url)) ) def _get_origin_url(self, origin): if isinstance(origin, str): return origin else: raise TypeError("origin must be a string.") def _person_add(self, person): key = ("person", person.fullname) if key not in self._objects: self._persons[person.fullname] = person self._objects[key].append(key) return self._persons[person.fullname] @staticmethod def _content_key(content): """ A stable key and the algorithm for a content""" if isinstance(content, BaseContent): content = content.to_dict() return tuple((key, content.get(key)) for key in sorted(DEFAULT_ALGORITHMS)) @staticmethod def _metadata_fetcher_key(fetcher: MetadataFetcher) -> FetcherKey: return (fetcher.name, fetcher.version) @staticmethod def _metadata_authority_key(authority: MetadataAuthority) -> Hashable: return (authority.type, authority.url) def diff_directories(self, from_dir, to_dir, track_renaming=False): raise NotImplementedError("InMemoryStorage.diff_directories") def diff_revisions(self, from_rev, to_rev, track_renaming=False): raise NotImplementedError("InMemoryStorage.diff_revisions") def diff_revision(self, revision, track_renaming=False): raise NotImplementedError("InMemoryStorage.diff_revision") def clear_buffers(self, object_types: Optional[Iterable[str]] = None) -> None: """Do nothing """ return None def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: return {} diff --git a/swh/storage/interface.py b/swh/storage/interface.py index ffe1ef6e..b73983d0 100644 --- a/swh/storage/interface.py +++ b/swh/storage/interface.py @@ -1,1293 +1,1274 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import datetime from typing import Any, Dict, Iterable, List, Optional, Union from swh.core.api import remote_api_endpoint from swh.model.identifiers import SWHID from swh.model.model import ( Content, Directory, Origin, OriginVisit, OriginVisitStatus, Revision, Release, Snapshot, SkippedContent, MetadataAuthority, MetadataAuthorityType, MetadataFetcher, MetadataTargetType, RawExtrinsicMetadata, ) def deprecated(f): f.deprecated_endpoint = True return f class StorageInterface: @remote_api_endpoint("check_config") def check_config(self, *, check_write): """Check that the storage is configured and ready to go.""" ... @remote_api_endpoint("content/add") def content_add(self, content: Iterable[Content]) -> Dict: """Add content blobs to the storage Args: contents (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - data (bytes): the actual content - length (int): content length - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden Raises: The following exceptions can occur: - HashCollision in case of collision - Any other exceptions raise by the db In case of errors, some of the content may have been stored in the DB and in the objstorage. Since additions to both idempotent, that should not be a problem. 
Returns: Summary dict with the following keys and associated values: content:add: New contents added content:add:bytes: Sum of the contents' length data """ ... @remote_api_endpoint("content/update") def content_update(self, content, keys=[]): """Update content blobs to the storage. Does nothing for unknown contents or skipped ones. Args: content (iterable): iterable of dictionaries representing individual pieces of content to update. Each dictionary has the following keys: - data (bytes): the actual content - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent keys (list): List of keys (str) whose values needs an update, e.g., new hash column """ ... @remote_api_endpoint("content/add_metadata") def content_add_metadata(self, content: Iterable[Content]) -> Dict: """Add content metadata to the storage (like `content_add`, but without inserting to the objstorage). Args: content (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent - reason (str): if status = absent, the reason why - origin (int): if status = absent, the origin we saw the content in - ctime (datetime): time of insertion in the archive Returns: Summary dict with the following key and associated values: content:add: New contents added skipped_content:add: New skipped contents (no data) added """ ... @remote_api_endpoint("content/data") def content_get(self, content): """Retrieve in bulk contents and their data. This generator yields exactly as many items than provided sha1 identifiers, but callers should not assume this will always be true. It may also yield `None` values in case an object was not found. Args: content: iterables of sha1 Yields: Dict[str, bytes]: Generates streams of contents as dict with their raw data: - sha1 (bytes): content id - data (bytes): content's raw data Raises: ValueError in case of too much contents are required. cf. BULK_BLOCK_CONTENT_LEN_MAX """ ... @deprecated @remote_api_endpoint("content/range") def content_get_range(self, start, end, limit=1000): """Retrieve contents within range [start, end] bound by limit. Note that this function may return more than one blob per hash. The limit is enforced with multiplicity (ie. two blobs with the same hash will count twice toward the limit). Args: **start** (bytes): Starting identifier range (expected smaller than end) **end** (bytes): Ending identifier range (expected larger than start) **limit** (int): Limit result (default to 1000) Returns: a dict with keys: - contents [dict]: iterable of contents in between the range. - next (bytes): There remains content in the range starting from this next sha1 """ ... @remote_api_endpoint("content/partition") def content_get_partition( self, partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None, ): """Splits contents into nb_partitions, and returns one of these based on partition_id (which must be in [0, nb_partitions-1]) There is no guarantee on how the partitioning is done, or the result order. 
Args: partition_id (int): index of the partition to fetch nb_partitions (int): total number of partitions to split into limit (int): Limit result (default to 1000) page_token (Optional[str]): opaque token used for pagination. Returns: a dict with keys: - contents (List[dict]): iterable of contents in the partition. - **next_page_token** (Optional[str]): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. """ ... @remote_api_endpoint("content/metadata") def content_get_metadata(self, contents: List[bytes]) -> Dict[bytes, List[Dict]]: """Retrieve content metadata in bulk Args: content: iterable of content identifiers (sha1) Returns: a dict with keys the content's sha1 and the associated value either the existing content's metadata or None if the content does not exist. """ ... @remote_api_endpoint("content/missing") def content_missing(self, content, key_hash="sha1"): """List content missing from storage Args: content ([dict]): iterable of dictionaries whose keys are either 'length' or an item of :data:`swh.model.hashutil.ALGORITHMS`; mapped to the corresponding checksum (or length). key_hash (str): name of the column to use as hash id result (default: 'sha1') Returns: iterable ([bytes]): missing content ids (as per the key_hash column) Raises: TODO: an exception when we get a hash collision. """ ... @remote_api_endpoint("content/missing/sha1") def content_missing_per_sha1(self, contents): """List content missing from storage based only on sha1. Args: contents: Iterable of sha1 to check for absence. Returns: iterable: missing ids Raises: TODO: an exception when we get a hash collision. """ ... @remote_api_endpoint("content/missing/sha1_git") def content_missing_per_sha1_git(self, contents): """List content missing from storage based only on sha1_git. Args: contents (Iterable): An iterable of content id (sha1_git) Yields: missing contents sha1_git """ ... @remote_api_endpoint("content/present") def content_find(self, content): """Find a content hash in db. Args: content: a dictionary representing one content hash, mapping checksum algorithm names (see swh.model.hashutil.ALGORITHMS) to checksum values Returns: a triplet (sha1, sha1_git, sha256) if the content exist or None otherwise. Raises: ValueError: in case the key of the dictionary is not sha1, sha1_git nor sha256. """ ... @remote_api_endpoint("content/get_random") def content_get_random(self): """Finds a random content id. Returns: a sha1_git """ ... @remote_api_endpoint("content/skipped/add") def skipped_content_add(self, content: Iterable[SkippedContent]) -> Dict: """Add contents to the skipped_content list, which contains (partial) information about content missing from the archive. Args: contents (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - length (Optional[int]): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum; each is optional - status (str): must be "absent" - reason (str): the reason why the content is absent - origin (int): if status = absent, the origin we saw the content in Raises: The following exceptions can occur: - HashCollision in case of collision - Any other exceptions raise by the backend In case of errors, some content may have been stored in the DB and in the objstorage. Since additions to both idempotent, that should not be a problem. 
Returns: Summary dict with the following key and associated values: skipped_content:add: New skipped contents (no data) added """ ... @remote_api_endpoint("content/skipped/missing") def skipped_content_missing(self, contents): """List skipped_content missing from storage Args: content: iterable of dictionaries containing the data for each checksum algorithm. Returns: iterable: missing signatures """ ... @remote_api_endpoint("directory/add") def directory_add(self, directories: Iterable[Directory]) -> Dict: """Add directories to the storage Args: directories (iterable): iterable of dictionaries representing the individual directories to add. Each dict has the following keys: - id (sha1_git): the id of the directory to add - entries (list): list of dicts for each entry in the directory. Each dict has the following keys: - name (bytes) - type (one of 'file', 'dir', 'rev'): type of the directory entry (file, directory, revision) - target (sha1_git): id of the object pointed at by the directory entry - perms (int): entry permissions Returns: Summary dict of keys with associated count as values: directory:add: Number of directories actually added """ ... @remote_api_endpoint("directory/missing") def directory_missing(self, directories): """List directories missing from storage Args: directories (iterable): an iterable of directory ids Yields: missing directory ids """ ... @remote_api_endpoint("directory/ls") def directory_ls(self, directory, recursive=False): """Get entries for one directory. Args: - directory: the directory to list entries from. - recursive: if flag on, this list recursively from this directory. Returns: List of entries for such directory. If `recursive=True`, names in the path of a dir/file not at the root are concatenated with a slash (`/`). """ ... @remote_api_endpoint("directory/path") def directory_entry_get_by_path(self, directory, paths): """Get the directory entry (either file or dir) from directory with path. Args: - directory: sha1 of the top level directory - paths: path to lookup from the top level directory. From left (top) to right (bottom). Returns: The corresponding directory entry if found, None otherwise. """ ... @remote_api_endpoint("directory/get_random") def directory_get_random(self): """Finds a random directory id. Returns: a sha1_git """ ... @remote_api_endpoint("revision/add") def revision_add(self, revisions: Iterable[Revision]) -> Dict: """Add revisions to the storage Args: revisions (Iterable[dict]): iterable of dictionaries representing the individual revisions to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the revision to add - **date** (:class:`dict`): date the revision was written - **committer_date** (:class:`dict`): date the revision got added to the origin - **type** (one of 'git', 'tar'): type of the revision added - **directory** (:class:`sha1_git`): the directory the revision points at - **message** (:class:`bytes`): the message associated with the revision - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **committer** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **metadata** (:class:`jsonb`): extra information as dictionary - **synthetic** (:class:`bool`): revision's nature (tarball, directory creates synthetic revision`) - **parents** (:class:`list[sha1_git]`): the parents of this revision date dictionaries have the form defined in :mod:`swh.model`. 
Returns: Summary dict of keys with associated count as values revision:add: New objects actually stored in db """ ... @remote_api_endpoint("revision/missing") def revision_missing(self, revisions): """List revisions missing from storage Args: revisions (iterable): revision ids Yields: missing revision ids """ ... @remote_api_endpoint("revision") def revision_get(self, revisions): """Get all revisions from storage Args: revisions: an iterable of revision ids Returns: iterable: an iterable of revisions as dictionaries (or None if the revision doesn't exist) """ ... @remote_api_endpoint("revision/log") def revision_log(self, revisions, limit=None): """Fetch revision entry from the given root revisions. Args: revisions: array of root revision to lookup limit: limitation on the output result. Default to None. Yields: List of revision log from such revisions root. """ ... @remote_api_endpoint("revision/shortlog") def revision_shortlog(self, revisions, limit=None): """Fetch the shortlog for the given revisions Args: revisions: list of root revisions to lookup limit: depth limitation for the output Yields: a list of (id, parents) tuples. """ ... @remote_api_endpoint("revision/get_random") def revision_get_random(self): """Finds a random revision id. Returns: a sha1_git """ ... @remote_api_endpoint("release/add") def release_add(self, releases: Iterable[Release]) -> Dict: """Add releases to the storage Args: releases (Iterable[dict]): iterable of dictionaries representing the individual releases to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the release to add - **revision** (:class:`sha1_git`): id of the revision the release points to - **date** (:class:`dict`): the date the release was made - **name** (:class:`bytes`): the name of the release - **comment** (:class:`bytes`): the comment associated with the release - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email the date dictionary has the form defined in :mod:`swh.model`. Returns: Summary dict of keys with associated count as values release:add: New objects contents actually stored in db """ ... @remote_api_endpoint("release/missing") def release_missing(self, releases): """List releases missing from storage Args: releases: an iterable of release ids Returns: a list of missing release ids """ ... @remote_api_endpoint("release") def release_get(self, releases): """Given a list of sha1, return the releases's information Args: releases: list of sha1s Yields: dicts with the same keys as those given to `release_add` (or ``None`` if a release does not exist) """ ... @remote_api_endpoint("release/get_random") def release_get_random(self): """Finds a random release id. Returns: a sha1_git """ ... @remote_api_endpoint("snapshot/add") def snapshot_add(self, snapshots: Iterable[Snapshot]) -> Dict: """Add snapshots to the storage. Args: snapshot ([dict]): the snapshots to add, containing the following keys: - **id** (:class:`bytes`): id of the snapshot - **branches** (:class:`dict`): branches the snapshot contains, mapping the branch name (:class:`bytes`) to the branch target, itself a :class:`dict` (or ``None`` if the branch points to an unknown object) - **target_type** (:class:`str`): one of ``content``, ``directory``, ``revision``, ``release``, ``snapshot``, ``alias`` - **target** (:class:`bytes`): identifier of the target (currently a ``sha1_git`` for all object kinds, or the name of the target branch for aliases) Raises: ValueError: if the origin or visit id does not exist. 
Returns: Summary dict of keys with associated count as values snapshot:add: Count of object actually stored in db """ ... @remote_api_endpoint("snapshot/missing") def snapshot_missing(self, snapshots): """List snapshots missing from storage Args: snapshots (iterable): an iterable of snapshot ids Yields: missing snapshot ids """ ... @remote_api_endpoint("snapshot") def snapshot_get(self, snapshot_id): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: a dict with three keys: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ ... @remote_api_endpoint("snapshot/by_origin_visit") def snapshot_get_by_origin_visit(self, origin, visit): """Get the content, possibly partial, of a snapshot for the given origin visit The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: origin (int): the origin identifier visit (int): the visit identifier Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ ... @remote_api_endpoint("snapshot/count_branches") def snapshot_count_branches(self, snapshot_id): """Count the number of branches in the snapshot with the given id Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: A dict whose keys are the target types of branches and values their corresponding amount """ ... @remote_api_endpoint("snapshot/get_branches") def snapshot_get_branches( self, snapshot_id, branches_from=b"", branches_count=1000, target_types=None ): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. Args: snapshot_id (bytes): identifier of the snapshot branches_from (bytes): optional parameter used to skip branches whose name is lesser than it before returning them branches_count (int): optional parameter used to restrain the amount of returned branches target_types (list): optional parameter used to filter the target types of branch to return (possible values that can be contained in that list are `'content', 'directory', 'revision', 'release', 'snapshot', 'alias'`) Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than `branches_count` branches after `branches_from` included. """ ... 
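As a hedged illustration of the pagination contract described above for `snapshot_get_branches`, a caller can walk every branch of a snapshot by following `next_branch`; the `storage` object and `snapshot_id` below are assumptions, not part of this module:

```
# Illustrative sketch only: page through all branches of a snapshot by
# following the `next_branch` cursor. `storage` is any object implementing
# StorageInterface and `snapshot_id` an existing snapshot id (both assumed).
def iter_snapshot_branches(storage, snapshot_id, batch_size=1000):
    branches_from = b""
    while True:
        partial = storage.snapshot_get_branches(
            snapshot_id, branches_from=branches_from, branches_count=batch_size
        )
        if partial is None:  # unknown snapshot
            return
        yield from partial["branches"].items()
        if partial["next_branch"] is None:
            return
        branches_from = partial["next_branch"]
```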
@remote_api_endpoint("snapshot/get_random") def snapshot_get_random(self): """Finds a random snapshot id. Returns: a sha1_git """ ... @remote_api_endpoint("origin/visit/add") def origin_visit_add(self, visits: Iterable[OriginVisit]) -> Iterable[OriginVisit]: """Add visits to storage. If the visits have no id, they will be created and assigned one. The resulted visits are visits with their visit id set. Args: visits: Iterable of OriginVisit objects to add Raises: StorageArgumentException if some origin visit reference unknown origins Returns: Iterable[OriginVisit] stored """ ... @remote_api_endpoint("origin/visit_status/add") def origin_visit_status_add( self, visit_statuses: Iterable[OriginVisitStatus], ) -> None: """Add origin visit statuses. If there is already a status for the same origin and visit id at the same date, the new one will be either dropped or will replace the existing one (it is unspecified which one of these two behaviors happens). Args: visit_statuses: origin visit statuses to add Raises: StorageArgumentException if the origin of the visit status is unknown """ ... @remote_api_endpoint("origin/visit/get") def origin_visit_get( self, origin: str, last_visit: Optional[int] = None, limit: Optional[int] = None, order: str = "asc", ) -> Iterable[Dict[str, Any]]: """Retrieve all the origin's visit's information. Args: origin: The visited origin last_visit: Starting point from which listing the next visits Default to None limit: Number of results to return from the last visit. Default to None order: Order on visit id fields to list origin visits (default to asc) Yields: List of visits. """ ... @remote_api_endpoint("origin/visit/find_by_date") def origin_visit_find_by_date( self, origin: str, visit_date: datetime.datetime ) -> Optional[Dict[str, Any]]: """Retrieves the origin visit whose date is closest to the provided timestamp. In case of a tie, the visit with largest id is selected. Args: origin: origin (URL) visit_date: expected visit date Returns: A visit """ ... @remote_api_endpoint("origin/visit/getby") def origin_visit_get_by(self, origin: str, visit: int) -> Optional[Dict[str, Any]]: """Retrieve origin visit's information. Args: origin: origin (URL) visit: visit id Returns: The information on that particular (origin, visit) or None if it does not exist """ ... @remote_api_endpoint("origin/visit/get_latest") def origin_visit_get_latest( self, origin: str, type: Optional[str] = None, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[Dict[str, Any]]: """Get the latest origin visit for the given origin, optionally looking only for those with one of the given allowed_statuses or for those with a snapshot. Args: origin: origin URL type: Optional visit type to filter on (e.g git, tar, dsc, svn, hg, npm, pypi, ...) allowed_statuses: list of visit statuses considered to find the latest visit. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. require_snapshot: If True, only a visit with a snapshot will be returned. Returns: dict: a dict with the following keys: - **origin**: the URL of the origin - **visit**: origin visit id - **type**: type of loader used for the visit - **date**: timestamp of such visit - **status**: Visit's new status - **metadata**: Data associated to the visit - **snapshot** (Optional[sha1_git]): identifier of the snapshot associated to the visit """ ... 
@remote_api_endpoint("origin/visit_status/get_latest") def origin_visit_status_get_latest( self, origin_url: str, visit: int, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, ) -> Optional[OriginVisitStatus]: """Get the latest origin visit status for the given origin visit, optionally looking only for those with one of the given allowed_statuses or with a snapshot. Args: origin: origin URL allowed_statuses: list of visit statuses considered to find the latest visit. Possible values are {created, ongoing, partial, full}. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. require_snapshot: If True, only a visit with a snapshot will be returned. Returns: The OriginVisitStatus matching the criteria """ ... @remote_api_endpoint("origin/visit/get_random") def origin_visit_get_random(self, type: str) -> Optional[Dict[str, Any]]: """Randomly select one successful origin visit with made in the last 3 months. Returns: dict representing an origin visit, in the same format as :py:meth:`origin_visit_get`. """ ... @remote_api_endpoint("object/find_by_sha1_git") def object_find_by_sha1_git(self, ids): """Return the objects found with the given ids. Args: ids: a generator of sha1_gits Returns: dict: a mapping from id to the list of objects found. Each object found is itself a dict with keys: - sha1_git: the input id - type: the type of object found """ ... @remote_api_endpoint("origin/get") def origin_get(self, origins): """Return origins, either all identified by their ids or all identified by tuples (type, url). If the url is given and the type is omitted, one of the origins with that url is returned. Args: origin: a list of dictionaries representing the individual origins to find. These dicts have the key url: - url (bytes): the url the origin points to Returns: dict: the origin dictionary with the keys: - id: origin's id - url: origin's url Raises: ValueError: if the url or the id don't exist. """ ... @remote_api_endpoint("origin/get_sha1") def origin_get_by_sha1(self, sha1s): """Return origins, identified by the sha1 of their URLs. Args: sha1s (list[bytes]): a list of sha1s Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`, or None if an origin matching the sha1 is not found. """ ... @deprecated @remote_api_endpoint("origin/get_range") def origin_get_range(self, origin_from=1, origin_count=100): """Retrieve ``origin_count`` origins whose ids are greater or equal than ``origin_from``. Origins are sorted by id before retrieving them. Args: origin_from (int): the minimum id of origins to retrieve origin_count (int): the maximum number of origins to retrieve Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`. """ ... @remote_api_endpoint("origin/list") def origin_list(self, page_token: Optional[str] = None, limit: int = 100) -> dict: """Returns the list of origins Args: page_token: opaque token used for pagination. limit: the maximum number of results to return Returns: dict: dict with the following keys: - **next_page_token** (str, optional): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. - **origins** (List[dict]): list of origins, as returned by `origin_get`. """ ... 
@remote_api_endpoint("origin/search") def origin_search( self, url_pattern, offset=0, limit=50, regexp=False, with_visit=False ): """Search for origins whose urls contain a provided string pattern or match a provided regular expression. The search is performed in a case insensitive way. Args: url_pattern (str): the string pattern to search for in origin urls offset (int): number of found origins to skip before returning results limit (int): the maximum number of found origins to return regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`. """ ... @deprecated @remote_api_endpoint("origin/count") def origin_count(self, url_pattern, regexp=False, with_visit=False): """Count origins whose urls contain a provided string pattern or match a provided regular expression. The pattern search in origin urls is performed in a case insensitive way. Args: url_pattern (str): the string pattern to search for in origin urls regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Returns: int: The number of origins matching the search criterion. """ ... @remote_api_endpoint("origin/add_multi") def origin_add(self, origins: Iterable[Origin]) -> Dict[str, int]: """Add origins to the storage Args: origins: list of dictionaries representing the individual origins, with the following keys: - type: the origin type ('git', 'svn', 'deb', ...) - url (bytes): the url the origin points to Returns: Summary dict of keys with associated count as values origin:add: Count of object actually stored in db """ ... - @deprecated - @remote_api_endpoint("origin/add") - def origin_add_one(self, origin: Origin) -> str: - """Add origin to the storage - - Args: - origin: dictionary representing the individual origin to add. This - dict has the following keys: - - - type (FIXME: enum TBD): the origin type ('git', 'wget', ...) - - url (bytes): the url the origin points to - - Returns: - the id of the added origin, or of the identical one that already - exists. - - """ - ... - def stat_counters(self): """compute statistics about the number of tuples in various tables Returns: dict: a dictionary mapping textual labels (e.g., content) to integer values (e.g., the number of tuples in table content) """ ... def refresh_stat_counters(self): """Recomputes the statistics for `stat_counters`.""" ... @remote_api_endpoint("object_metadata/add") def object_metadata_add(self, metadata: Iterable[RawExtrinsicMetadata],) -> None: """Add extrinsic metadata on objects (contents, directories, ...). The authority and fetcher must be known to the storage before using this endpoint. If there is already metadata for the same object, authority, fetcher, and at the same date; the new one will be either dropped or will replace the existing one (it is unspecified which one of these two behaviors happens). Args: metadata: iterable of RawExtrinsicMetadata objects to be inserted. """ ... 
@remote_api_endpoint("object_metadata/get") def object_metadata_get( self, object_type: MetadataTargetType, id: Union[str, SWHID], authority: MetadataAuthority, after: Optional[datetime.datetime] = None, page_token: Optional[bytes] = None, limit: int = 1000, ) -> Dict[str, Union[Optional[bytes], List[RawExtrinsicMetadata]]]: """Retrieve list of all object_metadata entries for the id Args: object_type: one of the values of swh.model.model.MetadataTargetType id: an URL if object_type is 'origin', else a core SWHID authority: a dict containing keys `type` and `url`. after: minimum discovery_date for a result to be returned page_token: opaque token, used to get the next page of results limit: maximum number of results to be returned Returns: dict with keys `next_page_token` and `results`. `next_page_token` is an opaque token that is used to get the next page of results, or `None` if there are no more results. `results` is a list of RawExtrinsicMetadata objects: """ ... @remote_api_endpoint("metadata_fetcher/add") def metadata_fetcher_add(self, fetchers: Iterable[MetadataFetcher],) -> None: """Add new metadata fetchers to the storage. Their `name` and `version` together are unique identifiers of this fetcher; and `metadata` is an arbitrary dict of JSONable data with information about this fetcher, which must not be `None` (but may be empty). Args: fetchers: iterable of MetadataFetcher to be inserted """ ... @remote_api_endpoint("metadata_fetcher/get") def metadata_fetcher_get( self, name: str, version: str ) -> Optional[MetadataFetcher]: """Retrieve information about a fetcher Args: name: the name of the fetcher version: version of the fetcher Returns: a MetadataFetcher object (with a non-None metadata field) if it is known, else None. """ ... @remote_api_endpoint("metadata_authority/add") def metadata_authority_add(self, authorities: Iterable[MetadataAuthority]) -> None: """Add new metadata authorities to the storage. Their `type` and `url` together are unique identifiers of this authority; and `metadata` is an arbitrary dict of JSONable data with information about this authority, which must not be `None` (but may be empty). Args: authorities: iterable of MetadataAuthority to be inserted """ ... @remote_api_endpoint("metadata_authority/get") def metadata_authority_get( self, type: MetadataAuthorityType, url: str ) -> Optional[MetadataAuthority]: """Retrieve information about an authority Args: type: one of "deposit_client", "forge", or "registry" url: unique URI identifying the authority Returns: a MetadataAuthority object (with a non-None metadata field) if it is known, else None. """ ... @deprecated @remote_api_endpoint("algos/diff_directories") def diff_directories(self, from_dir, to_dir, track_renaming=False): """Compute the list of file changes introduced between two arbitrary directories (insertion / deletion / modification / renaming of files). Args: from_dir (bytes): identifier of the directory to compare from to_dir (bytes): identifier of the directory to compare to track_renaming (bool): whether or not to track files renaming Returns: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories` for more details). """ ... @deprecated @remote_api_endpoint("algos/diff_revisions") def diff_revisions(self, from_rev, to_rev, track_renaming=False): """Compute the list of file changes introduced between two arbitrary revisions (insertion / deletion / modification / renaming of files). 
Args: from_rev (bytes): identifier of the revision to compare from to_rev (bytes): identifier of the revision to compare to track_renaming (bool): whether or not to track files renaming Returns: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories` for more details). """ ... @deprecated @remote_api_endpoint("algos/diff_revision") def diff_revision(self, revision, track_renaming=False): """Compute the list of file changes introduced by a specific revision (insertion / deletion / modification / renaming of files) by comparing it against its first parent. Args: revision (bytes): identifier of the revision from which to compute the list of files changes track_renaming (bool): whether or not to track files renaming Returns: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories` for more details). """ ... @remote_api_endpoint("clear/buffer") def clear_buffers(self, object_types: Optional[Iterable[str]] = None) -> None: """For backend storages (pg, storage, in-memory), this is a noop operation. For proxy storages (especially filter, buffer), this is an operation which cleans internal state. """ @remote_api_endpoint("flush") def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: """For backend storages (pg, storage, in-memory), this is expected to be a noop operation. For proxy storages (especially buffer), this is expected to trigger actual writes to the backend. """ ... diff --git a/swh/storage/pytest_plugin.py b/swh/storage/pytest_plugin.py index 74682f64..2bcdf672 100644 --- a/swh/storage/pytest_plugin.py +++ b/swh/storage/pytest_plugin.py @@ -1,264 +1,200 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import glob from os import path, environ -from typing import Dict, Iterable, Union +from typing import Union import pytest import swh.storage from pytest_postgresql import factories from pytest_postgresql.janitor import DatabaseJanitor, psycopg2, Version from swh.core.utils import numfile_sortkey as sortkey -from swh.model.model import ( - BaseModel, - Content, - Directory, - MetadataAuthority, - MetadataFetcher, - Origin, - OriginVisit, - Person, - RawExtrinsicMetadata, - Release, - Revision, - SkippedContent, - Snapshot, -) from swh.storage import get_storage -from swh.storage.tests.storage_data import data + +from swh.storage.tests.storage_data import StorageData SQL_DIR = path.join(path.dirname(swh.storage.__file__), "sql") environ["LC_ALL"] = "C.UTF-8" DUMP_FILES = path.join(SQL_DIR, "*.sql") @pytest.fixture def swh_storage_backend_config(postgresql_proc, swh_storage_postgresql): """Basic pg storage configuration with no journal collaborator (to avoid pulling optional dependency on clients of this fixture) """ yield { "cls": "local", "db": "postgresql://{user}@{host}:{port}/{dbname}".format( host=postgresql_proc.host, port=postgresql_proc.port, user="postgres", dbname="tests", ), "objstorage": {"cls": "memory", "args": {}}, } @pytest.fixture def swh_storage(swh_storage_backend_config): return get_storage(**swh_storage_backend_config) # the postgres_fact factory fixture below is mostly a copy of the code # from pytest-postgresql. We need a custom version here to be able to # specify our version of the DBJanitor we use. 
def postgresql_fact(process_fixture_name, db_name=None, dump_files=DUMP_FILES): @pytest.fixture def postgresql_factory(request): """ Fixture factory for PostgreSQL. :param FixtureRequest request: fixture request object :rtype: psycopg2.connection :returns: postgresql client """ config = factories.get_config(request) if not psycopg2: raise ImportError("No module named psycopg2. Please install it.") proc_fixture = request.getfixturevalue(process_fixture_name) # _, config = try_import('psycopg2', request) pg_host = proc_fixture.host pg_port = proc_fixture.port pg_user = proc_fixture.user pg_options = proc_fixture.options pg_db = db_name or config["dbname"] with SwhDatabaseJanitor( pg_user, pg_host, pg_port, pg_db, proc_fixture.version, dump_files=dump_files, ): connection = psycopg2.connect( dbname=pg_db, user=pg_user, host=pg_host, port=pg_port, options=pg_options, ) yield connection connection.close() return postgresql_factory swh_storage_postgresql = postgresql_fact("postgresql_proc") # This version of the DatabaseJanitor implement a different setup/teardown # behavior than than the stock one: instead of dropping, creating and # initializing the database for each test, it create and initialize the db only # once, then it truncate the tables. This is needed to have acceptable test # performances. class SwhDatabaseJanitor(DatabaseJanitor): def __init__( self, user: str, host: str, port: str, db_name: str, version: Union[str, float, Version], dump_files: str = DUMP_FILES, ) -> None: super().__init__(user, host, port, db_name, version) self.dump_files = sorted(glob.glob(dump_files), key=sortkey) def db_setup(self): with psycopg2.connect( dbname=self.db_name, user=self.user, host=self.host, port=self.port, ) as cnx: with cnx.cursor() as cur: for fname in self.dump_files: with open(fname) as fobj: sql = fobj.read().replace("concurrently", "").strip() if sql: cur.execute(sql) cnx.commit() def db_reset(self): with psycopg2.connect( dbname=self.db_name, user=self.user, host=self.host, port=self.port, ) as cnx: with cnx.cursor() as cur: cur.execute( "SELECT table_name FROM information_schema.tables " "WHERE table_schema = %s", ("public",), ) tables = set(table for (table,) in cur.fetchall()) for table in tables: cur.execute("truncate table %s cascade" % table) cur.execute( "SELECT sequence_name FROM information_schema.sequences " "WHERE sequence_schema = %s", ("public",), ) seqs = set(seq for (seq,) in cur.fetchall()) for seq in seqs: cur.execute("ALTER SEQUENCE %s RESTART;" % seq) cnx.commit() def init(self): with self.cursor() as cur: cur.execute( "SELECT COUNT(1) FROM pg_database WHERE datname=%s;", (self.db_name,) ) db_exists = cur.fetchone()[0] == 1 if db_exists: cur.execute( "UPDATE pg_database SET datallowconn=true " "WHERE datname = %s;", (self.db_name,), ) if db_exists: self.db_reset() else: with self.cursor() as cur: cur.execute('CREATE DATABASE "{}";'.format(self.db_name)) self.db_setup() def drop(self): pid_column = "pid" with self.cursor() as cur: cur.execute( "UPDATE pg_database SET datallowconn=false " "WHERE datname = %s;", (self.db_name,), ) cur.execute( "SELECT pg_terminate_backend(pg_stat_activity.{})" "FROM pg_stat_activity " "WHERE pg_stat_activity.datname = %s;".format(pid_column), (self.db_name,), ) @pytest.fixture -def sample_data() -> Dict: +def sample_data() -> StorageData: """Pre-defined sample storage object data to manipulate Returns: - Dict of data (keys: content, directory, revision, release, person, - origin) - - """ - return { - "content": [data.cont, data.cont2], - 
"content_metadata": [data.cont3], - "skipped_content": [data.skipped_cont, data.skipped_cont2], - "person": [data.person], - "directory": [data.dir2, data.dir, data.dir3, data.dir4], - "revision": [data.revision, data.revision2, data.revision3], - "release": [data.release, data.release2, data.release3], - "snapshot": [data.snapshot, data.empty_snapshot, data.complete_snapshot], - "origin": [data.origin, data.origin2], - "origin_visit": [data.origin_visit, data.origin_visit2, data.origin_visit3], - "fetcher": [data.metadata_fetcher.to_dict()], - "authority": [data.metadata_authority.to_dict()], - "origin_metadata": [ - data.origin_metadata.to_dict(), - data.origin_metadata2.to_dict(), - ], - } - - -# FIXME: Add the metadata keys when we can (right now, we cannot as the data model -# changed but not the endpoints yet) -OBJECT_FACTORY = { - "content": Content.from_dict, - "content_metadata": Content.from_dict, - "skipped_content": SkippedContent.from_dict, - "person": Person.from_dict, - "directory": Directory.from_dict, - "revision": Revision.from_dict, - "release": Release.from_dict, - "snapshot": Snapshot.from_dict, - "origin": Origin.from_dict, - "origin_visit": OriginVisit.from_dict, - "fetcher": MetadataFetcher.from_dict, - "authority": MetadataAuthority.from_dict, - "origin_metadata": RawExtrinsicMetadata.from_dict, -} - - -@pytest.fixture -def sample_data_model(sample_data) -> Dict[str, Iterable[BaseModel]]: - """Pre-defined sample storage object model to manipulate - - Returns: - Dict of data (keys: content, directory, revision, release, person, origin, ...) - values list of object data model with the corresponding types + StorageData whose attribute keys are data model objects. Either multiple + objects: contents, directories, revisions, releases, ... or simple ones: + content, directory, revision, release, ... """ - return { - object_type: [convert_fn(obj) for obj in sample_data[object_type]] - for object_type, convert_fn in OBJECT_FACTORY.items() - } + return StorageData() diff --git a/swh/storage/retry.py b/swh/storage/retry.py index 2f99533d..10abf2a3 100644 --- a/swh/storage/retry.py +++ b/swh/storage/retry.py @@ -1,151 +1,146 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import logging import traceback from typing import Dict, Iterable, Optional from tenacity import ( retry, stop_after_attempt, wait_random_exponential, ) from swh.model.model import ( Content, SkippedContent, Directory, Revision, Release, Snapshot, - Origin, OriginVisit, MetadataAuthority, MetadataFetcher, RawExtrinsicMetadata, ) from swh.storage import get_storage from swh.storage.exc import StorageArgumentException logger = logging.getLogger(__name__) def should_retry_adding(retry_state) -> bool: """Retry if the error/exception is (probably) not about a caller error """ try: attempt = retry_state.outcome except AttributeError: # tenacity < 5.0 attempt = retry_state if attempt.failed: error = attempt.exception() if isinstance(error, StorageArgumentException): # Exception is due to an invalid argument return False else: # Other exception module = getattr(error, "__module__", None) if module: error_name = error.__module__ + "." 
+ error.__class__.__name__ else: error_name = error.__class__.__name__ logger.warning( "Retry adding a batch", exc_info=False, extra={ "swh_type": "storage_retry", "swh_exception_type": error_name, "swh_exception": traceback.format_exc(), }, ) return True else: # No exception return False swh_retry = retry( retry=should_retry_adding, wait=wait_random_exponential(multiplier=1, max=10), stop=stop_after_attempt(3), ) class RetryingProxyStorage: """Storage implementation which retries adding objects when it specifically fails (hash collision, integrity error). """ def __init__(self, storage): self.storage = get_storage(**storage) def __getattr__(self, key): if key == "storage": raise AttributeError(key) return getattr(self.storage, key) @swh_retry def content_add(self, content: Iterable[Content]) -> Dict: return self.storage.content_add(content) @swh_retry def content_add_metadata(self, content: Iterable[Content]) -> Dict: return self.storage.content_add_metadata(content) @swh_retry def skipped_content_add(self, content: Iterable[SkippedContent]) -> Dict: return self.storage.skipped_content_add(content) - @swh_retry - def origin_add_one(self, origin: Origin) -> str: - return self.storage.origin_add_one(origin) - @swh_retry def origin_visit_add(self, visits: Iterable[OriginVisit]) -> Iterable[OriginVisit]: return self.storage.origin_visit_add(visits) @swh_retry def metadata_fetcher_add(self, fetchers: Iterable[MetadataFetcher],) -> None: return self.storage.metadata_fetcher_add(fetchers) @swh_retry def metadata_authority_add(self, authorities: Iterable[MetadataAuthority]) -> None: return self.storage.metadata_authority_add(authorities) @swh_retry def object_metadata_add(self, metadata: Iterable[RawExtrinsicMetadata],) -> None: return self.storage.object_metadata_add(metadata) @swh_retry def directory_add(self, directories: Iterable[Directory]) -> Dict: return self.storage.directory_add(directories) @swh_retry def revision_add(self, revisions: Iterable[Revision]) -> Dict: return self.storage.revision_add(revisions) @swh_retry def release_add(self, releases: Iterable[Release]) -> Dict: return self.storage.release_add(releases) @swh_retry def snapshot_add(self, snapshots: Iterable[Snapshot]) -> Dict: return self.storage.snapshot_add(snapshots) def clear_buffers(self, object_types: Optional[Iterable[str]] = None) -> None: return self.storage.clear_buffers(object_types) def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: """Specific case for buffer proxy storage failing to flush data """ return self.storage.flush(object_types) diff --git a/swh/storage/storage.py b/swh/storage/storage.py index 4f5c0749..9f6400ac 100644 --- a/swh/storage/storage.py +++ b/swh/storage/storage.py @@ -1,1365 +1,1313 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import contextlib import datetime import itertools from collections import defaultdict from contextlib import contextmanager -from deprecated import deprecated from typing import ( Any, Counter, Dict, Iterable, List, Optional, Union, ) import attr import psycopg2 import psycopg2.pool import psycopg2.errors from swh.core.api.serializers import msgpack_loads, msgpack_dumps from swh.model.identifiers import parse_swhid, SWHID from swh.model.model import ( Content, Directory, Origin, OriginVisit, OriginVisitStatus, Revision, Release, 
SkippedContent, Snapshot, SHA1_SIZE, MetadataAuthority, MetadataAuthorityType, MetadataFetcher, MetadataTargetType, RawExtrinsicMetadata, ) from swh.model.hashutil import DEFAULT_ALGORITHMS, hash_to_bytes, hash_to_hex from swh.storage.objstorage import ObjStorage -from swh.storage.validate import VALIDATION_EXCEPTIONS from swh.storage.utils import now from . import converters from .common import db_transaction_generator, db_transaction from .db import Db from .exc import StorageArgumentException, StorageDBError, HashCollision from .algos import diff from .metrics import timed, send_metric, process_metrics from .utils import get_partition_bounds_bytes, extract_collision_hash, map_optional from .writer import JournalWriter # Max block size of contents to return BULK_BLOCK_CONTENT_LEN_MAX = 10000 EMPTY_SNAPSHOT_ID = hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e") """Identifier for the empty snapshot""" -VALIDATION_EXCEPTIONS = VALIDATION_EXCEPTIONS + [ +VALIDATION_EXCEPTIONS = ( + KeyError, + TypeError, + ValueError, psycopg2.errors.CheckViolation, psycopg2.errors.IntegrityError, psycopg2.errors.InvalidTextRepresentation, psycopg2.errors.NotNullViolation, psycopg2.errors.NumericValueOutOfRange, psycopg2.errors.UndefinedFunction, # (raised on wrong argument typs) -] +) """Exceptions raised by postgresql when validation of the arguments failed.""" @contextlib.contextmanager def convert_validation_exceptions(): """Catches postgresql errors related to invalid arguments, and re-raises a StorageArgumentException.""" try: yield except tuple(VALIDATION_EXCEPTIONS) as e: raise StorageArgumentException(str(e)) class Storage: """SWH storage proxy, encompassing DB and object storage """ def __init__( self, db, objstorage, min_pool_conns=1, max_pool_conns=10, journal_writer=None ): """ Args: db_conn: either a libpq connection string, or a psycopg2 connection obj_root: path to the root of the object storage """ try: if isinstance(db, psycopg2.extensions.connection): self._pool = None self._db = Db(db) else: self._pool = psycopg2.pool.ThreadedConnectionPool( min_pool_conns, max_pool_conns, db ) self._db = None except psycopg2.OperationalError as e: raise StorageDBError(e) self.journal_writer = JournalWriter(journal_writer) self.objstorage = ObjStorage(objstorage) def get_db(self): if self._db: return self._db else: return Db.from_pool(self._pool) def put_db(self, db): if db is not self._db: db.put_conn() @contextmanager def db(self): db = None try: db = self.get_db() yield db finally: if db: self.put_db(db) @timed @db_transaction() def check_config(self, *, check_write, db=None, cur=None): if not self.objstorage.check_config(check_write=check_write): return False # Check permissions on one of the tables if check_write: check = "INSERT" else: check = "SELECT" cur.execute("select has_table_privilege(current_user, 'content', %s)", (check,)) return cur.fetchone()[0] def _content_unique_key(self, hash, db): """Given a hash (tuple or dict), return a unique key from the aggregation of keys. """ keys = db.content_hash_keys if isinstance(hash, tuple): return hash return tuple([hash[k] for k in keys]) def _content_add_metadata(self, db, cur, content): """Add content to the postgresql database but not the object storage. 
""" # create temporary table for metadata injection db.mktemp("content", cur) db.copy_to( (c.to_dict() for c in content), "tmp_content", db.content_add_keys, cur ) # move metadata in place try: db.content_add_from_temp(cur) except psycopg2.IntegrityError as e: if e.diag.sqlstate == "23505" and e.diag.table_name == "content": message_detail = e.diag.message_detail if message_detail: hash_name, hash_id = extract_collision_hash(message_detail) collision_contents_hashes = [ c.hashes() for c in content if c.get_hash(hash_name) == hash_id ] else: constraint_to_hash_name = { "content_pkey": "sha1", "content_sha1_git_idx": "sha1_git", "content_sha256_idx": "sha256", } hash_name = constraint_to_hash_name.get(e.diag.constraint_name) hash_id = None collision_contents_hashes = None raise HashCollision( hash_name, hash_id, collision_contents_hashes ) from None else: raise @timed @process_metrics def content_add(self, content: Iterable[Content]) -> Dict: ctime = now() contents = [attr.evolve(c, ctime=ctime) for c in content] objstorage_summary = self.objstorage.content_add(contents) with self.db() as db: with db.transaction() as cur: missing = list( self.content_missing( map(Content.to_dict, contents), key_hash="sha1_git", db=db, cur=cur, ) ) contents = [c for c in contents if c.sha1_git in missing] self.journal_writer.content_add(contents) self._content_add_metadata(db, cur, contents) return { "content:add": len(contents), "content:add:bytes": objstorage_summary["content:add:bytes"], } @timed @db_transaction() def content_update(self, content, keys=[], db=None, cur=None): # TODO: Add a check on input keys. How to properly implement # this? We don't know yet the new columns. self.journal_writer.content_update(content) db.mktemp("content", cur) select_keys = list(set(db.content_get_metadata_keys).union(set(keys))) with convert_validation_exceptions(): db.copy_to(content, "tmp_content", select_keys, cur) db.content_update_from_temp(keys_to_update=keys, cur=cur) @timed @process_metrics @db_transaction() def content_add_metadata( self, content: Iterable[Content], db=None, cur=None ) -> Dict: contents = list(content) missing = self.content_missing( (c.to_dict() for c in contents), key_hash="sha1_git", db=db, cur=cur, ) contents = [c for c in contents if c.sha1_git in missing] self.journal_writer.content_add_metadata(contents) self._content_add_metadata(db, cur, contents) return { "content:add": len(contents), } @timed def content_get(self, content): # FIXME: Make this method support slicing the `data`. if len(content) > BULK_BLOCK_CONTENT_LEN_MAX: raise StorageArgumentException( "Send at maximum %s contents." 
% BULK_BLOCK_CONTENT_LEN_MAX ) yield from self.objstorage.content_get(content) @timed @db_transaction() def content_get_range(self, start, end, limit=1000, db=None, cur=None): if limit is None: raise StorageArgumentException("limit should not be None") contents = [] next_content = None for counter, content_row in enumerate( db.content_get_range(start, end, limit + 1, cur) ): content = dict(zip(db.content_get_metadata_keys, content_row)) if counter >= limit: # take the last commit for the next page starting from this next_content = content["sha1"] break contents.append(content) return { "contents": contents, "next": next_content, } @timed def content_get_partition( self, partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None, ): if limit is None: raise StorageArgumentException("limit should not be None") (start, end) = get_partition_bounds_bytes( partition_id, nb_partitions, SHA1_SIZE ) if page_token: start = hash_to_bytes(page_token) if end is None: end = b"\xff" * SHA1_SIZE result = self.content_get_range(start, end, limit) result2 = { "contents": result["contents"], "next_page_token": None, } if result["next"]: result2["next_page_token"] = hash_to_hex(result["next"]) return result2 @timed @db_transaction(statement_timeout=500) def content_get_metadata( self, contents: List[bytes], db=None, cur=None ) -> Dict[bytes, List[Dict]]: result: Dict[bytes, List[Dict]] = {sha1: [] for sha1 in contents} for row in db.content_get_metadata_from_sha1s(contents, cur): content_meta = dict(zip(db.content_get_metadata_keys, row)) result[content_meta["sha1"]].append(content_meta) return result @timed @db_transaction_generator() def content_missing(self, content, key_hash="sha1", db=None, cur=None): keys = db.content_hash_keys if key_hash not in keys: raise StorageArgumentException("key_hash should be one of %s" % keys) key_hash_idx = keys.index(key_hash) if not content: return for obj in db.content_missing_from_list(content, cur): yield obj[key_hash_idx] @timed @db_transaction_generator() def content_missing_per_sha1(self, contents, db=None, cur=None): for obj in db.content_missing_per_sha1(contents, cur): yield obj[0] @timed @db_transaction_generator() def content_missing_per_sha1_git(self, contents, db=None, cur=None): for obj in db.content_missing_per_sha1_git(contents, cur): yield obj[0] @timed @db_transaction() def content_find(self, content, db=None, cur=None): if not set(content).intersection(DEFAULT_ALGORITHMS): raise StorageArgumentException( "content keys must contain at least one of: " "sha1, sha1_git, sha256, blake2s256" ) contents = db.content_find( sha1=content.get("sha1"), sha1_git=content.get("sha1_git"), sha256=content.get("sha256"), blake2s256=content.get("blake2s256"), cur=cur, ) return [dict(zip(db.content_find_cols, content)) for content in contents] @timed @db_transaction() def content_get_random(self, db=None, cur=None): return db.content_get_random(cur) @staticmethod def _skipped_content_normalize(d): d = d.copy() if d.get("status") is None: d["status"] = "absent" if d.get("length") is None: d["length"] = -1 return d - @staticmethod - def _skipped_content_validate(d): - """Sanity checks on status / reason / length, that postgresql - doesn't enforce.""" - if d["status"] != "absent": - raise StorageArgumentException( - "Invalid content status: {}".format(d["status"]) - ) - - if d.get("reason") is None: - raise StorageArgumentException( - "Must provide a reason if content is absent." 
- ) - - if d["length"] < -1: - raise StorageArgumentException("Content length must be positive or -1.") - def _skipped_content_add_metadata(self, db, cur, content: Iterable[SkippedContent]): origin_ids = db.origin_id_get_by_url([cont.origin for cont in content], cur=cur) content = [ attr.evolve(c, origin=origin_id) for (c, origin_id) in zip(content, origin_ids) ] db.mktemp("skipped_content", cur) db.copy_to( [c.to_dict() for c in content], "tmp_skipped_content", db.skipped_content_keys, cur, ) # move metadata in place db.skipped_content_add_from_temp(cur) @timed @process_metrics @db_transaction() def skipped_content_add( self, content: Iterable[SkippedContent], db=None, cur=None ) -> Dict: ctime = now() content = [attr.evolve(c, ctime=ctime) for c in content] missing_contents = self.skipped_content_missing( (c.to_dict() for c in content), db=db, cur=cur, ) content = [ c for c in content if any( all( c.get_hash(algo) == missing_content.get(algo) for algo in DEFAULT_ALGORITHMS ) for missing_content in missing_contents ) ] self.journal_writer.skipped_content_add(content) self._skipped_content_add_metadata(db, cur, content) return { "skipped_content:add": len(content), } @timed @db_transaction_generator() def skipped_content_missing(self, contents, db=None, cur=None): contents = list(contents) for content in db.skipped_content_missing(contents, cur): yield dict(zip(db.content_hash_keys, content)) @timed @process_metrics @db_transaction() def directory_add( self, directories: Iterable[Directory], db=None, cur=None ) -> Dict: directories = list(directories) summary = {"directory:add": 0} dirs = set() dir_entries: Dict[str, defaultdict] = { "file": defaultdict(list), "dir": defaultdict(list), "rev": defaultdict(list), } for cur_dir in directories: dir_id = cur_dir.id dirs.add(dir_id) for src_entry in cur_dir.entries: entry = src_entry.to_dict() entry["dir_id"] = dir_id dir_entries[entry["type"]][dir_id].append(entry) dirs_missing = set(self.directory_missing(dirs, db=db, cur=cur)) if not dirs_missing: return summary self.journal_writer.directory_add( dir_ for dir_ in directories if dir_.id in dirs_missing ) # Copy directory ids dirs_missing_dict = ({"id": dir} for dir in dirs_missing) db.mktemp("directory", cur) db.copy_to(dirs_missing_dict, "tmp_directory", ["id"], cur) # Copy entries for entry_type, entry_list in dir_entries.items(): entries = itertools.chain.from_iterable( entries_for_dir for dir_id, entries_for_dir in entry_list.items() if dir_id in dirs_missing ) db.mktemp_dir_entry(entry_type) db.copy_to( entries, "tmp_directory_entry_%s" % entry_type, ["target", "name", "perms", "dir_id"], cur, ) # Do the final copy db.directory_add_from_temp(cur) summary["directory:add"] = len(dirs_missing) return summary @timed @db_transaction_generator() def directory_missing(self, directories, db=None, cur=None): for obj in db.directory_missing_from_list(directories, cur): yield obj[0] @timed @db_transaction_generator(statement_timeout=20000) def directory_ls(self, directory, recursive=False, db=None, cur=None): if recursive: res_gen = db.directory_walk(directory, cur=cur) else: res_gen = db.directory_walk_one(directory, cur=cur) for line in res_gen: yield dict(zip(db.directory_ls_cols, line)) @timed @db_transaction(statement_timeout=2000) def directory_entry_get_by_path(self, directory, paths, db=None, cur=None): res = db.directory_entry_get_by_path(directory, paths, cur) if res: return dict(zip(db.directory_ls_cols, res)) @timed @db_transaction() def directory_get_random(self, db=None, cur=None): 
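        # delegate the random pick to the database helper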
return db.directory_get_random(cur) @timed @process_metrics @db_transaction() def revision_add(self, revisions: Iterable[Revision], db=None, cur=None) -> Dict: revisions = list(revisions) summary = {"revision:add": 0} revisions_missing = set( self.revision_missing( set(revision.id for revision in revisions), db=db, cur=cur ) ) if not revisions_missing: return summary db.mktemp_revision(cur) revisions_filtered = [ revision for revision in revisions if revision.id in revisions_missing ] self.journal_writer.revision_add(revisions_filtered) revisions_filtered = list(map(converters.revision_to_db, revisions_filtered)) parents_filtered: List[bytes] = [] with convert_validation_exceptions(): db.copy_to( revisions_filtered, "tmp_revision", db.revision_add_cols, cur, lambda rev: parents_filtered.extend(rev["parents"]), ) db.revision_add_from_temp(cur) db.copy_to( parents_filtered, "revision_history", ["id", "parent_id", "parent_rank"], cur, ) return {"revision:add": len(revisions_missing)} @timed @db_transaction_generator() def revision_missing(self, revisions, db=None, cur=None): if not revisions: return for obj in db.revision_missing_from_list(revisions, cur): yield obj[0] @timed @db_transaction_generator(statement_timeout=1000) def revision_get(self, revisions, db=None, cur=None): for line in db.revision_get_from_list(revisions, cur): data = converters.db_to_revision(dict(zip(db.revision_get_cols, line))) if not data["type"]: yield None continue yield data @timed @db_transaction_generator(statement_timeout=2000) def revision_log(self, revisions, limit=None, db=None, cur=None): for line in db.revision_log(revisions, limit, cur): data = converters.db_to_revision(dict(zip(db.revision_get_cols, line))) if not data["type"]: yield None continue yield data @timed @db_transaction_generator(statement_timeout=2000) def revision_shortlog(self, revisions, limit=None, db=None, cur=None): yield from db.revision_shortlog(revisions, limit, cur) @timed @db_transaction() def revision_get_random(self, db=None, cur=None): return db.revision_get_random(cur) @timed @process_metrics @db_transaction() def release_add(self, releases: Iterable[Release], db=None, cur=None) -> Dict: releases = list(releases) summary = {"release:add": 0} release_ids = set(release.id for release in releases) releases_missing = set(self.release_missing(release_ids, db=db, cur=cur)) if not releases_missing: return summary db.mktemp_release(cur) releases_filtered = [ release for release in releases if release.id in releases_missing ] self.journal_writer.release_add(releases_filtered) releases_filtered = list(map(converters.release_to_db, releases_filtered)) with convert_validation_exceptions(): db.copy_to(releases_filtered, "tmp_release", db.release_add_cols, cur) db.release_add_from_temp(cur) return {"release:add": len(releases_missing)} @timed @db_transaction_generator() def release_missing(self, releases, db=None, cur=None): if not releases: return for obj in db.release_missing_from_list(releases, cur): yield obj[0] @timed @db_transaction_generator(statement_timeout=500) def release_get(self, releases, db=None, cur=None): for release in db.release_get_from_list(releases, cur): data = converters.db_to_release(dict(zip(db.release_get_cols, release))) yield data if data["target_type"] else None @timed @db_transaction() def release_get_random(self, db=None, cur=None): return db.release_get_random(cur) @timed @process_metrics @db_transaction() def snapshot_add(self, snapshots: Iterable[Snapshot], db=None, cur=None) -> Dict: created_temp_table = 
False count = 0 for snapshot in snapshots: if not db.snapshot_exists(snapshot.id, cur): if not created_temp_table: db.mktemp_snapshot_branch(cur) created_temp_table = True with convert_validation_exceptions(): db.copy_to( ( { "name": name, "target": info.target if info else None, "target_type": ( info.target_type.value if info else None ), } for name, info in snapshot.branches.items() ), "tmp_snapshot_branch", ["name", "target", "target_type"], cur, ) self.journal_writer.snapshot_add([snapshot]) db.snapshot_add(snapshot.id, cur) count += 1 return {"snapshot:add": count} @timed @db_transaction_generator() def snapshot_missing(self, snapshots, db=None, cur=None): for obj in db.snapshot_missing_from_list(snapshots, cur): yield obj[0] @timed @db_transaction(statement_timeout=2000) def snapshot_get(self, snapshot_id, db=None, cur=None): return self.snapshot_get_branches(snapshot_id, db=db, cur=cur) @timed @db_transaction(statement_timeout=2000) def snapshot_get_by_origin_visit(self, origin, visit, db=None, cur=None): snapshot_id = db.snapshot_get_by_origin_visit(origin, visit, cur) if snapshot_id: return self.snapshot_get(snapshot_id, db=db, cur=cur) return None @timed @db_transaction(statement_timeout=2000) def snapshot_count_branches(self, snapshot_id, db=None, cur=None): return dict([bc for bc in db.snapshot_count_branches(snapshot_id, cur)]) @timed @db_transaction(statement_timeout=2000) def snapshot_get_branches( self, snapshot_id, branches_from=b"", branches_count=1000, target_types=None, db=None, cur=None, ): if snapshot_id == EMPTY_SNAPSHOT_ID: return { "id": snapshot_id, "branches": {}, "next_branch": None, } branches = {} next_branch = None fetched_branches = list( db.snapshot_get_by_id( snapshot_id, branches_from=branches_from, branches_count=branches_count + 1, target_types=target_types, cur=cur, ) ) for branch in fetched_branches[:branches_count]: branch = dict(zip(db.snapshot_get_cols, branch)) del branch["snapshot_id"] name = branch.pop("name") if branch == {"target": None, "target_type": None}: branch = None branches[name] = branch if len(fetched_branches) > branches_count: branch = dict(zip(db.snapshot_get_cols, fetched_branches[-1])) next_branch = branch["name"] if branches: return { "id": snapshot_id, "branches": branches, "next_branch": next_branch, } return None @timed @db_transaction() def snapshot_get_random(self, db=None, cur=None): return db.snapshot_get_random(cur) @timed @db_transaction() def origin_visit_add( self, visits: Iterable[OriginVisit], db=None, cur=None ) -> Iterable[OriginVisit]: for visit in visits: origin = self.origin_get({"url": visit.origin}, db=db, cur=cur) if not origin: # Cannot add a visit without an origin raise StorageArgumentException("Unknown origin %s", visit.origin) all_visits = [] nb_visits = 0 for visit in visits: nb_visits += 1 if not visit.visit: with convert_validation_exceptions(): visit_id = db.origin_visit_add( visit.origin, visit.date, visit.type, cur=cur ) visit = attr.evolve(visit, visit=visit_id) else: db.origin_visit_add_with_id(visit, cur=cur) assert visit.visit is not None all_visits.append(visit) # Forced to write after for the case when the visit has no id self.journal_writer.origin_visit_add([visit]) visit_status = OriginVisitStatus( origin=visit.origin, visit=visit.visit, date=visit.date, status="created", snapshot=None, ) self._origin_visit_status_add(visit_status, db=db, cur=cur) send_metric("origin_visit:add", count=nb_visits, method_name="origin_visit") return all_visits def _origin_visit_status_add( self, 
visit_status: OriginVisitStatus, db, cur ) -> None: """Add an origin visit status""" self.journal_writer.origin_visit_status_add([visit_status]) db.origin_visit_status_add(visit_status, cur=cur) send_metric( "origin_visit_status:add", count=1, method_name="origin_visit_status" ) @timed @db_transaction() def origin_visit_status_add( self, visit_statuses: Iterable[OriginVisitStatus], db=None, cur=None, ) -> None: # First round to check existence (fail early if any is ko) for visit_status in visit_statuses: origin_url = self.origin_get({"url": visit_status.origin}, db=db, cur=cur) if not origin_url: raise StorageArgumentException(f"Unknown origin {visit_status.origin}") for visit_status in visit_statuses: self._origin_visit_status_add(visit_status, db, cur) @timed @db_transaction() def origin_visit_status_get_latest( self, origin_url: str, visit: int, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, db=None, cur=None, ) -> Optional[OriginVisitStatus]: row = db.origin_visit_status_get_latest( origin_url, visit, allowed_statuses, require_snapshot, cur=cur ) if not row: return None return OriginVisitStatus.from_dict(row) - def _origin_visit_apply_update( - self, visit: Dict[str, Any], db, cur=None - ) -> Dict[str, Any]: - """Retrieve the latest visit status information for the origin visit. - Then merge it with the visit and return it. - - """ - visit_status = db.origin_visit_status_get_latest( - visit["origin"], visit["visit"], cur=cur - ) - return { - # default to the values in visit - **visit, - # override with the last update - **visit_status, - # visit['origin'] is the URL (via a join), while - # visit_status['origin'] is only an id. - "origin": visit["origin"], - # but keep the date of the creation of the origin visit - "date": visit["date"], - } - @timed @db_transaction_generator(statement_timeout=500) def origin_visit_get( self, origin: str, last_visit: Optional[int] = None, limit: Optional[int] = None, order: str = "asc", db=None, cur=None, ) -> Iterable[Dict[str, Any]]: assert order in ["asc", "desc"] lines = db.origin_visit_get_all( origin, last_visit=last_visit, limit=limit, order=order, cur=cur ) for line in lines: - visit = dict(zip(db.origin_visit_get_cols, line)) - yield self._origin_visit_apply_update(visit, db) + yield dict(zip(db.origin_visit_get_cols, line)) @timed @db_transaction(statement_timeout=500) def origin_visit_find_by_date( self, origin: str, visit_date: datetime.datetime, db=None, cur=None ) -> Optional[Dict[str, Any]]: - visit = db.origin_visit_find_by_date(origin, visit_date, cur=cur) - if visit: - return self._origin_visit_apply_update(visit, db) - return None + return db.origin_visit_find_by_date(origin, visit_date, cur=cur) @timed @db_transaction(statement_timeout=500) def origin_visit_get_by( self, origin: str, visit: int, db=None, cur=None ) -> Optional[Dict[str, Any]]: row = db.origin_visit_get(origin, visit, cur) if row: - visit_dict = dict(zip(db.origin_visit_get_cols, row)) - return self._origin_visit_apply_update(visit_dict, db) + return dict(zip(db.origin_visit_get_cols, row)) return None @timed @db_transaction(statement_timeout=4000) def origin_visit_get_latest( self, origin: str, type: Optional[str] = None, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False, db=None, cur=None, ) -> Optional[Dict[str, Any]]: row = db.origin_visit_get_latest( origin, type=type, allowed_statuses=allowed_statuses, require_snapshot=require_snapshot, cur=cur, ) if row: - visit = dict(zip(db.origin_visit_get_cols, row)) 
- return self._origin_visit_apply_update(visit, db) + return dict(zip(db.origin_visit_get_cols, row)) return None @timed @db_transaction() def origin_visit_get_random( self, type: str, db=None, cur=None ) -> Optional[Dict[str, Any]]: row = db.origin_visit_get_random(type, cur) if row: - visit = dict(zip(db.origin_visit_get_cols, row)) - return self._origin_visit_apply_update(visit, db) + return dict(zip(db.origin_visit_get_cols, row)) return None @timed @db_transaction(statement_timeout=2000) def object_find_by_sha1_git(self, ids, db=None, cur=None): ret = {id: [] for id in ids} for retval in db.object_find_by_sha1_git(ids, cur=cur): if retval[1]: ret[retval[0]].append( dict(zip(db.object_find_by_sha1_git_cols, retval)) ) return ret @timed @db_transaction(statement_timeout=500) def origin_get(self, origins, db=None, cur=None): if isinstance(origins, dict): # Old API return_single = True origins = [origins] elif len(origins) == 0: return [] else: return_single = False origin_urls = [origin["url"] for origin in origins] results = db.origin_get_by_url(origin_urls, cur) results = [dict(zip(db.origin_cols, result)) for result in results] if return_single: assert len(results) == 1 if results[0]["url"] is not None: return results[0] else: return None else: return [None if res["url"] is None else res for res in results] @timed @db_transaction_generator(statement_timeout=500) def origin_get_by_sha1(self, sha1s, db=None, cur=None): for line in db.origin_get_by_sha1(sha1s, cur): if line[0] is not None: yield dict(zip(db.origin_cols, line)) else: yield None @timed @db_transaction_generator() def origin_get_range(self, origin_from=1, origin_count=100, db=None, cur=None): for origin in db.origin_get_range(origin_from, origin_count, cur): yield dict(zip(db.origin_get_range_cols, origin)) @timed @db_transaction() def origin_list( self, page_token: Optional[str] = None, limit: int = 100, *, db=None, cur=None ) -> dict: page_token = page_token or "0" if not isinstance(page_token, str): raise StorageArgumentException("page_token must be a string.") origin_from = int(page_token) result: Dict[str, Any] = { "origins": [ dict(zip(db.origin_get_range_cols, origin)) for origin in db.origin_get_range(origin_from, limit, cur) ], } assert len(result["origins"]) <= limit if len(result["origins"]) == limit: result["next_page_token"] = str(result["origins"][limit - 1]["id"] + 1) for origin in result["origins"]: del origin["id"] return result @timed @db_transaction_generator() def origin_search( self, url_pattern, offset=0, limit=50, regexp=False, with_visit=False, db=None, cur=None, ): for origin in db.origin_search( url_pattern, offset, limit, regexp, with_visit, cur ): yield dict(zip(db.origin_cols, origin)) @timed @db_transaction() def origin_count( self, url_pattern, regexp=False, with_visit=False, db=None, cur=None ): return db.origin_count(url_pattern, regexp, with_visit, cur) @timed @process_metrics @db_transaction() def origin_add( self, origins: Iterable[Origin], db=None, cur=None ) -> Dict[str, int]: urls = [o.url for o in origins] known_origins = set(url for (url,) in db.origin_get_by_url(urls, cur)) # use lists here to keep origins sorted; some tests depend on this to_add = [url for url in urls if url not in known_origins] self.journal_writer.origin_add([Origin(url=url) for url in to_add]) added = 0 for url in to_add: if db.origin_add(url, cur): added += 1 return {"origin:add": added} - @deprecated("Use origin_add([origin]) instead") - @timed - @db_transaction() - def origin_add_one(self, origin: Origin, 
db=None, cur=None) -> str: - self.origin_add([origin]) - return origin.url - @db_transaction(statement_timeout=500) def stat_counters(self, db=None, cur=None): return {k: v for (k, v) in db.stat_counters()} @db_transaction() def refresh_stat_counters(self, db=None, cur=None): keys = [ "content", "directory", "directory_entry_dir", "directory_entry_file", "directory_entry_rev", "origin", "origin_visit", "person", "release", "revision", "revision_history", "skipped_content", "snapshot", ] for key in keys: cur.execute("select * from swh_update_counter(%s)", (key,)) @db_transaction() def object_metadata_add( self, metadata: Iterable[RawExtrinsicMetadata], db, cur, ) -> None: counter = Counter[MetadataTargetType]() for metadata_entry in metadata: authority_id = self._get_authority_id(metadata_entry.authority, db, cur) fetcher_id = self._get_fetcher_id(metadata_entry.fetcher, db, cur) db.object_metadata_add( object_type=metadata_entry.type.value, id=str(metadata_entry.id), discovery_date=metadata_entry.discovery_date, authority_id=authority_id, fetcher_id=fetcher_id, format=metadata_entry.format, metadata=metadata_entry.metadata, origin=metadata_entry.origin, visit=metadata_entry.visit, snapshot=map_optional(str, metadata_entry.snapshot), release=map_optional(str, metadata_entry.release), revision=map_optional(str, metadata_entry.revision), path=metadata_entry.path, directory=map_optional(str, metadata_entry.directory), cur=cur, ) counter[metadata_entry.type] += 1 for (object_type, count) in counter.items(): send_metric( f"{object_type.value}_metadata:add", count=count, method_name=f"{object_type.value}_metadata_add", ) @db_transaction() def object_metadata_get( self, object_type: MetadataTargetType, id: Union[str, SWHID], authority: MetadataAuthority, after: Optional[datetime.datetime] = None, page_token: Optional[bytes] = None, limit: int = 1000, db=None, cur=None, ) -> Dict[str, Union[Optional[bytes], List[RawExtrinsicMetadata]]]: if object_type == MetadataTargetType.ORIGIN: if isinstance(id, SWHID): raise StorageArgumentException( f"object_metadata_get called with object_type='origin', but " f"provided id is an SWHID: {id!r}" ) else: if not isinstance(id, SWHID): raise StorageArgumentException( f"object_metadata_get called with object_type!='origin', but " f"provided id is not an SWHID: {id!r}" ) if page_token: (after_time, after_fetcher) = msgpack_loads(page_token) if after and after_time < after: raise StorageArgumentException( "page_token is inconsistent with the value of 'after'." 
) else: after_time = after after_fetcher = None authority_id = self._get_authority_id(authority, db, cur) if not authority_id: return { "next_page_token": None, "results": [], } rows = db.object_metadata_get( object_type, str(id), authority_id, after_time, after_fetcher, limit + 1, cur, ) rows = [dict(zip(db.object_metadata_get_cols, row)) for row in rows] results = [] for row in rows: row = row.copy() row.pop("metadata_fetcher.id") assert str(id) == row["object_metadata.id"] result = RawExtrinsicMetadata( type=MetadataTargetType(row["object_metadata.type"]), id=id, authority=MetadataAuthority( type=MetadataAuthorityType(row["metadata_authority.type"]), url=row["metadata_authority.url"], ), fetcher=MetadataFetcher( name=row["metadata_fetcher.name"], version=row["metadata_fetcher.version"], ), discovery_date=row["discovery_date"], format=row["format"], metadata=row["object_metadata.metadata"], origin=row["origin"], visit=row["visit"], snapshot=map_optional(parse_swhid, row["snapshot"]), release=map_optional(parse_swhid, row["release"]), revision=map_optional(parse_swhid, row["revision"]), path=row["path"], directory=map_optional(parse_swhid, row["directory"]), ) results.append(result) if len(results) > limit: results.pop() assert len(results) == limit last_returned_row = rows[-2] # rows[-1] corresponds to the popped result next_page_token: Optional[bytes] = msgpack_dumps( ( last_returned_row["discovery_date"], last_returned_row["metadata_fetcher.id"], ) ) else: next_page_token = None return { "next_page_token": next_page_token, "results": results, } @timed @db_transaction() def metadata_fetcher_add( self, fetchers: Iterable[MetadataFetcher], db=None, cur=None ) -> None: for (i, fetcher) in enumerate(fetchers): if fetcher.metadata is None: raise StorageArgumentException( "MetadataFetcher.metadata may not be None in metadata_fetcher_add." ) db.metadata_fetcher_add( fetcher.name, fetcher.version, dict(fetcher.metadata), cur=cur ) send_metric("metadata_fetcher:add", count=i + 1, method_name="metadata_fetcher") @timed @db_transaction(statement_timeout=500) def metadata_fetcher_get( self, name: str, version: str, db=None, cur=None ) -> Optional[MetadataFetcher]: row = db.metadata_fetcher_get(name, version, cur=cur) if not row: return None return MetadataFetcher.from_dict(dict(zip(db.metadata_fetcher_cols, row))) @timed @db_transaction() def metadata_authority_add( self, authorities: Iterable[MetadataAuthority], db=None, cur=None ) -> None: for (i, authority) in enumerate(authorities): if authority.metadata is None: raise StorageArgumentException( "MetadataAuthority.metadata may not be None in " "metadata_authority_add." 
) db.metadata_authority_add( authority.type.value, authority.url, dict(authority.metadata), cur=cur ) send_metric( "metadata_authority:add", count=i + 1, method_name="metadata_authority" ) @timed @db_transaction() def metadata_authority_get( self, type: MetadataAuthorityType, url: str, db=None, cur=None ) -> Optional[MetadataAuthority]: row = db.metadata_authority_get(type.value, url, cur=cur) if not row: return None return MetadataAuthority.from_dict(dict(zip(db.metadata_authority_cols, row))) @timed def diff_directories(self, from_dir, to_dir, track_renaming=False): return diff.diff_directories(self, from_dir, to_dir, track_renaming) @timed def diff_revisions(self, from_rev, to_rev, track_renaming=False): return diff.diff_revisions(self, from_rev, to_rev, track_renaming) @timed def diff_revision(self, revision, track_renaming=False): return diff.diff_revision(self, revision, track_renaming) def clear_buffers(self, object_types: Optional[Iterable[str]] = None) -> None: """Do nothing """ return None def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: return {} def _get_authority_id(self, authority: MetadataAuthority, db, cur): authority_id = db.metadata_authority_get_id( authority.type.value, authority.url, cur ) if not authority_id: raise StorageArgumentException(f"Unknown authority {authority}") return authority_id def _get_fetcher_id(self, fetcher: MetadataFetcher, db, cur): fetcher_id = db.metadata_fetcher_get_id(fetcher.name, fetcher.version, cur) if not fetcher_id: raise StorageArgumentException(f"Unknown fetcher {fetcher}") return fetcher_id diff --git a/swh/storage/tests/algos/test_origin.py b/swh/storage/tests/algos/test_origin.py index a04e5933..aedb0ed5 100644 --- a/swh/storage/tests/algos/test_origin.py +++ b/swh/storage/tests/algos/test_origin.py @@ -1,315 +1,321 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import pytest from unittest.mock import patch -from swh.model.model import Origin, OriginVisit, OriginVisitStatus, Snapshot +from swh.model.model import Origin, OriginVisit, OriginVisitStatus from swh.storage.algos.origin import iter_origins, origin_get_latest_visit_status from swh.storage.utils import now from swh.storage.tests.test_storage import round_to_milliseconds -from swh.storage.tests.storage_data import data def assert_list_eq(left, right, msg=None): assert list(left) == list(right), msg @pytest.fixture def swh_storage_backend_config(): yield { "cls": "memory", } def test_iter_origins(swh_storage): origins = [ Origin(url="bar"), Origin(url="qux"), Origin(url="quuz"), ] assert swh_storage.origin_add(origins) == {"origin:add": 3} assert_list_eq(iter_origins(swh_storage), origins) assert_list_eq(iter_origins(swh_storage, batch_size=1), origins) assert_list_eq(iter_origins(swh_storage, batch_size=2), origins) for i in range(1, 5): assert_list_eq(iter_origins(swh_storage, origin_from=i + 1), origins[i:], i) assert_list_eq( iter_origins(swh_storage, origin_from=i + 1, batch_size=1), origins[i:], i ) assert_list_eq( iter_origins(swh_storage, origin_from=i + 1, batch_size=2), origins[i:], i ) for j in range(i, 5): assert_list_eq( iter_origins(swh_storage, origin_from=i + 1, origin_to=j + 1), origins[i:j], (i, j), ) assert_list_eq( iter_origins( swh_storage, origin_from=i + 1, origin_to=j + 1, batch_size=1 ), origins[i:j], (i, j), ) assert_list_eq( 
iter_origins( swh_storage, origin_from=i + 1, origin_to=j + 1, batch_size=2 ), origins[i:j], (i, j), ) @patch("swh.storage.in_memory.InMemoryStorage.origin_get_range") def test_iter_origins_batch_size(mock_origin_get_range, swh_storage): mock_origin_get_range.return_value = [] list(iter_origins(swh_storage)) mock_origin_get_range.assert_called_with(origin_from=1, origin_count=10000) list(iter_origins(swh_storage, batch_size=42)) mock_origin_get_range.assert_called_with(origin_from=1, origin_count=42) -def test_origin_get_latest_visit_status_none(swh_storage, sample_data_model): +def test_origin_get_latest_visit_status_none(swh_storage, sample_data): """Looking up unknown objects should return nothing """ # unknown origin so no result assert origin_get_latest_visit_status(swh_storage, "unknown-origin") is None # unknown type so no result - origin = sample_data_model["origin"][0] - origin_visit = sample_data_model["origin_visit"][0] + origin = sample_data.origin + origin_visit = sample_data.origin_visit assert origin_visit.origin == origin.url - swh_storage.origin_add_one(origin) + swh_storage.origin_add([origin]) swh_storage.origin_visit_add([origin_visit])[0] assert origin_visit.type != "unknown" actual_origin_visit = origin_get_latest_visit_status( swh_storage, origin.url, type="unknown" ) assert actual_origin_visit is None actual_origin_visit = origin_get_latest_visit_status( swh_storage, origin.url, require_snapshot=True ) assert actual_origin_visit is None actual_origin_visit = origin_get_latest_visit_status( swh_storage, origin.url, allowed_statuses=["unknown"] ) assert actual_origin_visit is None -def init_storage_with_origin_visits(swh_storage): +def init_storage_with_origin_visits(swh_storage, sample_data): """Initialize storage with origin/origin-visit/origin-visit-status """ - origin1 = Origin.from_dict(data.origin) - origin2 = Origin.from_dict(data.origin2) + snapshot = sample_data.snapshots[2] + origin1, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin1, origin2]) ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( - origin=origin1.url, date=data.date_visit1, type=data.type_visit1, + origin=origin1.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ), OriginVisit( - origin=origin2.url, date=data.date_visit2, type=data.type_visit2, + origin=origin2.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ), ] ) - snapshot = Snapshot.from_dict(data.complete_snapshot) swh_storage.snapshot_add([snapshot]) date_now = now() date_now = round_to_milliseconds(date_now) - assert data.date_visit1 < data.date_visit2 - assert data.date_visit2 < date_now + assert sample_data.date_visit1 < sample_data.date_visit2 + assert sample_data.date_visit2 < date_now # origin visit status 1 for origin visit 1 ovs11 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, - date=data.date_visit1, + date=sample_data.date_visit1, status="partial", snapshot=None, ) # origin visit status 2 for origin visit 1 ovs12 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, - date=data.date_visit2, + date=sample_data.date_visit2, status="ongoing", snapshot=None, ) # origin visit status 1 for origin visit 2 ovs21 = OriginVisitStatus( origin=origin2.url, visit=ov2.visit, - date=data.date_visit2, + date=sample_data.date_visit2, status="ongoing", snapshot=None, ) # origin visit status 2 for origin visit 2 ovs22 = OriginVisitStatus( origin=origin2.url, visit=ov2.visit, date=date_now, status="full", snapshot=snapshot.id, metadata={"something": "wicked"}, ) 
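    # store the four statuses: two per visit; only ovs22 references a snapshot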
swh_storage.origin_visit_status_add([ovs11, ovs12, ovs21, ovs22]) return { "origin": [origin1, origin2], "origin_visit": [ov1, ov2], "origin_visit_status": [ovs11, ovs12, ovs21, ovs22], } -def test_origin_get_latest_visit_status_filter_type(swh_storage): +def test_origin_get_latest_visit_status_filter_type(swh_storage, sample_data): """Filtering origin visit per types should yield consistent results """ - objects = init_storage_with_origin_visits(swh_storage) + objects = init_storage_with_origin_visits(swh_storage, sample_data) origin1, origin2 = objects["origin"] ov1, ov2 = objects["origin_visit"] ovs11, ovs12, _, ovs22 = objects["origin_visit_status"] # no visit for origin1 url with type_visit2 assert ( - origin_get_latest_visit_status(swh_storage, origin1.url, type=data.type_visit2) + origin_get_latest_visit_status( + swh_storage, origin1.url, type=sample_data.type_visit2 + ) is None ) # no visit for origin2 url with type_visit1 assert ( - origin_get_latest_visit_status(swh_storage, origin2.url, type=data.type_visit1) + origin_get_latest_visit_status( + swh_storage, origin2.url, type=sample_data.type_visit1 + ) is None ) # Two visits, both with no snapshot, take the most recent actual_ov1, actual_ovs12 = origin_get_latest_visit_status( - swh_storage, origin1.url, type=data.type_visit1 + swh_storage, origin1.url, type=sample_data.type_visit1 ) assert isinstance(actual_ov1, OriginVisit) assert isinstance(actual_ovs12, OriginVisitStatus) assert actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit - assert actual_ov1.type == data.type_visit1 + assert actual_ov1.type == sample_data.type_visit1 assert actual_ovs12 == ovs12 # take the most recent visit with type_visit2 actual_ov2, actual_ovs22 = origin_get_latest_visit_status( - swh_storage, origin2.url, type=data.type_visit2 + swh_storage, origin2.url, type=sample_data.type_visit2 ) assert isinstance(actual_ov2, OriginVisit) assert isinstance(actual_ovs22, OriginVisitStatus) assert actual_ov2.origin == ov2.origin assert actual_ov2.visit == ov2.visit - assert actual_ov2.type == data.type_visit2 + assert actual_ov2.type == sample_data.type_visit2 assert actual_ovs22 == ovs22 -def test_origin_get_latest_visit_status_filter_status(swh_storage): - objects = init_storage_with_origin_visits(swh_storage) +def test_origin_get_latest_visit_status_filter_status(swh_storage, sample_data): + objects = init_storage_with_origin_visits(swh_storage, sample_data) origin1, origin2 = objects["origin"] ov1, ov2 = objects["origin_visit"] ovs11, ovs12, _, ovs22 = objects["origin_visit_status"] # no failed status for that visit assert ( origin_get_latest_visit_status( swh_storage, origin2.url, allowed_statuses=["failed"] ) is None ) # only 1 partial for that visit actual_ov1, actual_ovs11 = origin_get_latest_visit_status( swh_storage, origin1.url, allowed_statuses=["partial"] ) assert actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit - assert actual_ov1.type == data.type_visit1 + assert actual_ov1.type == sample_data.type_visit1 assert actual_ovs11 == ovs11 # both status exist, take the latest one actual_ov1, actual_ovs12 = origin_get_latest_visit_status( swh_storage, origin1.url, allowed_statuses=["partial", "ongoing"] ) assert actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit - assert actual_ov1.type == data.type_visit1 + assert actual_ov1.type == sample_data.type_visit1 assert actual_ovs12 == ovs12 assert isinstance(actual_ov1, OriginVisit) assert isinstance(actual_ovs12, OriginVisitStatus) assert 
actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit - assert actual_ov1.type == data.type_visit1 + assert actual_ov1.type == sample_data.type_visit1 assert actual_ovs12 == ovs12 # take the most recent visit with type_visit2 actual_ov2, actual_ovs22 = origin_get_latest_visit_status( swh_storage, origin2.url, allowed_statuses=["full"] ) assert actual_ov2.origin == ov2.origin assert actual_ov2.visit == ov2.visit - assert actual_ov2.type == data.type_visit2 + assert actual_ov2.type == sample_data.type_visit2 assert actual_ovs22 == ovs22 -def test_origin_get_latest_visit_status_filter_snapshot(swh_storage): - objects = init_storage_with_origin_visits(swh_storage) +def test_origin_get_latest_visit_status_filter_snapshot(swh_storage, sample_data): + objects = init_storage_with_origin_visits(swh_storage, sample_data) origin1, origin2 = objects["origin"] _, ov2 = objects["origin_visit"] _, _, _, ovs22 = objects["origin_visit_status"] # there is no visit with snapshot yet for that visit assert ( origin_get_latest_visit_status(swh_storage, origin1.url, require_snapshot=True) is None ) # visit status with partial status visit elected actual_ov2, actual_ovs22 = origin_get_latest_visit_status( swh_storage, origin2.url, require_snapshot=True ) assert actual_ov2.origin == ov2.origin assert actual_ov2.visit == ov2.visit assert actual_ov2.type == ov2.type assert actual_ovs22 == ovs22 date_now = now() # Add another visit swh_storage.origin_visit_add( - [OriginVisit(origin=origin2.url, date=date_now, type=data.type_visit2,),] + [OriginVisit(origin=origin2.url, date=date_now, type=sample_data.type_visit2,),] ) # Requiring the latest visit with a snapshot, we still find the previous visit ov2, ovs22 = origin_get_latest_visit_status( swh_storage, origin2.url, require_snapshot=True ) assert actual_ov2.origin == ov2.origin assert actual_ov2.visit == ov2.visit assert actual_ov2.type == ov2.type assert actual_ovs22 == ovs22 diff --git a/swh/storage/tests/algos/test_snapshot.py b/swh/storage/tests/algos/test_snapshot.py index 10d98ae9..8db201b9 100644 --- a/swh/storage/tests/algos/test_snapshot.py +++ b/swh/storage/tests/algos/test_snapshot.py @@ -1,149 +1,149 @@ # Copyright (C) 2018-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from hypothesis import given import pytest from swh.model.collections import ImmutableDict from swh.model.hypothesis_strategies import snapshots, branch_names, branch_targets from swh.model.model import OriginVisit, OriginVisitStatus, Snapshot from swh.storage.algos.snapshot import snapshot_get_all_branches, snapshot_get_latest from swh.storage.utils import now @pytest.fixture def swh_storage_backend_config(): yield { "cls": "memory", "journal_writer": None, } @given(snapshot=snapshots(min_size=0, max_size=10, only_objects=False)) def test_snapshot_small(swh_storage, snapshot): # noqa swh_storage.snapshot_add([snapshot]) returned_snapshot = snapshot_get_all_branches(swh_storage, snapshot.id) assert snapshot.to_dict() == returned_snapshot @given(branch_name=branch_names(), branch_target=branch_targets(only_objects=True)) def test_snapshot_large(swh_storage, branch_name, branch_target): # noqa snapshot = Snapshot( branches=ImmutableDict( (b"%s%05d" % (branch_name, i), branch_target) for i in range(10000) ), ) swh_storage.snapshot_add([snapshot]) returned_snapshot = 
snapshot_get_all_branches(swh_storage, snapshot.id) assert snapshot.to_dict() == returned_snapshot -def test_snapshot_get_latest_none(swh_storage, sample_data_model): +def test_snapshot_get_latest_none(swh_storage, sample_data): """Retrieve latest snapshot on unknown origin or origin without snapshot should yield no result """ # unknown origin so None assert snapshot_get_latest(swh_storage, "unknown-origin") is None # no snapshot on origin visit so None - origin = sample_data_model["origin"][0] - swh_storage.origin_add_one(origin) - origin_visit, origin_visit2 = sample_data_model["origin_visit"][:2] + origin = sample_data.origin + swh_storage.origin_add([origin]) + origin_visit, origin_visit2 = sample_data.origin_visits[:2] assert origin_visit.origin == origin.url swh_storage.origin_visit_add([origin_visit]) assert snapshot_get_latest(swh_storage, origin.url) is None ov1 = swh_storage.origin_visit_get_latest(origin.url) assert ov1 is not None visit_id = ov1["visit"] # visit references a snapshot but the snapshot does not exist in backend for some # reason - complete_snapshot = sample_data_model["snapshot"][2] + complete_snapshot = sample_data.snapshots[2] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=visit_id, date=origin_visit2.date, status="partial", snapshot=complete_snapshot.id, ) ] ) # so we do not find it assert snapshot_get_latest(swh_storage, origin.url) is None assert snapshot_get_latest(swh_storage, origin.url, branches_count=1) is None -def test_snapshot_get_latest(swh_storage, sample_data_model): - origin = sample_data_model["origin"][0] - swh_storage.origin_add_one(origin) +def test_snapshot_get_latest(swh_storage, sample_data): + origin = sample_data.origin + swh_storage.origin_add([origin]) - visit1, visit2 = sample_data_model["origin_visit"][:2] + visit1, visit2 = sample_data.origin_visits[:2] assert visit1.origin == origin.url swh_storage.origin_visit_add([visit1]) ov1 = swh_storage.origin_visit_get_latest(origin.url) visit_id = ov1["visit"] # Add snapshot to visit1, latest snapshot = visit 1 snapshot - complete_snapshot = sample_data_model["snapshot"][2] + complete_snapshot = sample_data.snapshots[2] swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=visit_id, date=visit2.date, status="partial", snapshot=None, ) ] ) assert visit1.date < visit2.date # no snapshot associated to the visit, so None actual_snapshot = snapshot_get_latest( swh_storage, origin.url, allowed_statuses=["partial"] ) assert actual_snapshot is None date_now = now() assert visit2.date < date_now swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=visit_id, date=date_now, status="full", snapshot=complete_snapshot.id, ) ] ) swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=now(), type=visit1.type,)] ) actual_snapshot = snapshot_get_latest(swh_storage, origin.url) assert actual_snapshot is not None assert actual_snapshot == complete_snapshot actual_snapshot = snapshot_get_latest(swh_storage, origin.url, branches_count=1) assert actual_snapshot is not None assert actual_snapshot.id == complete_snapshot.id assert len(actual_snapshot.branches.values()) == 1 with pytest.raises(ValueError, match="branches_count must be a positive integer"): snapshot_get_latest(swh_storage, origin.url, branches_count="something-wrong") diff --git a/swh/storage/tests/conftest.py b/swh/storage/tests/conftest.py index 7cda0b3b..cd91a78c 100644 --- 
a/swh/storage/tests/conftest.py +++ b/swh/storage/tests/conftest.py @@ -1,69 +1,63 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import pytest import multiprocessing.util from hypothesis import settings try: import pytest_cov.embed except ImportError: pytest_cov = None from typing import Iterable -from swh.model.model import BaseContent +from swh.model.model import BaseContent, Origin from swh.model.tests.generate_testdata import gen_contents, gen_origins -from swh.storage import get_storage from swh.storage.interface import StorageInterface # define tests profile. Full documentation is at: # https://hypothesis.readthedocs.io/en/latest/settings.html#settings-profiles settings.register_profile("fast", max_examples=5, deadline=5000) settings.register_profile("slow", max_examples=20, deadline=5000) if pytest_cov is not None: # pytest_cov + multiprocessing can cause a segmentation fault when starting # the child process ; so we're # removing pytest-coverage's hook that runs when a child process starts. # This means code run in child processes won't be counted in the coverage # report, but this is not an issue because the only code that runs only in # child processes is the RPC server. for (key, value) in multiprocessing.util._afterfork_registry.items(): if value is pytest_cov.embed.multiprocessing_start: del multiprocessing.util._afterfork_registry[key] break else: assert False, "missing pytest_cov.embed.multiprocessing_start?" @pytest.fixture def swh_storage_backend_config(swh_storage_backend_config): """storage should test with its journal writer collaborator on """ yield {**swh_storage_backend_config, "journal_writer": {"cls": "memory",}} -@pytest.fixture -def swh_storage(swh_storage_backend_config): - return get_storage(cls="validate", storage=swh_storage_backend_config) - - @pytest.fixture def swh_contents(swh_storage: StorageInterface) -> Iterable[BaseContent]: contents = [BaseContent.from_dict(c) for c in gen_contents(n=20)] swh_storage.content_add([c for c in contents if c.status != "absent"]) swh_storage.skipped_content_add([c for c in contents if c.status == "absent"]) return contents @pytest.fixture -def swh_origins(swh_storage): - origins = gen_origins(n=100) +def swh_origins(swh_storage: StorageInterface) -> Iterable[Origin]: + origins = [Origin.from_dict(o) for o in gen_origins(n=100)] swh_storage.origin_add(origins) return origins diff --git a/swh/storage/tests/storage_data.py b/swh/storage/tests/storage_data.py index 3c8e2337..3040c771 100644 --- a/swh/storage/tests/storage_data.py +++ b/swh/storage/tests/storage_data.py @@ -1,597 +1,553 @@ -# Copyright (C) 2015-2019 The Software Heritage developers +# Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import datetime import attr +from typing import Tuple + from swh.model.hashutil import hash_to_bytes, hash_to_hex from swh.model import from_disk from swh.model.identifiers import parse_swhid from swh.model.model import ( + Content, + Directory, + DirectoryEntry, MetadataAuthority, MetadataAuthorityType, MetadataFetcher, - RawExtrinsicMetadata, MetadataTargetType, + ObjectType, + Origin, + OriginVisit, + Person, + 
RawExtrinsicMetadata, + Release, + Revision, + RevisionType, + SkippedContent, + Snapshot, + SnapshotBranch, + TargetType, + Timestamp, + TimestampWithTimezone, ) class StorageData: - def __getattr__(self, key): - try: - v = globals()[key] - except KeyError as e: - raise AttributeError(e.args[0]) - if hasattr(v, "copy"): - return v.copy() - return v - - -data = StorageData() - - -cont = { - "data": b"42\n", - "length": 3, - "sha1": hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4689"), - "sha1_git": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), - "sha256": hash_to_bytes( - "673650f936cb3b0a2f93ce09d81be10748b1b203c19e8176b4eefc1964a0cf3a" - ), - "blake2s256": hash_to_bytes( - "d5fe1939576527e42cfd76a9455a2432fe7f56669564577dd93c4280e76d661d" - ), - "status": "visible", -} - -cont2 = { - "data": b"4242\n", - "length": 5, - "sha1": hash_to_bytes("61c2b3a30496d329e21af70dd2d7e097046d07b7"), - "sha1_git": hash_to_bytes("36fade77193cb6d2bd826161a0979d64c28ab4fa"), - "sha256": hash_to_bytes( - "859f0b154fdb2d630f45e1ecae4a862915435e663248bb8461d914696fc047cd" - ), - "blake2s256": hash_to_bytes( - "849c20fad132b7c2d62c15de310adfe87be94a379941bed295e8141c6219810d" - ), - "status": "visible", -} - -cont3 = { - "data": b"424242\n", - "length": 7, - "sha1": hash_to_bytes("3e21cc4942a4234c9e5edd8a9cacd1670fe59f13"), - "sha1_git": hash_to_bytes("c932c7649c6dfa4b82327d121215116909eb3bea"), - "sha256": hash_to_bytes( - "92fb72daf8c6818288a35137b72155f507e5de8d892712ab96277aaed8cf8a36" - ), - "blake2s256": hash_to_bytes( - "76d0346f44e5a27f6bafdd9c2befd304aff83780f93121d801ab6a1d4769db11" - ), - "status": "visible", - "ctime": "2019-12-01 00:00:00Z", -} - -contents = (cont, cont2, cont3) - - -missing_cont = { - "length": 8, - "sha1": hash_to_bytes("f9c24e2abb82063a3ba2c44efd2d3c797f28ac90"), - "sha1_git": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8919"), - "sha256": hash_to_bytes( - "6bbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" - ), - "blake2s256": hash_to_bytes( - "306856b8fd879edb7b6f1aeaaf8db9bbecc993cd7f776c333ac3a782fa5c6eba" - ), - "reason": "Content too long", - "status": "absent", -} - -skipped_cont = { - "length": 1024 * 1024 * 200, - "sha1_git": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8920"), - "sha1": hash_to_bytes("43e45d56f88993aae6a0198013efa80716fd8920"), - "sha256": hash_to_bytes( - "7bbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" - ), - "blake2s256": hash_to_bytes( - "ade18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" - ), - "reason": "Content too long", - "status": "absent", - "origin": "file:///dev/zero", -} - -skipped_cont2 = { - "length": 1024 * 1024 * 300, - "sha1_git": hash_to_bytes("44e45d56f88993aae6a0198013efa80716fd8921"), - "sha1": hash_to_bytes("54e45d56f88993aae6a0198013efa80716fd8920"), - "sha256": hash_to_bytes( - "8cbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" - ), - "blake2s256": hash_to_bytes( - "9ce18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" - ), - "reason": "Content too long", - "status": "absent", -} - -skipped_contents = (skipped_cont, skipped_cont2) - - -dir = { - "id": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), - "entries": ( - { - "name": b"foo", - "type": "file", - "target": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), # cont - "perms": from_disk.DentryPerms.content, - }, - { - "name": b"bar\xc3", - "type": "dir", - "target": b"12345678901234567890", - "perms": from_disk.DentryPerms.directory, + """Data 
model objects to use within tests. + + """ + + content = Content( + data=b"42\n", + length=3, + sha1=hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4689"), + sha1_git=hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), + sha256=hash_to_bytes( + "673650f936cb3b0a2f93ce09d81be10748b1b203c19e8176b4eefc1964a0cf3a" + ), + blake2s256=hash_to_bytes( + "d5fe1939576527e42cfd76a9455a2432fe7f56669564577dd93c4280e76d661d" + ), + status="visible", + ) + content2 = Content( + data=b"4242\n", + length=5, + sha1=hash_to_bytes("61c2b3a30496d329e21af70dd2d7e097046d07b7"), + sha1_git=hash_to_bytes("36fade77193cb6d2bd826161a0979d64c28ab4fa"), + sha256=hash_to_bytes( + "859f0b154fdb2d630f45e1ecae4a862915435e663248bb8461d914696fc047cd" + ), + blake2s256=hash_to_bytes( + "849c20fad132b7c2d62c15de310adfe87be94a379941bed295e8141c6219810d" + ), + status="visible", + ) + content3 = Content( + data=b"424242\n", + length=7, + sha1=hash_to_bytes("3e21cc4942a4234c9e5edd8a9cacd1670fe59f13"), + sha1_git=hash_to_bytes("c932c7649c6dfa4b82327d121215116909eb3bea"), + sha256=hash_to_bytes( + "92fb72daf8c6818288a35137b72155f507e5de8d892712ab96277aaed8cf8a36" + ), + blake2s256=hash_to_bytes( + "76d0346f44e5a27f6bafdd9c2befd304aff83780f93121d801ab6a1d4769db11" + ), + status="visible", + ctime=datetime.datetime(2019, 12, 1, tzinfo=datetime.timezone.utc), + ) + contents: Tuple[Content, ...] = (content, content2, content3) + + skipped_content = SkippedContent( + length=1024 * 1024 * 200, + sha1_git=hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8920"), + sha1=hash_to_bytes("43e45d56f88993aae6a0198013efa80716fd8920"), + sha256=hash_to_bytes( + "7bbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" + ), + blake2s256=hash_to_bytes( + "ade18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" + ), + reason="Content too long", + status="absent", + origin="file:///dev/zero", + ) + skipped_content2 = SkippedContent( + length=1024 * 1024 * 300, + sha1_git=hash_to_bytes("44e45d56f88993aae6a0198013efa80716fd8921"), + sha1=hash_to_bytes("54e45d56f88993aae6a0198013efa80716fd8920"), + sha256=hash_to_bytes( + "8cbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" + ), + blake2s256=hash_to_bytes( + "9ce18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" + ), + reason="Content too long", + status="absent", + ) + skipped_contents: Tuple[SkippedContent, ...] 
= (skipped_content, skipped_content2) + + directory5 = Directory(entries=()) + directory = Directory( + id=hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), + entries=tuple( + [ + DirectoryEntry( + name=b"foo", + type="file", + target=content.sha1_git, + perms=from_disk.DentryPerms.content, + ), + DirectoryEntry( + name=b"bar\xc3", + type="dir", + target=directory5.id, + perms=from_disk.DentryPerms.directory, + ), + ], + ), + ) + directory2 = Directory( + id=hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), + entries=tuple( + [ + DirectoryEntry( + name=b"oof", + type="file", + target=content2.sha1_git, + perms=from_disk.DentryPerms.content, + ) + ], + ), + ) + directory3 = Directory( + id=hash_to_bytes("4ea8c6b2f54445e5dd1a9d5bb2afd875d66f3150"), + entries=tuple( + [ + DirectoryEntry( + name=b"foo", + type="file", + target=content.sha1_git, + perms=from_disk.DentryPerms.content, + ), + DirectoryEntry( + name=b"subdir", + type="dir", + target=directory.id, + perms=from_disk.DentryPerms.directory, + ), + DirectoryEntry( + name=b"hello", + type="file", + target=directory5.id, + perms=from_disk.DentryPerms.content, + ), + ], + ), + ) + directory4 = Directory( + id=hash_to_bytes("377aa5fcd944fbabf502dbfda55cd14d33c8c3c6"), + entries=tuple( + [ + DirectoryEntry( + name=b"subdir1", + type="dir", + target=directory3.id, + perms=from_disk.DentryPerms.directory, + ) + ], + ), + ) + directories: Tuple[Directory, ...] = ( + directory2, + directory, + directory3, + directory4, + directory5, + ) + + revision = Revision( + id=hash_to_bytes("066b1b62dbfa033362092af468bf6cfabec230e7"), + message=b"hello", + author=Person( + name=b"Nicolas Dandrimont", + email=b"nicolas@example.com", + fullname=b"Nicolas Dandrimont ", + ), + date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1234567890, microseconds=0), + offset=120, + negative_utc=False, + ), + committer=Person( + name=b"St\xc3fano Zacchiroli", + email=b"stefano@example.com", + fullname=b"St\xc3fano Zacchiroli ", + ), + committer_date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1123456789, microseconds=0), + offset=120, + negative_utc=False, + ), + parents=(), + type=RevisionType.GIT, + directory=directory.id, + metadata={ + "checksums": {"sha1": "tarball-sha1", "sha256": "tarball-sha256",}, + "signed-off-by": "some-dude", }, - ), -} - -dir2 = { - "id": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), - "entries": ( - { - "name": b"oof", - "type": "file", - "target": hash_to_bytes( # cont2 - "36fade77193cb6d2bd826161a0979d64c28ab4fa" + extra_headers=( + (b"gpgsig", b"test123"), + (b"mergetag", b"foo\\bar"), + (b"mergetag", b"\x22\xaf\x89\x80\x01\x00"), + ), + synthetic=True, + ) + revision2 = Revision( + id=hash_to_bytes("df7a6f6a99671fb7f7343641aff983a314ef6161"), + message=b"hello again", + author=Person( + name=b"Roberto Dicosmo", + email=b"roberto@example.com", + fullname=b"Roberto Dicosmo ", + ), + date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1234567843, microseconds=220000,), + offset=-720, + negative_utc=False, + ), + committer=Person( + name=b"tony", email=b"ar@dumont.fr", fullname=b"tony ", + ), + committer_date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1123456789, microseconds=220000,), + offset=0, + negative_utc=False, + ), + parents=tuple([revision.id]), + type=RevisionType.GIT, + directory=directory2.id, + metadata=None, + extra_headers=(), + synthetic=False, + ) + revision3 = Revision( + id=hash_to_bytes("2cbd7bb22c653bbb23a29657852a50a01b591d46"), + message=b"a simple 
revision with no parents this time", + author=Person( + name=b"Roberto Dicosmo", + email=b"roberto@example.com", + fullname=b"Roberto Dicosmo ", + ), + date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1234567843, microseconds=220000,), + offset=-720, + negative_utc=False, + ), + committer=Person( + name=b"tony", email=b"ar@dumont.fr", fullname=b"tony ", + ), + committer_date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1127351742, microseconds=220000,), + offset=0, + negative_utc=False, + ), + parents=tuple([revision.id, revision2.id]), + type=RevisionType.GIT, + directory=directory2.id, + metadata=None, + extra_headers=(), + synthetic=True, + ) + revision4 = Revision( + id=hash_to_bytes("88cd5126fc958ed70089d5340441a1c2477bcc20"), + message=b"parent of self.revision2", + author=Person( + name=b"me", email=b"me@soft.heri", fullname=b"me ", + ), + date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1234567843, microseconds=220000,), + offset=-720, + negative_utc=False, + ), + committer=Person( + name=b"committer-dude", + email=b"committer@dude.com", + fullname=b"committer-dude ", + ), + committer_date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1244567843, microseconds=220000,), + offset=-720, + negative_utc=False, + ), + parents=tuple([revision3.id]), + type=RevisionType.GIT, + directory=directory.id, + metadata=None, + extra_headers=(), + synthetic=False, + ) + revisions: Tuple[Revision, ...] = (revision, revision2, revision3, revision4) + + origins: Tuple[Origin, ...] = ( + Origin(url="https://github.com/user1/repo1"), + Origin(url="https://github.com/user2/repo1"), + Origin(url="https://github.com/user3/repo1"), + Origin(url="https://gitlab.com/user1/repo1"), + Origin(url="https://gitlab.com/user2/repo1"), + Origin(url="https://forge.softwareheritage.org/source/repo1"), + ) + origin, origin2 = origins[:2] + + metadata_authority = MetadataAuthority( + type=MetadataAuthorityType.DEPOSIT_CLIENT, + url="http://hal.inria.example.com/", + metadata={"location": "France"}, + ) + metadata_authority2 = MetadataAuthority( + type=MetadataAuthorityType.REGISTRY, + url="http://wikidata.example.com/", + metadata={}, + ) + authorities: Tuple[MetadataAuthority, ...] = ( + metadata_authority, + metadata_authority2, + ) + + metadata_fetcher = MetadataFetcher( + name="swh-deposit", version="0.0.1", metadata={"sword_version": "2"}, + ) + metadata_fetcher2 = MetadataFetcher( + name="swh-example", version="0.0.1", metadata={}, + ) + fetchers: Tuple[MetadataFetcher, ...] = (metadata_fetcher, metadata_fetcher2) + + date_visit1 = datetime.datetime(2015, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) + date_visit2 = datetime.datetime(2017, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) + date_visit3 = datetime.datetime(2018, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) + + type_visit1 = "git" + type_visit2 = "hg" + type_visit3 = "deb" + + origin_visit = OriginVisit( + origin=origin.url, visit=1, date=date_visit1, type=type_visit1, + ) + origin_visit2 = OriginVisit( + origin=origin.url, visit=2, date=date_visit2, type=type_visit1, + ) + origin_visit3 = OriginVisit( + origin=origin2.url, visit=1, date=date_visit1, type=type_visit2, + ) + origin_visits: Tuple[OriginVisit, ...] 
= ( + origin_visit, + origin_visit2, + origin_visit3, + ) + + release = Release( + id=hash_to_bytes("a673e617fcc6234e29b2cad06b8245f96c415c61"), + name=b"v0.0.1", + author=Person( + name=b"olasd", email=b"nic@olasd.fr", fullname=b"olasd ", + ), + date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1234567890, microseconds=0), + offset=42, + negative_utc=False, + ), + target=revision.id, + target_type=ObjectType.REVISION, + message=b"synthetic release", + synthetic=True, + ) + release2 = Release( + id=hash_to_bytes("6902bd4c82b7d19a421d224aedab2b74197e420d"), + name=b"v0.0.2", + author=Person( + name=b"tony", email=b"ar@dumont.fr", fullname=b"tony ", + ), + date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1634366813, microseconds=0), + offset=-120, + negative_utc=False, + ), + target=revision2.id, + target_type=ObjectType.REVISION, + message=b"v0.0.2\nMisc performance improvements + bug fixes", + synthetic=False, + ) + release3 = Release( + id=hash_to_bytes("3e9050196aa288264f2a9d279d6abab8b158448b"), + name=b"v0.0.2", + author=Person( + name=b"tony", + email=b"tony@ardumont.fr", + fullname=b"tony ", + ), + date=TimestampWithTimezone( + timestamp=Timestamp(seconds=1634366813, microseconds=0), + offset=-120, + negative_utc=False, + ), + target=revision3.id, + target_type=ObjectType.REVISION, + message=b"yet another synthetic release", + synthetic=True, + ) + + releases: Tuple[Release, ...] = (release, release2, release3) + + snapshot = Snapshot( + id=hash_to_bytes("409ee1ff3f10d166714bc90581debfd0446dda57"), + branches={ + b"master": SnapshotBranch( + target=revision.id, target_type=TargetType.REVISION, ), - "perms": from_disk.DentryPerms.content, - }, - ), -} - -dir3 = { - "id": hash_to_bytes("4ea8c6b2f54445e5dd1a9d5bb2afd875d66f3150"), - "entries": ( - { - "name": b"foo", - "type": "file", - "target": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), # cont - "perms": from_disk.DentryPerms.content, }, - { - "name": b"subdir", - "type": "dir", - "target": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir - "perms": from_disk.DentryPerms.directory, - }, - { - "name": b"hello", - "type": "file", - "target": b"12345678901234567890", - "perms": from_disk.DentryPerms.content, - }, - ), -} - -dir4 = { - "id": hash_to_bytes("377aa5fcd944fbabf502dbfda55cd14d33c8c3c6"), - "entries": ( - { - "name": b"subdir1", - "type": "dir", - "target": hash_to_bytes("4ea8c6b2f54445e5dd1a9d5bb2afd875d66f3150"), # dir3 - "perms": from_disk.DentryPerms.directory, - }, - ), -} - -directories = (dir, dir2, dir3, dir4) - - -minus_offset = datetime.timezone(datetime.timedelta(minutes=-120)) -plus_offset = datetime.timezone(datetime.timedelta(minutes=120)) - -revision = { - "id": hash_to_bytes("066b1b62dbfa033362092af468bf6cfabec230e7"), - "message": b"hello", - "author": { - "name": b"Nicolas Dandrimont", - "email": b"nicolas@example.com", - "fullname": b"Nicolas Dandrimont ", - }, - "date": { - "timestamp": {"seconds": 1234567890, "microseconds": 0}, - "offset": 120, - "negative_utc": False, - }, - "committer": { - "name": b"St\xc3fano Zacchiroli", - "email": b"stefano@example.com", - "fullname": b"St\xc3fano Zacchiroli ", - }, - "committer_date": { - "timestamp": {"seconds": 1123456789, "microseconds": 0}, - "offset": 0, - "negative_utc": True, - }, - "parents": (b"01234567890123456789", b"23434512345123456789"), - "type": "git", - "directory": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir - "metadata": { - "checksums": {"sha1": "tarball-sha1", "sha256": 
"tarball-sha256",}, - "signed-off-by": "some-dude", - }, - "extra_headers": ( - (b"gpgsig", b"test123"), - (b"mergetag", b"foo\\bar"), - (b"mergetag", b"\x22\xaf\x89\x80\x01\x00"), - ), - "synthetic": True, -} - -revision2 = { - "id": hash_to_bytes("df7a6f6a99671fb7f7343641aff983a314ef6161"), - "message": b"hello again", - "author": { - "name": b"Roberto Dicosmo", - "email": b"roberto@example.com", - "fullname": b"Roberto Dicosmo ", - }, - "date": { - "timestamp": {"seconds": 1234567843, "microseconds": 220000,}, - "offset": -720, - "negative_utc": False, - }, - "committer": { - "name": b"tony", - "email": b"ar@dumont.fr", - "fullname": b"tony ", - }, - "committer_date": { - "timestamp": {"seconds": 1123456789, "microseconds": 0}, - "offset": 0, - "negative_utc": False, - }, - "parents": (b"01234567890123456789",), - "type": "git", - "directory": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), # dir2 - "metadata": None, - "extra_headers": (), - "synthetic": False, -} - -revision3 = { - "id": hash_to_bytes("2cbd7bb22c653bbb23a29657852a50a01b591d46"), - "message": b"a simple revision with no parents this time", - "author": { - "name": b"Roberto Dicosmo", - "email": b"roberto@example.com", - "fullname": b"Roberto Dicosmo ", - }, - "date": { - "timestamp": {"seconds": 1234567843, "microseconds": 220000,}, - "offset": -720, - "negative_utc": False, - }, - "committer": { - "name": b"tony", - "email": b"ar@dumont.fr", - "fullname": b"tony ", - }, - "committer_date": { - "timestamp": {"seconds": 1127351742, "microseconds": 0}, - "offset": 0, - "negative_utc": False, - }, - "parents": (), - "type": "git", - "directory": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), # dir2 - "metadata": None, - "extra_headers": (), - "synthetic": True, -} - -revision4 = { - "id": hash_to_bytes("88cd5126fc958ed70089d5340441a1c2477bcc20"), - "message": b"parent of self.revision2", - "author": { - "name": b"me", - "email": b"me@soft.heri", - "fullname": b"me ", - }, - "date": { - "timestamp": {"seconds": 1244567843, "microseconds": 220000,}, - "offset": -720, - "negative_utc": False, - }, - "committer": { - "name": b"committer-dude", - "email": b"committer@dude.com", - "fullname": b"committer-dude ", - }, - "committer_date": { - "timestamp": {"seconds": 1244567843, "microseconds": 220000,}, - "offset": -720, - "negative_utc": False, - }, - "parents": ( - hash_to_bytes("2cbd7bb22c653bbb23a29657852a50a01b591d46"), - ), # revision3 - "type": "git", - "directory": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir - "metadata": None, - "extra_headers": (), - "synthetic": False, -} - -revisions = (revision, revision2, revision3, revision4) - - -origin = { - "url": "file:///dev/null", -} - -origin2 = { - "url": "file:///dev/zero", -} - -origins = (origin, origin2) - - -metadata_authority = MetadataAuthority( - type=MetadataAuthorityType.DEPOSIT_CLIENT, - url="http://hal.inria.example.com/", - metadata={"location": "France"}, -) -metadata_authority2 = MetadataAuthority( - type=MetadataAuthorityType.REGISTRY, - url="http://wikidata.example.com/", - metadata={}, -) - -metadata_fetcher = MetadataFetcher( - name="swh-deposit", version="0.0.1", metadata={"sword_version": "2"}, -) -metadata_fetcher2 = MetadataFetcher(name="swh-example", version="0.0.1", metadata={},) - -date_visit1 = datetime.datetime(2015, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) -type_visit1 = "git" - -date_visit2 = datetime.datetime(2017, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) -type_visit2 = "hg" - -date_visit3 = 
datetime.datetime(2018, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) -type_visit3 = "deb" - -origin_visit = { - "origin": origin["url"], - "visit": 1, - "date": date_visit1, - "type": type_visit1, -} - -origin_visit2 = { - "origin": origin["url"], - "visit": 2, - "date": date_visit2, - "type": type_visit1, -} - -origin_visit3 = { - "origin": origin2["url"], - "visit": 1, - "date": date_visit1, - "type": type_visit2, -} - -origin_visits = [origin_visit, origin_visit2, origin_visit3] - -release = { - "id": hash_to_bytes("a673e617fcc6234e29b2cad06b8245f96c415c61"), - "name": b"v0.0.1", - "author": { - "name": b"olasd", - "email": b"nic@olasd.fr", - "fullname": b"olasd ", - }, - "date": { - "timestamp": {"seconds": 1234567890, "microseconds": 0}, - "offset": 42, - "negative_utc": False, - }, - "target": b"43210987654321098765", - "target_type": "revision", - "message": b"synthetic release", - "synthetic": True, -} - -release2 = { - "id": hash_to_bytes("6902bd4c82b7d19a421d224aedab2b74197e420d"), - "name": b"v0.0.2", - "author": { - "name": b"tony", - "email": b"ar@dumont.fr", - "fullname": b"tony ", - }, - "date": { - "timestamp": {"seconds": 1634366813, "microseconds": 0}, - "offset": -120, - "negative_utc": False, - }, - "target": b"432109\xa9765432\xc309\x00765", - "target_type": "revision", - "message": b"v0.0.2\nMisc performance improvements + bug fixes", - "synthetic": False, -} - -release3 = { - "id": hash_to_bytes("3e9050196aa288264f2a9d279d6abab8b158448b"), - "name": b"v0.0.2", - "author": { - "name": b"tony", - "email": b"tony@ardumont.fr", - "fullname": b"tony ", - }, - "date": { - "timestamp": {"seconds": 1634336813, "microseconds": 0}, - "offset": 0, - "negative_utc": False, - }, - "target": hash_to_bytes("df7a6f6a99671fb7f7343641aff983a314ef6161"), - "target_type": "revision", - "message": b"yet another synthetic release", - "synthetic": True, -} - -releases = (release, release2, release3) - - -snapshot = { - "id": hash_to_bytes("409ee1ff3f10d166714bc90581debfd0446dda57"), - "branches": { - b"master": { - "target": hash_to_bytes("066b1b62dbfa033362092af468bf6cfabec230e7"), - "target_type": "revision", - }, - }, -} - -empty_snapshot = { - "id": hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"), - "branches": {}, -} - -complete_snapshot = { - "id": hash_to_bytes("a56ce2d81c190023bb99a3a36279307522cb85f6"), - "branches": { - b"directory": { - "target": hash_to_bytes("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"), - "target_type": "directory", - }, - b"directory2": { - "target": hash_to_bytes("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"), - "target_type": "directory", - }, - b"content": { - "target": hash_to_bytes("fe95a46679d128ff167b7c55df5d02356c5a1ae1"), - "target_type": "content", - }, - b"alias": {"target": b"revision", "target_type": "alias",}, - b"revision": { - "target": hash_to_bytes("aafb16d69fd30ff58afdd69036a26047f3aebdc6"), - "target_type": "revision", - }, - b"release": { - "target": hash_to_bytes("7045404f3d1c54e6473c71bbb716529fbad4be24"), - "target_type": "release", - }, - b"snapshot": { - "target": hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"), - "target_type": "snapshot", + ) + empty_snapshot = Snapshot( + id=hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"), branches={}, + ) + complete_snapshot = Snapshot( + id=hash_to_bytes("a56ce2d81c190023bb99a3a36279307522cb85f6"), + branches={ + b"directory": SnapshotBranch( + target=directory.id, target_type=TargetType.DIRECTORY, + ), + b"directory2": SnapshotBranch( + target=directory2.id, 
target_type=TargetType.DIRECTORY, + ), + b"content": SnapshotBranch( + target=content.sha1_git, target_type=TargetType.CONTENT, + ), + b"alias": SnapshotBranch(target=b"revision", target_type=TargetType.ALIAS,), + b"revision": SnapshotBranch( + target=revision.id, target_type=TargetType.REVISION, + ), + b"release": SnapshotBranch( + target=release.id, target_type=TargetType.RELEASE, + ), + b"snapshot": SnapshotBranch( + target=empty_snapshot.id, target_type=TargetType.SNAPSHOT, + ), + b"dangling": None, }, - b"dangling": None, - }, -} - -snapshots = (snapshot, empty_snapshot, complete_snapshot) - -content_metadata = RawExtrinsicMetadata( - type=MetadataTargetType.CONTENT, - id=parse_swhid(f"swh:1:cnt:{hash_to_hex(cont['sha1_git'])}"), - origin=origin["url"], - discovery_date=datetime.datetime( - 2015, 1, 1, 21, 0, 0, tzinfo=datetime.timezone.utc - ), - authority=attr.evolve(metadata_authority, metadata=None), - fetcher=attr.evolve(metadata_fetcher, metadata=None), - format="json", - metadata=b'{"foo": "bar"}', -) -content_metadata2 = RawExtrinsicMetadata( - type=MetadataTargetType.CONTENT, - id=parse_swhid(f"swh:1:cnt:{hash_to_hex(cont['sha1_git'])}"), - origin=origin2["url"], - discovery_date=datetime.datetime( - 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc - ), - authority=attr.evolve(metadata_authority, metadata=None), - fetcher=attr.evolve(metadata_fetcher, metadata=None), - format="yaml", - metadata=b"foo: bar", -) -content_metadata3 = RawExtrinsicMetadata( - type=MetadataTargetType.CONTENT, - id=parse_swhid(f"swh:1:cnt:{hash_to_hex(cont['sha1_git'])}"), - discovery_date=datetime.datetime( - 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc - ), - authority=attr.evolve(metadata_authority2, metadata=None), - fetcher=attr.evolve(metadata_fetcher2, metadata=None), - format="yaml", - metadata=b"foo: bar", - origin=origin["url"], - visit=42, - snapshot=parse_swhid(f"swh:1:snp:{hash_to_hex(snapshot['id'])}"), - release=parse_swhid(f"swh:1:rel:{hash_to_hex(release['id'])}"), - revision=parse_swhid(f"swh:1:rev:{hash_to_hex(revision['id'])}"), - directory=parse_swhid(f"swh:1:dir:{hash_to_hex(dir['id'])}"), - path=b"/foo/bar", -) - -origin_metadata = RawExtrinsicMetadata( - type=MetadataTargetType.ORIGIN, - id=origin["url"], - discovery_date=datetime.datetime( - 2015, 1, 1, 21, 0, 0, tzinfo=datetime.timezone.utc - ), - authority=attr.evolve(metadata_authority, metadata=None), - fetcher=attr.evolve(metadata_fetcher, metadata=None), - format="json", - metadata=b'{"foo": "bar"}', -) -origin_metadata2 = RawExtrinsicMetadata( - type=MetadataTargetType.ORIGIN, - id=origin["url"], - discovery_date=datetime.datetime( - 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc - ), - authority=attr.evolve(metadata_authority, metadata=None), - fetcher=attr.evolve(metadata_fetcher, metadata=None), - format="yaml", - metadata=b"foo: bar", -) -origin_metadata3 = RawExtrinsicMetadata( - type=MetadataTargetType.ORIGIN, - id=origin["url"], - discovery_date=datetime.datetime( - 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc - ), - authority=attr.evolve(metadata_authority2, metadata=None), - fetcher=attr.evolve(metadata_fetcher2, metadata=None), - format="yaml", - metadata=b"foo: bar", -) - -person = { - "name": b"John Doe", - "email": b"john.doe@institute.org", - "fullname": b"John Doe ", -} - -objects = { - "content": contents, - "skipped_content": skipped_contents, - "directory": directories, - "revision": revisions, - "origin": origins, - "release": releases, - "snapshot": snapshots, -} + ) + + 
snapshots: Tuple[Snapshot, ...] = (snapshot, empty_snapshot, complete_snapshot) + + content_metadata1 = RawExtrinsicMetadata( + type=MetadataTargetType.CONTENT, + id=parse_swhid(f"swh:1:cnt:{hash_to_hex(content.sha1_git)}"), + origin=origin.url, + discovery_date=datetime.datetime( + 2015, 1, 1, 21, 0, 0, tzinfo=datetime.timezone.utc + ), + authority=attr.evolve(metadata_authority, metadata=None), + fetcher=attr.evolve(metadata_fetcher, metadata=None), + format="json", + metadata=b'{"foo": "bar"}', + ) + content_metadata2 = RawExtrinsicMetadata( + type=MetadataTargetType.CONTENT, + id=parse_swhid(f"swh:1:cnt:{hash_to_hex(content.sha1_git)}"), + origin=origin2.url, + discovery_date=datetime.datetime( + 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc + ), + authority=attr.evolve(metadata_authority, metadata=None), + fetcher=attr.evolve(metadata_fetcher, metadata=None), + format="yaml", + metadata=b"foo: bar", + ) + content_metadata3 = RawExtrinsicMetadata( + type=MetadataTargetType.CONTENT, + id=parse_swhid(f"swh:1:cnt:{hash_to_hex(content.sha1_git)}"), + discovery_date=datetime.datetime( + 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc + ), + authority=attr.evolve(metadata_authority2, metadata=None), + fetcher=attr.evolve(metadata_fetcher2, metadata=None), + format="yaml", + metadata=b"foo: bar", + origin=origin.url, + visit=42, + snapshot=parse_swhid(f"swh:1:snp:{hash_to_hex(snapshot.id)}"), + release=parse_swhid(f"swh:1:rel:{hash_to_hex(release.id)}"), + revision=parse_swhid(f"swh:1:rev:{hash_to_hex(revision.id)}"), + directory=parse_swhid(f"swh:1:dir:{hash_to_hex(directory.id)}"), + path=b"/foo/bar", + ) + + content_metadata: Tuple[RawExtrinsicMetadata, ...] = ( + content_metadata1, + content_metadata2, + content_metadata3, + ) + + origin_metadata1 = RawExtrinsicMetadata( + type=MetadataTargetType.ORIGIN, + id=origin.url, + discovery_date=datetime.datetime( + 2015, 1, 1, 21, 0, 0, tzinfo=datetime.timezone.utc + ), + authority=attr.evolve(metadata_authority, metadata=None), + fetcher=attr.evolve(metadata_fetcher, metadata=None), + format="json", + metadata=b'{"foo": "bar"}', + ) + origin_metadata2 = RawExtrinsicMetadata( + type=MetadataTargetType.ORIGIN, + id=origin.url, + discovery_date=datetime.datetime( + 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc + ), + authority=attr.evolve(metadata_authority, metadata=None), + fetcher=attr.evolve(metadata_fetcher, metadata=None), + format="yaml", + metadata=b"foo: bar", + ) + origin_metadata3 = RawExtrinsicMetadata( + type=MetadataTargetType.ORIGIN, + id=origin.url, + discovery_date=datetime.datetime( + 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc + ), + authority=attr.evolve(metadata_authority2, metadata=None), + fetcher=attr.evolve(metadata_fetcher2, metadata=None), + format="yaml", + metadata=b"foo: bar", + ) + + origin_metadata: Tuple[RawExtrinsicMetadata, ...] 
= ( + origin_metadata1, + origin_metadata2, + origin_metadata3, + ) diff --git a/swh/storage/tests/test_api_client.py b/swh/storage/tests/test_api_client.py index a3d5b49a..075ad0b7 100644 --- a/swh/storage/tests/test_api_client.py +++ b/swh/storage/tests/test_api_client.py @@ -1,67 +1,70 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from unittest.mock import patch import pytest import swh.storage.api.server as server import swh.storage.storage from swh.storage import get_storage from swh.storage.tests.test_storage import TestStorageGeneratedData # noqa from swh.storage.tests.test_storage import TestStorage as _TestStorage # tests are executed using imported classes (TestStorage and # TestStorageGeneratedData) using overloaded swh_storage fixture # below @pytest.fixture def app_server(): server.storage = swh.storage.get_storage( cls="memory", journal_writer={"cls": "memory"} ) yield server @pytest.fixture def app(app_server): return app_server.app @pytest.fixture def swh_rpc_client_class(): def storage_factory(**kwargs): - storage_config = {"cls": "validate", "storage": {"cls": "remote", **kwargs,}} + storage_config = { + "cls": "remote", + **kwargs, + } return get_storage(**storage_config) return storage_factory @pytest.fixture def swh_storage(swh_rpc_client, app_server): # This version of the swh_storage fixture uses the swh_rpc_client fixture # to instantiate a RemoteStorage (see swh_rpc_client_class above) that # proxies, via the swh.core RPC mechanism, the local (in memory) storage # configured in the app_server fixture above. # # Also note that, for the sake of # making it easier to write tests, the in-memory journal writer of the # in-memory backend storage is attached to the RemoteStorage as its # journal_writer attribute. 
storage = swh_rpc_client journal_writer = getattr(storage, "journal_writer", None) storage.journal_writer = app_server.storage.journal_writer yield storage storage.journal_writer = journal_writer class TestStorage(_TestStorage): - def test_content_update(self, swh_storage, app_server, sample_data_model): + def test_content_update(self, swh_storage, app_server, sample_data): # TODO, journal_writer not supported swh_storage.journal_writer.journal = None with patch.object(server.storage.journal_writer, "journal", None): - super().test_content_update(swh_storage, sample_data_model) + super().test_content_update(swh_storage, sample_data) diff --git a/swh/storage/tests/test_buffer.py b/swh/storage/tests/test_buffer.py index bd4aa408..9e3757f0 100644 --- a/swh/storage/tests/test_buffer.py +++ b/swh/storage/tests/test_buffer.py @@ -1,401 +1,399 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from swh.storage import get_storage def get_storage_with_buffer_config(**buffer_config): storage_config = { "cls": "pipeline", "steps": [{"cls": "buffer", **buffer_config}, {"cls": "memory"},], } return get_storage(**storage_config) -def test_buffering_proxy_storage_content_threshold_not_hit(sample_data_model): - contents = sample_data_model["content"] +def test_buffering_proxy_storage_content_threshold_not_hit(sample_data): + contents = sample_data.contents[:2] contents_dict = [c.to_dict() for c in contents] storage = get_storage_with_buffer_config(min_batch_size={"content": 10,}) - s = storage.content_add([contents[0], contents[1]]) + s = storage.content_add(contents) assert s == {} # contents have not been written to storage missing_contents = storage.content_missing(contents_dict) assert set(missing_contents) == set([contents[0].sha1, contents[1].sha1]) s = storage.flush() assert s == { "content:add": 1 + 1, "content:add:bytes": contents[0].length + contents[1].length, } missing_contents = storage.content_missing(contents_dict) assert list(missing_contents) == [] -def test_buffering_proxy_storage_content_threshold_nb_hit(sample_data_model): - content = sample_data_model["content"][0] +def test_buffering_proxy_storage_content_threshold_nb_hit(sample_data): + content = sample_data.content content_dict = content.to_dict() storage = get_storage_with_buffer_config(min_batch_size={"content": 1,}) s = storage.content_add([content]) assert s == { "content:add": 1, "content:add:bytes": content.length, } missing_contents = storage.content_missing([content_dict]) assert list(missing_contents) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_content_deduplicate(sample_data_model): - contents = sample_data_model["content"] +def test_buffering_proxy_storage_content_deduplicate(sample_data): + contents = sample_data.contents[:2] storage = get_storage_with_buffer_config(min_batch_size={"content": 2,}) s = storage.content_add([contents[0], contents[0]]) assert s == {} s = storage.content_add([contents[0]]) assert s == {} s = storage.content_add([contents[1]]) assert s == { "content:add": 1 + 1, "content:add:bytes": contents[0].length + contents[1].length, } missing_contents = storage.content_missing([c.to_dict() for c in contents]) assert list(missing_contents) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_content_threshold_bytes_hit(sample_data_model): - contents = 
sample_data_model["content"] +def test_buffering_proxy_storage_content_threshold_bytes_hit(sample_data): + contents = sample_data.contents[:2] content_bytes_min_batch_size = 2 storage = get_storage_with_buffer_config( min_batch_size={"content": 10, "content_bytes": content_bytes_min_batch_size,} ) assert contents[0].length > content_bytes_min_batch_size s = storage.content_add([contents[0]]) assert s == { "content:add": 1, "content:add:bytes": contents[0].length, } missing_contents = storage.content_missing([contents[0].to_dict()]) assert list(missing_contents) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_skipped_content_threshold_not_hit(sample_data_model): - contents = sample_data_model["skipped_content"] +def test_buffering_proxy_storage_skipped_content_threshold_not_hit(sample_data): + contents = sample_data.skipped_contents contents_dict = [c.to_dict() for c in contents] storage = get_storage_with_buffer_config(min_batch_size={"skipped_content": 10,}) s = storage.skipped_content_add([contents[0], contents[1]]) assert s == {} # contents have not been written to storage missing_contents = storage.skipped_content_missing(contents_dict) assert {c["sha1"] for c in missing_contents} == {c.sha1 for c in contents} s = storage.flush() assert s == {"skipped_content:add": 1 + 1} missing_contents = storage.skipped_content_missing(contents_dict) assert list(missing_contents) == [] -def test_buffering_proxy_storage_skipped_content_threshold_nb_hit(sample_data_model): - contents = sample_data_model["skipped_content"] +def test_buffering_proxy_storage_skipped_content_threshold_nb_hit(sample_data): + contents = sample_data.skipped_contents storage = get_storage_with_buffer_config(min_batch_size={"skipped_content": 1,}) s = storage.skipped_content_add([contents[0]]) assert s == {"skipped_content:add": 1} missing_contents = storage.skipped_content_missing([contents[0].to_dict()]) assert list(missing_contents) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_skipped_content_deduplicate(sample_data_model): - contents = sample_data_model["skipped_content"][:2] +def test_buffering_proxy_storage_skipped_content_deduplicate(sample_data): + contents = sample_data.skipped_contents[:2] storage = get_storage_with_buffer_config(min_batch_size={"skipped_content": 2,}) s = storage.skipped_content_add([contents[0], contents[0]]) assert s == {} s = storage.skipped_content_add([contents[0]]) assert s == {} s = storage.skipped_content_add([contents[1]]) assert s == { "skipped_content:add": 1 + 1, } missing_contents = storage.skipped_content_missing([c.to_dict() for c in contents]) assert list(missing_contents) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_directory_threshold_not_hit(sample_data_model): - directories = sample_data_model["directory"] +def test_buffering_proxy_storage_directory_threshold_not_hit(sample_data): + directory = sample_data.directory storage = get_storage_with_buffer_config(min_batch_size={"directory": 10,}) - s = storage.directory_add([directories[0]]) + s = storage.directory_add([directory]) assert s == {} - directory_id = directories[0].id - missing_directories = storage.directory_missing([directory_id]) - assert list(missing_directories) == [directory_id] + missing_directories = storage.directory_missing([directory.id]) + assert list(missing_directories) == [directory.id] s = storage.flush() assert s == { "directory:add": 1, } - missing_directories = storage.directory_missing([directory_id]) + 
missing_directories = storage.directory_missing([directory.id]) assert list(missing_directories) == [] -def test_buffering_proxy_storage_directory_threshold_hit(sample_data_model): - directories = sample_data_model["directory"] +def test_buffering_proxy_storage_directory_threshold_hit(sample_data): + directory = sample_data.directory storage = get_storage_with_buffer_config(min_batch_size={"directory": 1,}) - s = storage.directory_add([directories[0]]) + s = storage.directory_add([directory]) assert s == { "directory:add": 1, } - missing_directories = storage.directory_missing([directories[0].id]) + missing_directories = storage.directory_missing([directory.id]) assert list(missing_directories) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_directory_deduplicate(sample_data_model): - directories = sample_data_model["directory"][:2] +def test_buffering_proxy_storage_directory_deduplicate(sample_data): + directories = sample_data.directories[:2] storage = get_storage_with_buffer_config(min_batch_size={"directory": 2,}) s = storage.directory_add([directories[0], directories[0]]) assert s == {} s = storage.directory_add([directories[0]]) assert s == {} s = storage.directory_add([directories[1]]) assert s == { "directory:add": 1 + 1, } missing_directories = storage.directory_missing([d.id for d in directories]) assert list(missing_directories) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_revision_threshold_not_hit(sample_data_model): - revisions = sample_data_model["revision"] +def test_buffering_proxy_storage_revision_threshold_not_hit(sample_data): + revision = sample_data.revision storage = get_storage_with_buffer_config(min_batch_size={"revision": 10,}) - s = storage.revision_add([revisions[0]]) + s = storage.revision_add([revision]) assert s == {} - revision_id = revisions[0].id - missing_revisions = storage.revision_missing([revision_id]) - assert list(missing_revisions) == [revision_id] + missing_revisions = storage.revision_missing([revision.id]) + assert list(missing_revisions) == [revision.id] s = storage.flush() assert s == { "revision:add": 1, } - missing_revisions = storage.revision_missing([revision_id]) + missing_revisions = storage.revision_missing([revision.id]) assert list(missing_revisions) == [] -def test_buffering_proxy_storage_revision_threshold_hit(sample_data_model): - revisions = sample_data_model["revision"] +def test_buffering_proxy_storage_revision_threshold_hit(sample_data): + revision = sample_data.revision storage = get_storage_with_buffer_config(min_batch_size={"revision": 1,}) - s = storage.revision_add([revisions[0]]) + s = storage.revision_add([revision]) assert s == { "revision:add": 1, } - missing_revisions = storage.revision_missing([revisions[0].id]) + missing_revisions = storage.revision_missing([revision.id]) assert list(missing_revisions) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_revision_deduplicate(sample_data_model): - revisions = sample_data_model["revision"][:2] +def test_buffering_proxy_storage_revision_deduplicate(sample_data): + revisions = sample_data.revisions[:2] storage = get_storage_with_buffer_config(min_batch_size={"revision": 2,}) s = storage.revision_add([revisions[0], revisions[0]]) assert s == {} s = storage.revision_add([revisions[0]]) assert s == {} s = storage.revision_add([revisions[1]]) assert s == { "revision:add": 1 + 1, } missing_revisions = storage.revision_missing([r.id for r in revisions]) assert list(missing_revisions) == [] s = 
storage.flush() assert s == {} -def test_buffering_proxy_storage_release_threshold_not_hit(sample_data_model): - releases = sample_data_model["release"] +def test_buffering_proxy_storage_release_threshold_not_hit(sample_data): + releases = sample_data.releases threshold = 10 assert len(releases) < threshold storage = get_storage_with_buffer_config( min_batch_size={"release": threshold,} # configuration set ) s = storage.release_add(releases) assert s == {} release_ids = [r.id for r in releases] missing_releases = storage.release_missing(release_ids) assert list(missing_releases) == release_ids s = storage.flush() assert s == { "release:add": len(releases), } missing_releases = storage.release_missing(release_ids) assert list(missing_releases) == [] -def test_buffering_proxy_storage_release_threshold_hit(sample_data_model): - releases = sample_data_model["release"] +def test_buffering_proxy_storage_release_threshold_hit(sample_data): + releases = sample_data.releases threshold = 2 assert len(releases) > threshold storage = get_storage_with_buffer_config( min_batch_size={"release": threshold,} # configuration set ) s = storage.release_add(releases) assert s == { "release:add": len(releases), } release_ids = [r.id for r in releases] missing_releases = storage.release_missing(release_ids) assert list(missing_releases) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_release_deduplicate(sample_data_model): - releases = sample_data_model["release"][:2] +def test_buffering_proxy_storage_release_deduplicate(sample_data): + releases = sample_data.releases[:2] storage = get_storage_with_buffer_config(min_batch_size={"release": 2,}) s = storage.release_add([releases[0], releases[0]]) assert s == {} s = storage.release_add([releases[0]]) assert s == {} s = storage.release_add([releases[1]]) assert s == { "release:add": 1 + 1, } missing_releases = storage.release_missing([r.id for r in releases]) assert list(missing_releases) == [] s = storage.flush() assert s == {} -def test_buffering_proxy_storage_clear(sample_data_model): +def test_buffering_proxy_storage_clear(sample_data): """Clear operation on buffer """ threshold = 10 - contents = sample_data_model["content"] + contents = sample_data.contents assert 0 < len(contents) < threshold - skipped_contents = sample_data_model["skipped_content"] + skipped_contents = sample_data.skipped_contents assert 0 < len(skipped_contents) < threshold - directories = sample_data_model["directory"] + directories = sample_data.directories assert 0 < len(directories) < threshold - revisions = sample_data_model["revision"] + revisions = sample_data.revisions assert 0 < len(revisions) < threshold - releases = sample_data_model["release"] + releases = sample_data.releases assert 0 < len(releases) < threshold storage = get_storage_with_buffer_config( min_batch_size={ "content": threshold, "skipped_content": threshold, "directory": threshold, "revision": threshold, "release": threshold, } ) s = storage.content_add(contents) assert s == {} s = storage.skipped_content_add(skipped_contents) assert s == {} s = storage.directory_add(directories) assert s == {} s = storage.revision_add(revisions) assert s == {} s = storage.release_add(releases) assert s == {} assert len(storage._objects["content"]) == len(contents) assert len(storage._objects["skipped_content"]) == len(skipped_contents) assert len(storage._objects["directory"]) == len(directories) assert len(storage._objects["revision"]) == len(revisions) assert len(storage._objects["release"]) == 
len(releases) # clear only content from the buffer s = storage.clear_buffers(["content"]) assert s is None # specific clear operation on specific object type content only touched # them assert len(storage._objects["content"]) == 0 assert len(storage._objects["skipped_content"]) == len(skipped_contents) assert len(storage._objects["directory"]) == len(directories) assert len(storage._objects["revision"]) == len(revisions) assert len(storage._objects["release"]) == len(releases) # clear current buffer from all object types s = storage.clear_buffers() assert s is None assert len(storage._objects["content"]) == 0 assert len(storage._objects["skipped_content"]) == 0 assert len(storage._objects["directory"]) == 0 assert len(storage._objects["revision"]) == 0 assert len(storage._objects["release"]) == 0 diff --git a/swh/storage/tests/test_cassandra.py b/swh/storage/tests/test_cassandra.py index 4938190b..870ee59c 100644 --- a/swh/storage/tests/test_cassandra.py +++ b/swh/storage/tests/test_cassandra.py @@ -1,410 +1,396 @@ # Copyright (C) 2018-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import attr import os import signal import socket import subprocess import time from collections import namedtuple import pytest from swh.storage import get_storage from swh.storage.cassandra import create_keyspace from swh.storage.cassandra.schema import TABLES, HASH_ALGORITHMS from swh.storage.utils import now from swh.storage.tests.test_storage import TestStorage as _TestStorage from swh.storage.tests.test_storage import ( TestStorageGeneratedData as _TestStorageGeneratedData, ) CONFIG_TEMPLATE = """ data_file_directories: - {data_dir}/data commitlog_directory: {data_dir}/commitlog hints_directory: {data_dir}/hints saved_caches_directory: {data_dir}/saved_caches commitlog_sync: periodic commitlog_sync_period_in_ms: 1000000 partitioner: org.apache.cassandra.dht.Murmur3Partitioner endpoint_snitch: SimpleSnitch seed_provider: - class_name: org.apache.cassandra.locator.SimpleSeedProvider parameters: - seeds: "127.0.0.1" storage_port: {storage_port} native_transport_port: {native_transport_port} start_native_transport: true listen_address: 127.0.0.1 enable_user_defined_functions: true # speed-up by disabling period saving to disk key_cache_save_period: 0 row_cache_save_period: 0 trickle_fsync: false commitlog_sync_period_in_ms: 100000 """ def free_port(): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.bind(("127.0.0.1", 0)) port = sock.getsockname()[1] sock.close() return port def wait_for_peer(addr, port): wait_until = time.time() + 20 while time.time() < wait_until: try: sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((addr, port)) except ConnectionRefusedError: time.sleep(0.1) else: sock.close() return True return False @pytest.fixture(scope="session") def cassandra_cluster(tmpdir_factory): cassandra_conf = tmpdir_factory.mktemp("cassandra_conf") cassandra_data = tmpdir_factory.mktemp("cassandra_data") cassandra_log = tmpdir_factory.mktemp("cassandra_log") native_transport_port = free_port() storage_port = free_port() jmx_port = free_port() with open(str(cassandra_conf.join("cassandra.yaml")), "w") as fd: fd.write( CONFIG_TEMPLATE.format( data_dir=str(cassandra_data), storage_port=storage_port, native_transport_port=native_transport_port, ) ) if os.environ.get("SWH_CASSANDRA_LOG"): stdout = 
stderr = None else: stdout = stderr = subprocess.DEVNULL cassandra_bin = os.environ.get("SWH_CASSANDRA_BIN", "/usr/sbin/cassandra") proc = subprocess.Popen( [ cassandra_bin, "-Dcassandra.config=file://%s/cassandra.yaml" % cassandra_conf, "-Dcassandra.logdir=%s" % cassandra_log, "-Dcassandra.jmx.local.port=%d" % jmx_port, "-Dcassandra-foreground=yes", ], start_new_session=True, env={ "MAX_HEAP_SIZE": "300M", "HEAP_NEWSIZE": "50M", "JVM_OPTS": "-Xlog:gc=error:file=%s/gc.log" % cassandra_log, }, stdout=stdout, stderr=stderr, ) running = wait_for_peer("127.0.0.1", native_transport_port) if running: yield (["127.0.0.1"], native_transport_port) if not running or os.environ.get("SWH_CASSANDRA_LOG"): debug_log_path = str(cassandra_log.join("debug.log")) if os.path.exists(debug_log_path): with open(debug_log_path) as fd: print(fd.read()) if not running: raise Exception("cassandra process stopped unexpectedly.") pgrp = os.getpgid(proc.pid) os.killpg(pgrp, signal.SIGKILL) class RequestHandler: def on_request(self, rf): if hasattr(rf.message, "query"): print() print(rf.message.query) @pytest.fixture(scope="session") def keyspace(cassandra_cluster): (hosts, port) = cassandra_cluster keyspace = os.urandom(10).hex() create_keyspace(hosts, keyspace, port) return keyspace # tests are executed using imported classes (TestStorage and # TestStorageGeneratedData) using overloaded swh_storage fixture # below @pytest.fixture def swh_storage_backend_config(cassandra_cluster, keyspace): (hosts, port) = cassandra_cluster storage_config = dict( cls="cassandra", hosts=hosts, port=port, keyspace=keyspace, journal_writer={"cls": "memory",}, objstorage={"cls": "memory", "args": {},}, ) yield storage_config storage = get_storage(**storage_config) for table in TABLES: storage._cql_runner._session.execute('TRUNCATE TABLE "%s"' % table) storage._cql_runner._cluster.shutdown() @pytest.mark.cassandra class TestCassandraStorage(_TestStorage): - def test_content_add_murmur3_collision( - self, swh_storage, mocker, sample_data_model - ): + def test_content_add_murmur3_collision(self, swh_storage, mocker, sample_data): """The Murmur3 token is used as link from index tables to the main table; and non-matching contents with colliding murmur3-hash are filtered-out when reading the main table. This test checks the content methods do filter out these collision. 
""" called = 0 - cont, cont2 = sample_data_model["content"][:2] + cont, cont2 = sample_data.contents[:2] # always return a token def mock_cgtfsh(algo, hash_): nonlocal called called += 1 assert algo in ("sha1", "sha1_git") return [123456] mocker.patch.object( - swh_storage.storage._cql_runner, - "content_get_tokens_from_single_hash", - mock_cgtfsh, + swh_storage._cql_runner, "content_get_tokens_from_single_hash", mock_cgtfsh, ) # For all tokens, always return cont Row = namedtuple("Row", HASH_ALGORITHMS) def mock_cgft(token): nonlocal called called += 1 return [Row(**{algo: getattr(cont, algo) for algo in HASH_ALGORITHMS})] mocker.patch.object( - swh_storage.storage._cql_runner, "content_get_from_token", mock_cgft + swh_storage._cql_runner, "content_get_from_token", mock_cgft ) actual_result = swh_storage.content_add([cont2]) assert called == 4 assert actual_result == { "content:add": 1, "content:add:bytes": cont2.length, } def test_content_get_metadata_murmur3_collision( - self, swh_storage, mocker, sample_data_model + self, swh_storage, mocker, sample_data ): """The Murmur3 token is used as link from index tables to the main table; and non-matching contents with colliding murmur3-hash are filtered-out when reading the main table. This test checks the content methods do filter out these collisions. """ called = 0 - cont, cont2 = [ - attr.evolve(c, ctime=now()) for c in sample_data_model["content"][:2] - ] + cont, cont2 = [attr.evolve(c, ctime=now()) for c in sample_data.contents[:2]] # always return a token def mock_cgtfsh(algo, hash_): nonlocal called called += 1 assert algo in ("sha1", "sha1_git") return [123456] mocker.patch.object( - swh_storage.storage._cql_runner, - "content_get_tokens_from_single_hash", - mock_cgtfsh, + swh_storage._cql_runner, "content_get_tokens_from_single_hash", mock_cgtfsh, ) # For all tokens, always return cont and cont2 cols = list(set(cont.to_dict()) - {"data"}) Row = namedtuple("Row", cols) def mock_cgft(token): nonlocal called called += 1 return [ Row(**{col: getattr(cont, col) for col in cols}) for cont in [cont, cont2] ] mocker.patch.object( - swh_storage.storage._cql_runner, "content_get_from_token", mock_cgft + swh_storage._cql_runner, "content_get_from_token", mock_cgft ) actual_result = swh_storage.content_get_metadata([cont.sha1]) assert called == 2 # dropping extra column not returned expected_cont = attr.evolve(cont, data=None, ctime=None).to_dict() del expected_cont["ctime"] # forced to pop it as to_dict does not # but cont2 should be filtered out assert actual_result == {cont.sha1: [expected_cont]} - def test_content_find_murmur3_collision( - self, swh_storage, mocker, sample_data_model - ): + def test_content_find_murmur3_collision(self, swh_storage, mocker, sample_data): """The Murmur3 token is used as link from index tables to the main table; and non-matching contents with colliding murmur3-hash are filtered-out when reading the main table. This test checks the content methods do filter out these collisions. 
""" called = 0 - cont, cont2 = [ - attr.evolve(c, ctime=now()) for c in sample_data_model["content"][:2] - ] + cont, cont2 = [attr.evolve(c, ctime=now()) for c in sample_data.contents[:2]] # always return a token def mock_cgtfsh(algo, hash_): nonlocal called called += 1 assert algo in ("sha1", "sha1_git") return [123456] mocker.patch.object( - swh_storage.storage._cql_runner, - "content_get_tokens_from_single_hash", - mock_cgtfsh, + swh_storage._cql_runner, "content_get_tokens_from_single_hash", mock_cgtfsh, ) # For all tokens, always return cont and cont2 cols = list(set(cont.to_dict()) - {"data"}) Row = namedtuple("Row", cols) def mock_cgft(token): nonlocal called called += 1 return [ Row(**{col: getattr(cont, col) for col in cols}) for cont in [cont, cont2] ] mocker.patch.object( - swh_storage.storage._cql_runner, "content_get_from_token", mock_cgft + swh_storage._cql_runner, "content_get_from_token", mock_cgft ) expected_cont = attr.evolve(cont, data=None).to_dict() actual_result = swh_storage.content_find({"sha1": cont.sha1}) assert called == 2 # but cont2 should be filtered out assert actual_result == [expected_cont] @pytest.mark.skip("content_update is not yet implemented for Cassandra") def test_content_update(self): pass @pytest.mark.skip( 'The "person" table of the pgsql is a legacy thing, and not ' "supported by the cassandra backend." ) def test_person_fullname_unicity(self): pass @pytest.mark.skip( 'The "person" table of the pgsql is a legacy thing, and not ' "supported by the cassandra backend." ) def test_person_get(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_origin_count(self): pass @pytest.mark.cassandra class TestCassandraStorageGeneratedData(_TestStorageGeneratedData): @pytest.mark.skip("Not supported by Cassandra") def test_origin_count(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_origin_get_range(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_origin_get_range_from_zero(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_generate_content_get_range_limit(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_generate_content_get_range_no_limit(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_generate_content_get_range(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_generate_content_get_range_empty(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_generate_content_get_range_limit_none(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_generate_content_get_range_full(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_origin_count_with_visit_no_visits(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_origin_count_with_visit_with_visits_and_snapshot(self): pass @pytest.mark.skip("Not supported by Cassandra") def test_origin_count_with_visit_with_visits_no_snapshot(self): pass diff --git a/swh/storage/tests/test_filter.py b/swh/storage/tests/test_filter.py index b69c1187..558c39c9 100644 --- a/swh/storage/tests/test_filter.py +++ b/swh/storage/tests/test_filter.py @@ -1,131 +1,131 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import attr import pytest from swh.storage import get_storage @pytest.fixture def swh_storage(): storage_config = { "cls": "pipeline", 
"steps": [{"cls": "filter"}, {"cls": "memory"},], } return get_storage(**storage_config) -def test_filtering_proxy_storage_content(swh_storage, sample_data_model): - sample_content = sample_data_model["content"][0] +def test_filtering_proxy_storage_content(swh_storage, sample_data): + sample_content = sample_data.content content = next(swh_storage.content_get([sample_content.sha1])) assert not content s = swh_storage.content_add([sample_content]) assert s == { "content:add": 1, "content:add:bytes": sample_content.length, } content = next(swh_storage.content_get([sample_content.sha1])) assert content is not None s = swh_storage.content_add([sample_content]) assert s == { "content:add": 0, "content:add:bytes": 0, } -def test_filtering_proxy_storage_skipped_content(swh_storage, sample_data_model): - sample_content = sample_data_model["skipped_content"][0] +def test_filtering_proxy_storage_skipped_content(swh_storage, sample_data): + sample_content = sample_data.skipped_content sample_content_dict = sample_content.to_dict() content = next(swh_storage.skipped_content_missing([sample_content_dict])) assert content["sha1"] == sample_content.sha1 s = swh_storage.skipped_content_add([sample_content]) assert s == { "skipped_content:add": 1, } content = list(swh_storage.skipped_content_missing([sample_content_dict])) assert content == [] s = swh_storage.skipped_content_add([sample_content]) assert s == { "skipped_content:add": 0, } def test_filtering_proxy_storage_skipped_content_missing_sha1_git( - swh_storage, sample_data_model + swh_storage, sample_data ): sample_contents = [ - attr.evolve(c, sha1_git=None) for c in sample_data_model["skipped_content"] + attr.evolve(c, sha1_git=None) for c in sample_data.skipped_contents ] sample_content, sample_content2 = [c.to_dict() for c in sample_contents[:2]] content = next(swh_storage.skipped_content_missing([sample_content])) assert content["sha1"] == sample_content["sha1"] s = swh_storage.skipped_content_add([sample_contents[0]]) assert s == { "skipped_content:add": 1, } content = list(swh_storage.skipped_content_missing([sample_content])) assert content == [] s = swh_storage.skipped_content_add([sample_contents[1]]) assert s == { "skipped_content:add": 1, } content = list(swh_storage.skipped_content_missing([sample_content2])) assert content == [] -def test_filtering_proxy_storage_revision(swh_storage, sample_data_model): - sample_revision = sample_data_model["revision"][0] +def test_filtering_proxy_storage_revision(swh_storage, sample_data): + sample_revision = sample_data.revision revision = next(swh_storage.revision_get([sample_revision.id])) assert not revision s = swh_storage.revision_add([sample_revision]) assert s == { "revision:add": 1, } revision = next(swh_storage.revision_get([sample_revision.id])) assert revision is not None s = swh_storage.revision_add([sample_revision]) assert s == { "revision:add": 0, } -def test_filtering_proxy_storage_directory(swh_storage, sample_data_model): - sample_directory = sample_data_model["directory"][0] +def test_filtering_proxy_storage_directory(swh_storage, sample_data): + sample_directory = sample_data.directory directory = next(swh_storage.directory_missing([sample_directory.id])) assert directory s = swh_storage.directory_add([sample_directory]) assert s == { "directory:add": 1, } directory = list(swh_storage.directory_missing([sample_directory.id])) assert not directory s = swh_storage.directory_add([sample_directory]) assert s == { "directory:add": 0, } diff --git 
a/swh/storage/tests/test_kafka_writer.py b/swh/storage/tests/test_kafka_writer.py index 2242e7e8..1cba627a 100644 --- a/swh/storage/tests/test_kafka_writer.py +++ b/swh/storage/tests/test_kafka_writer.py @@ -1,151 +1,151 @@ # Copyright (C) 2018-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from confluent_kafka import Consumer from swh.storage import get_storage from swh.model.model import Origin, OriginVisit from swh.model.hypothesis_strategies import objects from swh.journal.pytest_plugin import consume_messages, assert_all_objects_consumed from swh.journal.tests.journal_data import TEST_OBJECTS from swh.model.model import Person from attr import asdict, has from hypothesis import given from hypothesis.strategies import lists def test_storage_direct_writer(kafka_prefix: str, kafka_server, consumer: Consumer): writer_config = { "cls": "kafka", "brokers": [kafka_server], "client_id": "kafka_writer", "prefix": kafka_prefix, "anonymize": False, } storage_config = { "cls": "pipeline", "steps": [{"cls": "memory", "journal_writer": writer_config},], } storage = get_storage(**storage_config) expected_messages = 0 for obj_type, objs in TEST_OBJECTS.items(): method = getattr(storage, obj_type + "_add") if obj_type in ( "content", "skipped_content", "directory", "revision", "release", "snapshot", "origin", "origin_visit_status", ): method(objs) expected_messages += len(objs) elif obj_type in ("origin_visit",): for obj in objs: assert isinstance(obj, OriginVisit) - storage.origin_add_one(Origin(url=obj.origin)) + storage.origin_add([Origin(url=obj.origin)]) method([obj]) expected_messages += 1 + 1 # 1 visit + 1 visit status else: assert False, obj_type existing_topics = set( topic for topic in consumer.list_topics(timeout=10).topics.keys() if topic.startswith(f"{kafka_prefix}.") # final . 
to exclude privileged topics ) assert existing_topics == { f"{kafka_prefix}.{obj_type}" for obj_type in ( "content", "directory", "origin", "origin_visit", "origin_visit_status", "release", "revision", "snapshot", "skipped_content", ) } consumed_messages = consume_messages(consumer, kafka_prefix, expected_messages) assert_all_objects_consumed(consumed_messages) def test_storage_direct_writer_anonymized( kafka_prefix: str, kafka_server, consumer: Consumer ): writer_config = { "cls": "kafka", "brokers": [kafka_server], "client_id": "kafka_writer", "prefix": kafka_prefix, "anonymize": True, } storage_config = { "cls": "pipeline", "steps": [{"cls": "memory", "journal_writer": writer_config},], } storage = get_storage(**storage_config) expected_messages = 0 for obj_type, objs in TEST_OBJECTS.items(): if obj_type == "origin_visit": # these have non-consistent API and are unrelated with what we # want to test here continue method = getattr(storage, obj_type + "_add") method(objs) expected_messages += len(objs) existing_topics = set( topic for topic in consumer.list_topics(timeout=10).topics.keys() if topic.startswith(kafka_prefix) ) assert existing_topics == { f"{kafka_prefix}.{obj_type}" for obj_type in ( "content", "directory", "origin", "origin_visit", "origin_visit_status", "release", "revision", "snapshot", "skipped_content", ) } | { f"{kafka_prefix}_privileged.{obj_type}" for obj_type in ("release", "revision",) } def check_anonymized_obj(obj): if has(obj): if isinstance(obj, Person): assert obj.name is None assert obj.email is None assert len(obj.fullname) == 32 else: for key, value in asdict(obj, recurse=False).items(): check_anonymized_obj(value) @given(lists(objects(split_content=True))) def test_anonymizer(obj_type_and_objs): for obj_type, obj in obj_type_and_objs: check_anonymized_obj(obj.anonymize()) diff --git a/swh/storage/tests/test_pytest_plugin.py b/swh/storage/tests/test_pytest_plugin.py index 86345064..5a59c5e9 100644 --- a/swh/storage/tests/test_pytest_plugin.py +++ b/swh/storage/tests/test_pytest_plugin.py @@ -1,75 +1,19 @@ # Copyright (C) 2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information +from swh.storage.interface import StorageInterface +from swh.storage.tests.storage_data import StorageData -from swh.storage.pytest_plugin import OBJECT_FACTORY +def test_sample_data(sample_data): + assert isinstance(sample_data, StorageData) -from swh.model.model import BaseModel - -def test_sample_data(sample_data, sample_data_model): - assert set(sample_data.keys()) == set( - [ - "content", - "content_metadata", - "skipped_content", - "person", - "directory", - "revision", - "release", - "snapshot", - "origin", - "origin_visit", - "fetcher", - "authority", - "origin_metadata", - ] - ) - for object_type, objs in sample_data.items(): - for obj in objs: - assert isinstance(obj, dict) - - if sample_data_model.get(object_type): - # metadata keys are missing because conversion is not possible yet - assert len(objs) == len(sample_data_model[object_type]) - - -def test_sample_data_model(sample_data, sample_data_model): - assert set(sample_data_model.keys()) == set( - [ - "content", - "content_metadata", - "skipped_content", - "person", - "directory", - "revision", - "release", - "snapshot", - "origin", - "origin_visit", - "fetcher", - "authority", - "origin_metadata", - ] - ) - - for object_type, objs in 
sample_data_model.items(): - assert object_type in OBJECT_FACTORY - - for obj in objs: - assert isinstance(obj, BaseModel) - - assert len(objs) == len(sample_data[object_type]) - - -def test_swh_storage(swh_storage): - # Cannot check yet that it's an instance of StorageInterface (due to validate proxy - # again). That ensures though that it's instantiable - assert swh_storage is not None +def test_swh_storage(swh_storage: StorageInterface): + assert isinstance(swh_storage, StorageInterface) is not None def test_swh_storage_backend_config(swh_storage_backend_config): assert isinstance(swh_storage_backend_config, dict) diff --git a/swh/storage/tests/test_retry.py b/swh/storage/tests/test_retry.py index 0b990439..782e8f35 100644 --- a/swh/storage/tests/test_retry.py +++ b/swh/storage/tests/test_retry.py @@ -1,933 +1,834 @@ # Copyright (C) 2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information +import attr + from unittest.mock import call import psycopg2 import pytest -from swh.model.model import ( - OriginVisit, - MetadataTargetType, -) +from swh.model.model import MetadataTargetType -from swh.storage import get_storage from swh.storage.exc import HashCollision, StorageArgumentException -from .storage_data import date_visit1 - @pytest.fixture def monkeypatch_sleep(monkeypatch, swh_storage): """In test context, we don't want to wait, make test faster """ from swh.storage.retry import RetryingProxyStorage for method_name, method in RetryingProxyStorage.__dict__.items(): if "_add" in method_name or "_update" in method_name: monkeypatch.setattr(method.retry, "sleep", lambda x: None) return monkeypatch @pytest.fixture def fake_hash_collision(sample_data): return HashCollision("sha1", "38762cf7f55934b34d179ae6a4c80cadccbb7f0a", []) @pytest.fixture def swh_storage_backend_config(): yield { "cls": "pipeline", "steps": [{"cls": "retry"}, {"cls": "memory"},], } -@pytest.fixture -def swh_storage(swh_storage_backend_config): - return get_storage(**swh_storage_backend_config) - - -@pytest.fixture -def swh_storage_validate(swh_storage_backend_config): - return get_storage(cls="validate", storage=swh_storage_backend_config) - - -def test_retrying_proxy_storage_content_add(swh_storage, sample_data_model): +def test_retrying_proxy_storage_content_add(swh_storage, sample_data): """Standard content_add works as before """ - sample_content = sample_data_model["content"][0] - + sample_content = sample_data.content content = next(swh_storage.content_get([sample_content.sha1])) assert not content s = swh_storage.content_add([sample_content]) assert s == { "content:add": 1, "content:add:bytes": sample_content.length, } content = next(swh_storage.content_get([sample_content.sha1])) assert content["sha1"] == sample_content.sha1 def test_retrying_proxy_storage_content_add_with_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision, + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision, ): """Multiple retries for hash collision and psycopg2 error but finally ok """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.content_add") mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("content already inserted"), # ok then! 
{"content:add": 1}, ] - sample_content = sample_data_model["content"][0] + sample_content = sample_data.content content = next(swh_storage.content_get([sample_content.sha1])) assert not content s = swh_storage.content_add([sample_content]) assert s == {"content:add": 1} mock_memory.assert_has_calls( [call([sample_content]), call([sample_content]), call([sample_content]),] ) def test_retrying_proxy_swh_storage_content_add_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.content_add") mock_memory.side_effect = StorageArgumentException("Refuse to add content always!") - sample_content = sample_data_model["content"][0] + sample_content = sample_data.content content = next(swh_storage.content_get([sample_content.sha1])) assert not content with pytest.raises(StorageArgumentException, match="Refuse to add"): swh_storage.content_add([sample_content]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_content_add_metadata(swh_storage, sample_data_model): +def test_retrying_proxy_storage_content_add_metadata(swh_storage, sample_data): """Standard content_add_metadata works as before """ - sample_content = sample_data_model["content_metadata"][0] + sample_content = sample_data.content + content = attr.evolve(sample_content, data=None) - pk = sample_content.sha1 + pk = content.sha1 content_metadata = swh_storage.content_get_metadata([pk]) assert not content_metadata[pk] - s = swh_storage.content_add_metadata([sample_content]) + s = swh_storage.content_add_metadata([content]) assert s == { "content:add": 1, } content_metadata = swh_storage.content_get_metadata([pk]) assert len(content_metadata[pk]) == 1 assert content_metadata[pk][0]["sha1"] == pk def test_retrying_proxy_storage_content_add_metadata_with_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision ): """Multiple retries for hash collision and psycopg2 error but finally ok """ mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.content_add_metadata" ) mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("content_metadata already inserted"), # ok then! {"content:add": 1}, ] - sample_content = sample_data_model["content_metadata"][0] + sample_content = sample_data.content + content = attr.evolve(sample_content, data=None) - s = swh_storage.content_add_metadata([sample_content]) + s = swh_storage.content_add_metadata([content]) assert s == {"content:add": 1} mock_memory.assert_has_calls( - [call([sample_content]), call([sample_content]), call([sample_content]),] + [call([content]), call([content]), call([content]),] ) def test_retrying_proxy_swh_storage_content_add_metadata_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.content_add_metadata" ) mock_memory.side_effect = StorageArgumentException( "Refuse to add content_metadata!" 
) - sample_content = sample_data_model["content_metadata"][0] - pk = sample_content.sha1 + sample_content = sample_data.content + content = attr.evolve(sample_content, data=None) + pk = content.sha1 content_metadata = swh_storage.content_get_metadata([pk]) assert not content_metadata[pk] with pytest.raises(StorageArgumentException, match="Refuse to add"): - swh_storage.content_add_metadata([sample_content]) + swh_storage.content_add_metadata([content]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_skipped_content_add(swh_storage, sample_data_model): +def test_retrying_proxy_storage_skipped_content_add(swh_storage, sample_data): """Standard skipped_content_add works as before """ - sample_content = sample_data_model["skipped_content"][0] + sample_content = sample_data.skipped_content sample_content_dict = sample_content.to_dict() skipped_contents = list(swh_storage.skipped_content_missing([sample_content_dict])) assert len(skipped_contents) == 1 s = swh_storage.skipped_content_add([sample_content]) assert s == { "skipped_content:add": 1, } skipped_content = list(swh_storage.skipped_content_missing([sample_content_dict])) assert len(skipped_content) == 0 def test_retrying_proxy_storage_skipped_content_add_with_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision ): """Multiple retries for hash collision and psycopg2 error but finally ok """ mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.skipped_content_add" ) mock_memory.side_effect = [ # 1st & 2nd try goes ko fake_hash_collision, psycopg2.IntegrityError("skipped_content already inserted"), # ok then! {"skipped_content:add": 1}, ] - sample_content = sample_data_model["skipped_content"][0] + sample_content = sample_data.skipped_content s = swh_storage.skipped_content_add([sample_content]) assert s == {"skipped_content:add": 1} mock_memory.assert_has_calls( [call([sample_content]), call([sample_content]), call([sample_content]),] ) def test_retrying_proxy_swh_storage_skipped_content_add_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.skipped_content_add" ) mock_memory.side_effect = StorageArgumentException( "Refuse to add content_metadata!" 
) - sample_content = sample_data_model["skipped_content"][0] + sample_content = sample_data.skipped_content sample_content_dict = sample_content.to_dict() skipped_contents = list(swh_storage.skipped_content_missing([sample_content_dict])) assert len(skipped_contents) == 1 with pytest.raises(StorageArgumentException, match="Refuse to add"): swh_storage.skipped_content_add([sample_content]) skipped_contents = list(swh_storage.skipped_content_missing([sample_content_dict])) assert len(skipped_contents) == 1 assert mock_memory.call_count == 1 -def test_retrying_proxy_swh_storage_origin_add_one(swh_storage, sample_data_model): - """Standard origin_add_one works as before - - """ - sample_origin = sample_data_model["origin"][0] - sample_origin_dict = sample_origin.to_dict() - - origin = swh_storage.origin_get(sample_origin_dict) - assert not origin - - swh_storage.origin_add_one(sample_origin) - - origin = swh_storage.origin_get(sample_origin_dict) - assert origin["url"] == sample_origin.url - - -def test_retrying_proxy_swh_storage_origin_add_one_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision -): - """Multiple retries for hash collision and psycopg2 error but finally ok - - """ - sample_origin = sample_data_model["origin"][1] - mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.origin_add_one") - mock_memory.side_effect = [ - # first try goes ko - fake_hash_collision, - # second try goes ko - psycopg2.IntegrityError("origin already inserted"), - # ok then! - sample_origin.url, - ] - sample_origin_dict = sample_origin.to_dict() - - origin = swh_storage.origin_get(sample_origin_dict) - assert not origin - - swh_storage.origin_add_one(sample_origin) - - mock_memory.assert_has_calls( - [call(sample_origin), call(sample_origin), call(sample_origin),] - ) - - -def test_retrying_proxy_swh_storage_origin_add_one_failure( - swh_storage, sample_data_model, mocker -): - """Unfiltered errors are raising without retry - - """ - mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.origin_add_one") - mock_memory.side_effect = StorageArgumentException("Refuse to add origin always!") - - sample_origin = sample_data_model["origin"][0] - sample_origin_dict = sample_origin.to_dict() - - origin = swh_storage.origin_get(sample_origin_dict) - assert not origin - - with pytest.raises(StorageArgumentException, match="Refuse to add"): - swh_storage.origin_add_one(sample_origin) - - assert mock_memory.call_count == 1 - - -def test_retrying_proxy_swh_storage_origin_visit_add(swh_storage, sample_data_model): +def test_retrying_proxy_swh_storage_origin_visit_add(swh_storage, sample_data): """Standard origin_visit_add works as before """ - origin = sample_data_model["origin"][0] + origin = sample_data.origin + visit = sample_data.origin_visit + assert visit.origin == origin.url - swh_storage.origin_add_one(origin) + swh_storage.origin_add([origin]) origins = list(swh_storage.origin_visit_get(origin.url)) assert not origins - visit = OriginVisit(origin=origin.url, date=date_visit1, type="hg") origin_visit = swh_storage.origin_visit_add([visit])[0] assert origin_visit.origin == origin.url assert isinstance(origin_visit.visit, int) origin_visit = next(swh_storage.origin_visit_get(origin.url)) assert origin_visit["origin"] == origin.url assert isinstance(origin_visit["visit"], int) def test_retrying_proxy_swh_storage_origin_visit_add_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision + monkeypatch_sleep, swh_storage, 
sample_data, mocker, fake_hash_collision ): """Multiple retries for hash collision and psycopg2 error but finally ok """ - origin = sample_data_model["origin"][1] - swh_storage.origin_add_one(origin) + origin = sample_data.origin + visit = sample_data.origin_visit + assert visit.origin == origin.url + + swh_storage.origin_add([origin]) mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.origin_visit_add") - visit = OriginVisit(origin=origin.url, date=date_visit1, type="git") mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("origin already inserted"), # ok then! [visit], ] origins = list(swh_storage.origin_visit_get(origin.url)) assert not origins r = swh_storage.origin_visit_add([visit]) assert r == [visit] mock_memory.assert_has_calls( [call([visit]), call([visit]), call([visit]),] ) def test_retrying_proxy_swh_storage_origin_visit_add_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.origin_visit_add") mock_memory.side_effect = StorageArgumentException("Refuse to add origin always!") - origin = sample_data_model["origin"][0] + origin = sample_data.origin + visit = sample_data.origin_visit + assert visit.origin == origin.url origins = list(swh_storage.origin_visit_get(origin.url)) assert not origins with pytest.raises(StorageArgumentException, match="Refuse to add"): - visit = OriginVisit(origin=origin.url, date=date_visit1, type="svn",) swh_storage.origin_visit_add([visit]) mock_memory.assert_has_calls( [call([visit]),] ) -def test_retrying_proxy_storage_metadata_fetcher_add( - swh_storage_validate, sample_data_model -): +def test_retrying_proxy_storage_metadata_fetcher_add(swh_storage, sample_data): """Standard metadata_fetcher_add works as before """ - fetcher = sample_data_model["fetcher"][0] + fetcher = sample_data.metadata_fetcher - metadata_fetcher = swh_storage_validate.metadata_fetcher_get( - fetcher.name, fetcher.version - ) + metadata_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert not metadata_fetcher - swh_storage_validate.metadata_fetcher_add([fetcher]) + swh_storage.metadata_fetcher_add([fetcher]) - actual_fetcher = swh_storage_validate.metadata_fetcher_get( - fetcher.name, fetcher.version - ) + actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert actual_fetcher == fetcher def test_retrying_proxy_storage_metadata_fetcher_add_with_retry( - monkeypatch_sleep, - swh_storage_validate, - sample_data_model, - mocker, - fake_hash_collision, + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision, ): """Multiple retries for hash collision and psycopg2 error but finally ok """ - fetcher = sample_data_model["fetcher"][0] + fetcher = sample_data.metadata_fetcher mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.metadata_fetcher_add" ) mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("metadata_fetcher already inserted"), # ok then! 
[fetcher], ] - actual_fetcher = swh_storage_validate.metadata_fetcher_get( - fetcher.name, fetcher.version - ) + actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert not actual_fetcher - swh_storage_validate.metadata_fetcher_add([fetcher]) + swh_storage.metadata_fetcher_add([fetcher]) mock_memory.assert_has_calls( [call([fetcher]), call([fetcher]), call([fetcher]),] ) def test_retrying_proxy_swh_storage_metadata_fetcher_add_failure( - swh_storage_validate, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.metadata_fetcher_add" ) mock_memory.side_effect = StorageArgumentException( "Refuse to add metadata_fetcher always!" ) - fetcher = sample_data_model["fetcher"][0] + fetcher = sample_data.metadata_fetcher - actual_fetcher = swh_storage_validate.metadata_fetcher_get( - fetcher.name, fetcher.version - ) + actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert not actual_fetcher with pytest.raises(StorageArgumentException, match="Refuse to add"): - swh_storage_validate.metadata_fetcher_add([fetcher]) + swh_storage.metadata_fetcher_add([fetcher]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_metadata_authority_add( - swh_storage_validate, sample_data_model -): +def test_retrying_proxy_storage_metadata_authority_add(swh_storage, sample_data): """Standard metadata_authority_add works as before """ - authority = sample_data_model["authority"][0] + authority = sample_data.metadata_authority - assert not swh_storage_validate.metadata_authority_get( - authority.type, authority.url - ) + assert not swh_storage.metadata_authority_get(authority.type, authority.url) - swh_storage_validate.metadata_authority_add([authority]) + swh_storage.metadata_authority_add([authority]) - actual_authority = swh_storage_validate.metadata_authority_get( - authority.type, authority.url - ) + actual_authority = swh_storage.metadata_authority_get(authority.type, authority.url) assert actual_authority == authority def test_retrying_proxy_storage_metadata_authority_add_with_retry( - monkeypatch_sleep, - swh_storage_validate, - sample_data_model, - mocker, - fake_hash_collision, + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision, ): """Multiple retries for hash collision and psycopg2 error but finally ok """ - authority = sample_data_model["authority"][0] + authority = sample_data.metadata_authority mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.metadata_authority_add" ) mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("foo bar"), # ok then! None, ] - assert not swh_storage_validate.metadata_authority_get( - authority.type, authority.url - ) + assert not swh_storage.metadata_authority_get(authority.type, authority.url) - swh_storage_validate.metadata_authority_add([authority]) + swh_storage.metadata_authority_add([authority]) mock_memory.assert_has_calls( [call([authority]), call([authority]), call([authority])] ) def test_retrying_proxy_swh_storage_metadata_authority_add_failure( - swh_storage_validate, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.metadata_authority_add" ) mock_memory.side_effect = StorageArgumentException( "Refuse to add authority_id always!" 
) - authority = sample_data_model["authority"][0] + authority = sample_data.metadata_authority - swh_storage_validate.metadata_authority_get(authority.type, authority.url) + swh_storage.metadata_authority_get(authority.type, authority.url) with pytest.raises(StorageArgumentException, match="Refuse to add"): - swh_storage_validate.metadata_authority_add([authority]) + swh_storage.metadata_authority_add([authority]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_object_metadata_add( - swh_storage_validate, sample_data_model -): +def test_retrying_proxy_storage_object_metadata_add(swh_storage, sample_data): """Standard object_metadata_add works as before """ - ori_meta = sample_data_model["origin_metadata"][0] - swh_storage_validate.origin_add_one({"url": ori_meta.id}) - swh_storage_validate.metadata_authority_add([sample_data_model["authority"][0]]) - swh_storage_validate.metadata_fetcher_add([sample_data_model["fetcher"][0]]) + origin = sample_data.origin + ori_meta = sample_data.origin_metadata1 + assert origin.url == ori_meta.id + swh_storage.origin_add([origin]) + swh_storage.metadata_authority_add([sample_data.metadata_authority]) + swh_storage.metadata_fetcher_add([sample_data.metadata_fetcher]) - origin_metadata = swh_storage_validate.object_metadata_get( + origin_metadata = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, ori_meta.id, ori_meta.authority ) assert origin_metadata["next_page_token"] is None assert not origin_metadata["results"] - swh_storage_validate.object_metadata_add([ori_meta]) + swh_storage.object_metadata_add([ori_meta]) - origin_metadata = swh_storage_validate.object_metadata_get( + origin_metadata = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, ori_meta.id, ori_meta.authority ) assert origin_metadata def test_retrying_proxy_storage_object_metadata_add_with_retry( - monkeypatch_sleep, - swh_storage_validate, - sample_data_model, - mocker, - fake_hash_collision, + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision, ): """Multiple retries for hash collision and psycopg2 error but finally ok """ - ori_meta = sample_data_model["origin_metadata"][0] - swh_storage_validate.origin_add_one({"url": ori_meta.id}) - swh_storage_validate.metadata_authority_add([sample_data_model["authority"][0]]) - swh_storage_validate.metadata_fetcher_add([sample_data_model["fetcher"][0]]) + origin = sample_data.origin + ori_meta = sample_data.origin_metadata1 + assert origin.url == ori_meta.id + swh_storage.origin_add([origin]) + swh_storage.metadata_authority_add([sample_data.metadata_authority]) + swh_storage.metadata_fetcher_add([sample_data.metadata_fetcher]) mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.object_metadata_add" ) mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("foo bar"), # ok then! 
None, ] # No exception raised as insertion finally came through - swh_storage_validate.object_metadata_add([ori_meta]) + swh_storage.object_metadata_add([ori_meta]) mock_memory.assert_has_calls( [ # 3 calls, as long as error raised call([ori_meta]), call([ori_meta]), call([ori_meta]), ] ) def test_retrying_proxy_swh_storage_object_metadata_add_failure( - swh_storage_validate, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch( "swh.storage.in_memory.InMemoryStorage.object_metadata_add" ) mock_memory.side_effect = StorageArgumentException("Refuse to add always!") - ori_meta = sample_data_model["origin_metadata"][0] - swh_storage_validate.origin_add_one({"url": ori_meta.id}) + origin = sample_data.origin + ori_meta = sample_data.origin_metadata1 + assert origin.url == ori_meta.id + swh_storage.origin_add([origin]) with pytest.raises(StorageArgumentException, match="Refuse to add"): - swh_storage_validate.object_metadata_add([ori_meta]) + swh_storage.object_metadata_add([ori_meta]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_directory_add(swh_storage, sample_data_model): +def test_retrying_proxy_storage_directory_add(swh_storage, sample_data): """Standard directory_add works as before """ - sample_dir = sample_data_model["directory"][0] + sample_dir = sample_data.directory directory = swh_storage.directory_get_random() # no directory assert not directory s = swh_storage.directory_add([sample_dir]) assert s == { "directory:add": 1, } directory_id = swh_storage.directory_get_random() # only 1 assert directory_id == sample_dir.id def test_retrying_proxy_storage_directory_add_with_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision ): """Multiple retries for hash collision and psycopg2 error but finally ok """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.directory_add") mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("directory already inserted"), # ok then! {"directory:add": 1}, ] - sample_dir = sample_data_model["directory"][1] + sample_dir = sample_data.directories[1] directory_id = swh_storage.directory_get_random() # no directory assert not directory_id s = swh_storage.directory_add([sample_dir]) assert s == { "directory:add": 1, } mock_memory.assert_has_calls( [call([sample_dir]), call([sample_dir]), call([sample_dir]),] ) def test_retrying_proxy_swh_storage_directory_add_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.directory_add") mock_memory.side_effect = StorageArgumentException( "Refuse to add directory always!" 
) - sample_dir = sample_data_model["directory"][0] + sample_dir = sample_data.directory directory_id = swh_storage.directory_get_random() # no directory assert not directory_id with pytest.raises(StorageArgumentException, match="Refuse to add"): swh_storage.directory_add([sample_dir]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_revision_add(swh_storage, sample_data_model): +def test_retrying_proxy_storage_revision_add(swh_storage, sample_data): """Standard revision_add works as before """ - sample_rev = sample_data_model["revision"][0] + sample_rev = sample_data.revision revision = next(swh_storage.revision_get([sample_rev.id])) assert not revision s = swh_storage.revision_add([sample_rev]) assert s == { "revision:add": 1, } revision = next(swh_storage.revision_get([sample_rev.id])) assert revision["id"] == sample_rev.id def test_retrying_proxy_storage_revision_add_with_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision ): """Multiple retries for hash collision and psycopg2 error but finally ok """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.revision_add") mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("revision already inserted"), # ok then! {"revision:add": 1}, ] - sample_rev = sample_data_model["revision"][0] + sample_rev = sample_data.revision revision = next(swh_storage.revision_get([sample_rev.id])) assert not revision s = swh_storage.revision_add([sample_rev]) assert s == { "revision:add": 1, } mock_memory.assert_has_calls( [call([sample_rev]), call([sample_rev]), call([sample_rev]),] ) def test_retrying_proxy_swh_storage_revision_add_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.revision_add") mock_memory.side_effect = StorageArgumentException("Refuse to add revision always!") - sample_rev = sample_data_model["revision"][0] + sample_rev = sample_data.revision revision = next(swh_storage.revision_get([sample_rev.id])) assert not revision with pytest.raises(StorageArgumentException, match="Refuse to add"): swh_storage.revision_add([sample_rev]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_release_add(swh_storage, sample_data_model): +def test_retrying_proxy_storage_release_add(swh_storage, sample_data): """Standard release_add works as before """ - sample_rel = sample_data_model["release"][0] + sample_rel = sample_data.release release = next(swh_storage.release_get([sample_rel.id])) assert not release s = swh_storage.release_add([sample_rel]) assert s == { "release:add": 1, } release = next(swh_storage.release_get([sample_rel.id])) assert release["id"] == sample_rel.id def test_retrying_proxy_storage_release_add_with_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision ): """Multiple retries for hash collision and psycopg2 error but finally ok """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.release_add") mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("release already inserted"), # ok then! 
{"release:add": 1}, ] - sample_rel = sample_data_model["release"][0] + sample_rel = sample_data.release release = next(swh_storage.release_get([sample_rel.id])) assert not release s = swh_storage.release_add([sample_rel]) assert s == { "release:add": 1, } mock_memory.assert_has_calls( [call([sample_rel]), call([sample_rel]), call([sample_rel]),] ) def test_retrying_proxy_swh_storage_release_add_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.release_add") mock_memory.side_effect = StorageArgumentException("Refuse to add release always!") - sample_rel = sample_data_model["release"][0] + sample_rel = sample_data.release release = next(swh_storage.release_get([sample_rel.id])) assert not release with pytest.raises(StorageArgumentException, match="Refuse to add"): swh_storage.release_add([sample_rel]) assert mock_memory.call_count == 1 -def test_retrying_proxy_storage_snapshot_add(swh_storage, sample_data_model): +def test_retrying_proxy_storage_snapshot_add(swh_storage, sample_data): """Standard snapshot_add works as before """ - sample_snap = sample_data_model["snapshot"][0] + sample_snap = sample_data.snapshot snapshot = swh_storage.snapshot_get(sample_snap.id) assert not snapshot s = swh_storage.snapshot_add([sample_snap]) assert s == { "snapshot:add": 1, } snapshot = swh_storage.snapshot_get(sample_snap.id) assert snapshot["id"] == sample_snap.id def test_retrying_proxy_storage_snapshot_add_with_retry( - monkeypatch_sleep, swh_storage, sample_data_model, mocker, fake_hash_collision + monkeypatch_sleep, swh_storage, sample_data, mocker, fake_hash_collision ): """Multiple retries for hash collision and psycopg2 error but finally ok """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.snapshot_add") mock_memory.side_effect = [ # first try goes ko fake_hash_collision, # second try goes ko psycopg2.IntegrityError("snapshot already inserted"), # ok then! 
{"snapshot:add": 1}, ] - sample_snap = sample_data_model["snapshot"][0] + sample_snap = sample_data.snapshot snapshot = swh_storage.snapshot_get(sample_snap.id) assert not snapshot s = swh_storage.snapshot_add([sample_snap]) assert s == { "snapshot:add": 1, } mock_memory.assert_has_calls( [call([sample_snap]), call([sample_snap]), call([sample_snap]),] ) def test_retrying_proxy_swh_storage_snapshot_add_failure( - swh_storage, sample_data_model, mocker + swh_storage, sample_data, mocker ): """Unfiltered errors are raising without retry """ mock_memory = mocker.patch("swh.storage.in_memory.InMemoryStorage.snapshot_add") mock_memory.side_effect = StorageArgumentException("Refuse to add snapshot always!") - sample_snap = sample_data_model["snapshot"][0] + sample_snap = sample_data.snapshot snapshot = swh_storage.snapshot_get(sample_snap.id) assert not snapshot with pytest.raises(StorageArgumentException, match="Refuse to add"): swh_storage.snapshot_add([sample_snap]) assert mock_memory.call_count == 1 diff --git a/swh/storage/tests/test_revision_bw_compat.py b/swh/storage/tests/test_revision_bw_compat.py index ca4837b6..83216a34 100644 --- a/swh/storage/tests/test_revision_bw_compat.py +++ b/swh/storage/tests/test_revision_bw_compat.py @@ -1,49 +1,47 @@ # Copyright (C) 2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import attr from swh.core.utils import decode_with_escape from swh.model.model import Revision from swh.storage import get_storage from swh.storage.tests.test_storage import db_transaction def headers_to_db(git_headers): return [[key, decode_with_escape(value)] for key, value in git_headers] -def test_revision_extra_header_in_metadata( - swh_storage_backend_config, sample_data_model -): +def test_revision_extra_header_in_metadata(swh_storage_backend_config, sample_data): storage = get_storage(**swh_storage_backend_config) - rev = sample_data_model["revision"][0] + rev = sample_data.revision md_w_extra = dict( rev.metadata.items(), extra_headers=headers_to_db( [ ["gpgsig", b"test123"], ["mergetag", b"foo\\bar"], ["mergetag", b"\x22\xaf\x89\x80\x01\x00"], ] ), ) bw_rev = attr.evolve(rev, extra_headers=()) object.__setattr__(bw_rev, "metadata", md_w_extra) assert bw_rev.extra_headers == () assert storage.revision_add([bw_rev]) == {"revision:add": 1} # check data in the db are old format with db_transaction(storage) as (_, cur): cur.execute("SELECT metadata, extra_headers FROM revision") metadata, extra_headers = cur.fetchone() assert extra_headers == [] assert metadata == bw_rev.metadata # check the Revision build from revision_get is the original, "new style", Revision assert [Revision.from_dict(x) for x in storage.revision_get([rev.id])] == [rev] diff --git a/swh/storage/tests/test_storage.py b/swh/storage/tests/test_storage.py index 879b4e0f..f806611a 100644 --- a/swh/storage/tests/test_storage.py +++ b/swh/storage/tests/test_storage.py @@ -1,4272 +1,4176 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information -import copy import datetime import inspect import itertools import math import queue import random import threading from collections import defaultdict from contextlib import contextmanager from datetime 
import timedelta from unittest.mock import Mock import attr -import psycopg2 import pytest from hypothesis import given, strategies, settings, HealthCheck from typing import ClassVar, Optional -from swh.model import from_disk, identifiers +from swh.model import from_disk from swh.model.hashutil import hash_to_bytes from swh.model.identifiers import SWHID from swh.model.model import ( Content, - Directory, + MetadataTargetType, Origin, OriginVisit, OriginVisitStatus, + Person, Release, Revision, - SkippedContent, Snapshot, - MetadataTargetType, ) from swh.model.hypothesis_strategies import objects from swh.storage import get_storage from swh.storage.converters import origin_url_to_sha1 as sha1 from swh.storage.exc import HashCollision, StorageArgumentException from swh.storage.interface import StorageInterface from swh.storage.utils import content_hex_hashes, now -from .storage_data import data - @contextmanager def db_transaction(storage): with storage.db() as db: with db.transaction() as cur: yield db, cur -def normalize_entity(entity): - entity = copy.deepcopy(entity) - for key in ("date", "committer_date"): - if key in entity: - entity[key] = identifiers.normalize_timestamp(entity[key]) - return entity - - def transform_entries(dir_, *, prefix=b""): for ent in dir_.entries: yield { "dir_id": dir_.id, "type": ent.type, "target": ent.target, "name": prefix + ent.name, "perms": ent.perms, "status": None, "sha1": None, "sha1_git": None, "sha256": None, "length": None, } def cmpdir(directory): return (directory["type"], directory["dir_id"]) -def short_revision(revision): - return [revision["id"], revision["parents"]] - - def assert_contents_ok( expected_contents, actual_contents, keys_to_check={"sha1", "data"} ): """Assert that a given list of contents matches on a given set of keys. """ for k in keys_to_check: expected_list = set([c.get(k) for c in expected_contents]) actual_list = set([c.get(k) for c in actual_contents]) assert actual_list == expected_list, k def round_to_milliseconds(date): """Round datetime to milliseconds before insertion, so equality doesn't fail after a round-trip through a DB (eg. Cassandra) """ return date.replace(microsecond=(date.microsecond // 1000) * 1000) def test_round_to_milliseconds(): date = now() for (ms, expected_ms) in [(0, 0), (1000, 1000), (555555, 555000), (999500, 999000)]: date = date.replace(microsecond=ms) actual_date = round_to_milliseconds(date) assert actual_date.microsecond == expected_ms class LazyContent(Content): def with_data(self): - return Content.from_dict({**self.to_dict(), "data": data.cont["data"]}) + return Content.from_dict({**self.to_dict(), "data": b"42\n"}) class TestStorage: """Main class for Storage testing. This class is used as-is to test local storage (see TestLocalStorage below) and remote storage (see TestRemoteStorage in test_remote_storage.py. We need to have the two classes inherit from this base class separately to avoid nosetests running the tests from the base class twice. 
""" maxDiff = None # type: ClassVar[Optional[int]] def test_types(self, swh_storage_backend_config): """Checks all methods of StorageInterface are implemented by this backend, and that they have the same signature.""" # Create an instance of the protocol (which cannot be instantiated # directly, so this creates a subclass, then instantiates it) interface = type("_", (StorageInterface,), {})() storage = get_storage(**swh_storage_backend_config) assert "content_add" in dir(interface) missing_methods = [] for meth_name in dir(interface): if meth_name.startswith("_"): continue interface_meth = getattr(interface, meth_name) try: concrete_meth = getattr(storage, meth_name) except AttributeError: if not getattr(interface_meth, "deprecated_endpoint", False): # The backend is missing a (non-deprecated) endpoint missing_methods.append(meth_name) continue expected_signature = inspect.signature(interface_meth) actual_signature = inspect.signature(concrete_meth) assert expected_signature == actual_signature, meth_name assert missing_methods == [] def test_check_config(self, swh_storage): assert swh_storage.check_config(check_write=True) assert swh_storage.check_config(check_write=False) - def test_content_add(self, swh_storage, sample_data_model): - cont = sample_data_model["content"][0] + def test_content_add(self, swh_storage, sample_data): + cont = sample_data.content insertion_start_time = now() actual_result = swh_storage.content_add([cont]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert list(swh_storage.content_get([cont.sha1])) == [ {"sha1": cont.sha1, "data": cont.data} ] expected_cont = attr.evolve(cont, data=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert obj == expected_cont swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 - def test_content_add_from_generator(self, swh_storage, sample_data_model): - cont = sample_data_model["content"][0] + def test_content_add_from_generator(self, swh_storage, sample_data): + cont = sample_data.content def _cnt_gen(): yield cont actual_result = swh_storage.content_add(_cnt_gen()) assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 - def test_content_add_from_lazy_content(self, swh_storage, sample_data_model): - cont = sample_data_model["content"][0] + def test_content_add_from_lazy_content(self, swh_storage, sample_data): + cont = sample_data.content lazy_content = LazyContent.from_dict(cont.to_dict()) insertion_start_time = now() - # bypass the validation proxy for now, to directly put a dict - actual_result = swh_storage.storage.content_add([lazy_content]) + actual_result = swh_storage.content_add([lazy_content]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } # the fact that we retrieve the content object from the storage with # the correct 'data' field ensures it has been 'called' assert list(swh_storage.content_get([cont.sha1])) == [ {"sha1": cont.sha1, "data": cont.data} ] expected_cont = attr.evolve(lazy_content, data=None, ctime=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj 
in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert attr.evolve(obj, ctime=None).to_dict() == expected_cont.to_dict() swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 - def test_content_add_validation(self, swh_storage, sample_data_model): - cont = sample_data_model["content"][0].to_dict() - - with pytest.raises(StorageArgumentException, match="status"): - swh_storage.content_add([{**cont, "status": "absent"}]) - - with pytest.raises(StorageArgumentException, match="status"): - swh_storage.content_add([{**cont, "status": "foobar"}]) - - with pytest.raises(StorageArgumentException, match="(?i)length"): - swh_storage.content_add([{**cont, "length": -2}]) - - with pytest.raises(StorageArgumentException, match="reason"): - swh_storage.content_add([{**cont, "reason": "foobar"}]) - - def test_skipped_content_add_validation(self, swh_storage, sample_data_model): - cont = attr.evolve(sample_data_model["content"][0], data=None).to_dict() - - with pytest.raises(StorageArgumentException, match="status"): - swh_storage.skipped_content_add([{**cont, "status": "visible"}]) - - with pytest.raises(StorageArgumentException, match="reason") as cm: - swh_storage.skipped_content_add([{**cont, "status": "absent"}]) - - if type(cm.value) == psycopg2.IntegrityError: - assert cm.exception.pgcode == psycopg2.errorcodes.NOT_NULL_VIOLATION - - def test_content_get_missing(self, swh_storage, sample_data_model): - cont, cont2 = sample_data_model["content"][:2] + def test_content_get_missing(self, swh_storage, sample_data): + cont, cont2 = sample_data.contents[:2] swh_storage.content_add([cont]) # Query a single missing content results = list(swh_storage.content_get([cont2.sha1])) assert results == [None] # Check content_get does not abort after finding a missing content results = list(swh_storage.content_get([cont.sha1, cont2.sha1])) assert results == [{"sha1": cont.sha1, "data": cont.data}, None] # Check content_get does not discard found content when it finds # a missing content. 
results = list(swh_storage.content_get([cont2.sha1, cont.sha1])) assert results == [None, {"sha1": cont.sha1, "data": cont.data}] - def test_content_add_different_input(self, swh_storage, sample_data_model): - cont, cont2 = sample_data_model["content"][:2] + def test_content_add_different_input(self, swh_storage, sample_data): + cont, cont2 = sample_data.contents[:2] actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 2, "content:add:bytes": cont.length + cont2.length, } - def test_content_add_twice(self, swh_storage, sample_data_model): - cont, cont2 = sample_data_model["content"][:2] + def test_content_add_twice(self, swh_storage, sample_data): + cont, cont2 = sample_data.contents[:2] actual_result = swh_storage.content_add([cont]) assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert len(swh_storage.journal_writer.journal.objects) == 1 actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 1, "content:add:bytes": cont2.length, } assert 2 <= len(swh_storage.journal_writer.journal.objects) <= 3 assert len(swh_storage.content_find(cont.to_dict())) == 1 assert len(swh_storage.content_find(cont2.to_dict())) == 1 - def test_content_add_collision(self, swh_storage, sample_data_model): - cont1 = sample_data_model["content"][0] + def test_content_add_collision(self, swh_storage, sample_data): + cont1 = sample_data.content # create (corrupted) content with same sha1{,_git} but != sha256 sha256_array = bytearray(cont1.sha256) sha256_array[0] += 1 cont1b = attr.evolve(cont1, sha256=bytes(sha256_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git", "blake2s256"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] - def test_content_add_duplicate(self, swh_storage, sample_data_model): - cont = sample_data_model["content"][0] + def test_content_add_duplicate(self, swh_storage, sample_data): + cont = sample_data.content swh_storage.content_add([cont, cont]) assert list(swh_storage.content_get([cont.sha1])) == [ {"sha1": cont.sha1, "data": cont.data} ] - def test_content_update(self, swh_storage, sample_data_model): - cont1 = sample_data_model["content"][0] + def test_content_update(self, swh_storage, sample_data): + cont1 = sample_data.content - if hasattr(swh_storage, "storage"): + if hasattr(swh_storage, "journal_writer"): swh_storage.journal_writer.journal = None # TODO, not supported swh_storage.content_add([cont1]) # alter the sha1_git for example cont1b = attr.evolve( cont1, sha1_git=hash_to_bytes("3a60a5275d0333bf13468e8b3dcab90f4046e654") ) swh_storage.content_update([cont1b.to_dict()], keys=["sha1_git"]) results = swh_storage.content_get_metadata([cont1.sha1]) expected_content = attr.evolve(cont1b, data=None).to_dict() del expected_content["ctime"] assert tuple(results[cont1.sha1]) == (expected_content,) - def test_content_add_metadata(self, swh_storage, sample_data_model): - cont = attr.evolve(sample_data_model["content"][0], data=None, ctime=now()) + def test_content_add_metadata(self, swh_storage, sample_data): + cont = attr.evolve(sample_data.content, data=None, ctime=now()) actual_result = 
swh_storage.content_add_metadata([cont]) assert actual_result == { "content:add": 1, } expected_cont = cont.to_dict() del expected_cont["ctime"] assert tuple(swh_storage.content_get_metadata([cont.sha1])[cont.sha1]) == ( expected_cont, ) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: obj = attr.evolve(obj, ctime=None) assert obj == cont - def test_content_add_metadata_different_input(self, swh_storage, sample_data_model): - contents = sample_data_model["content"][:2] + def test_content_add_metadata_different_input(self, swh_storage, sample_data): + contents = sample_data.contents[:2] cont = attr.evolve(contents[0], data=None, ctime=now()) cont2 = attr.evolve(contents[1], data=None, ctime=now()) actual_result = swh_storage.content_add_metadata([cont, cont2]) assert actual_result == { "content:add": 2, } - def test_content_add_metadata_collision(self, swh_storage, sample_data_model): - cont1 = attr.evolve(sample_data_model["content"][0], data=None, ctime=now()) + def test_content_add_metadata_collision(self, swh_storage, sample_data): + cont1 = attr.evolve(sample_data.content, data=None, ctime=now()) # create (corrupted) content with same sha1{,_git} but != sha256 sha1_git_array = bytearray(cont1.sha256) sha1_git_array[0] += 1 cont1b = attr.evolve(cont1, sha256=bytes(sha1_git_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add_metadata([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git", "blake2s256"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] - def test_skipped_content_add(self, swh_storage, sample_data_model): - contents = sample_data_model["skipped_content"][:2] + def test_skipped_content_add(self, swh_storage, sample_data): + contents = sample_data.skipped_contents[:2] cont = contents[0] cont2 = attr.evolve(contents[1], blake2s256=None) contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [cont.hashes(), cont2.hashes()] actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] - def test_skipped_content_add_missing_hashes(self, swh_storage, sample_data_model): + def test_skipped_content_add_missing_hashes(self, swh_storage, sample_data): cont, cont2 = [ - attr.evolve(c, sha1_git=None) - for c in sample_data_model["skipped_content"][:2] + attr.evolve(c, sha1_git=None) for c in sample_data.skipped_contents[:2] ] contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] - def test_skipped_content_missing_partial_hash(self, swh_storage, sample_data_model): - cont = sample_data_model["skipped_content"][0] + def test_skipped_content_missing_partial_hash(self, swh_storage, 
sample_data): + cont = sample_data.skipped_content cont2 = attr.evolve(cont, sha1_git=None) contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 actual_result = swh_storage.skipped_content_add([cont]) assert actual_result.pop("skipped_content:add") == 1 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [cont2.hashes()] @pytest.mark.property_based @settings(deadline=None) # this test is very slow @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) - def test_content_missing(self, swh_storage, algos): + def test_content_missing(self, swh_storage, sample_data, algos): algos |= {"sha1"} - cont = Content.from_dict(data.cont2) - missing_cont = SkippedContent.from_dict(data.missing_cont) - swh_storage.content_add([cont]) + content, missing_content = [sample_data.content2, sample_data.skipped_content] + swh_storage.content_add([content]) - test_contents = [cont.to_dict()] + test_contents = [content.to_dict()] missing_per_hash = defaultdict(list) for i in range(256): - test_content = missing_cont.to_dict() + test_content = missing_content.to_dict() for hash in algos: test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, key_hash=hash) ) == set(missing_per_hash[hash]) @pytest.mark.property_based @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) - def test_content_missing_unknown_algo(self, swh_storage, algos): + def test_content_missing_unknown_algo(self, swh_storage, sample_data, algos): algos |= {"sha1"} - cont = Content.from_dict(data.cont2) - missing_cont = SkippedContent.from_dict(data.missing_cont) - swh_storage.content_add([cont]) + content, missing_content = [sample_data.content2, sample_data.skipped_content] + swh_storage.content_add([content]) - test_contents = [cont.to_dict()] + test_contents = [content.to_dict()] missing_per_hash = defaultdict(list) for i in range(16): - test_content = missing_cont.to_dict() + test_content = missing_content.to_dict() for hash in algos: test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_content["nonexisting_algo"] = b"\x00" test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, key_hash=hash) ) == set(missing_per_hash[hash]) - def test_content_missing_per_sha1(self, swh_storage, sample_data_model): + def test_content_missing_per_sha1(self, swh_storage, sample_data): # given - cont = sample_data_model["content"][0] - missing_cont = sample_data_model["skipped_content"][0] + cont = sample_data.content + missing_cont = sample_data.skipped_content swh_storage.content_add([cont]) # when gen = swh_storage.content_missing_per_sha1([cont.sha1, missing_cont.sha1]) # then assert list(gen) == [missing_cont.sha1] - def test_content_missing_per_sha1_git(self, swh_storage, sample_data_model): - cont, cont2 = sample_data_model["content"][:2] - missing_cont = sample_data_model["skipped_content"][0] + def test_content_missing_per_sha1_git(self, 
swh_storage, sample_data): + cont, cont2 = sample_data.contents[:2] + missing_cont = sample_data.skipped_content swh_storage.content_add([cont, cont2]) contents = [cont.sha1_git, cont2.sha1_git, missing_cont.sha1_git] missing_contents = swh_storage.content_missing_per_sha1_git(contents) assert list(missing_contents) == [missing_cont.sha1_git] def test_content_get_partition(self, swh_storage, swh_contents): """content_get_partition paginates results if limit exceeded""" expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] actual_contents = [] for i in range(16): actual_result = swh_storage.content_get_partition(i, 16) assert actual_result["next_page_token"] is None actual_contents.extend(actual_result["contents"]) assert_contents_ok(expected_contents, actual_contents, ["sha1"]) def test_content_get_partition_full(self, swh_storage, swh_contents): """content_get_partition for a single partition returns all available contents""" expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] actual_result = swh_storage.content_get_partition(0, 1) assert actual_result["next_page_token"] is None actual_contents = actual_result["contents"] assert_contents_ok(expected_contents, actual_contents, ["sha1"]) def test_content_get_partition_empty(self, swh_storage, swh_contents): """content_get_partition when at least one of the partitions is empty""" expected_contents = { cont.sha1 for cont in swh_contents if cont.status != "absent" } # nb_partitions = smallest power of 2 such that at least one of # the partitions is empty nb_partitions = 1 << math.floor(math.log2(len(swh_contents)) + 1) seen_sha1s = [] for i in range(nb_partitions): actual_result = swh_storage.content_get_partition( i, nb_partitions, limit=len(swh_contents) + 1 ) for cont in actual_result["contents"]: seen_sha1s.append(cont["sha1"]) # Limit is higher than the max number of results assert actual_result["next_page_token"] is None assert set(seen_sha1s) == expected_contents def test_content_get_partition_limit_none(self, swh_storage): """content_get_partition call with wrong limit input should fail""" with pytest.raises(StorageArgumentException) as e: swh_storage.content_get_partition(1, 16, limit=None) assert e.value.args == ("limit should not be None",) def test_generate_content_get_partition_pagination(self, swh_storage, swh_contents): """content_get_partition returns contents within range provided""" expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] # retrieve contents actual_contents = [] for i in range(4): page_token = None while True: actual_result = swh_storage.content_get_partition( i, 4, limit=3, page_token=page_token ) actual_contents.extend(actual_result["contents"]) page_token = actual_result["next_page_token"] if page_token is None: break assert_contents_ok(expected_contents, actual_contents, ["sha1"]) - def test_content_get_metadata(self, swh_storage, sample_data_model): - cont1, cont2 = sample_data_model["content"][:2] + def test_content_get_metadata(self, swh_storage, sample_data): + cont1, cont2 = sample_data.contents[:2] swh_storage.content_add([cont1, cont2]) actual_md = swh_storage.content_get_metadata([cont1.sha1, cont2.sha1]) # we only retrieve the metadata so no data nor ctime within expected_cont1, expected_cont2 = [ attr.evolve(c, data=None).to_dict() for c in [cont1, cont2] ] expected_cont1.pop("ctime") expected_cont2.pop("ctime") assert tuple(actual_md[cont1.sha1]) == (expected_cont1,) assert tuple(actual_md[cont2.sha1]) == 
(expected_cont2,) assert len(actual_md.keys()) == 2 - def test_content_get_metadata_missing_sha1(self, swh_storage, sample_data_model): - cont1, cont2 = sample_data_model["content"][:2] - missing_cont = sample_data_model["skipped_content"][0] + def test_content_get_metadata_missing_sha1(self, swh_storage, sample_data): + cont1, cont2 = sample_data.contents[:2] + missing_cont = sample_data.skipped_content swh_storage.content_add([cont1, cont2]) actual_contents = swh_storage.content_get_metadata([missing_cont.sha1]) assert len(actual_contents) == 1 assert tuple(actual_contents[missing_cont.sha1]) == () - def test_content_get_random(self, swh_storage, sample_data_model): - cont, cont2 = sample_data_model["content"][:2] - cont3 = sample_data_model["content_metadata"][0] + def test_content_get_random(self, swh_storage, sample_data): + cont, cont2, cont3 = sample_data.contents[:3] swh_storage.content_add([cont, cont2, cont3]) assert swh_storage.content_get_random() in { cont.sha1_git, cont2.sha1_git, cont3.sha1_git, } - def test_directory_add(self, swh_storage, sample_data_model): - directory = sample_data_model["directory"][1] + def test_directory_add(self, swh_storage, sample_data): + directory = sample_data.directories[1] init_missing = list(swh_storage.directory_missing([directory.id])) assert [directory.id] == init_missing actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("directory", Directory.from_dict(data.dir)) + ("directory", directory) ] actual_data = list(swh_storage.directory_ls(directory.id)) expected_data = list(transform_entries(directory)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) after_missing = list(swh_storage.directory_missing([directory.id])) assert after_missing == [] swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["directory"] == 1 - def test_directory_add_from_generator(self, swh_storage, sample_data_model): - directory = sample_data_model["directory"][1] + def test_directory_add_from_generator(self, swh_storage, sample_data): + directory = sample_data.directories[1] def _dir_gen(): yield directory actual_result = swh_storage.directory_add(directories=_dir_gen()) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", directory) ] swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["directory"] == 1 - def test_directory_add_validation(self, swh_storage, sample_data_model): - directory = sample_data_model["directory"][1] - dir_ = directory.to_dict() - dir_["entries"][0]["type"] = "foobar" - - with pytest.raises(StorageArgumentException, match="type.*foobar"): - swh_storage.directory_add([dir_]) - - dir_ = directory.to_dict() - del dir_["entries"][0]["target"] - - with pytest.raises(StorageArgumentException, match="target") as cm: - swh_storage.directory_add([dir_]) - - if type(cm.value) == psycopg2.IntegrityError: - assert cm.value.pgcode == psycopg2.errorcodes.NOT_NULL_VIOLATION - - def test_directory_add_twice(self, swh_storage, sample_data_model): - directory = sample_data_model["directory"][1] + def test_directory_add_twice(self, swh_storage, sample_data): + directory = sample_data.directories[1] actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", directory) ] actual_result = 
swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 0} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", directory) ] - def test_directory_get_recursive(self, swh_storage, sample_data_model): - dir1, dir2, dir3 = sample_data_model["directory"][:3] + def test_directory_get_recursive(self, swh_storage, sample_data): + dir1, dir2, dir3 = sample_data.directories[:3] init_missing = list(swh_storage.directory_missing([dir1.id])) assert init_missing == [dir1.id] actual_result = swh_storage.directory_add([dir1, dir2, dir3]) assert actual_result == {"directory:add": 3} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", dir1), ("directory", dir2), ("directory", dir3), ] # List directory containing a file and an unknown subdirectory actual_data = list(swh_storage.directory_ls(dir1.id, recursive=True)) expected_data = list(transform_entries(dir1)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory containing a file and an unknown subdirectory actual_data = list(swh_storage.directory_ls(dir2.id, recursive=True)) expected_data = list(transform_entries(dir2)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory containing a known subdirectory, entries should # be both those of the directory and of the subdir actual_data = list(swh_storage.directory_ls(dir3.id, recursive=True)) expected_data = list( itertools.chain( transform_entries(dir3), transform_entries(dir2, prefix=b"subdir/"), ) ) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) - def test_directory_get_non_recursive(self, swh_storage, sample_data_model): - dir1, dir2, dir3 = sample_data_model["directory"][:3] + def test_directory_get_non_recursive(self, swh_storage, sample_data): + dir1, dir2, dir3 = sample_data.directories[:3] init_missing = list(swh_storage.directory_missing([dir1.id])) assert init_missing == [dir1.id] actual_result = swh_storage.directory_add([dir1, dir2, dir3]) assert actual_result == {"directory:add": 3} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", dir1), ("directory", dir2), ("directory", dir3), ] # List directory containing a file and an unknown subdirectory actual_data = list(swh_storage.directory_ls(dir1.id)) expected_data = list(transform_entries(dir1)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory containing a single file actual_data = list(swh_storage.directory_ls(dir2.id)) expected_data = list(transform_entries(dir2)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) # List directory containing a known subdirectory, entries should # only be those of the parent directory, not of the subdir actual_data = list(swh_storage.directory_ls(dir3.id)) expected_data = list(transform_entries(dir3)) assert sorted(expected_data, key=cmpdir) == sorted(actual_data, key=cmpdir) - def test_directory_entry_get_by_path(self, swh_storage, sample_data_model): - cont = sample_data_model["content"][0] - dir1, dir2, dir3, dir4 = sample_data_model["directory"][:4] + def test_directory_entry_get_by_path(self, swh_storage, sample_data): + cont = sample_data.content + dir1, dir2, dir3, dir4, dir5 = sample_data.directories[:5] # given init_missing = list(swh_storage.directory_missing([dir3.id])) assert init_missing == [dir3.id] actual_result = swh_storage.directory_add([dir3, dir4]) assert actual_result == {"directory:add": 2} expected_entries = [ { "dir_id": 
dir3.id, "name": b"foo", "type": "file", "target": cont.sha1_git, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.content, "length": None, }, { "dir_id": dir3.id, "name": b"subdir", "type": "dir", "target": dir2.id, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.directory, "length": None, }, { "dir_id": dir3.id, "name": b"hello", "type": "file", - "target": b"12345678901234567890", + "target": dir5.id, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.content, "length": None, }, ] # when (all must be found here) for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( dir3.id, [entry.name] ) assert actual_entry == expected_entry # same, but deeper for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( dir4.id, [b"subdir1", entry.name] ) expected_entry = expected_entry.copy() expected_entry["name"] = b"subdir1/" + expected_entry["name"] assert actual_entry == expected_entry - # when (nothing should be found here since data.dir is not persisted.) + # when (nothing should be found here since `dir` is not persisted.) for entry in dir2.entries: actual_entry = swh_storage.directory_entry_get_by_path( dir2.id, [entry.name] ) assert actual_entry is None - def test_directory_get_random(self, swh_storage, sample_data_model): - dir1, dir2, dir3 = sample_data_model["directory"][:3] + def test_directory_get_random(self, swh_storage, sample_data): + dir1, dir2, dir3 = sample_data.directories[:3] swh_storage.directory_add([dir1, dir2, dir3]) assert swh_storage.directory_get_random() in { dir1.id, dir2.id, dir3.id, } - def test_revision_add(self, swh_storage): - init_missing = swh_storage.revision_missing([data.revision["id"]]) - assert list(init_missing) == [data.revision["id"]] + def test_revision_add(self, swh_storage, sample_data): + revision = sample_data.revision + init_missing = swh_storage.revision_missing([revision.id]) + assert list(init_missing) == [revision.id] - actual_result = swh_storage.revision_add([data.revision]) + actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 1} - end_missing = swh_storage.revision_missing([data.revision["id"]]) + end_missing = swh_storage.revision_missing([revision.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ - ("revision", Revision.from_dict(data.revision)) + ("revision", revision) ] # already there so nothing added - actual_result = swh_storage.revision_add([data.revision]) + actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["revision"] == 1 - def test_revision_add_from_generator(self, swh_storage): + def test_revision_add_from_generator(self, swh_storage, sample_data): + revision = sample_data.revision + def _rev_gen(): - yield data.revision + yield revision actual_result = swh_storage.revision_add(_rev_gen()) assert actual_result == {"revision:add": 1} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["revision"] == 1 - def test_revision_add_validation(self, swh_storage): - rev = copy.deepcopy(data.revision) - rev["date"]["offset"] = 2 ** 16 - - with pytest.raises(StorageArgumentException, match="offset") as cm: - swh_storage.revision_add([rev]) - - if 
type(cm.value) == psycopg2.DataError: - assert cm.value.pgcode == psycopg2.errorcodes.NUMERIC_VALUE_OUT_OF_RANGE - - rev = copy.deepcopy(data.revision) - rev["committer_date"]["offset"] = 2 ** 16 - - with pytest.raises(StorageArgumentException, match="offset") as cm: - swh_storage.revision_add([rev]) - - if type(cm.value) == psycopg2.DataError: - assert cm.value.pgcode == psycopg2.errorcodes.NUMERIC_VALUE_OUT_OF_RANGE - - rev = copy.deepcopy(data.revision) - rev["type"] = "foobar" + def test_revision_add_twice(self, swh_storage, sample_data): + revision, revision2 = sample_data.revisions[:2] - with pytest.raises(StorageArgumentException, match="(?i)type") as cm: - swh_storage.revision_add([rev]) - - if type(cm.value) == psycopg2.DataError: - assert cm.value.pgcode == psycopg2.errorcodes.INVALID_TEXT_REPRESENTATION - - def test_revision_add_twice(self, swh_storage): - actual_result = swh_storage.revision_add([data.revision]) + actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("revision", Revision.from_dict(data.revision)) + ("revision", revision) ] - actual_result = swh_storage.revision_add([data.revision, data.revision2]) + actual_result = swh_storage.revision_add([revision, revision2]) assert actual_result == {"revision:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("revision", Revision.from_dict(data.revision)), - ("revision", Revision.from_dict(data.revision2)), + ("revision", revision), + ("revision", revision2), ] - def test_revision_add_name_clash(self, swh_storage): - revision1 = data.revision - revision2 = data.revision2 - - revision1["author"] = { - "fullname": b"John Doe ", - "name": b"John Doe", - "email": b"john.doe@example.com", - } - revision2["author"] = { - "fullname": b"John Doe ", - "name": b"John Doe ", - "email": b"john.doe@example.com ", - } + def test_revision_add_name_clash(self, swh_storage, sample_data): + revision, revision2 = sample_data.revisions[:2] + + revision1 = attr.evolve( + revision, + author=Person( + fullname=b"John Doe ", + name=b"John Doe", + email=b"john.doe@example.com", + ), + ) + revision2 = attr.evolve( + revision2, + author=Person( + fullname=b"John Doe ", + name=b"John Doe ", + email=b"john.doe@example.com ", + ), + ) actual_result = swh_storage.revision_add([revision1, revision2]) assert actual_result == {"revision:add": 2} - def test_revision_get_order(self, swh_storage): - add_result = swh_storage.revision_add([data.revision, data.revision2]) + def test_revision_get_order(self, swh_storage, sample_data): + revision, revision2 = sample_data.revisions[:2] + + add_result = swh_storage.revision_add([revision, revision2]) assert add_result == {"revision:add": 2} # order 1 - res1 = swh_storage.revision_get([data.revision["id"], data.revision2["id"]]) - assert list(res1) == [data.revision, data.revision2] + res1 = swh_storage.revision_get([revision.id, revision2.id]) - # order 2 - res2 = swh_storage.revision_get([data.revision2["id"], data.revision["id"]]) - assert list(res2) == [data.revision2, data.revision] + assert [Revision.from_dict(r) for r in res1] == [revision, revision2] - def test_revision_log(self, swh_storage): - # given - # data.revision4 -is-child-of-> data.revision3 - swh_storage.revision_add([data.revision3, data.revision4]) + # order 2 + res2 = swh_storage.revision_get([revision2.id, revision.id]) + assert [Revision.from_dict(r) for r in res2] == [revision2, revision] - # when - actual_results = 
list(swh_storage.revision_log([data.revision4["id"]])) + def test_revision_log(self, swh_storage, sample_data): + revision1, revision2, revision3, revision4 = sample_data.revisions[:4] - # hack: ids generated - for actual_result in actual_results: - if "id" in actual_result["author"]: - del actual_result["author"]["id"] - if "id" in actual_result["committer"]: - del actual_result["committer"]["id"] + # rev4 -is-child-of-> rev3 -> rev1, (rev2 -> rev1) + swh_storage.revision_add([revision1, revision2, revision3, revision4]) - assert len(actual_results) == 2 # rev4 -child-> rev3 - assert actual_results[0] == normalize_entity(data.revision4) - assert actual_results[1] == normalize_entity(data.revision3) + # when + results = list(swh_storage.revision_log([revision4.id])) - assert list(swh_storage.journal_writer.journal.objects) == [ - ("revision", Revision.from_dict(data.revision3)), - ("revision", Revision.from_dict(data.revision4)), - ] + # for comparison purposes + actual_results = [Revision.from_dict(r) for r in results] + assert len(actual_results) == 4 # rev4 -child-> rev3 -> rev1, (rev2 -> rev1) + assert actual_results == [revision4, revision3, revision1, revision2] - def test_revision_log_with_limit(self, swh_storage): - # given - # data.revision4 -is-child-of-> data.revision3 - swh_storage.revision_add([data.revision3, data.revision4]) - actual_results = list(swh_storage.revision_log([data.revision4["id"]], 1)) + def test_revision_log_with_limit(self, swh_storage, sample_data): + revision1, revision2, revision3, revision4 = sample_data.revisions[:4] - # hack: ids generated - for actual_result in actual_results: - if "id" in actual_result["author"]: - del actual_result["author"]["id"] - if "id" in actual_result["committer"]: - del actual_result["committer"]["id"] + # revision4 -is-child-of-> revision3 + swh_storage.revision_add([revision3, revision4]) + results = list(swh_storage.revision_log([revision4.id], 1)) + actual_results = [Revision.from_dict(r) for r in results] assert len(actual_results) == 1 - assert actual_results[0] == data.revision4 + assert actual_results[0] == revision4 - def test_revision_log_unknown_revision(self, swh_storage): - rev_log = list(swh_storage.revision_log([data.revision["id"]])) + def test_revision_log_unknown_revision(self, swh_storage, sample_data): + revision = sample_data.revision + rev_log = list(swh_storage.revision_log([revision.id])) assert rev_log == [] - def test_revision_shortlog(self, swh_storage): - # given - # data.revision4 -is-child-of-> data.revision3 - swh_storage.revision_add([data.revision3, data.revision4]) + def test_revision_shortlog(self, swh_storage, sample_data): + revision1, revision2, revision3, revision4 = sample_data.revisions[:4] - # when - actual_results = list(swh_storage.revision_shortlog([data.revision4["id"]])) + # rev4 -is-child-of-> rev3 -> (rev1, rev2); rev2 -> rev1 + swh_storage.revision_add([revision1, revision2, revision3, revision4]) - assert len(actual_results) == 2 # rev4 -child-> rev3 - assert list(actual_results[0]) == short_revision(data.revision4) - assert list(actual_results[1]) == short_revision(data.revision3) + results = list(swh_storage.revision_shortlog([revision4.id])) + actual_results = [[id, tuple(parents)] for (id, parents) in results] - def test_revision_shortlog_with_limit(self, swh_storage): - # given - # data.revision4 -is-child-of-> data.revision3 - swh_storage.revision_add([data.revision3, data.revision4]) - actual_results = list(swh_storage.revision_shortlog([data.revision4["id"]], 1)) + 
assert len(actual_results) == 4 + assert actual_results == [ + [revision4.id, revision4.parents], + [revision3.id, revision3.parents], + [revision1.id, revision1.parents], + [revision2.id, revision2.parents], + ] + + def test_revision_shortlog_with_limit(self, swh_storage, sample_data): + revision1, revision2, revision3, revision4 = sample_data.revisions[:4] + + # revision4 -is-child-of-> revision3 + swh_storage.revision_add([revision1, revision2, revision3, revision4]) + results = list(swh_storage.revision_shortlog([revision4.id], 1)) + actual_results = [[id, tuple(parents)] for (id, parents) in results] assert len(actual_results) == 1 - assert list(actual_results[0]) == short_revision(data.revision4) + assert list(actual_results[0]) == [revision4.id, revision4.parents] - def test_revision_get(self, swh_storage): - swh_storage.revision_add([data.revision]) + def test_revision_get(self, swh_storage, sample_data): + revision, revision2 = sample_data.revisions[:2] - actual_revisions = list( - swh_storage.revision_get([data.revision["id"], data.revision2["id"]]) - ) + swh_storage.revision_add([revision]) - # when - if "id" in actual_revisions[0]["author"]: - del actual_revisions[0]["author"]["id"] # hack: ids are generated - if "id" in actual_revisions[0]["committer"]: - del actual_revisions[0]["committer"]["id"] + actual_revisions = list(swh_storage.revision_get([revision.id, revision2.id])) assert len(actual_revisions) == 2 - assert actual_revisions[0] == normalize_entity(data.revision) + assert Revision.from_dict(actual_revisions[0]) == revision assert actual_revisions[1] is None - def test_revision_get_no_parents(self, swh_storage): - swh_storage.revision_add([data.revision3]) + def test_revision_get_no_parents(self, swh_storage, sample_data): + revision = sample_data.revision + swh_storage.revision_add([revision]) - get = list(swh_storage.revision_get([data.revision3["id"]])) + get = list(swh_storage.revision_get([revision.id])) assert len(get) == 1 - assert get[0]["parents"] == () # no parents on this one + assert revision.parents == () + assert tuple(get[0]["parents"]) == () # no parents on this one + + def test_revision_get_random(self, swh_storage, sample_data): + revision1, revision2, revision3 = sample_data.revisions[:3] - def test_revision_get_random(self, swh_storage): - swh_storage.revision_add([data.revision, data.revision2, data.revision3]) + swh_storage.revision_add([revision1, revision2, revision3]) assert swh_storage.revision_get_random() in { - data.revision["id"], - data.revision2["id"], - data.revision3["id"], + revision1.id, + revision2.id, + revision3.id, } - def test_release_add(self, swh_storage): - init_missing = swh_storage.release_missing( - [data.release["id"], data.release2["id"]] - ) - assert [data.release["id"], data.release2["id"]] == list(init_missing) + def test_release_add(self, swh_storage, sample_data): + release, release2 = sample_data.releases[:2] + + init_missing = swh_storage.release_missing([release.id, release2.id]) + assert list(init_missing) == [release.id, release2.id] - actual_result = swh_storage.release_add([data.release, data.release2]) + actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 2} - end_missing = swh_storage.release_missing( - [data.release["id"], data.release2["id"]] - ) + end_missing = swh_storage.release_missing([release.id, release2.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ - ("release", Release.from_dict(data.release)), - 
("release", Release.from_dict(data.release2)), + ("release", release), + ("release", release2), ] # already present so nothing added - actual_result = swh_storage.release_add([data.release, data.release2]) + actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["release"] == 2 - def test_release_add_from_generator(self, swh_storage): + def test_release_add_from_generator(self, swh_storage, sample_data): + release, release2 = sample_data.releases[:2] + def _rel_gen(): - yield data.release - yield data.release2 + yield release + yield release2 actual_result = swh_storage.release_add(_rel_gen()) assert actual_result == {"release:add": 2} assert list(swh_storage.journal_writer.journal.objects) == [ - ("release", Release.from_dict(data.release)), - ("release", Release.from_dict(data.release2)), + ("release", release), + ("release", release2), ] swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["release"] == 2 - def test_release_add_no_author_date(self, swh_storage): - release = data.release - - release["author"] = None - release["date"] = None + def test_release_add_no_author_date(self, swh_storage, sample_data): + full_release = sample_data.release + release = attr.evolve(full_release, author=None, date=None) actual_result = swh_storage.release_add([release]) assert actual_result == {"release:add": 1} - end_missing = swh_storage.release_missing([data.release["id"]]) + end_missing = swh_storage.release_missing([release.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ - ("release", Release.from_dict(release)) + ("release", release) ] - def test_release_add_validation(self, swh_storage): - rel = copy.deepcopy(data.release) - rel["date"]["offset"] = 2 ** 16 + def test_release_add_twice(self, swh_storage, sample_data): + release, release2 = sample_data.releases[:2] - with pytest.raises(StorageArgumentException, match="offset") as cm: - swh_storage.release_add([rel]) - - if type(cm.value) == psycopg2.DataError: - assert cm.value.pgcode == psycopg2.errorcodes.NUMERIC_VALUE_OUT_OF_RANGE - - rel = copy.deepcopy(data.release) - rel["author"] = None - - with pytest.raises(StorageArgumentException, match="date") as cm: - swh_storage.release_add([rel]) - - if type(cm.value) == psycopg2.IntegrityError: - assert cm.value.pgcode == psycopg2.errorcodes.CHECK_VIOLATION - - def test_release_add_validation_type(self, swh_storage): - rel = copy.deepcopy(data.release) - - rel["date"]["offset"] = "toto" - with pytest.raises(StorageArgumentException): - swh_storage.release_add([rel]) - - def test_release_add_twice(self, swh_storage): - actual_result = swh_storage.release_add([data.release]) + actual_result = swh_storage.release_add([release]) assert actual_result == {"release:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("release", Release.from_dict(data.release)) + ("release", release) ] - actual_result = swh_storage.release_add( - [data.release, data.release2, data.release, data.release2] - ) + actual_result = swh_storage.release_add([release, release2, release, release2]) assert actual_result == {"release:add": 1} assert set(swh_storage.journal_writer.journal.objects) == set( - [ - ("release", Release.from_dict(data.release)), - ("release", Release.from_dict(data.release2)), - ] + [("release", release), ("release", release2),] ) - def test_release_add_name_clash(self, swh_storage): - release1 = 
data.release.copy() - release2 = data.release2.copy() + def test_release_add_name_clash(self, swh_storage, sample_data): + release, release2 = [ + attr.evolve( + c, + author=Person( + fullname=b"John Doe ", + name=b"John Doe", + email=b"john.doe@example.com", + ), + ) + for c in sample_data.releases[:2] + ] - release1["author"] = { - "fullname": b"John Doe ", - "name": b"John Doe", - "email": b"john.doe@example.com", - } - release2["author"] = { - "fullname": b"John Doe ", - "name": b"John Doe ", - "email": b"john.doe@example.com ", - } - actual_result = swh_storage.release_add([release1, release2]) + actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 2} - def test_release_get(self, swh_storage): + def test_release_get(self, swh_storage, sample_data): + release, release2, release3 = sample_data.releases[:3] + # given - swh_storage.release_add([data.release, data.release2]) + swh_storage.release_add([release, release2]) # when - actual_releases = list( - swh_storage.release_get([data.release["id"], data.release2["id"]]) - ) + releases = list(swh_storage.release_get([release.id, release2.id])) + actual_releases = [Release.from_dict(r) for r in releases] # then - for actual_release in actual_releases: - if "id" in actual_release["author"]: - del actual_release["author"]["id"] # hack: ids are generated - - assert [normalize_entity(data.release), normalize_entity(data.release2)] == [ - actual_releases[0], - actual_releases[1], - ] - - unknown_releases = list(swh_storage.release_get([data.release3["id"]])) + assert actual_releases == [release, release2] + unknown_releases = list(swh_storage.release_get([release3.id])) assert unknown_releases[0] is None - def test_release_get_order(self, swh_storage): - add_result = swh_storage.release_add([data.release, data.release2]) + def test_release_get_order(self, swh_storage, sample_data): + release, release2 = sample_data.releases[:2] + + add_result = swh_storage.release_add([release, release2]) assert add_result == {"release:add": 2} # order 1 - res1 = swh_storage.release_get([data.release["id"], data.release2["id"]]) - assert list(res1) == [data.release, data.release2] + res1 = swh_storage.release_get([release.id, release2.id]) + assert list(res1) == [release.to_dict(), release2.to_dict()] # order 2 - res2 = swh_storage.release_get([data.release2["id"], data.release["id"]]) - assert list(res2) == [data.release2, data.release] + res2 = swh_storage.release_get([release2.id, release.id]) + assert list(res2) == [release2.to_dict(), release.to_dict()] - def test_release_get_random(self, swh_storage): - swh_storage.release_add([data.release, data.release2, data.release3]) + def test_release_get_random(self, swh_storage, sample_data): + release, release2, release3 = sample_data.releases[:3] + + swh_storage.release_add([release, release2, release3]) assert swh_storage.release_get_random() in { - data.release["id"], - data.release2["id"], - data.release3["id"], + release.id, + release2.id, + release3.id, } - def test_origin_add_one(self, swh_storage): - origin0 = swh_storage.origin_get(data.origin) - assert origin0 is None - - id = swh_storage.origin_add_one(data.origin) - - actual_origin = swh_storage.origin_get({"url": data.origin["url"]}) - assert actual_origin["url"] == data.origin["url"] + def test_origin_add(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dict, origin2_dict = [o.to_dict() for o in [origin, origin2]] - id2 = swh_storage.origin_add_one(data.origin) + 
assert swh_storage.origin_get([origin_dict])[0] is None - assert id == id2 - - def test_origin_add(self, swh_storage): - origin0 = swh_storage.origin_get([data.origin])[0] - assert origin0 is None - - stats = swh_storage.origin_add([data.origin, data.origin2]) + stats = swh_storage.origin_add([origin, origin2]) assert stats == {"origin:add": 2} - actual_origin = swh_storage.origin_get([{"url": data.origin["url"],}])[0] - assert actual_origin["url"] == data.origin["url"] + actual_origin = swh_storage.origin_get([origin_dict])[0] + assert actual_origin["url"] == origin.url - actual_origin2 = swh_storage.origin_get([{"url": data.origin2["url"],}])[0] - assert actual_origin2["url"] == data.origin2["url"] + actual_origin2 = swh_storage.origin_get([origin2_dict])[0] + assert actual_origin2["url"] == origin2.url assert set(swh_storage.journal_writer.journal.objects) == set( - [ - ("origin", Origin.from_dict(actual_origin)), - ("origin", Origin.from_dict(actual_origin2)), - ] + [("origin", origin), ("origin", origin2),] ) swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["origin"] == 2 - def test_origin_add_from_generator(self, swh_storage): + def test_origin_add_from_generator(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dict, origin2_dict = [o.to_dict() for o in [origin, origin2]] + def _ori_gen(): - yield data.origin - yield data.origin2 + yield origin + yield origin2 stats = swh_storage.origin_add(_ori_gen()) assert stats == {"origin:add": 2} - actual_origin = swh_storage.origin_get([{"url": data.origin["url"],}])[0] - assert actual_origin["url"] == data.origin["url"] + actual_origin = swh_storage.origin_get([origin_dict])[0] + assert actual_origin["url"] == origin.url - actual_origin2 = swh_storage.origin_get([{"url": data.origin2["url"],}])[0] - assert actual_origin2["url"] == data.origin2["url"] - - if "id" in actual_origin: - del actual_origin["id"] - del actual_origin2["id"] + actual_origin2 = swh_storage.origin_get([origin2_dict])[0] + assert actual_origin2["url"] == origin2.url assert set(swh_storage.journal_writer.journal.objects) == set( - [ - ("origin", Origin.from_dict(actual_origin)), - ("origin", Origin.from_dict(actual_origin2)), - ] + [("origin", origin), ("origin", origin2),] ) swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["origin"] == 2 - def test_origin_add_twice(self, swh_storage): - add1 = swh_storage.origin_add([data.origin, data.origin2]) + def test_origin_add_twice(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dict, origin2_dict = [o.to_dict() for o in [origin, origin2]] + + add1 = swh_storage.origin_add([origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( - [ - ("origin", Origin.from_dict(data.origin)), - ("origin", Origin.from_dict(data.origin2)), - ] + [("origin", origin), ("origin", origin2),] ) assert add1 == {"origin:add": 2} - add2 = swh_storage.origin_add([data.origin, data.origin2]) + add2 = swh_storage.origin_add([origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( - [ - ("origin", Origin.from_dict(data.origin)), - ("origin", Origin.from_dict(data.origin2)), - ] + [("origin", origin), ("origin", origin2),] ) assert add2 == {"origin:add": 0} - def test_origin_add_validation(self, swh_storage): - """Incorrect formatted origin should fail the validation + def test_origin_get_legacy(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dict, 
origin2_dict = [o.to_dict() for o in [origin, origin2]] - """ - with pytest.raises(StorageArgumentException, match="url"): - swh_storage.origin_add([{}]) - with pytest.raises( - StorageArgumentException, match="unexpected keyword argument" - ): - swh_storage.origin_add([{"ul": "mistyped url key"}]) + assert swh_storage.origin_get(origin_dict) is None + swh_storage.origin_add([origin]) - def test_origin_get_legacy(self, swh_storage): - assert swh_storage.origin_get(data.origin) is None - swh_storage.origin_add_one(data.origin) + actual_origin0 = swh_storage.origin_get(origin_dict) + assert actual_origin0["url"] == origin.url - actual_origin0 = swh_storage.origin_get({"url": data.origin["url"]}) - assert actual_origin0["url"] == data.origin["url"] + def test_origin_get(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dict, origin2_dict = [o.to_dict() for o in [origin, origin2]] - def test_origin_get(self, swh_storage): - assert swh_storage.origin_get(data.origin) is None - assert swh_storage.origin_get([data.origin]) == [None] - swh_storage.origin_add_one(data.origin) + assert swh_storage.origin_get(origin_dict) is None + assert swh_storage.origin_get([origin_dict]) == [None] + swh_storage.origin_add([origin]) - actual_origin0 = swh_storage.origin_get([{"url": data.origin["url"]}]) - assert len(actual_origin0) == 1 - assert actual_origin0[0]["url"] == data.origin["url"] + actual_origins = swh_storage.origin_get([origin_dict]) + assert len(actual_origins) == 1 - actual_origins = swh_storage.origin_get( - [{"url": data.origin["url"]}, {"url": "not://exists"}] - ) - assert actual_origins == [{"url": data.origin["url"]}, None] + actual_origin0 = swh_storage.origin_get(origin_dict) + assert actual_origin0 == actual_origins[0] + assert actual_origin0["url"] == origin.url + + actual_origins = swh_storage.origin_get([origin_dict, {"url": "not://exists"}]) + assert actual_origins == [origin_dict, None] def _generate_random_visits(self, nb_visits=100, start=0, end=7): """Generate random visits within the last 2 months (to avoid computations) """ visits = [] today = now() for weeks in range(nb_visits, 0, -1): hours = random.randint(0, 24) minutes = random.randint(0, 60) seconds = random.randint(0, 60) days = random.randint(0, 28) weeks = random.randint(start, end) date_visit = today - timedelta( weeks=weeks, hours=hours, minutes=minutes, seconds=seconds, days=days ) visits.append(date_visit) return visits - def test_origin_visit_get_all(self, swh_storage): - origin = Origin.from_dict(data.origin) - swh_storage.origin_add_one(origin) + def test_origin_visit_get_all(self, swh_storage, sample_data): + origin = sample_data.origin + swh_storage.origin_add([origin]) visits = swh_storage.origin_visit_add( [ OriginVisit( - origin=origin.url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ), OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ), OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ), ] ) ov1, ov2, ov3 = [ {**v.to_dict(), "status": "created", "snapshot": None, "metadata": None,} for v in visits ] # order asc, no pagination, no limit all_visits = list(swh_storage.origin_visit_get(origin.url)) assert all_visits == [ov1, ov2, ov3] # order asc, no pagination, 
limit all_visits2 = list(swh_storage.origin_visit_get(origin.url, limit=2)) assert all_visits2 == [ov1, ov2] # order asc, pagination, no limit all_visits3 = list( swh_storage.origin_visit_get(origin.url, last_visit=ov1["visit"]) ) assert all_visits3 == [ov2, ov3] # order asc, pagination, limit all_visits4 = list( swh_storage.origin_visit_get(origin.url, last_visit=ov2["visit"], limit=1) ) assert all_visits4 == [ov3] # order desc, no pagination, no limit all_visits5 = list(swh_storage.origin_visit_get(origin.url, order="desc")) assert all_visits5 == [ov3, ov2, ov1] # order desc, no pagination, limit all_visits6 = list( swh_storage.origin_visit_get(origin.url, limit=2, order="desc") ) assert all_visits6 == [ov3, ov2] # order desc, pagination, no limit all_visits7 = list( swh_storage.origin_visit_get( origin.url, last_visit=ov3["visit"], order="desc" ) ) assert all_visits7 == [ov2, ov1] # order desc, pagination, limit all_visits8 = list( swh_storage.origin_visit_get( origin.url, last_visit=ov3["visit"], order="desc", limit=1 ) ) assert all_visits8 == [ov2] def test_origin_visit_get__unknown_origin(self, swh_storage): assert [] == list(swh_storage.origin_visit_get("foo")) - def test_origin_visit_get_random(self, swh_storage): - swh_storage.origin_add(data.origins) + def test_origin_visit_get_random(self, swh_storage, sample_data): + origins = sample_data.origins[:2] + swh_storage.origin_add(origins) + # Add some random visits within the selection range visits = self._generate_random_visits() visit_type = "git" # Add visits to those origins - for origin in data.origins: - origin_url = origin["url"] + for origin in origins: for date_visit in visits: visit = swh_storage.origin_visit_add( - [OriginVisit(origin=origin_url, date=date_visit, type=visit_type,)] + [OriginVisit(origin=origin.url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) swh_storage.refresh_stat_counters() stats = swh_storage.stat_counters() - assert stats["origin"] == len(data.origins) - assert stats["origin_visit"] == len(data.origins) * len(visits) + assert stats["origin"] == len(origins) + assert stats["origin_visit"] == len(origins) * len(visits) random_origin_visit = swh_storage.origin_visit_get_random(visit_type) assert random_origin_visit assert random_origin_visit["origin"] is not None - original_urls = [o["url"] for o in data.origins] - assert random_origin_visit["origin"] in original_urls + assert random_origin_visit["origin"] in [o.url for o in origins] - def test_origin_visit_get_random_nothing_found(self, swh_storage): - swh_storage.origin_add(data.origins) + def test_origin_visit_get_random_nothing_found(self, swh_storage, sample_data): + origins = sample_data.origins + swh_storage.origin_add(origins) visit_type = "hg" # Add some visits outside of the random generation selection so nothing # will be found by the random selection visits = self._generate_random_visits(nb_visits=3, start=13, end=24) - for origin in data.origins: - origin_url = origin["url"] + for origin in origins: for date_visit in visits: visit = swh_storage.origin_visit_add( - [OriginVisit(origin=origin_url, date=date_visit, type=visit_type,)] + [OriginVisit(origin=origin.url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) 
random_origin_visit = swh_storage.origin_visit_get_random(visit_type) assert random_origin_visit is None - def test_origin_get_by_sha1(self, swh_storage): - assert swh_storage.origin_get(data.origin) is None - swh_storage.origin_add_one(data.origin) + def test_origin_get_by_sha1(self, swh_storage, sample_data): + origin = sample_data.origin + assert swh_storage.origin_get(origin.to_dict()) is None + swh_storage.origin_add([origin]) - origins = list(swh_storage.origin_get_by_sha1([sha1(data.origin["url"])])) + origins = list(swh_storage.origin_get_by_sha1([sha1(origin.url)])) assert len(origins) == 1 - assert origins[0]["url"] == data.origin["url"] + assert origins[0]["url"] == origin.url - def test_origin_get_by_sha1_not_found(self, swh_storage): - assert swh_storage.origin_get(data.origin) is None - origins = list(swh_storage.origin_get_by_sha1([sha1(data.origin["url"])])) + def test_origin_get_by_sha1_not_found(self, swh_storage, sample_data): + origin = sample_data.origin + assert swh_storage.origin_get(origin.to_dict()) is None + origins = list(swh_storage.origin_get_by_sha1([sha1(origin.url)])) assert len(origins) == 1 assert origins[0] is None - def test_origin_search_single_result(self, swh_storage): - found_origins = list(swh_storage.origin_search(data.origin["url"])) + def test_origin_search_single_result(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + + found_origins = list(swh_storage.origin_search(origin.url)) assert len(found_origins) == 0 - found_origins = list(swh_storage.origin_search(data.origin["url"], regexp=True)) + found_origins = list(swh_storage.origin_search(origin.url, regexp=True)) assert len(found_origins) == 0 - swh_storage.origin_add_one(data.origin) - origin_data = {"url": data.origin["url"]} - found_origins = list(swh_storage.origin_search(data.origin["url"])) + swh_storage.origin_add([origin]) + origin_data = origin.to_dict() + found_origins = list(swh_storage.origin_search(origin.url)) + assert len(found_origins) == 1 - if "id" in found_origins[0]: - del found_origins[0]["id"] assert found_origins[0] == origin_data found_origins = list( - swh_storage.origin_search("." + data.origin["url"][1:-1] + ".", regexp=True) + swh_storage.origin_search(f".{origin.url[1:-1]}.", regexp=True) ) assert len(found_origins) == 1 - if "id" in found_origins[0]: - del found_origins[0]["id"] assert found_origins[0] == origin_data - swh_storage.origin_add_one(data.origin2) - origin2_data = {"url": data.origin2["url"]} - found_origins = list(swh_storage.origin_search(data.origin2["url"])) + swh_storage.origin_add([origin2]) + origin2_data = origin2.to_dict() + found_origins = list(swh_storage.origin_search(origin2.url)) assert len(found_origins) == 1 - if "id" in found_origins[0]: - del found_origins[0]["id"] assert found_origins[0] == origin2_data found_origins = list( - swh_storage.origin_search( - "." 
+ data.origin2["url"][1:-1] + ".", regexp=True - ) + swh_storage.origin_search(f".{origin2.url[1:-1]}.", regexp=True) ) assert len(found_origins) == 1 - if "id" in found_origins[0]: - del found_origins[0]["id"] assert found_origins[0] == origin2_data - def test_origin_search_no_regexp(self, swh_storage): - swh_storage.origin_add_one(data.origin) - swh_storage.origin_add_one(data.origin2) + def test_origin_search_no_regexp(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dicts = [o.to_dict() for o in [origin, origin2]] - origin = swh_storage.origin_get({"url": data.origin["url"]}) - origin2 = swh_storage.origin_get({"url": data.origin2["url"]}) + swh_storage.origin_add([origin, origin2]) # no pagination found_origins = list(swh_storage.origin_search("/")) assert len(found_origins) == 2 # offset=0 - found_origins0 = list(swh_storage.origin_search("/", offset=0, limit=1)) # noqa + found_origins0 = list(swh_storage.origin_search("/", offset=0, limit=1)) assert len(found_origins0) == 1 - assert found_origins0[0] in [origin, origin2] + assert found_origins0[0] in origin_dicts # offset=1 - found_origins1 = list(swh_storage.origin_search("/", offset=1, limit=1)) # noqa + found_origins1 = list(swh_storage.origin_search("/", offset=1, limit=1)) assert len(found_origins1) == 1 - assert found_origins1[0] in [origin, origin2] + assert found_origins1[0] in origin_dicts # check both origins were returned assert found_origins0 != found_origins1 - def test_origin_search_regexp_substring(self, swh_storage): - swh_storage.origin_add_one(data.origin) - swh_storage.origin_add_one(data.origin2) + def test_origin_search_regexp_substring(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dicts = [o.to_dict() for o in [origin, origin2]] - origin = swh_storage.origin_get({"url": data.origin["url"]}) - origin2 = swh_storage.origin_get({"url": data.origin2["url"]}) + swh_storage.origin_add([origin, origin2]) # no pagination found_origins = list(swh_storage.origin_search("/", regexp=True)) assert len(found_origins) == 2 # offset=0 found_origins0 = list( swh_storage.origin_search("/", offset=0, limit=1, regexp=True) - ) # noqa + ) assert len(found_origins0) == 1 - assert found_origins0[0] in [origin, origin2] + assert found_origins0[0] in origin_dicts # offset=1 found_origins1 = list( swh_storage.origin_search("/", offset=1, limit=1, regexp=True) - ) # noqa + ) assert len(found_origins1) == 1 - assert found_origins1[0] in [origin, origin2] + assert found_origins1[0] in origin_dicts # check both origins were returned assert found_origins0 != found_origins1 - def test_origin_search_regexp_fullstring(self, swh_storage): - swh_storage.origin_add_one(data.origin) - swh_storage.origin_add_one(data.origin2) + def test_origin_search_regexp_fullstring(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + origin_dicts = [o.to_dict() for o in [origin, origin2]] - origin = swh_storage.origin_get({"url": data.origin["url"]}) - origin2 = swh_storage.origin_get({"url": data.origin2["url"]}) + swh_storage.origin_add([origin, origin2]) # no pagination found_origins = list(swh_storage.origin_search(".*/.*", regexp=True)) assert len(found_origins) == 2 # offset=0 found_origins0 = list( swh_storage.origin_search(".*/.*", offset=0, limit=1, regexp=True) - ) # noqa + ) assert len(found_origins0) == 1 - assert found_origins0[0] in [origin, origin2] + assert found_origins0[0] in origin_dicts # offset=1 found_origins1 = list( 
swh_storage.origin_search(".*/.*", offset=1, limit=1, regexp=True) - ) # noqa + ) assert len(found_origins1) == 1 - assert found_origins1[0] in [origin, origin2] + assert found_origins1[0] in origin_dicts # check both origins were returned assert found_origins0 != found_origins1 - def test_origin_visit_add(self, swh_storage): - origin1 = Origin.from_dict(data.origin2) - swh_storage.origin_add_one(origin1) + def test_origin_visit_add(self, swh_storage, sample_data): + origin1 = sample_data.origins[1] + swh_storage.origin_add([origin1]) date_visit = now() date_visit2 = date_visit + datetime.timedelta(minutes=1) date_visit = round_to_milliseconds(date_visit) date_visit2 = round_to_milliseconds(date_visit2) visit1 = OriginVisit( - origin=origin1.url, date=date_visit, type=data.type_visit1, + origin=origin1.url, date=date_visit, type=sample_data.type_visit1, ) visit2 = OriginVisit( - origin=origin1.url, date=date_visit2, type=data.type_visit2, + origin=origin1.url, date=date_visit2, type=sample_data.type_visit2, ) # add once ov1, ov2 = swh_storage.origin_visit_add([visit1, visit2]) # then again (will be ignored as they already exist) origin_visit1, origin_visit2 = swh_storage.origin_visit_add([ov1, ov2]) assert ov1 == origin_visit1 assert ov2 == origin_visit2 ovs1 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, date=date_visit, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin1.url, visit=ov2.visit, date=date_visit2, status="created", snapshot=None, ) actual_origin_visits = list(swh_storage.origin_visit_get(origin1.url)) expected_visits = [ {**ovs1.to_dict(), "type": ov1.type}, {**ovs2.to_dict(), "type": ov2.type}, ] assert len(expected_visits) == len(actual_origin_visits) for visit in expected_visits: assert visit in actual_origin_visits actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = list( [("origin", origin1)] + [("origin_visit", visit) for visit in [ov1, ov2]] * 2 + [("origin_visit_status", ovs) for ovs in [ovs1, ovs2]] ) for obj in expected_objects: assert obj in actual_objects - def test_origin_visit_add_validation(self, swh_storage): + def test_origin_visit_add_validation(self, swh_storage, sample_data): """Unknown origin when adding visits should raise""" - visit = OriginVisit( - origin="something-unknown", date=now(), type=data.type_visit1, - ) + visit = attr.evolve(sample_data.origin_visit, origin="something-unknonw") with pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_add([visit]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects def test_origin_visit_status_add_validation(self, swh_storage): """Wrong origin_visit_status input should raise storage argument error""" date_visit = now() visit_status1 = OriginVisitStatus( origin="unknown-origin-url", visit=10, date=date_visit, status="full", snapshot=None, ) with pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_status_add([visit_status1]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects - def test_origin_visit_status_add(self, swh_storage): + def test_origin_visit_status_add(self, swh_storage, sample_data): """Correct origin visit statuses should add a new visit status """ - origin1 = Origin.from_dict(data.origin2) + snapshot = sample_data.snapshot + origin1 = sample_data.origins[1] origin2 = Origin(url="new-origin") swh_storage.origin_add([origin1, origin2]) ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( - 
origin=origin1.url, date=data.date_visit1, type=data.type_visit1, + origin=origin1.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ), OriginVisit( - origin=origin2.url, date=data.date_visit2, type=data.type_visit2, + origin=origin2.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ), ] ) ovs1 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, - date=data.date_visit1, + date=sample_data.date_visit1, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin2.url, visit=ov2.visit, - date=data.date_visit2, + date=sample_data.date_visit2, status="created", snapshot=None, ) - snapshot_id = data.snapshot["id"] date_visit_now = now() visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, status="full", - snapshot=snapshot_id, + snapshot=snapshot.id, ) date_visit_now = now() visit_status2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=date_visit_now, status="ongoing", snapshot=None, metadata={"intrinsic": "something"}, ) swh_storage.origin_visit_status_add([visit_status1, visit_status2]) origin_visit1 = swh_storage.origin_visit_get_latest( origin1.url, require_snapshot=True ) assert origin_visit1 assert origin_visit1["status"] == "full" - assert origin_visit1["snapshot"] == snapshot_id + assert origin_visit1["snapshot"] == snapshot.id origin_visit2 = swh_storage.origin_visit_get_latest( origin2.url, require_snapshot=False ) assert origin2.url != origin1.url assert origin_visit2 assert origin_visit2["status"] == "ongoing" assert origin_visit2["snapshot"] is None assert origin_visit2["metadata"] == {"intrinsic": "something"} actual_objects = list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1, origin2] expected_visits = [ov1, ov2] expected_visit_statuses = [ovs1, ovs2, visit_status1, visit_status2] expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects - def test_origin_visit_status_add_twice(self, swh_storage): + def test_origin_visit_status_add_twice(self, swh_storage, sample_data): """Correct origin visit statuses should add a new visit status """ - origin1 = Origin.from_dict(data.origin2) + snapshot = sample_data.snapshot + origin1 = sample_data.origins[1] swh_storage.origin_add([origin1]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( - origin=origin1.url, date=data.date_visit1, type=data.type_visit1, + origin=origin1.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ), ] )[0] ovs1 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, - date=data.date_visit1, + date=sample_data.date_visit1, status="created", snapshot=None, ) - snapshot_id = data.snapshot["id"] date_visit_now = now() visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, status="full", - snapshot=snapshot_id, + snapshot=snapshot.id, ) swh_storage.origin_visit_status_add([visit_status1]) # second call will ignore existing entries (will send to storage though) swh_storage.origin_visit_status_add([visit_status1]) origin_visits = list(swh_storage.origin_visit_get(ov1.origin)) assert len(origin_visits) == 1 origin_visit1 = origin_visits[0] assert origin_visit1 assert origin_visit1["status"] == "full" - assert origin_visit1["snapshot"] == snapshot_id + assert origin_visit1["snapshot"] == snapshot.id actual_objects = 
list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1] expected_visits = [ov1] expected_visit_statuses = [ovs1, visit_status1, visit_status1] # write twice in the journal expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects - def test_origin_visit_find_by_date(self, swh_storage): - # given - origin = Origin.from_dict(data.origin) - swh_storage.origin_add_one(data.origin) + def test_origin_visit_find_by_date(self, swh_storage, sample_data): + origin = sample_data.origin + swh_storage.origin_add([origin]) visit1 = OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit1, ) visit2 = OriginVisit( - origin=origin.url, date=data.date_visit3, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit3, + type=sample_data.type_visit2, ) visit3 = OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit3, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit3, ) ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) ovs1 = OriginVisitStatus( origin=origin.url, visit=ov1.visit, - date=data.date_visit2, + date=sample_data.date_visit2, status="ongoing", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin.url, visit=ov2.visit, - date=data.date_visit3, + date=sample_data.date_visit3, status="ongoing", snapshot=None, ) ovs3 = OriginVisitStatus( origin=origin.url, visit=ov3.visit, - date=data.date_visit2, + date=sample_data.date_visit2, status="ongoing", snapshot=None, ) swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3]) # Simple case - visit = swh_storage.origin_visit_find_by_date(origin.url, data.date_visit3) + visit = swh_storage.origin_visit_find_by_date( + origin.url, sample_data.date_visit3 + ) assert visit["visit"] == ov2.visit # There are two visits at the same date, the latest must be returned - visit = swh_storage.origin_visit_find_by_date(origin.url, data.date_visit2) + visit = swh_storage.origin_visit_find_by_date( + origin.url, sample_data.date_visit2 + ) assert visit["visit"] == ov3.visit - def test_origin_visit_find_by_date__unknown_origin(self, swh_storage): - swh_storage.origin_visit_find_by_date("foo", data.date_visit2) + def test_origin_visit_find_by_date__unknown_origin(self, swh_storage, sample_data): + swh_storage.origin_visit_find_by_date("foo", sample_data.date_visit2) + + def test_origin_visit_get_by(self, swh_storage, sample_data): + snapshot = sample_data.snapshot + origins = sample_data.origins[:2] + swh_storage.origin_add(origins) + origin_url, origin_url2 = [o.url for o in origins] - def test_origin_visit_get_by(self, swh_storage): - origin_url = swh_storage.origin_add_one(data.origin) - origin_url2 = swh_storage.origin_add_one(data.origin2) visit = OriginVisit( - origin=origin_url, date=data.date_visit2, type=data.type_visit2, + origin=origin_url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] - swh_storage.snapshot_add([data.snapshot]) + swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", - snapshot=data.snapshot["id"], + snapshot=snapshot.id, ) ] ) # Add some other {origin, visit} 
entries visit2 = OriginVisit( - origin=origin_url, date=data.date_visit3, type=data.type_visit3, + origin=origin_url, + date=sample_data.date_visit3, + type=sample_data.type_visit3, ) visit3 = OriginVisit( - origin=origin_url2, date=data.date_visit3, type=data.type_visit3, + origin=origin_url2, + date=sample_data.date_visit3, + type=sample_data.type_visit3, ) swh_storage.origin_visit_add([visit2, visit3]) # when visit1_metadata = { "contents": 42, "directories": 22, } swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="full", - snapshot=data.snapshot["id"], + snapshot=snapshot.id, metadata=visit1_metadata, ) ] ) expected_origin_visit = origin_visit1.to_dict() expected_origin_visit.update( { "origin": origin_url, "visit": origin_visit1.visit, - "date": data.date_visit2, - "type": data.type_visit2, + "date": sample_data.date_visit2, + "type": sample_data.type_visit2, "metadata": visit1_metadata, "status": "full", - "snapshot": data.snapshot["id"], + "snapshot": snapshot.id, } ) # when actual_origin_visit1 = swh_storage.origin_visit_get_by( origin_url, origin_visit1.visit ) # then assert actual_origin_visit1 == expected_origin_visit def test_origin_visit_get_by__unknown_origin(self, swh_storage): assert swh_storage.origin_visit_get_by("foo", 10) is None - def test_origin_visit_get_by_no_result(self, swh_storage): - swh_storage.origin_add([data.origin]) - actual_origin_visit = swh_storage.origin_visit_get_by(data.origin["url"], 999) + def test_origin_visit_get_by_no_result(self, swh_storage, sample_data): + origin = sample_data.origin + swh_storage.origin_add([origin]) + actual_origin_visit = swh_storage.origin_visit_get_by(origin.url, 999) assert actual_origin_visit is None - def test_origin_visit_get_latest_none(self, swh_storage): + def test_origin_visit_get_latest_none(self, swh_storage, sample_data): """Origin visit get latest on unknown objects should return nothing """ # unknown origin so no result assert swh_storage.origin_visit_get_latest("unknown-origin") is None # unknown type - origin = Origin.from_dict(data.origin) - swh_storage.origin_add_one(origin) + origin = sample_data.origin + swh_storage.origin_add([origin]) assert swh_storage.origin_visit_get_latest(origin.url, type="unknown") is None - def test_origin_visit_get_latest_filter_type(self, swh_storage): + def test_origin_visit_get_latest_filter_type(self, swh_storage, sample_data): """Filtering origin visit get latest with filter type should be ok """ - origin = Origin.from_dict(data.origin) - swh_storage.origin_add_one(origin) + origin = sample_data.origin + swh_storage.origin_add([origin]) visit1 = OriginVisit( - origin=origin.url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ) visit2 = OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ) # Add a visit with the same date as the previous one visit3 = OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ) - assert data.type_visit1 != data.type_visit2 - assert data.date_visit1 < data.date_visit2 + assert sample_data.type_visit1 != sample_data.type_visit2 + assert sample_data.date_visit1 < sample_data.date_visit2 ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) origin_visit1 = 
swh_storage.origin_visit_get_by(origin.url, ov1.visit) origin_visit3 = swh_storage.origin_visit_get_by(origin.url, ov3.visit) - assert data.type_visit1 != data.type_visit2 + assert sample_data.type_visit1 != sample_data.type_visit2 # Check type filter is ok actual_ov1 = swh_storage.origin_visit_get_latest( - origin.url, type=data.type_visit1, + origin.url, type=sample_data.type_visit1, ) assert actual_ov1 == origin_visit1 actual_ov3 = swh_storage.origin_visit_get_latest( - origin.url, type=data.type_visit2, + origin.url, type=sample_data.type_visit2, ) assert actual_ov3 == origin_visit3 new_type = "npm" - assert new_type not in [data.type_visit1, data.type_visit2] + assert new_type not in [sample_data.type_visit1, sample_data.type_visit2] assert ( swh_storage.origin_visit_get_latest( origin.url, type=new_type, # no visit matching that type ) is None ) - def test_origin_visit_get_latest(self, swh_storage): - origin = Origin.from_dict(data.origin) - swh_storage.origin_add_one(origin) + def test_origin_visit_get_latest(self, swh_storage, sample_data): + empty_snapshot, complete_snapshot = sample_data.snapshots[1:3] + origin = sample_data.origin + + swh_storage.origin_add([origin]) visit1 = OriginVisit( - origin=origin.url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ) visit2 = OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ) # Add a visit with the same date as the previous one visit3 = OriginVisit( - origin=origin.url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ) ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) origin_visit1 = swh_storage.origin_visit_get_by(origin.url, ov1.visit) origin_visit2 = swh_storage.origin_visit_get_by(origin.url, ov2.visit) origin_visit3 = swh_storage.origin_visit_get_by(origin.url, ov3.visit) # Two visits, both with no snapshot assert origin_visit3 == swh_storage.origin_visit_get_latest(origin.url) assert ( swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True) is None ) # Add snapshot to visit1; require_snapshot=True makes it return # visit1 and require_snapshot=False still returns visit2 - complete_snapshot = Snapshot.from_dict(data.complete_snapshot) + swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=now(), status="ongoing", snapshot=complete_snapshot.id, ) ] ) actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) assert actual_visit == { **origin_visit1, "snapshot": complete_snapshot.id, "status": "ongoing", # visit1 has status created now } assert origin_visit3 == swh_storage.origin_visit_get_latest(origin.url) # Status filter: all three visits are status=ongoing, so no visit # returned assert ( swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"]) is None ) # Mark the first visit as completed and check status filter again swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=now(), status="full", snapshot=complete_snapshot.id, ) ] ) assert { **origin_visit1, "snapshot": complete_snapshot.id, "status": "full", } == swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"]) assert origin_visit3 == 
swh_storage.origin_visit_get_latest(origin.url) # Add snapshot to visit2 and check that the new snapshot is returned - empty_snapshot = Snapshot.from_dict(data.empty_snapshot) swh_storage.snapshot_add([empty_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov2.visit, date=now(), status="ongoing", snapshot=empty_snapshot.id, ) ] ) assert { **origin_visit2, "snapshot": empty_snapshot.id, "status": "ongoing", } == swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True) assert origin_visit3 == swh_storage.origin_visit_get_latest(origin.url) # Check that the status filter is still working assert { **origin_visit1, "snapshot": complete_snapshot.id, "status": "full", } == swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"]) # Add snapshot to visit3 (same date as visit2) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov3.visit, date=now(), status="ongoing", snapshot=complete_snapshot.id, ) ] ) assert { **origin_visit1, "snapshot": complete_snapshot.id, "status": "full", } == swh_storage.origin_visit_get_latest(origin.url, allowed_statuses=["full"]) assert { **origin_visit1, "snapshot": complete_snapshot.id, "status": "full", } == swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"], require_snapshot=True ) assert { **origin_visit3, "snapshot": complete_snapshot.id, "status": "ongoing", } == swh_storage.origin_visit_get_latest(origin.url) assert { **origin_visit3, "snapshot": complete_snapshot.id, "status": "ongoing", } == swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True) - def test_origin_visit_status_get_latest(self, swh_storage): - origin1 = Origin.from_dict(data.origin) - swh_storage.origin_add_one(data.origin) + def test_origin_visit_status_get_latest(self, swh_storage, sample_data): + snapshot = sample_data.snapshots[2] + origin1 = sample_data.origin + swh_storage.origin_add([origin1]) # to have some reference visits ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( - origin=origin1.url, date=data.date_visit1, type=data.type_visit1, + origin=origin1.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ), OriginVisit( - origin=origin1.url, date=data.date_visit2, type=data.type_visit2, + origin=origin1.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ), ] ) - - snapshot = Snapshot.from_dict(data.complete_snapshot) swh_storage.snapshot_add([snapshot]) date_now = now() date_now = round_to_milliseconds(date_now) - assert data.date_visit1 < data.date_visit2 - assert data.date_visit2 < date_now + assert sample_data.date_visit1 < sample_data.date_visit2 + assert sample_data.date_visit2 < date_now ovs1 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, - date=data.date_visit1, + date=sample_data.date_visit1, status="partial", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, - date=data.date_visit2, + date=sample_data.date_visit2, status="ongoing", snapshot=None, ) ovs3 = OriginVisitStatus( origin=origin1.url, visit=ov2.visit, - date=data.date_visit2 + datetime.timedelta(minutes=1), # to not be ignored + date=sample_data.date_visit2 + + datetime.timedelta(minutes=1), # to not be ignored status="ongoing", snapshot=None, ) ovs4 = OriginVisitStatus( origin=origin1.url, visit=ov2.visit, date=date_now, status="full", snapshot=snapshot.id, metadata={"something": "wicked"}, ) swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3, ovs4]) # unknown origin so no result 
actual_origin_visit = swh_storage.origin_visit_status_get_latest( "unknown-origin", ov1.visit ) assert actual_origin_visit is None # unknown visit so no result actual_origin_visit = swh_storage.origin_visit_status_get_latest( ov1.origin, ov1.visit + 10 ) assert actual_origin_visit is None # Two visits, both with no snapshot, take the most recent actual_origin_visit2 = swh_storage.origin_visit_status_get_latest( origin1.url, ov1.visit ) assert isinstance(actual_origin_visit2, OriginVisitStatus) assert actual_origin_visit2 == ovs2 assert ovs2.origin == origin1.url assert ovs2.visit == ov1.visit actual_origin_visit = swh_storage.origin_visit_status_get_latest( origin1.url, ov1.visit, require_snapshot=True ) # there is no visit with snapshot yet for that visit assert actual_origin_visit is None actual_origin_visit2 = swh_storage.origin_visit_status_get_latest( origin1.url, ov1.visit, allowed_statuses=["partial", "ongoing"] ) # visit status with partial status visit elected assert actual_origin_visit2 == ovs2 assert actual_origin_visit2.status == "ongoing" actual_origin_visit4 = swh_storage.origin_visit_status_get_latest( origin1.url, ov2.visit, require_snapshot=True ) assert actual_origin_visit4 == ovs4 assert actual_origin_visit4.snapshot == snapshot.id actual_origin_visit = swh_storage.origin_visit_status_get_latest( origin1.url, ov2.visit, require_snapshot=True, allowed_statuses=["ongoing"] ) # nothing matches so nothing assert actual_origin_visit is None # there is no visit with status full actual_origin_visit3 = swh_storage.origin_visit_status_get_latest( origin1.url, ov2.visit, allowed_statuses=["ongoing"] ) assert actual_origin_visit3 == ovs3 - def test_person_fullname_unicity(self, swh_storage): - # given (person injection through revisions for example) - revision = data.revision - + def test_person_fullname_unicity(self, swh_storage, sample_data): + revision, rev2 = sample_data.revisions[0:2] # create a revision with same committer fullname but wo name and email - revision2 = copy.deepcopy(data.revision2) - revision2["committer"] = dict(revision["committer"]) - revision2["committer"]["email"] = None - revision2["committer"]["name"] = None + revision2 = attr.evolve( + rev2, + committer=Person( + fullname=revision.committer.fullname, name=None, email=None + ), + ) - swh_storage.revision_add([revision]) - swh_storage.revision_add([revision2]) + swh_storage.revision_add([revision, revision2]) # when getting added revisions - revisions = list(swh_storage.revision_get([revision["id"], revision2["id"]])) + revisions = list(swh_storage.revision_get([revision.id, revision2.id])) - # then - # check committers are the same + # then check committers are the same assert revisions[0]["committer"] == revisions[1]["committer"] - def test_snapshot_add_get_empty(self, swh_storage): - origin_url = swh_storage.origin_add_one(data.origin) + def test_snapshot_add_get_empty(self, swh_storage, sample_data): + empty_snapshot = sample_data.snapshots[1] + empty_snapshot_dict = empty_snapshot.to_dict() + + origin = sample_data.origin + swh_storage.origin_add([origin]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( - origin=origin_url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ) ] )[0] - actual_result = swh_storage.snapshot_add([data.empty_snapshot]) + actual_result = swh_storage.snapshot_add([empty_snapshot]) assert actual_result == {"snapshot:add": 1} date_now = now() swh_storage.origin_visit_status_add( [ 
OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=ov1.visit, date=date_now, status="full", - snapshot=data.empty_snapshot["id"], + snapshot=empty_snapshot.id, ) ] ) - by_id = swh_storage.snapshot_get(data.empty_snapshot["id"]) - assert by_id == {**data.empty_snapshot, "next_branch": None} + by_id = swh_storage.snapshot_get(empty_snapshot.id) + assert by_id == {**empty_snapshot_dict, "next_branch": None} - by_ov = swh_storage.snapshot_get_by_origin_visit(origin_url, ov1.visit) - assert by_ov == {**data.empty_snapshot, "next_branch": None} + by_ov = swh_storage.snapshot_get_by_origin_visit(origin.url, ov1.visit) + assert by_ov == {**empty_snapshot_dict, "next_branch": None} ovs1 = OriginVisitStatus.from_dict( { - "origin": origin_url, - "date": data.date_visit1, + "origin": origin.url, + "date": sample_data.date_visit1, "visit": ov1.visit, "status": "created", "snapshot": None, "metadata": None, } ) ovs2 = OriginVisitStatus.from_dict( { - "origin": origin_url, + "origin": origin.url, "date": date_now, "visit": ov1.visit, "status": "full", "metadata": None, - "snapshot": data.empty_snapshot["id"], + "snapshot": empty_snapshot.id, } ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ - ("origin", Origin.from_dict(data.origin)), + ("origin", origin), ("origin_visit", ov1), ("origin_visit_status", ovs1,), - ("snapshot", Snapshot.from_dict(data.empty_snapshot)), + ("snapshot", empty_snapshot), ("origin_visit_status", ovs2,), ] for obj in expected_objects: assert obj in actual_objects - def test_snapshot_add_get_complete(self, swh_storage): - origin_url = data.origin["url"] - origin_url = swh_storage.origin_add_one(data.origin) + def test_snapshot_add_get_complete(self, swh_storage, sample_data): + complete_snapshot = sample_data.snapshots[2] + complete_snapshot_dict = complete_snapshot.to_dict() + origin = sample_data.origin + + swh_storage.origin_add([origin]) visit = OriginVisit( - origin=origin_url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] visit_id = origin_visit1.visit - actual_result = swh_storage.snapshot_add([data.complete_snapshot]) + actual_result = swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", - snapshot=data.complete_snapshot["id"], + snapshot=complete_snapshot.id, ) ] ) assert actual_result == {"snapshot:add": 1} - by_id = swh_storage.snapshot_get(data.complete_snapshot["id"]) - assert by_id == {**data.complete_snapshot, "next_branch": None} + by_id = swh_storage.snapshot_get(complete_snapshot.id) + assert by_id == {**complete_snapshot_dict, "next_branch": None} - by_ov = swh_storage.snapshot_get_by_origin_visit(origin_url, visit_id) - assert by_ov == {**data.complete_snapshot, "next_branch": None} + by_ov = swh_storage.snapshot_get_by_origin_visit(origin.url, visit_id) + assert by_ov == {**complete_snapshot_dict, "next_branch": None} - def test_snapshot_add_many(self, swh_storage): - actual_result = swh_storage.snapshot_add( - [data.snapshot, data.complete_snapshot] - ) + def test_snapshot_add_many(self, swh_storage, sample_data): + snapshot, _, complete_snapshot = sample_data.snapshots[:3] + + actual_result = swh_storage.snapshot_add([snapshot, complete_snapshot]) assert actual_result == {"snapshot:add": 2} - assert { - 
**data.complete_snapshot, + assert swh_storage.snapshot_get(complete_snapshot.id) == { + **complete_snapshot.to_dict(), "next_branch": None, - } == swh_storage.snapshot_get(data.complete_snapshot["id"]) + } - assert {**data.snapshot, "next_branch": None} == swh_storage.snapshot_get( - data.snapshot["id"] - ) + assert swh_storage.snapshot_get(snapshot.id) == { + **snapshot.to_dict(), + "next_branch": None, + } swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["snapshot"] == 2 - def test_snapshot_add_many_from_generator(self, swh_storage): + def test_snapshot_add_many_from_generator(self, swh_storage, sample_data): + snapshot, _, complete_snapshot = sample_data.snapshots[:3] + def _snp_gen(): - yield data.snapshot - yield data.complete_snapshot + yield from [snapshot, complete_snapshot] actual_result = swh_storage.snapshot_add(_snp_gen()) assert actual_result == {"snapshot:add": 2} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["snapshot"] == 2 - def test_snapshot_add_many_incremental(self, swh_storage): - actual_result = swh_storage.snapshot_add([data.complete_snapshot]) + def test_snapshot_add_many_incremental(self, swh_storage, sample_data): + snapshot, _, complete_snapshot = sample_data.snapshots[:3] + + actual_result = swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} - actual_result2 = swh_storage.snapshot_add( - [data.snapshot, data.complete_snapshot] - ) + actual_result2 = swh_storage.snapshot_add([snapshot, complete_snapshot]) assert actual_result2 == {"snapshot:add": 1} - assert { - **data.complete_snapshot, + assert swh_storage.snapshot_get(complete_snapshot.id) == { + **complete_snapshot.to_dict(), "next_branch": None, - } == swh_storage.snapshot_get(data.complete_snapshot["id"]) + } - assert {**data.snapshot, "next_branch": None} == swh_storage.snapshot_get( - data.snapshot["id"] - ) + assert swh_storage.snapshot_get(snapshot.id) == { + **snapshot.to_dict(), + "next_branch": None, + } - def test_snapshot_add_twice(self, swh_storage): - actual_result = swh_storage.snapshot_add([data.empty_snapshot]) + def test_snapshot_add_twice(self, swh_storage, sample_data): + snapshot, empty_snapshot = sample_data.snapshots[:2] + + actual_result = swh_storage.snapshot_add([empty_snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("snapshot", Snapshot.from_dict(data.empty_snapshot)) + ("snapshot", empty_snapshot) ] - actual_result = swh_storage.snapshot_add([data.snapshot]) + actual_result = swh_storage.snapshot_add([snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ - ("snapshot", Snapshot.from_dict(data.empty_snapshot)), - ("snapshot", Snapshot.from_dict(data.snapshot)), + ("snapshot", empty_snapshot), + ("snapshot", snapshot), ] - def test_snapshot_add_validation(self, swh_storage): - snap = copy.deepcopy(data.snapshot) - snap["branches"][b"foo"] = {"target_type": "revision"} - - with pytest.raises(StorageArgumentException, match="target"): - swh_storage.snapshot_add([snap]) - - snap = copy.deepcopy(data.snapshot) - snap["branches"][b"foo"] = {"target": b"\x42" * 20} + def test_snapshot_add_count_branches(self, swh_storage, sample_data): + complete_snapshot = sample_data.snapshots[2] - with pytest.raises(StorageArgumentException, match="target_type"): - swh_storage.snapshot_add([snap]) - - def test_snapshot_add_count_branches(self, swh_storage): - actual_result = 
swh_storage.snapshot_add([data.complete_snapshot]) + actual_result = swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} - snp_id = data.complete_snapshot["id"] - snp_size = swh_storage.snapshot_count_branches(snp_id) + snp_size = swh_storage.snapshot_count_branches(complete_snapshot.id) expected_snp_size = { "alias": 1, "content": 1, "directory": 2, "release": 1, "revision": 1, "snapshot": 1, None: 1, } assert snp_size == expected_snp_size - def test_snapshot_add_get_paginated(self, swh_storage): - swh_storage.snapshot_add([data.complete_snapshot]) + def test_snapshot_add_get_paginated(self, swh_storage, sample_data): + complete_snapshot = sample_data.snapshots[2] - snp_id = data.complete_snapshot["id"] - branches = data.complete_snapshot["branches"] + swh_storage.snapshot_add([complete_snapshot]) + + snp_id = complete_snapshot.id + branches = complete_snapshot.to_dict()["branches"] branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches(snp_id, branches_from=b"release") rel_idx = branch_names.index(b"release") expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in branch_names[rel_idx:]}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches(snp_id, branches_count=1) expected_snapshot = { "id": snp_id, "branches": {branch_names[0]: branches[branch_names[0]],}, "next_branch": b"content", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, branches_from=b"directory", branches_count=3 ) dir_idx = branch_names.index(b"directory") expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in branch_names[dir_idx : dir_idx + 3] }, "next_branch": branch_names[dir_idx + 3], } assert snapshot == expected_snapshot - def test_snapshot_add_get_filtered(self, swh_storage): - origin_url = swh_storage.origin_add_one(data.origin) + def test_snapshot_add_get_filtered(self, swh_storage, sample_data): + origin = sample_data.origin + complete_snapshot = sample_data.snapshots[2] + + swh_storage.origin_add([origin]) visit = OriginVisit( - origin=origin_url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] - swh_storage.snapshot_add([data.complete_snapshot]) + swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", - snapshot=data.complete_snapshot["id"], + snapshot=complete_snapshot.id, ) ] ) - snp_id = data.complete_snapshot["id"] - branches = data.complete_snapshot["branches"] + snp_id = complete_snapshot.id + branches = complete_snapshot.to_dict()["branches"] snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["release", "revision"] ) expected_snapshot = { "id": snp_id, "branches": { name: tgt for name, tgt in branches.items() if tgt and tgt["target_type"] in ["release", "revision"] }, "next_branch": None, } assert snapshot == expected_snapshot snapshot = swh_storage.snapshot_get_branches(snp_id, target_types=["alias"]) expected_snapshot = { "id": snp_id, "branches": { name: tgt for name, tgt in branches.items() if tgt and tgt["target_type"] == "alias" }, "next_branch": None, } assert snapshot == expected_snapshot - def 
test_snapshot_add_get_filtered_and_paginated(self, swh_storage): - swh_storage.snapshot_add([data.complete_snapshot]) + def test_snapshot_add_get_filtered_and_paginated(self, swh_storage, sample_data): + complete_snapshot = sample_data.snapshots[2] - snp_id = data.complete_snapshot["id"] - branches = data.complete_snapshot["branches"] + swh_storage.snapshot_add([complete_snapshot]) + + snp_id = complete_snapshot.id + branches = complete_snapshot.to_dict()["branches"] branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2" ) expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in (b"directory2", b"release")}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=1 ) expected_snapshot = { "id": snp_id, "branches": {b"directory": branches[b"directory"]}, "next_branch": b"directory2", } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=2 ) expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in (b"directory", b"directory2") }, "next_branch": b"release", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2", branches_count=1, ) dir_idx = branch_names.index(b"directory2") expected_snapshot = { "id": snp_id, "branches": {branch_names[dir_idx]: branches[branch_names[dir_idx]],}, "next_branch": b"release", } assert snapshot == expected_snapshot - def test_snapshot_add_get_branch_by_type(self, swh_storage): - snapshot = copy.deepcopy(data.complete_snapshot) + def test_snapshot_add_get_branch_by_type(self, swh_storage, sample_data): + complete_snapshot = sample_data.snapshots[2] + snapshot = complete_snapshot.to_dict() alias1 = b"alias1" alias2 = b"alias2" target1 = random.choice(list(snapshot["branches"].keys())) target2 = random.choice(list(snapshot["branches"].keys())) snapshot["branches"][alias2] = { "target": target2, "target_type": "alias", } snapshot["branches"][alias1] = { "target": target1, "target_type": "alias", } - swh_storage.snapshot_add([snapshot]) + new_snapshot = Snapshot.from_dict(snapshot) + swh_storage.snapshot_add([new_snapshot]) branches = swh_storage.snapshot_get_branches( - snapshot["id"], + new_snapshot.id, target_types=["alias"], branches_from=alias1, branches_count=1, )["branches"] assert len(branches) == 1 assert alias1 in branches - def test_snapshot_add_get(self, swh_storage): - origin_url = swh_storage.origin_add_one(data.origin) + def test_snapshot_add_get(self, swh_storage, sample_data): + snapshot = sample_data.snapshot + origin = sample_data.origin + + swh_storage.origin_add([origin]) visit = OriginVisit( - origin=origin_url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] visit_id = origin_visit1.visit - swh_storage.snapshot_add([data.snapshot]) + swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", - 
snapshot=data.snapshot["id"], + snapshot=snapshot.id, ) ] ) - by_id = swh_storage.snapshot_get(data.snapshot["id"]) - assert by_id == {**data.snapshot, "next_branch": None} + expected_snapshot = {**snapshot.to_dict(), "next_branch": None} - by_ov = swh_storage.snapshot_get_by_origin_visit(origin_url, visit_id) - assert by_ov == {**data.snapshot, "next_branch": None} + by_id = swh_storage.snapshot_get(snapshot.id) + assert by_id == expected_snapshot - origin_visit_info = swh_storage.origin_visit_get_by(origin_url, visit_id) - assert origin_visit_info["snapshot"] == data.snapshot["id"] + by_ov = swh_storage.snapshot_get_by_origin_visit(origin.url, visit_id) + assert by_ov == expected_snapshot - def test_snapshot_add_twice__by_origin_visit(self, swh_storage): - origin_url = swh_storage.origin_add_one(data.origin) + origin_visit_info = swh_storage.origin_visit_get_by(origin.url, visit_id) + assert origin_visit_info["snapshot"] == snapshot.id + + def test_snapshot_add_twice__by_origin_visit(self, swh_storage, sample_data): + snapshot = sample_data.snapshot + origin = sample_data.origin + + swh_storage.origin_add([origin]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( - origin=origin_url, date=data.date_visit1, type=data.type_visit1, + origin=origin.url, + date=sample_data.date_visit1, + type=sample_data.type_visit1, ) ] )[0] - swh_storage.snapshot_add([data.snapshot]) + swh_storage.snapshot_add([snapshot]) date_now2 = now() swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=ov1.visit, date=date_now2, status="ongoing", - snapshot=data.snapshot["id"], + snapshot=snapshot.id, ) ] ) - by_ov1 = swh_storage.snapshot_get_by_origin_visit(origin_url, ov1.visit) - assert by_ov1 == {**data.snapshot, "next_branch": None} + expected_snapshot = {**snapshot.to_dict(), "next_branch": None} + + by_ov1 = swh_storage.snapshot_get_by_origin_visit(origin.url, ov1.visit) + assert by_ov1 == expected_snapshot ov2 = swh_storage.origin_visit_add( [ OriginVisit( - origin=origin_url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ) ] )[0] - swh_storage.snapshot_add([data.snapshot]) date_now4 = now() swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=ov2.visit, date=date_now4, status="ongoing", - snapshot=data.snapshot["id"], + snapshot=snapshot.id, ) ] ) - by_ov2 = swh_storage.snapshot_get_by_origin_visit(origin_url, ov2.visit) - assert by_ov2 == {**data.snapshot, "next_branch": None} + by_ov2 = swh_storage.snapshot_get_by_origin_visit(origin.url, ov2.visit) + assert by_ov2 == expected_snapshot ovs1 = OriginVisitStatus.from_dict( { - "origin": origin_url, - "date": data.date_visit1, + "origin": origin.url, + "date": sample_data.date_visit1, "visit": ov1.visit, "status": "created", "metadata": None, "snapshot": None, } ) ovs2 = OriginVisitStatus.from_dict( { - "origin": origin_url, + "origin": origin.url, "date": date_now2, "visit": ov1.visit, "status": "ongoing", "metadata": None, - "snapshot": data.snapshot["id"], + "snapshot": snapshot.id, } ) ovs3 = OriginVisitStatus.from_dict( { - "origin": origin_url, - "date": data.date_visit2, + "origin": origin.url, + "date": sample_data.date_visit2, "visit": ov2.visit, "status": "created", "metadata": None, "snapshot": None, } ) ovs4 = OriginVisitStatus.from_dict( { - "origin": origin_url, + "origin": origin.url, "date": date_now4, "visit": ov2.visit, "status": "ongoing", "metadata": 
None, - "snapshot": data.snapshot["id"], + "snapshot": snapshot.id, } ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ - ("origin", Origin.from_dict(data.origin)), + ("origin", origin), ("origin_visit", ov1), ("origin_visit_status", ovs1), - ("snapshot", Snapshot.from_dict(data.snapshot)), + ("snapshot", snapshot), ("origin_visit_status", ovs2), ("origin_visit", ov2), ("origin_visit_status", ovs3), ("origin_visit_status", ovs4), ] for obj in expected_objects: assert obj in actual_objects - def test_snapshot_get_random(self, swh_storage): - swh_storage.snapshot_add( - [data.snapshot, data.empty_snapshot, data.complete_snapshot] - ) + def test_snapshot_get_random(self, swh_storage, sample_data): + snapshot, empty_snapshot, complete_snapshot = sample_data.snapshots[:3] + swh_storage.snapshot_add([snapshot, empty_snapshot, complete_snapshot]) assert swh_storage.snapshot_get_random() in { - data.snapshot["id"], - data.empty_snapshot["id"], - data.complete_snapshot["id"], + snapshot.id, + empty_snapshot.id, + complete_snapshot.id, } - def test_snapshot_missing(self, swh_storage): - snap = data.snapshot - missing_snap = data.empty_snapshot - snapshots = [snap["id"], missing_snap["id"]] - swh_storage.snapshot_add([snap]) + def test_snapshot_missing(self, swh_storage, sample_data): + snapshot, missing_snapshot = sample_data.snapshots[:2] + snapshots = [snapshot.id, missing_snapshot.id] + swh_storage.snapshot_add([snapshot]) missing_snapshots = swh_storage.snapshot_missing(snapshots) - assert list(missing_snapshots) == [missing_snap["id"]] + assert list(missing_snapshots) == [missing_snapshot.id] + + def test_stat_counters(self, swh_storage, sample_data): + origin = sample_data.origin + snapshot = sample_data.snapshot + revision = sample_data.revision + release = sample_data.release + directory = sample_data.directory + content = sample_data.content - def test_stat_counters(self, swh_storage): expected_keys = ["content", "directory", "origin", "revision"] # Initially, all counters are 0 swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: assert counters[key] == 0 # Add a content. Only the content counter should increase. - swh_storage.content_add([data.cont]) + swh_storage.content_add([content]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: if key != "content": assert counters[key] == 0 assert counters["content"] == 1 # Add other objects. Check their counter increased as well. 
- origin_url = swh_storage.origin_add_one(data.origin2) + swh_storage.origin_add([origin]) visit = OriginVisit( - origin=origin_url, date=data.date_visit2, type=data.type_visit2, + origin=origin.url, + date=sample_data.date_visit2, + type=sample_data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] - swh_storage.snapshot_add([data.snapshot]) + swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( - origin=origin_url, + origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", - snapshot=data.snapshot["id"], + snapshot=snapshot.id, ) ] ) - swh_storage.directory_add([data.dir]) - swh_storage.revision_add([data.revision]) - swh_storage.release_add([data.release]) + swh_storage.directory_add([directory]) + swh_storage.revision_add([revision]) + swh_storage.release_add([release]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert counters["content"] == 1 assert counters["directory"] == 1 assert counters["snapshot"] == 1 assert counters["origin"] == 1 assert counters["origin_visit"] == 1 assert counters["revision"] == 1 assert counters["release"] == 1 assert counters["snapshot"] == 1 if "person" in counters: assert counters["person"] == 3 - def test_content_find_ctime(self, swh_storage): - cont = data.cont.copy() - del cont["data"] - ctime = now() - cont["ctime"] = ctime - swh_storage.content_add_metadata([cont]) - - actually_present = swh_storage.content_find({"sha1": cont["sha1"]}) - - # check ctime up to one second - dt = actually_present[0]["ctime"] - ctime - assert abs(dt.total_seconds()) <= 1 - del actually_present[0]["ctime"] - - assert actually_present[0] == { - "sha1": cont["sha1"], - "sha256": cont["sha256"], - "sha1_git": cont["sha1_git"], - "blake2s256": cont["blake2s256"], - "length": cont["length"], - "status": "visible", - } + def test_content_find_ctime(self, swh_storage, sample_data): + origin_content = sample_data.content + ctime = round_to_milliseconds(now()) + content = attr.evolve(origin_content, data=None, ctime=ctime) + swh_storage.content_add_metadata([content]) + + actually_present = swh_storage.content_find({"sha1": content.sha1}) + assert actually_present[0] == content.to_dict() + + def test_content_find_with_present_content(self, swh_storage, sample_data): + content = sample_data.content + expected_content = content.to_dict() + del expected_content["data"] + del expected_content["ctime"] - def test_content_find_with_present_content(self, swh_storage): # 1. with something to find - cont = data.cont - swh_storage.content_add([cont, data.cont2]) + swh_storage.content_add([content]) - actually_present = swh_storage.content_find({"sha1": cont["sha1"]}) + actually_present = swh_storage.content_find({"sha1": content.sha1}) assert 1 == len(actually_present) actually_present[0].pop("ctime") - - assert actually_present[0] == { - "sha1": cont["sha1"], - "sha256": cont["sha256"], - "sha1_git": cont["sha1_git"], - "blake2s256": cont["blake2s256"], - "length": cont["length"], - "status": "visible", - } + assert actually_present[0] == expected_content # 2. 
with something to find - actually_present = swh_storage.content_find({"sha1_git": cont["sha1_git"]}) + actually_present = swh_storage.content_find({"sha1_git": content.sha1_git}) assert 1 == len(actually_present) - actually_present[0].pop("ctime") - assert actually_present[0] == { - "sha1": cont["sha1"], - "sha256": cont["sha256"], - "sha1_git": cont["sha1_git"], - "blake2s256": cont["blake2s256"], - "length": cont["length"], - "status": "visible", - } + assert actually_present[0] == expected_content # 3. with something to find - actually_present = swh_storage.content_find({"sha256": cont["sha256"]}) + actually_present = swh_storage.content_find({"sha256": content.sha256}) assert 1 == len(actually_present) - actually_present[0].pop("ctime") - assert actually_present[0] == { - "sha1": cont["sha1"], - "sha256": cont["sha256"], - "sha1_git": cont["sha1_git"], - "blake2s256": cont["blake2s256"], - "length": cont["length"], - "status": "visible", - } + assert actually_present[0] == expected_content # 4. with something to find - actually_present = swh_storage.content_find( - { - "sha1": cont["sha1"], - "sha1_git": cont["sha1_git"], - "sha256": cont["sha256"], - "blake2s256": cont["blake2s256"], - } - ) + actually_present = swh_storage.content_find(content.hashes()) assert 1 == len(actually_present) - actually_present[0].pop("ctime") - assert actually_present[0] == { - "sha1": cont["sha1"], - "sha256": cont["sha256"], - "sha1_git": cont["sha1_git"], - "blake2s256": cont["blake2s256"], - "length": cont["length"], - "status": "visible", - } + assert actually_present[0] == expected_content - def test_content_find_with_non_present_content(self, swh_storage): + def test_content_find_with_non_present_content(self, swh_storage, sample_data): + missing_content = sample_data.skipped_content # 1. with something that does not exist - missing_cont = data.missing_cont - - actually_present = swh_storage.content_find({"sha1": missing_cont["sha1"]}) + actually_present = swh_storage.content_find({"sha1": missing_content.sha1}) assert actually_present == [] # 2. with something that does not exist actually_present = swh_storage.content_find( - {"sha1_git": missing_cont["sha1_git"]} + {"sha1_git": missing_content.sha1_git} ) - assert actually_present == [] # 3. 
with something that does not exist - actually_present = swh_storage.content_find({"sha256": missing_cont["sha256"]}) - + actually_present = swh_storage.content_find({"sha256": missing_content.sha256}) assert actually_present == [] - def test_content_find_with_duplicate_input(self, swh_storage): - cont1 = data.cont - duplicate_cont = cont1.copy() + def test_content_find_with_duplicate_input(self, swh_storage, sample_data): + content = sample_data.content # Create fake data with colliding sha256 and blake2s256 - sha1_array = bytearray(duplicate_cont["sha1"]) + sha1_array = bytearray(content.sha1) sha1_array[0] += 1 - duplicate_cont["sha1"] = bytes(sha1_array) - sha1git_array = bytearray(duplicate_cont["sha1_git"]) + sha1git_array = bytearray(content.sha1_git) sha1git_array[0] += 1 - duplicate_cont["sha1_git"] = bytes(sha1git_array) + duplicated_content = attr.evolve( + content, sha1=bytes(sha1_array), sha1_git=bytes(sha1git_array) + ) + # Inject the data - swh_storage.content_add([cont1, duplicate_cont]) - finder = { - "blake2s256": duplicate_cont["blake2s256"], - "sha256": duplicate_cont["sha256"], - } - actual_result = list(swh_storage.content_find(finder)) + swh_storage.content_add([content, duplicated_content]) + + actual_result = list( + swh_storage.content_find( + { + "blake2s256": duplicated_content.blake2s256, + "sha256": duplicated_content.sha256, + } + ) + ) - cont1.pop("data") - duplicate_cont.pop("data") - actual_result[0].pop("ctime") - actual_result[1].pop("ctime") + expected_content = content.to_dict() + expected_duplicated_content = duplicated_content.to_dict() + + for key in ["data", "ctime"]: # so we can compare + for dict_ in [ + expected_content, + expected_duplicated_content, + actual_result[0], + actual_result[1], + ]: + dict_.pop(key, None) - expected_result = [cont1, duplicate_cont] + expected_result = [expected_content, expected_duplicated_content] for result in expected_result: assert result in actual_result - def test_content_find_with_duplicate_sha256(self, swh_storage): - cont1 = data.cont - duplicate_cont = cont1.copy() + def test_content_find_with_duplicate_sha256(self, swh_storage, sample_data): + content = sample_data.content + hashes = {} # Create fake data with colliding sha256 for hashalgo in ("sha1", "sha1_git", "blake2s256"): - value = bytearray(duplicate_cont[hashalgo]) + value = bytearray(getattr(content, hashalgo)) value[0] += 1 - duplicate_cont[hashalgo] = bytes(value) - swh_storage.content_add([cont1, duplicate_cont]) + hashes[hashalgo] = bytes(value) + + duplicated_content = attr.evolve( + content, + sha1=hashes["sha1"], + sha1_git=hashes["sha1_git"], + blake2s256=hashes["blake2s256"], + ) + swh_storage.content_add([content, duplicated_content]) + + actual_result = list( + swh_storage.content_find({"sha256": duplicated_content.sha256}) + ) - finder = {"sha256": duplicate_cont["sha256"]} - actual_result = list(swh_storage.content_find(finder)) assert len(actual_result) == 2 - cont1.pop("data") - duplicate_cont.pop("data") - actual_result[0].pop("ctime") - actual_result[1].pop("ctime") - expected_result = [cont1, duplicate_cont] - assert expected_result == sorted(actual_result, key=lambda x: x["sha1"]) + expected_content = content.to_dict() + expected_duplicated_content = duplicated_content.to_dict() + + for key in ["data", "ctime"]: # so we can compare + for dict_ in [ + expected_content, + expected_duplicated_content, + actual_result[0], + actual_result[1], + ]: + dict_.pop(key, None) + + assert sorted(actual_result, key=lambda x: x["sha1"]) == [ + 
expected_content, + expected_duplicated_content, + ] # Find with both sha256 and blake2s256 - finder = { - "sha256": duplicate_cont["sha256"], - "blake2s256": duplicate_cont["blake2s256"], - } - actual_result = list(swh_storage.content_find(finder)) + actual_result = list( + swh_storage.content_find( + { + "sha256": duplicated_content.sha256, + "blake2s256": duplicated_content.blake2s256, + } + ) + ) + assert len(actual_result) == 1 actual_result[0].pop("ctime") - expected_result = [duplicate_cont] - assert actual_result[0] == duplicate_cont + assert actual_result == [expected_duplicated_content] - def test_content_find_with_duplicate_blake2s256(self, swh_storage): - cont1 = data.cont - duplicate_cont = cont1.copy() + def test_content_find_with_duplicate_blake2s256(self, swh_storage, sample_data): + content = sample_data.content # Create fake data with colliding sha256 and blake2s256 - sha1_array = bytearray(duplicate_cont["sha1"]) + sha1_array = bytearray(content.sha1) sha1_array[0] += 1 - duplicate_cont["sha1"] = bytes(sha1_array) - sha1git_array = bytearray(duplicate_cont["sha1_git"]) + sha1git_array = bytearray(content.sha1_git) sha1git_array[0] += 1 - duplicate_cont["sha1_git"] = bytes(sha1git_array) - sha256_array = bytearray(duplicate_cont["sha256"]) + sha256_array = bytearray(content.sha256) sha256_array[0] += 1 - duplicate_cont["sha256"] = bytes(sha256_array) - swh_storage.content_add([cont1, duplicate_cont]) - finder = {"blake2s256": duplicate_cont["blake2s256"]} - actual_result = list(swh_storage.content_find(finder)) - cont1.pop("data") - duplicate_cont.pop("data") - actual_result[0].pop("ctime") - actual_result[1].pop("ctime") - expected_result = [cont1, duplicate_cont] + duplicated_content = attr.evolve( + content, + sha1=bytes(sha1_array), + sha1_git=bytes(sha1git_array), + sha256=bytes(sha256_array), + ) + + swh_storage.content_add([content, duplicated_content]) + + actual_result = list( + swh_storage.content_find({"blake2s256": duplicated_content.blake2s256}) + ) + + expected_content = content.to_dict() + expected_duplicated_content = duplicated_content.to_dict() + + for key in ["data", "ctime"]: # so we can compare + for dict_ in [ + expected_content, + expected_duplicated_content, + actual_result[0], + actual_result[1], + ]: + dict_.pop(key, None) + + expected_result = [expected_content, expected_duplicated_content] for result in expected_result: assert result in actual_result # Find with both sha256 and blake2s256 - finder = { - "sha256": duplicate_cont["sha256"], - "blake2s256": duplicate_cont["blake2s256"], - } - actual_result = list(swh_storage.content_find(finder)) + actual_result = list( + swh_storage.content_find( + { + "sha256": duplicated_content.sha256, + "blake2s256": duplicated_content.blake2s256, + } + ) + ) actual_result[0].pop("ctime") - - expected_result = [duplicate_cont] - assert expected_result == actual_result + assert actual_result == [expected_duplicated_content] def test_content_find_bad_input(self, swh_storage): # 1. with bad input with pytest.raises(StorageArgumentException): swh_storage.content_find({}) # empty is bad # 2. 
with bad input with pytest.raises(StorageArgumentException): swh_storage.content_find({"unknown-sha1": "something"}) # not the right key - def test_object_find_by_sha1_git(self, swh_storage): + def test_object_find_by_sha1_git(self, swh_storage, sample_data): + content = sample_data.content + directory = sample_data.directory + revision = sample_data.revision + release = sample_data.release + sha1_gits = [b"00000000000000000000"] expected = { b"00000000000000000000": [], } - swh_storage.content_add([data.cont]) - sha1_gits.append(data.cont["sha1_git"]) - expected[data.cont["sha1_git"]] = [ - {"sha1_git": data.cont["sha1_git"], "type": "content",} + swh_storage.content_add([content]) + sha1_gits.append(content.sha1_git) + + expected[content.sha1_git] = [ + {"sha1_git": content.sha1_git, "type": "content",} ] - swh_storage.directory_add([data.dir]) - sha1_gits.append(data.dir["id"]) - expected[data.dir["id"]] = [{"sha1_git": data.dir["id"], "type": "directory",}] + swh_storage.directory_add([directory]) + sha1_gits.append(directory.id) + expected[directory.id] = [{"sha1_git": directory.id, "type": "directory",}] - swh_storage.revision_add([data.revision]) - sha1_gits.append(data.revision["id"]) - expected[data.revision["id"]] = [ - {"sha1_git": data.revision["id"], "type": "revision",} - ] + swh_storage.revision_add([revision]) + sha1_gits.append(revision.id) + expected[revision.id] = [{"sha1_git": revision.id, "type": "revision",}] - swh_storage.release_add([data.release]) - sha1_gits.append(data.release["id"]) - expected[data.release["id"]] = [ - {"sha1_git": data.release["id"], "type": "release",} - ] + swh_storage.release_add([release]) + sha1_gits.append(release.id) + expected[release.id] = [{"sha1_git": release.id, "type": "release",}] ret = swh_storage.object_find_by_sha1_git(sha1_gits) assert expected == ret - def test_metadata_fetcher_add_get(self, swh_storage): - actual_fetcher = swh_storage.metadata_fetcher_get( - data.metadata_fetcher.name, data.metadata_fetcher.version - ) + def test_metadata_fetcher_add_get(self, swh_storage, sample_data): + fetcher = sample_data.metadata_fetcher + actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert actual_fetcher is None # does not exist - swh_storage.metadata_fetcher_add([data.metadata_fetcher]) + swh_storage.metadata_fetcher_add([fetcher]) - res = swh_storage.metadata_fetcher_get( - data.metadata_fetcher.name, data.metadata_fetcher.version - ) + res = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) + assert res == fetcher - assert res == data.metadata_fetcher + def test_metadata_authority_add_get(self, swh_storage, sample_data): + authority = sample_data.metadata_authority - def test_metadata_authority_add_get(self, swh_storage): actual_authority = swh_storage.metadata_authority_get( - data.metadata_authority.type, data.metadata_authority.url + authority.type, authority.url ) assert actual_authority is None # does not exist - swh_storage.metadata_authority_add([data.metadata_authority]) + swh_storage.metadata_authority_add([authority]) - res = swh_storage.metadata_authority_get( - data.metadata_authority.type, data.metadata_authority.url - ) + res = swh_storage.metadata_authority_get(authority.type, authority.url) + assert res == authority - assert res == data.metadata_authority + def test_content_metadata_add(self, swh_storage, sample_data): + content = sample_data.content + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + content_metadata = 
sample_data.content_metadata[:2] - def test_content_metadata_add(self, swh_storage): - content = data.cont - fetcher = data.metadata_fetcher - authority = data.metadata_authority content_swhid = SWHID( - object_type="content", object_id=hash_to_bytes(content["sha1_git"]) + object_type="content", object_id=hash_to_bytes(content.sha1_git) ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) + swh_storage.object_metadata_add(content_metadata) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority ) assert result["next_page_token"] is None - assert [data.content_metadata, data.content_metadata2] == list( - sorted(result["results"], key=lambda x: x.discovery_date,) + assert list(sorted(result["results"], key=lambda x: x.discovery_date,)) == list( + content_metadata ) - def test_content_metadata_add_duplicate(self, swh_storage): + def test_content_metadata_add_duplicate(self, swh_storage, sample_data): """Duplicates should be silently updated.""" - content = data.cont - fetcher = data.metadata_fetcher - authority = data.metadata_authority + content = sample_data.content + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + content_metadata, content_metadata2 = sample_data.content_metadata[:2] content_swhid = SWHID( - object_type="content", object_id=hash_to_bytes(content["sha1_git"]) + object_type="content", object_id=hash_to_bytes(content.sha1_git) ) new_content_metadata2 = attr.evolve( - data.content_metadata2, format="new-format", metadata=b"new-metadata", + content_metadata2, format="new-format", metadata=b"new-metadata", ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) + swh_storage.object_metadata_add([content_metadata, content_metadata2]) swh_storage.object_metadata_add([new_content_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority ) assert result["next_page_token"] is None - expected_results1 = (data.content_metadata, new_content_metadata2) - expected_results2 = (data.content_metadata, data.content_metadata2) + expected_results1 = (content_metadata, new_content_metadata2) + expected_results2 = (content_metadata, content_metadata2) assert tuple(sorted(result["results"], key=lambda x: x.discovery_date,)) in ( expected_results1, # cassandra expected_results2, # postgresql ) - def test_content_metadata_get(self, swh_storage): - authority = data.metadata_authority - fetcher = data.metadata_fetcher - authority2 = data.metadata_authority2 - fetcher2 = data.metadata_fetcher2 - content1_swhid = SWHID( - object_type="content", object_id=hash_to_bytes(data.cont["sha1_git"]) - ) - content2_swhid = SWHID( - object_type="content", object_id=hash_to_bytes(data.cont2["sha1_git"]) - ) + def test_content_metadata_get(self, swh_storage, sample_data): + content, content2 = sample_data.contents[:2] + fetcher, fetcher2 = sample_data.fetchers[:2] + authority, authority2 = sample_data.authorities[:2] + ( + content1_metadata1, + content1_metadata2, + content1_metadata3, + ) = sample_data.content_metadata[:3] - content1_metadata1 = data.content_metadata - content1_metadata2 = data.content_metadata2 - content1_metadata3 = data.content_metadata3 - content2_metadata = attr.evolve(data.content_metadata2, id=content2_swhid) + content1_swhid = 
SWHID(object_type="content", object_id=content.sha1_git) + content2_swhid = SWHID(object_type="content", object_id=content2.sha1_git) + content2_metadata = attr.evolve(content1_metadata2, id=content2_swhid) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.object_metadata_add( [ content1_metadata1, content1_metadata2, content1_metadata3, content2_metadata, ] ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content1_swhid, authority ) assert result["next_page_token"] is None assert [content1_metadata1, content1_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content1_swhid, authority2 ) assert result["next_page_token"] is None assert [content1_metadata3] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content2_swhid, authority ) assert result["next_page_token"] is None assert [content2_metadata] == list(result["results"],) - def test_content_metadata_get_after(self, swh_storage): - content = data.cont - fetcher = data.metadata_fetcher - authority = data.metadata_authority - content_swhid = SWHID( - object_type="content", object_id=hash_to_bytes(content["sha1_git"]) - ) + def test_content_metadata_get_after(self, swh_storage, sample_data): + content = sample_data.content + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + content_metadata, content_metadata2 = sample_data.content_metadata[:2] + + content_swhid = SWHID(object_type="content", object_id=content.sha1_git) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) + swh_storage.object_metadata_add([content_metadata, content_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, - after=data.content_metadata.discovery_date - timedelta(seconds=1), + after=content_metadata.discovery_date - timedelta(seconds=1), ) assert result["next_page_token"] is None - assert [data.content_metadata, data.content_metadata2] == list( + assert [content_metadata, content_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, - after=data.content_metadata.discovery_date, + after=content_metadata.discovery_date, ) assert result["next_page_token"] is None - assert [data.content_metadata2] == result["results"] + assert result["results"] == [content_metadata2] result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, - after=data.content_metadata2.discovery_date, + after=content_metadata2.discovery_date, ) assert result["next_page_token"] is None - assert [] == result["results"] + assert result["results"] == [] - def test_content_metadata_get_paginate(self, swh_storage): - content = data.cont - fetcher = data.metadata_fetcher - authority = data.metadata_authority - content_swhid = SWHID( - object_type="content", object_id=hash_to_bytes(content["sha1_git"]) - ) + def test_content_metadata_get_paginate(self, swh_storage, sample_data): + content = sample_data.content + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + content_metadata, content_metadata2 = 
sample_data.content_metadata[:2] + + content_swhid = SWHID(object_type="content", object_id=content.sha1_git) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - - swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) - + swh_storage.object_metadata_add([content_metadata, content_metadata2]) swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority ) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1 ) assert result["next_page_token"] is not None - assert [data.content_metadata] == result["results"] + assert result["results"] == [content_metadata] result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1, page_token=result["next_page_token"], ) assert result["next_page_token"] is None - assert [data.content_metadata2] == result["results"] + assert result["results"] == [content_metadata2] - def test_content_metadata_get_paginate_same_date(self, swh_storage): - content = data.cont - fetcher1 = data.metadata_fetcher - fetcher2 = data.metadata_fetcher2 - authority = data.metadata_authority - content_swhid = SWHID( - object_type="content", object_id=hash_to_bytes(content["sha1_git"]) - ) + def test_content_metadata_get_paginate_same_date(self, swh_storage, sample_data): + content = sample_data.content + fetcher1, fetcher2 = sample_data.fetchers[:2] + authority = sample_data.metadata_authority + content_metadata, content_metadata2 = sample_data.content_metadata[:2] + + content_swhid = SWHID(object_type="content", object_id=content.sha1_git) swh_storage.metadata_fetcher_add([fetcher1, fetcher2]) swh_storage.metadata_authority_add([authority]) - content_metadata2 = attr.evolve( - data.content_metadata2, - discovery_date=data.content_metadata2.discovery_date, + new_content_metadata2 = attr.evolve( + content_metadata2, + discovery_date=content_metadata2.discovery_date, fetcher=attr.evolve(fetcher2, metadata=None), ) - swh_storage.object_metadata_add([data.content_metadata, content_metadata2]) + swh_storage.object_metadata_add([content_metadata, new_content_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1 ) assert result["next_page_token"] is not None - assert [data.content_metadata] == result["results"] + assert result["results"] == [content_metadata] result = swh_storage.object_metadata_get( MetadataTargetType.CONTENT, content_swhid, authority, limit=1, page_token=result["next_page_token"], ) assert result["next_page_token"] is None - assert [content_metadata2] == result["results"] + assert result["results"] == [new_content_metadata2] - def test_content_metadata_get__invalid_id(self, swh_storage): - fetcher = data.metadata_fetcher - authority = data.metadata_authority + def test_content_metadata_get__invalid_id(self, swh_storage, sample_data): + origin = sample_data.origin + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - - swh_storage.object_metadata_add([data.content_metadata, data.content_metadata2]) + swh_storage.object_metadata_add([content_metadata, content_metadata2]) with pytest.raises(StorageArgumentException, match="SWHID"): swh_storage.object_metadata_get( - MetadataTargetType.CONTENT, data.origin["url"], authority + 
MetadataTargetType.CONTENT, origin.url, authority ) - def test_origin_metadata_add(self, swh_storage): - origin = data.origin - fetcher = data.metadata_fetcher - authority = data.metadata_authority + def test_origin_metadata_add(self, swh_storage, sample_data): + origin = sample_data.origin + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] + assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) + swh_storage.object_metadata_add([origin_metadata, origin_metadata2]) result = swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin["url"], authority + MetadataTargetType.ORIGIN, origin.url, authority ) assert result["next_page_token"] is None - assert [data.origin_metadata, data.origin_metadata2] == list( - sorted(result["results"], key=lambda x: x.discovery_date) - ) + assert list(sorted(result["results"], key=lambda x: x.discovery_date)) == [ + origin_metadata, + origin_metadata2, + ] - def test_origin_metadata_add_duplicate(self, swh_storage): + def test_origin_metadata_add_duplicate(self, swh_storage, sample_data): """Duplicates should be silently updated.""" - origin = data.origin - fetcher = data.metadata_fetcher - authority = data.metadata_authority + origin = sample_data.origin + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} new_origin_metadata2 = attr.evolve( - data.origin_metadata2, format="new-format", metadata=b"new-metadata", + origin_metadata2, format="new-format", metadata=b"new-metadata", ) swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) + swh_storage.object_metadata_add([origin_metadata, origin_metadata2]) swh_storage.object_metadata_add([new_origin_metadata2]) result = swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin["url"], authority + MetadataTargetType.ORIGIN, origin.url, authority ) assert result["next_page_token"] is None # which of the two behavior happens is backend-specific. 
- expected_results1 = (data.origin_metadata, new_origin_metadata2) - expected_results2 = (data.origin_metadata, data.origin_metadata2) + expected_results1 = (origin_metadata, new_origin_metadata2) + expected_results2 = (origin_metadata, origin_metadata2) assert tuple(sorted(result["results"], key=lambda x: x.discovery_date,)) in ( expected_results1, # cassandra expected_results2, # postgresql ) - def test_origin_metadata_get(self, swh_storage): - authority = data.metadata_authority - fetcher = data.metadata_fetcher - authority2 = data.metadata_authority2 - fetcher2 = data.metadata_fetcher2 - origin_url1 = data.origin["url"] - origin_url2 = data.origin2["url"] - assert swh_storage.origin_add([data.origin, data.origin2]) == {"origin:add": 2} + def test_origin_metadata_get(self, swh_storage, sample_data): + origin, origin2 = sample_data.origins[:2] + fetcher, fetcher2 = sample_data.fetchers[:2] + authority, authority2 = sample_data.authorities[:2] + ( + origin1_metadata1, + origin1_metadata2, + origin1_metadata3, + ) = sample_data.origin_metadata[:3] + + assert swh_storage.origin_add([origin, origin2]) == {"origin:add": 2} - origin1_metadata1 = data.origin_metadata - origin1_metadata2 = data.origin_metadata2 - origin1_metadata3 = data.origin_metadata3 - origin2_metadata = attr.evolve(data.origin_metadata2, id=origin_url2) + origin2_metadata = attr.evolve(origin1_metadata2, id=origin2.url) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.object_metadata_add( [origin1_metadata1, origin1_metadata2, origin1_metadata3, origin2_metadata] ) result = swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin_url1, authority + MetadataTargetType.ORIGIN, origin.url, authority ) assert result["next_page_token"] is None assert [origin1_metadata1, origin1_metadata2] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin_url1, authority2 + MetadataTargetType.ORIGIN, origin.url, authority2 ) assert result["next_page_token"] is None assert [origin1_metadata3] == list( sorted(result["results"], key=lambda x: x.discovery_date,) ) result = swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin_url2, authority + MetadataTargetType.ORIGIN, origin2.url, authority ) assert result["next_page_token"] is None assert [origin2_metadata] == list(result["results"],) - def test_origin_metadata_get_after(self, swh_storage): - origin = data.origin - fetcher = data.metadata_fetcher - authority = data.metadata_authority + def test_origin_metadata_get_after(self, swh_storage, sample_data): + origin = sample_data.origin + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] + assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - - swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) + swh_storage.object_metadata_add([origin_metadata, origin_metadata2]) result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, - origin["url"], + origin.url, authority, - after=data.origin_metadata.discovery_date - timedelta(seconds=1), + after=origin_metadata.discovery_date - timedelta(seconds=1), ) assert result["next_page_token"] is None - assert [data.origin_metadata, data.origin_metadata2] == list( - 
sorted(result["results"], key=lambda x: x.discovery_date,) - ) + assert list(sorted(result["results"], key=lambda x: x.discovery_date,)) == [ + origin_metadata, + origin_metadata2, + ] result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, - origin["url"], + origin.url, authority, - after=data.origin_metadata.discovery_date, + after=origin_metadata.discovery_date, ) assert result["next_page_token"] is None - assert [data.origin_metadata2] == result["results"] + assert result["results"] == [origin_metadata2] result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, - origin["url"], + origin.url, authority, - after=data.origin_metadata2.discovery_date, + after=origin_metadata2.discovery_date, ) assert result["next_page_token"] is None - assert [] == result["results"] + assert result["results"] == [] - def test_origin_metadata_get_paginate(self, swh_storage): - origin = data.origin - fetcher = data.metadata_fetcher - authority = data.metadata_authority + def test_origin_metadata_get_paginate(self, swh_storage, sample_data): + origin = sample_data.origin + fetcher = sample_data.metadata_fetcher + authority = sample_data.metadata_authority + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) + swh_storage.object_metadata_add([origin_metadata, origin_metadata2]) swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin["url"], authority + MetadataTargetType.ORIGIN, origin.url, authority ) result = swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin["url"], authority, limit=1 + MetadataTargetType.ORIGIN, origin.url, authority, limit=1 ) assert result["next_page_token"] is not None - assert [data.origin_metadata] == result["results"] + assert result["results"] == [origin_metadata] result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, - origin["url"], + origin.url, authority, limit=1, page_token=result["next_page_token"], ) assert result["next_page_token"] is None - assert [data.origin_metadata2] == result["results"] + assert result["results"] == [origin_metadata2] - def test_origin_metadata_get_paginate_same_date(self, swh_storage): - origin = data.origin - fetcher1 = data.metadata_fetcher - fetcher2 = data.metadata_fetcher2 - authority = data.metadata_authority + def test_origin_metadata_get_paginate_same_date(self, swh_storage, sample_data): + origin = sample_data.origin + fetcher1, fetcher2 = sample_data.fetchers[:2] + authority = sample_data.metadata_authority + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} - swh_storage.metadata_fetcher_add([fetcher1]) - swh_storage.metadata_fetcher_add([fetcher2]) + swh_storage.metadata_fetcher_add([fetcher1, fetcher2]) swh_storage.metadata_authority_add([authority]) - origin_metadata2 = attr.evolve( - data.origin_metadata2, - discovery_date=data.origin_metadata2.discovery_date, + new_origin_metadata2 = attr.evolve( + origin_metadata2, + discovery_date=origin_metadata2.discovery_date, fetcher=attr.evolve(fetcher2, metadata=None), ) - swh_storage.object_metadata_add([data.origin_metadata, origin_metadata2]) + swh_storage.object_metadata_add([origin_metadata, new_origin_metadata2]) result = swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, origin["url"], 
authority, limit=1 + MetadataTargetType.ORIGIN, origin.url, authority, limit=1 ) assert result["next_page_token"] is not None - assert [data.origin_metadata] == result["results"] + assert result["results"] == [origin_metadata] result = swh_storage.object_metadata_get( MetadataTargetType.ORIGIN, - origin["url"], + origin.url, authority, limit=1, page_token=result["next_page_token"], ) assert result["next_page_token"] is None - assert [origin_metadata2] == result["results"] + assert result["results"] == [new_origin_metadata2] - def test_origin_metadata_add_missing_authority(self, swh_storage): - origin = data.origin - fetcher = data.metadata_fetcher + def test_origin_metadata_add_missing_authority(self, swh_storage, sample_data): + origin = sample_data.origin + fetcher = sample_data.metadata_fetcher + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) with pytest.raises(StorageArgumentException, match="authority"): - swh_storage.object_metadata_add( - [data.origin_metadata, data.origin_metadata2] - ) + swh_storage.object_metadata_add([origin_metadata, origin_metadata2]) - def test_origin_metadata_add_missing_fetcher(self, swh_storage): - origin = data.origin - authority = data.metadata_authority + def test_origin_metadata_add_missing_fetcher(self, swh_storage, sample_data): + origin = sample_data.origin + authority = sample_data.metadata_authority + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_authority_add([authority]) with pytest.raises(StorageArgumentException, match="fetcher"): - swh_storage.object_metadata_add( - [data.origin_metadata, data.origin_metadata2] - ) - - def test_origin_metadata_get__invalid_id_type(self, swh_storage): - origin = data.origin - fetcher = data.metadata_fetcher - authority = data.metadata_authority + swh_storage.object_metadata_add([origin_metadata, origin_metadata2]) + + def test_origin_metadata_get__invalid_id_type(self, swh_storage, sample_data): + origin = sample_data.origin + authority = sample_data.metadata_authority + fetcher = sample_data.metadata_fetcher + origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] + content_metadata = sample_data.content_metadata[0] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) - swh_storage.object_metadata_add([data.origin_metadata, data.origin_metadata2]) + swh_storage.object_metadata_add([origin_metadata, origin_metadata2]) with pytest.raises(StorageArgumentException, match="SWHID"): swh_storage.object_metadata_get( - MetadataTargetType.ORIGIN, data.content_metadata.id, authority, + MetadataTargetType.ORIGIN, content_metadata.id, authority, ) class TestStorageGeneratedData: def test_generate_content_get(self, swh_storage, swh_contents): contents_with_data = [c.to_dict() for c in swh_contents if c.status != "absent"] # input the list of sha1s we want from storage get_sha1s = [c["sha1"] for c in contents_with_data] # retrieve contents actual_contents = list(swh_storage.content_get(get_sha1s)) assert None not in actual_contents assert_contents_ok(contents_with_data, actual_contents) def test_generate_content_get_metadata(self, swh_storage, swh_contents): # input the list of sha1s we want from storage expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] 
get_sha1s = [c["sha1"] for c in expected_contents] # retrieve contents meta_contents = swh_storage.content_get_metadata(get_sha1s) assert len(list(meta_contents)) == len(get_sha1s) actual_contents = [] for contents in meta_contents.values(): actual_contents.extend(contents) keys_to_check = {"length", "status", "sha1", "sha1_git", "sha256", "blake2s256"} assert_contents_ok( expected_contents, actual_contents, keys_to_check=keys_to_check ) def test_generate_content_get_range(self, swh_storage, swh_contents): """content_get_range returns complete range""" present_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] get_sha1s = sorted([c.sha1 for c in swh_contents if c.status != "absent"]) start = get_sha1s[2] end = get_sha1s[-2] actual_result = swh_storage.content_get_range(start, end) assert actual_result["next"] is None actual_contents = actual_result["contents"] expected_contents = [c for c in present_contents if start <= c["sha1"] <= end] if expected_contents: assert_contents_ok(expected_contents, actual_contents, ["sha1"]) else: assert actual_contents == [] def test_generate_content_get_range_full(self, swh_storage, swh_contents): """content_get_range for a full range returns all available contents""" present_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] start = b"0" * 40 end = b"f" * 40 actual_result = swh_storage.content_get_range(start, end) assert actual_result["next"] is None actual_contents = actual_result["contents"] expected_contents = [c for c in present_contents if start <= c["sha1"] <= end] if expected_contents: assert_contents_ok(expected_contents, actual_contents, ["sha1"]) else: assert actual_contents == [] def test_generate_content_get_range_empty(self, swh_storage, swh_contents): """content_get_range for an empty range returns nothing""" start = b"0" * 40 end = b"f" * 40 actual_result = swh_storage.content_get_range(end, start) assert actual_result["next"] is None assert len(actual_result["contents"]) == 0 def test_generate_content_get_range_limit_none(self, swh_storage): """content_get_range call with wrong limit input should fail""" with pytest.raises(StorageArgumentException) as e: swh_storage.content_get_range(start=None, end=None, limit=None) assert e.value.args == ("limit should not be None",) def test_generate_content_get_range_no_limit(self, swh_storage, swh_contents): """content_get_range returns contents within range provided""" # input the list of sha1s we want from storage get_sha1s = sorted([c.sha1 for c in swh_contents if c.status != "absent"]) start = get_sha1s[0] end = get_sha1s[-1] # retrieve contents actual_result = swh_storage.content_get_range(start, end) actual_contents = actual_result["contents"] assert actual_result["next"] is None assert len(actual_contents) == len(get_sha1s) expected_contents = [c.to_dict() for c in swh_contents if c.status != "absent"] assert_contents_ok(expected_contents, actual_contents, ["sha1"]) def test_generate_content_get_range_limit(self, swh_storage, swh_contents): """content_get_range paginates results if limit exceeded""" contents_map = {c.sha1: c.to_dict() for c in swh_contents} # input the list of sha1s we want from storage get_sha1s = sorted([c.sha1 for c in swh_contents if c.status != "absent"]) start = get_sha1s[0] end = get_sha1s[-1] # retrieve contents limited to n-1 results limited_results = len(get_sha1s) - 1 actual_result = swh_storage.content_get_range(start, end, limit=limited_results) actual_contents = actual_result["contents"] assert actual_result["next"] == 
get_sha1s[-1] assert len(actual_contents) == limited_results expected_contents = [contents_map[sha1] for sha1 in get_sha1s[:-1]] assert_contents_ok(expected_contents, actual_contents, ["sha1"]) # retrieve next part actual_results2 = swh_storage.content_get_range(start=end, end=end) assert actual_results2["next"] is None actual_contents2 = actual_results2["contents"] assert len(actual_contents2) == 1 assert_contents_ok([contents_map[get_sha1s[-1]]], actual_contents2, ["sha1"]) def test_origin_get_range_from_zero(self, swh_storage, swh_origins): actual_origins = list( swh_storage.origin_get_range(origin_from=0, origin_count=0) ) assert len(actual_origins) == 0 actual_origins = list( swh_storage.origin_get_range(origin_from=0, origin_count=1) ) assert len(actual_origins) == 1 assert actual_origins[0]["id"] == 1 - assert actual_origins[0]["url"] == swh_origins[0]["url"] + assert actual_origins[0]["url"] == swh_origins[0].url @pytest.mark.parametrize( "origin_from,origin_count", [(1, 1), (1, 10), (1, 20), (1, 101), (11, 0), (11, 10), (91, 11)], ) def test_origin_get_range( self, swh_storage, swh_origins, origin_from, origin_count ): actual_origins = list( swh_storage.origin_get_range( origin_from=origin_from, origin_count=origin_count ) ) origins_with_id = list(enumerate(swh_origins, start=1)) expected_origins = [ - {"url": origin["url"], "id": origin_id,} + {"url": origin.url, "id": origin_id,} for (origin_id, origin) in origins_with_id[ origin_from - 1 : origin_from + origin_count - 1 ] ] assert actual_origins == expected_origins @pytest.mark.parametrize("limit", [1, 7, 10, 100, 1000]) def test_origin_list(self, swh_storage, swh_origins, limit): returned_origins = [] page_token = None i = 0 while True: result = swh_storage.origin_list(page_token=page_token, limit=limit) assert len(result["origins"]) <= limit returned_origins.extend(origin["url"] for origin in result["origins"]) i += 1 page_token = result.get("next_page_token") if page_token is None: assert i * limit >= len(swh_origins) break else: assert len(result["origins"]) == limit - expected_origins = [origin["url"] for origin in swh_origins] + expected_origins = [origin.url for origin in swh_origins] assert sorted(returned_origins) == sorted(expected_origins) - ORIGINS = [ - "https://github.com/user1/repo1", - "https://github.com/user2/repo1", - "https://github.com/user3/repo1", - "https://gitlab.com/user1/repo1", - "https://gitlab.com/user2/repo1", - "https://forge.softwareheritage.org/source/repo1", - ] - - def test_origin_count(self, swh_storage): - swh_storage.origin_add([{"url": url} for url in self.ORIGINS]) + def test_origin_count(self, swh_storage, sample_data): + swh_storage.origin_add(sample_data.origins) assert swh_storage.origin_count("github") == 3 assert swh_storage.origin_count("gitlab") == 2 assert swh_storage.origin_count(".*user.*", regexp=True) == 5 assert swh_storage.origin_count(".*user.*", regexp=False) == 0 assert swh_storage.origin_count(".*user1.*", regexp=True) == 2 assert swh_storage.origin_count(".*user1.*", regexp=False) == 0 - def test_origin_count_with_visit_no_visits(self, swh_storage): - swh_storage.origin_add([{"url": url} for url in self.ORIGINS]) + def test_origin_count_with_visit_no_visits(self, swh_storage, sample_data): + swh_storage.origin_add(sample_data.origins) # none of them have visits, so with_visit=True => 0 assert swh_storage.origin_count("github", with_visit=True) == 0 assert swh_storage.origin_count("gitlab", with_visit=True) == 0 assert swh_storage.origin_count(".*user.*", regexp=True, 
with_visit=True) == 0 assert swh_storage.origin_count(".*user.*", regexp=False, with_visit=True) == 0 assert swh_storage.origin_count(".*user1.*", regexp=True, with_visit=True) == 0 assert swh_storage.origin_count(".*user1.*", regexp=False, with_visit=True) == 0 - def test_origin_count_with_visit_with_visits_no_snapshot(self, swh_storage): - swh_storage.origin_add([{"url": url} for url in self.ORIGINS]) + def test_origin_count_with_visit_with_visits_no_snapshot( + self, swh_storage, sample_data + ): + swh_storage.origin_add(sample_data.origins) origin_url = "https://github.com/user1/repo1" visit = OriginVisit(origin=origin_url, date=now(), type="git",) swh_storage.origin_visit_add([visit]) assert swh_storage.origin_count("github", with_visit=False) == 3 # it has a visit, but no snapshot, so with_visit=True => 0 assert swh_storage.origin_count("github", with_visit=True) == 0 assert swh_storage.origin_count("gitlab", with_visit=False) == 2 # these gitlab origins have no visit assert swh_storage.origin_count("gitlab", with_visit=True) == 0 assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=False) == 1 ) assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=True) == 0 ) assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 0 - def test_origin_count_with_visit_with_visits_and_snapshot(self, swh_storage): - swh_storage.origin_add([{"url": url} for url in self.ORIGINS]) + def test_origin_count_with_visit_with_visits_and_snapshot( + self, swh_storage, sample_data + ): + snapshot = sample_data.snapshot + swh_storage.origin_add(sample_data.origins) - swh_storage.snapshot_add([data.snapshot]) + swh_storage.snapshot_add([snapshot]) origin_url = "https://github.com/user1/repo1" visit = OriginVisit(origin=origin_url, date=now(), type="git",) visit = swh_storage.origin_visit_add([visit])[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=visit.visit, date=now(), status="ongoing", - snapshot=data.snapshot["id"], + snapshot=snapshot.id, ) ] ) assert swh_storage.origin_count("github", with_visit=False) == 3 # github/user1 has a visit and a snapshot, so with_visit=True => 1 assert swh_storage.origin_count("github", with_visit=True) == 1 assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=False) == 1 ) assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=True) == 1 ) assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 1 @settings(suppress_health_check=[HealthCheck.too_slow]) - @given(strategies.lists(objects(), max_size=2)) + @given(strategies.lists(objects(split_content=True), max_size=2)) def test_add_arbitrary(self, swh_storage, objects): for (obj_type, obj) in objects: - obj = obj.to_dict() - if obj_type == "origin_visit": - origin_url = obj.pop("origin") - swh_storage.origin_add_one({"url": origin_url}) - if "visit" in obj: - del obj["visit"] - visit = OriginVisit( - origin=origin_url, date=obj["date"], type=obj["type"], - ) + if obj.object_type == "origin_visit": + swh_storage.origin_add([Origin(url=obj.origin)]) + visit = OriginVisit(origin=obj.origin, date=obj.date, type=obj.type,) swh_storage.origin_visit_add([visit]) else: - if obj_type == "content" and obj["status"] == "absent": - obj_type = "skipped_content" method = getattr(swh_storage, obj_type + "_add") try: method([obj]) except HashCollision: pass @pytest.mark.db class TestLocalStorage: """Test the local storage""" # This test is only relevant on the local storage, with an 
actual # objstorage raising an exception - def test_content_add_objstorage_exception(self, swh_storage): + def test_content_add_objstorage_exception(self, swh_storage, sample_data): + content = sample_data.content + swh_storage.objstorage.content_add = Mock( side_effect=Exception("mocked broken objstorage") ) - with pytest.raises(Exception) as e: - swh_storage.content_add([data.cont]) + with pytest.raises(Exception, match="mocked broken"): + swh_storage.content_add([content]) - assert e.value.args == ("mocked broken objstorage",) - missing = list(swh_storage.content_missing([data.cont])) - assert missing == [data.cont["sha1"]] + missing = list(swh_storage.content_missing([content.hashes()])) + assert missing == [content.sha1] @pytest.mark.db class TestStorageRaceConditions: @pytest.mark.xfail - def test_content_add_race(self, swh_storage): + def test_content_add_race(self, swh_storage, sample_data): + content = sample_data.content results = queue.Queue() def thread(): try: with db_transaction(swh_storage) as (db, cur): - ret = swh_storage.content_add([data.cont], db=db, cur=cur) + ret = swh_storage.content_add([content], db=db, cur=cur) results.put((threading.get_ident(), "data", ret)) except Exception as e: results.put((threading.get_ident(), "exc", e)) t1 = threading.Thread(target=thread) t2 = threading.Thread(target=thread) t1.start() # this avoids the race condition # import time # time.sleep(1) t2.start() t1.join() t2.join() r1 = results.get(block=False) r2 = results.get(block=False) with pytest.raises(queue.Empty): results.get(block=False) assert r1[0] != r2[0] assert r1[1] == "data", "Got exception %r in Thread%s" % (r1[2], r1[0]) assert r2[1] == "data", "Got exception %r in Thread%s" % (r2[2], r2[0]) @pytest.mark.db class TestPgStorage: """This class is dedicated for the rare case where the schema needs to be altered dynamically. Otherwise, the tests could be blocking when ran altogether. 
""" - def test_content_update_with_new_cols(self, swh_storage): + def test_content_update_with_new_cols(self, swh_storage, sample_data): + content, content2 = sample_data.contents[:2] + swh_storage.journal_writer.journal = None # TODO, not supported with db_transaction(swh_storage) as (_, cur): cur.execute( """alter table content add column test text default null, add column test2 text default null""" ) - cont = copy.deepcopy(data.cont2) - swh_storage.content_add([cont]) + swh_storage.content_add([content]) + + cont = content.to_dict() cont["test"] = "value-1" cont["test2"] = "value-2" swh_storage.content_update([cont], keys=["test", "test2"]) with db_transaction(swh_storage) as (_, cur): cur.execute( """SELECT sha1, sha1_git, sha256, length, status, test, test2 FROM content WHERE sha1 = %s""", (cont["sha1"],), ) datum = cur.fetchone() assert datum == ( cont["sha1"], cont["sha1_git"], cont["sha256"], cont["length"], "visible", cont["test"], cont["test2"], ) with db_transaction(swh_storage) as (_, cur): cur.execute( """alter table content drop column test, drop column test2""" ) - def test_content_add_db(self, swh_storage): - cont = data.cont + def test_content_add_db(self, swh_storage, sample_data): + content = sample_data.content - actual_result = swh_storage.content_add([cont]) + actual_result = swh_storage.content_add([content]) assert actual_result == { "content:add": 1, - "content:add:bytes": cont["length"], + "content:add:bytes": content.length, } if hasattr(swh_storage, "objstorage"): - assert cont["sha1"] in swh_storage.objstorage.objstorage + assert content.sha1 in swh_storage.objstorage.objstorage with db_transaction(swh_storage) as (_, cur): cur.execute( "SELECT sha1, sha1_git, sha256, length, status" " FROM content WHERE sha1 = %s", - (cont["sha1"],), + (content.sha1,), ) datum = cur.fetchone() assert datum == ( - cont["sha1"], - cont["sha1_git"], - cont["sha256"], - cont["length"], + content.sha1, + content.sha1_git, + content.sha256, + content.length, "visible", ) - expected_cont = cont.copy() - del expected_cont["data"] contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 - for obj in contents: - obj_d = obj.to_dict() - del obj_d["ctime"] - assert obj_d == expected_cont + assert contents[0] == attr.evolve(content, data=None) - def test_content_add_metadata_db(self, swh_storage): - cont = data.cont - del cont["data"] - cont["ctime"] = now() + def test_content_add_metadata_db(self, swh_storage, sample_data): + content = attr.evolve(sample_data.content, data=None, ctime=now()) - actual_result = swh_storage.content_add_metadata([cont]) + actual_result = swh_storage.content_add_metadata([content]) assert actual_result == { "content:add": 1, } if hasattr(swh_storage, "objstorage"): - assert cont["sha1"] not in swh_storage.objstorage.objstorage + assert content.sha1 not in swh_storage.objstorage.objstorage with db_transaction(swh_storage) as (_, cur): cur.execute( "SELECT sha1, sha1_git, sha256, length, status" " FROM content WHERE sha1 = %s", - (cont["sha1"],), + (content.sha1,), ) datum = cur.fetchone() assert datum == ( - cont["sha1"], - cont["sha1_git"], - cont["sha256"], - cont["length"], + content.sha1, + content.sha1_git, + content.sha256, + content.length, "visible", ) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 - for obj in contents: - obj_d = obj.to_dict() - assert obj_d == cont + assert contents[0] 
== content - def test_skipped_content_add_db(self, swh_storage): - cont = data.skipped_cont - cont2 = data.skipped_cont2 - cont2["blake2s256"] = None + def test_skipped_content_add_db(self, swh_storage, sample_data): + content, cont2 = sample_data.skipped_contents[:2] + content2 = attr.evolve(cont2, blake2s256=None) - actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) + actual_result = swh_storage.skipped_content_add([content, content, content2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} with db_transaction(swh_storage) as (_, cur): cur.execute( "SELECT sha1, sha1_git, sha256, blake2s256, " "length, status, reason " "FROM skipped_content ORDER BY sha1_git" ) dbdata = cur.fetchall() assert len(dbdata) == 2 assert dbdata[0] == ( - cont["sha1"], - cont["sha1_git"], - cont["sha256"], - cont["blake2s256"], - cont["length"], + content.sha1, + content.sha1_git, + content.sha256, + content.blake2s256, + content.length, "absent", "Content too long", ) assert dbdata[1] == ( - cont2["sha1"], - cont2["sha1_git"], - cont2["sha256"], - cont2["blake2s256"], - cont2["length"], + content2.sha1, + content2.sha1_git, + content2.sha256, + content2.blake2s256, + content2.length, "absent", "Content too long", ) def test_clear_buffers(self, swh_storage): """Calling clear buffers on real storage does nothing """ assert swh_storage.clear_buffers() is None def test_flush(self, swh_storage): """Calling clear buffers on real storage does nothing """ assert swh_storage.flush() == {} diff --git a/swh/storage/tests/test_storage_data.py b/swh/storage/tests/test_storage_data.py new file mode 100644 index 00000000..be9c7bce --- /dev/null +++ b/swh/storage/tests/test_storage_data.py @@ -0,0 +1,29 @@ +# Copyright (C) 2020 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public License version 3, or any later version +# See top-level LICENSE file for more information + +from swh.model.model import BaseModel + +from swh.storage.tests.storage_data import StorageData + + +def test_storage_data(): + data = StorageData() + + for attribute_key in [ + "contents", + "skipped_contents", + "directories", + "revisions", + "releases", + "snapshots", + "origins", + "origin_visits", + "fetchers", + "authorities", + "origin_metadata", + "content_metadata", + ]: + for obj in getattr(data, attribute_key): + assert isinstance(obj, BaseModel) diff --git a/swh/storage/validate.py b/swh/storage/validate.py deleted file mode 100644 index 7231fe48..00000000 --- a/swh/storage/validate.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (C) 2020 The Software Heritage developers -# See the AUTHORS file at the top-level directory of this distribution -# License: GNU General Public License version 3, or any later version -# See top-level LICENSE file for more information - -import contextlib -from typing import Dict, Iterable, Iterator, Optional, Tuple, Type, TypeVar, Union - -from deprecated import deprecated - -from swh.model.model import ( - SkippedContent, - Content, - Directory, - Revision, - Release, - Snapshot, - OriginVisit, - Origin, -) - -from . 
import get_storage -from .exc import StorageArgumentException - - -VALIDATION_EXCEPTIONS = [ - KeyError, - TypeError, - ValueError, -] - - -@contextlib.contextmanager -def convert_validation_exceptions(): - """Catches validation errors arguments, and re-raises a - StorageArgumentException.""" - try: - yield - except tuple(VALIDATION_EXCEPTIONS) as e: - raise StorageArgumentException(str(e)) - - -ModelObject = TypeVar( - "ModelObject", - Content, - SkippedContent, - Directory, - Revision, - Release, - Snapshot, - OriginVisit, - Origin, -) - - -def dict_converter( - model: Type[ModelObject], obj: Union[Dict, ModelObject] -) -> ModelObject: - """Convert dicts to model objects; Passes through model objects as well.""" - if isinstance(obj, dict): - with convert_validation_exceptions(): - return model.from_dict(obj) - else: - return obj - - -class ValidatingProxyStorage: - """Storage implementation converts dictionaries to swh-model objects - before calling its backend, and back to dicts before returning results - - For test purposes. - """ - - def __init__(self, storage): - self.storage = get_storage(**storage) - - def __getattr__(self, key): - if key == "storage": - raise AttributeError(key) - return getattr(self.storage, key) - - def content_add(self, content: Iterable[Union[Content, Dict]]) -> Dict: - return self.storage.content_add([dict_converter(Content, c) for c in content]) - - def content_add_metadata(self, content: Iterable[Union[Content, Dict]]) -> Dict: - return self.storage.content_add_metadata( - [dict_converter(Content, c) for c in content] - ) - - def skipped_content_add( - self, content: Iterable[Union[SkippedContent, Dict]] - ) -> Dict: - return self.storage.skipped_content_add( - [dict_converter(SkippedContent, c) for c in content] - ) - - def directory_add(self, directories: Iterable[Union[Directory, Dict]]) -> Dict: - return self.storage.directory_add( - [dict_converter(Directory, d) for d in directories] - ) - - def revision_add(self, revisions: Iterable[Union[Revision, Dict]]) -> Dict: - return self.storage.revision_add( - [dict_converter(Revision, r) for r in revisions] - ) - - def revision_get(self, revisions: Iterable[bytes]) -> Iterator[Optional[Dict]]: - rev_dicts = self.storage.revision_get(revisions) - with convert_validation_exceptions(): - for rev_dict in rev_dicts: - if rev_dict is None: - yield None - else: - yield Revision.from_dict(rev_dict).to_dict() - - def revision_log( - self, revisions: Iterable[bytes], limit: Optional[int] = None - ) -> Iterator[Dict]: - for rev_dict in self.storage.revision_log(revisions, limit): - with convert_validation_exceptions(): - rev_obj = Revision.from_dict(rev_dict) - yield rev_obj.to_dict() - - def revision_shortlog( - self, revisions: Iterable[bytes], limit: Optional[int] = None - ) -> Iterator[Tuple[bytes, Tuple]]: - for rev, parents in self.storage.revision_shortlog(revisions, limit): - yield (rev, tuple(parents)) - - def release_add(self, releases: Iterable[Union[Dict, Release]]) -> Dict: - return self.storage.release_add( - [dict_converter(Release, release) for release in releases] - ) - - def snapshot_add(self, snapshots: Iterable[Union[Dict, Snapshot]]) -> Dict: - return self.storage.snapshot_add( - [dict_converter(Snapshot, snapshot) for snapshot in snapshots] - ) - - def origin_visit_add(self, visits: Iterable[OriginVisit]) -> Iterable[OriginVisit]: - return self.storage.origin_visit_add(visits) - - def origin_add(self, origins: Iterable[Union[Dict, Origin]]) -> Dict[str, int]: - return 
self.storage.origin_add([dict_converter(Origin, o) for o in origins]) - - @deprecated("Use origin_add([origin]) instead") - def origin_add_one(self, origin: Union[Dict, Origin]) -> int: - return self.storage.origin_add_one(dict_converter(Origin, origin)) - - def clear_buffers(self, object_types: Optional[Iterable[str]] = None) -> None: - return self.storage.clear_buffers(object_types) - - def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: - return self.storage.flush(object_types)
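The recurring change across the test hunks above is the switch from hand-copied dict fixtures (`data.cont.copy()`, `del cont["data"]`, ...) to immutable attrs model objects exposed through a `sample_data` fixture, with variants derived via `attr.evolve`. The following is a minimal, self-contained sketch of that pattern, assuming hypothetical `SampleContent`/`SampleData` classes; it is not the actual `StorageData` implementation shipped in `swh.storage.tests.storage_data`.

```python
import attr
import pytest


@attr.s(frozen=True, slots=True)
class SampleContent:
    # Hypothetical stand-in for the attrs-based Content model; only a few
    # fields are kept to keep the sketch short.
    sha1 = attr.ib(type=bytes)
    sha256 = attr.ib(type=bytes)
    length = attr.ib(type=int)
    data = attr.ib(type=bytes, default=None)

    def hashes(self):
        # Identifying hashes only, in the spirit of content.hashes() used by
        # the content_find tests above.
        return {"sha1": self.sha1, "sha256": self.sha256}


@attr.s(frozen=True, slots=True)
class SampleData:
    # Stand-in for the sample-object collection exposed by the sample_data fixture.
    contents = attr.ib(
        factory=lambda: (
            SampleContent(sha1=b"\x01" * 20, sha256=b"\x01" * 32, length=3, data=b"foo"),
            SampleContent(sha1=b"\x02" * 20, sha256=b"\x02" * 32, length=3, data=b"bar"),
        )
    )

    @property
    def content(self):
        return self.contents[0]


@pytest.fixture
def sample_data():
    return SampleData()


def test_variant_does_not_mutate_the_shared_sample(sample_data):
    content = sample_data.content
    # Derive a metadata-only variant instead of copying a dict and deleting keys.
    metadata_only = attr.evolve(content, data=None)
    assert metadata_only.data is None
    assert content.data == b"foo"  # the shared, frozen sample object is untouched
    assert metadata_only.hashes() == content.hashes()
```

Because the sample objects are frozen, one test cannot leak mutated state into another the way the shared `data.cont` dicts could; every variant has to go through `attr.evolve`.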
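The patch also deletes `swh/storage/validate.py`, whose `ValidatingProxyStorage` converted dict arguments into model objects before delegating to a wrapped backend, passing everything else through via `__getattr__`. Below is a minimal sketch of that proxy idea under toy assumptions; `InMemoryBackend` and the two-field `Content` class are illustrative stand-ins, not swh.storage APIs.

```python
from typing import Dict, Iterable, Union

import attr


@attr.s(frozen=True)
class Content:
    # Illustrative two-field model; the real model has many more attributes.
    sha1 = attr.ib(type=bytes)
    length = attr.ib(type=int)


class InMemoryBackend:
    # Toy backend standing in for an actual storage implementation.
    def __init__(self):
        self.contents = []

    def content_add(self, contents: Iterable[Content]) -> Dict[str, int]:
        added = list(contents)
        self.contents.extend(added)
        return {"content:add": len(added)}


class ValidatingProxy:
    """Convert dict arguments into model objects, delegate everything else."""

    def __init__(self, backend):
        self._backend = backend

    def __getattr__(self, key):
        # Called only when normal attribute lookup fails, so every method we
        # do not override falls through to the wrapped backend.
        return getattr(self._backend, key)

    def content_add(self, contents: Iterable[Union[Content, Dict]]) -> Dict[str, int]:
        converted = [c if isinstance(c, Content) else Content(**c) for c in contents]
        return self._backend.content_add(converted)


storage = ValidatingProxy(InMemoryBackend())
assert storage.content_add([{"sha1": b"\x00" * 20, "length": 3}]) == {"content:add": 1}
```

With the tests now handing model objects to the storage directly, the dict-conversion layer becomes redundant, which is why the proxy module can be dropped outright.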