diff --git a/PKG-INFO b/PKG-INFO
index 0df6abfb..782ed476 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,223 +1,223 @@
Metadata-Version: 2.1
Name: swh.storage
-Version: 0.30.0
+Version: 0.30.1
Summary: Software Heritage storage manager
Home-page: https://forge.softwareheritage.org/diffusion/DSTO/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/
Description:
swh-storage
===========

Abstraction layer over the archive, allowing access to all stored source code artifacts as well as their metadata.

See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details.

## Quick start

### Dependencies

Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a Cassandra server executable.

#### Debian-like host

```
$ sudo apt install libpq-dev postgresql-11 cassandra
```

#### Non Debian-like host

The tests expect the path to the `cassandra` executable to either be unspecified, in which case it is looked up at `/usr/sbin/cassandra`, or to be specified through the `SWH_CASSANDRA_BIN` environment variable.
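For instance, on a host where Cassandra was installed by hand, the tests can be pointed at the binary before running them (a sketch; the installation path below is a hypothetical example):

```
$ # hypothetical install location; adjust to wherever cassandra lives on your host
$ export SWH_CASSANDRA_BIN=/opt/cassandra/bin/cassandra
```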
Optionally, you can avoid running the cassandra tests:

```
(swh) :~/swh-storage$ tox -- -m 'not cassandra'
```

### Installation

It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for more details on how to set up a working environment.

You can install the package directly from [pypi](https://pypi.org/p/swh.storage):

```
(swh) :~$ pip install swh.storage
[...]
```

Or from sources:

```
(swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git
[...]
(swh) :~$ cd swh-storage
(swh) :~/swh-storage$ pip install .
[...]
```

Then you can check it's properly installed:

```
(swh) :~$ swh storage --help
Usage: swh storage [OPTIONS] COMMAND [ARGS]...

  Software Heritage Storage tools.

Options:
  -h, --help  Show this message and exit.

Commands:
  rpc-serve  Software Heritage Storage RPC server.
```

## Tests

The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/):

```
(swh) :~$ pip install tox
```

### tox

From the sources directory, simply use tox:

```
(swh) :~/swh-storage$ tox
[...]
========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ==========
_______________________________ summary ________________________________
flake8: commands succeeded
py3: commands succeeded
congratulations :)
```

Note: it is possible to set the `JAVA_HOME` environment variable to specify the version of the JVM to be used by Cassandra. For example, at the time of writing this, Cassandra does not support Java 14, so one may want to use Java 11 instead:

```
(swh) :~/swh-storage$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
(swh) :~/swh-storage$ tox
[...]
```

## Development

The storage server can be started locally. It requires a configuration file and a running Postgresql database.
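As a minimal sketch, assuming a local Postgresql server, the development database referenced by the sample configuration below can be created with standard Postgresql tooling:

```
$ # create the database the sample configuration's "db" connection string points at
$ createdb softwareheritage-dev
```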
### Sample configuration

A typical `storage.yml` configuration file is:

```
storage:
  cls: local
  db: "dbname=softwareheritage-dev user= password="
  objstorage:
    cls: pathslicing
    root: /tmp/swh-storage/
    slicing: 0:2/2:4/4:6
```

This configuration uses:

- a local storage instance whose db connection string points at the local `softwareheritage-dev` database,
- a local `pathslicing` objstorage instance whose:
  - `root` path is `/tmp/swh-storage`,
  - slicing scheme is `0:2/2:4/4:6`.

The slicing scheme means that a content identifier (sha1) is stored on disk under three directory levels: the first level is named after the first 2 hex characters of the hash, the second after the next 2, and the third after the following 2; the file holding the raw content is then named after the complete hash. For example, `00062f8bd330715c4f819373653d97b3cd34394c` will be stored at `00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c`.
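To illustrate the slicing arithmetic only (this is not how swh-storage computes paths internally), each `start:end` range of the scheme selects a substring of the hex digest:

```
$ sha1=00062f8bd330715c4f819373653d97b3cd34394c
$ # 0:2 -> 00, 2:4 -> 06, 4:6 -> 2f
$ echo "${sha1:0:2}/${sha1:2:2}/${sha1:4:2}/${sha1}"
00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c
```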
Note that the `root` path should exist on disk before starting the server.
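For the sample configuration above, this simply means creating the objstorage root beforehand:

```
$ mkdir -p /tmp/swh-storage
```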
### Starting the storage server

If the python package has been properly installed (e.g. in a virtualenv), you should be able to use the command:

```
(swh) :~/swh-storage$ swh storage rpc-serve storage.yml
```

This runs a local swh-storage API server on port 5002:

```
(swh) :~/swh-storage$ curl http://127.0.0.1:5002
Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information
```
### And then what?

In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc.), you can define a remote storage with this snippet of yaml configuration:

```
storage:
  cls: remote
  url: http://localhost:5002/
```

You could directly define a local storage with the following snippet (note that, as of this release, the `local` cls is deprecated in favor of `postgresql`; see the v0.29.0 and v0.30.1 changelog entries below):

```
storage:
  cls: local
  db: service=swh-dev
  objstorage:
    cls: pathslicing
    root: /home/storage/swh-storage/
    slicing: 0:2/2:4/4:6
```
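The changelog below also mentions several proxy storages (buffer, filter, retry, tenacious, ...). A hedged sketch of composing them in front of a backend, assuming the `pipeline` class and `steps` layout referred to in the v0.0.159 entry (exact step options omitted), could look like:

```
storage:
  cls: pipeline
  steps:
    - cls: filter
    - cls: buffer
    - cls: remote
      url: http://localhost:5002/
```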
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
Provides-Extra: journal

diff --git a/debian/changelog b/debian/changelog
index 7c8c1190..490b29e9 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,2612 +1,2619 @@
-swh-storage (0.30.0-1~swh1~bpo10+1) buster-swh; urgency=medium
+swh-storage (0.30.1-1~swh1) unstable-swh; urgency=medium

-  * Rebuild for buster-swh
+  * New upstream release 0.30.1 - (tagged by Antoine R. Dumont (@ardumont) on 2021-05-21 10:09:02 +0200)
+  * Upstream changes: - v0.30.1 - Finalize the config "local" deprecation in favor of "postgresql" - tests: Make test parameters order deterministic, so they don't crash pytest-xdist - test_cassandra: Improve error when the process is started but not listening

- -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 18 May 2021 14:52:31 +0000
+ -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 21 May 2021 08:22:33 +0000

swh-storage (0.30.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.30.0 - (tagged by David Douard on 2021-05-18 16:34:25 +0200)
  * Upstream changes: - v0.30.0
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 18 May 2021 14:45:21 +0000

swh-storage (0.29.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.29.1 - (tagged by Nicolas Dandrimont on 2021-05-14 18:31:52 +0200)
  * Upstream changes: - Release swh.storage 0.29.1 - Add missing db migration
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 14 May 2021 16:59:42 +0000

swh-storage (0.29.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.29.0 - (tagged by Valentin Lorentz on 2021-05-11 15:04:58 +0200)
  * Upstream changes: - v0.29.0 - * Make the TenaciousProxyStorage retry when a single object add fails - * Move all proxy storages in swh/storage/proxies/ - * Deprecate the "local" storage cls in favor of "postgresql" - * cassandra: Add tests checking directory_add and snapshot_add are atomic. - * Add endpoint directory_get_entries, to quickly list a directory's entries - * content_get: Add support for queries by sha1_git
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 11 May 2021 13:12:42 +0000

swh-storage (0.28.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.28.0 - (tagged by Valentin Lorentz on 2021-05-06 15:52:03 +0200)
  * Upstream changes: - v0.28.0 - * Normalize all Storage.xxx_add() methods to return a summary - * cassandra: Add 'check_missing' option, to allow updating objects - * cassandra: Add a test of a 'complex' migration, with a PK update - * Add a new TenaciousProxyStorage - * Make postgresql's origin_add not raise an error in case of conflict - * Stop storing authority/fetcher metadata. - * tenacious: Document potential issues about objects being dropped - * Use swh.core 0.14
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 06 May 2021 14:06:51 +0000
swh-storage (0.27.4-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.27.4 - (tagged by Antoine Lambert on 2021-04-29 14:38:49 +0200)
  * Upstream changes: - version 0.27.4
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 29 Apr 2021 13:04:46 +0000

swh-storage (0.27.3-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.27.3 - (tagged by Antoine Lambert on 2021-04-09 14:59:36 +0200)
  * Upstream changes: - version 0.27.3
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 09 Apr 2021 13:06:58 +0000

swh-storage (0.27.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.27.2 - (tagged by David Douard on 2021-04-07 15:06:41 +0200)
  * Upstream changes: - v0.27.2
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 08 Apr 2021 08:05:43 +0000

swh-storage (0.27.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.27.1 - (tagged by Valentin Lorentz on 2021-03-30 17:47:03 +0200)
  * Upstream changes: - v0.27.1 - * buffer: Add support for 'extid'
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 30 Mar 2021 15:59:01 +0000

swh-storage (0.27.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.27.0 - (tagged by Valentin Lorentz on 2021-03-29 14:33:24 +0200)
  * Upstream changes: - v0.27.0 - * origin_visit_status_add: Fix inconsistent/incorrect errors when type is None and visit is missing. - * extid: remove unicity on (extid_type, extid) and (target_type, target)
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 29 Mar 2021 12:44:14 +0000

swh-storage (0.26.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.26.0 - (tagged by Nicolas Dandrimont on 2021-03-22 14:44:35 +0100)
  * Upstream changes: - Release swh.storage v0.26.0 - Move raw_extrinsic_metadata deduplication to use a new id column.
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 22 Mar 2021 21:53:39 +0000

swh-storage (0.25.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.25.0 - (tagged by Antoine Lambert on 2021-03-18 13:55:10 +0100)
  * Upstream changes: - version 0.25.0
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 18 Mar 2021 13:02:02 +0000

swh-storage (0.24.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.24.1 - (tagged by Valentin Lorentz on 2021-03-04 23:32:36 +0100)
  * Upstream changes: - v0.24.1 - * tests: Drop hypothesis < 6 requirement - * Remove the remaining references to the deprecated SWHID class - * postgresql: Ensure a minimum limit for the snapshot branches query
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 04 Mar 2021 22:39:03 +0000

swh-storage (0.24.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.24.0 - (tagged by Valentin Lorentz on 2021-03-02 10:00:23 +0100)
  * Upstream changes: - v0.24.0 - * storage_tests: recompute ids when evolving RawExtrinsicMetadata objects. - * RawExtrinsicMetadata: update to use the API in swh-model 1.0.0
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 02 Mar 2021 09:11:15 +0000

swh-storage (0.23.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.23.2 - (tagged by Antoine Lambert on 2021-02-19 11:47:03 +0100)
  * Upstream changes: - version 0.23.2
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 19 Feb 2021 10:58:50 +0000

swh-storage (0.23.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.23.1 - (tagged by Antoine R. Dumont (@ardumont) on 2021-02-16 17:19:00 +0100)
  * Upstream changes: - v0.23.1 - Switch anonymized replayer test to use pytest parametrization
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 16 Feb 2021 16:28:25 +0000

swh-storage (0.23.0-1~swh2) unstable-swh; urgency=medium
  * Fix dependency issue
 -- Antoine R. Dumont (@ardumont)  Tue, 16 Feb 2021 14:34:57 +0100

swh-storage (0.23.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.23.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-02-15 15:20:21 +0100)
  * Upstream changes: - v0.23.0 - storage: Refactor OriginVisitStatus instantiation - db: Unify sql joins on origin_visit_status using "USING" - storage.postgresql: Use origin_visit_status.type value as source - test_replay: Fix hang since confluent-kafka 1.6 release - postgresql: Fix dbversion() to return the max version instead of a random one. - buffer: ensure objects are flushed in topological order - Return an accurate summary from buffer's flush() method - buffer: add support for snapshots - buffer: add type annotations for tests
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 15 Feb 2021 14:39:04 +0000

swh-storage (0.22.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.22.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-02-03 12:09:29 +0100)
  * Upstream changes: - v0.22.0 - storage: Make origin_get_latest_visit_status return OriginVisitStatus - storage: Change origin_visit_status_get_random interface to return visit_status - Write introduction to swh-storage
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 03 Feb 2021 11:15:27 +0000

swh-storage (0.21.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.21.1 - (tagged by Vincent SELLIER on 2021-01-28 14:11:26 +0100)
  * Upstream changes: - v0.21.1 - * Correctly return origin_visit_status.type value everywhere
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 28 Jan 2021 13:19:24 +0000

swh-storage (0.21.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.21.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-01-20 15:42:40 +0100)
  * Upstream changes: - v0.21.0 - db: Allow new status values not_found, failed to OriginVisitStatus
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 20 Jan 2021 14:52:20 +0000

swh-storage (0.20.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.20.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-01-20 10:24:00 +0100)
  * Upstream changes: - v0.20.0 - storage: Add persistence of the field OriginVisitStatus.type - backfiller: Add type to the origin_visit_status topic - tests: Make test_content_add_race fail for the right reason.
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 20 Jan 2021 09:29:54 +0000

swh-storage (0.19.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.19.0 - (tagged by Vincent SELLIER on 2021-01-14 11:09:17 +0100)
  * Upstream changes: - v0.19.0 - * 2021-01-12 Adapt cassandra storage to ignore the new OriginVisitStatus.type field - * 2021-01-08 Allow to use the JAVA_HOME environment for cassandra tests - * 2021-01-13 Enforce hypothesis <6 to prevent test breakage - * 2021-01-08 Make the CREATE_TABLES_QUERIES in cassandra/schema.py an explicit list - * 2020-12-18 Add a cli section in the doc - * 2020-11-24 storage.backfill: Allow cli run for origin_visit_status as well - * 2020-11-24 conftest: Reference swh.core.db.pytest_plugin
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 14 Jan 2021 10:18:31 +0000

swh-storage (0.18.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.18.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-11-23 14:46:41 +0100)
  * Upstream changes: - v0.18.0 - requirements-test.txt: Drop no longer needed pytest-postgresql requirement - backfill: Reverse flawed logic in SnapshotBranch generation - migrate_extrinsic_metadata: don't crash when deb revisions aren't referenced by any snapshot
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 23 Nov 2020 13:52:32 +0000

swh-storage (0.17.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.17.2 - (tagged by Nicolas Dandrimont on 2020-11-13 11:56:37 +0100)
  * Upstream changes: - Release swh.storage 0.17.2 - Future-proof get_journal_writer by setting the value_sanitizer argument - migrate_extrinsic_metadata improvements - backfill: only flush on every batch
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 13 Nov 2020 11:05:35 +0000

swh-storage (0.17.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.17.1 - (tagged by Antoine Lambert on 2020-11-05 13:50:35 +0100)
  * Upstream changes: - version 0.17.1
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 05 Nov 2020 12:56:53 +0000

swh-storage (0.17.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.17.0 - (tagged by Nicolas Dandrimont on 2020-11-03 18:09:53 +0100)
  * Upstream changes: - Release swh.storage v0.17.0 - Migrate all raw extrinsic metadata attributes from id to target - Add an `algos` function to resolve branch aliases - Prepare updates to make swh.journal more generic - Improve api server initialization - Various updates to the migrate_extrinsic_metadata script, notably writing most metadata on directories instead of revisions
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 03 Nov 2020 17:20:45 +0000

swh-storage (0.16.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.16.0 - (tagged by Nicolas Dandrimont on 2020-10-09 18:23:24 +0200)
  * Upstream changes: - Release swh.storage v0.16.0 - Updates to the intrinsic metadata migration script - Various improvements to the buffer storage - Update swh storage backfill to use common configuration keys
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 09 Oct 2020 16:33:11 +0000

swh-storage (0.15.3-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.15.3 - (tagged by Nicolas Dandrimont on 2020-09-24 20:14:39 +0200)
  * Upstream changes: - Release swh.storage v0.15.3 - hopefully fix the documentation build
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 24 Sep 2020 18:24:14 +0000

swh-storage (0.15.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.15.2 - (tagged by Nicolas Dandrimont on 2020-09-24 19:22:11 +0200)
  * Upstream changes: - Release swh.storage v0.15.2 - no change rebuild to clean up jenkins fsckup accumulating old files.
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 24 Sep 2020 17:28:22 +0000

swh-storage (0.15.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.15.1 - (tagged by Nicolas Dandrimont on 2020-09-24 18:34:54 +0200)
  * Upstream changes: - Release swh.storage v0.15.1 - Restore buffer proxy behavior with default arguments
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 24 Sep 2020 16:44:22 +0000

swh-storage (0.15.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.15.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-24 16:54:07 +0200)
  * Upstream changes: - v0.15.0 - Support different database flavors in the SQL scripts - Add the SQL commands used to set up the logical replication publication - Output a warning when the version of the database is different than expected - Improve code quality and doc in BufferedProxyStorage - Adapt cli declaration entrypoint to swh.core 0.3 - Add warning about skipped_content (sneaking into the 'content' topics) - graph-replayer: fix to prevent wrong warning - pre-commit: Add isort hook and reorder imports with isort - pytest_plugin: Change dbname to storage to avoid clash in tests - pytest_plugin: Use psql to load SQL files instead of connecting with psycopg2
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 24 Sep 2020 15:03:58 +0000

swh-storage (0.14.3-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.14.3 - (tagged by David Douard on 2020-09-17 16:58:59 +0200)
  * Upstream changes: - v0.14.3
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 17 Sep 2020 16:53:56 +0000

swh-storage (0.14.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.14.2 - (tagged by David Douard on 2020-09-11 15:31:22 +0200)
  * Upstream changes: - v0.14.2
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 11 Sep 2020 13:37:11 +0000

swh-storage (0.14.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.14.1 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-04 15:43:51 +0200)
  * Upstream changes: - v0.14.1 - algos.diff: Add missed revision_get conversion
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 04 Sep 2020 13:52:17 +0000

swh-storage (0.14.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.14.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-04 12:23:52 +0200)
  * Upstream changes: - v0.14.0 - Refactor revision_get storage API to return Revision objects - cassandra: Discard Content ctime field in content_get_partition
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 04 Sep 2020 10:59:54 +0000

swh-storage (0.13.3-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.13.3 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-01 14:34:57 +0200)
  * Upstream changes: - v0.13.3 - storage*: release_get(...) -> List[Optional[Release]] - Make StorageInterface a Protocol. - Add a validating storage proxy, to check ids before insertion. - Add a --check-config option for cli commands - Remove the deprecated config-path option from `swh storage rpc-serve` command - Add support for a new "check_config" config option in get_storage() - Check for db version mismatch in PgStorage.check_config() - Add a check_dbversion() method to the Db class - Fix pytest_plugin's database janitor: do not truncate the dbversion table - algos.snapshot: Add visits_and_snapshots_get_from_revision - storage/interface: Remove deprecated diff endpoints - storage_tests: Remove duplicated postgresql-specific tests. - Move postgresql-related files to swh/storage/postgresql/
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 01 Sep 2020 12:40:29 +0000

swh-storage (0.13.2-1~swh2) unstable-swh; urgency=medium
  * Add mypy-extensions to build-dependencies
 -- Nicolas Dandrimont  Fri, 21 Aug 2020 12:17:05 +0200

swh-storage (0.13.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.13.2 - (tagged by Valentin Lorentz on 2020-08-20 08:59:39 +0200)
  * Upstream changes: - v0.13.2 - * pg: Fix crash in snapshot_get when the snapshot does not exist. - * cassandra: fix signatures - * in_memory: rewrite as a backend for the cassandra storage - * remove endpoint snapshot_get_by_origin_visit. - * pg: rewrite converters to work with model objects
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 20 Aug 2020 07:18:50 +0000

swh-storage (0.13.1-1~swh3) unstable-swh; urgency=medium
  * Update dependencies
 -- Antoine R. Dumont (@ardumont)  Fri, 07 Aug 2020 21:17:01 +0000

swh-storage (0.13.1-1~swh2) unstable-swh; urgency=medium
  * Update dependencies
 -- Antoine R. Dumont (@ardumont)  Fri, 07 Aug 2020 21:02:01 +0000

swh-storage (0.13.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.13.1 - (tagged by Valentin Lorentz on 2020-08-07 18:14:32 +0200)
  * Upstream changes: - v0.13.1 - * Make snapshot_get_branches return a TypedDict containing SnapshotBranch objects.
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 07 Aug 2020 16:23:01 +0000

swh-storage (0.13.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.13.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-08-07 12:38:47 +0200)
  * Upstream changes: - v0.13.0 - storage*: Rename and type content_get(List[Sha1]) -> List[Optional[Content]] - storage*: Rename content_get_data(Sha1) -> Optional[bytes] - Simplify as Content.ctime None is popped out of a to_dict call in recent model - cassandra.storage: Use next token for pagination instead of computing it
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 07 Aug 2020 10:49:28 +0000

swh-storage (0.12.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.12.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-08-06 08:50:17 +0200)
  * Upstream changes: - v0.12.0 - Type storage endpoints - Drop content_get_range endpoint in favor of content_get_partition
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 06 Aug 2020 06:55:26 +0000

swh-storage (0.11.10-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.10 - (tagged by Antoine R. Dumont (@ardumont) on 2020-08-04 14:10:21 +0200)
  * Upstream changes: - v0.11.10 - tests: Improve coverage on directory_ls endpoints - storage*: Type content_find(...) -> List[Content] - storage*: Type {cnt,dir,rev,rel,snp}_get_random(...) -> Sha1Git
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 04 Aug 2020 12:15:21 +0000

swh-storage (0.11.9-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.9 - (tagged by Antoine R. Dumont (@ardumont) on 2020-08-03 11:55:10 +0200)
  * Upstream changes: - v0.11.9 - storage*: Drop origin-get-range in favor of origin-list - storage*: Do not allow unknown visit status in origin_visit*_get_latest - storage*: Add type annotation to origin_count - Reuse swh.core stream_results function
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 03 Aug 2020 10:02:56 +0000

swh-storage (0.11.8-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.8 - (tagged by Valentin Lorentz on 2020-07-31 14:57:09 +0200)
  * Upstream changes: - v0.11.8 - * test_replay: update for swh.journal 0.4.1. - * Add support for metadata-related object types to the backfiller and replayer. - * pg: Rewrite _origin_query to force the query planner to filter on URLs before filtering on visits. - * Make raw_extrinsic_metadata_get return PagedResult instead of Dict. - * Rename argument 'object_type' of raw_extrinsic_metadata_get to 'type'.
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 31 Jul 2020 13:17:40 +0000

swh-storage (0.11.6-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.6 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-30 16:20:48 +0200)
  * Upstream changes: - v0.11.6 - storage*: Adapt origin_list(...) -> PagedResult[Origin] - algos.snapshot: Open snapshot_id_get_from_revision - storage*: add origin_visit_status_get(...) -> PagedResult[OriginVisitStatus] - Add type annotations on get_storage. - buffer: Pass lists to backend functions, not iterables. - storage*: Simplify next-page-token computation - filter: Fix types passed to the proxied storage. - Fix upcoming type warning with swh.core > v0.1.2. - Make API endpoints take Lists instead of Iterables as arguments - storage*: use an enum to explicit the order in origin_visit_get - storage*: origin_visit_get(...) -> PagedResult[OriginVisit] - Write metadata + metadata authorities/fetchers to the journal.
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 30 Jul 2020 14:29:10 +0000

swh-storage (0.11.5-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.5 - (tagged by Valentin Lorentz on 2020-07-28 09:55:34 +0200)
  * Upstream changes: - v0.11.5 - in_memory: fix tie-breaking when two visits have the same date.
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 28 Jul 2020 08:10:21 +0000

swh-storage (0.11.4-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.4 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-27 16:08:42 +0200)
  * Upstream changes: - v0.11.4 - Rename object_metadata to raw_extrinsic_metadata - metadata_{authority,fetcher}_add: Fix crash when the iterable argument is empty - storage*: origin_visit_get_by -> Optional[OriginVisit] - storage*: origin_visit_find_by_date -> Optional[OriginVisit] - storage*: type origin_visit_get_latest endpoint result - algos.origin: Simplify origin_get_latest_visit_status function
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 27 Jul 2020 14:16:18 +0000

swh-storage (0.11.3-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.3 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-27 08:01:03 +0200)
  * Upstream changes: - v0.11.3 - storage*: origin_get(Iterable[str]) -> Iterable[Optional[Origin]] - storage*.origin_visit_get_random: Read model objects
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 27 Jul 2020 06:08:55 +0000

swh-storage (0.11.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.2 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-23 12:09:51 +0200)
  * Upstream changes: - v0.11.2 - pgstorage: Drop unnecessary indirection from reading origin_visit - pytest-plugin: Make sample_data return data model objects - tests: Use only model objects for testing - Drop validate storage proxy
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 23 Jul 2020 10:18:15 +0000

swh-storage (0.11.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.1 - (tagged by Valentin Lorentz on 2020-07-20 13:01:20 +0200)
  * Upstream changes: - v0.11.1 - * Use model objects in tests - * Rename 'deposit' authority type to 'deposit_client'.
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 20 Jul 2020 11:14:39 +0000

swh-storage (0.11.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.11.0 - (tagged by Valentin Lorentz on 2020-07-20 11:01:10 +0200)
  * Upstream changes: - v0.11.0 - * Make metadata-related endpoints consistent with other endpoints by using Iterables of swh-model objects instead of a dict. - * Update tests to use model objects
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 20 Jul 2020 09:12:25 +0000

swh-storage (0.10.6-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.10.6 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-16 15:31:19 +0200)
  * Upstream changes: - v0.10.6 - pytest_plugin: Ensure fixture instantiates correctly
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 16 Jul 2020 13:36:34 +0000

swh-storage (0.10.5-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.10.5 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-16 14:24:50 +0200)
  * Upstream changes: - v0.10.5 - pytest_plugin: Do not expose the validate proxy storage - pytest-plugin: Expose a sample_data_model fixture - tests: Start using model objects and drop validate proxy when possible
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 16 Jul 2020 12:34:44 +0000

swh-storage (0.10.4-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.10.4 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-16 11:25:25 +0200)
  * Upstream changes: - v0.10.4 - pytest_plugin: Avoid fixture client to declare optional dependency - Allow cassandra binary path to be configured through env variable - 158: Make schema and migration converge so the migration works
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 16 Jul 2020 09:37:24 +0000

swh-storage (0.10.3-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.10.3 - (tagged by Antoine Lambert on 2020-07-10 16:26:27 +0200)
  * Upstream changes: - version 0.10.3
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 10 Jul 2020 14:40:28 +0000

swh-storage (0.10.2-1~swh2) unstable-swh; urgency=medium
  * Fix debian rules to avoid double pytest-plugin loading clash
 -- Antoine R. Dumont (@ardumont)  Fri, 10 Jul 2020 09:21:14 +0200

swh-storage (0.10.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.10.2 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-10 08:30:37 +0200)
  * Upstream changes: - v0.10.2 - tests: Do no expose the pytest-plugin through setuptools entry - Convert ImmutableDict to dict before passing it to json.dumps - docs: Rework dia -> pdf pipeline for inkscape 1.0
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 10 Jul 2020 06:52:42 +0000

swh-storage (0.10.1-1~swh2) unstable-swh; urgency=medium
  * Update runtime dependencies
 -- Antoine R. Dumont (@ardumont)  Wed, 08 Jul 2020 14:56:01 +0200

swh-storage (0.10.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.10.1 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-08 14:32:52 +0200)
  * Upstream changes: - v0.10.1 - extract-pytest-fixture Move shareable fixtures out of conftest into a dedicated pytest plugin - Migrate from vcversioner to setuptools-scm
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 08 Jul 2020 12:39:15 +0000

swh-storage (0.10.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.10.0 - (tagged by David Douard on 2020-07-08 09:20:49 +0200)
  * Upstream changes: - v0.10.0
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 08 Jul 2020 10:11:09 +0000

swh-storage (0.9.3-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.9.3 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-06 09:55:56 +0200)
  * Upstream changes: - v0.9.3 - storage: Send metrics from the origin_add endpoint
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 06 Jul 2020 08:06:13 +0000

swh-storage (0.9.2-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.9.2 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-03 18:48:39 +0200)
  * Upstream changes: - v0.9.2 - pg-storage: Add missing cur parameter passing
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 03 Jul 2020 16:54:13 +0000

swh-storage (0.9.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.9.1 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-03 16:50:45 +0200)
  * Upstream changes: - v0.9.1 - storage.db: Drop db.origin_visit_upsert behavior
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 03 Jul 2020 15:00:32 +0000

swh-storage (0.9.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.9.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-01 09:53:34 +0200)
  * Upstream changes: - v0.9.0 - storage*: Drop intermediary conversion step into OriginVisit - pg: use 'on conflict do nothing' strategy for duplicate metadata rows. - Make the code location of metadata endpoints consistent across backends. - Add content_metadata_{add,get}. - Add context columns to object_metadata table and object_metadata_{add,get}. - Generalize origin_metadata to allow support for other object types in the future. - Work around the segmentation faults caused by pytest-coverage + multiprocessing.
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 01 Jul 2020 08:02:08 +0000

swh-storage (0.8.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.8.1 - (tagged by David Douard on 2020-06-30 10:08:21 +0200)
  * Upstream changes: - v0.8.1
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 30 Jun 2020 08:36:45 +0000

swh-storage (0.8.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.8.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-29 09:33:12 +0200)
  * Upstream changes: - v0.8.0 - Iterate over paginated visits in batches to retrieve latest visit/snapshot - storage*: Open order parameter to origin-visit-get endpoint - tests/replayer/storage*: Drop obsolete origin visit fields - Relax checks on journal writes regarding origin-visit* - replayer: Fix isoformat datetime string for origin-visit - Deprecate the origin_add_one() endpoint - test_storage: Add missing tests on origin_visit_get method
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 29 Jun 2020 07:44:00 +0000

swh-storage (0.7.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.7.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-22 15:42:25 +0200)
  * Upstream changes: - v0.7.0 - test_origin: Rename appropriately tests - algos: Improve origin visit get latest visit status algorithm - test_snapshot: Do not use origin_visit_add returned result - algos.snapshot: Fix edge case when snapshot is not resolved - Ensure ids are correct in tests' storage_data - Fix tests' storage_data revisions - SQL: replace the hash(url) index by a unique btree(url) on the origin table - Make sure the pagination in swh_snapshot_get_by_id uses the proper indexes
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 22 Jun 2020 14:09:33 +0000

swh-storage (0.6.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.6.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-19 11:29:42 +0200)
  * Upstream changes: - v0.6.0 - Move deprecated endpoint snapshot_get_latest from api endpoint to algos - algos.origin: Open origin-get-latest-visit-status function - storage*: Allow origin-visit-get-latest to filter on type - test_origin: Align storage initialization within tests
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 19 Jun 2020 12:45:32 +0000

swh-storage (0.5.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.5.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-17 16:03:15 +0200)
  * Upstream changes: - v0.5.0 - test_storage: Fix flakiness in round to milliseconds test util method - storage*: Add origin-visit-status-get-latest endpoint - Fix/update the backfiller - validate: accept model objects as well as dicts on all add endpoints - cql: Fix blackified strings - storage: Add missing cur parameter - Fix db_to_author() converter to return None is all fields are None
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 17 Jun 2020 14:19:37 +0000

swh-storage (0.4.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.4.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-16 09:50:25 +0200)
  * Upstream changes: - v0.4.0 - ardumont/master storage*: Drop leftover code - storage*: Drop origin_visit_upsert endpoint - storage*: Remove origin-visit-update endpoint - replay: Replay origin-visit and origin-visit-status - in_memory: Make origin-visit-status-add respect "on conflict ignore" policy - test_storage: Add journal behavior coverage for origin-visit-*add - Start migrating the validate proxy toward using BaseModel objects - storage*: Do not write twice origin-visit-status in journal
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 16 Jun 2020 07:58:23 +0000

swh-storage (0.3.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.3.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-12 09:08:23 +0200)
  * Upstream changes: - v0.3.0 - origin-visit-add storage*: Align origin-visit-add to take iterable of OriginVisit objects
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 12 Jun 2020 07:22:03 +0000

swh-storage (0.2.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.2.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-10 11:51:30 +0200)
  * Upstream changes: - v0.2.0 - origin-visit-upsert: Write visit status objects to the journal - origin-visit-update: Write visit status objects to the journal - origin-visit-add: Write visit status to the journal - Add pagination to origin_metadata_get. - Deduplicate origin-metadata when they have the same authority + discovery_date + fetcher. - Open `origin_visit_status_add` endpoint to add origin visit statuses - Add a replayer test for anonymized journal topics - Small refactoring of the InMemoryStorage to make it more consistent
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 10 Jun 2020 10:02:45 +0000

swh-storage (0.1.1-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.1.1 - (tagged by Nicolas Dandrimont on 2020-06-04 16:49:22 +0200)
  * Upstream changes: - Release swh.storage v0.1.1 - Work around tests hanging during Debian build
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 04 Jun 2020 14:56:54 +0000

swh-storage (0.1.0-2~swh1) unstable-swh; urgency=medium
  * Update dependencies.
 -- David Douard  Thu, 04 Jun 2020 13:40:52 +0200

swh-storage (0.1.0-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.1.0 - (tagged by David Douard on 2020-06-04 12:08:46 +0200)
  * Upstream changes: - v0.1.0
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 04 Jun 2020 10:28:43 +0000

swh-storage (0.0.193-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.193 - (tagged by Antoine R. Dumont (@ardumont) on 2020-05-28 14:28:54 +0200)
  * Upstream changes: - v0.0.193 - pg: Write origin visit updates & status, read from origin_visit_status - Make content.blake2s256 not null. - Remove unused SQL functions. - README: Update necessary dependencies for test purposes - Add a pre-commit hook to check there are version bumps in sql/upgrades/*.sql - Add missing dbversion bump in 150.sql. - Add artifact metadata to the extrinsic metadata storage specification. - Add not null constraints to metadata_authority/origin_metadata - Realign schema with latest 149 migration script
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 28 May 2020 12:37:58 +0000

swh-storage (0.0.192-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.192 - (tagged by Valentin Lorentz on 2020-05-19 18:42:00 +0200)
  * Upstream changes: - v0.0.192 - * origin_metadata_add: Reject non-bytes types for 'metadata'.
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 19 May 2020 16:54:00 +0000

swh-storage (0.0.191-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.191 - (tagged by Valentin Lorentz on 2020-05-19 13:43:35 +0200)
  * Upstream changes: - v0.0.191 - * Implement the new extrinsic metadata specification/vocabulary.
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 19 May 2020 11:52:00 +0000

swh-storage (0.0.190-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.190 - (tagged by Antoine R. Dumont (@ardumont) on 2020-05-18 14:10:39 +0200)
  * Upstream changes: - v0.0.190 - storage: metadata_provider: Ensure idempotency when creating provider - journal: add a skipped_content topic dedicated to SkippedContent objects - Add missing return annotations on JournalWriter methods - Improve a bit the exception message of JournalWriter.content_update - Refactor the JournalWriter class to normalize its methods - tests: fix test_replay; do only use aware datetime objects - test_kafka_writer: Add missing object type skipped_content
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 18 May 2020 12:18:09 +0000

swh-storage (0.0.189-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.189 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-30 14:50:54 +0200)
  * Upstream changes: - v0.0.189 - pg: Write both origin visit updates & status, read from origin_visit - pg-storage: Add new created state - setup.py: add documentation link - metadata spec: Fix title hierarchy - tests: Use aware datetimes instead of naive ones. - cassandra: Adapt internal implementations to use origin visit status - in_memory: Adapt internal implementations to use origin visit status
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 30 Apr 2020 12:58:57 +0000

swh-storage (0.0.188-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.188 - (tagged by David Douard on 2020-04-28 13:44:20 +0200)
  * Upstream changes: - v0.0.188
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 28 Apr 2020 11:52:08 +0000

swh-storage (0.0.187-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.187 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-14 18:13:08 +0200)
  * Upstream changes: - v0.0.187 - storage.interface: Actually define the remote flush operation
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 14 Apr 2020 16:23:41 +0000

swh-storage (0.0.186-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.186 - (tagged by Nicolas Dandrimont on 2020-04-14 17:09:22 +0200)
  * Upstream changes: - Release swh.storage v0.0.186 - Drop backwards-compatibility code with swh.journal < 0.0.30
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 14 Apr 2020 15:20:57 +0000

swh-storage (0.0.185-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.185 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-14 14:15:32 +0200)
  * Upstream changes: - v0.0.185 - storage.filter: Remove internal state - test: update storage tests to (future) swh.journal 0.0.30
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 14 Apr 2020 12:22:06 +0000

swh-storage (0.0.184-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.184 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-10 16:07:32 +0200)
  * Upstream changes: - v0.0.184 - storage*: Add flush endpoints to storage implems (backend, proxy) - test_retry: Add missing skipped_content_add tests
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 10 Apr 2020 14:14:20 +0000

swh-storage (0.0.183-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.183 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-09 12:35:53 +0200)
  * Upstream changes: - v0.0.183 - proxy storage: Add a clear_buffers endpoint - buffer proxy storage: Filter out duplicate objects prior to storage write - storage: Prevent erroneous HashCollisions by using the same ctime for all rows. - Enable black - origin_visit_update: ensure it raises a StorageArgumentException - Adapt cassandra backend to validating model types - tests: many refactoring improvements - tests: Shut down cassandra connection before closing the fixture down - Add more type annotations
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 09 Apr 2020 10:46:29 +0000

swh-storage (0.0.182-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.182 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-27 07:02:13 +0100)
  * Upstream changes: - v0.0.182 - storage*: Update origin_visit_update to make status parameter mandatory - test: Adapt origin validation test according to latest model changes - Respec discovery_date as a Python datetime instead of an ISO string. - origin_visit_add: Add missing db/cur argument to call to origin_get.
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 27 Mar 2020 06:13:17 +0000

swh-storage (0.0.181-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.181 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-25 09:50:49 +0100)
  * Upstream changes: - v0.0.181 - storage*: Hex encode content hashes in HashCollision exception - Add format of discovery_date in the metadata specification. - Store the value of token(partition_key) in skipped_content_by_* table, instead of three hashes. - Store the value of token(partition_key) in content_by_* table, instead of three hashes.
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 25 Mar 2020 09:03:43 +0000

swh-storage (0.0.180-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.180 - (tagged by Nicolas Dandrimont on 2020-03-18 18:24:41 +0100)
  * Upstream changes: - Release swh.storage v0.0.180 - Stop counting origin additions multiple times in statsd
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 18 Mar 2020 17:45:36 +0000

swh-storage (0.0.179-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.179 - (tagged by Nicolas Dandrimont on 2020-03-18 16:05:13 +0100)
  * Upstream changes: - Release swh.storage v0.0.179. - fix requirements-swh.txt to use proper version restriction - reduce the transaction load for content writes and reads
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 18 Mar 2020 15:50:50 +0000

swh-storage (0.0.178-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.178 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-16 12:51:28 +0100)
  * Upstream changes: - v0.0.178 - origin_visit_add: Adapt endpoint signature to return OriginVisit - origin_visit_upsert: Use OriginVisit object as input - storage/writer: refactor JournalWriter.content_add to send model objects
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 16 Mar 2020 11:59:18 +0000

swh-storage (0.0.177-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.177 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-10 11:37:33 +0100)
  * Upstream changes: - v0.0.177 - storage: Identify and provide the collision hashes in exception - Guarantee the order of results for revision_get and release_get - tests: Improve test speed - sql: do not attempt to create the plpgsql lang if already exists - Update requirement on swh.core for RPCClient method overrides
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 10 Mar 2020 10:48:11 +0000

swh-storage (0.0.176-1~swh2) unstable-swh; urgency=medium
  * Update build dependencies
 -- Antoine R. Dumont (@ardumont)  Mon, 02 Mar 2020 14:36:00 +0100

swh-storage (0.0.176-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.176 - (tagged by Valentin Lorentz on 2020-02-28 14:44:10 +0100)
  * Upstream changes: - v0.0.176 - * Accept cassandra-driver >= 3.22. - * Make the RPC client and objstorage helper fetch Content.data from lazy contents. - * Move ctime out of the validation proxy.
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 28 Feb 2020 15:21:27 +0000

swh-storage (0.0.175-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.175 - (tagged by Antoine Lambert on 2020-02-20 13:51:40 +0100)
  * Upstream changes: - version 0.0.175
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 20 Feb 2020 13:18:34 +0000

swh-storage (0.0.174-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.174 - (tagged by Valentin Lorentz on 2020-02-19 14:18:59 +0100)
  * Upstream changes: - v0.0.174 - * Fix inconsistent behavior of skipped_content_missing across backends. - * Fix FilteringProxy to not drop skipped-contents with a missing sha1_git. - * Make storage proxies use swh-model objects instead of dicts. - * Add support for (de)serializing swh-model in RPC calls.
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 19 Feb 2020 15:00:32 +0000

swh-storage (0.0.172-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.172 - (tagged by Valentin Lorentz on 2020-02-12 14:00:04 +0100)
  * Upstream changes: - v0.0.172 - * Unify exception raised by invalid input to API endpoints. - * Add a validation proxy for _add() methods. This proxy is *required* in front of all backends whose _add() methods may be called or they'll crash at runtime. - * Fix RecursionError when storage proxies are deepcopied or unpickled. - * storages: Refactor objstorage operations with a dedicated collaborator - * storages: Refactor journal operations with a dedicated writer collab
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 12 Feb 2020 13:13:47 +0000

swh-storage (0.0.171-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.171 - (tagged by Valentin Lorentz on 2020-02-06 14:46:05 +0100)
  * Upstream changes: - v0.0.171 - * Split 'content_add' method into 'content_add' and 'skipped_content_add'. - * Increase Cassandra requests timeout to 1 second.
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 06 Feb 2020 14:07:37 +0000

swh-storage (0.0.170-1~swh3) unstable-swh; urgency=medium
  * Update build dependencies
 -- Antoine R. Dumont (@ardumont)  Mon, 03 Feb 2020 17:30:38 +0100

swh-storage (0.0.170-1~swh2) unstable-swh; urgency=medium
  * Update build dependencies
 -- Antoine R. Dumont (@ardumont)  Mon, 03 Feb 2020 16:00:39 +0100

swh-storage (0.0.170-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.170 - (tagged by Antoine R. Dumont (@ardumont) on 2020-02-03 14:11:53 +0100)
  * Upstream changes: - v0.0.170 - swh.storage.cassandra: Add Cassandra backend implementation
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 03 Feb 2020 13:23:48 +0000

swh-storage (0.0.169-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.169 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-30 13:40:00 +0100)
  * Upstream changes: - v0.0.169 - retry: Add retry behavior on pipeline storage with flushing failure
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 30 Jan 2020 13:26:23 +0000

swh-storage (0.0.168-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.168 - (tagged by Valentin Lorentz on 2020-01-30 11:19:31 +0100)
  * Upstream changes: - v0.0.168 - * Implement content_update for the in-mem storage. - * Remove cur/db arguments from the in-mem storage. - * Move Storage documentation and endpoint paths to a new StorageInterface class - * Rename in_memory.Storage to in_memory.InMemoryStorage. - * CONTRIBUTORS: add Daniele Serafini
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 30 Jan 2020 10:25:30 +0000

swh-storage (0.0.167-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.167 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-24 14:55:57 +0100)
  * Upstream changes: - v0.0.167 - pgstorage: Empty temp tables instead of dropping them
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 24 Jan 2020 14:01:57 +0000

swh-storage (0.0.166-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.166 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-24 09:51:52 +0100)
  * Upstream changes: - v0.0.166 - storage: Add endpoint to get missing content (by sha1_git) and missing snapshot - Remove redundant config checks in load_and_check_config - Remove 'id' and 'object_id' from the output of object_find_by_sha1_git - Make origin_visit_get_random return None instead of {} if there are no results - docs: Fix sphinx warnings
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 24 Jan 2020 09:00:12 +0000

swh-storage (0.0.165-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.165 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-17 14:04:53 +0100)
  * Upstream changes: - v0.0.165 - storage.retry: Fix objects loading when using generator parameters
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 17 Jan 2020 13:09:39 +0000

swh-storage (0.0.164-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.164 - (tagged by Antoine Lambert on 2020-01-16 17:54:40 +0100)
  * Upstream changes: - version 0.0.164
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 16 Jan 2020 17:05:02 +0000

swh-storage (0.0.163-1~swh2) unstable-swh; urgency=medium
  * Fix test dependency
 -- Antoine R. Dumont (@ardumont)  Tue, 14 Jan 2020 17:26:08 +0100

swh-storage (0.0.163-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.163 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-14 17:12:03 +0100)
  * Upstream changes: - v0.0.163 - retry: Improve proxy storage for add endpoints - in_memory: Make directory_get_random return None when storage empty - storage: Change content_get_metadata api to return Dict[bytes, List[Dict]] - storage: Add content_get_partition endpoint to replace content_get_range - storage: Add endpoint origin_list to replace origin_get_range
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 14 Jan 2020 16:17:45 +0000

swh-storage (0.0.162-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.162 - (tagged by Valentin Lorentz on 2019-12-16 14:37:44 +0100)
  * Upstream changes: - v0.0.162 - Add {content,directory,revision,release,snapshot}_get_random.
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 16 Dec 2019 13:41:39 +0000

swh-storage (0.0.161-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.161 - (tagged by Antoine R. Dumont (@ardumont) on 2019-12-10 15:03:28 +0100)
  * Upstream changes: - v0.0.161 - storage: Add endpoint to randomly pick an origin
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 10 Dec 2019 14:08:15 +0000

swh-storage (0.0.160-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.160 - (tagged by Antoine R. Dumont (@ardumont) on 2019-12-06 11:15:48 +0100)
  * Upstream changes: - v0.0.160 - storage.buffer: Buffer release objects as well - storage.tests: Unify tests sample data - Implement origin lookup by sha1
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 06 Dec 2019 10:23:44 +0000

swh-storage (0.0.159-1~swh2) unstable-swh; urgency=medium
  * Force fast hypothesis profile when running tests
 -- Antoine R. Dumont (@ardumont)  Tue, 26 Nov 2019 17:08:16 +0100

swh-storage (0.0.159-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.159 - (tagged by Antoine R. Dumont (@ardumont) on 2019-11-22 11:05:41 +0100)
  * Upstream changes: - v0.0.159 - Add 'pipeline' storage "class" for more readable configurations. - tests: Improve tests environments configuration - Fix a few typos reported by codespell - Add a pre-commit-hooks.yaml config file - Remove utils/(dump|fix)_revisions scripts
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 22 Nov 2019 10:10:31 +0000

swh-storage (0.0.158-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.158 - (tagged by Antoine R. Dumont (@ardumont) on 2019-11-14 13:33:00 +0100)
  * Upstream changes: - v0.0.158 - Drop schemata module (migrated back to swh-lister)
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 14 Nov 2019 12:37:18 +0000

swh-storage (0.0.157-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.157 - (tagged by Nicolas Dandrimont on 2019-11-13 13:22:39 +0100)
  * Upstream changes: - Release swh.storage 0.0.157 - schemata.distribution: Fix bogus NotImplementedError on Area.index_uris
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 13 Nov 2019 12:27:07 +0000

swh-storage (0.0.156-1~swh2) unstable-swh; urgency=medium
  * Add version constraint on psycopg2
 -- Nicolas Dandrimont  Wed, 30 Oct 2019 18:21:34 +0100

swh-storage (0.0.156-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.156 - (tagged by Valentin Lorentz on 2019-10-30 15:12:10 +0100)
  * Upstream changes: - v0.0.156 - * Stop supporting origin ids in API (except in origin_get_range). - * Make visit['origin'] a string everywhere (instead of a dict).
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 30 Oct 2019 14:29:28 +0000

swh-storage (0.0.155-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.155 - (tagged by David Douard on 2019-10-30 12:14:14 +0100)
  * Upstream changes: - v0.0.155
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 30 Oct 2019 11:18:37 +0000

swh-storage (0.0.154-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.154 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-17 13:47:57 +0200)
  * Upstream changes: - v0.0.154 - Fix tests in debian build
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 17 Oct 2019 11:52:46 +0000

swh-storage (0.0.153-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.153 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-17 13:21:00 +0200)
  * Upstream changes: - v0.0.153 - Deploy new test fixture
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 17 Oct 2019 11:26:12 +0000

swh-storage (0.0.152-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.152 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-08 16:55:43 +0200)
  * Upstream changes: - v0.0.152 - swh.storage.buffer: Add buffering proxy storage implementation - swh.storage.filter: Add filtering storage implementation - swh.storage.tests: Improve db transaction handling - swh.storage.tests: Add more tests - swh.storage.storage: introduce a db() context manager
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 08 Oct 2019 15:03:16 +0000

swh-storage (0.0.151-1~swh2) unstable-swh; urgency=medium
  * Add missing build-dependency on python3-swh.journal
 -- Nicolas Dandrimont  Tue, 01 Oct 2019 18:28:19 +0200

swh-storage (0.0.151-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.151 - (tagged by Stefano Zacchiroli on 2019-10-01 10:04:36 +0200)
  * Upstream changes: - v0.0.151 - * tox: anticipate mypy run to just after flake8 - * mypy.ini: be less flaky w.r.t. the packages installed in tox - * storage.py: ignore typing of optional get_journal_writer import - * mypy: ignore swh.journal to work-around dependency loop - * init.py: switch to documented way of extending path - * typing: minimal changes to make a no-op mypy run pass - * Write objects to the journal only if they don't exist yet. - * Use origin URLs for skipped_content['origin'] instead of origin ids. - * Properly mock get_journal_writer for the remote-pg-storage tests. - * journal_writer: use journal writer from swh.journal - * fix typos in docstrings and sample paths - * storage.origin_visit_add: Remove deprecated 'ts' parameter - * click "required" param wants bool, not int
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 01 Oct 2019 08:09:53 +0000

swh-storage (0.0.150-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.150 - (tagged by Antoine R. Dumont (@ardumont) on 2019-09-04 16:09:59 +0200)
  * Upstream changes: - v0.0.150 - tests/test_storage: Remove failing assertion after swh-model update - tests/test_storage: Fix tests execution with psycopg2 < 2.8
 -- Software Heritage autobuilder (on jenkins-debian1)  Wed, 04 Sep 2019 14:16:09 +0000

swh-storage (0.0.149-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.149 - (tagged by Antoine R. Dumont (@ardumont) on 2019-09-03 14:00:57 +0200)
  * Upstream changes: - v0.0.149 - Add support for origin_url in origin_metadata_* - Make origin_add/origin_visit_update validate their input - Make snapshot_add validate its input - Make revision_add and release_add validate their input - Make directory_add validate its input - Make content_add validate its input using swh-model
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 03 Sep 2019 12:27:51 +0000

swh-storage (0.0.148-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.148 - (tagged by Valentin Lorentz on 2019-08-23 10:33:02 +0200)
  * Upstream changes: - v0.0.148 - Tests improvements: - * Remove 'next_branch' from test input data. - * Fix off-by-one error when using origin_visit_upsert on with an unknown visit id. - * Use explicit arguments for origin_visit_add. - * Remove test_content_missing__marked_missing, it makes no sense. - Drop person ids: - * Stop leaking person ids. - * Remove person_get endpoint. - Logging fixes: - * Enforce log level for the werkzeug logger. - * Eliminate warnings about %TYPE. - * api: use RPCServerApp and RPCClient instead of deprecated classes - Other: - * Add support for skipped content in in-memory storage
 -- Software Heritage autobuilder (on jenkins-debian1)  Fri, 23 Aug 2019 08:48:21 +0000

swh-storage (0.0.147-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.147 - (tagged by Valentin Lorentz on 2019-07-18 12:11:37 +0200)
  * Upstream changes: - Make origin_get ignore the `type` argument
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 18 Jul 2019 10:16:16 +0000

swh-storage (0.0.146-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.146 - (tagged by Valentin Lorentz on 2019-07-18 10:46:21 +0200)
  * Upstream changes: - Progress toward getting rid of origin ids - * Less dependency on origin ids in the in-mem storage - * add the SWH_STORAGE_IN_MEMORY_ENABLE_ORIGIN_IDS env var - * Remove legacy behavior of snapshot_add
 -- Software Heritage autobuilder (on jenkins-debian1)  Thu, 18 Jul 2019 08:52:09 +0000

swh-storage (0.0.145-1~swh3) unstable-swh; urgency=medium
  * Properly rebuild for unstable-swh
 -- Nicolas Dandrimont  Thu, 11 Jul 2019 14:03:30 +0200

swh-storage (0.0.145-1~swh2) buster-swh; urgency=medium
  * Remove useless swh.scheduler dependency
 -- Nicolas Dandrimont  Thu, 11 Jul 2019 13:53:45 +0200

swh-storage (0.0.145-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.145 - (tagged by Valentin Lorentz on 2019-07-02 12:00:53 +0200)
  * Upstream changes: - v0.0.145 - Add an 'origin_visit_find_by_date' endpoint. - Add support for origin urls in all endpoints
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 02 Jul 2019 10:19:19 +0000

swh-storage (0.0.143-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.143 - (tagged by Valentin Lorentz on 2019-06-05 13:18:14 +0200)
  * Upstream changes: - Add test for snapshot/release counters.
 -- Software Heritage autobuilder (on jenkins-debian1)  Mon, 01 Jul 2019 12:38:40 +0000

swh-storage (0.0.142-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.142 - (tagged by Valentin Lorentz on 2019-06-11 15:24:49 +0200)
  * Upstream changes: - Mark network tests, so they can be disabled.
 -- Software Heritage autobuilder (on jenkins-debian1)  Tue, 11 Jun 2019 13:44:19 +0000

swh-storage (0.0.141-1~swh1) unstable-swh; urgency=medium
  * New upstream release 0.0.141 - (tagged by Valentin Lorentz on 2019-06-06 17:05:03 +0200)
  * Upstream changes: - Add support for using URL instead of ID in snapshot_get_latest.
-- Software Heritage autobuilder (on jenkins-debian1) Tue, 11 Jun 2019 10:36:32 +0000 swh-storage (0.0.140-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.140 - (tagged by mihir(faux__) on 2019-03-24 21:47:31 +0530) * Upstream changes: - Changes the output of content_find method to a list in case of hash collisions and makes the sql query on python side and added test duplicate input, colliding sha256 and colliding blake2s256 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 May 2019 12:09:04 +0000 swh-storage (0.0.139-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.139 - (tagged by Nicolas Dandrimont on 2019-04-18 17:57:57 +0200) * Upstream changes: - Release swh.storage v0.0.139 - Backwards- compatibility improvements for snapshot_add - Better transactionality in revision_add/release_add - Fix backwards metric names - Handle shallow histories properly -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Apr 2019 16:08:28 +0000 swh-storage (0.0.138-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.138 - (tagged by Valentin Lorentz on 2019-04-09 16:40:49 +0200) * Upstream changes: - Use the db_transaction decorator on all _add() methods. - So they gracefully release the connection on error instead - of relying on reference-counting to call the Db's `__del__` - (which does not happen in Hypothesis tests) because a ref - to it is kept via the traceback object. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 09 Apr 2019 16:50:48 +0000 swh-storage (0.0.137-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.137 - (tagged by Valentin Lorentz on 2019-04-08 15:40:24 +0200) * Upstream changes: - Make test_origin_get_range run faster. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 08 Apr 2019 13:56:16 +0000 swh-storage (0.0.135-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.135 - (tagged by Valentin Lorentz on 2019-04-04 20:42:32 +0200) * Upstream changes: - Make content_add_metadata require a ctime argument. - This makes Python set the ctime instead of pgsql. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 05 Apr 2019 14:43:28 +0000 swh-storage (0.0.134-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.134 - (tagged by Valentin Lorentz on 2019-04-03 13:38:58 +0200) * Upstream changes: - Don't leak origin ids to the journal. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Apr 2019 10:16:09 +0000 swh-storage (0.0.132-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.132 - (tagged by Valentin Lorentz on 2019-04-01 11:50:30 +0200) * Upstream changes: - Use sha1 instead of bigint as FK from origin_visit to snapshot (part 1: add new column) -- Software Heritage autobuilder (on jenkins-debian1) Mon, 01 Apr 2019 13:30:48 +0000 swh-storage (0.0.131-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.131 - (tagged by Nicolas Dandrimont on 2019-03-28 17:24:44 +0100) * Upstream changes: - Release swh.storage v0.0.131 - Add statsd metrics to storage RPC backend - Clean up snapshot_add/origin_visit_update - Uniformize RPC backend to use POSTs everywhere -- Software Heritage autobuilder (on jenkins-debian1) Thu, 28 Mar 2019 16:34:07 +0000 swh-storage (0.0.130-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.130 - (tagged by Valentin Lorentz on 2019-02-26 10:50:44 +0100) * Upstream changes: - Add an helper function to list all origins in the storage. 
-- Software Heritage autobuilder (on jenkins-debian1) Wed, 13 Mar 2019 14:01:04 +0000 swh-storage (0.0.129-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.129 - (tagged by Valentin Lorentz on 2019-02-27 10:42:29 +0100) * Upstream changes: - Double the timeout of revision_get. - Metadata indexers often hit the limit. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 01 Mar 2019 10:11:28 +0000 swh-storage (0.0.128-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.128 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-21 14:59:22 +0100) * Upstream changes: - v0.0.128 - api.server: Fix wrong exception type - storage.cli: Fix cli entry point name to the expected name (setup.py) -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 14:07:23 +0000 swh-storage (0.0.127-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.127 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-21 13:34:19 +0100) * Upstream changes: - v0.0.127 - api.wsgi: Open wsgi entrypoint and check config at startup time - api.server: Make the api server load and check its configuration - swh.storage.cli: Migrate the api server startup in swh.storage.cli -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 12:59:48 +0000 swh-storage (0.0.126-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.126 - (tagged by Valentin Lorentz on 2019-02-21 10:18:26 +0100) * Upstream changes: - Double the timeout of snapshot_get_latest. - Metadata indexers often hit the limit. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 11:24:52 +0000 swh-storage (0.0.125-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.125 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-14 10:13:31 +0100) * Upstream changes: - v0.0.125 - api/server: Do not read configuration at each request -- Software Heritage autobuilder (on jenkins-debian1) Thu, 14 Feb 2019 16:57:01 +0000 swh-storage (0.0.124-1~swh3) unstable-swh; urgency=low * New upstream release, fixing the distribution this time -- Antoine R. Dumont (@ardumont) Thu, 14 Feb 2019 17:51:29 +0100 swh-storage (0.0.124-1~swh2) unstable; urgency=medium * New upstream release for dependency fix reasons -- Antoine R. Dumont (@ardumont) Thu, 14 Feb 2019 09:27:55 +0100 swh-storage (0.0.124-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.124 - (tagged by Antoine Lambert on 2019-02-12 14:40:53 +0100) * Upstream changes: - version 0.0.124 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 12 Feb 2019 13:46:08 +0000 swh-storage (0.0.123-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.123 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-08 15:06:49 +0100) * Upstream changes: - v0.0.123 - Make Storage.origin_get support a list of origins, like other - Storage.*_get methods. - Stop using _to_bytes functions. 
- Use the BaseDb (and friends) from swh-core -- Software Heritage autobuilder (on jenkins-debian1) Fri, 08 Feb 2019 14:14:18 +0000 swh-storage (0.0.122-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.122 - (tagged by Antoine Lambert on 2019-01-28 11:57:27 +0100) * Upstream changes: - version 0.0.122 -- Software Heritage autobuilder (on jenkins-debian1) Mon, 28 Jan 2019 11:02:45 +0000 swh-storage (0.0.121-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.121 - (tagged by Antoine Lambert on 2019-01-28 11:31:48 +0100) * Upstream changes: - version 0.0.121 -- Software Heritage autobuilder (on jenkins-debian1) Mon, 28 Jan 2019 10:36:40 +0000 swh-storage (0.0.120-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.120 - (tagged by Antoine Lambert on 2019-01-17 12:04:27 +0100) * Upstream changes: - version 0.0.120 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Jan 2019 11:12:47 +0000 swh-storage (0.0.119-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.119 - (tagged by Antoine R. Dumont (@ardumont) on 2019-01-11 11:57:13 +0100) * Upstream changes: - v0.0.119 - listener: Notify Kafka when an origin visit is updated -- Software Heritage autobuilder (on jenkins-debian1) Fri, 11 Jan 2019 11:02:07 +0000 swh-storage (0.0.118-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.118 - (tagged by Antoine Lambert on 2019-01-09 16:59:15 +0100) * Upstream changes: - version 0.0.118 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 09 Jan 2019 18:51:34 +0000 swh-storage (0.0.117-1~swh1) unstable-swh; urgency=medium * v0.0.117 * listener: Adapt decoding behavior depending on the object type -- Antoine R. Dumont (@ardumont) Thu, 20 Dec 2018 14:48:44 +0100 swh-storage (0.0.116-1~swh1) unstable-swh; urgency=medium * v0.0.116 * Update requirements to latest swh.core -- Antoine R. Dumont (@ardumont) Fri, 14 Dec 2018 15:57:04 +0100 swh-storage (0.0.115-1~swh1) unstable-swh; urgency=medium * version 0.0.115 -- Antoine Lambert Fri, 14 Dec 2018 15:47:52 +0100 swh-storage (0.0.114-1~swh1) unstable-swh; urgency=medium * version 0.0.114 -- Antoine Lambert Wed, 05 Dec 2018 10:59:49 +0100 swh-storage (0.0.113-1~swh1) unstable-swh; urgency=medium * v0.0.113 * in-memory storage: Add recursive argument to directory_ls endpoint -- Antoine R. Dumont (@ardumont) Fri, 30 Nov 2018 11:56:44 +0100 swh-storage (0.0.112-1~swh1) unstable-swh; urgency=medium * v0.0.112 * in-memory storage: Align with existing storage * docstring: Improvements and adapt according to api * doc: update index to match new swh-doc format * Increase test coverage for stat_counters + fix its bugs. -- Antoine R. Dumont (@ardumont) Fri, 30 Nov 2018 10:28:02 +0100 swh-storage (0.0.111-1~swh1) unstable-swh; urgency=medium * v0.0.111 * Move generative tests in their own module * Open in-memory storage implementation -- Antoine R. Dumont (@ardumont) Wed, 21 Nov 2018 08:55:14 +0100 swh-storage (0.0.110-1~swh1) unstable-swh; urgency=medium * v0.0.110 * storage: Open content_get_range endpoint * tests: Start using hypothesis for tests generation * Improvements: Remove SQLisms from the tests and API * docs: Document metadata providers -- Antoine R. 
Dumont (@ardumont) Fri, 16 Nov 2018 11:53:14 +0100 swh-storage (0.0.109-1~swh1) unstable-swh; urgency=medium * version 0.0.109 -- Antoine Lambert Mon, 12 Nov 2018 14:11:09 +0100 swh-storage (0.0.108-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.108 * Add a function to get a full snapshot from the paginated view -- Nicolas Dandrimont Thu, 18 Oct 2018 18:32:10 +0200 swh-storage (0.0.107-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.107 * Enable pagination of snapshot branches * Drop occurrence-related tables * Drop entity-related tables -- Nicolas Dandrimont Wed, 17 Oct 2018 15:06:07 +0200 swh-storage (0.0.106-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.106 * Fix origin_visit_get_latest_snapshot logic * Improve directory iterator * Drop backwards compatibility between snapshots and occurrences * Drop the occurrence table -- Nicolas Dandrimont Mon, 08 Oct 2018 17:03:54 +0200 swh-storage (0.0.105-1~swh1) unstable-swh; urgency=medium * v0.0.105 * Increase directory_ls endpoint to 20 seconds * Add snapshot to the stats endpoint * Improve documentation -- Antoine R. Dumont (@ardumont) Mon, 10 Sep 2018 11:36:27 +0200 swh-storage (0.0.104-1~swh1) unstable-swh; urgency=medium * version 0.0.104 -- Antoine Lambert Wed, 29 Aug 2018 15:55:37 +0200 swh-storage (0.0.103-1~swh1) unstable-swh; urgency=medium * v0.0.103 * swh.storage.storage: origin_add returns updated list of dict with id -- Antoine R. Dumont (@ardumont) Mon, 30 Jul 2018 11:47:53 +0200 swh-storage (0.0.102-1~swh1) unstable-swh; urgency=medium * Release swh-storage v0.0.102 * Stop using temporary tables for read-only queries * Add timeouts for some read-only queries -- Nicolas Dandrimont Tue, 05 Jun 2018 14:06:54 +0200 swh-storage (0.0.101-1~swh1) unstable-swh; urgency=medium * v0.0.101 * swh.storage.api.client: Permit to specify the query timeout option -- Antoine R. Dumont (@ardumont) Thu, 24 May 2018 12:13:51 +0200 swh-storage (0.0.100-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.100 * remote api: only instantiate storage once per import * add thread-awareness to the storage implementation * properly cleanup after tests * parallelize objstorage and storage additions -- Nicolas Dandrimont Sat, 12 May 2018 18:12:40 +0200 swh-storage (0.0.99-1~swh1) unstable-swh; urgency=medium * v0.0.99 * storage: Add methods to compute directories/revisions diff * Add a new table for "bucketed" object counts * doc: update table clusters in SQL diagram * swh.storage.content_missing: Improve docstring -- Antoine R. Dumont (@ardumont) Tue, 20 Feb 2018 13:32:25 +0100 swh-storage (0.0.98-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.98 * Switch backwards compatibility for snapshots off -- Nicolas Dandrimont Tue, 06 Feb 2018 15:27:15 +0100 swh-storage (0.0.97-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.97 * refactor database initialization * use a separate thread instead of a temporary file for COPY operations * add more snapshot-related endpoints -- Nicolas Dandrimont Tue, 06 Feb 2018 14:07:07 +0100 swh-storage (0.0.96-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.96 * Add snapshot models * Add support for hg revision type -- Nicolas Dandrimont Tue, 19 Dec 2017 16:25:57 +0100 swh-storage (0.0.95-1~swh1) unstable-swh; urgency=medium * v0.0.95 * swh.storage: Rename indexer_configuration to tool * swh.storage: Migrate indexer model to its own model -- Antoine R. 
Dumont (@ardumont) Thu, 07 Dec 2017 09:56:31 +0100 swh-storage (0.0.94-1~swh1) unstable-swh; urgency=medium * v0.0.94 * Open searching origins methods to storage -- Antoine R. Dumont (@ardumont) Tue, 05 Dec 2017 12:32:57 +0100 swh-storage (0.0.93-1~swh1) unstable-swh; urgency=medium * v0.0.93 * swh.storage: Open indexer_configuration_add endpoint * swh-data: Update content mimetype indexer configuration * origin_visit_get: make order repeatable * db: Make unique indices actually unique and vice versa * Add origin_metadata endpoints (add, get, etc...) * cleanup: Remove unused content provenance cache tables -- Antoine R. Dumont (@ardumont) Fri, 24 Nov 2017 11:14:11 +0100 swh-storage (0.0.92-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.92 * make swh.storage.schemata work on SQLAlchemy 1.0 -- Nicolas Dandrimont Thu, 12 Oct 2017 19:51:24 +0200 swh-storage (0.0.91-1~swh1) unstable-swh; urgency=medium * Release swh.storage version 0.0.91 * Update packaging runes -- Nicolas Dandrimont Thu, 12 Oct 2017 18:41:46 +0200 swh-storage (0.0.90-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.90 * Remove leaky dependency on python3-kafka -- Nicolas Dandrimont Wed, 11 Oct 2017 18:53:22 +0200 swh-storage (0.0.89-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.89 * Add new package for ancillary schemata * Add new metadata-related entry points * Update for new swh.model -- Nicolas Dandrimont Wed, 11 Oct 2017 17:39:29 +0200 swh-storage (0.0.88-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.88 * Move the archiver to its own module * Prepare building for stretch -- Nicolas Dandrimont Fri, 30 Jun 2017 14:52:12 +0200 swh-storage (0.0.87-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.87 * update tasks to new swh.scheduler api -- Nicolas Dandrimont Mon, 12 Jun 2017 17:54:11 +0200 swh-storage (0.0.86-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.86 * archiver updates -- Nicolas Dandrimont Tue, 06 Jun 2017 18:43:43 +0200 swh-storage (0.0.85-1~swh1) unstable-swh; urgency=medium * v0.0.85 * Improve license endpoint's unknown license policy -- Antoine R. Dumont (@ardumont) Tue, 06 Jun 2017 17:55:40 +0200 swh-storage (0.0.84-1~swh1) unstable-swh; urgency=medium * v0.0.84 * Update indexer endpoints to use indexer configuration id * Add indexer configuration endpoint -- Antoine R. Dumont (@ardumont) Fri, 02 Jun 2017 16:16:47 +0200 swh-storage (0.0.83-1~swh1) unstable-swh; urgency=medium * v0.0.83 * Add blake2s256 new hash computation on content -- Antoine R. Dumont (@ardumont) Fri, 31 Mar 2017 12:27:09 +0200 swh-storage (0.0.82-1~swh1) unstable-swh; urgency=medium * v0.0.82 * swh.storage.listener: Subscribe to new origin notifications * sql/swh-func: improve equality check on the three columns for swh_content_missing * swh.storage: add length to directory listing primitives * refactoring: Migrate from swh.core.hashutil to swh.model.hashutil * swh.storage.archiver.updater: Create a content updater journal client * vault: add a git fast-import cooker * vault: generic cache to allow multiple cooker types and formats -- Antoine R. 
Dumont (@ardumont) Tue, 21 Mar 2017 14:50:16 +0100 swh-storage (0.0.81-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.81 * archiver improvements for mass injection in azure -- Nicolas Dandrimont Thu, 09 Mar 2017 11:15:28 +0100 swh-storage (0.0.80-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.80 * archiver improvements related to the mass injection of contents in azure * updates to the vault cooker -- Nicolas Dandrimont Tue, 07 Mar 2017 15:12:35 +0100 swh-storage (0.0.79-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.79 * archiver: keep counts of objects in each archive * converters: normalize timestamps using swh.model -- Nicolas Dandrimont Tue, 14 Feb 2017 19:37:36 +0100 swh-storage (0.0.78-1~swh1) unstable-swh; urgency=medium * v0.0.78 * Refactoring some common code into swh.core + adaptation api calls in * swh.objstorage and swh.storage (storage and vault) -- Antoine R. Dumont (@ardumont) Thu, 26 Jan 2017 15:08:03 +0100 swh-storage (0.0.77-1~swh1) unstable-swh; urgency=medium * v0.0.77 * Paginate results for origin_visits endpoint -- Antoine R. Dumont (@ardumont) Thu, 19 Jan 2017 14:41:49 +0100 swh-storage (0.0.76-1~swh1) unstable-swh; urgency=medium * v0.0.76 * Unify storage and objstorage configuration and instantiation functions -- Antoine R. Dumont (@ardumont) Thu, 15 Dec 2016 18:25:58 +0100 swh-storage (0.0.75-1~swh1) unstable-swh; urgency=medium * v0.0.75 * Add information on indexer tools (T610) -- Antoine R. Dumont (@ardumont) Fri, 02 Dec 2016 18:21:36 +0100 swh-storage (0.0.74-1~swh1) unstable-swh; urgency=medium * v0.0.74 * Use strict equality for content ctags' symbols search -- Antoine R. Dumont (@ardumont) Tue, 29 Nov 2016 17:25:29 +0100 swh-storage (0.0.73-1~swh1) unstable-swh; urgency=medium * v0.0.73 * Improve ctags search query for edge cases -- Antoine R. Dumont (@ardumont) Mon, 28 Nov 2016 16:34:55 +0100 swh-storage (0.0.72-1~swh1) unstable-swh; urgency=medium * v0.0.72 * Permit pagination on content_ctags_search api endpoint -- Antoine R. Dumont (@ardumont) Thu, 24 Nov 2016 14:19:29 +0100 swh-storage (0.0.71-1~swh1) unstable-swh; urgency=medium * v0.0.71 * Open full-text search endpoint on ctags -- Antoine R. Dumont (@ardumont) Wed, 23 Nov 2016 17:33:51 +0100 swh-storage (0.0.70-1~swh1) unstable-swh; urgency=medium * v0.0.70 * Add new license endpoints (add/get) * Update ctags endpoints to align update conflict policy -- Antoine R. Dumont (@ardumont) Thu, 10 Nov 2016 17:27:49 +0100 swh-storage (0.0.69-1~swh1) unstable-swh; urgency=medium * v0.0.69 * storage: Open ctags entry points (missing, add, get) * storage: allow adding several origins at once -- Antoine R. Dumont (@ardumont) Thu, 20 Oct 2016 16:07:07 +0200 swh-storage (0.0.68-1~swh1) unstable-swh; urgency=medium * v0.0.68 * indexer: Open mimetype/language get endpoints * indexer: Add the mimetype/language add function with conflict_update flag * archiver: Extend worker-to-backend to transmit messages to another * queue (once done) -- Antoine R. Dumont (@ardumont) Thu, 13 Oct 2016 15:30:21 +0200 swh-storage (0.0.67-1~swh1) unstable-swh; urgency=medium * v0.0.67 * Fix provenance storage init function -- Antoine R. Dumont (@ardumont) Wed, 12 Oct 2016 02:24:12 +0200 swh-storage (0.0.66-1~swh1) unstable-swh; urgency=medium * v0.0.66 * Improve provenance configuration format -- Antoine R. 
Dumont (@ardumont) Wed, 12 Oct 2016 01:39:26 +0200 swh-storage (0.0.65-1~swh1) unstable-swh; urgency=medium * v0.0.65 * Open api entry points for swh.indexer about content mimetype and * language * Update schema graph to latest version -- Antoine R. Dumont (@ardumont) Sat, 08 Oct 2016 10:00:30 +0200 swh-storage (0.0.64-1~swh1) unstable-swh; urgency=medium * v0.0.64 * Fix: Missing incremented version 5 for archiver.dbversion * Retrieve information on a content cached * sql/swh-func: content cache populates lines in deterministic order -- Antoine R. Dumont (@ardumont) Thu, 29 Sep 2016 21:50:59 +0200 swh-storage (0.0.63-1~swh1) unstable-swh; urgency=medium * v0.0.63 * Make the 'worker to backend' destination agnostic (message parameter) * Improve 'unknown sha1' policy (archiver db can lag behind swh db) * Improve 'force copy' policy -- Antoine R. Dumont (@ardumont) Fri, 23 Sep 2016 12:29:50 +0200 swh-storage (0.0.62-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.62 * Updates to the provenance cache to reduce churn on the main tables -- Nicolas Dandrimont Thu, 22 Sep 2016 18:54:52 +0200 swh-storage (0.0.61-1~swh1) unstable-swh; urgency=medium * v0.0.61 * Handle copies of unregistered sha1 in archiver db * Fix copy to only the targeted destination * Update to latest python3-swh.core dependency -- Antoine R. Dumont (@ardumont) Thu, 22 Sep 2016 13:44:05 +0200 swh-storage (0.0.60-1~swh1) unstable-swh; urgency=medium * v0.0.60 * Update archiver dependencies -- Antoine R. Dumont (@ardumont) Tue, 20 Sep 2016 16:46:48 +0200 swh-storage (0.0.59-1~swh1) unstable-swh; urgency=medium * v0.0.59 * Unify configuration property between director/worker * Deal with potential missing contents in the archiver db * Improve get_contents_error implementation * Remove dead code in swh.storage.db about archiver -- Antoine R. Dumont (@ardumont) Sat, 17 Sep 2016 12:50:14 +0200 swh-storage (0.0.58-1~swh1) unstable-swh; urgency=medium * v0.0.58 * ArchiverDirectorToBackend reads sha1 from stdin and sends chunks of sha1 * for archival. -- Antoine R. Dumont (@ardumont) Fri, 16 Sep 2016 22:17:14 +0200 swh-storage (0.0.57-1~swh1) unstable-swh; urgency=medium * v0.0.57 * Update swh.storage.archiver -- Antoine R. Dumont (@ardumont) Thu, 15 Sep 2016 16:30:11 +0200 swh-storage (0.0.56-1~swh1) unstable-swh; urgency=medium * v0.0.56 * Vault: Add vault implementation (directory cooker & cache * implementation + its api) * Archiver: Add another archiver implementation (direct to backend) -- Antoine R. Dumont (@ardumont) Thu, 15 Sep 2016 10:56:35 +0200 swh-storage (0.0.55-1~swh1) unstable-swh; urgency=medium * v0.0.55 * Fix origin_visit endpoint -- Antoine R. Dumont (@ardumont) Thu, 08 Sep 2016 15:21:28 +0200 swh-storage (0.0.54-1~swh1) unstable-swh; urgency=medium * v0.0.54 * Open origin_visit_get_by entry point -- Antoine R. Dumont (@ardumont) Mon, 05 Sep 2016 12:36:34 +0200 swh-storage (0.0.53-1~swh1) unstable-swh; urgency=medium * v0.0.53 * Add cache about content provenance * debian: fix python3-swh.storage.archiver runtime dependency * debian: create new package python3-swh.storage.provenance -- Antoine R. Dumont (@ardumont) Fri, 02 Sep 2016 11:14:09 +0200 swh-storage (0.0.52-1~swh1) unstable-swh; urgency=medium * v0.0.52 * Package python3-swh.storage.archiver -- Antoine R. 
Dumont (@ardumont) Thu, 25 Aug 2016 14:55:23 +0200 swh-storage (0.0.51-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.51 * Add new metadata column to origin_visit * Update swh-add-directory script for updated API -- Nicolas Dandrimont Wed, 24 Aug 2016 14:36:03 +0200 swh-storage (0.0.50-1~swh1) unstable-swh; urgency=medium * v0.0.50 * Add a function to pull (only) metadata for a list of contents * Update occurrence_add api entry point to properly deal with origin_visit * Add origin_visit api entry points to create/update origin_visit -- Antoine R. Dumont (@ardumont) Tue, 23 Aug 2016 16:29:26 +0200 swh-storage (0.0.49-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.49 * Proper dependency on python3-kafka -- Nicolas Dandrimont Fri, 19 Aug 2016 13:45:52 +0200 swh-storage (0.0.48-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.48 * Updates to the archiver * Notification support for new object creations -- Nicolas Dandrimont Fri, 19 Aug 2016 12:13:50 +0200 swh-storage (0.0.47-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.47 * Update storage archiver to new schemaless schema -- Nicolas Dandrimont Fri, 22 Jul 2016 16:59:19 +0200 swh-storage (0.0.46-1~swh1) unstable-swh; urgency=medium * v0.0.46 * Update archiver bootstrap -- Antoine R. Dumont (@ardumont) Wed, 20 Jul 2016 19:04:42 +0200 swh-storage (0.0.45-1~swh1) unstable-swh; urgency=medium * v0.0.45 * Separate swh.storage.archiver's db from swh.storage.storage -- Antoine R. Dumont (@ardumont) Tue, 19 Jul 2016 15:05:36 +0200 swh-storage (0.0.44-1~swh1) unstable-swh; urgency=medium * v0.0.44 * Open listing visits per origin api -- Quentin Campos Fri, 08 Jul 2016 11:27:10 +0200 swh-storage (0.0.43-1~swh1) unstable-swh; urgency=medium * v0.0.43 * Extract objstorage to its own package swh.objstorage -- Quentin Campos Mon, 27 Jun 2016 14:57:12 +0200 swh-storage (0.0.42-1~swh1) unstable-swh; urgency=medium * Add an object storage multiplexer to allow transition between multiple versions of object storages. -- Quentin Campos Tue, 21 Jun 2016 15:03:52 +0200 swh-storage (0.0.41-1~swh1) unstable-swh; urgency=medium * Refactoring of the object storage in order to allow multiple versions of it, as well as a multiplexer for version transition. -- Quentin Campos Thu, 16 Jun 2016 15:54:16 +0200 swh-storage (0.0.40-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.40: * Refactor objstorage to allow for different implementations * Updates to the checker functionality * Bump swh.core dependency to v0.0.20 -- Nicolas Dandrimont Tue, 14 Jun 2016 17:25:42 +0200 swh-storage (0.0.39-1~swh1) unstable-swh; urgency=medium * v0.0.39 * Add run_from_webserver function for objstorage api server * Add unique identifier message on default api server route endpoints -- Antoine R. 
Dumont (@ardumont) Fri, 20 May 2016 15:27:34 +0200 swh-storage (0.0.38-1~swh1) unstable-swh; urgency=medium * v0.0.38 * Add an http api for object storage * Implement an archiver to perform backup copies -- Quentin Campos Fri, 20 May 2016 14:40:14 +0200 swh-storage (0.0.37-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.37 * Add fullname to person table * Add svn as a revision type -- Nicolas Dandrimont Fri, 08 Apr 2016 16:44:24 +0200 swh-storage (0.0.36-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.36 * Add json-schema documentation for the jsonb fields * Overhaul entity handling -- Nicolas Dandrimont Wed, 16 Mar 2016 17:27:17 +0100 swh-storage (0.0.35-1~swh1) unstable-swh; urgency=medium * Release swh-storage v0.0.35 * Factor in temporary tables with only an id (db v059) * Allow generic object search by sha1_git (db v060) -- Nicolas Dandrimont Thu, 25 Feb 2016 16:21:01 +0100 swh-storage (0.0.34-1~swh1) unstable-swh; urgency=medium * Release swh.storage version 0.0.34 * occurrence improvements * commit metadata improvements -- Nicolas Dandrimont Fri, 19 Feb 2016 18:20:07 +0100 swh-storage (0.0.33-1~swh1) unstable-swh; urgency=medium * Bump swh.storage to version 0.0.33 -- Nicolas Dandrimont Fri, 05 Feb 2016 11:17:00 +0100 swh-storage (0.0.32-1~swh1) unstable-swh; urgency=medium * v0.0.32 * Let the person's id flow * sql/upgrades/051: 050->051 schema change * sql/upgrades/050: 049->050 schema change - Clean up obsolete functions * sql/upgrades/049: Final take for 048->049 schema change. * sql: Use a new schema for occurrences -- Antoine R. Dumont (@ardumont) Fri, 29 Jan 2016 17:44:27 +0100 swh-storage (0.0.31-1~swh1) unstable-swh; urgency=medium * v0.0.31 * Deal with occurrence_history.branch, occurrence.branch, release.name as bytes -- Antoine R. Dumont (@ardumont) Wed, 27 Jan 2016 15:45:53 +0100 swh-storage (0.0.30-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage v0.0.30 release * type-agnostic occurrences and revisions -- Nicolas Dandrimont Tue, 26 Jan 2016 07:36:43 +0100 swh-storage (0.0.29-1~swh1) unstable-swh; urgency=medium * v0.0.29 * New: * Upgrade sql schema to 041→043 * Deal with communication downtime between clients and storage * Open occurrence_get(origin_id) to retrieve latest occurrences per origin * Open release_get_by to retrieve a release by origin * Open directory_get to retrieve information on directory by id * Open entity_get to retrieve information on entity + hierarchy from its uuid * Open directory_get that retrieve information on directory per id * Update: * directory_get/directory_ls: Rename to directory_ls * revision_log: update to retrieve logs from multiple root revisions * revision_get_by: branch name filtering is now optional -- Antoine R. Dumont (@ardumont) Wed, 20 Jan 2016 16:15:50 +0100 swh-storage (0.0.28-1~swh1) unstable-swh; urgency=medium * v0.0.28 * Open entity_get api -- Antoine R. Dumont (@ardumont) Fri, 15 Jan 2016 16:37:27 +0100 swh-storage (0.0.27-1~swh1) unstable-swh; urgency=medium * v0.0.27 * Open directory_entry_get_by_path api * Improve get_revision_by api performance * sql/swh-schema: add index on origin(type, url) --> improve origin lookup api * Bump to 039 db version -- Antoine R. Dumont (@ardumont) Fri, 15 Jan 2016 12:42:47 +0100 swh-storage (0.0.26-1~swh1) unstable-swh; urgency=medium * v0.0.26 * Open revision_get_by to retrieve a revision by occurrence criterion filtering * sql/upgrades/036: add 035→036 upgrade script -- Antoine R. 
Dumont (@ardumont) Wed, 13 Jan 2016 12:46:44 +0100 swh-storage (0.0.25-1~swh1) unstable-swh; urgency=medium * v0.0.25 * Limit results in swh_revision_list* * Create the package to align the current db production version on https://archive.softwareheritage.org/ -- Antoine R. Dumont (@ardumont) Fri, 08 Jan 2016 11:33:08 +0100 swh-storage (0.0.24-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage release v0.0.24 * Add a limit argument to revision_log -- Nicolas Dandrimont Wed, 06 Jan 2016 15:12:53 +0100 swh-storage (0.0.23-1~swh1) unstable-swh; urgency=medium * v0.0.23 * Protect against overflow, wrapped in ValueError for client * Fix relative path import for remote storage. * api to retrieve revision_log is now 'parents' aware -- Antoine R. Dumont (@ardumont) Wed, 06 Jan 2016 11:30:58 +0100 swh-storage (0.0.22-1~swh1) unstable-swh; urgency=medium * Release v0.0.22 * Fix relative import for remote storage -- Nicolas Dandrimont Wed, 16 Dec 2015 16:04:48 +0100 swh-storage (0.0.21-1~swh1) unstable-swh; urgency=medium * Prepare release v0.0.21 * Protect the storage api client from overflows * Add a get_storage function mapping to local or remote storage -- Nicolas Dandrimont Wed, 16 Dec 2015 13:34:46 +0100 swh-storage (0.0.20-1~swh1) unstable-swh; urgency=medium * v0.0.20 * allow numeric timestamps with offset * Open revision_log api * start migration to swh.model -- Antoine R. Dumont (@ardumont) Mon, 07 Dec 2015 15:20:36 +0100 swh-storage (0.0.19-1~swh1) unstable-swh; urgency=medium * v0.0.19 * Improve directory listing with content data * Open person_get * Open release_get data reading * Improve origin_get api * Effort to unify api output on dict (for read) * Migrate backend to 032 -- Antoine R. Dumont (@ardumont) Fri, 27 Nov 2015 13:33:34 +0100 swh-storage (0.0.18-1~swh1) unstable-swh; urgency=medium * v0.0.18 * Improve origin_get to permit retrieval per id * Update directory_get implementation (add join from * directory_entry_file to content) * Open release_get : [sha1] -> [Release] -- Antoine R. Dumont (@ardumont) Thu, 19 Nov 2015 11:18:35 +0100 swh-storage (0.0.17-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.17 * Add some entity related entry points -- Nicolas Dandrimont Tue, 03 Nov 2015 16:40:59 +0100 swh-storage (0.0.16-1~swh1) unstable-swh; urgency=medium * v0.0.16 * Add metadata column in revision (db version 29) * cache http connection for remote storage client -- Antoine R. 
Dumont (@ardumont) Thu, 29 Oct 2015 10:29:00 +0100 swh-storage (0.0.15-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.15 * Allow population of fetch_history * Update organizations / projects as entities * Use schema v028 for directory addition -- Nicolas Dandrimont Tue, 27 Oct 2015 11:43:39 +0100 swh-storage (0.0.14-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage v0.0.14 deployment -- Nicolas Dandrimont Fri, 16 Oct 2015 15:34:08 +0200 swh-storage (0.0.13-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.13 -- Nicolas Dandrimont Fri, 16 Oct 2015 14:51:44 +0200 swh-storage (0.0.12-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.12 -- Nicolas Dandrimont Tue, 13 Oct 2015 12:39:18 +0200 swh-storage (0.0.11-1~swh1) unstable-swh; urgency=medium * Preparing deployment of swh.storage v0.0.11 -- Nicolas Dandrimont Fri, 09 Oct 2015 17:44:51 +0200 swh-storage (0.0.10-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.10 -- Nicolas Dandrimont Tue, 06 Oct 2015 17:37:00 +0200 swh-storage (0.0.9-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.9 -- Nicolas Dandrimont Thu, 01 Oct 2015 19:03:00 +0200 swh-storage (0.0.8-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.8 -- Nicolas Dandrimont Thu, 01 Oct 2015 11:32:46 +0200 swh-storage (0.0.7-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.7 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:52:54 +0200 swh-storage (0.0.6-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.6 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:43:24 +0200 swh-storage (0.0.5-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.5 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:27:00 +0200 swh-storage (0.0.1-1~swh1) unstable-swh; urgency=medium * Initial release * swh.storage.api: Properly escape arbitrary byte sequences in arguments -- Nicolas Dandrimont Tue, 22 Sep 2015 17:02:34 +0200 diff --git a/swh.storage.egg-info/PKG-INFO b/swh.storage.egg-info/PKG-INFO index 0df6abfb..782ed476 100644 --- a/swh.storage.egg-info/PKG-INFO +++ b/swh.storage.egg-info/PKG-INFO @@ -1,223 +1,223 @@ Metadata-Version: 2.1 Name: swh.storage -Version: 0.30.0 +Version: 0.30.1 Summary: Software Heritage storage manager Home-page: https://forge.softwareheritage.org/diffusion/DSTO/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/ Description: swh-storage =========== Abstraction layer over the archive, allowing to access all stored source code artifacts as well as their metadata. See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details. ## Quick start ### Dependencies Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a cassandra server. 
#### Debian-like host ``` $ sudo apt install libpq-dev postgresql-11 cassandra ``` #### Non Debian-like host The tests expects the path to `cassandra` to either be unspecified, it is then looked up at `/usr/sbin/cassandra`, either specified through the environment variable `SWH_CASSANDRA_BIN`. Optionally, you can avoid running the cassandra tests. ``` (swh) :~/swh-storage$ tox -- -m 'not cassandra' ``` ### Installation It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for a more details on how to setup a working environment. You can install the package directly from [pypi](https://pypi.org/p/swh.storage): ``` (swh) :~$ pip install swh.storage [...] ``` Or from sources: ``` (swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git [...] (swh) :~$ cd swh-storage (swh) :~/swh-storage$ pip install . [...] ``` Then you can check it's properly installed: ``` (swh) :~$ swh storage --help Usage: swh storage [OPTIONS] COMMAND [ARGS]... Software Heritage Storage tools. Options: -h, --help Show this message and exit. Commands: rpc-serve Software Heritage Storage RPC server. ``` ## Tests The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/). ``` (swh) :~$ pip install tox ``` ### tox From the sources directory, simply use tox: ``` (swh) :~/swh-storage$ tox [...] ========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ========== _______________________________ summary ________________________________ flake8: commands succeeded py3: commands succeeded congratulations :) ``` Note: it is possible to set the `JAVA_HOME` environment variable to specify the version of the JVM to be used by Cassandra. For example, at the time of writing this, Cassandra does not support java 14, so one may want to use for example java 11: ``` (swh) :~/swh-storage$ export JAVA_HOME=/usr/lib/jvm/java-14-openjdk-amd64/bin/java (swh) :~/swh-storage$ tox [...] ``` ## Development The storage server can be locally started. It requires a configuration file and a running Postgresql database. ### Sample configuration A typical configuration `storage.yml` file is: ``` storage: cls: local db: "dbname=softwareheritage-dev user= password=" objstorage: cls: pathslicing root: /tmp/swh-storage/ slicing: 0:2/2:4/4:6 ``` which means, this uses: - a local storage instance whose db connection is to `softwareheritage-dev` local instance, - the objstorage uses a local objstorage instance whose: - `root` path is /tmp/swh-storage, - slicing scheme is `0:2/2:4/4:6`. This means that the identifier of the content (sha1) which will be stored on disk at first level with the first 2 hex characters, the second level with the next 2 hex characters and the third level with the next 2 hex characters. And finally the complete hash file holding the raw content. For example: 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c Note that the `root` path should exist on disk before starting the server. ### Starting the storage server If the python package has been properly installed (e.g. in a virtual env), you should be able to use the command: ``` (swh) :~/swh-storage$ swh storage rpc-serve storage.yml ``` This runs a local swh-storage api at 5002 port. ``` (swh) :~/swh-storage$ curl http://127.0.0.1:5002 Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

```

### And then what?

In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc.), you can define a remote storage with this snippet of yaml configuration.

```
storage:
  cls: remote
  url: http://localhost:5002/
```

You could directly define a local storage with the following snippet:

```
storage:
  cls: local
  db: service=swh-dev
  objstorage:
    cls: pathslicing
    root: /home/storage/swh-storage/
    slicing: 0:2/2:4/4:6
```

Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
Provides-Extra: journal
diff --git a/swh/storage/backfill.py b/swh/storage/backfill.py
index 3df10b2c..79b14321 100644
--- a/swh/storage/backfill.py
+++ b/swh/storage/backfill.py
@@ -1,646 +1,649 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

"""Storage backfiller.

The backfiller's goal is to produce part or all of the objects
from a storage back into the journal topics.

The current implementation consists of the JournalBackfiller class.

It simply reads the objects from the storage and sends every object
identifier back to the journal.

"""

import logging
from typing import Any, Callable, Dict, Optional

from swh.core.db import BaseDb
from swh.model.identifiers import ExtendedObjectType
from swh.model.model import (
    BaseModel,
    Directory,
    DirectoryEntry,
    ExtID,
    RawExtrinsicMetadata,
    Release,
    Revision,
    Snapshot,
    SnapshotBranch,
    TargetType,
)
from swh.storage.postgresql.converters import (
    db_to_extid,
    db_to_raw_extrinsic_metadata,
    db_to_release,
    db_to_revision,
)
from swh.storage.replay import object_converter_fn
from swh.storage.writer import JournalWriter

logger = logging.getLogger(__name__)

PARTITION_KEY = {
    "content": "sha1",
    "skipped_content": "sha1",
    "directory": "id",
    "extid": "target",
    "metadata_authority": "type, url",
    "metadata_fetcher": "name, version",
    "raw_extrinsic_metadata": "target",
    "revision": "revision.id",
    "release": "release.id",
    "snapshot": "id",
    "origin": "id",
    "origin_visit": "origin_visit.origin",
    "origin_visit_status": "origin_visit_status.origin",
}

COLUMNS = {
    "content": [
        "sha1",
        "sha1_git",
        "sha256",
        "blake2s256",
        "length",
        "status",
        "ctime",
    ],
    "skipped_content": [
        "sha1",
        "sha1_git",
        "sha256",
        "blake2s256",
        "length",
        "ctime",
        "status",
        "reason",
    ],
    "directory": ["id", "dir_entries", "file_entries", "rev_entries"],
    "extid": ["extid_type", "extid", "target_type", "target"],
    "metadata_authority": ["type", "url"],
    "metadata_fetcher": ["name", "version"],
    "origin": ["url"],
    "origin_visit": ["visit", "type", ("origin.url", "origin"), "date",],
    "origin_visit_status": [
        ("origin_visit_status.visit", "visit"),
        ("origin.url", "origin"),
        ("origin_visit_status.date", "date"),
        "type",
        "snapshot",
        "status",
        "metadata",
    ],
    "raw_extrinsic_metadata": [
        "raw_extrinsic_metadata.type",
        "raw_extrinsic_metadata.target",
        "metadata_authority.type",
        "metadata_authority.url",
        "metadata_fetcher.name",
        "metadata_fetcher.version",
        "discovery_date",
        "format",
        "raw_extrinsic_metadata.metadata",
        "origin",
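        # The columns below record the context of the metadata target:
        # which origin, visit, snapshot, release, revision, path or
        # directory the metadata was found on (cf. the
        # RawExtrinsicMetadata model in swh.model).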
"visit", "snapshot", "release", "revision", "path", "directory", ], "revision": [ ("revision.id", "id"), "date", "date_offset", "date_neg_utc_offset", "committer_date", "committer_date_offset", "committer_date_neg_utc_offset", "type", "directory", "message", "synthetic", "metadata", "extra_headers", ( "array(select parent_id::bytea from revision_history rh " "where rh.id = revision.id order by rh.parent_rank asc)", "parents", ), ("a.id", "author_id"), ("a.name", "author_name"), ("a.email", "author_email"), ("a.fullname", "author_fullname"), ("c.id", "committer_id"), ("c.name", "committer_name"), ("c.email", "committer_email"), ("c.fullname", "committer_fullname"), ], "release": [ ("release.id", "id"), "date", "date_offset", "date_neg_utc_offset", "comment", ("release.name", "name"), "synthetic", "target", "target_type", ("a.id", "author_id"), ("a.name", "author_name"), ("a.email", "author_email"), ("a.fullname", "author_fullname"), ], "snapshot": ["id", "object_id"], } JOINS = { "release": ["person a on release.author=a.id"], "revision": [ "person a on revision.author=a.id", "person c on revision.committer=c.id", ], "origin_visit": ["origin on origin_visit.origin=origin.id"], "origin_visit_status": ["origin on origin_visit_status.origin=origin.id",], "raw_extrinsic_metadata": [ "metadata_authority on " "raw_extrinsic_metadata.authority_id=metadata_authority.id", "metadata_fetcher on raw_extrinsic_metadata.fetcher_id=metadata_fetcher.id", ], } def directory_converter(db: BaseDb, directory_d: Dict[str, Any]) -> Directory: """Convert directory from the flat representation to swh model compatible objects. """ columns = ["target", "name", "perms"] query_template = """ select %(columns)s from directory_entry_%(type)s where id in %%s """ types = ["file", "dir", "rev"] entries = [] with db.cursor() as cur: for type in types: ids = directory_d.pop("%s_entries" % type) if not ids: continue query = query_template % { "columns": ",".join(columns), "type": type, } cur.execute(query, (tuple(ids),)) for row in cur: entry_d = dict(zip(columns, row)) entry = DirectoryEntry( name=entry_d["name"], type=type, target=entry_d["target"], perms=entry_d["perms"], ) entries.append(entry) return Directory(id=directory_d["id"], entries=tuple(entries),) def raw_extrinsic_metadata_converter( db: BaseDb, metadata: Dict[str, Any] ) -> RawExtrinsicMetadata: """Convert a raw extrinsic metadata from the flat representation to swh model compatible objects. """ return db_to_raw_extrinsic_metadata(metadata) def extid_converter(db: BaseDb, extid: Dict[str, Any]) -> ExtID: """Convert an extid from the flat representation to swh model compatible objects. """ return db_to_extid(extid) def revision_converter(db: BaseDb, revision_d: Dict[str, Any]) -> Revision: """Convert revision from the flat representation to swh model compatible objects. """ revision = db_to_revision(revision_d) assert revision is not None, revision_d["id"] return revision def release_converter(db: BaseDb, release_d: Dict[str, Any]) -> Release: """Convert release from the flat representation to swh model compatible objects. """ release = db_to_release(release_d) assert release is not None, release_d["id"] return release def snapshot_converter(db: BaseDb, snapshot_d: Dict[str, Any]) -> Snapshot: """Convert snapshot from the flat representation to swh model compatible objects. 
""" columns = ["name", "target", "target_type"] query = """ select %s from snapshot_branches sbs inner join snapshot_branch sb on sb.object_id=sbs.branch_id where sbs.snapshot_id=%%s """ % ", ".join( columns ) with db.cursor() as cur: cur.execute(query, (snapshot_d["object_id"],)) branches = {} for name, *row in cur: branch_d = dict(zip(columns[1:], row)) if branch_d["target"] is not None and branch_d["target_type"] is not None: branch: Optional[SnapshotBranch] = SnapshotBranch( target=branch_d["target"], target_type=TargetType(branch_d["target_type"]), ) else: branch = None branches[name] = branch return Snapshot(id=snapshot_d["id"], branches=branches,) CONVERTERS: Dict[str, Callable[[BaseDb, Dict[str, Any]], BaseModel]] = { "directory": directory_converter, "extid": extid_converter, "raw_extrinsic_metadata": raw_extrinsic_metadata_converter, "revision": revision_converter, "release": release_converter, "snapshot": snapshot_converter, } def object_to_offset(object_id, numbits): """Compute the index of the range containing object id, when dividing space into 2^numbits. Args: object_id (str): The hex representation of object_id numbits (int): Number of bits in which we divide input space Returns: The index of the range containing object id """ q, r = divmod(numbits, 8) length = q + (r != 0) shift_bits = 8 - r if r else 0 truncated_id = object_id[: length * 2] if len(truncated_id) < length * 2: truncated_id += "0" * (length * 2 - len(truncated_id)) truncated_id_bytes = bytes.fromhex(truncated_id) return int.from_bytes(truncated_id_bytes, byteorder="big") >> shift_bits def byte_ranges(numbits, start_object=None, end_object=None): """Generate start/end pairs of bytes spanning numbits bits and constrained by optional start_object and end_object. Args: numbits (int): Number of bits in which we divide input space start_object (str): Hex object id contained in the first range returned end_object (str): Hex object id contained in the last range returned Yields: 2^numbits pairs of bytes """ q, r = divmod(numbits, 8) length = q + (r != 0) shift_bits = 8 - r if r else 0 def to_bytes(i): return int.to_bytes(i << shift_bits, length=length, byteorder="big") start_offset = 0 end_offset = 1 << numbits if start_object is not None: start_offset = object_to_offset(start_object, numbits) if end_object is not None: end_offset = object_to_offset(end_object, numbits) + 1 for start in range(start_offset, end_offset): end = start + 1 if start == 0: yield None, to_bytes(end) elif end == 1 << numbits: yield to_bytes(start), None else: yield to_bytes(start), to_bytes(end) def raw_extrinsic_metadata_target_ranges(start_object=None, end_object=None): """Generate ranges of values for the `target` attribute of `raw_extrinsic_metadata` objects. This generates one range for all values before the first SWHID (which would correspond to raw origin URLs), then a number of hex-based ranges for each known type of SWHID (2**12 ranges for directories, 2**8 ranges for all other types). Finally, it generates one extra range for values above all possible SWHIDs. 
""" if start_object is None: start_object = "" swhid_target_types = sorted(type.value for type in ExtendedObjectType) first_swhid = f"swh:1:{swhid_target_types[0]}:" # Generate a range for url targets, if the starting object is before SWHIDs if start_object < first_swhid: yield start_object, ( first_swhid if end_object is None or end_object >= first_swhid else end_object ) if end_object is not None and end_object <= first_swhid: return # Prime the following loop, which uses the upper bound of the previous range # as lower bound, to account for potential targets between two valid types # of SWHIDs (even though they would eventually be rejected by the # RawExtrinsicMetadata parser, they /might/ exist...) end_swhid = first_swhid # Generate ranges for swhid targets for target_type in swhid_target_types: finished = False base_swhid = f"swh:1:{target_type}:" last_swhid = base_swhid + ("f" * 40) if start_object > last_swhid: continue # Generate 2**8 or 2**12 ranges for _, end in byte_ranges(12 if target_type == "dir" else 8): # Reuse previous uppper bound start_swhid = end_swhid # Use last_swhid for this object type if on the last byte range end_swhid = (base_swhid + end.hex()) if end is not None else last_swhid # Ignore out of bounds ranges if start_object >= end_swhid: continue # Potentially clamp start of range to the first object requested start_swhid = max(start_swhid, start_object) # Handle ending the loop early if the last requested object id is in # the current range if end_object is not None and end_swhid >= end_object: end_swhid = end_object finished = True yield start_swhid, end_swhid if finished: return # Generate one final range for potential raw origin URLs after the last # valid SWHID start_swhid = max(start_object, end_swhid) yield start_swhid, end_object def integer_ranges(start, end, block_size=1000): for start in range(start, end, block_size): if start == 0: yield None, block_size elif start + block_size > end: yield start, end else: yield start, start + block_size RANGE_GENERATORS = { "content": lambda start, end: byte_ranges(24, start, end), "skipped_content": lambda start, end: [(None, None)], "directory": lambda start, end: byte_ranges(24, start, end), "extid": lambda start, end: byte_ranges(24, start, end), "revision": lambda start, end: byte_ranges(24, start, end), "release": lambda start, end: byte_ranges(16, start, end), "raw_extrinsic_metadata": raw_extrinsic_metadata_target_ranges, "snapshot": lambda start, end: byte_ranges(16, start, end), "origin": integer_ranges, "origin_visit": integer_ranges, "origin_visit_status": integer_ranges, } def compute_query(obj_type, start, end): columns = COLUMNS.get(obj_type) join_specs = JOINS.get(obj_type, []) join_clause = "\n".join("left join %s" % clause for clause in join_specs) where = [] where_args = [] if start: where.append("%(keys)s >= %%s") where_args.append(start) if end: where.append("%(keys)s < %%s") where_args.append(end) where_clause = "" if where: where_clause = ("where " + " and ".join(where)) % { "keys": "(%s)" % PARTITION_KEY[obj_type] } column_specs = [] column_aliases = [] for column in columns: if isinstance(column, str): column_specs.append(column) column_aliases.append(column) else: column_specs.append("%s as %s" % column) column_aliases.append(column[1]) query = """ select %(columns)s from %(table)s %(join)s %(where)s """ % { "columns": ",".join(column_specs), "table": obj_type, "join": join_clause, "where": where_clause, } return query, where_args, column_aliases def fetch(db, obj_type, start, end): 
"""Fetch all obj_type's identifiers from db. This opens one connection, stream objects and when done, close the connection. Args: db (BaseDb): Db connection object obj_type (str): Object type start (Union[bytes|Tuple]): Range start identifier end (Union[bytes|Tuple]): Range end identifier Raises: ValueError if obj_type is not supported Yields: Objects in the given range """ query, where_args, column_aliases = compute_query(obj_type, start, end) converter = CONVERTERS.get(obj_type) with db.cursor() as cursor: logger.debug("Fetching data for table %s", obj_type) logger.debug("query: %s %s", query, where_args) cursor.execute(query, where_args) for row in cursor: record = dict(zip(column_aliases, row)) if converter: record = converter(db, record) else: record = object_converter_fn[obj_type](record) logger.debug("record: %s", record) yield record def _format_range_bound(bound): if isinstance(bound, bytes): return bound.hex() else: return str(bound) MANDATORY_KEYS = ["storage", "journal_writer"] class JournalBackfiller: """Class in charge of reading the storage's objects and sends those back to the journal's topics. This is designed to be run periodically. """ def __init__(self, config=None): self.config = config self.check_config(config) def check_config(self, config): missing_keys = [] for key in MANDATORY_KEYS: if not config.get(key): missing_keys.append(key) if missing_keys: raise ValueError( "Configuration error: The following keys must be" " provided: %s" % (",".join(missing_keys),) ) - if "cls" not in config["storage"] or config["storage"]["cls"] != "local": + if "cls" not in config["storage"] or config["storage"]["cls"] not in ( + "local", + "postgresql", + ): raise ValueError( "swh storage backfiller must be configured to use a local" " (PostgreSQL) storage" ) def parse_arguments(self, object_type, start_object, end_object): """Parse arguments Raises: ValueError for unsupported object type ValueError if object ids are not parseable Returns: Parsed start and end object ids """ if object_type not in COLUMNS: raise ValueError( "Object type %s is not supported. " "The only possible values are %s" % (object_type, ", ".join(sorted(COLUMNS.keys()))) ) if object_type in ["origin", "origin_visit", "origin_visit_status"]: if start_object: start_object = int(start_object) else: start_object = 0 if end_object: end_object = int(end_object) else: end_object = 100 * 1000 * 1000 # hard-coded limit return start_object, end_object def run(self, object_type, start_object, end_object, dry_run=False): """Reads storage's subscribed object types and send them to the journal's reading topic. 
""" start_object, end_object = self.parse_arguments( object_type, start_object, end_object ) db = BaseDb.connect(self.config["storage"]["db"]) writer = JournalWriter({"cls": "kafka", **self.config["journal_writer"]}) assert writer.journal is not None for range_start, range_end in RANGE_GENERATORS[object_type]( start_object, end_object ): logger.info( "Processing %s range %s to %s", object_type, _format_range_bound(range_start), _format_range_bound(range_end), ) objects = fetch(db, object_type, start=range_start, end=range_end) if not dry_run: writer.write_additions(object_type, objects) else: # only consume the objects iterator to check for any potential # decoding/encoding errors for obj in objects: pass if __name__ == "__main__": print('Please use the "swh-journal backfiller run" command') diff --git a/swh/storage/migrate_extrinsic_metadata.py b/swh/storage/migrate_extrinsic_metadata.py index 171e3b8c..d390c147 100644 --- a/swh/storage/migrate_extrinsic_metadata.py +++ b/swh/storage/migrate_extrinsic_metadata.py @@ -1,1208 +1,1208 @@ #!/usr/bin/env python3 -# Copyright (C) 2020 The Software Heritage developers +# Copyright (C) 2020-2021 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information """This is an executable script to migrate extrinsic revision metadata from the revision table to the new extrinsic metadata storage. This is designed to be as conservative as possible, following this principle: for each revision the script reads (in "handle_row"), it will read some of the fields, write them directly to the metadata storage, and remove them. Then it checks all the remaining fields are in a hardcoded list of fields that are known not to require migration. This means that every field that isn't migrated was explicitly reviewed while writing this script. Additionally, this script contains many assertions to prevent false positives in its heuristics. 
""" import datetime import hashlib import json import os import re import sys import time from typing import Any, Dict, Optional from urllib.error import HTTPError from urllib.parse import unquote, urlparse from urllib.request import urlopen import iso8601 import psycopg2 from swh.core.db import BaseDb from swh.model.hashutil import hash_to_hex from swh.model.identifiers import ( CoreSWHID, ExtendedObjectType, ExtendedSWHID, ObjectType, QualifiedSWHID, ) from swh.model.model import ( MetadataAuthority, MetadataAuthorityType, MetadataFetcher, RawExtrinsicMetadata, Sha1Git, ) from swh.storage import get_storage from swh.storage.algos.origin import iter_origin_visit_statuses, iter_origin_visits from swh.storage.algos.snapshot import snapshot_get_all_branches # XML namespaces and fields for metadata coming from the deposit: CODEMETA_NS = "https://doi.org/10.5063/SCHEMA/CODEMETA-2.0" ATOM_NS = "http://www.w3.org/2005/Atom" ATOM_KEYS = ["id", "author", "external_identifier", "title"] # columns of the revision table (of the storage DB) REVISION_COLS = [ "id", "directory", "date", "committer_date", "type", "message", "metadata", ] # columns of the tables of the deposit DB DEPOSIT_COLS = [ "deposit.id", "deposit.external_id", "deposit.swhid_context", "deposit.status", "deposit_request.metadata", "deposit_request.date", "deposit_client.provider_url", "deposit_collection.name", "auth_user.username", ] # Formats we write to the extrinsic metadata storage OLD_DEPOSIT_FORMAT = ( "sword-v2-atom-codemeta-v2-in-json-with-expanded-namespaces" # before february 2018 ) NEW_DEPOSIT_FORMAT = "sword-v2-atom-codemeta-v2-in-json" # after february 2018 GNU_FORMAT = "gnu-tree-json" NIXGUIX_FORMAT = "nixguix-sources-json" NPM_FORMAT = "replicate-npm-package-json" ORIGINAL_ARTIFACT_FORMAT = "original-artifacts-json" PYPI_FORMAT = "pypi-project-json" # Information about this script, for traceability FETCHER = MetadataFetcher( name="migrate-extrinsic-metadata-from-revisions", version="0.0.1", ) # Authorities that we got the metadata from AUTHORITIES = { "npmjs": MetadataAuthority( type=MetadataAuthorityType.FORGE, url="https://npmjs.com/", metadata={} ), "pypi": MetadataAuthority( type=MetadataAuthorityType.FORGE, url="https://pypi.org/", metadata={} ), "gnu": MetadataAuthority( type=MetadataAuthorityType.FORGE, url="https://ftp.gnu.org/", metadata={} ), "swh": MetadataAuthority( type=MetadataAuthorityType.REGISTRY, url="https://softwareheritage.org/", metadata={}, ), # for original_artifact (which are checksums computed by SWH) } # Regular expression for the format of revision messages written by the # deposit loader deposit_revision_message_re = re.compile( b"(?P[a-z-]*): " b"Deposit (?P[0-9]+) in collection (?P[a-z-]+).*" ) # not reliable, because PyPI allows arbitrary names def pypi_project_from_filename(filename): original_filename = filename if filename.endswith(".egg"): return None elif filename == "mongomotor-0.13.0.n.tar.gz": return "mongomotor" elif re.match(r"datahaven-rev[0-9]+\.tar\.gz", filename): return "datahaven" elif re.match(r"Dtls-[0-9]\.[0-9]\.[0-9]\.sdist_with_openssl\..*", filename): return "Dtls" elif re.match(r"(gae)?pytz-20[0-9][0-9][a-z]\.(tar\.gz|zip)", filename): return filename.split("-", 1)[0] elif filename.startswith(("powny-", "obedient.powny-",)): return filename.split("-")[0] elif filename.startswith("devpi-theme-16-"): return "devpi-theme-16" elif re.match("[^-]+-[0-9]+.tar.gz", filename): return filename.split("-")[0] elif filename == "ohai-1!0.tar.gz": return "ohai" elif filename == 
"collective.topicitemsevent-0.1dvl.tar.gz": return "collective.topicitemsevent" elif filename.startswith( ("SpiNNStorageHandlers-1!", "sPyNNakerExternalDevicesPlugin-1!") ): return filename.split("-")[0] elif filename.startswith("limnoria-201"): return "limnoria" elif filename.startswith("pytz-20"): return "pytz" elif filename.startswith("youtube_dl_server-alpha."): return "youtube_dl_server" elif filename == "json-extensions-b76bc7d.tar.gz": return "json-extensions" elif filename == "LitReview-0.6989ev.tar.gz": # typo of "dev" return "LitReview" elif filename.startswith("django_options-r"): return "django_options" elif filename == "Greater than, equal, or less Library-0.1.tar.gz": return "Greater-than-equal-or-less-Library" elif filename.startswith("upstart--main-"): return "upstart" elif filename == "duckduckpy0.1.tar.gz": return "duckduckpy" elif filename == "QUI for MPlayer snapshot_9-14-2011.zip": return "QUI-for-MPlayer" elif filename == "Eddy's Memory Game-1.0.zip": return "Eddy-s-Memory-Game" elif filename == "jekyll2nikola-0-0-1.tar.gz": return "jekyll2nikola" elif filename.startswith("ore.workflowed"): return "ore.workflowed" elif re.match("instancemanager-[0-9]*", filename): return "instancemanager" elif filename == "OrzMC_W&L-1.0.0.tar.gz": return "OrzMC-W-L" elif filename == "use0mk.tar.gz": return "use0mk" elif filename == "play-0-develop-1-gd67cd85.tar.gz": return "play" filename = filename.replace(" ", "-") match = re.match( r"^(?P[a-z_.-]+)" # project name r"\.(tar\.gz|tar\.bz2|tgz|zip)$", # extension filename, re.I, ) if match: return match.group("project_name") # First try with a rather strict format, but that allows accidentally # matching the version as part of the package name match = re.match( r"^(?P[a-z0-9_.]+?([-_][a-z][a-z0-9.]+?)*?)" # project name r"-v?" r"([0-9]+!)?" # epoch r"[0-9_.]+([a-z]+[0-9]+)?" # "main" version r"([.-]?(alpha|beta|dev|post|pre|rc)(\.?[0-9]+)?)*" # development status r"([.-]?20[012][0-9]{5,9})?" # date r"([.-]g?[0-9a-f]+)?" # git commit r"([-+]py(thon)?(3k|[23](\.?[0-9]{1,2})?))?" # python version r"\.(tar\.gz|tar\.bz2|tgz|zip)$", # extension filename, re.I, ) if match: return match.group("project_name") # If that doesn't work, give up on trying to parse version suffixes, # and just find the first version-like occurrence in the file name match = re.match( r"^(?P[a-z0-9_.-]+?)" # project name r"[-_.]v?" r"([0-9]+!)?" # epoch r"(" # "main" version r"[0-9_]+\.[0-9_.]+([a-z]+[0-9]+)?" # classic version number r"|20[012][0-9]{5,9}" # date as integer r"|20[012][0-9]-[01][0-9]-[0-3][0-9]" # date as ISO 8601 r")" # end of "main" version r"[a-z]?(dev|pre)?" # direct version suffix r"([._-].*)?" # extra suffixes r"\.(tar\.gz|tar\.bz2|tgz|zip)$", # extension filename, re.I, ) if match: return match.group("project_name") # If that still doesn't work, give one last chance if there's only one # dash or underscore in the name match = re.match( r"^(?P[^_-]+)" # project name r"[_-][^_-]+" # version r"\.(tar\.gz|tar\.bz2|tgz|zip)$", # extension filename, ) assert match, original_filename return match.group("project_name") def pypi_origin_from_project_name(project_name: str) -> str: return f"https://pypi.org/project/{project_name}/" def pypi_origin_from_filename(storage, rev_id: bytes, filename: str) -> Optional[str]: project_name = pypi_project_from_filename(filename) origin = pypi_origin_from_project_name(project_name) # But unfortunately, the filename is user-provided, and doesn't # necessarily match the package name on pypi. 
Therefore, we need # to check it. if _check_revision_in_origin(storage, origin, rev_id): return origin # if the origin we guessed does not exist, query the PyPI API with the # project name we guessed. If only the capitalisation and dash/underscores # are wrong (by far the most common case), PyPI kindly corrects them. try: resp = urlopen(f"https://pypi.org/pypi/{project_name}/json/") except HTTPError as e: assert e.code == 404 # nope; PyPI couldn't correct the wrong project name return None assert resp.code == 200, resp.code project_name = json.load(resp)["info"]["name"] origin = pypi_origin_from_project_name(project_name) if _check_revision_in_origin(storage, origin, rev_id): return origin else: # The origin exists, but the revision does not belong in it. # This happens sometimes, as the filename we guessed the origin # from is user-provided. return None def cran_package_from_url(filename): match = re.match( r"^https://cran\.r-project\.org/src/contrib/" r"(?P<package_name>[a-zA-Z0-9.]+)_[0-9.-]+(\.tar\.gz)?$", filename, ) assert match, filename return match.group("package_name") def npm_package_from_source_url(package_source_url): match = re.match( "^https://registry.npmjs.org/(?P<package_name>.*)/-/[^/]+.tgz$", package_source_url, ) assert match, package_source_url return unquote(match.group("package_name")) def remove_atom_codemeta_metadata_with_xmlns(metadata): """Removes all known Atom and Codemeta metadata fields from the dict, assuming this is a dict generated by xmltodict without expanding namespaces. """ keys_to_remove = ATOM_KEYS + ["@xmlns", "@xmlns:codemeta"] for key in list(metadata): if key.startswith("codemeta:") or key in keys_to_remove: del metadata[key] def remove_atom_codemeta_metadata_without_xmlns(metadata): """Removes all known Atom and Codemeta metadata fields from the dict, assuming this is a dict generated by xmltodict with expanded namespaces. """ for key in list(metadata): if key.startswith(("{%s}" % ATOM_NS, "{%s}" % CODEMETA_NS)): del metadata[key] def _check_revision_in_origin(storage, origin, revision_id): seen_snapshots = set() # no need to visit them again seen_revisions = set() for visit in iter_origin_visits(storage, origin): for status in iter_origin_visit_statuses(storage, origin, visit.visit): if status.snapshot is None: continue if status.snapshot in seen_snapshots: continue seen_snapshots.add(status.snapshot) snapshot = snapshot_get_all_branches(storage, status.snapshot) for (branch_name, branch) in snapshot.branches.items(): if branch is None: continue # If it's the revision passed as argument, then it is indeed in the # origin if branch.target == revision_id: return True # Else, let's make sure the branch doesn't have any other revision # Get the revision at the top of the branch. if branch.target in seen_revisions: continue seen_revisions.add(branch.target) revision = storage.revision_get([branch.target])[0] if revision is None: # https://forge.softwareheritage.org/T997 continue # Check it doesn't have parents (else we would have to # recurse) assert revision.parents == (), "revision with parents" return False def debian_origins_from_row(row, storage): """Guesses a Debian origin from a row.
May return an empty list if it cannot reliably guess it, but all results are guaranteed to be correct.""" filenames = [entry["filename"] for entry in row["metadata"]["original_artifact"]] package_names = {filename.split("_")[0] for filename in filenames} assert len(package_names) == 1, package_names (package_name,) = package_names candidate_origins = [ f"deb://Debian/packages/{package_name}", f"deb://Debian-Security/packages/{package_name}", f"http://snapshot.debian.org/package/{package_name}/", ] return [ origin for origin in candidate_origins if _check_revision_in_origin(storage, origin, row["id"]) ] # Cache of origins that are known to exist _origins = set() def assert_origin_exists(storage, origin): assert check_origin_exists(storage, origin), origin def check_origin_exists(storage, origin): return ( hashlib.sha1(origin.encode()).digest() in _origins # very fast or storage.origin_get([origin])[0] is not None # slow, but up to date ) def load_metadata( storage, revision_id, directory_id, discovery_date: datetime.datetime, metadata: Dict[str, Any], format: str, authority: MetadataAuthority, origin: Optional[str], dry_run: bool, ): """Does the actual loading to swh-storage.""" directory_swhid = ExtendedSWHID( object_type=ExtendedObjectType.DIRECTORY, object_id=directory_id ) revision_swhid = CoreSWHID(object_type=ObjectType.REVISION, object_id=revision_id) obj = RawExtrinsicMetadata( target=directory_swhid, discovery_date=discovery_date, authority=authority, fetcher=FETCHER, format=format, metadata=json.dumps(metadata).encode(), origin=origin, revision=revision_swhid, ) if not dry_run: storage.raw_extrinsic_metadata_add([obj]) def handle_deposit_row( row, discovery_date: Optional[datetime.datetime], origin, storage, deposit_cur, dry_run: bool, ): """Loads metadata from the deposit database (which is more reliable than the metadata on the revision object, as some versions of the deposit loader were a bit lossy; and they used very different formats for the field in the revision table).
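Returns: (origin, discovery_date): the origin the metadata was attributed to, and the discovery date used (when called with discovery_date=None, this defaults to the date of the latest deposit request)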
""" parsed_message = deposit_revision_message_re.match(row["message"]) assert parsed_message is not None, row["message"] deposit_id = int(parsed_message.group("deposit_id")) collection = parsed_message.group("collection").decode() client_name = parsed_message.group("client").decode() deposit_cur.execute( f"SELECT {', '.join(DEPOSIT_COLS)} FROM deposit " f"INNER JOIN deposit_collection " f" ON (deposit.collection_id=deposit_collection.id) " f"INNER JOIN deposit_client ON (deposit.client_id=deposit_client.user_ptr_id) " f"INNER JOIN auth_user ON (deposit.client_id=auth_user.id) " f"INNER JOIN deposit_request ON (deposit.id=deposit_request.deposit_id) " f"WHERE deposit.id = %s", (deposit_id,), ) provider_urls = set() swhids = set() metadata_entries = [] dates = set() external_identifiers = set() for deposit_request_row in deposit_cur: deposit_request = dict(zip(DEPOSIT_COLS, deposit_request_row)) # Sanity checks to make sure we selected the right deposit assert deposit_request["deposit.id"] == deposit_id assert deposit_request["deposit_collection.name"] == collection, deposit_request if client_name != "": # Sometimes it's missing from the commit message assert deposit_request["auth_user.username"] == client_name # Date of the deposit request (either the initial request, of subsequent ones) date = deposit_request["deposit_request.date"] dates.add(date) assert deposit_request["deposit.swhid_context"], deposit_request external_identifiers.add(deposit_request["deposit.external_id"]) swhids.add(deposit_request["deposit.swhid_context"]) # Client of the deposit provider_urls.add(deposit_request["deposit_client.provider_url"]) metadata = deposit_request["deposit_request.metadata"] if metadata is not None: json.dumps(metadata).encode() # check it's valid if "@xmlns" in metadata: assert metadata["@xmlns"] == ATOM_NS assert metadata["@xmlns:codemeta"] in (CODEMETA_NS, [CODEMETA_NS]) format = NEW_DEPOSIT_FORMAT elif "{http://www.w3.org/2005/Atom}id" in metadata: assert ( "{https://doi.org/10.5063/SCHEMA/CODEMETA-2.0}author" in metadata or "{http://www.w3.org/2005/Atom}author" in metadata ) format = OLD_DEPOSIT_FORMAT else: # new format introduced in # https://forge.softwareheritage.org/D4065 # it's the same as the first case, but with the @xmlns # declarations stripped # Most of them should have the "id", but some revisions, # like 4d3890004fade1f4ec3bf7004a4af0c490605128, are missing # this field assert "id" in metadata or "title" in metadata assert "codemeta:author" in metadata format = NEW_DEPOSIT_FORMAT metadata_entries.append((date, format, metadata)) if discovery_date is None: discovery_date = max(dates) # Sanity checks to make sure deposit requests are consistent with each other assert len(metadata_entries) >= 1, deposit_id assert len(provider_urls) == 1, f"expected 1 provider url, got {provider_urls}" (provider_url,) = provider_urls assert len(swhids) == 1 (swhid,) = swhids assert ( len(external_identifiers) == 1 ), f"expected 1 external identifier, got {external_identifiers}" (external_identifier,) = external_identifiers # computed the origin from the external_identifier if we don't have one if origin is None: origin = f"{provider_url.strip('/')}/{external_identifier}" # explicit list of mistakes that happened in the past, but shouldn't # happen again: if origin == "https://hal.archives-ouvertes.fr/hal-01588781": # deposit id 75 origin = "https://inria.halpreprod.archives-ouvertes.fr/hal-01588781" elif origin == "https://hal.archives-ouvertes.fr/hal-01588782": # deposit id 76 origin = 
"https://inria.halpreprod.archives-ouvertes.fr/hal-01588782" elif origin == "https://hal.archives-ouvertes.fr/hal-01592430": # deposit id 143 origin = "https://hal-preprod.archives-ouvertes.fr/hal-01592430" elif origin == "https://hal.archives-ouvertes.fr/hal-01588927": origin = "https://inria.halpreprod.archives-ouvertes.fr/hal-01588927" elif origin == "https://hal.archives-ouvertes.fr/hal-01593875": # deposit id 175 origin = "https://hal-preprod.archives-ouvertes.fr/hal-01593875" elif deposit_id == 160: assert origin == "https://www.softwareheritage.org/je-suis-gpl", origin origin = "https://forge.softwareheritage.org/source/jesuisgpl/" elif origin == "https://hal.archives-ouvertes.fr/hal-01588942": # deposit id 90 origin = "https://inria.halpreprod.archives-ouvertes.fr/hal-01588942" elif origin == "https://hal.archives-ouvertes.fr/hal-01592499": # deposit id 162 origin = "https://hal-preprod.archives-ouvertes.fr/hal-01592499" elif origin == "https://hal.archives-ouvertes.fr/hal-01588935": # deposit id 89 origin = "https://hal-preprod.archives-ouvertes.fr/hal-01588935" assert_origin_exists(storage, origin) # check the origin we computed matches the one in the deposit db swhid_origin = QualifiedSWHID.from_string(swhid).origin if origin is not None: # explicit list of mistakes that happened in the past, but shouldn't # happen again: exceptions = [ ( # deposit id 229 "https://hal.archives-ouvertes.fr/hal-01243573", "https://hal-test.archives-ouvertes.fr/hal-01243573", ), ( # deposit id 199 "https://hal.archives-ouvertes.fr/hal-01243065", "https://hal-test.archives-ouvertes.fr/hal-01243065", ), ( # deposit id 164 "https://hal.archives-ouvertes.fr/hal-01593855", "https://hal-preprod.archives-ouvertes.fr/hal-01593855", ), ] if (origin, swhid_origin) not in exceptions: assert origin == swhid_origin, ( f"the origin we guessed from the deposit db or revision ({origin}) " f"doesn't match the one in the deposit db's SWHID ({swhid})" ) authority = MetadataAuthority( type=MetadataAuthorityType.DEPOSIT_CLIENT, url=provider_url, metadata={}, ) for (date, format, metadata) in metadata_entries: load_metadata( storage, row["id"], row["directory"], date, metadata, format, authority=authority, origin=origin, dry_run=dry_run, ) return (origin, discovery_date) def handle_row(row: Dict[str, Any], storage, deposit_cur, dry_run: bool): type_ = row["type"] # default date in case we can't find a better one discovery_date = row["date"] or row["committer_date"] metadata = row["metadata"] if metadata is None: return if type_ == "dsc": origin = None # it will be defined later, using debian_origins_from_row # TODO: the debian loader writes the changelog date as the revision's # author date and committer date. 
Instead, we should use the visit's date if "extrinsic" in metadata: extrinsic_files = metadata["extrinsic"]["raw"]["files"] for artifact_entry in metadata["original_artifact"]: extrinsic_file = extrinsic_files[artifact_entry["filename"]] for key in ("sha256",): assert artifact_entry["checksums"][key] == extrinsic_file[key] artifact_entry["url"] = extrinsic_file["uri"] del metadata["extrinsic"] elif type_ == "tar": provider = metadata.get("extrinsic", {}).get("provider") if provider is not None: # This is the format all the package loaders currently write, and # it is the easiest, thanks to the 'provider' and 'when' fields, # which have all the information we need to tell them apart easily # and generate accurate metadata discovery_date = iso8601.parse_date(metadata["extrinsic"]["when"]) # New versions of the loaders write the provider; use it. if provider.startswith("https://replicate.npmjs.com/"): # npm loader format 1 parsed_url = urlparse(provider) assert re.match("^/[^/]+/?$", parsed_url.path), parsed_url package_name = unquote(parsed_url.path.strip("/")) origin = "https://www.npmjs.com/package/" + package_name assert_origin_exists(storage, origin) load_metadata( storage, row["id"], row["directory"], discovery_date, metadata["extrinsic"]["raw"], NPM_FORMAT, authority=AUTHORITIES["npmjs"], origin=origin, dry_run=dry_run, ) del metadata["extrinsic"] elif provider.startswith("https://pypi.org/"): # pypi loader format 1 match = re.match( "https://pypi.org/pypi/(?P<project_name>.*)/json", provider ) assert match, f"unexpected provider URL format: {provider}" project_name = match.group("project_name") origin = f"https://pypi.org/project/{project_name}/" assert_origin_exists(storage, origin) load_metadata( storage, row["id"], row["directory"], discovery_date, metadata["extrinsic"]["raw"], PYPI_FORMAT, authority=AUTHORITIES["pypi"], origin=origin, dry_run=dry_run, ) del metadata["extrinsic"] elif provider.startswith("https://cran.r-project.org/"): # cran loader provider = metadata["extrinsic"]["provider"] if provider.startswith("https://cran.r-project.org/package="): origin = metadata["extrinsic"]["provider"] else: package_name = cran_package_from_url(provider) origin = f"https://cran.r-project.org/package={package_name}" assert origin is not None # Ideally we should assert the origin exists, but we can't: # https://forge.softwareheritage.org/T2536 if ( hashlib.sha1(origin.encode()).digest() not in _origins and storage.origin_get([origin])[0] is None ): return raw_extrinsic_metadata = metadata["extrinsic"]["raw"] # this is actually intrinsic, ignore it if "version" in raw_extrinsic_metadata: del raw_extrinsic_metadata["version"] # Copy the URL to the original_artifacts metadata assert len(metadata["original_artifact"]) == 1 if "url" in metadata["original_artifact"][0]: assert ( metadata["original_artifact"][0]["url"] == raw_extrinsic_metadata["url"] ), row else: metadata["original_artifact"][0]["url"] = raw_extrinsic_metadata[ "url" ] del raw_extrinsic_metadata["url"] assert ( raw_extrinsic_metadata == {} ), f"Unexpected metadata keys: {list(raw_extrinsic_metadata)}" del metadata["extrinsic"] elif ( provider.startswith("https://nix-community.github.io/nixpkgs-swh/") or provider == "https://guix.gnu.org/sources.json" ): # nixguix loader origin = provider assert_origin_exists(storage, origin) authority = MetadataAuthority( type=MetadataAuthorityType.FORGE, url=provider, metadata={}, ) assert row["date"] is None # the nixguix loader does not write dates load_metadata( storage, row["id"], row["directory"],
discovery_date, metadata["extrinsic"]["raw"], NIXGUIX_FORMAT, authority=authority, origin=origin, dry_run=dry_run, ) del metadata["extrinsic"] elif provider.startswith("https://ftp.gnu.org/"): # archive loader format 1 origin = provider assert_origin_exists(storage, origin) assert len(metadata["original_artifact"]) == 1 metadata["original_artifact"][0]["url"] = metadata["extrinsic"]["raw"][ "url" ] # Remove duplicate keys of original_artifacts for key in ("url", "time", "length", "version", "filename"): del metadata["extrinsic"]["raw"][key] assert metadata["extrinsic"]["raw"] == {} del metadata["extrinsic"] elif provider.startswith("https://deposit.softwareheritage.org/"): origin = metadata["extrinsic"]["raw"]["origin"]["url"] assert_origin_exists(storage, origin) if "@xmlns" in metadata: assert metadata["@xmlns"] == ATOM_NS assert metadata["@xmlns:codemeta"] in (CODEMETA_NS, [CODEMETA_NS]) assert "intrinsic" not in metadata assert "extra_headers" not in metadata # deposit loader format 1 # in this case, the metadata seems to be both directly in metadata # and in metadata["extrinsic"]["raw"]["metadata"] (origin, discovery_date) = handle_deposit_row( row, discovery_date, origin, storage, deposit_cur, dry_run ) remove_atom_codemeta_metadata_with_xmlns(metadata) if "client" in metadata: del metadata["client"] del metadata["extrinsic"] else: # deposit loader format 2 actual_metadata = metadata["extrinsic"]["raw"]["origin_metadata"][ "metadata" ] if isinstance(actual_metadata, str): # new format introduced in # https://forge.softwareheritage.org/D4105 actual_metadata = json.loads(actual_metadata) if "@xmlns" in actual_metadata: assert actual_metadata["@xmlns"] == ATOM_NS assert actual_metadata["@xmlns:codemeta"] in ( CODEMETA_NS, [CODEMETA_NS], ) elif "{http://www.w3.org/2005/Atom}id" in actual_metadata: assert ( "{https://doi.org/10.5063/SCHEMA/CODEMETA-2.0}author" in actual_metadata ) else: # new format introduced in # https://forge.softwareheritage.org/D4065 # it's the same as the first case, but with the @xmlns # declarations stripped # Most of them should have the "id", but some revisions, # like 4d3890004fade1f4ec3bf7004a4af0c490605128, are missing # this field assert ( "id" in actual_metadata or "title" in actual_metadata or "atom:title" in actual_metadata ) assert "codemeta:author" in actual_metadata (origin, discovery_date) = handle_deposit_row( row, discovery_date, origin, storage, deposit_cur, dry_run ) del metadata["extrinsic"] else: assert False, f"unknown provider {provider}" # Older versions don't write the provider; use heuristics instead. 
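# (Illustrative sketch of the heuristic below: an old npm loader row carries metadata["package_source"]["url"] such as "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz", a hypothetical example, from which npm_package_from_source_url() recovers the package name "left-pad".)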
elif ( metadata.get("package_source", {}) .get("url", "") .startswith("https://registry.npmjs.org/") ): # npm loader format 2 package_source_url = metadata["package_source"]["url"] package_name = npm_package_from_source_url(package_source_url) origin = "https://www.npmjs.com/package/" + package_name assert_origin_exists(storage, origin) load_metadata( storage, row["id"], row["directory"], discovery_date, metadata["package"], NPM_FORMAT, authority=AUTHORITIES["npmjs"], origin=origin, dry_run=dry_run, ) del metadata["package"] assert "original_artifact" not in metadata # rebuild an "original_artifact"-like metadata dict from what we # can salvage of "package_source" package_source_metadata = metadata["package_source"] keep_keys = {"blake2s256", "filename", "sha1", "sha256", "url"} discard_keys = { "date", # is equal to the revision date "name", # was loaded above "version", # same } assert ( set(package_source_metadata) == keep_keys | discard_keys ), package_source_metadata # will be loaded below metadata["original_artifact"] = [ { "filename": package_source_metadata["filename"], "checksums": { "sha1": package_source_metadata["sha1"], "sha256": package_source_metadata["sha256"], "blake2s256": package_source_metadata["blake2s256"], }, "url": package_source_metadata["url"], } ] del metadata["package_source"] elif "@xmlns" in metadata: assert metadata["@xmlns:codemeta"] in (CODEMETA_NS, [CODEMETA_NS]) assert "intrinsic" not in metadata assert "extra_headers" not in metadata # deposit loader format 3 if row["message"] == b"swh: Deposit 159 in collection swh": # There is no deposit 159 in the deposit DB, for some reason assert ( hash_to_hex(row["id"]) == "8e9cee14a6ad39bca4347077b87fb5bbd8953bb1" ) return elif row["message"] == b"hal: Deposit 342 in collection hal": # They have status 'failed' and no swhid return origin = None # TODO discovery_date = None # TODO (origin, discovery_date) = handle_deposit_row( row, discovery_date, origin, storage, deposit_cur, dry_run ) remove_atom_codemeta_metadata_with_xmlns(metadata) if "client" in metadata: del metadata["client"] # found in the deposit db if "committer" in metadata: del metadata["committer"] # found on the revision object elif "{http://www.w3.org/2005/Atom}id" in metadata: assert ( "{https://doi.org/10.5063/SCHEMA/CODEMETA-2.0}author" in metadata or "{http://www.w3.org/2005/Atom}author" in metadata ) assert "intrinsic" not in metadata assert "extra_headers" not in metadata # deposit loader format 4 origin = None discovery_date = None # TODO (origin, discovery_date) = handle_deposit_row( row, discovery_date, origin, storage, deposit_cur, dry_run ) remove_atom_codemeta_metadata_without_xmlns(metadata) elif hash_to_hex(row["id"]) == "a86747d201ab8f8657d145df4376676d5e47cf9f": # deposit 91 is missing "{http://www.w3.org/2005/Atom}id" for some # reason, and has an invalid origin return elif ( isinstance(metadata.get("original_artifact"), dict) and metadata["original_artifact"]["url"].startswith( "https://files.pythonhosted.org/" ) ) or ( isinstance(metadata.get("original_artifact"), list) and len(metadata.get("original_artifact")) == 1 and metadata["original_artifact"][0] .get("url", "") .startswith("https://files.pythonhosted.org/") ): if isinstance(metadata.get("original_artifact"), dict): metadata["original_artifact"] = [metadata["original_artifact"]] assert len(metadata["original_artifact"]) == 1 origin = pypi_origin_from_filename( storage, row["id"], metadata["original_artifact"][0]["filename"] ) if "project" in metadata: # pypi loader format
2 load_metadata( storage, row["id"], row["directory"], discovery_date, metadata["project"], PYPI_FORMAT, authority=AUTHORITIES["pypi"], origin=origin, dry_run=dry_run, ) del metadata["project"] else: assert set(metadata) == {"original_artifact"}, set(metadata) # pypi loader format 3 pass # nothing to do, there's no metadata elif row["message"] == b"synthetic revision message": assert isinstance(metadata["original_artifact"], list), metadata assert not any("url" in d for d in metadata["original_artifact"]) # archive loader format 2 origin = None elif deposit_revision_message_re.match(row["message"]): # deposit without metadata in the revision assert set(metadata) == {"original_artifact"}, metadata origin = None # TODO discovery_date = None (origin, discovery_date) = handle_deposit_row( row, discovery_date, origin, storage, deposit_cur, dry_run ) else: assert False, f"Unable to detect type of metadata for row: {row}" # Ignore common intrinsic metadata keys for key in ("intrinsic", "extra_headers"): if key in metadata: del metadata[key] # Ignore loader-specific intrinsic metadata keys if type_ == "hg": del metadata["node"] elif type_ == "dsc": if "package_info" in metadata: del metadata["package_info"] if "original_artifact" in metadata: for original_artifact in metadata["original_artifact"]: # Rename keys to the expected format of original-artifacts-json. rename_keys = [ ("name", "filename"), # e.g. from old Debian loader ("size", "length"), # e.g. from old PyPI loader ] for (old_name, new_name) in rename_keys: if old_name in original_artifact: assert new_name not in original_artifact original_artifact[new_name] = original_artifact.pop(old_name) # Move the checksums to their own subdict, which is the expected format # of original-artifacts-json. if "sha1" in original_artifact: assert "checksums" not in original_artifact original_artifact["checksums"] = {} for key in ("sha1", "sha256", "sha1_git", "blake2s256"): if key in original_artifact: original_artifact["checksums"][key] = original_artifact.pop(key) if "date" in original_artifact: # The information comes from the package repository rather than SWH, # so it shouldn't be in the 'original-artifacts' metadata # (which has SWH as authority). # Moreover, it's not very useful information, so let's just drop it.
del original_artifact["date"] allowed_keys = { "checksums", "filename", "length", "url", "archive_type", } assert set(original_artifact) <= allowed_keys, set(original_artifact) if type_ == "dsc": assert origin is None origins = debian_origins_from_row(row, storage) if not origins: print(f"Missing Debian origin for revision: {hash_to_hex(row['id'])}") else: origins = [origin] for origin in origins: load_metadata( storage, row["id"], row["directory"], discovery_date, metadata["original_artifact"], ORIGINAL_ARTIFACT_FORMAT, authority=AUTHORITIES["swh"], origin=origin, dry_run=dry_run, ) del metadata["original_artifact"] assert metadata == {}, ( f"remaining metadata keys for {row['id'].hex()} (type: {row['type']}): " f"{metadata}" ) def create_fetchers(db): with db.cursor() as cur: cur.execute( """ INSERT INTO metadata_fetcher (name, version, metadata) VALUES (%s, %s, %s) ON CONFLICT DO NOTHING """, (FETCHER.name, FETCHER.version, FETCHER.metadata), ) def iter_revision_rows(storage_dbconn: str, first_id: Sha1Git): after_id = first_id failures = 0 while True: try: storage_db = BaseDb.connect(storage_dbconn) with storage_db.cursor() as cur: while True: cur.execute( f"SELECT {', '.join(REVISION_COLS)} FROM revision " f"WHERE id > %s AND metadata IS NOT NULL AND type != 'git' " f"ORDER BY id LIMIT 1000", (after_id,), ) new_rows = 0 for row in cur: new_rows += 1 row_d = dict(zip(REVISION_COLS, row)) yield row_d after_id = row_d["id"] if new_rows == 0: return except psycopg2.OperationalError as e: print(e) # most likely a temporary error, try again if failures >= 60: raise else: time.sleep(60) failures += 1 def main(storage_dbconn, storage_url, deposit_dbconn, first_id, dry_run): storage_db = BaseDb.connect(storage_dbconn) deposit_db = BaseDb.connect(deposit_dbconn) storage = get_storage( "pipeline", steps=[ {"cls": "retry"}, { - "cls": "local", + "cls": "postgresql", "db": storage_dbconn, "objstorage": {"cls": "memory", "args": {}}, }, ], ) if not dry_run: create_fetchers(storage_db) # Not creating authorities, as the loaders are presumably already running # and created them already. # This also helps make sure this script doesn't accidentally create # authorities that differ from what the loaders use.
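# Stream the revision rows in batches of 1000 (resuming after transient connection errors, see iter_revision_rows above), migrate each one, and print a rough progress estimate every 1000 rows based on the position of the last id in the 160-bit hash space.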
total_rows = 0 with deposit_db.cursor() as deposit_cur: for row in iter_revision_rows(storage_dbconn, first_id): handle_row(row, storage, deposit_cur, dry_run) total_rows += 1 if total_rows % 1000 == 0: percents = ( int.from_bytes(row["id"][0:4], byteorder="big") * 100 / (1 << 32) ) print( f"Processed {total_rows/1000000.:.2f}M rows " f"(~{percents:.1f}%, last revision: {row['id'].hex()})" ) if __name__ == "__main__": if len(sys.argv) == 4: (_, storage_dbconn, storage_url, deposit_dbconn) = sys.argv first_id = "00" * 20 elif len(sys.argv) == 5: (_, storage_dbconn, storage_url, deposit_dbconn, first_id) = sys.argv else: print( f"Syntax: {sys.argv[0]} <storage_dbconn> <storage_url> " f"<deposit_dbconn> [<first_id>]" ) exit(1) if os.path.isfile("./origins.txt"): # You can generate this file with: # psql service=swh-replica \ # -c "\copy (select digest(url, 'sha1') from origin) to stdout" \ # | pv -l > origins.txt print("Loading origins...") with open("./origins.txt") as fd: for line in fd: digest = line.strip()[3:] _origins.add(bytes.fromhex(digest)) print("Done loading origins.") main(storage_dbconn, storage_url, deposit_dbconn, bytes.fromhex(first_id), True) diff --git a/swh/storage/pytest_plugin.py b/swh/storage/pytest_plugin.py index 205271b0..604a3d90 100644 --- a/swh/storage/pytest_plugin.py +++ b/swh/storage/pytest_plugin.py @@ -1,54 +1,54 @@ -# Copyright (C) 2019-2020 The Software Heritage developers +# Copyright (C) 2019-2021 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from os import environ, path import pytest from swh.core.db.pytest_plugin import postgresql_fact import swh.storage from swh.storage import get_storage from swh.storage.tests.storage_data import StorageData SQL_DIR = path.join(path.dirname(swh.storage.__file__), "sql") environ["LC_ALL"] = "C.UTF-8" swh_storage_postgresql = postgresql_fact( "postgresql_proc", dbname="storage", dump_files=path.join(SQL_DIR, "*.sql") ) @pytest.fixture def swh_storage_backend_config(swh_storage_postgresql): """Basic pg storage configuration with no journal collaborator (to avoid pulling an optional dependency onto clients of this fixture) """ yield { - "cls": "local", + "cls": "postgresql", "db": swh_storage_postgresql.dsn, "objstorage": {"cls": "memory"}, "check_config": {"check_write": True}, } @pytest.fixture def swh_storage(swh_storage_backend_config): return get_storage(**swh_storage_backend_config) @pytest.fixture def sample_data() -> StorageData: """Pre-defined sample storage object data to manipulate Returns: StorageData whose attributes are data model objects. Either multiple objects: contents, directories, revisions, releases, ... or simple ones: content, directory, revision, release, ...
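For instance, tests below use "sample_data.content" to get a single Content object, and "sample_data.contents[:2]" to get several of them.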
""" return StorageData() diff --git a/swh/storage/tests/storage_tests.py b/swh/storage/tests/storage_tests.py index ba113594..367be5ec 100644 --- a/swh/storage/tests/storage_tests.py +++ b/swh/storage/tests/storage_tests.py @@ -1,4404 +1,4404 @@ # Copyright (C) 2015-2021 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from collections import defaultdict import datetime from datetime import timedelta import inspect import itertools import math import random from typing import Any, ClassVar, Dict, Iterator, Optional from unittest.mock import MagicMock import attr from hypothesis import HealthCheck, given, settings, strategies import pytest from swh.core.api.classes import stream_results from swh.model import from_disk from swh.model.hashutil import DEFAULT_ALGORITHMS, hash_to_bytes from swh.model.hypothesis_strategies import objects from swh.model.identifiers import CoreSWHID, ObjectType from swh.model.model import ( Content, Directory, ExtID, Origin, OriginVisit, OriginVisitStatus, Person, RawExtrinsicMetadata, Revision, SkippedContent, Snapshot, SnapshotBranch, TargetType, ) from swh.storage import get_storage from swh.storage.common import origin_url_to_sha1 as sha1 from swh.storage.exc import HashCollision, StorageArgumentException from swh.storage.interface import ListOrder, PagedResult, StorageInterface from swh.storage.tests.conftest import function_scoped_fixture_check from swh.storage.utils import ( content_hex_hashes, now, remove_keys, round_to_milliseconds, ) def transform_entries( storage: StorageInterface, dir_: Directory, *, prefix: bytes = b"" ) -> Iterator[Dict[str, Any]]: """Iterate through a directory's entries, and yields the items 'directory_ls' is expected to return; including content metadata for file entries.""" for ent in dir_.entries: if ent.type == "dir": yield { "dir_id": dir_.id, "type": ent.type, "target": ent.target, "name": prefix + ent.name, "perms": ent.perms, "status": None, "sha1": None, "sha1_git": None, "sha256": None, "length": None, } elif ent.type == "file": contents = storage.content_find({"sha1_git": ent.target}) assert contents ent_dict = contents[0].to_dict() for key in ["ctime", "blake2s256"]: ent_dict.pop(key, None) ent_dict.update( { "dir_id": dir_.id, "type": ent.type, "target": ent.target, "name": prefix + ent.name, "perms": ent.perms, } ) yield ent_dict def assert_contents_ok( expected_contents, actual_contents, keys_to_check={"sha1", "data"} ): """Assert that a given list of contents matches on a given set of keys. """ for k in keys_to_check: expected_list = set([c.get(k) for c in expected_contents]) actual_list = set([c.get(k) for c in actual_contents]) assert actual_list == expected_list, k class LazyContent(Content): def with_data(self): return Content.from_dict({**self.to_dict(), "data": b"42\n"}) class TestStorage: """Main class for Storage testing. This class is used as-is to test local storage (see TestLocalStorage below) and remote storage (see TestRemoteStorage in test_remote_storage.py. We need to have the two classes inherit from this base class separately to avoid nosetests running the tests from the base class twice. 
""" maxDiff = None # type: ClassVar[Optional[int]] def test_types(self, swh_storage_backend_config): """Checks all methods of StorageInterface are implemented by this backend, and that they have the same signature.""" # Create an instance of the protocol (which cannot be instantiated # directly, so this creates a subclass, then instantiates it) interface = type("_", (StorageInterface,), {})() storage = get_storage(**swh_storage_backend_config) assert "content_add" in dir(interface) missing_methods = [] for meth_name in dir(interface): if meth_name.startswith("_"): continue interface_meth = getattr(interface, meth_name) try: concrete_meth = getattr(storage, meth_name) except AttributeError: if not getattr(interface_meth, "deprecated_endpoint", False): # The backend is missing a (non-deprecated) endpoint missing_methods.append(meth_name) continue expected_signature = inspect.signature(interface_meth) actual_signature = inspect.signature(concrete_meth) assert expected_signature == actual_signature, meth_name assert missing_methods == [] # If all the assertions above succeed, then this one should too. # But there's no harm in double-checking. # And we could replace the assertions above by this one, but unlike # the assertions above, it doesn't explain what is missing. assert isinstance(storage, StorageInterface) def test_check_config(self, swh_storage): assert swh_storage.check_config(check_write=True) assert swh_storage.check_config(check_write=False) def test_content_add(self, swh_storage, sample_data): cont = sample_data.content insertion_start_time = now() actual_result = swh_storage.content_add([cont]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert swh_storage.content_get_data(cont.sha1) == cont.data expected_cont = attr.evolve(cont, data=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert obj == expected_cont swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 def test_content_add_from_lazy_content(self, swh_storage, sample_data): cont = sample_data.content lazy_content = LazyContent.from_dict(cont.to_dict()) insertion_start_time = now() actual_result = swh_storage.content_add([lazy_content]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } # the fact that we retrieve the content object from the storage with # the correct 'data' field ensures it has been 'called' assert swh_storage.content_get_data(cont.sha1) == cont.data expected_cont = attr.evolve(lazy_content, data=None, ctime=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert attr.evolve(obj, ctime=None).to_dict() == expected_cont.to_dict() swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 def test_content_get_data_missing(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] swh_storage.content_add([cont]) # Query a single missing content actual_content_data = swh_storage.content_get_data(cont2.sha1) assert actual_content_data is None # Check content_get does not abort after finding a missing content actual_content_data = 
swh_storage.content_get_data(cont.sha1) assert actual_content_data == cont.data actual_content_data = swh_storage.content_get_data(cont2.sha1) assert actual_content_data is None def test_content_add_different_input(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 2, "content:add:bytes": cont.length + cont2.length, } def test_content_add_twice(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] actual_result = swh_storage.content_add([cont]) assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert len(swh_storage.journal_writer.journal.objects) == 1 actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 1, "content:add:bytes": cont2.length, } assert 2 <= len(swh_storage.journal_writer.journal.objects) <= 3 assert len(swh_storage.content_find(cont.to_dict())) == 1 assert len(swh_storage.content_find(cont2.to_dict())) == 1 def test_content_add_collision(self, swh_storage, sample_data): cont1 = sample_data.content # create (corrupted) content with same sha1{,_git} but != sha256 sha256_array = bytearray(cont1.sha256) sha256_array[0] += 1 cont1b = attr.evolve(cont1, sha256=bytes(sha256_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] def test_content_add_duplicate(self, swh_storage, sample_data): cont = sample_data.content swh_storage.content_add([cont, cont]) assert swh_storage.content_get_data(cont.sha1) == cont.data def test_content_update(self, swh_storage, sample_data): cont1 = sample_data.content if hasattr(swh_storage, "journal_writer"): swh_storage.journal_writer.journal = None # TODO, not supported swh_storage.content_add([cont1]) # alter the sha1_git for example cont1b = attr.evolve( cont1, sha1_git=hash_to_bytes("3a60a5275d0333bf13468e8b3dcab90f4046e654") ) swh_storage.content_update([cont1b.to_dict()], keys=["sha1_git"]) actual_contents = swh_storage.content_get([cont1.sha1]) expected_content = attr.evolve(cont1b, data=None) assert actual_contents == [expected_content] def test_content_add_metadata(self, swh_storage, sample_data): cont = attr.evolve(sample_data.content, data=None, ctime=now()) actual_result = swh_storage.content_add_metadata([cont]) assert actual_result == { "content:add": 1, } expected_cont = cont assert swh_storage.content_get([cont.sha1]) == [expected_cont] contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: obj = attr.evolve(obj, ctime=None) assert obj == cont def test_content_add_metadata_different_input(self, swh_storage, sample_data): contents = sample_data.contents[:2] cont = attr.evolve(contents[0], data=None, ctime=now()) cont2 = attr.evolve(contents[1], data=None, ctime=now()) actual_result = swh_storage.content_add_metadata([cont, cont2]) assert actual_result == { "content:add": 2, } def test_content_add_metadata_collision(self, swh_storage, sample_data): cont1 = attr.evolve(sample_data.content, data=None, ctime=now()) # 
create (corrupted) content with same sha1{,_git} but != sha256 sha256_array = bytearray(cont1.sha256) sha256_array[0] += 1 cont1b = attr.evolve(cont1, sha256=bytes(sha256_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add_metadata([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git", "blake2s256"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] def test_content_add_objstorage_first(self, swh_storage, sample_data): """Tests the objstorage is written to before the DB and journal""" cont = sample_data.content swh_storage.objstorage.content_add = MagicMock(side_effect=Exception("Oops")) # Try to add, but the objstorage crashes try: swh_storage.content_add([cont]) except Exception: pass # The DB must be written to after the objstorage, so the DB should be # unchanged if the objstorage crashed assert swh_storage.content_get_data(cont.sha1) is None # The journal too assert list(swh_storage.journal_writer.journal.objects) == [] def test_skipped_content_add(self, swh_storage, sample_data): contents = sample_data.skipped_contents[:2] cont = contents[0] cont2 = attr.evolve(contents[1], blake2s256=None) contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [cont.hashes(), cont2.hashes()] actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] def test_skipped_content_add_missing_hashes(self, swh_storage, sample_data): cont, cont2 = [ attr.evolve(c, sha1_git=None) for c in sample_data.skipped_contents[:2] ] contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] def test_skipped_content_missing_partial_hash(self, swh_storage, sample_data): cont = sample_data.skipped_content cont2 = attr.evolve(cont, sha1_git=None) contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 actual_result = swh_storage.skipped_content_add([cont]) assert actual_result.pop("skipped_content:add") == 1 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [cont2.hashes()] @pytest.mark.property_based @settings( deadline=None, # this test is very slow suppress_health_check=function_scoped_fixture_check, ) @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) def test_content_missing(self, swh_storage, sample_data, algos): algos |= {"sha1"} content, missing_content = [sample_data.content2, sample_data.skipped_content] swh_storage.content_add([content]) test_contents = [content.to_dict()] missing_per_hash = defaultdict(list) for i in range(256): test_content = missing_content.to_dict() for hash in algos:
test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, key_hash=hash) ) == set(missing_per_hash[hash]) @pytest.mark.property_based @settings(suppress_health_check=function_scoped_fixture_check,) @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) def test_content_missing_unknown_algo(self, swh_storage, sample_data, algos): algos |= {"sha1"} content, missing_content = [sample_data.content2, sample_data.skipped_content] swh_storage.content_add([content]) test_contents = [content.to_dict()] missing_per_hash = defaultdict(list) for i in range(16): test_content = missing_content.to_dict() for hash in algos: test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_content["nonexisting_algo"] = b"\x00" test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, key_hash=hash) ) == set(missing_per_hash[hash]) def test_content_missing_per_sha1(self, swh_storage, sample_data): # given cont = sample_data.content cont2 = sample_data.content2 missing_cont = sample_data.skipped_content missing_cont2 = sample_data.skipped_content2 swh_storage.content_add([cont, cont2]) # when gen = swh_storage.content_missing_per_sha1( [cont.sha1, missing_cont.sha1, cont2.sha1, missing_cont2.sha1] ) # then assert list(gen) == [missing_cont.sha1, missing_cont2.sha1] def test_content_missing_per_sha1_git(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] missing_cont = sample_data.skipped_content missing_cont2 = sample_data.skipped_content2 swh_storage.content_add([cont, cont2]) contents = [ cont.sha1_git, cont2.sha1_git, missing_cont.sha1_git, missing_cont2.sha1_git, ] missing_contents = swh_storage.content_missing_per_sha1_git(contents) assert list(missing_contents) == [missing_cont.sha1_git, missing_cont2.sha1_git] missing_contents = swh_storage.content_missing_per_sha1_git([]) assert list(missing_contents) == [] def test_content_get_partition(self, swh_storage, swh_contents): """content_get_partition paginates results if limit exceeded""" expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] actual_contents = [] for i in range(16): actual_result = swh_storage.content_get_partition(i, 16) assert actual_result.next_page_token is None actual_contents.extend(actual_result.results) assert len(actual_contents) == len(expected_contents) for content in actual_contents: assert content in expected_contents assert content.ctime is None def test_content_get_partition_full(self, swh_storage, swh_contents): """content_get_partition for a single partition returns all available contents """ expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] actual_result = swh_storage.content_get_partition(0, 1) assert actual_result.next_page_token is None actual_contents = actual_result.results assert len(actual_contents) == len(expected_contents) for content in actual_contents: assert content in expected_contents def test_content_get_partition_empty(self, swh_storage, swh_contents): """content_get_partition when at least one of the partitions is 
empty""" expected_contents = { cont.sha1 for cont in swh_contents if cont.status != "absent" } # nb_partitions = smallest power of 2 such that at least one of # the partitions is empty nb_partitions = 1 << math.floor(math.log2(len(swh_contents)) + 1) seen_sha1s = [] for i in range(nb_partitions): actual_result = swh_storage.content_get_partition( i, nb_partitions, limit=len(swh_contents) + 1 ) for content in actual_result.results: seen_sha1s.append(content.sha1) # Limit is higher than the max number of results assert actual_result.next_page_token is None assert set(seen_sha1s) == expected_contents def test_content_get_partition_limit_none(self, swh_storage): """content_get_partition call with wrong limit input should fail""" with pytest.raises(StorageArgumentException, match="limit should not be None"): swh_storage.content_get_partition(1, 16, limit=None) def test_content_get_partition_pagination_generate(self, swh_storage, swh_contents): """content_get_partition returns contents within range provided""" expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] # retrieve contents actual_contents = [] for i in range(4): page_token = None while True: actual_result = swh_storage.content_get_partition( i, 4, limit=3, page_token=page_token ) actual_contents.extend(actual_result.results) page_token = actual_result.next_page_token if page_token is None: break assert len(actual_contents) == len(expected_contents) for content in actual_contents: assert content in expected_contents - @pytest.mark.parametrize("algo", DEFAULT_ALGORITHMS) + @pytest.mark.parametrize("algo", sorted(DEFAULT_ALGORITHMS)) def test_content_get(self, swh_storage, sample_data, algo): cont1, cont2 = sample_data.contents[:2] swh_storage.content_add([cont1, cont2]) actual_contents = swh_storage.content_get( [getattr(cont1, algo), getattr(cont2, algo)], algo ) # we only retrieve the metadata so no data nor ctime within expected_contents = [attr.evolve(c, data=None) for c in [cont1, cont2]] assert actual_contents == expected_contents for content in actual_contents: assert content.ctime is None - @pytest.mark.parametrize("algo", DEFAULT_ALGORITHMS) + @pytest.mark.parametrize("algo", sorted(DEFAULT_ALGORITHMS)) def test_content_get_missing(self, swh_storage, sample_data, algo): cont1, cont2 = sample_data.contents[:2] assert cont1.sha1 != cont2.sha1 missing_cont = sample_data.skipped_content swh_storage.content_add([cont1, cont2]) actual_contents = swh_storage.content_get( [getattr(cont1, algo), getattr(cont2, algo), getattr(missing_cont, algo)], algo, ) expected_contents = [ attr.evolve(c, data=None) if c else None for c in [cont1, cont2, None] ] assert actual_contents == expected_contents def test_content_get_random(self, swh_storage, sample_data): cont, cont2, cont3 = sample_data.contents[:3] swh_storage.content_add([cont, cont2, cont3]) assert swh_storage.content_get_random() in { cont.sha1_git, cont2.sha1_git, cont3.sha1_git, } def test_directory_add(self, swh_storage, sample_data): content = sample_data.content directory = sample_data.directories[1] assert directory.entries[0].target == content.sha1_git swh_storage.content_add([content]) init_missing = list(swh_storage.directory_missing([directory.id])) assert [directory.id] == init_missing actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert ("directory", directory) in list( swh_storage.journal_writer.journal.objects ) actual_data = list(swh_storage.directory_ls(directory.id)) 
expected_data = list(transform_entries(swh_storage, directory)) for data in actual_data: assert data in expected_data after_missing = list(swh_storage.directory_missing([directory.id])) assert after_missing == [] swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["directory"] == 1 def test_directory_add_twice(self, swh_storage, sample_data): directory = sample_data.directories[1] actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", directory) ] actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 0} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", directory) ] def test_directory_ls_recursive(self, swh_storage, sample_data): # create consistent dataset regarding the directories we want to list content, content2 = sample_data.contents[:2] swh_storage.content_add([content, content2]) dir1, dir2, dir3 = sample_data.directories[:3] dir_ids = [d.id for d in [dir1, dir2, dir3]] init_missing = list(swh_storage.directory_missing(dir_ids)) assert init_missing == dir_ids actual_result = swh_storage.directory_add([dir1, dir2, dir3]) assert actual_result == {"directory:add": 3} # List directory containing one file actual_data = list(swh_storage.directory_ls(dir1.id, recursive=True)) expected_data = list(transform_entries(swh_storage, dir1)) for data in actual_data: assert data in expected_data # List directory containing a file and an unknown subdirectory actual_data = list(swh_storage.directory_ls(dir2.id, recursive=True)) expected_data = list(transform_entries(swh_storage, dir2)) for data in actual_data: assert data in expected_data # List directory containing both a known and unknown subdirectory, entries # should be both those of the directory and of the known subdir (up to contents) actual_data = list(swh_storage.directory_ls(dir3.id, recursive=True)) expected_data = list( itertools.chain( transform_entries(swh_storage, dir3), transform_entries(swh_storage, dir2, prefix=b"subdir/"), ) ) for data in actual_data: assert data in expected_data def test_directory_ls_non_recursive(self, swh_storage, sample_data): # create consistent dataset regarding the directories we want to list content, content2 = sample_data.contents[:2] swh_storage.content_add([content, content2]) dir1, dir2, dir3, _, dir5 = sample_data.directories[:5] dir_ids = [d.id for d in [dir1, dir2, dir3, dir5]] init_missing = list(swh_storage.directory_missing(dir_ids)) assert init_missing == dir_ids actual_result = swh_storage.directory_add([dir1, dir2, dir3, dir5]) assert actual_result == {"directory:add": 4} # List directory containing a file and an unknown subdirectory actual_data = list(swh_storage.directory_ls(dir1.id)) expected_data = list(transform_entries(swh_storage, dir1)) for data in actual_data: assert data in expected_data # List directory containing a single file actual_data = list(swh_storage.directory_ls(dir2.id)) expected_data = list(transform_entries(swh_storage, dir2)) for data in actual_data: assert data in expected_data # List directory containing a known subdirectory, entries should # only be those of the parent directory, not of the subdir actual_data = list(swh_storage.directory_ls(dir3.id)) expected_data = list(transform_entries(swh_storage, dir3)) for data in actual_data: assert data in expected_data def test_directory_ls_missing_content(self, swh_storage, sample_data): 
swh_storage.directory_add([sample_data.directory2]) assert list(swh_storage.directory_ls(sample_data.directory2.id)) == [ { "dir_id": sample_data.directory2.id, "length": None, "name": b"oof", "perms": 33188, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "target": sample_data.directory2.entries[0].target, "type": "file", }, ] def test_directory_ls_skipped_content(self, swh_storage, sample_data): swh_storage.directory_add([sample_data.directory2]) cont = SkippedContent( sha1_git=sample_data.directory2.entries[0].target, sha1=b"c" * 20, sha256=None, blake2s256=None, length=42, status="absent", reason="You need a premium subscription to access this content", ) swh_storage.skipped_content_add([cont]) assert list(swh_storage.directory_ls(sample_data.directory2.id)) == [ { "dir_id": sample_data.directory2.id, "length": 42, "name": b"oof", "perms": 33188, "sha1": b"c" * 20, "sha1_git": sample_data.directory2.entries[0].target, "sha256": None, "status": "absent", "target": sample_data.directory2.entries[0].target, "type": "file", }, ] def test_directory_entry_get_by_path(self, swh_storage, sample_data): cont, content2 = sample_data.contents[:2] dir1, dir2, dir3, dir4, dir5 = sample_data.directories[:5] # given dir_ids = [d.id for d in [dir1, dir2, dir3, dir4, dir5]] init_missing = list(swh_storage.directory_missing(dir_ids)) assert init_missing == dir_ids actual_result = swh_storage.directory_add([dir3, dir4]) assert actual_result == {"directory:add": 2} expected_entries = [ { "dir_id": dir3.id, "name": b"foo", "type": "file", "target": cont.sha1_git, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.content, "length": None, }, { "dir_id": dir3.id, "name": b"subdir", "type": "dir", "target": dir2.id, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.directory, "length": None, }, { "dir_id": dir3.id, "name": b"hello", "type": "file", "target": content2.sha1_git, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.content, "length": None, }, ] # when (all must be found here) for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( dir3.id, [entry.name] ) assert actual_entry == expected_entry # same, but deeper for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( dir4.id, [b"subdir1", entry.name] ) expected_entry = expected_entry.copy() expected_entry["name"] = b"subdir1/" + expected_entry["name"] assert actual_entry == expected_entry # when (nothing should be found here since `dir` is not persisted.) for entry in dir2.entries: actual_entry = swh_storage.directory_entry_get_by_path( dir2.id, [entry.name] ) assert actual_entry is None def test_directory_get_entries_pagination(self, swh_storage, sample_data): # Note: this test assumes entries are returned in lexicographic order, # which is not actually guaranteed by the interface. 
dir_ = sample_data.directory3 entries = sorted(dir_.entries, key=lambda entry: entry.name) swh_storage.directory_add(sample_data.directories) # No pagination needed actual_data = swh_storage.directory_get_entries(dir_.id) assert actual_data == PagedResult(results=entries, next_page_token=None) # A little pagination actual_data = swh_storage.directory_get_entries(dir_.id, limit=2) assert actual_data.results == entries[0:2] assert actual_data.next_page_token is not None actual_data = swh_storage.directory_get_entries( dir_.id, page_token=actual_data.next_page_token ) assert actual_data == PagedResult(results=entries[2:], next_page_token=None) @pytest.mark.parametrize("limit", [1, 2, 3, 4, 5]) def test_directory_get_entries(self, swh_storage, sample_data, limit): dir_ = sample_data.directory3 swh_storage.directory_add(sample_data.directories) actual_data = list( stream_results(swh_storage.directory_get_entries, dir_.id, limit=limit,) ) assert sorted(actual_data) == sorted(dir_.entries) def test_directory_get_random(self, swh_storage, sample_data): dir1, dir2, dir3 = sample_data.directories[:3] swh_storage.directory_add([dir1, dir2, dir3]) assert swh_storage.directory_get_random() in { dir1.id, dir2.id, dir3.id, } def test_revision_add(self, swh_storage, sample_data): revision = sample_data.revision init_missing = swh_storage.revision_missing([revision.id]) assert list(init_missing) == [revision.id] actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 1} end_missing = swh_storage.revision_missing([revision.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", revision) ] # already there so nothing added actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["revision"] == 1 def test_revision_add_twice(self, swh_storage, sample_data): revision, revision2 = sample_data.revisions[:2] actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", revision) ] actual_result = swh_storage.revision_add([revision, revision2]) assert actual_result == {"revision:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", revision), ("revision", revision2), ] def test_revision_add_name_clash(self, swh_storage, sample_data): revision, revision2 = sample_data.revisions[:2] revision1 = attr.evolve( revision, author=Person( fullname=b"John Doe ", name=b"John Doe", email=b"john.doe@example.com", ), ) revision2 = attr.evolve( revision2, author=Person( fullname=b"John Doe ", name=b"John Doe ", email=b"john.doe@example.com ", ), ) actual_result = swh_storage.revision_add([revision1, revision2]) assert actual_result == {"revision:add": 2} def test_revision_get_order(self, swh_storage, sample_data): revision, revision2 = sample_data.revisions[:2] add_result = swh_storage.revision_add([revision, revision2]) assert add_result == {"revision:add": 2} # order 1 actual_revisions = swh_storage.revision_get([revision.id, revision2.id]) assert actual_revisions == [revision, revision2] # order 2 actual_revisions2 = swh_storage.revision_get([revision2.id, revision.id]) assert actual_revisions2 == [revision2, revision] def test_revision_log(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # rev4 -is-child-of-> rev3 -> rev1, (rev2 -> rev1) 
swh_storage.revision_add([revision1, revision2, revision3, revision4]) # when results = list(swh_storage.revision_log([revision4.id])) # for comparison purposes actual_results = [Revision.from_dict(r) for r in results] assert len(actual_results) == 4 # rev4 -child-> rev3 -> rev1, (rev2 -> rev1) assert actual_results == [revision4, revision3, revision1, revision2] def test_revision_log_with_limit(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # revision4 -is-child-of-> revision3 swh_storage.revision_add([revision3, revision4]) results = list(swh_storage.revision_log([revision4.id], 1)) actual_results = [Revision.from_dict(r) for r in results] assert len(actual_results) == 1 assert actual_results[0] == revision4 def test_revision_log_unknown_revision(self, swh_storage, sample_data): revision = sample_data.revision rev_log = list(swh_storage.revision_log([revision.id])) assert rev_log == [] def test_revision_shortlog(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # rev4 -is-child-of-> rev3 -> (rev1, rev2); rev2 -> rev1 swh_storage.revision_add([revision1, revision2, revision3, revision4]) results = list(swh_storage.revision_shortlog([revision4.id])) actual_results = [[id, tuple(parents)] for (id, parents) in results] assert len(actual_results) == 4 assert actual_results == [ [revision4.id, revision4.parents], [revision3.id, revision3.parents], [revision1.id, revision1.parents], [revision2.id, revision2.parents], ] def test_revision_shortlog_with_limit(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # revision4 -is-child-of-> revision3 swh_storage.revision_add([revision1, revision2, revision3, revision4]) results = list(swh_storage.revision_shortlog([revision4.id], 1)) actual_results = [[id, tuple(parents)] for (id, parents) in results] assert len(actual_results) == 1 assert list(actual_results[0]) == [revision4.id, revision4.parents] def test_revision_get(self, swh_storage, sample_data): revision, revision2 = sample_data.revisions[:2] swh_storage.revision_add([revision]) actual_revisions = swh_storage.revision_get([revision.id, revision2.id]) assert len(actual_revisions) == 2 assert actual_revisions == [revision, None] def test_revision_get_no_parents(self, swh_storage, sample_data): revision = sample_data.revision swh_storage.revision_add([revision]) actual_revision = swh_storage.revision_get([revision.id])[0] assert revision.parents == () assert actual_revision.parents == () # no parents on this one def test_revision_get_random(self, swh_storage, sample_data): revision1, revision2, revision3 = sample_data.revisions[:3] swh_storage.revision_add([revision1, revision2, revision3]) assert swh_storage.revision_get_random() in { revision1.id, revision2.id, revision3.id, } def test_extid_add_git(self, swh_storage, sample_data): gitids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] extids = [ ExtID( extid=gitid, extid_type="git", target=CoreSWHID(object_id=gitid, object_type=ObjectType.REVISION,), ) for gitid in gitids ] assert swh_storage.extid_get_from_extid("git", gitids) == [] assert swh_storage.extid_get_from_target(ObjectType.REVISION, gitids) == [] summary = swh_storage.extid_add(extids) assert summary == {"extid:add": len(gitids)} assert swh_storage.extid_get_from_extid("git", gitids) == extids assert swh_storage.extid_get_from_target(ObjectType.REVISION, gitids) == extids assert 
swh_storage.extid_get_from_extid("hg", gitids) == [] assert swh_storage.extid_get_from_target(ObjectType.RELEASE, gitids) == [] # check ExtIDs have been added to the journal extids_in_journal = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "extid" ] assert extids == extids_in_journal def test_extid_add_hg(self, swh_storage, sample_data): def get_node(revision): node = None if revision.extra_headers: node = dict(revision.extra_headers).get(b"node") if node is None and revision.metadata: node = hash_to_bytes(revision.metadata.get("node")) return node swhids = [ revision.id for revision in sample_data.revisions if revision.type.value == "hg" ] extids = [ get_node(revision) for revision in sample_data.revisions if revision.type.value == "hg" ] assert swh_storage.extid_get_from_extid("hg", extids) == [] assert swh_storage.extid_get_from_target(ObjectType.REVISION, swhids) == [] extid_objs = [ ExtID( extid=hgid, extid_type="hg", target=CoreSWHID(object_id=swhid, object_type=ObjectType.REVISION,), ) for hgid, swhid in zip(extids, swhids) ] summary = swh_storage.extid_add(extid_objs) assert summary == {"extid:add": len(swhids)} assert swh_storage.extid_get_from_extid("hg", extids) == extid_objs assert ( swh_storage.extid_get_from_target(ObjectType.REVISION, swhids) == extid_objs ) assert swh_storage.extid_get_from_extid("git", extids) == [] assert swh_storage.extid_get_from_target(ObjectType.RELEASE, swhids) == [] # check ExtIDs have been added to the journal extids_in_journal = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "extid" ] assert extid_objs == extids_in_journal def test_extid_add_twice(self, swh_storage, sample_data): gitids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] extids = [ ExtID( extid=gitid, extid_type="git", target=CoreSWHID(object_id=gitid, object_type=ObjectType.REVISION,), ) for gitid in gitids ] summary = swh_storage.extid_add(extids) assert summary == {"extid:add": len(gitids)} # add them again, should be noop summary = swh_storage.extid_add(extids) # assert summary == {"extid:add": 0} assert swh_storage.extid_get_from_extid("git", gitids) == extids assert swh_storage.extid_get_from_target(ObjectType.REVISION, gitids) == extids def test_extid_add_extid_multicity(self, swh_storage, sample_data): ids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] extids = [ ExtID( extid=extid, extid_type="git", target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] swh_storage.extid_add(extids) # try to add "modified-extid" versions, should be added extids2 = [ ExtID( extid=extid, extid_type="hg", target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] swh_storage.extid_add(extids2) assert swh_storage.extid_get_from_extid("git", ids) == extids assert swh_storage.extid_get_from_extid("hg", ids) == extids2 assert set(swh_storage.extid_get_from_target(ObjectType.REVISION, ids)) == { *extids, *extids2, } def test_extid_add_target_multicity(self, swh_storage, sample_data): ids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] extids = [ ExtID( extid=extid, extid_type="git", target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] swh_storage.extid_add(extids) # try to add "modified" versions, should be added extids2 = [ ExtID( extid=extid, extid_type="git", target=CoreSWHID(object_id=extid, 
object_type=ObjectType.RELEASE,), ) for extid in ids ] swh_storage.extid_add(extids2) assert set(swh_storage.extid_get_from_extid("git", ids)) == {*extids, *extids2} assert swh_storage.extid_get_from_target(ObjectType.REVISION, ids) == extids assert swh_storage.extid_get_from_target(ObjectType.RELEASE, ids) == extids2 def test_release_add(self, swh_storage, sample_data): release, release2 = sample_data.releases[:2] init_missing = swh_storage.release_missing([release.id, release2.id]) assert list(init_missing) == [release.id, release2.id] actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 2} end_missing = swh_storage.release_missing([release.id, release2.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("release", release), ("release", release2), ] # already present so nothing added actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["release"] == 2 def test_release_add_no_author_date(self, swh_storage, sample_data): full_release = sample_data.release release = attr.evolve(full_release, author=None, date=None) actual_result = swh_storage.release_add([release]) assert actual_result == {"release:add": 1} end_missing = swh_storage.release_missing([release.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("release", release) ] def test_release_add_twice(self, swh_storage, sample_data): release, release2 = sample_data.releases[:2] actual_result = swh_storage.release_add([release]) assert actual_result == {"release:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("release", release) ] actual_result = swh_storage.release_add([release, release2, release, release2]) assert actual_result == {"release:add": 1} assert set(swh_storage.journal_writer.journal.objects) == set( [("release", release), ("release", release2),] ) def test_release_add_name_clash(self, swh_storage, sample_data): release, release2 = [ attr.evolve( c, author=Person( fullname=b"John Doe ", name=b"John Doe", email=b"john.doe@example.com", ), ) for c in sample_data.releases[:2] ] actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 2} def test_release_get(self, swh_storage, sample_data): release, release2, release3 = sample_data.releases[:3] # given swh_storage.release_add([release, release2]) # when actual_releases = swh_storage.release_get([release.id, release2.id]) # then assert actual_releases == [release, release2] unknown_releases = swh_storage.release_get([release3.id]) assert unknown_releases[0] is None def test_release_get_order(self, swh_storage, sample_data): release, release2 = sample_data.releases[:2] add_result = swh_storage.release_add([release, release2]) assert add_result == {"release:add": 2} # order 1 actual_releases = swh_storage.release_get([release.id, release2.id]) assert actual_releases == [release, release2] # order 2 actual_releases2 = swh_storage.release_get([release2.id, release.id]) assert actual_releases2 == [release2, release] def test_release_get_random(self, swh_storage, sample_data): release, release2, release3 = sample_data.releases[:3] swh_storage.release_add([release, release2, release3]) assert swh_storage.release_get_random() in { release.id, release2.id, release3.id, } def test_origin_add(self, swh_storage, sample_data): origins = list(sample_data.origins[:2]) 
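        # Origins are keyed by their URL: origin_get returns None per URL
        # until the corresponding origin has been added.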
origin_urls = [o.url for o in origins] assert swh_storage.origin_get(origin_urls) == [None, None] stats = swh_storage.origin_add(origins) assert stats == {"origin:add": 2} actual_origins = swh_storage.origin_get(origin_urls) assert actual_origins == origins assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origins[0]), ("origin", origins[1]),] ) swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["origin"] == 2 def test_origin_add_twice(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] add1 = swh_storage.origin_add([origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add1 == {"origin:add": 2} add2 = swh_storage.origin_add([origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add2 == {"origin:add": 0} def test_origin_add_twice_at_once(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] add1 = swh_storage.origin_add([origin, origin2, origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add1 == {"origin:add": 2} add2 = swh_storage.origin_add([origin, origin2, origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add2 == {"origin:add": 0} def test_origin_get(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] assert swh_storage.origin_get([origin.url]) == [None] swh_storage.origin_add([origin]) actual_origins = swh_storage.origin_get([origin.url]) assert actual_origins == [origin] actual_origins = swh_storage.origin_get([origin.url, "not://exists"]) assert actual_origins == [origin, None] def _generate_random_visits(self, nb_visits=100, start=0, end=7): """Generate random visits within the last 2 months (to avoid computations) """ visits = [] today = now() for weeks in range(nb_visits, 0, -1): hours = random.randint(0, 24) minutes = random.randint(0, 60) seconds = random.randint(0, 60) days = random.randint(0, 28) weeks = random.randint(start, end) date_visit = today - timedelta( weeks=weeks, hours=hours, minutes=minutes, seconds=seconds, days=days ) visits.append(date_visit) return visits def test_origin_visit_get__unknown_origin(self, swh_storage): actual_page = swh_storage.origin_visit_get("foo") assert actual_page.next_page_token is None assert actual_page.results == [] assert actual_page == PagedResult() def test_origin_visit_get__validation_failure(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) with pytest.raises( StorageArgumentException, match="page_token must be a string" ): swh_storage.origin_visit_get(origin.url, page_token=10) # not bytes with pytest.raises( StorageArgumentException, match="order must be a ListOrder value" ): swh_storage.origin_visit_get(origin.url, order="foobar") # wrong order def test_origin_visit_get_all(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) ov1, ov2, ov3 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), ] ) # order asc, no token, no limit actual_page = 
swh_storage.origin_visit_get(origin.url) assert actual_page.next_page_token is None assert actual_page.results == [ov1, ov2, ov3] # order asc, no token, limit actual_page = swh_storage.origin_visit_get(origin.url, limit=2) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov1, ov2] # order asc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ov3] # order asc, no token, limit actual_page = swh_storage.origin_visit_get(origin.url, limit=1) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov1] # order asc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ov2, ov3] # order asc, token, limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, limit=2 ) assert actual_page.next_page_token is None assert actual_page.results == [ov2, ov3] actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov2] actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, limit=1 ) assert actual_page.next_page_token is None assert actual_page.results == [ov3] # order desc, no token, no limit actual_page = swh_storage.origin_visit_get(origin.url, order=ListOrder.DESC) assert actual_page.next_page_token is None assert actual_page.results == [ov3, ov2, ov1] # order desc, no token, limit actual_page = swh_storage.origin_visit_get( origin.url, limit=2, order=ListOrder.DESC ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov3, ov2] # order desc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ov1] # order desc, no token, limit actual_page = swh_storage.origin_visit_get( origin.url, limit=1, order=ListOrder.DESC ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov3] # order desc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ov2, ov1] # order desc, token, limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov2] actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ov1] def test_origin_visit_status_get__unknown_cases(self, swh_storage, sample_data): origin = sample_data.origin actual_page = swh_storage.origin_visit_status_get("foobar", 1) assert actual_page.next_page_token is None assert actual_page.results == [] actual_page = swh_storage.origin_visit_status_get(origin.url, 1) assert actual_page.next_page_token is None assert actual_page.results == [] origin = sample_data.origin swh_storage.origin_add([origin]) ov1 = 
swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), ] )[0] actual_page = swh_storage.origin_visit_status_get(origin.url, ov1.visit + 10) assert actual_page.next_page_token is None assert actual_page.results == [] def test_origin_visit_status_add_unknown_type(self, swh_storage, sample_data): ov = OriginVisit( origin=sample_data.origin.url, date=now(), type=sample_data.type_visit1, visit=42, ) ovs = OriginVisitStatus( origin=ov.origin, visit=ov.visit, date=now(), status="created", snapshot=None, ) with pytest.raises(StorageArgumentException): swh_storage.origin_visit_status_add([ovs]) swh_storage.origin_add([sample_data.origin]) with pytest.raises(StorageArgumentException): swh_storage.origin_visit_status_add([ovs]) swh_storage.origin_visit_add([ov]) swh_storage.origin_visit_status_add([ovs]) def test_origin_visit_status_get_all(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) date_visit3 = round_to_milliseconds(now()) date_visit1 = date_visit3 - datetime.timedelta(hours=2) date_visit2 = date_visit3 - datetime.timedelta(hours=1) assert date_visit1 < date_visit2 < date_visit3 ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=date_visit1, type=sample_data.type_visit1, ), ] )[0] ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit1, type=ov1.type, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit2, type=ov1.type, status="partial", snapshot=None, ) ovs3 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit3, type=ov1.type, status="full", snapshot=sample_data.snapshot.id, metadata={}, ) swh_storage.origin_visit_status_add([ovs2, ovs3]) # order asc, no token, no limit actual_page = swh_storage.origin_visit_status_get(origin.url, ov1.visit) assert actual_page.next_page_token is None assert actual_page.results == [ovs1, ovs2, ovs3] # order asc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, limit=2 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs1, ovs2] # order asc, token, no limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ovs3] # order asc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs1] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ovs2, ovs3] # order asc, token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, limit=2 ) assert actual_page.next_page_token is None assert actual_page.results == [ovs2, ovs3] # order asc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, limit=2 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs1, ovs2] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, limit=1 ) assert actual_page.next_page_token is None assert actual_page.results == [ovs3] # order desc, no token, no 
limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs3, ovs2, ovs1] # order desc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, limit=2, order=ListOrder.DESC ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs3, ovs2] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs1] # order desc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, order=ListOrder.DESC, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs3] # order desc, token, no limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs2, ovs1] # order desc, token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC, limit=1, ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs2] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs1] def test_origin_visit_status_get_random(self, swh_storage, sample_data): origins = sample_data.origins[:2] swh_storage.origin_add(origins) # Add some random visits within the selection range visits = self._generate_random_visits() visit_type = "git" # Add visits to those origins for origin in origins: for date_visit in visits: visit = swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) swh_storage.refresh_stat_counters() stats = swh_storage.stat_counters() assert stats["origin"] == len(origins) assert stats["origin_visit"] == len(origins) * len(visits) random_ovs = swh_storage.origin_visit_status_get_random(visit_type) assert random_ovs assert random_ovs.origin is not None assert random_ovs.origin in [o.url for o in origins] assert random_ovs.type is not None def test_origin_visit_status_get_random_nothing_found( self, swh_storage, sample_data ): origins = sample_data.origins swh_storage.origin_add(origins) visit_type = "hg" # Add some visits outside of the random generation selection so nothing # will be found by the random selection visits = self._generate_random_visits(nb_visits=3, start=13, end=24) for origin in origins: for date_visit in visits: visit = swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) random_origin_visit = swh_storage.origin_visit_status_get_random(visit_type) assert random_origin_visit is None def test_origin_get_by_sha1(self, swh_storage, sample_data): origin = sample_data.origin assert swh_storage.origin_get([origin.url])[0] is None swh_storage.origin_add([origin]) 
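        # origin_get_by_sha1 looks origins up by the SHA1 of their URL; the
        # sha1() helper used here presumably hashes the UTF-8 encoded URL.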
origins = list(swh_storage.origin_get_by_sha1([sha1(origin.url)])) assert len(origins) == 1 assert origins[0]["url"] == origin.url def test_origin_get_by_sha1_not_found(self, swh_storage, sample_data): unknown_origin = sample_data.origin assert swh_storage.origin_get([unknown_origin.url])[0] is None origins = list(swh_storage.origin_get_by_sha1([sha1(unknown_origin.url)])) assert len(origins) == 1 assert origins[0] is None def test_origin_search_single_result(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] actual_page = swh_storage.origin_search(origin.url) assert actual_page.next_page_token is None assert actual_page.results == [] actual_page = swh_storage.origin_search(origin.url, regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [] swh_storage.origin_add([origin]) actual_page = swh_storage.origin_search(origin.url) assert actual_page.next_page_token is None assert actual_page.results == [origin] actual_page = swh_storage.origin_search(f".{origin.url[1:-1]}.", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin] swh_storage.origin_add([origin2]) actual_page = swh_storage.origin_search(origin2.url) assert actual_page.next_page_token is None assert actual_page.results == [origin2] actual_page = swh_storage.origin_search(f".{origin2.url[1:-1]}.", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_no_regexp(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) # no pagination actual_page = swh_storage.origin_search("/") assert actual_page.next_page_token is None assert actual_page.results == [origin, origin2] # offset=0 actual_page = swh_storage.origin_search("/", page_token=None, limit=1) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [origin] # offset=1 actual_page = swh_storage.origin_search( "/", page_token=next_page_token, limit=1 ) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_regexp_substring(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) # no pagination actual_page = swh_storage.origin_search("/", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin, origin2] # offset=0 actual_page = swh_storage.origin_search( "/", page_token=None, limit=1, regexp=True ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [origin] # offset=1 actual_page = swh_storage.origin_search( "/", page_token=next_page_token, limit=1, regexp=True ) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_regexp_fullstring(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) # no pagination actual_page = swh_storage.origin_search(".*/.*", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin, origin2] # offset=0 actual_page = swh_storage.origin_search( ".*/.*", page_token=None, limit=1, regexp=True ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [origin] # offset=1 actual_page = swh_storage.origin_search( ".*/.*", page_token=next_page_token, limit=1, regexp=True ) assert 
actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_no_visit_types(self, swh_storage, sample_data): origin = sample_data.origins[0] swh_storage.origin_add([origin]) actual_page = swh_storage.origin_search(origin.url, visit_types=["git"]) assert actual_page.next_page_token is None assert actual_page.results == [] def test_origin_search_with_visit_types(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) swh_storage.origin_visit_add( [ OriginVisit(origin=origin.url, date=now(), type="git"), OriginVisit(origin=origin2.url, date=now(), type="svn"), ] ) actual_page = swh_storage.origin_search(origin.url, visit_types=["git"]) assert actual_page.next_page_token is None assert actual_page.results == [origin] actual_page = swh_storage.origin_search(origin2.url, visit_types=["svn"]) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_multiple_visit_types(self, swh_storage, sample_data): origin = sample_data.origins[0] swh_storage.origin_add([origin]) def _add_visit_type(visit_type): swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=now(), type=visit_type)] ) def _check_visit_types(visit_types): actual_page = swh_storage.origin_search(origin.url, visit_types=visit_types) assert actual_page.next_page_token is None assert actual_page.results == [origin] _add_visit_type("git") _check_visit_types(["git"]) _check_visit_types(["git", "hg"]) _add_visit_type("hg") _check_visit_types(["hg"]) _check_visit_types(["git", "hg"]) def test_origin_visit_add(self, swh_storage, sample_data): origin1 = sample_data.origins[1] swh_storage.origin_add([origin1]) date_visit = now() date_visit2 = date_visit + datetime.timedelta(minutes=1) date_visit = round_to_milliseconds(date_visit) date_visit2 = round_to_milliseconds(date_visit2) visit1 = OriginVisit( origin=origin1.url, date=date_visit, type=sample_data.type_visit1, ) visit2 = OriginVisit( origin=origin1.url, date=date_visit2, type=sample_data.type_visit2, ) # add once ov1, ov2 = swh_storage.origin_visit_add([visit1, visit2]) # then again (will be ignored as they already exist) origin_visit1, origin_visit2 = swh_storage.origin_visit_add([ov1, ov2]) assert ov1 == origin_visit1 assert ov2 == origin_visit2 ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit, type=ov1.type, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=date_visit2, type=ov2.type, status="created", snapshot=None, ) actual_visits = swh_storage.origin_visit_get(origin1.url).results expected_visits = [ov1, ov2] assert len(expected_visits) == len(actual_visits) for visit in expected_visits: assert visit in actual_visits actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = list( [("origin", origin1)] + [("origin_visit", visit) for visit in expected_visits] * 2 + [("origin_visit_status", ovs) for ovs in [ovs1, ovs2]] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_add_validation(self, swh_storage, sample_data): """Unknown origin when adding visits should raise""" visit = attr.evolve(sample_data.origin_visit, origin="something-unknonw") with pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_add([visit]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects def test_origin_visit_status_add_validation(self, swh_storage): """Wrong 
origin_visit_status input should raise storage argument error""" date_visit = now() visit_status1 = OriginVisitStatus( origin="unknown-origin-url", visit=10, date=date_visit, status="full", snapshot=None, ) with pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_status_add([visit_status1]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects def test_origin_visit_status_add(self, swh_storage, sample_data): """Correct origin visit statuses should add a new visit status """ snapshot = sample_data.snapshot origin1 = sample_data.origins[1] origin2 = Origin(url="new-origin") swh_storage.origin_add([origin1, origin2]) ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), OriginVisit( origin=origin2.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), ] ) ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit1, type=ov1.type, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=sample_data.date_visit2, type=ov2.type, status="created", snapshot=None, ) date_visit_now = round_to_milliseconds(now()) visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, type=ov1.type, status="full", snapshot=snapshot.id, ) date_visit_now = round_to_milliseconds(now()) visit_status2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=date_visit_now, type=ov2.type, status="ongoing", snapshot=None, metadata={"intrinsic": "something"}, ) stats = swh_storage.origin_visit_status_add([visit_status1, visit_status2]) assert stats == {"origin_visit_status:add": 2} visit = swh_storage.origin_visit_get_latest(origin1.url, require_snapshot=True) visit_status = swh_storage.origin_visit_status_get_latest( origin1.url, visit.visit, require_snapshot=True ) assert visit_status == visit_status1 visit = swh_storage.origin_visit_get_latest(origin2.url, require_snapshot=False) visit_status = swh_storage.origin_visit_status_get_latest( origin2.url, visit.visit, require_snapshot=False ) assert origin2.url != origin1.url assert visit_status == visit_status2 actual_objects = list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1, origin2] expected_visits = [ov1, ov2] expected_visit_statuses = [ovs1, ovs2, visit_status1, visit_status2] expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_status_add_twice(self, swh_storage, sample_data): """Correct origin visit statuses should add a new visit status """ snapshot = sample_data.snapshot origin1 = sample_data.origins[1] swh_storage.origin_add([origin1]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), ] )[0] ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit1, type=ov1.type, status="created", snapshot=None, ) date_visit_now = round_to_milliseconds(now()) visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, type=ov1.type, status="full", snapshot=snapshot.id, ) stats = swh_storage.origin_visit_status_add([visit_status1]) assert stats == {"origin_visit_status:add": 1} # second call will ignore existing entries (will send to 
storage though) stats = swh_storage.origin_visit_status_add([visit_status1]) # ...so the storage still returns it as an addition assert stats == {"origin_visit_status:add": 1} visit_status = swh_storage.origin_visit_status_get_latest(ov1.origin, ov1.visit) assert visit_status == visit_status1 actual_objects = list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1] expected_visits = [ov1] expected_visit_statuses = [ovs1, visit_status1, visit_status1] # write twice in the journal expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_find_by_date(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit1, ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit3, type=sample_data.type_visit2, ) visit3 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit3, ) ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) ovs1 = OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=sample_data.date_visit2, status="ongoing", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin.url, visit=ov2.visit, date=sample_data.date_visit3, status="ongoing", snapshot=None, ) ovs3 = OriginVisitStatus( origin=origin.url, visit=ov3.visit, date=sample_data.date_visit2, status="ongoing", snapshot=None, ) swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3]) # Simple case actual_visit = swh_storage.origin_visit_find_by_date( origin.url, sample_data.date_visit3 ) assert actual_visit == ov2 # There are two visits at the same date, the latest must be returned actual_visit = swh_storage.origin_visit_find_by_date( origin.url, sample_data.date_visit2 ) assert actual_visit == ov3 def test_origin_visit_find_by_date__unknown_origin(self, swh_storage, sample_data): actual_visit = swh_storage.origin_visit_find_by_date( "foo", sample_data.date_visit2 ) assert actual_visit is None def test_origin_visit_get_by(self, swh_storage, sample_data): snapshot = sample_data.snapshot origins = sample_data.origins[:2] swh_storage.origin_add(origins) origin_url, origin_url2 = [o.url for o in origins] visit = OriginVisit( origin=origin_url, date=sample_data.date_visit2, type=sample_data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=snapshot.id, ) ] ) # Add some other {origin, visit} entries visit2 = OriginVisit( origin=origin_url, date=sample_data.date_visit3, type=sample_data.type_visit3, ) visit3 = OriginVisit( origin=origin_url2, date=sample_data.date_visit3, type=sample_data.type_visit3, ) swh_storage.origin_visit_add([visit2, visit3]) # when visit1_metadata = { "contents": 42, "directories": 22, } swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="full", snapshot=snapshot.id, metadata=visit1_metadata, ) ] ) actual_visit = swh_storage.origin_visit_get_by(origin_url, origin_visit1.visit) assert actual_visit == origin_visit1 def test_origin_visit_get_by__no_result(self, swh_storage, sample_data): actual_visit = 
swh_storage.origin_visit_get_by("unknown", 10) # unknown origin assert actual_visit is None origin = sample_data.origin swh_storage.origin_add([origin]) actual_visit = swh_storage.origin_visit_get_by(origin.url, 999) # unknown visit assert actual_visit is None def test_origin_visit_get_latest_edge_cases(self, swh_storage, sample_data): # unknown origin so no result assert swh_storage.origin_visit_get_latest("unknown-origin") is None # unknown type so no result origin = sample_data.origin swh_storage.origin_add([origin]) assert swh_storage.origin_visit_get_latest(origin.url, type="unknown") is None # unknown allowed statuses should raise with pytest.raises(StorageArgumentException, match="Unknown allowed statuses"): swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["unknown"] ) def test_origin_visit_get_latest_filter_type(self, swh_storage, sample_data): """Filtering origin visit get latest with filter type should be ok """ origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type="hg", ) date_now = round_to_milliseconds(now()) visit3 = OriginVisit(origin=origin.url, date=date_now, type="hg",) assert sample_data.date_visit1 < sample_data.date_visit2 assert sample_data.date_visit2 < date_now ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) # Check type filter is ok actual_visit = swh_storage.origin_visit_get_latest(origin.url, type="git") assert actual_visit == ov1 actual_visit = swh_storage.origin_visit_get_latest(origin.url, type="hg") assert actual_visit == ov3 actual_visit_unknown_type = swh_storage.origin_visit_get_latest( origin.url, type="npm", # no visit matching that type ) assert actual_visit_unknown_type is None def test_origin_visit_get_latest(self, swh_storage, sample_data): empty_snapshot, complete_snapshot = sample_data.snapshots[1:3] origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type="hg", ) date_now = round_to_milliseconds(now()) visit3 = OriginVisit(origin=origin.url, date=date_now, type="hg",) assert visit1.date < visit2.date assert visit2.date < visit3.date ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) # no filters, latest visit is the last one (whose date is most recent) actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # 3 visits, none has snapshot so nothing is returned actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) assert actual_visit is None # visit are created with "created" status, so nothing will get returned actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["partial"] ) assert actual_visit is None # visit are created with "created" status, so most recent again actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["created"] ) assert actual_visit == ov3 # Add snapshot to visit1; require_snapshot=True makes it return first visit swh_storage.snapshot_add([complete_snapshot]) visit_status_with_snapshot = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=round_to_milliseconds(now()), type=ov1.type, status="ongoing", snapshot=complete_snapshot.id, ) swh_storage.origin_visit_status_add([visit_status_with_snapshot]) # only the 
first visit has a snapshot now actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) assert actual_visit == ov1 # only the first visit has a status ongoing now actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["ongoing"] ) assert actual_visit == ov1 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov1.visit, require_snapshot=True ) assert actual_visit_status == visit_status_with_snapshot # ... and require_snapshot=False (defaults) still returns latest visit (3rd) actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=False ) assert actual_visit == ov3 # no specific filter, this returns as before the latest visit actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # Status filter: all three visits are status=ongoing, so no visit # returned actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"] ) assert actual_visit is None visit_status1_full = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=round_to_milliseconds(now()), type=ov1.type, status="full", snapshot=complete_snapshot.id, ) # Mark the first visit as completed and check status filter again swh_storage.origin_visit_status_add([visit_status1_full]) # only the first visit has the full status actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"] ) assert actual_visit == ov1 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov1.visit, allowed_statuses=["full"] ) assert actual_visit_status == visit_status1_full # no specific filter, this returns as before the latest visit actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # Add snapshot to visit2 and check that the new snapshot is returned swh_storage.snapshot_add([empty_snapshot]) visit_status2_full = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=round_to_milliseconds(now()), type=ov2.type, status="ongoing", snapshot=empty_snapshot.id, ) swh_storage.origin_visit_status_add([visit_status2_full]) actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) # 2nd visit is most recent with a snapshot assert actual_visit == ov2 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov2.visit, require_snapshot=True ) assert actual_visit_status == visit_status2_full # no specific filter, this returns as before the latest visit, 3rd one actual_origin = swh_storage.origin_visit_get_latest(origin.url) assert actual_origin == ov3 # full status is still the first visit actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"] ) assert actual_visit == ov1 # Add snapshot to visit3 (same date as visit2) visit_status3_with_snapshot = OriginVisitStatus( origin=ov3.origin, visit=ov3.visit, date=round_to_milliseconds(now()), type=ov3.type, status="ongoing", snapshot=complete_snapshot.id, ) swh_storage.origin_visit_status_add([visit_status3_with_snapshot]) # full status is still the first visit actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"], require_snapshot=True, ) assert actual_visit == ov1 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, visit=actual_visit.visit, allowed_statuses=["full"], require_snapshot=True, ) assert actual_visit_status == visit_status1_full # most recent is still the 3rd visit actual_visit = 
swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # 3rd visit has a snapshot now, so it's elected actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) assert actual_visit == ov3 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov3.visit, require_snapshot=True ) assert actual_visit_status == visit_status3_with_snapshot def test_origin_visit_get_latest__same_date(self, swh_storage, sample_data): empty_snapshot, complete_snapshot = sample_data.snapshots[1:3] origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="hg", ) ov1, ov2 = swh_storage.origin_visit_add([visit1, visit2]) # ties should be broken by using the visit id actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov2 def test_origin_visit_get_latest__not_last(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) visit1, visit2 = sample_data.origin_visits[:2] assert visit1.origin == origin.url swh_storage.origin_visit_add([visit1]) ov1 = swh_storage.origin_visit_get_latest(origin.url) # Add snapshot to visit1, latest snapshot = visit 1 snapshot complete_snapshot = sample_data.snapshots[2] swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=visit2.date, status="partial", snapshot=None, ) ] ) assert visit1.date < visit2.date # no snapshot associated to the visit, so None visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["partial"], require_snapshot=True, ) assert visit is None date_now = now() assert visit2.date < date_now swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=date_now, status="full", snapshot=complete_snapshot.id, ) ] ) swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=now(), type=visit1.type,)] ) visit = swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True) assert visit is not None def test_origin_visit_status_get_latest__validation(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) # unknown allowed statuses should raise with pytest.raises(StorageArgumentException, match="Unknown allowed statuses"): swh_storage.origin_visit_status_get_latest( origin.url, visit1.visit, allowed_statuses=["unknown"] ) def test_origin_visit_status_get_latest(self, swh_storage, sample_data): snapshot = sample_data.snapshots[2] origin1 = sample_data.origin swh_storage.origin_add([origin1]) # to have some reference visits ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), OriginVisit( origin=origin1.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), ] ) swh_storage.snapshot_add([snapshot]) date_now = round_to_milliseconds(now()) assert sample_data.date_visit1 < sample_data.date_visit2 assert sample_data.date_visit2 < date_now ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit1, type=ov1.type, status="partial", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit2, type=ov1.type, status="ongoing", 
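            # ovs2: a more recent status for the same visit ov1, still
            # without a snapshot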
snapshot=None,
        )
        ovs3 = OriginVisitStatus(
            origin=ov2.origin,
            visit=ov2.visit,
            date=sample_data.date_visit2
            + datetime.timedelta(minutes=1),  # so it is not ignored
            type=ov2.type,
            status="ongoing",
            snapshot=None,
        )
        ovs4 = OriginVisitStatus(
            origin=ov2.origin,
            visit=ov2.visit,
            date=date_now,
            type=ov2.type,
            status="full",
            snapshot=snapshot.id,
            metadata={"something": "wicked"},
        )

        swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3, ovs4])

        # unknown origin so no result
        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            "unknown-origin", ov1.visit
        )
        assert actual_origin_visit is None

        # unknown visit so no result
        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            ov1.origin, ov1.visit + 10
        )
        assert actual_origin_visit is None

        # Two statuses for visit ov1, both without snapshot: take the most recent
        actual_origin_visit2 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov1.visit
        )
        assert isinstance(actual_origin_visit2, OriginVisitStatus)
        assert actual_origin_visit2 == ovs2
        assert ovs2.origin == origin1.url
        assert ovs2.visit == ov1.visit

        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov1.visit, require_snapshot=True
        )
        # there is no status with a snapshot yet for that visit
        assert actual_origin_visit is None

        actual_origin_visit2 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov1.visit, allowed_statuses=["partial", "ongoing"]
        )
        # the most recent status matching those filters is ovs2, whose status
        # is "ongoing"
        assert actual_origin_visit2 == ovs2
        assert actual_origin_visit2.status == "ongoing"

        actual_origin_visit4 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov2.visit, require_snapshot=True
        )
        assert actual_origin_visit4 == ovs4
        assert actual_origin_visit4.snapshot == snapshot.id

        actual_origin_visit = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov2.visit, require_snapshot=True, allowed_statuses=["ongoing"]
        )
        # nothing matches both filters at once, so no result
        assert actual_origin_visit is None

        # ovs4 has status "full", so filtering ov2 on "ongoing" elects ovs3
        actual_origin_visit3 = swh_storage.origin_visit_status_get_latest(
            origin1.url, ov2.visit, allowed_statuses=["ongoing"]
        )
        assert actual_origin_visit3 == ovs3

    def test_person_fullname_unicity(self, swh_storage, sample_data):
        revision, rev2 = sample_data.revisions[0:2]
        # create a revision with the same committer fullname but without name
        # and email
        revision2 = attr.evolve(
            rev2,
            committer=Person(
                fullname=revision.committer.fullname, name=None, email=None
            ),
        )

        swh_storage.revision_add([revision, revision2])

        # when getting the added revisions
        revisions = swh_storage.revision_get([revision.id, revision2.id])

        # then check the committers are the same
        assert revisions[0].committer == revisions[1].committer

    def test_snapshot_add_get_empty(self, swh_storage, sample_data):
        empty_snapshot = sample_data.snapshots[1]
        empty_snapshot_dict = empty_snapshot.to_dict()

        origin = sample_data.origin
        swh_storage.origin_add([origin])
        ov1 = swh_storage.origin_visit_add(
            [
                OriginVisit(
                    origin=origin.url,
                    date=sample_data.date_visit1,
                    type=sample_data.type_visit1,
                )
            ]
        )[0]

        actual_result = swh_storage.snapshot_add([empty_snapshot])
        assert actual_result == {"snapshot:add": 1}

        date_now = now()

        swh_storage.origin_visit_status_add(
            [
                OriginVisitStatus(
                    origin=ov1.origin,
                    visit=ov1.visit,
                    date=date_now,
                    type=ov1.type,
                    status="full",
                    snapshot=empty_snapshot.id,
                )
            ]
        )

        by_id = swh_storage.snapshot_get(empty_snapshot.id)
        assert by_id == {**empty_snapshot_dict, "next_branch": None}
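        # The journal is now expected to contain the visit's auto-recorded
        # "created" status (ovs1) followed by the explicit "full" status added
        # above (ovs2), alongside the origin, visit and snapshot objects.
        ovs1 = OriginVisitStatus.from_dict(
            {
                "origin": ov1.origin,
                "date": sample_data.date_visit1,
                "type": ov1.type,
                "visit":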
ov1.visit, "status": "created", "snapshot": None, "metadata": None, } ) ovs2 = OriginVisitStatus.from_dict( { "origin": ov1.origin, "date": date_now, "type": ov1.type, "visit": ov1.visit, "status": "full", "metadata": None, "snapshot": empty_snapshot.id, } ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("origin", origin), ("origin_visit", ov1), ("origin_visit_status", ovs1,), ("snapshot", empty_snapshot), ("origin_visit_status", ovs2,), ] for obj in expected_objects: assert obj in actual_objects def test_snapshot_add_get_complete(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] complete_snapshot_dict = complete_snapshot.to_dict() origin = sample_data.origin swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] actual_result = swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=complete_snapshot.id, ) ] ) assert actual_result == {"snapshot:add": 1} by_id = swh_storage.snapshot_get(complete_snapshot.id) assert by_id == {**complete_snapshot_dict, "next_branch": None} def test_snapshot_add_many(self, swh_storage, sample_data): snapshot, _, complete_snapshot = sample_data.snapshots[:3] actual_result = swh_storage.snapshot_add([snapshot, complete_snapshot]) assert actual_result == {"snapshot:add": 2} assert swh_storage.snapshot_get(complete_snapshot.id) == { **complete_snapshot.to_dict(), "next_branch": None, } assert swh_storage.snapshot_get(snapshot.id) == { **snapshot.to_dict(), "next_branch": None, } swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["snapshot"] == 2 def test_snapshot_add_many_incremental(self, swh_storage, sample_data): snapshot, _, complete_snapshot = sample_data.snapshots[:3] actual_result = swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} actual_result2 = swh_storage.snapshot_add([snapshot, complete_snapshot]) assert actual_result2 == {"snapshot:add": 1} assert swh_storage.snapshot_get(complete_snapshot.id) == { **complete_snapshot.to_dict(), "next_branch": None, } assert swh_storage.snapshot_get(snapshot.id) == { **snapshot.to_dict(), "next_branch": None, } def test_snapshot_add_twice(self, swh_storage, sample_data): snapshot, empty_snapshot = sample_data.snapshots[:2] actual_result = swh_storage.snapshot_add([empty_snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("snapshot", empty_snapshot) ] actual_result = swh_storage.snapshot_add([snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("snapshot", empty_snapshot), ("snapshot", snapshot), ] def test_snapshot_add_count_branches(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] actual_result = swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} snp_size = swh_storage.snapshot_count_branches(complete_snapshot.id) expected_snp_size = { "alias": 1, "content": 1, "directory": 2, "release": 1, "revision": 1, "snapshot": 1, None: 1, } assert snp_size == expected_snp_size def test_snapshot_add_count_branches_with_filtering(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] actual_result = 
swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} snp_size = swh_storage.snapshot_count_branches( complete_snapshot.id, branch_name_exclude_prefix=b"release" ) expected_snp_size = { "alias": 1, "content": 1, "directory": 2, "revision": 1, "snapshot": 1, None: 1, } assert snp_size == expected_snp_size def test_snapshot_add_count_branches_with_filtering_edge_cases( self, swh_storage, sample_data ): snapshot = Snapshot( branches={ b"\xaa\xff": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xaa\xff\x00": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xff\xff": SnapshotBranch( target=sample_data.release.id, target_type=TargetType.RELEASE, ), b"\xff\xff\x00": SnapshotBranch( target=sample_data.release.id, target_type=TargetType.RELEASE, ), b"dangling": None, }, ) swh_storage.snapshot_add([snapshot]) assert swh_storage.snapshot_count_branches( snapshot.id, branch_name_exclude_prefix=b"\xaa\xff" ) == {None: 1, "release": 2} assert swh_storage.snapshot_count_branches( snapshot.id, branch_name_exclude_prefix=b"\xff\xff" ) == {None: 1, "revision": 2} def test_snapshot_add_get_paginated(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] swh_storage.snapshot_add([complete_snapshot]) snp_id = complete_snapshot.id branches = complete_snapshot.branches branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches(snp_id, branches_from=b"release") rel_idx = branch_names.index(b"release") expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in branch_names[rel_idx:]}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches(snp_id, branches_count=1) expected_snapshot = { "id": snp_id, "branches": {branch_names[0]: branches[branch_names[0]],}, "next_branch": b"content", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, branches_from=b"directory", branches_count=3 ) dir_idx = branch_names.index(b"directory") expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in branch_names[dir_idx : dir_idx + 3] }, "next_branch": branch_names[dir_idx + 3], } assert snapshot == expected_snapshot def test_snapshot_add_get_filtered(self, swh_storage, sample_data): origin = sample_data.origin complete_snapshot = sample_data.snapshots[2] swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=complete_snapshot.id, ) ] ) snp_id = complete_snapshot.id branches = complete_snapshot.branches snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["release", "revision"] ) expected_snapshot = { "id": snp_id, "branches": { name: tgt for name, tgt in branches.items() if tgt and tgt.target_type in [TargetType.RELEASE, TargetType.REVISION] }, "next_branch": None, } assert snapshot == expected_snapshot snapshot = swh_storage.snapshot_get_branches(snp_id, target_types=["alias"]) expected_snapshot = { "id": snp_id, "branches": { name: tgt for name, tgt in branches.items() if tgt and tgt.target_type == TargetType.ALIAS }, 
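            # note: the `if tgt` guard above also drops dangling branches
            # (branches whose value is None), so only real alias targets are
            # expected in this filtered view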
"next_branch": None, } assert snapshot == expected_snapshot def test_snapshot_add_get_filtered_and_paginated(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] swh_storage.snapshot_add([complete_snapshot]) snp_id = complete_snapshot.id branches = complete_snapshot.branches branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2" ) expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in (b"directory2", b"release")}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=1 ) expected_snapshot = { "id": snp_id, "branches": {b"directory": branches[b"directory"]}, "next_branch": b"directory2", } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=2 ) expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in (b"directory", b"directory2") }, "next_branch": b"release", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2", branches_count=1, ) dir_idx = branch_names.index(b"directory2") expected_snapshot = { "id": snp_id, "branches": {branch_names[dir_idx]: branches[branch_names[dir_idx]],}, "next_branch": b"release", } assert snapshot == expected_snapshot def test_snapshot_add_get_branch_by_type(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] snapshot = complete_snapshot.to_dict() alias1 = b"alias1" alias2 = b"alias2" target1 = random.choice(list(snapshot["branches"].keys())) target2 = random.choice(list(snapshot["branches"].keys())) snapshot["branches"][alias2] = { "target": target2, "target_type": "alias", } snapshot["branches"][alias1] = { "target": target1, "target_type": "alias", } new_snapshot = Snapshot.from_dict(snapshot) swh_storage.snapshot_add([new_snapshot]) branches = swh_storage.snapshot_get_branches( new_snapshot.id, target_types=["alias"], branches_from=alias1, branches_count=1, )["branches"] assert len(branches) == 1 assert alias1 in branches def test_snapshot_add_get_by_branches_name_pattern(self, swh_storage, sample_data): snapshot = Snapshot( branches={ b"refs/heads/master": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"refs/heads/incoming": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"refs/pull/1": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"refs/pull/2": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"dangling": None, b"\xaa\xff": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xaa\xff\x00": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xff\xff": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xff\xff\x00": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), }, ) swh_storage.snapshot_add([snapshot]) for include_pattern, exclude_prefix, nb_results in ( (b"pull", None, 2), (b"incoming", None, 1), (b"dangling", None, 1), (None, b"refs/heads/", 7), (b"refs", 
b"refs/heads/master", 3), (b"refs", b"refs/heads/master", 3), (None, b"\xaa\xff", 7), (None, b"\xff\xff", 7), ): branches = swh_storage.snapshot_get_branches( snapshot.id, branch_name_include_substring=include_pattern, branch_name_exclude_prefix=exclude_prefix, )["branches"] expected_branches = [ branch_name for branch_name in snapshot.branches if (include_pattern is None or include_pattern in branch_name) and ( exclude_prefix is None or not branch_name.startswith(exclude_prefix) ) ] assert sorted(branches) == sorted(expected_branches) assert len(branches) == nb_results def test_snapshot_add_get_by_branches_name_pattern_filtered_paginated( self, swh_storage, sample_data ): pattern = b"foo" nb_branches_by_target_type = 10 branches = {} for i in range(nb_branches_by_target_type): branches[f"branch/directory/bar{i}".encode()] = SnapshotBranch( target=sample_data.directory.id, target_type=TargetType.DIRECTORY, ) branches[f"branch/revision/bar{i}".encode()] = SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ) branches[f"branch/directory/{pattern}{i}".encode()] = SnapshotBranch( target=sample_data.directory.id, target_type=TargetType.DIRECTORY, ) branches[f"branch/revision/{pattern}{i}".encode()] = SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ) snapshot = Snapshot(branches=branches) swh_storage.snapshot_add([snapshot]) branches_count = nb_branches_by_target_type // 2 for target_type in ( TargetType.DIRECTORY, TargetType.REVISION, ): target_type_str = target_type.value partial_branches = swh_storage.snapshot_get_branches( snapshot.id, branch_name_include_substring=pattern, target_types=[target_type_str], branches_count=branches_count, ) branches = partial_branches["branches"] expected_branches = [ branch_name for branch_name, branch_data in snapshot.branches.items() if pattern in branch_name and branch_data.target_type == target_type ][:branches_count] assert sorted(branches) == sorted(expected_branches) assert ( partial_branches["next_branch"] == f"branch/{target_type_str}/{pattern}{branches_count}".encode() ) partial_branches = swh_storage.snapshot_get_branches( snapshot.id, branch_name_include_substring=pattern, target_types=[target_type_str], branches_from=partial_branches["next_branch"], ) branches = partial_branches["branches"] expected_branches = [ branch_name for branch_name, branch_data in snapshot.branches.items() if pattern in branch_name and branch_data.target_type == target_type ][branches_count:] assert sorted(branches) == sorted(expected_branches) assert partial_branches["next_branch"] is None def test_snapshot_add_get(self, swh_storage, sample_data): snapshot = sample_data.snapshot origin = sample_data.origin swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ) ov1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=now(), status="ongoing", snapshot=snapshot.id, ) ] ) expected_snapshot = {**snapshot.to_dict(), "next_branch": None} by_id = swh_storage.snapshot_get(snapshot.id) assert by_id == expected_snapshot actual_visit = swh_storage.origin_visit_get_by(origin.url, ov1.visit) assert actual_visit == ov1 visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov1.visit, require_snapshot=True ) assert visit_status.snapshot == snapshot.id def test_snapshot_get_random(self, swh_storage, 
sample_data): snapshot, empty_snapshot, complete_snapshot = sample_data.snapshots[:3] swh_storage.snapshot_add([snapshot, empty_snapshot, complete_snapshot]) assert swh_storage.snapshot_get_random() in { snapshot.id, empty_snapshot.id, complete_snapshot.id, } def test_snapshot_missing(self, swh_storage, sample_data): snapshot, missing_snapshot = sample_data.snapshots[:2] snapshots = [snapshot.id, missing_snapshot.id] swh_storage.snapshot_add([snapshot]) missing_snapshots = swh_storage.snapshot_missing(snapshots) assert list(missing_snapshots) == [missing_snapshot.id] def test_stat_counters(self, swh_storage, sample_data): origin = sample_data.origin snapshot = sample_data.snapshot revision = sample_data.revision release = sample_data.release directory = sample_data.directory content = sample_data.content expected_keys = ["content", "directory", "origin", "revision"] # Initially, all counters are 0 swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: assert counters[key] == 0 # Add a content. Only the content counter should increase. swh_storage.content_add([content]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: if key != "content": assert counters[key] == 0 assert counters["content"] == 1 # Add other objects. Check their counter increased as well. swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=snapshot.id, ) ] ) swh_storage.directory_add([directory]) swh_storage.revision_add([revision]) swh_storage.release_add([release]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert counters["content"] == 1 assert counters["directory"] == 1 assert counters["snapshot"] == 1 assert counters["origin"] == 1 assert counters["origin_visit"] == 1 assert counters["revision"] == 1 assert counters["release"] == 1 assert counters["snapshot"] == 1 if "person" in counters: assert counters["person"] == 3 def test_content_find_ctime(self, swh_storage, sample_data): origin_content = sample_data.content ctime = round_to_milliseconds(now()) content = attr.evolve(origin_content, data=None, ctime=ctime) swh_storage.content_add_metadata([content]) actually_present = swh_storage.content_find({"sha1": content.sha1}) assert actually_present[0] == content assert actually_present[0].ctime is not None assert actually_present[0].ctime.tzinfo is not None def test_content_find_with_present_content(self, swh_storage, sample_data): content = sample_data.content expected_content = attr.evolve(content, data=None) # 1. with something to find swh_storage.content_add([content]) actually_present = swh_storage.content_find({"sha1": content.sha1}) assert 1 == len(actually_present) assert actually_present[0] == expected_content # 2. with something to find actually_present = swh_storage.content_find({"sha1_git": content.sha1_git}) assert 1 == len(actually_present) assert actually_present[0] == expected_content # 3. with something to find actually_present = swh_storage.content_find({"sha256": content.sha256}) assert 1 == len(actually_present) assert actually_present[0] == expected_content # 4. 
with something to find actually_present = swh_storage.content_find(content.hashes()) assert 1 == len(actually_present) assert actually_present[0] == expected_content def test_content_find_with_non_present_content(self, swh_storage, sample_data): missing_content = sample_data.skipped_content # 1. with something that does not exist actually_present = swh_storage.content_find({"sha1": missing_content.sha1}) assert actually_present == [] # 2. with something that does not exist actually_present = swh_storage.content_find( {"sha1_git": missing_content.sha1_git} ) assert actually_present == [] # 3. with something that does not exist actually_present = swh_storage.content_find({"sha256": missing_content.sha256}) assert actually_present == [] def test_content_find_with_duplicate_input(self, swh_storage, sample_data): content = sample_data.content # Create fake data with colliding sha256 and blake2s256 sha1_array = bytearray(content.sha1) sha1_array[0] += 1 sha1git_array = bytearray(content.sha1_git) sha1git_array[0] += 1 duplicated_content = attr.evolve( content, sha1=bytes(sha1_array), sha1_git=bytes(sha1git_array) ) # Inject the data swh_storage.content_add([content, duplicated_content]) actual_result = swh_storage.content_find( { "blake2s256": duplicated_content.blake2s256, "sha256": duplicated_content.sha256, } ) expected_content = attr.evolve(content, data=None) expected_duplicated_content = attr.evolve(duplicated_content, data=None) for result in actual_result: assert result in [expected_content, expected_duplicated_content] def test_content_find_with_duplicate_sha256(self, swh_storage, sample_data): content = sample_data.content hashes = {} # Create fake data with colliding sha256 for hashalgo in ("sha1", "sha1_git", "blake2s256"): value = bytearray(getattr(content, hashalgo)) value[0] += 1 hashes[hashalgo] = bytes(value) duplicated_content = attr.evolve( content, sha1=hashes["sha1"], sha1_git=hashes["sha1_git"], blake2s256=hashes["blake2s256"], ) swh_storage.content_add([content, duplicated_content]) actual_result = swh_storage.content_find({"sha256": duplicated_content.sha256}) assert len(actual_result) == 2 expected_content = attr.evolve(content, data=None) expected_duplicated_content = attr.evolve(duplicated_content, data=None) for result in actual_result: assert result in [expected_content, expected_duplicated_content] # Find with both sha256 and blake2s256 actual_result = swh_storage.content_find( { "sha256": duplicated_content.sha256, "blake2s256": duplicated_content.blake2s256, } ) assert len(actual_result) == 1 assert actual_result == [expected_duplicated_content] def test_content_find_with_duplicate_blake2s256(self, swh_storage, sample_data): content = sample_data.content # Create fake data with colliding sha256 and blake2s256 sha1_array = bytearray(content.sha1) sha1_array[0] += 1 sha1git_array = bytearray(content.sha1_git) sha1git_array[0] += 1 sha256_array = bytearray(content.sha256) sha256_array[0] += 1 duplicated_content = attr.evolve( content, sha1=bytes(sha1_array), sha1_git=bytes(sha1git_array), sha256=bytes(sha256_array), ) swh_storage.content_add([content, duplicated_content]) actual_result = swh_storage.content_find( {"blake2s256": duplicated_content.blake2s256} ) expected_content = attr.evolve(content, data=None) expected_duplicated_content = attr.evolve(duplicated_content, data=None) for result in actual_result: assert result in [expected_content, expected_duplicated_content] # Find with both sha256 and blake2s256 actual_result = swh_storage.content_find( { "sha256": 
duplicated_content.sha256, "blake2s256": duplicated_content.blake2s256, } ) assert actual_result == [expected_duplicated_content] def test_content_find_bad_input(self, swh_storage): # 1. with no hash to lookup with pytest.raises(StorageArgumentException): swh_storage.content_find({}) # need at least one hash # 2. with bad hash with pytest.raises(StorageArgumentException): swh_storage.content_find({"unknown-sha1": "something"}) # not the right key def test_object_find_by_sha1_git(self, swh_storage, sample_data): content = sample_data.content directory = sample_data.directory revision = sample_data.revision release = sample_data.release sha1_gits = [b"00000000000000000000"] expected = { b"00000000000000000000": [], } swh_storage.content_add([content]) sha1_gits.append(content.sha1_git) expected[content.sha1_git] = [ {"sha1_git": content.sha1_git, "type": "content",} ] swh_storage.directory_add([directory]) sha1_gits.append(directory.id) expected[directory.id] = [{"sha1_git": directory.id, "type": "directory",}] swh_storage.revision_add([revision]) sha1_gits.append(revision.id) expected[revision.id] = [{"sha1_git": revision.id, "type": "revision",}] swh_storage.release_add([release]) sha1_gits.append(release.id) expected[release.id] = [{"sha1_git": release.id, "type": "release",}] ret = swh_storage.object_find_by_sha1_git(sha1_gits) assert expected == ret def test_metadata_fetcher_add_get(self, swh_storage, sample_data): fetcher = sample_data.metadata_fetcher actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert actual_fetcher is None # does not exist swh_storage.metadata_fetcher_add([fetcher]) res = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert res == fetcher actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_fetcher", fetcher), ] for obj in expected_objects: assert obj in actual_objects def test_metadata_fetcher_add_zero(self, swh_storage, sample_data): fetcher = sample_data.metadata_fetcher actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert actual_fetcher is None # does not exist swh_storage.metadata_fetcher_add([]) def test_metadata_authority_add_get(self, swh_storage, sample_data): authority = sample_data.metadata_authority actual_authority = swh_storage.metadata_authority_get( authority.type, authority.url ) assert actual_authority is None # does not exist swh_storage.metadata_authority_add([authority]) res = swh_storage.metadata_authority_get(authority.type, authority.url) assert res == authority actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_authority", authority), ] for obj in expected_objects: assert obj in actual_objects def test_metadata_authority_add_zero(self, swh_storage, sample_data): authority = sample_data.metadata_authority actual_authority = swh_storage.metadata_authority_get( authority.type, authority.url ) assert actual_authority is None # does not exist swh_storage.metadata_authority_add([]) def test_content_metadata_add(self, swh_storage, sample_data): content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add(content_metadata) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority ) assert result.next_page_token 
is None assert list(sorted(result.results, key=lambda x: x.discovery_date,)) == list( content_metadata ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_authority", authority), ("metadata_fetcher", fetcher), ] + [("raw_extrinsic_metadata", item) for item in content_metadata] for obj in expected_objects: assert obj in actual_objects def test_content_metadata_add_duplicate(self, swh_storage, sample_data): """Duplicates should be silently ignored.""" content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([content_metadata, content_metadata2]) swh_storage.raw_extrinsic_metadata_add([content_metadata2, content_metadata]) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority ) assert result.next_page_token is None expected_results = (content_metadata, content_metadata2) assert ( tuple(sorted(result.results, key=lambda x: x.discovery_date,)) == expected_results ) def test_content_metadata_get(self, swh_storage, sample_data): content, content2 = sample_data.contents[:2] fetcher, fetcher2 = sample_data.fetchers[:2] authority, authority2 = sample_data.authorities[:2] ( content1_metadata1, content1_metadata2, content1_metadata3, ) = sample_data.content_metadata[:3] content2_metadata = RawExtrinsicMetadata.from_dict( { **remove_keys(content1_metadata2.to_dict(), ("id",)), # recompute id "target": str(content2.swhid()), } ) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.raw_extrinsic_metadata_add( [ content1_metadata1, content1_metadata2, content1_metadata3, content2_metadata, ] ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority ) assert result.next_page_token is None assert [content1_metadata1, content1_metadata2] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority2 ) assert result.next_page_token is None assert [content1_metadata3] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( content2.swhid().to_extended(), authority ) assert result.next_page_token is None assert [content2_metadata] == list(result.results,) def test_content_metadata_get_after(self, swh_storage, sample_data): content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([content_metadata, content_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, after=content_metadata.discovery_date - timedelta(seconds=1), ) assert result.next_page_token is None assert [content_metadata, content_metadata2] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, after=content_metadata.discovery_date, ) assert result.next_page_token is None assert result.results == [content_metadata2] result = 
swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, after=content_metadata2.discovery_date, ) assert result.next_page_token is None assert result.results == [] def test_content_metadata_get_paginate(self, swh_storage, sample_data): content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([content_metadata, content_metadata2]) swh_storage.raw_extrinsic_metadata_get(content.swhid().to_extended(), authority) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [content_metadata] result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results == [content_metadata2] def test_content_metadata_get_paginate_same_date(self, swh_storage, sample_data): content = sample_data.content fetcher1, fetcher2 = sample_data.fetchers[:2] authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher1, fetcher2]) swh_storage.metadata_authority_add([authority]) new_content_metadata2 = RawExtrinsicMetadata.from_dict( { **remove_keys(content_metadata2.to_dict(), ("id",)), # recompute id "discovery_date": content_metadata2.discovery_date, "fetcher": attr.evolve(fetcher2, metadata=None).to_dict(), } ) swh_storage.raw_extrinsic_metadata_add( [content_metadata, new_content_metadata2] ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [content_metadata] result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results[0].to_dict() == new_content_metadata2.to_dict() assert result.results == [new_content_metadata2] def test_origin_metadata_add(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority ) assert result.next_page_token is None assert list(sorted(result.results, key=lambda x: x.discovery_date)) == [ origin_metadata, origin_metadata2, ] actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_authority", authority), ("metadata_fetcher", fetcher), ("raw_extrinsic_metadata", origin_metadata), ("raw_extrinsic_metadata", origin_metadata2), ] for obj in expected_objects: assert obj in actual_objects def test_origin_metadata_add_duplicate(self, swh_storage, sample_data): """Duplicates should be silently updated.""" origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 
= sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) swh_storage.raw_extrinsic_metadata_add([origin_metadata2, origin_metadata]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority ) assert result.next_page_token is None # which of the two behavior happens is backend-specific. expected_results = (origin_metadata, origin_metadata2) assert ( tuple(sorted(result.results, key=lambda x: x.discovery_date,)) == expected_results ) def test_origin_metadata_get(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] fetcher, fetcher2 = sample_data.fetchers[:2] authority, authority2 = sample_data.authorities[:2] ( origin1_metadata1, origin1_metadata2, origin1_metadata3, ) = sample_data.origin_metadata[:3] assert swh_storage.origin_add([origin, origin2]) == {"origin:add": 2} origin2_metadata = RawExtrinsicMetadata.from_dict( { **remove_keys(origin1_metadata2.to_dict(), ("id",)), # recompute id "target": str(Origin(origin2.url).swhid()), } ) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.raw_extrinsic_metadata_add( [origin1_metadata1, origin1_metadata2, origin1_metadata3, origin2_metadata] ) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority ) assert result.next_page_token is None assert [origin1_metadata1, origin1_metadata2] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority2 ) assert result.next_page_token is None assert [origin1_metadata3] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin2.url).swhid(), authority ) assert result.next_page_token is None assert [origin2_metadata] == list(result.results,) def test_origin_metadata_get_after(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, after=origin_metadata.discovery_date - timedelta(seconds=1), ) assert result.next_page_token is None assert list(sorted(result.results, key=lambda x: x.discovery_date,)) == [ origin_metadata, origin_metadata2, ] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, after=origin_metadata.discovery_date, ) assert result.next_page_token is None assert result.results == [origin_metadata2] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, after=origin_metadata2.discovery_date, ) assert result.next_page_token is None assert result.results == [] def test_origin_metadata_get_paginate(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == 
{"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) swh_storage.raw_extrinsic_metadata_get(Origin(origin.url).swhid(), authority) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [origin_metadata] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results == [origin_metadata2] def test_origin_metadata_get_paginate_same_date(self, swh_storage, sample_data): origin = sample_data.origin fetcher1, fetcher2 = sample_data.fetchers[:2] authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher1, fetcher2]) swh_storage.metadata_authority_add([authority]) new_origin_metadata2 = RawExtrinsicMetadata.from_dict( { **remove_keys(origin_metadata2.to_dict(), ("id",)), # recompute id "discovery_date": origin_metadata2.discovery_date, "fetcher": attr.evolve(fetcher2, metadata=None).to_dict(), } ) swh_storage.raw_extrinsic_metadata_add([origin_metadata, new_origin_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [origin_metadata] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results == [new_origin_metadata2] def test_origin_metadata_add_missing_authority(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) with pytest.raises(StorageArgumentException, match="authority"): swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) def test_origin_metadata_add_missing_fetcher(self, swh_storage, sample_data): origin = sample_data.origin authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_authority_add([authority]) with pytest.raises(StorageArgumentException, match="fetcher"): swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) class TestStorageGeneratedData: def test_generate_content_get_data(self, swh_storage, swh_contents): contents_with_data = [c for c in swh_contents if c.status != "absent"] # retrieve contents for content in contents_with_data: actual_content_data = swh_storage.content_get_data(content.sha1) assert actual_content_data is not None assert actual_content_data == content.data def test_generate_content_get(self, swh_storage, swh_contents): expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] actual_contents = swh_storage.content_get([c.sha1 for c in expected_contents]) assert len(actual_contents) == len(expected_contents) assert actual_contents == expected_contents @pytest.mark.parametrize("limit", [1, 7, 10, 100, 1000]) def test_origin_list(self, swh_storage, 
swh_origins, limit): returned_origins = [] page_token = None i = 0 while True: actual_page = swh_storage.origin_list(page_token=page_token, limit=limit) assert len(actual_page.results) <= limit returned_origins.extend(actual_page.results) i += 1 page_token = actual_page.next_page_token if page_token is None: assert i * limit >= len(swh_origins) break else: assert len(actual_page.results) == limit assert sorted(returned_origins) == sorted(swh_origins) def test_origin_count(self, swh_storage, sample_data): swh_storage.origin_add(sample_data.origins) assert swh_storage.origin_count("github") == 3 assert swh_storage.origin_count("gitlab") == 2 assert swh_storage.origin_count(".*user.*", regexp=True) == 5 assert swh_storage.origin_count(".*user.*", regexp=False) == 0 assert swh_storage.origin_count(".*user1.*", regexp=True) == 2 assert swh_storage.origin_count(".*user1.*", regexp=False) == 0 def test_origin_count_with_visit_no_visits(self, swh_storage, sample_data): swh_storage.origin_add(sample_data.origins) # none of them have visits, so with_visit=True => 0 assert swh_storage.origin_count("github", with_visit=True) == 0 assert swh_storage.origin_count("gitlab", with_visit=True) == 0 assert swh_storage.origin_count(".*user.*", regexp=True, with_visit=True) == 0 assert swh_storage.origin_count(".*user.*", regexp=False, with_visit=True) == 0 assert swh_storage.origin_count(".*user1.*", regexp=True, with_visit=True) == 0 assert swh_storage.origin_count(".*user1.*", regexp=False, with_visit=True) == 0 def test_origin_count_with_visit_with_visits_no_snapshot( self, swh_storage, sample_data ): swh_storage.origin_add(sample_data.origins) origin_url = "https://github.com/user1/repo1" visit = OriginVisit(origin=origin_url, date=now(), type="git",) swh_storage.origin_visit_add([visit]) assert swh_storage.origin_count("github", with_visit=False) == 3 # it has a visit, but no snapshot, so with_visit=True => 0 assert swh_storage.origin_count("github", with_visit=True) == 0 assert swh_storage.origin_count("gitlab", with_visit=False) == 2 # these gitlab origins have no visit assert swh_storage.origin_count("gitlab", with_visit=True) == 0 assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=False) == 1 ) assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=True) == 0 ) assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 0 def test_origin_count_with_visit_with_visits_and_snapshot( self, swh_storage, sample_data ): snapshot = sample_data.snapshot swh_storage.origin_add(sample_data.origins) swh_storage.snapshot_add([snapshot]) origin_url = "https://github.com/user1/repo1" visit = OriginVisit(origin=origin_url, date=now(), type="git",) visit = swh_storage.origin_visit_add([visit])[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=visit.visit, date=now(), status="ongoing", snapshot=snapshot.id, ) ] ) assert swh_storage.origin_count("github", with_visit=False) == 3 # github/user1 has a visit and a snapshot, so with_visit=True => 1 assert swh_storage.origin_count("github", with_visit=True) == 1 assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=False) == 1 ) assert ( swh_storage.origin_count("github.*user1", regexp=True, with_visit=True) == 1 ) assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 1 @settings( suppress_health_check=[HealthCheck.too_slow] + function_scoped_fixture_check, ) @given(strategies.lists(objects(split_content=True), max_size=2)) def 
test_add_arbitrary(self, swh_storage, objects): for (obj_type, obj) in objects: if obj.object_type == "origin_visit": swh_storage.origin_add([Origin(url=obj.origin)]) visit = OriginVisit(origin=obj.origin, date=obj.date, type=obj.type,) swh_storage.origin_visit_add([visit]) else: method = getattr(swh_storage, obj_type + "_add") try: method([obj]) except HashCollision: pass diff --git a/swh/storage/tests/test_backfill.py b/swh/storage/tests/test_backfill.py index 6865aa6c..b1a90520 100644 --- a/swh/storage/tests/test_backfill.py +++ b/swh/storage/tests/test_backfill.py @@ -1,298 +1,298 @@ -# Copyright (C) 2019 The Software Heritage developers +# Copyright (C) 2019-2021 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import functools import logging from unittest.mock import patch import pytest from swh.journal.client import JournalClient from swh.model.tests.swh_model_data import TEST_OBJECTS from swh.storage import get_storage from swh.storage.backfill import ( PARTITION_KEY, JournalBackfiller, byte_ranges, compute_query, raw_extrinsic_metadata_target_ranges, ) from swh.storage.in_memory import InMemoryStorage from swh.storage.replay import process_replay_objects from swh.storage.tests.test_replay import check_replayed TEST_CONFIG = { "journal_writer": { "brokers": ["localhost"], "prefix": "swh.tmp_journal.new", "client_id": "swh.journal.client.test", }, - "storage": {"cls": "local", "db": "service=swh-dev"}, + "storage": {"cls": "postgresql", "db": "service=swh-dev"}, } def test_config_ko_missing_mandatory_key(): """Missing configuration key will make the initialization fail """ for key in TEST_CONFIG.keys(): config = TEST_CONFIG.copy() config.pop(key) with pytest.raises(ValueError) as e: JournalBackfiller(config) error = "Configuration error: The following keys must be provided: %s" % ( ",".join([key]), ) assert e.value.args[0] == error def test_config_ko_unknown_object_type(): """Parse arguments will fail if the object type is unknown """ backfiller = JournalBackfiller(TEST_CONFIG) with pytest.raises(ValueError) as e: backfiller.parse_arguments("unknown-object-type", 1, 2) error = ( "Object type unknown-object-type is not supported. 
" "The only possible values are %s" % (", ".join(sorted(PARTITION_KEY))) ) assert e.value.args[0] == error def test_compute_query_content(): query, where_args, column_aliases = compute_query("content", "\x000000", "\x000001") assert where_args == ["\x000000", "\x000001"] assert column_aliases == [ "sha1", "sha1_git", "sha256", "blake2s256", "length", "status", "ctime", ] assert ( query == """ select sha1,sha1_git,sha256,blake2s256,length,status,ctime from content where (sha1) >= %s and (sha1) < %s """ ) def test_compute_query_skipped_content(): query, where_args, column_aliases = compute_query("skipped_content", None, None) assert where_args == [] assert column_aliases == [ "sha1", "sha1_git", "sha256", "blake2s256", "length", "ctime", "status", "reason", ] assert ( query == """ select sha1,sha1_git,sha256,blake2s256,length,ctime,status,reason from skipped_content """ ) def test_compute_query_origin_visit(): query, where_args, column_aliases = compute_query("origin_visit", 1, 10) assert where_args == [1, 10] assert column_aliases == [ "visit", "type", "origin", "date", ] assert ( query == """ select visit,type,origin.url as origin,date from origin_visit left join origin on origin_visit.origin=origin.id where (origin_visit.origin) >= %s and (origin_visit.origin) < %s """ ) def test_compute_query_release(): query, where_args, column_aliases = compute_query("release", "\x000002", "\x000003") assert where_args == ["\x000002", "\x000003"] assert column_aliases == [ "id", "date", "date_offset", "date_neg_utc_offset", "comment", "name", "synthetic", "target", "target_type", "author_id", "author_name", "author_email", "author_fullname", ] assert ( query == """ select release.id as id,date,date_offset,date_neg_utc_offset,comment,release.name as name,synthetic,target,target_type,a.id as author_id,a.name as author_name,a.email as author_email,a.fullname as author_fullname from release left join person a on release.author=a.id where (release.id) >= %s and (release.id) < %s """ # noqa ) @pytest.mark.parametrize("numbits", [2, 3, 8, 16]) def test_byte_ranges(numbits): ranges = list(byte_ranges(numbits)) assert len(ranges) == 2 ** numbits assert ranges[0][0] is None assert ranges[-1][1] is None bounds = [] for i, (left, right) in enumerate(zip(ranges[:-1], ranges[1:])): assert left[1] == right[0], f"Mismatched bounds in {i}th range" bounds.append(left[1]) assert bounds == sorted(bounds) def test_raw_extrinsic_metadata_target_ranges(): ranges = list(raw_extrinsic_metadata_target_ranges()) assert ranges[0][0] == "" assert ranges[-1][1] is None bounds = [] for i, (left, right) in enumerate(zip(ranges[:-1], ranges[1:])): assert left[1] == right[0], f"Mismatched bounds in {i}th range" bounds.append(left[1]) assert bounds == sorted(bounds) RANGE_GENERATORS = { "content": lambda start, end: [(None, None)], "skipped_content": lambda start, end: [(None, None)], "directory": lambda start, end: [(None, None)], "extid": lambda start, end: [(None, None)], "metadata_authority": lambda start, end: [(None, None)], "metadata_fetcher": lambda start, end: [(None, None)], "revision": lambda start, end: [(None, None)], "release": lambda start, end: [(None, None)], "snapshot": lambda start, end: [(None, None)], "origin": lambda start, end: [(None, 10000)], "origin_visit": lambda start, end: [(None, 10000)], "origin_visit_status": lambda start, end: [(None, 10000)], "raw_extrinsic_metadata": lambda start, end: [(None, None)], } @patch("swh.storage.backfill.RANGE_GENERATORS", RANGE_GENERATORS) def test_backfiller( 
    swh_storage_backend_config,
    kafka_prefix: str,
    kafka_consumer_group: str,
    kafka_server: str,
    caplog,
):
    prefix1 = f"{kafka_prefix}-1"
    prefix2 = f"{kafka_prefix}-2"

    journal1 = {
        "cls": "kafka",
        "brokers": [kafka_server],
        "client_id": "kafka_writer-1",
        "prefix": prefix1,
    }
    swh_storage_backend_config["journal_writer"] = journal1
    storage = get_storage(**swh_storage_backend_config)

    # fill the storage and the journal (under prefix1)
    for object_type, objects in TEST_OBJECTS.items():
        method = getattr(storage, object_type + "_add")
        method(objects)

    # now apply the backfiller on the storage to fill the journal under prefix2
    backfiller_config = {
        "journal_writer": {
            "brokers": [kafka_server],
            "client_id": "kafka_writer-2",
            "prefix": prefix2,
        },
        "storage": swh_storage_backend_config,
    }

    # Backfilling
    backfiller = JournalBackfiller(backfiller_config)
    for object_type in TEST_OBJECTS:
        backfiller.run(object_type, None, None)

    # Trace log messages for unhandled object types in the replayer
    caplog.set_level(logging.DEBUG, "swh.storage.replay")

    # now check that the journal contents are the same under both topics;
    # use the replayer scaffolding to fill two storages, to make the
    # comparison a bit easier

    # Replaying #1
    sto1 = get_storage(cls="memory")
    replayer1 = JournalClient(
        brokers=kafka_server,
        group_id=f"{kafka_consumer_group}-1",
        prefix=prefix1,
        stop_on_eof=True,
    )
    worker_fn1 = functools.partial(process_replay_objects, storage=sto1)
    replayer1.process(worker_fn1)

    # Replaying #2
    sto2 = get_storage(cls="memory")
    replayer2 = JournalClient(
        brokers=kafka_server,
        group_id=f"{kafka_consumer_group}-2",
        prefix=prefix2,
        stop_on_eof=True,
    )
    worker_fn2 = functools.partial(process_replay_objects, storage=sto2)
    replayer2.process(worker_fn2)

    # Compare storages
    assert isinstance(sto1, InMemoryStorage)  # needed to help mypy
    assert isinstance(sto2, InMemoryStorage)
    check_replayed(sto1, sto2)

    for record in caplog.records:
        assert (
            "this should not happen" not in record.message
        ), "Replayer ignored some message types, see captured logging"
diff --git a/swh/storage/tests/test_cassandra.py b/swh/storage/tests/test_cassandra.py
index 8e743f58..3d37ac04 100644
--- a/swh/storage/tests/test_cassandra.py
+++ b/swh/storage/tests/test_cassandra.py
@@ -1,633 +1,636 @@
# Copyright (C) 2018-2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import datetime
import itertools
import os
import signal
import socket
import subprocess
import time
from typing import Any, Dict

import attr
import pytest

from swh.core.api.classes import stream_results
from swh.model.model import Directory, DirectoryEntry, Snapshot, SnapshotBranch
from swh.storage import get_storage
from swh.storage.cassandra import create_keyspace
from swh.storage.cassandra.model import ContentRow, ExtIDRow
from swh.storage.cassandra.schema import HASH_ALGORITHMS, TABLES
from swh.storage.tests.storage_data import StorageData
from swh.storage.tests.storage_tests import (
    TestStorageGeneratedData as _TestStorageGeneratedData,
)
from swh.storage.tests.storage_tests import TestStorage as _TestStorage
from swh.storage.utils import now, remove_keys
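# Sketch (added for illustration; mirrors the swh_storage_backend_config
# fixture defined further down): how a cassandra-backed storage is
# instantiated for these tests, with in-memory journal and objstorage
# stand-ins. `hosts`, `port` and `keyspace` are assumed to come from a
# running test cluster, such as the cassandra_cluster fixture below.
def _example_cassandra_storage(hosts, port, keyspace):
    return get_storage(
        cls="cassandra",
        hosts=hosts,
        port=port,
        keyspace=keyspace,
        journal_writer={"cls": "memory"},
        objstorage={"cls": "memory"},
    )


CONFIG_TEMPLATE = """
data_file_directories:
- {data_dir}/data
commitlog_directory: {data_dir}/commitlog
hints_directory: {data_dir}/hints
saved_caches_directory: {data_dir}/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 1000000
partitioner: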
endpoint_snitch: SimpleSnitch
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"
storage_port: {storage_port}
native_transport_port: {native_transport_port}
start_native_transport: true
listen_address: 127.0.0.1

enable_user_defined_functions: true

# speed-up by disabling periodic saving to disk
key_cache_save_period: 0
row_cache_save_period: 0
trickle_fsync: false
commitlog_sync_period_in_ms: 100000
"""


def free_port():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    port = sock.getsockname()[1]
    sock.close()
    return port


def wait_for_peer(addr, port):
    wait_until = time.time() + 20
    while time.time() < wait_until:
        try:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.connect((addr, port))
        except ConnectionRefusedError:
            time.sleep(0.1)
        else:
            sock.close()
            return True
    return False


@pytest.fixture(scope="session")
def cassandra_cluster(tmpdir_factory):
    cassandra_conf = tmpdir_factory.mktemp("cassandra_conf")
    cassandra_data = tmpdir_factory.mktemp("cassandra_data")
    cassandra_log = tmpdir_factory.mktemp("cassandra_log")
    native_transport_port = free_port()
    storage_port = free_port()
    jmx_port = free_port()

    with open(str(cassandra_conf.join("cassandra.yaml")), "w") as fd:
        fd.write(
            CONFIG_TEMPLATE.format(
                data_dir=str(cassandra_data),
                storage_port=storage_port,
                native_transport_port=native_transport_port,
            )
        )

    if os.environ.get("SWH_CASSANDRA_LOG"):
        stdout = stderr = None
    else:
        stdout = stderr = subprocess.DEVNULL

    cassandra_bin = os.environ.get("SWH_CASSANDRA_BIN", "/usr/sbin/cassandra")
    env = {
        "MAX_HEAP_SIZE": "300M",
        "HEAP_NEWSIZE": "50M",
        "JVM_OPTS": "-Xlog:gc=error:file=%s/gc.log" % cassandra_log,
    }
    if "JAVA_HOME" in os.environ:
        env["JAVA_HOME"] = os.environ["JAVA_HOME"]

    proc = subprocess.Popen(
        [
            cassandra_bin,
            "-Dcassandra.config=file://%s/cassandra.yaml" % cassandra_conf,
            "-Dcassandra.logdir=%s" % cassandra_log,
            "-Dcassandra.jmx.local.port=%d" % jmx_port,
            "-Dcassandra-foreground=yes",
        ],
        start_new_session=True,
        env=env,
        stdout=stdout,
        stderr=stderr,
    )

-    running = wait_for_peer("127.0.0.1", native_transport_port)
+    listening = wait_for_peer("127.0.0.1", native_transport_port)

-    if running:
+    if listening:
        yield (["127.0.0.1"], native_transport_port)

-    if not running or os.environ.get("SWH_CASSANDRA_LOG"):
+    if not listening or os.environ.get("SWH_CASSANDRA_LOG"):
        debug_log_path = str(cassandra_log.join("debug.log"))
        if os.path.exists(debug_log_path):
            with open(debug_log_path) as fd:
                print(fd.read())

-    if not running:
-        raise Exception("cassandra process stopped unexpectedly.")
+    if not listening:
+        if proc.poll() is None:
+            raise Exception("cassandra process unexpectedly not listening.")
+        else:
+            raise Exception("cassandra process unexpectedly stopped.")

    pgrp = os.getpgid(proc.pid)
    os.killpg(pgrp, signal.SIGKILL)


class RequestHandler:
    def on_request(self, rf):
        if hasattr(rf.message, "query"):
            print()
            print(rf.message.query)


@pytest.fixture(scope="session")
def keyspace(cassandra_cluster):
    (hosts, port) = cassandra_cluster
    keyspace = os.urandom(10).hex()

    create_keyspace(hosts, keyspace, port)

    return keyspace


# tests are executed using the imported classes (TestStorage and
# TestStorageGeneratedData) with an overloaded swh_storage fixture
# below


@pytest.fixture
def swh_storage_backend_config(cassandra_cluster, keyspace):
    (hosts, port) = cassandra_cluster

    storage_config = dict(
        cls="cassandra",
        hosts=hosts,
        port=port,
        keyspace=keyspace,
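        # in-memory journal writer and objstorage: these tests only exercise
        # the Cassandra backend itself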
        journal_writer={"cls": "memory"},
        objstorage={"cls": "memory"},
    )

    yield storage_config

    storage = get_storage(**storage_config)

    for table in TABLES:
        storage._cql_runner._session.execute('TRUNCATE TABLE "%s"' % table)

    storage._cql_runner._cluster.shutdown()


@pytest.mark.cassandra
class TestCassandraStorage(_TestStorage):
    def test_content_add_murmur3_collision(self, swh_storage, mocker, sample_data):
        """The Murmur3 token is used as link from index tables to the main
        table; and non-matching contents with colliding murmur3-hash
        are filtered-out when reading the main table.

        This test checks the content methods do filter out these collisions.
        """
        called = 0

        cont, cont2 = sample_data.contents[:2]

        # always return a token
        def mock_cgtfsh(algo, hash_):
            nonlocal called
            called += 1
            assert algo in ("sha1", "sha1_git")
            return [123456]

        mocker.patch.object(
            swh_storage._cql_runner,
            "content_get_tokens_from_single_hash",
            mock_cgtfsh,
        )

        # For all tokens, always return cont
        def mock_cgft(token):
            nonlocal called
            called += 1
            return [
                ContentRow(
                    length=10,
                    ctime=datetime.datetime.now(),
                    status="present",
                    **{algo: getattr(cont, algo) for algo in HASH_ALGORITHMS},
                )
            ]

        mocker.patch.object(
            swh_storage._cql_runner, "content_get_from_token", mock_cgft
        )

        actual_result = swh_storage.content_add([cont2])

        assert called == 4
        assert actual_result == {
            "content:add": 1,
            "content:add:bytes": cont2.length,
        }

    def test_content_get_metadata_murmur3_collision(
        self, swh_storage, mocker, sample_data
    ):
        """The Murmur3 token is used as link from index tables to the main
        table; and non-matching contents with colliding murmur3-hash
        are filtered-out when reading the main table.

        This test checks the content methods do filter out these collisions.
        """
        called = 0

        cont, cont2 = [attr.evolve(c, ctime=now()) for c in sample_data.contents[:2]]

        # always return a token
        def mock_cgtfsh(algo, hash_):
            nonlocal called
            called += 1
            assert algo in ("sha1", "sha1_git")
            return [123456]

        mocker.patch.object(
            swh_storage._cql_runner,
            "content_get_tokens_from_single_hash",
            mock_cgtfsh,
        )

        # For all tokens, always return cont and cont2
        cols = list(set(cont.to_dict()) - {"data"})

        def mock_cgft(token):
            nonlocal called
            called += 1
            return [
                ContentRow(**{col: getattr(cont, col) for col in cols},)
                for cont in [cont, cont2]
            ]

        mocker.patch.object(
            swh_storage._cql_runner, "content_get_from_token", mock_cgft
        )

        actual_result = swh_storage.content_get([cont.sha1])

        assert called == 2

        # the "data" column is not returned, so drop it from the expected content
        expected_cont = attr.evolve(cont, data=None)

        # but cont2 should be filtered out
        assert actual_result == [expected_cont]

    def test_content_find_murmur3_collision(self, swh_storage, mocker, sample_data):
        """The Murmur3 token is used as link from index tables to the main
        table; and non-matching contents with colliding murmur3-hash
        are filtered-out when reading the main table.

        This test checks the content methods do filter out these collisions.
""" called = 0 cont, cont2 = [attr.evolve(c, ctime=now()) for c in sample_data.contents[:2]] # always return a token def mock_cgtfsh(algo, hash_): nonlocal called called += 1 assert algo in ("sha1", "sha1_git") return [123456] mocker.patch.object( swh_storage._cql_runner, "content_get_tokens_from_single_hash", mock_cgtfsh, ) # For all tokens, always return cont and cont2 cols = list(set(cont.to_dict()) - {"data"}) def mock_cgft(token): nonlocal called called += 1 return [ ContentRow(**{col: getattr(cont, col) for col in cols}) for cont in [cont, cont2] ] mocker.patch.object( swh_storage._cql_runner, "content_get_from_token", mock_cgft ) expected_content = attr.evolve(cont, data=None) actual_result = swh_storage.content_find({"sha1": cont.sha1}) assert called == 2 # but cont2 should be filtered out assert actual_result == [expected_content] def test_content_get_partition_murmur3_collision( self, swh_storage, mocker, sample_data ): """The Murmur3 token is used as link from index tables to the main table; and non-matching contents with colliding murmur3-hash are filtered-out when reading the main table. This test checks the content_get_partition endpoints return all contents, even the collisions. """ called = 0 rows: Dict[int, Dict] = {} for tok, content in enumerate(sample_data.contents): cont = attr.evolve(content, data=None, ctime=now()) row_d = {**cont.to_dict(), "tok": tok} rows[tok] = row_d # For all tokens, always return cont def mock_content_get_token_range(range_start, range_end, limit): nonlocal called called += 1 for tok in list(rows.keys()) * 3: # yield multiple times the same tok row_d = dict(rows[tok].items()) row_d.pop("tok") yield (tok, ContentRow(**row_d)) mocker.patch.object( swh_storage._cql_runner, "content_get_token_range", mock_content_get_token_range, ) actual_results = list( stream_results( swh_storage.content_get_partition, partition_id=0, nb_partitions=1 ) ) assert called > 0 # everything is listed, even collisions assert len(actual_results) == 3 * len(sample_data.contents) # as we duplicated the returned results, dropping duplicate should yield # the original length assert len(set(actual_results)) == len(sample_data.contents) @pytest.mark.skip("content_update is not yet implemented for Cassandra") def test_content_update(self): pass def test_extid_murmur3_collision(self, swh_storage, mocker, sample_data): """The Murmur3 token is used as link from index table to the main table; and non-matching extid with colliding murmur3-hash are filtered-out when reading the main table. This test checks the extid methods do filter out these collision. """ swh_storage.extid_add(sample_data.extids) # For any token, always return all extids, i.e. make as if all tokens # for all extid entries collide def mock_egft(token): return [ ExtIDRow( extid_type=extid.extid_type, extid=extid.extid, target_type=extid.target.object_type.value, target=extid.target.object_id, ) for extid in sample_data.extids ] mocker.patch.object( swh_storage._cql_runner, "extid_get_from_token", mock_egft, ) for extid in sample_data.extids: extids = swh_storage.extid_get_from_target( target_type=extid.target.object_type, ids=[extid.target.object_id] ) assert extids == [extid] def test_directory_add_atomic(self, swh_storage, sample_data, mocker): """Checks that a crash occurring after some directory entries were written does not cause the directory to be (partially) visible. ie. 
        checks directories are added somewhat atomically."""
        # Disable the journal writer; it would detect the CrashyEntry exception too
        # early for this test to be relevant
        swh_storage.journal_writer.journal = None

        class MyException(Exception):
            pass

        class CrashyEntry(DirectoryEntry):
            def __init__(self):
                pass

            def to_dict(self):
                raise MyException()

        directory = sample_data.directory3
        entries = directory.entries
        directory = attr.evolve(directory, entries=entries + (CrashyEntry(),))

        with pytest.raises(MyException):
            swh_storage.directory_add([directory])

        # This should have written some of the entries to the database:
        entry_rows = swh_storage._cql_runner.directory_entry_get([directory.id])
        assert {row.name for row in entry_rows} == {entry.name for entry in entries}

        # BUT, because not all the entries were written, the directory should
        # be considered not written.
        assert swh_storage.directory_missing([directory.id]) == [directory.id]
        assert list(swh_storage.directory_ls(directory.id)) == []
        assert swh_storage.directory_get_entries(directory.id) is None

    def test_snapshot_add_atomic(self, swh_storage, sample_data, mocker):
        """Checks that a crash occurring after some snapshot branches were
        written does not cause the snapshot to be (partially) visible, i.e.
        checks snapshots are added somewhat atomically."""
        # Disable the journal writer; it would detect the CrashyBranch exception too
        # early for this test to be relevant
        swh_storage.journal_writer.journal = None

        class MyException(Exception):
            pass

        class CrashyBranch(SnapshotBranch):
            def __getattribute__(self, name):
                if name == "target" and should_raise:
                    raise MyException()
                else:
                    return super().__getattribute__(name)

        snapshot = sample_data.complete_snapshot
        branches = snapshot.branches

        should_raise = False  # just so that we can construct the object
        crashy_branch = CrashyBranch.from_dict(branches[b"directory"].to_dict())
        should_raise = True

        snapshot = attr.evolve(
            snapshot, branches={**branches, b"crashy": crashy_branch,},
        )

        with pytest.raises(MyException):
            swh_storage.snapshot_add([snapshot])

        # This should have written some of the branches to the database:
        branch_rows = swh_storage._cql_runner.snapshot_branch_get(snapshot.id, b"", 10)
        assert {row.name for row in branch_rows} == set(branches)

        # BUT, because not all the branches were written, the snapshot should
        # be considered not written.
        assert swh_storage.snapshot_missing([snapshot.id]) == [snapshot.id]
        assert swh_storage.snapshot_get(snapshot.id) is None
        assert swh_storage.snapshot_count_branches(snapshot.id) is None
        assert swh_storage.snapshot_get_branches(snapshot.id) is None

    @pytest.mark.skip(
        'The "person" table of the pgsql is a legacy thing, and not '
        "supported by the cassandra backend."
    )
    def test_person_fullname_unicity(self):
        pass

    @pytest.mark.skip(
        'The "person" table of the pgsql is a legacy thing, and not '
        "supported by the cassandra backend."
    )
    def test_person_get(self):
        pass

    @pytest.mark.skip("Not supported by Cassandra")
    def test_origin_count(self):
        pass


@pytest.mark.cassandra
class TestCassandraStorageGeneratedData(_TestStorageGeneratedData):
    @pytest.mark.skip("Not supported by Cassandra")
    def test_origin_count(self):
        pass

    @pytest.mark.skip("Not supported by Cassandra")
    def test_origin_count_with_visit_no_visits(self):
        pass

    @pytest.mark.skip("Not supported by Cassandra")
    def test_origin_count_with_visit_with_visits_and_snapshot(self):
        pass

    @pytest.mark.skip("Not supported by Cassandra")
    def test_origin_count_with_visit_with_visits_no_snapshot(self):
        pass


@pytest.mark.parametrize(
    "allow_overwrite,object_type",
    itertools.product(
        [False, True],
        # Note the absence of "content", it's tested above.
        ["directory", "revision", "release", "snapshot", "origin", "extid"],
    ),
)
def test_allow_overwrite(
    allow_overwrite: bool, object_type: str, swh_storage_backend_config
):
    if object_type in ("origin", "extid"):
        pytest.skip(
            f"test_allow_overwrite not implemented for {object_type} objects, "
            f"because all their columns are in the primary key."
        )
    swh_storage = get_storage(
        allow_overwrite=allow_overwrite, **swh_storage_backend_config
    )

    # directory_ls joins with content and directory table, and needs those to return
    # non-None entries:
    if object_type == "directory":
        swh_storage.directory_add([StorageData.directory5])
        swh_storage.content_add([StorageData.content, StorageData.content2])

    obj1: Any
    obj2: Any

    # Get two test objects
    if object_type == "directory":
        (obj1, obj2, *_) = StorageData.directories
    elif object_type == "snapshot":
        # StorageData.snapshots[1] is the empty snapshot, which is the corner case
        # that makes this test succeed for the wrong reasons
        obj1 = StorageData.snapshot
        obj2 = StorageData.complete_snapshot
    else:
        (obj1, obj2, *_) = getattr(StorageData, (object_type + "s"))

    # Let's make both objects have the same hash, but different content
    obj1 = attr.evolve(obj1, id=obj2.id)

    # Get the methods used to add and get these objects
    add = getattr(swh_storage, object_type + "_add")
    if object_type == "directory":

        def get(ids):
            return [
                Directory(
                    id=ids[0],
                    entries=tuple(
                        map(
                            lambda entry: DirectoryEntry(
                                name=entry["name"],
                                type=entry["type"],
                                target=entry["sha1_git"],
                                perms=entry["perms"],
                            ),
                            swh_storage.directory_ls(ids[0]),
                        )
                    ),
                )
            ]

    elif object_type == "snapshot":

        def get(ids):
            return [
                Snapshot.from_dict(
                    remove_keys(swh_storage.snapshot_get(ids[0]), ("next_branch",))
                )
            ]

    else:
        get = getattr(swh_storage, object_type + "_get")

    # Add the first object
    add([obj1])

    # It should be returned as-is
    assert get([obj1.id]) == [obj1]

    # Add the second object
    add([obj2])

    if allow_overwrite:
        # obj1 was overwritten by obj2
        expected = obj2
    else:
        # obj2 was not written, because obj1 already exists and has the same hash
        expected = obj1

    if allow_overwrite and object_type in ("directory", "snapshot"):
        # TODO
        pytest.xfail(
            "directory entries and snapshot branches are concatenated "
            "instead of being replaced"
        )
    assert get([obj1.id]) == [expected]

diff --git a/swh/storage/tests/test_init.py b/swh/storage/tests/test_init.py
index 5fad7c86..4ff3fe9c 100644
--- a/swh/storage/tests/test_init.py
+++ b/swh/storage/tests/test_init.py
@@ -1,232 +1,234 @@
-# Copyright (C) 2019 The Software Heritage developers
+# Copyright (C) 2019-2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
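
# These tests exercise the get_storage() factory: direct instantiation of each
# storage class, the deprecated explicit "args" layout, pipelines of proxy
# storages, and the check_config option.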
from unittest.mock import patch

import pytest

from swh.core.pytest_plugin import RPCTestAdapter
from swh.storage import get_storage
from swh.storage.api import client, server
from swh.storage.in_memory import InMemoryStorage
from swh.storage.postgresql.storage import Storage as DbStorage
from swh.storage.proxies.buffer import BufferingProxyStorage
from swh.storage.proxies.filter import FilteringProxyStorage
from swh.storage.proxies.retry import RetryingProxyStorage

STORAGES = [
    pytest.param(cls, real_class, kwargs, id=cls)
    for (cls, real_class, kwargs) in [
        ("remote", client.RemoteStorage, {"url": "url"}),
        ("memory", InMemoryStorage, {}),
        (
-            "local",
+            "postgresql",
            DbStorage,
            {"db": "postgresql://db", "objstorage": {"cls": "memory"}},
        ),
        ("filter", FilteringProxyStorage, {"storage": {"cls": "memory"}}),
        ("buffer", BufferingProxyStorage, {"storage": {"cls": "memory"}}),
        ("retry", RetryingProxyStorage, {"storage": {"cls": "memory"}}),
    ]
]


@pytest.mark.parametrize("cls,real_class,args", STORAGES)
@patch("swh.storage.postgresql.storage.psycopg2.pool")
def test_get_storage(mock_pool, cls, real_class, args):
    """Instantiating an existing storage should be ok"""
    mock_pool.ThreadedConnectionPool.return_value = None
    actual_storage = get_storage(cls, **args)
    assert actual_storage is not None
    assert isinstance(actual_storage, real_class)


@pytest.mark.parametrize("cls,real_class,args", STORAGES)
@patch("swh.storage.postgresql.storage.psycopg2.pool")
def test_get_storage_legacy_args(mock_pool, cls, real_class, args):
    """Instantiating an existing storage should be ok even with the legacy
    explicit 'args' keys
    """
    mock_pool.ThreadedConnectionPool.return_value = None
    with pytest.warns(DeprecationWarning):
        actual_storage = get_storage(cls, args=args)
    assert actual_storage is not None
    assert isinstance(actual_storage, real_class)


def test_get_storage_failure():
    """Instantiating an unknown storage should raise"""
    with pytest.raises(ValueError, match="Unknown storage class `unknown`"):
        get_storage("unknown")


def test_get_storage_pipeline():
    config = {
        "cls": "pipeline",
        "steps": [
            {"cls": "filter",},
            {"cls": "buffer", "min_batch_size": {"content": 10,},},
            {"cls": "memory",},
        ],
    }

    storage = get_storage(**config)

    assert isinstance(storage, FilteringProxyStorage)
    assert isinstance(storage.storage, BufferingProxyStorage)
    assert isinstance(storage.storage.storage, InMemoryStorage)


def test_get_storage_pipeline_legacy_args():
    config = {
        "cls": "pipeline",
        "steps": [
            {"cls": "filter",},
            {"cls": "buffer", "args": {"min_batch_size": {"content": 10,},}},
            {"cls": "memory",},
        ],
    }

    with pytest.warns(DeprecationWarning):
        storage = get_storage(**config)

    assert isinstance(storage, FilteringProxyStorage)
    assert isinstance(storage.storage, BufferingProxyStorage)
    assert isinstance(storage.storage.storage, InMemoryStorage)


# get_storage's check_config argument tests

# the "remote" and "pipeline" cases are tested in dedicated test functions below


@pytest.mark.parametrize(
-    "cls,real_class,kwargs", [x for x in STORAGES if x.id not in ("remote", "local")]
+    "cls,real_class,kwargs",
+    [x for x in STORAGES if x.id not in ("remote", "local", "postgresql")],
)
def test_get_storage_check_config(cls, real_class, kwargs, monkeypatch):
    """Instantiating an existing storage with check_config should be ok"""
    check_backend_check_config(monkeypatch, dict(cls=cls, **kwargs))


@patch("swh.storage.postgresql.storage.psycopg2.pool")
-def test_get_storage_local_check_config(mock_pool, monkeypatch):
+@pytest.mark.parametrize("clazz", ["local", "postgresql"])
+def test_get_storage_local_check_config(mock_pool, monkeypatch, clazz):
    """Instantiating a local storage with check_config should be ok"""
    mock_pool.ThreadedConnectionPool.return_value = None
    check_backend_check_config(
        monkeypatch,
-        {"cls": "local", "db": "postgresql://db", "objstorage": {"cls": "memory"}},
+        {"cls": clazz, "db": "postgresql://db", "objstorage": {"cls": "memory"}},
        backend_storage_cls=DbStorage,
    )


def test_get_storage_pipeline_check_config(monkeypatch):
    """Test that the check_config option works as intended for a pipelined storage"""
    config = {
        "cls": "pipeline",
        "steps": [
            {"cls": "filter",},
            {"cls": "buffer", "min_batch_size": {"content": 10,},},
            {"cls": "memory",},
        ],
    }
    check_backend_check_config(
        monkeypatch, config,
    )


def test_get_storage_remote_check_config(monkeypatch):
    """Test that the check_config option works as intended for a remote storage"""
    monkeypatch.setattr(
        server, "storage", get_storage(cls="memory", journal_writer={"cls": "memory"})
    )
    test_client = server.app.test_client()

    class MockedRemoteStorage(client.RemoteStorage):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.session.adapters.clear()
            self.session.mount("mock://", RPCTestAdapter(test_client))

    monkeypatch.setattr(client, "RemoteStorage", MockedRemoteStorage)

    config = {
        "cls": "remote",
        "url": "mock://example.com",
    }
    check_backend_check_config(
        monkeypatch, config,
    )


def check_backend_check_config(
    monkeypatch, config, backend_storage_cls=InMemoryStorage
):
    """Check that the staged/indirect storage (pipeline or remote) works as desired
    with regard to the check_config option of the get_storage() factory function.

    If set, the check_config argument is used to call the Storage.check_config()
    method at instantiation time in the get_storage() factory function. This is
    supposed to be passed through each step of the Storage pipeline until it
    reaches the actual backend's (typically in memory or local) check_config()
    method, which performs the verification of read/write access to the backend
    storage.

    monkeypatch is supposed to be the monkeypatch pytest fixture to be used from
    the calling test_ function.

    config is the config dict passed to get_storage()

    backend_storage_cls is the class of the backend storage to be mocked to
    simulate the check_config behavior; it should then be the class of the
    actual backend storage defined in the `config`.
""" access = None def mockcheck(self, check_write=False): if access == "none": return False if access == "read": return check_write is False if access == "write": return True monkeypatch.setattr(backend_storage_cls, "check_config", mockcheck) # simulate no read nor write access to the underlying (memory) storage access = "none" # by default, no check, so no complain assert get_storage(**config) # if asked to check, complain with pytest.raises(EnvironmentError): get_storage(check_config={"check_write": False}, **config) with pytest.raises(EnvironmentError): get_storage(check_config={"check_write": True}, **config) # simulate no write access to the underlying (memory) storage access = "read" # by default, no check so no complain assert get_storage(**config) # if asked to check for read access, no complain get_storage(check_config={"check_write": False}, **config) # if asked to check for write access, complain with pytest.raises(EnvironmentError): get_storage(check_config={"check_write": True}, **config) # simulate read & write access to the underlying (memory) storage access = "write" # by default, no check so no complain assert get_storage(**config) # if asked to check for read access, no complain get_storage(check_config={"check_write": False}, **config) # if asked to check for write access, no complain get_storage(check_config={"check_write": True}, **config) diff --git a/swh/storage/tests/test_server.py b/swh/storage/tests/test_server.py index d4089cf8..2c75ac85 100644 --- a/swh/storage/tests/test_server.py +++ b/swh/storage/tests/test_server.py @@ -1,96 +1,96 @@ -# Copyright (C) 2019-2020 The Software Heritage developers +# Copyright (C) 2019-2021 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import os from typing import Any, Dict import pytest import yaml from swh.core.config import load_from_envvar from swh.storage.api.server import ( StorageServerApp, load_and_check_config, make_app_from_configfile, ) def prepare_config_file(tmpdir, content, name="config.yml"): """Prepare configuration file in `$tmpdir/name` with content `content`. Args: tmpdir (LocalPath): root directory content (str/dict): Content of the file either as string or as a dict. If a dict, converts the dict into a yaml string. name (str): configuration filename Returns path (str) of the configuration file prepared. 
""" config_path = tmpdir / name if isinstance(content, dict): # convert if needed content = yaml.dump(content) config_path.write_text(content, encoding="utf-8") # pytest on python3.5 does not support LocalPath manipulation, so # convert path to string return str(config_path) @pytest.mark.parametrize("storage_class", [None, ""]) def test_load_and_check_config_no_configuration(storage_class): """Inexistent configuration files raises""" with pytest.raises(EnvironmentError, match="Configuration file must be defined"): load_and_check_config(storage_class) def test_load_and_check_config_inexistent_file(): config_path = "/some/inexistent/config.yml" expected_error = f"Configuration file {config_path} does not exist" with pytest.raises(FileNotFoundError, match=expected_error): load_and_check_config(config_path) def test_load_and_check_config_wrong_configuration(tmpdir): """Wrong configuration raises""" config_path = prepare_config_file(tmpdir, "something: useless") with pytest.raises(KeyError, match="Missing 'storage' configuration"): load_and_check_config(config_path) def test_load_and_check_config_local_config_fine(tmpdir): """'local' complete configuration is fine""" - config = {"storage": {"cls": "local", "db": "db", "objstorage": "something",}} + config = {"storage": {"cls": "postgresql", "db": "db", "objstorage": "something",}} config_path = prepare_config_file(tmpdir, config) cfg = load_and_check_config(config_path) assert cfg == config @pytest.fixture def swh_storage_server_config( swh_storage_backend_config: Dict[str, Any] ) -> Dict[str, Any]: return {"storage": swh_storage_backend_config} @pytest.fixture def swh_storage_config(monkeypatch, swh_storage_server_config, tmp_path): conf_path = os.path.join(str(tmp_path), "storage.yml") with open(conf_path, "w") as f: f.write(yaml.dump(swh_storage_server_config)) monkeypatch.setenv("SWH_CONFIG_FILENAME", conf_path) return conf_path def test_server_make_app_from_config_file(swh_storage_config): app = make_app_from_configfile() expected_cfg = load_from_envvar() assert app is not None assert isinstance(app, StorageServerApp) assert app.config["storage"] == expected_cfg["storage"] app2 = make_app_from_configfile() assert app is app2