diff --git a/PKG-INFO b/PKG-INFO
index 97e2f192..6bf581ad 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,250 +1,250 @@
Metadata-Version: 2.1
Name: swh.storage
-Version: 0.35.0
+Version: 0.35.1
Summary: Software Heritage storage manager
Home-page: https://forge.softwareheritage.org/diffusion/DSTO/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
Provides-Extra: journal
License-File: LICENSE
License-File: AUTHORS

swh-storage
===========

Abstraction layer over the archive, allowing access to all stored source code artifacts as well as their metadata.

See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details.

## Quick start

### Dependencies

The Python tests for this module include tests that cannot run without a local PostgreSQL database, so you need the PostgreSQL server executable on your machine (there is no need for a running PostgreSQL server). They also expect a Cassandra server.

#### Debian-like host

```
$ sudo apt install libpq-dev postgresql-11 cassandra
```

#### Non Debian-like host

The tests expect the path to `cassandra` either to be left unspecified, in which case it is looked up at `/usr/sbin/cassandra`, or to be specified through the `SWH_CASSANDRA_BIN` environment variable.

Optionally, you can skip the Cassandra tests:

```
(swh) :~/swh-storage$ tox -- -m 'not cassandra'
```

### Installation

It is strongly recommended to use a virtualenv. In the following, we assume you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for more details on how to set up a working environment.

You can install the package directly from [pypi](https://pypi.org/p/swh.storage):

```
(swh) :~$ pip install swh.storage
[...]
```

Or from sources:

```
(swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git
[...]
(swh) :~$ cd swh-storage
(swh) :~/swh-storage$ pip install .
[...]
```

Then you can check that it is properly installed:

```
(swh) :~$ swh storage --help
Usage: swh storage [OPTIONS] COMMAND [ARGS]...

  Software Heritage Storage tools.

Options:
  -h, --help  Show this message and exit.

Commands:
  rpc-serve  Software Heritage Storage RPC server.
```

## Tests

The best way to run the Python tests for this module is to use [tox](https://tox.readthedocs.io/).

```
(swh) :~$ pip install tox
```

### tox

From the sources directory, simply use tox:

```
(swh) :~/swh-storage$ tox
[...]
========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ==========
_______________________________ summary ________________________________
flake8: commands succeeded
py3: commands succeeded
congratulations :)
```

Note: it is possible to set the `JAVA_HOME` environment variable to specify the version of the JVM to be used by Cassandra.
For example, at the time of writing this, Cassandra does not support Java 14, so one may want to use Java 11 instead:

```
(swh) :~/swh-storage$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/bin/java
(swh) :~/swh-storage$ tox
[...]
```

## Development

The storage server can be started locally. It requires a configuration file and a running PostgreSQL database.

### Sample configuration

A typical `storage.yml` configuration file is:

```
storage:
  cls: postgresql
  db: "dbname=softwareheritage-dev user= password="
  objstorage:
    cls: pathslicing
    root: /tmp/swh-storage/
    slicing: 0:2/2:4/4:6
```

This configuration uses:
- a local storage instance whose db connection points to the local `softwareheritage-dev` instance,
- a local objstorage instance whose:
  - `root` path is /tmp/swh-storage,
  - slicing scheme is `0:2/2:4/4:6`.

The slicing scheme means that the content identifier (sha1) determines where the raw content is stored on disk: the first directory level is made of its first 2 hex characters, the second level of the next 2 hex characters, and the third level of the next 2 hex characters; the file holding the raw content is then named after the complete hash.

For example, 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c

Note that the `root` path should exist on disk before starting the server.

### Starting the storage server

If the Python package has been properly installed (e.g. in a virtualenv), you should be able to use the command:

```
(swh) :~/swh-storage$ swh storage rpc-serve storage.yml
```

This runs a local swh-storage API on port 5002.

```
(swh) :~/swh-storage$ curl http://127.0.0.1:5002
Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

```
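If the `swh.storage` Python package is available in the same environment, the running server can also be queried programmatically through the `remote` storage class. The snippet below is a minimal sketch (the origin URL is hypothetical):

```
from swh.storage import get_storage

# Connect to the RPC server started above through the "remote" backend,
# which forwards every storage call over HTTP.
storage = get_storage(cls="remote", url="http://localhost:5002/")

# Ask the backend whether it is correctly configured (read-only check).
assert storage.check_config(check_write=False)

# Look up a (hypothetical) origin URL; unknown origins come back as None.
print(storage.origin_get(["https://example.com/some/repo"]))
```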
### And then what?

In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc.), you can define a remote storage with this snippet of YAML configuration:

```
storage:
  cls: remote
  url: http://localhost:5002/
```

You could also directly define a PostgreSQL storage with the following snippet:

```
storage:
  cls: postgresql
  db: service=swh-dev
  objstorage:
    cls: pathslicing
    root: /home/storage/swh-storage/
    slicing: 0:2/2:4/4:6
```

## Cassandra

As an alternative to PostgreSQL, swh-storage can use Cassandra as a database backend. It can be used like this:

```
storage:
  cls: cassandra
  hosts:
    - localhost
  objstorage:
    cls: pathslicing
    root: /home/storage/swh-storage/
    slicing: 0:2/2:4/4:6
```

The Cassandra swh-storage implementation supports both Cassandra >= 4.0-alpha2 and ScyllaDB >= 4.4 (and possibly earlier versions, but this is untested). While the main code supports both transparently, running tests or configuring the schema requires specific code when using ScyllaDB, enabled by setting the `SWH_USE_SCYLLADB=1` environment variable.

diff --git a/debian/changelog b/debian/changelog index f6585b84..628ed7d9 100644 --- a/debian/changelog +++ b/debian/changelog @@ -1,2673 +1,2676 @@ -swh-storage (0.35.0-1~swh1~bpo10+1) buster-swh; urgency=medium +swh-storage (0.35.1-1~swh1) unstable-swh; urgency=medium - * Rebuild for buster-swh + * New upstream release 0.35.1 - (tagged by Valentin Lorentz + on 2021-08-20 11:53:33 +0200) + * Upstream changes: - v0.35.1 - * cassandra: Fix crash when + using _missing() functions with more than 100 ids with ScyllaDB. - -- Software Heritage autobuilder (on jenkins-debian1) Wed, 28 Jul 2021 13:11:03 +0000 + -- Software Heritage autobuilder (on jenkins-debian1) Fri, 20 Aug 2021 10:01:16 +0000 swh-storage (0.35.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.35.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-07-28 10:36:09 +0200) * Upstream changes: - v0.35.0 - Implement storage of the ExtID.extid_version field -- Software Heritage autobuilder (on jenkins-debian1) Wed, 28 Jul 2021 08:43:06 +0000 swh-storage (0.34.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.34.0 - (tagged by Vincent SELLIER on 2021-07-07 18:22:00 +0200) * Upstream changes: - v0.34.0 - cassandra: allow to configure the consistency level -- Software Heritage autobuilder (on jenkins-debian1) Wed, 07 Jul 2021 16:58:42 +0000 swh-storage (0.33.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.33.0 - (tagged by Valentin Lorentz on 2021-07-05 16:48:16 +0200) * Upstream changes: - v0.33.0 - * Add endpoint raw_extrinsic_metadata_get_authorities -- Software Heritage autobuilder (on jenkins-debian1) Mon, 05 Jul 2021 15:00:12 +0000 swh-storage (0.32.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.32.0 - (tagged by Vincent SELLIER on 2021-06-28 15:35:44 +0200) * Upstream changes: - v0.32.0 - * cassandra: Add support for non-ASCII origin 'URLs'.
-- Software Heritage autobuilder (on jenkins-debian1) Mon, 28 Jun 2021 16:20:21 +0000 swh-storage (0.31.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.31.0 - (tagged by Valentin Lorentz on 2021-06-25 11:17:50 +0200) * Upstream changes: - v0.31.0 - * cassandra: Add partial support for ScyllaDB - * mypy: Fix errors with release >= v0.900 (but breaks older mypy versions) - * Add endpoints to access REMD by id -- Software Heritage autobuilder (on jenkins-debian1) Fri, 25 Jun 2021 09:26:09 +0000 swh-storage (0.30.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.30.1 - (tagged by Antoine R. Dumont (@ardumont) on 2021-05-21 10:09:02 +0200) * Upstream changes: - v0.30.1 - Finalize the config "local" deprecation in favor of "postgresql" - tests: Make test parameters order deterministic, so they don't crash pytest-xdist - test_cassandra: Improve error when the process is started but not listening -- Software Heritage autobuilder (on jenkins-debian1) Fri, 21 May 2021 08:22:33 +0000 swh-storage (0.30.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.30.0 - (tagged by David Douard on 2021-05-18 16:34:25 +0200) * Upstream changes: - v0.30.0 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 18 May 2021 14:45:21 +0000 swh-storage (0.29.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.29.1 - (tagged by Nicolas Dandrimont on 2021-05-14 18:31:52 +0200) * Upstream changes: - Release swh.storage 0.29.1 - Add missing db migration -- Software Heritage autobuilder (on jenkins-debian1) Fri, 14 May 2021 16:59:42 +0000 swh-storage (0.29.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.29.0 - (tagged by Valentin Lorentz on 2021-05-11 15:04:58 +0200) * Upstream changes: - v0.29.0 - * Make the TenaciousProxyStorage retry when a single object add fails - * Move all proxy storages in swh/storage/proxies/ - * Deprecate the "local" storage cls in favor of "postgresql" - * cassandra: Add tests checking directory_add and snapshot_add are atomic. - * Add endpoint directory_get_entries, to quickly list a directory's entries - * content_get: Add support for queries by sha1_git -- Software Heritage autobuilder (on jenkins-debian1) Tue, 11 May 2021 13:12:42 +0000 swh-storage (0.28.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.28.0 - (tagged by Valentin Lorentz on 2021-05-06 15:52:03 +0200) * Upstream changes: - v0.28.0 - * Normalize all Storage.xxx_add() methods to return a summary - * cassandra: Add 'check_missing' option, to allow updating objects - * cassandra: Add a test of a 'complex' migration, with a PK update - * Add a new TenaciousProxyStorage - * Make postgresql's origin_add not raise an error in case of conflict - * Stop storing authority/fetcher metadata. 
- * tenacious: Document potential issues about objects being dropped - * Use swh.core 0.14 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 06 May 2021 14:06:51 +0000 swh-storage (0.27.4-1~swh1) unstable-swh; urgency=medium * New upstream release 0.27.4 - (tagged by Antoine Lambert on 2021-04-29 14:38:49 +0200) * Upstream changes: - version 0.27.4 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 29 Apr 2021 13:04:46 +0000 swh-storage (0.27.3-1~swh1) unstable-swh; urgency=medium * New upstream release 0.27.3 - (tagged by Antoine Lambert on 2021-04-09 14:59:36 +0200) * Upstream changes: - version 0.27.3 -- Software Heritage autobuilder (on jenkins-debian1) Fri, 09 Apr 2021 13:06:58 +0000 swh-storage (0.27.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.27.2 - (tagged by David Douard on 2021-04-07 15:06:41 +0200) * Upstream changes: - v0.27.2 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 08 Apr 2021 08:05:43 +0000 swh-storage (0.27.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.27.1 - (tagged by Valentin Lorentz on 2021-03-30 17:47:03 +0200) * Upstream changes: - v0.27.1 - * buffer: Add support for 'extid' -- Software Heritage autobuilder (on jenkins-debian1) Tue, 30 Mar 2021 15:59:01 +0000 swh-storage (0.27.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.27.0 - (tagged by Valentin Lorentz on 2021-03-29 14:33:24 +0200) * Upstream changes: - v0.27.0 - * origin_visit_status_add: Fix inconsistent/incorrect errors when type is None and visit is missing. - * extid: remove unicity on (extid_type, extid) and (target_type, target) -- Software Heritage autobuilder (on jenkins-debian1) Mon, 29 Mar 2021 12:44:14 +0000 swh-storage (0.26.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.26.0 - (tagged by Nicolas Dandrimont on 2021-03-22 14:44:35 +0100) * Upstream changes: - Release swh.storage v0.26.0 - Move raw_extrinsic_metadata deduplication to use a new id column. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 22 Mar 2021 21:53:39 +0000 swh-storage (0.25.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.25.0 - (tagged by Antoine Lambert on 2021-03-18 13:55:10 +0100) * Upstream changes: - version 0.25.0 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Mar 2021 13:02:02 +0000 swh-storage (0.24.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.24.1 - (tagged by Valentin Lorentz on 2021-03-04 23:32:36 +0100) * Upstream changes: - v0.24.1 - * tests: Drop hypothesis < 6 requirement - * Remove the remaining references to the deprecated SWHID class - * postgresql: Ensure a minimum limit for the snapshot branches query -- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Mar 2021 22:39:03 +0000 swh-storage (0.24.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.24.0 - (tagged by Valentin Lorentz on 2021-03-02 10:00:23 +0100) * Upstream changes: - v0.24.0 - * storage_tests: recompute ids when evolving RawExtrinsicMetadata objects. 
- * RawExtrinsicMetadata: update to use the API in swh-model 1.0.0 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 02 Mar 2021 09:11:15 +0000 swh-storage (0.23.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.23.2 - (tagged by Antoine Lambert on 2021-02-19 11:47:03 +0100) * Upstream changes: - version 0.23.2 -- Software Heritage autobuilder (on jenkins-debian1) Fri, 19 Feb 2021 10:58:50 +0000 swh-storage (0.23.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.23.1 - (tagged by Antoine R. Dumont (@ardumont) on 2021-02-16 17:19:00 +0100) * Upstream changes: - v0.23.1 - Switch anonymized replayer test to use pytest parametrization -- Software Heritage autobuilder (on jenkins-debian1) Tue, 16 Feb 2021 16:28:25 +0000 swh-storage (0.23.0-1~swh2) unstable-swh; urgency=medium * Fix dependency issue -- Antoine R. Dumont (@ardumont) Tue, 16 Feb 2021 14:34:57 +0100 swh-storage (0.23.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.23.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-02-15 15:20:21 +0100) * Upstream changes: - v0.23.0 - storage: Refactor OriginVisitStatus instantiation - db: Unify sql joins on origin_visit_status using "USING" - storage.postgresql: Use origin_visit_status.type value as source - test_replay: Fix hang since confluent-kafka 1.6 release - postgresql: Fix dbversion() to return the max version instead of a random one. - buffer: ensure objects are flushed in topological order - Return an accurate summary from buffer's flush() method - buffer: add support for snapshots - buffer: add type annotations for tests -- Software Heritage autobuilder (on jenkins-debian1) Mon, 15 Feb 2021 14:39:04 +0000 swh-storage (0.22.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.22.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-02-03 12:09:29 +0100) * Upstream changes: - v0.22.0 - storage: Make origin_get_latest_visit_status return OriginVisitStatus - storage: Change origin_visit_status_get_random interface to return visit_status - Write introduction to swh-storage -- Software Heritage autobuilder (on jenkins-debian1) Wed, 03 Feb 2021 11:15:27 +0000 swh-storage (0.21.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.21.1 - (tagged by Vincent SELLIER on 2021-01-28 14:11:26 +0100) * Upstream changes: - v0.21.1 - * Correctly return origin_visit_status.type value everywhere -- Software Heritage autobuilder (on jenkins-debian1) Thu, 28 Jan 2021 13:19:24 +0000 swh-storage (0.21.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.21.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-01-20 15:42:40 +0100) * Upstream changes: - v0.21.0 - db: Allow new status values not_found, failed to OriginVisitStatus -- Software Heritage autobuilder (on jenkins-debian1) Wed, 20 Jan 2021 14:52:20 +0000 swh-storage (0.20.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.20.0 - (tagged by Antoine R. Dumont (@ardumont) on 2021-01-20 10:24:00 +0100) * Upstream changes: - v0.20.0 - storage: Add persistence of the field OriginVisitStatus.type - backfiller: Add type to the origin_visit_status topic - tests: Make test_content_add_race fail for the right reason. 
-- Software Heritage autobuilder (on jenkins-debian1) Wed, 20 Jan 2021 09:29:54 +0000 swh-storage (0.19.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.19.0 - (tagged by Vincent SELLIER on 2021-01-14 11:09:17 +0100) * Upstream changes: - v0.19.0 - * 2021-01-12 Adapt cassandra storage to ignore the new OriginVisitStatus.type field - * 2021- 01-08 Allow to use the JAVA_HOME environment for cassandra tests - * 2021-01-13 Enforce hypothesis <6 to prevent test breakage - * 2021-01-08 Make the CREATE_TABLES_QUERIES in cassandra/schema.py an explicit list - * 2020-12-18 Add a cli section in the doc - * 2020-11-24 storage.backfill: Allow cli run for origin_visit_status as well - * 2020-11-24 conftest: Reference swh.core.db.pytest_plugin -- Software Heritage autobuilder (on jenkins-debian1) Thu, 14 Jan 2021 10:18:31 +0000 swh-storage (0.18.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.18.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-11-23 14:46:41 +0100) * Upstream changes: - v0.18.0 - requirements-test.txt: Drop no longer needed pytest-postgresql requirement - backfill: Reverse flawed logic in SnapshotBranch generation - migrate_extrinsic_metadata: don't crash when deb revisions aren't referenced by any snapshot -- Software Heritage autobuilder (on jenkins-debian1) Mon, 23 Nov 2020 13:52:32 +0000 swh-storage (0.17.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.17.2 - (tagged by Nicolas Dandrimont on 2020-11-13 11:56:37 +0100) * Upstream changes: - Release swh.storage 0.17.2 - Future- proof get_journal_writer by setting the value_sanitizer argument - migrate_extrinsic_metadata improvements - backfill: only flush on every batch -- Software Heritage autobuilder (on jenkins-debian1) Fri, 13 Nov 2020 11:05:35 +0000 swh-storage (0.17.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.17.1 - (tagged by Antoine Lambert on 2020-11-05 13:50:35 +0100) * Upstream changes: - version 0.17.1 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 05 Nov 2020 12:56:53 +0000 swh-storage (0.17.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.17.0 - (tagged by Nicolas Dandrimont on 2020-11-03 18:09:53 +0100) * Upstream changes: - Release swh.storage v0.17.0 - Migrate all raw extrinsic metadata attributes from id to target - Add an `algos` function to resolve branch aliases - Prepare updates to make swh.journal more generic - Improve api server initialization - Various updates to the migrate_extrinsic_metadata script, notably writing - most metadata on directories instead of revisions -- Software Heritage autobuilder (on jenkins-debian1) Tue, 03 Nov 2020 17:20:45 +0000 swh-storage (0.16.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.16.0 - (tagged by Nicolas Dandrimont on 2020-10-09 18:23:24 +0200) * Upstream changes: - Release swh.storage v0.16.0 - Updates to the intrinsic metadata migration script - Various improvements to the buffer storage - Update swh storage backfill to use common configuration keys -- Software Heritage autobuilder (on jenkins-debian1) Fri, 09 Oct 2020 16:33:11 +0000 swh-storage (0.15.3-1~swh1) unstable-swh; urgency=medium * New upstream release 0.15.3 - (tagged by Nicolas Dandrimont on 2020-09-24 20:14:39 +0200) * Upstream changes: - Release swh.storage v0.15.3 - hopefully fix the documentation build -- Software Heritage autobuilder (on jenkins-debian1) Thu, 24 Sep 2020 18:24:14 +0000 swh-storage (0.15.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.15.2 - (tagged by 
Nicolas Dandrimont on 2020-09-24 19:22:11 +0200) * Upstream changes: - Release swh.storage v0.15.2 - no change rebuild to clean up jenkins fsckup accumulating old files. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 24 Sep 2020 17:28:22 +0000 swh-storage (0.15.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.15.1 - (tagged by Nicolas Dandrimont on 2020-09-24 18:34:54 +0200) * Upstream changes: - Release swh.storage v0.15.1 - Restore buffer proxy behavior with default arguments -- Software Heritage autobuilder (on jenkins-debian1) Thu, 24 Sep 2020 16:44:22 +0000 swh-storage (0.15.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.15.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-24 16:54:07 +0200) * Upstream changes: - v0.15.0 - Support different database flavors in the SQL scripts - Add the SQL commands used to set up the logical replication publication - Output a warning when the version of the database is different than expected - Improve code quality and doc in BufferedProxyStorage - Adapt cli declaration entrypoint to swh.core 0.3 - Add warning about skipped_content (sneaking into the 'content' topics) - graph- replayer: fix to prevent wrong warning - pre-commit: Add isort hook and reorder imports with isort - pytest_plugin: Change dbname to storage to avoid clash in tests - pytest_plugin: Use psql to load SQL files instead of connecting with psycopg2 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 24 Sep 2020 15:03:58 +0000 swh-storage (0.14.3-1~swh1) unstable-swh; urgency=medium * New upstream release 0.14.3 - (tagged by David Douard on 2020-09-17 16:58:59 +0200) * Upstream changes: - v0.14.3 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Sep 2020 16:53:56 +0000 swh-storage (0.14.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.14.2 - (tagged by David Douard on 2020-09-11 15:31:22 +0200) * Upstream changes: - v0.14.2 -- Software Heritage autobuilder (on jenkins-debian1) Fri, 11 Sep 2020 13:37:11 +0000 swh-storage (0.14.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.14.1 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-04 15:43:51 +0200) * Upstream changes: - v0.14.1 - algos.diff: Add missed revision_get conversion -- Software Heritage autobuilder (on jenkins-debian1) Fri, 04 Sep 2020 13:52:17 +0000 swh-storage (0.14.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.14.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-04 12:23:52 +0200) * Upstream changes: - v0.14.0 - Refactor revision_get storage API to return Revision objects - cassandra: Discard Content ctime field in content_get_partition -- Software Heritage autobuilder (on jenkins-debian1) Fri, 04 Sep 2020 10:59:54 +0000 swh-storage (0.13.3-1~swh1) unstable-swh; urgency=medium * New upstream release 0.13.3 - (tagged by Antoine R. Dumont (@ardumont) on 2020-09-01 14:34:57 +0200) * Upstream changes: - v0.13.3 - storage*: release_get(...) -> List[Optional[Release]] - Make StorageInterface a Protocol. - Add a validating storage proxy, to check ids before insertion. 
- Add a --check-config option for cli commands - Remove the deprecated config-path option from `swh storage rpc-serve` command - Add support for a new "check_config" config option in get_storage() - Check for db version mismatch in PgStorage.check_config() - Add a check_dbversion() method to the Db class - Fix pytest_plugin's database janitor: do not truncate the dbversion table - algos.snapshot: Add visits_and_snapshots_get_from_revision - storage/interface: Remove deprecated diff endpoints - storage_tests: Remove duplicated postgresql-specific tests. - Move postgresql-related files to swh/storage/postgresql/ -- Software Heritage autobuilder (on jenkins-debian1) Tue, 01 Sep 2020 12:40:29 +0000 swh-storage (0.13.2-1~swh2) unstable-swh; urgency=medium * Add mypy-extensions to build-dependencies -- Nicolas Dandrimont Fri, 21 Aug 2020 12:17:05 +0200 swh-storage (0.13.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.13.2 - (tagged by Valentin Lorentz on 2020-08-20 08:59:39 +0200) * Upstream changes: - v0.13.2 - * pg: Fix crash in snapshot_get when the snapshot does not exist. - * cassandra: fix signatures - * in_memory: rewrite as a backend for the cassandra storage - * remove endpoint snapshot_get_by_origin_visit. - * pg: rewrite converters to work with model objects -- Software Heritage autobuilder (on jenkins-debian1) Thu, 20 Aug 2020 07:18:50 +0000 swh-storage (0.13.1-1~swh3) unstable-swh; urgency=medium * Update dependencies -- Antoine R. Dumont (@ardumont) Fri, 07 Aug 2020 21:17:01 +0000 swh-storage (0.13.1-1~swh2) unstable-swh; urgency=medium * Update dependencies -- Antoine R. Dumont (@ardumont) Fri, 07 Aug 2020 21:02:01 +0000 swh-storage (0.13.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.13.1 - (tagged by Valentin Lorentz on 2020-08-07 18:14:32 +0200) * Upstream changes: - v0.13.1 - * Make snapshot_get_branches return a TypedDict containing SnapshotBranch objects. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 07 Aug 2020 16:23:01 +0000 swh-storage (0.13.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.13.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-08-07 12:38:47 +0200) * Upstream changes: - v0.13.0 - storage*: Rename and type content_get(List[Sha1]) -> List[Optional[Content]] - storage*: Rename content_get_data(Sha1) -> Optional[bytes] - Simplify as Content.ctime None is popped out of a to_dict call in recent model - cassandra.storage: Use next token for pagination instead of computing it -- Software Heritage autobuilder (on jenkins-debian1) Fri, 07 Aug 2020 10:49:28 +0000 swh-storage (0.12.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.12.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-08-06 08:50:17 +0200) * Upstream changes: - v0.12.0 - Type storage endpoints - Drop content_get_range endpoint in favor of content_get_partition -- Software Heritage autobuilder (on jenkins-debian1) Thu, 06 Aug 2020 06:55:26 +0000 swh-storage (0.11.10-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.10 - (tagged by Antoine R. Dumont (@ardumont) on 2020-08-04 14:10:21 +0200) * Upstream changes: - v0.11.10 - tests: Improve coverage on directory_ls endpoints - storage*: Type content_find(...) -> List[Content] - storage*: Type {cnt,dir,rev,rel,snp}_get_random(...) -> Sha1Git -- Software Heritage autobuilder (on jenkins-debian1) Tue, 04 Aug 2020 12:15:21 +0000 swh-storage (0.11.9-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.9 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-08-03 11:55:10 +0200) * Upstream changes: - v0.11.9 - storage*: Drop origin-get- range in favor of origin-list - storage*: Do not allow unknown visit status in origin_visit*_get_latest - storage*: Add type annotation to origin_count - Reuse swh.core stream_results function -- Software Heritage autobuilder (on jenkins-debian1) Mon, 03 Aug 2020 10:02:56 +0000 swh-storage (0.11.8-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.8 - (tagged by Valentin Lorentz on 2020-07-31 14:57:09 +0200) * Upstream changes: - v0.11.8 - * test_replay: update for swh.journal 0.4.1. - * Add support for metadata-related object types to the backfiller and replayer. - * pg: Rewrite _origin_query to force the query planner to filter on URLs before filtering on visits. - * Make raw_extrinsic_metadata_get return PagedResult instead of Dict. - * Rename argument 'object_type' of raw_extrinsic_metadata_get to 'type'. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 31 Jul 2020 13:17:40 +0000 swh-storage (0.11.6-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.6 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-30 16:20:48 +0200) * Upstream changes: - v0.11.6 - storage*: Adapt origin_list(...) -> PagedResult[Origin] - algos.snapshot: Open snapshot_id_get_from_revision - storage*: add origin_visit_status_get(...) -> PagedResult[OriginVisitStatus] - Add type annotations on get_storage. - buffer: Pass lists to backend functions, not iterables. - storage*: Simplify next-page- token computation - filter: Fix types passed to the proxied storage. - Fix upcoming type warning with swh.core > v0.1.2. - Make API endpoints take Lists instead of Iterables as arguments - storage*: use an enum to explicit the order in origin_visit_get - storage*: origin_visit_get(...) -> PagedResult[OriginVisit] - Write metadata + metadata authorities/fetchers to the journal. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 30 Jul 2020 14:29:10 +0000 swh-storage (0.11.5-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.5 - (tagged by Valentin Lorentz on 2020-07-28 09:55:34 +0200) * Upstream changes: - v0.11.5 - in_memory: fix tie-breaking when two visits have the same date. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 28 Jul 2020 08:10:21 +0000 swh-storage (0.11.4-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.4 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-27 16:08:42 +0200) * Upstream changes: - v0.11.4 - Rename object_metadata to raw_extrinsic_metadata - metadata_{authority,fetcher}_add: Fix crash when the iterable argument is empty - storage*: origin_visit_get_by -> Optional[OriginVisit] - storage*: origin_visit_find_by_date -> Optional[OriginVisit] - storage*: type origin_visit_get_latest endpoint result - algos.origin: Simplify origin_get_latest_visit_status function -- Software Heritage autobuilder (on jenkins-debian1) Mon, 27 Jul 2020 14:16:18 +0000 swh-storage (0.11.3-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.3 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-27 08:01:03 +0200) * Upstream changes: - v0.11.3 - storage*: origin_get(Iterable[str]) -> Iterable[Optional[Origin]] - storage*.origin_visit_get_random: Read model objects -- Software Heritage autobuilder (on jenkins-debian1) Mon, 27 Jul 2020 06:08:55 +0000 swh-storage (0.11.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.2 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-07-23 12:09:51 +0200) * Upstream changes: - v0.11.2 - pgstorage: Drop unnecessary indirection from reading origin_visit - pytest-plugin: Make sample_data return data model objects - tests: Use only model objects for testing - Drop validate storage proxy -- Software Heritage autobuilder (on jenkins-debian1) Thu, 23 Jul 2020 10:18:15 +0000 swh-storage (0.11.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.1 - (tagged by Valentin Lorentz on 2020-07-20 13:01:20 +0200) * Upstream changes: - v0.11.1 - * Use model objects in tests - * Rename 'deposit' authority type to 'deposit_client'. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 20 Jul 2020 11:14:39 +0000 swh-storage (0.11.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.11.0 - (tagged by Valentin Lorentz on 2020-07-20 11:01:10 +0200) * Upstream changes: - v0.11.0 - * Make metadata-related endpoints consistent with other endpoints by using Iterables of swh- model objects instead of a dict. - * Update tests to use model objects -- Software Heritage autobuilder (on jenkins-debian1) Mon, 20 Jul 2020 09:12:25 +0000 swh-storage (0.10.6-1~swh1) unstable-swh; urgency=medium * New upstream release 0.10.6 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-16 15:31:19 +0200) * Upstream changes: - v0.10.6 - pytest_plugin: Ensure fixture instantiates correctly -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 Jul 2020 13:36:34 +0000 swh-storage (0.10.5-1~swh1) unstable-swh; urgency=medium * New upstream release 0.10.5 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-16 14:24:50 +0200) * Upstream changes: - v0.10.5 - pytest_plugin: Do not expose the validate proxy storage - pytest-plugin: Expose a sample_data_model fixture - tests: Start using model objects and drop validate proxy when possible -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 Jul 2020 12:34:44 +0000 swh-storage (0.10.4-1~swh1) unstable-swh; urgency=medium * New upstream release 0.10.4 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-16 11:25:25 +0200) * Upstream changes: - v0.10.4 - pytest_plugin: Avoid fixture client to declare optional dependency - Allow cassandra binary path to be configured through env variable - 158: Make schema and migration converge so the migration works -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 Jul 2020 09:37:24 +0000 swh-storage (0.10.3-1~swh1) unstable-swh; urgency=medium * New upstream release 0.10.3 - (tagged by Antoine Lambert on 2020-07-10 16:26:27 +0200) * Upstream changes: - version 0.10.3 -- Software Heritage autobuilder (on jenkins-debian1) Fri, 10 Jul 2020 14:40:28 +0000 swh-storage (0.10.2-1~swh2) unstable-swh; urgency=medium * Fix debian rules to avoid double pytest-plugin loading clash -- Antoine R. Dumont (@ardumont) Fri, 10 Jul 2020 09:21:14 +0200 swh-storage (0.10.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.10.2 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-10 08:30:37 +0200) * Upstream changes: - v0.10.2 - tests: Do no expose the pytest- plugin through setuptools entry - Convert ImmutableDict to dict before passing it to json.dumps - docs: Rework dia -> pdf pipeline for inkscape 1.0 -- Software Heritage autobuilder (on jenkins-debian1) Fri, 10 Jul 2020 06:52:42 +0000 swh-storage (0.10.1-1~swh2) unstable-swh; urgency=medium * Update runtime dependencies -- Antoine R. 
Dumont (@ardumont) Wed, 08 Jul 2020 14:56:01 +0200 swh-storage (0.10.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.10.1 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-08 14:32:52 +0200) * Upstream changes: - v0.10.1 - extract-pytest-fixture Move shareable fixtures out of conftest into a dedicated pytest plugin - Migrate from vcversioner to setuptools-scm -- Software Heritage autobuilder (on jenkins-debian1) Wed, 08 Jul 2020 12:39:15 +0000 swh-storage (0.10.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.10.0 - (tagged by David Douard on 2020-07-08 09:20:49 +0200) * Upstream changes: - v0.10.0 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 08 Jul 2020 10:11:09 +0000 swh-storage (0.9.3-1~swh1) unstable-swh; urgency=medium * New upstream release 0.9.3 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-06 09:55:56 +0200) * Upstream changes: - v0.9.3 - storage: Send metrics from the origin_add endpoint -- Software Heritage autobuilder (on jenkins-debian1) Mon, 06 Jul 2020 08:06:13 +0000 swh-storage (0.9.2-1~swh1) unstable-swh; urgency=medium * New upstream release 0.9.2 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-03 18:48:39 +0200) * Upstream changes: - v0.9.2 - pg-storage: Add missing cur parameter passing -- Software Heritage autobuilder (on jenkins-debian1) Fri, 03 Jul 2020 16:54:13 +0000 swh-storage (0.9.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.9.1 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-03 16:50:45 +0200) * Upstream changes: - v0.9.1 - storage.db: Drop db.origin_visit_upsert behavior -- Software Heritage autobuilder (on jenkins-debian1) Fri, 03 Jul 2020 15:00:32 +0000 swh-storage (0.9.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.9.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-07-01 09:53:34 +0200) * Upstream changes: - v0.9.0 - storage*: Drop intermediary conversion step into OriginVisit - pg: use 'on conflict do nothing' strategy for duplicate metadata rows. - Make the code location of metadata endpoints consistent across backends. - Add content_metadata_{add,get}. - Add context columns to object_metadata table and object_metadata_{add,get}. - Generalize origin_metadata to allow support for other object types in the future. - Work around the segmentation faults caused by pytest-coverage + multiprocessing. -- Software Heritage autobuilder (on jenkins-debian1) Wed, 01 Jul 2020 08:02:08 +0000 swh-storage (0.8.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.8.1 - (tagged by David Douard on 2020-06-30 10:08:21 +0200) * Upstream changes: - v0.8.1 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 30 Jun 2020 08:36:45 +0000 swh-storage (0.8.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.8.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-29 09:33:12 +0200) * Upstream changes: - v0.8.0 - Iterate over paginated visits in batches to retrieve latest visit/snapshot - storage*: Open order parameter to origin-visit-get endpoint - tests/replayer/storage*: Drop obsolete origin visit fields - Relax checks on journal writes regarding origin-visit* - replayer: Fix isoformat datetime string for origin-visit - Deprecate the origin_add_one() endpoint - test_storage: Add missing tests on origin_visit_get method -- Software Heritage autobuilder (on jenkins-debian1) Mon, 29 Jun 2020 07:44:00 +0000 swh-storage (0.7.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.7.0 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-06-22 15:42:25 +0200) * Upstream changes: - v0.7.0 - test_origin: Rename appropriately tests - algos: Improve origin visit get latest visit status algorithm - test_snapshot: Do not use origin_visit_add returned result - algos.snapshot: Fix edge case when snapshot is not resolved - Ensure ids are correct in tests' storage_data - Fix tests' storage_data revisions - SQL: replace the hash(url) index by a unique btree(url) on the origin table - Make sure the pagination in swh_snapshot_get_by_id uses the proper indexes -- Software Heritage autobuilder (on jenkins-debian1) Mon, 22 Jun 2020 14:09:33 +0000 swh-storage (0.6.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.6.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-19 11:29:42 +0200) * Upstream changes: - v0.6.0 - Move deprecated endpoint snapshot_get_latest from api endpoint to algos - algos.origin: Open origin-get-latest-visit-status function - storage*: Allow origin-visit-get-latest to filter on type - test_origin: Align storage initialization within tests -- Software Heritage autobuilder (on jenkins-debian1) Fri, 19 Jun 2020 12:45:32 +0000 swh-storage (0.5.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.5.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-17 16:03:15 +0200) * Upstream changes: - v0.5.0 - test_storage: Fix flakiness in round to milliseconds test util method - storage*: Add origin- visit-status-get-latest endpoint - Fix/update the backfiller - validate: accept model objects as well as dicts on all add endpoints - cql: Fix blackified strings - storage: Add missing cur parameter - Fix db_to_author() converter to return None is all fields are None -- Software Heritage autobuilder (on jenkins-debian1) Wed, 17 Jun 2020 14:19:37 +0000 swh-storage (0.4.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.4.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-16 09:50:25 +0200) * Upstream changes: - v0.4.0 - ardumont/master storage*: Drop leftover code - storage*: Drop origin_visit_upsert endpoint - storage*: Remove origin-visit-update endpoint - replay: Replay origin-visit and origin-visit-status - in_memory: Make origin- visit-status-add respect "on conflict ignore" policy - test_storage: Add journal behavior coverage for origin-visit-*add - Start migrating the validate proxy toward using BaseModel objects - storage*: Do not write twice origin-visit-status in journal -- Software Heritage autobuilder (on jenkins-debian1) Tue, 16 Jun 2020 07:58:23 +0000 swh-storage (0.3.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.3.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-12 09:08:23 +0200) * Upstream changes: - v0.3.0 - origin-visit-add storage*: Align origin-visit-add to take iterable of OriginVisit objects -- Software Heritage autobuilder (on jenkins-debian1) Fri, 12 Jun 2020 07:22:03 +0000 swh-storage (0.2.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.2.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-10 11:51:30 +0200) * Upstream changes: - v0.2.0 - origin-visit-upsert: Write visit status objects to the journal - origin-visit-update: Write visit status objects to the journal - origin-visit-add: Write visit status to the journal - Add pagination to origin_metadata_get. - Deduplicate origin-metadata when they have the same authority + discovery_date + fetcher. 
- Open `origin_visit_status_add` endpoint to add origin visit statuses - Add a replayer test for anonymized journal topics - Small refactoring of the InMemoryStorage to make it more consistent -- Software Heritage autobuilder (on jenkins-debian1) Wed, 10 Jun 2020 10:02:45 +0000 swh-storage (0.1.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.1.1 - (tagged by Nicolas Dandrimont on 2020-06-04 16:49:22 +0200) * Upstream changes: - Release swh.storage v0.1.1 - Work around tests hanging during Debian build -- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Jun 2020 14:56:54 +0000 swh-storage (0.1.0-2~swh1) unstable-swh; urgency=medium * Update dependencies. -- David Douard Thu, 04 Jun 2020 13:40:52 +0200 swh-storage (0.1.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.1.0 - (tagged by David Douard on 2020-06-04 12:08:46 +0200) * Upstream changes: - v0.1.0 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Jun 2020 10:28:43 +0000 swh-storage (0.0.193-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.193 - (tagged by Antoine R. Dumont (@ardumont) on 2020-05-28 14:28:54 +0200) * Upstream changes: - v0.0.193 - pg: Write origin visit updates & status, read from origin_visit_status - Make content.blake2s256 not null. - Remove unused SQL functions. - README: Update necessary dependencies for test purposes - Add a pre-commit hook to check there are version bumps in sql/upgrades/*.sql - Add missing dbversion bump in 150.sql. - Add artifact metadata to the extrinsic metadata storage specification. - Add not null constraints to metadata_authority/origin_metadata - Realign schema with latest 149 migration script -- Software Heritage autobuilder (on jenkins-debian1) Thu, 28 May 2020 12:37:58 +0000 swh-storage (0.0.192-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.192 - (tagged by Valentin Lorentz on 2020-05-19 18:42:00 +0200) * Upstream changes: - v0.0.192 - * origin_metadata_add: Reject non-bytes types for 'metadata'. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 19 May 2020 16:54:00 +0000 swh-storage (0.0.191-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.191 - (tagged by Valentin Lorentz on 2020-05-19 13:43:35 +0200) * Upstream changes: - v0.0.191 - * Implement the new extrinsic metadata specification/vocabulary. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 19 May 2020 11:52:00 +0000 swh-storage (0.0.190-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.190 - (tagged by Antoine R. Dumont (@ardumont) on 2020-05-18 14:10:39 +0200) * Upstream changes: - v0.0.190 - storage: metadata_provider: Ensure idempotency when creating provider - journal: add a skipped_content topic dedicated to SkippedContent objects - Add missing return annotations on JournalWriter methods - Improve a bit the exception message of JournalWriter.content_update - Refactor the JournalWriter class to normalize its methods - tests: fix test_replay; do only use aware datetime objects - test_kafka_writer: Add missing object type skipped_content -- Software Heritage autobuilder (on jenkins-debian1) Mon, 18 May 2020 12:18:09 +0000 swh-storage (0.0.189-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.189 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-04-30 14:50:54 +0200) * Upstream changes: - v0.0.189 - pg: Write both origin visit updates & status, read from origin_visit - pg-storage: Add new created state - setup.py: add documentation link - metadata spec: Fix title hierarchy - tests: Use aware datetimes instead of naive ones. - cassandra: Adapt internal implementations to use origin visit status - in_memory: Adapt internal implementations to use origin visit status -- Software Heritage autobuilder (on jenkins-debian1) Thu, 30 Apr 2020 12:58:57 +0000 swh-storage (0.0.188-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.188 - (tagged by David Douard on 2020-04-28 13:44:20 +0200) * Upstream changes: - v0.0.188 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 28 Apr 2020 11:52:08 +0000 swh-storage (0.0.187-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.187 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-14 18:13:08 +0200) * Upstream changes: - v0.0.187 - storage.interface: Actually define the remote flush operation -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Apr 2020 16:23:41 +0000 swh-storage (0.0.186-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.186 - (tagged by Nicolas Dandrimont on 2020-04-14 17:09:22 +0200) * Upstream changes: - Release swh.storage v0.0.186 - Drop backwards-compatibility code with swh.journal < 0.0.30 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Apr 2020 15:20:57 +0000 swh-storage (0.0.185-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.185 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-14 14:15:32 +0200) * Upstream changes: - v0.0.185 - storage.filter: Remove internal state - test: update storage tests to (future) swh.journal 0.0.30 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Apr 2020 12:22:06 +0000 swh-storage (0.0.184-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.184 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-10 16:07:32 +0200) * Upstream changes: - v0.0.184 - storage*: Add flush endpoints to storage implems (backend, proxy) - test_retry: Add missing skipped_content_add tests -- Software Heritage autobuilder (on jenkins-debian1) Fri, 10 Apr 2020 14:14:20 +0000 swh-storage (0.0.183-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.183 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-09 12:35:53 +0200) * Upstream changes: - v0.0.183 - proxy storage: Add a clear_buffers endpoint - buffer proxy storage: Filter out duplicate objects prior to storage write - storage: Prevent erroneous HashCollisions by using the same ctime for all rows. - Enable black - origin_visit_update: ensure it raises a StorageArgumentException - Adapt cassandra backend to validating model types - tests: many refactoring improvements - tests: Shut down cassandra connection before closing the fixture down - Add more type annotations -- Software Heritage autobuilder (on jenkins-debian1) Thu, 09 Apr 2020 10:46:29 +0000 swh-storage (0.0.182-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.182 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-27 07:02:13 +0100) * Upstream changes: - v0.0.182 - storage*: Update origin_visit_update to make status parameter mandatory - test: Adapt origin validation test according to latest model changes - Respec discovery_date as a Python datetime instead of an ISO string. - origin_visit_add: Add missing db/cur argument to call to origin_get. 
-- Software Heritage autobuilder (on jenkins-debian1) Fri, 27 Mar 2020 06:13:17 +0000 swh-storage (0.0.181-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.181 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-25 09:50:49 +0100) * Upstream changes: - v0.0.181 - storage*: Hex encode content hashes in HashCollision exception - Add format of discovery_date in the metadata specification. - Store the value of token(partition_key) in skipped_content_by_* table, instead of three hashes. - Store the value of token(partition_key) in content_by_* table, instead of three hashes. -- Software Heritage autobuilder (on jenkins-debian1) Wed, 25 Mar 2020 09:03:43 +0000 swh-storage (0.0.180-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.180 - (tagged by Nicolas Dandrimont on 2020-03-18 18:24:41 +0100) * Upstream changes: - Release swh.storage v0.0.180 - Stop counting origin additions multiple times in statsd -- Software Heritage autobuilder (on jenkins-debian1) Wed, 18 Mar 2020 17:45:36 +0000 swh-storage (0.0.179-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.179 - (tagged by Nicolas Dandrimont on 2020-03-18 16:05:13 +0100) * Upstream changes: - Release swh.storage v0.0.179. - fix requirements-swh.txt to use proper version restriction - reduce the transaction load for content writes and reads -- Software Heritage autobuilder (on jenkins-debian1) Wed, 18 Mar 2020 15:50:50 +0000 swh-storage (0.0.178-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.178 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-16 12:51:28 +0100) * Upstream changes: - v0.0.178 - origin_visit_add: Adapt endpoint signature to return OriginVisit - origin_visit_upsert: Use OriginVisit object as input - storage/writer: refactor JournalWriter.content_add to send model objects -- Software Heritage autobuilder (on jenkins-debian1) Mon, 16 Mar 2020 11:59:18 +0000 swh-storage (0.0.177-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.177 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-10 11:37:33 +0100) * Upstream changes: - v0.0.177 - storage: Identify and provide the collision hashes in exception - Guarantee the order of results for revision_get and release_get - tests: Improve test speed - sql: do not attempt to create the plpgsql lang if already exists - Update requirement on swh.core for RPCClient method overrides -- Software Heritage autobuilder (on jenkins-debian1) Tue, 10 Mar 2020 10:48:11 +0000 swh-storage (0.0.176-1~swh2) unstable-swh; urgency=medium * Update build dependencies -- Antoine R. Dumont (@ardumont) Mon, 02 Mar 2020 14:36:00 +0100 swh-storage (0.0.176-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.176 - (tagged by Valentin Lorentz on 2020-02-28 14:44:10 +0100) * Upstream changes: - v0.0.176 - * Accept cassandra-driver >= 3.22. - * Make the RPC client and objstorage helper fetch Content.data from lazy - contents. - * Move ctime out of the validation proxy. 
-- Software Heritage autobuilder (on jenkins-debian1) Fri, 28 Feb 2020 15:21:27 +0000 swh-storage (0.0.175-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.175 - (tagged by Antoine Lambert on 2020-02-20 13:51:40 +0100) * Upstream changes: - version 0.0.175 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 20 Feb 2020 13:18:34 +0000 swh-storage (0.0.174-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.174 - (tagged by Valentin Lorentz on 2020-02-19 14:18:59 +0100) * Upstream changes: - v0.0.174 - * Fix inconsistent behavior of skipped_content_missing across backends. - * Fix FilteringProxy to not drop skipped-contents with a missing sha1_git. - * Make storage proxies use swh-model objects instead of dicts. - * Add support for (de)serializing swh-model in RPC calls. -- Software Heritage autobuilder (on jenkins-debian1) Wed, 19 Feb 2020 15:00:32 +0000 swh-storage (0.0.172-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.172 - (tagged by Valentin Lorentz on 2020-02-12 14:00:04 +0100) * Upstream changes: - v0.0.172 - * Unify exception raised by invalid input to API endpoints. - * Add a validation proxy for _add() methods. This proxy is *required* - in front of all backends whose _add() methods may be called or they'll - crash at runtime. - * Fix RecursionError when storage proxies are deepcopied or unpickled. - * storages: Refactor objstorage operations with a dedicated collaborator - * storages: Refactor journal operations with a dedicated writer collab -- Software Heritage autobuilder (on jenkins-debian1) Wed, 12 Feb 2020 13:13:47 +0000 swh-storage (0.0.171-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.171 - (tagged by Valentin Lorentz on 2020-02-06 14:46:05 +0100) * Upstream changes: - v0.0.171 - * Split 'content_add' method into 'content_add' and 'skipped_content_add'. - * Increase Cassandra requests timeout to 1 second. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 06 Feb 2020 14:07:37 +0000 swh-storage (0.0.170-1~swh3) unstable-swh; urgency=medium * Update build dependencies -- Antoine R. Dumont (@ardumont) Mon, 03 Feb 2020 17:30:38 +0100 swh-storage (0.0.170-1~swh2) unstable-swh; urgency=medium * Update build dependencies -- Antoine R. Dumont (@ardumont) Mon, 03 Feb 2020 16:00:39 +0100 swh-storage (0.0.170-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.170 - (tagged by Antoine R. Dumont (@ardumont) on 2020-02-03 14:11:53 +0100) * Upstream changes: - v0.0.170 - swh.storage.cassandra: Add Cassandra backend implementation -- Software Heritage autobuilder (on jenkins-debian1) Mon, 03 Feb 2020 13:23:48 +0000 swh-storage (0.0.169-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.169 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-30 13:40:00 +0100) * Upstream changes: - v0.0.169 - retry: Add retry behavior on pipeline storage with flushing failure -- Software Heritage autobuilder (on jenkins-debian1) Thu, 30 Jan 2020 13:26:23 +0000 swh-storage (0.0.168-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.168 - (tagged by Valentin Lorentz on 2020-01-30 11:19:31 +0100) * Upstream changes: - v0.0.168 - * Implement content_update for the in-mem storage. - * Remove cur/db arguments from the in- mem storage. - * Move Storage documentation and endpoint paths to a new StorageInterface class - * Rename in_memory.Storage to in_memory.InMemoryStorage. 
- * CONTRIBUTORS: add Daniele Serafini -- Software Heritage autobuilder (on jenkins-debian1) Thu, 30 Jan 2020 10:25:30 +0000 swh-storage (0.0.167-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.167 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-24 14:55:57 +0100) * Upstream changes: - v0.0.167 - pgstorage: Empty temp tables instead of dropping them -- Software Heritage autobuilder (on jenkins-debian1) Fri, 24 Jan 2020 14:01:57 +0000 swh-storage (0.0.166-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.166 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-24 09:51:52 +0100) * Upstream changes: - v0.0.166 - storage: Add endpoint to get missing content (by sha1_git) and missing snapshot - Remove redundant config checks in load_and_check_config - Remove 'id' and 'object_id' from the output of object_find_by_sha1_git - Make origin_visit_get_random return None instead of {} if there are no results - docs: Fix sphinx warnings -- Software Heritage autobuilder (on jenkins-debian1) Fri, 24 Jan 2020 09:00:12 +0000 swh-storage (0.0.165-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.165 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-17 14:04:53 +0100) * Upstream changes: - v0.0.165 - storage.retry: Fix objects loading when using generator parameters -- Software Heritage autobuilder (on jenkins-debian1) Fri, 17 Jan 2020 13:09:39 +0000 swh-storage (0.0.164-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.164 - (tagged by Antoine Lambert on 2020-01-16 17:54:40 +0100) * Upstream changes: - version 0.0.164 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 Jan 2020 17:05:02 +0000 swh-storage (0.0.163-1~swh2) unstable-swh; urgency=medium * Fix test dependency -- Antoine R. Dumont (@ardumont) Tue, 14 Jan 2020 17:26:08 +0100 swh-storage (0.0.163-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.163 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-14 17:12:03 +0100) * Upstream changes: - v0.0.163 - retry: Improve proxy storage for add endpoints - in_memory: Make directory_get_random return None when storage empty - storage: Change content_get_metadata api to return Dict[bytes, List[Dict]] - storage: Add content_get_partition endpoint to replace content_get_range - storage: Add endpoint origin_list to replace origin_get_range -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Jan 2020 16:17:45 +0000 swh-storage (0.0.162-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.162 - (tagged by Valentin Lorentz on 2019-12-16 14:37:44 +0100) * Upstream changes: - v0.0.162 - Add {content,directory,revision,release,snapshot}_get_random. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 16 Dec 2019 13:41:39 +0000 swh-storage (0.0.161-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.161 - (tagged by Antoine R. Dumont (@ardumont) on 2019-12-10 15:03:28 +0100) * Upstream changes: - v0.0.161 - storage: Add endpoint to randomly pick an origin -- Software Heritage autobuilder (on jenkins-debian1) Tue, 10 Dec 2019 14:08:15 +0000 swh-storage (0.0.160-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.160 - (tagged by Antoine R. 
Dumont (@ardumont) on 2019-12-06 11:15:48 +0100) * Upstream changes: - v0.0.160 - storage.buffer: Buffer release objects as well - storage.tests: Unify tests sample data - Implement origin lookup by sha1 -- Software Heritage autobuilder (on jenkins-debian1) Fri, 06 Dec 2019 10:23:44 +0000 swh-storage (0.0.159-1~swh2) unstable-swh; urgency=medium * Force fast hypothesis profile when running tests -- Antoine R. Dumont (@ardumont) Tue, 26 Nov 2019 17:08:16 +0100 swh-storage (0.0.159-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.159 - (tagged by Antoine R. Dumont (@ardumont) on 2019-11-22 11:05:41 +0100) * Upstream changes: - v0.0.159 - Add 'pipeline' storage "class" for more readable configurations. - tests: Improve tests environments configuration - Fix a few typos reported by codespell - Add a pre-commit-hooks.yaml config file - Remove utils/(dump|fix)_revisions scripts -- Software Heritage autobuilder (on jenkins-debian1) Fri, 22 Nov 2019 10:10:31 +0000 swh-storage (0.0.158-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.158 - (tagged by Antoine R. Dumont (@ardumont) on 2019-11-14 13:33:00 +0100) * Upstream changes: - v0.0.158 - Drop schemata module (migrated back to swh-lister) -- Software Heritage autobuilder (on jenkins-debian1) Thu, 14 Nov 2019 12:37:18 +0000 swh-storage (0.0.157-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.157 - (tagged by Nicolas Dandrimont on 2019-11-13 13:22:39 +0100) * Upstream changes: - Release swh.storage 0.0.157 - schemata.distribution: Fix bogus NotImplementedError on Area.index_uris -- Software Heritage autobuilder (on jenkins-debian1) Wed, 13 Nov 2019 12:27:07 +0000 swh-storage (0.0.156-1~swh2) unstable-swh; urgency=medium * Add version constraint on psycopg2 -- Nicolas Dandrimont Wed, 30 Oct 2019 18:21:34 +0100 swh-storage (0.0.156-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.156 - (tagged by Valentin Lorentz on 2019-10-30 15:12:10 +0100) * Upstream changes: - v0.0.156 - * Stop supporting origin ids in API (except in origin_get_range). - * Make visit['origin'] a string everywhere (instead of a dict). -- Software Heritage autobuilder (on jenkins-debian1) Wed, 30 Oct 2019 14:29:28 +0000 swh-storage (0.0.155-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.155 - (tagged by David Douard on 2019-10-30 12:14:14 +0100) * Upstream changes: - v0.0.155 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 30 Oct 2019 11:18:37 +0000 swh-storage (0.0.154-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.154 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-17 13:47:57 +0200) * Upstream changes: - v0.0.154 - Fix tests in debian build -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Oct 2019 11:52:46 +0000 swh-storage (0.0.153-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.153 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-17 13:21:00 +0200) * Upstream changes: - v0.0.153 - Deploy new test fixture -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Oct 2019 11:26:12 +0000 swh-storage (0.0.152-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.152 - (tagged by Antoine R. 
Dumont (@ardumont) on 2019-10-08 16:55:43 +0200) * Upstream changes: - v0.0.152 - swh.storage.buffer: Add buffering proxy storage implementation - swh.storage.filter: Add filtering storage implementation - swh.storage.tests: Improve db transaction handling - swh.storage.tests: Add more tests - swh.storage.storage: introduce a db() context manager -- Software Heritage autobuilder (on jenkins-debian1) Tue, 08 Oct 2019 15:03:16 +0000 swh-storage (0.0.151-1~swh2) unstable-swh; urgency=medium * Add missing build-dependency on python3-swh.journal -- Nicolas Dandrimont Tue, 01 Oct 2019 18:28:19 +0200 swh-storage (0.0.151-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.151 - (tagged by Stefano Zacchiroli on 2019-10-01 10:04:36 +0200) * Upstream changes: - v0.0.151 - * tox: anticipate mypy run to just after flake8 - * mypy.ini: be less flaky w.r.t. the packages installed in tox - * storage.py: ignore typing of optional get_journal_writer import - * mypy: ignore swh.journal to work-around dependency loop - * init.py: switch to documented way of extending path - * typing: minimal changes to make a no- op mypy run pass - * Write objects to the journal only if they don't exist yet. - * Use origin URLs for skipped_content['origin'] instead of origin ids. - * Properly mock get_journal_writer for the remote-pg-storage tests. - * journal_writer: use journal writer from swh.journal - * fix typos in docstrings and sample paths - * storage.origin_visit_add: Remove deprecated 'ts' parameter - * click "required" param wants bool, not int -- Software Heritage autobuilder (on jenkins-debian1) Tue, 01 Oct 2019 08:09:53 +0000 swh-storage (0.0.150-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.150 - (tagged by Antoine R. Dumont (@ardumont) on 2019-09-04 16:09:59 +0200) * Upstream changes: - v0.0.150 - tests/test_storage: Remove failing assertion after swh-model update - tests/test_storage: Fix tests execution with psycopg2 < 2.8 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 04 Sep 2019 14:16:09 +0000 swh-storage (0.0.149-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.149 - (tagged by Antoine R. Dumont (@ardumont) on 2019-09-03 14:00:57 +0200) * Upstream changes: - v0.0.149 - Add support for origin_url in origin_metadata_* - Make origin_add/origin_visit_update validate their input - Make snapshot_add validate its input - Make revision_add and release_add validate their input - Make directory_add validate its input - Make content_add validate its input using swh-model -- Software Heritage autobuilder (on jenkins-debian1) Tue, 03 Sep 2019 12:27:51 +0000 swh-storage (0.0.148-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.148 - (tagged by Valentin Lorentz on 2019-08-23 10:33:02 +0200) * Upstream changes: - v0.0.148 - Tests improvements: - * Remove 'next_branch' from test input data. - * Fix off-by-one error when using origin_visit_upsert on with an unknown visit id. - * Use explicit arguments for origin_visit_add. - * Remove test_content_missing__marked_missing, it makes no sense. - Drop person ids: - * Stop leaking person ids. - * Remove person_get endpoint. - Logging fixes: - * Enforce log level for the werkzeug logger. - * Eliminate warnings about %TYPE. 
- * api: use RPCServerApp and RPCClient instead of deprecated classes - Other: - * Add support for skipped content in in- memory storage -- Software Heritage autobuilder (on jenkins-debian1) Fri, 23 Aug 2019 08:48:21 +0000 swh-storage (0.0.147-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.147 - (tagged by Valentin Lorentz on 2019-07-18 12:11:37 +0200) * Upstream changes: - Make origin_get ignore the `type` argument -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Jul 2019 10:16:16 +0000 swh-storage (0.0.146-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.146 - (tagged by Valentin Lorentz on 2019-07-18 10:46:21 +0200) * Upstream changes: - Progress toward getting rid of origin ids - * Less dependency on origin ids in the in-mem storage - * add the SWH_STORAGE_IN_MEMORY_ENABLE_ORIGIN_IDS env var - * Remove legacy behavior of snapshot_add -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Jul 2019 08:52:09 +0000 swh-storage (0.0.145-1~swh3) unstable-swh; urgency=medium * Properly rebuild for unstable-swh -- Nicolas Dandrimont Thu, 11 Jul 2019 14:03:30 +0200 swh-storage (0.0.145-1~swh2) buster-swh; urgency=medium * Remove useless swh.scheduler dependency -- Nicolas Dandrimont Thu, 11 Jul 2019 13:53:45 +0200 swh-storage (0.0.145-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.145 - (tagged by Valentin Lorentz on 2019-07-02 12:00:53 +0200) * Upstream changes: - v0.0.145 - Add an 'origin_visit_find_by_date' endpoint. - Add support for origin urls in all endpoints -- Software Heritage autobuilder (on jenkins-debian1) Tue, 02 Jul 2019 10:19:19 +0000 swh-storage (0.0.143-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.143 - (tagged by Valentin Lorentz on 2019-06-05 13:18:14 +0200) * Upstream changes: - Add test for snapshot/release counters. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 01 Jul 2019 12:38:40 +0000 swh-storage (0.0.142-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.142 - (tagged by Valentin Lorentz on 2019-06-11 15:24:49 +0200) * Upstream changes: - Mark network tests, so they can be disabled. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 11 Jun 2019 13:44:19 +0000 swh-storage (0.0.141-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.141 - (tagged by Valentin Lorentz on 2019-06-06 17:05:03 +0200) * Upstream changes: - Add support for using URL instead of ID in snapshot_get_latest. 
-- Software Heritage autobuilder (on jenkins-debian1) Tue, 11 Jun 2019 10:36:32 +0000 swh-storage (0.0.140-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.140 - (tagged by mihir(faux__) on 2019-03-24 21:47:31 +0530) * Upstream changes: - Changes the output of content_find method to a list in case of hash collisions and makes the sql query on python side and added test duplicate input, colliding sha256 and colliding blake2s256 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 May 2019 12:09:04 +0000 swh-storage (0.0.139-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.139 - (tagged by Nicolas Dandrimont on 2019-04-18 17:57:57 +0200) * Upstream changes: - Release swh.storage v0.0.139 - Backwards- compatibility improvements for snapshot_add - Better transactionality in revision_add/release_add - Fix backwards metric names - Handle shallow histories properly -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Apr 2019 16:08:28 +0000 swh-storage (0.0.138-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.138 - (tagged by Valentin Lorentz on 2019-04-09 16:40:49 +0200) * Upstream changes: - Use the db_transaction decorator on all _add() methods. - So they gracefully release the connection on error instead - of relying on reference-counting to call the Db's `__del__` - (which does not happen in Hypothesis tests) because a ref - to it is kept via the traceback object. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 09 Apr 2019 16:50:48 +0000 swh-storage (0.0.137-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.137 - (tagged by Valentin Lorentz on 2019-04-08 15:40:24 +0200) * Upstream changes: - Make test_origin_get_range run faster. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 08 Apr 2019 13:56:16 +0000 swh-storage (0.0.135-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.135 - (tagged by Valentin Lorentz on 2019-04-04 20:42:32 +0200) * Upstream changes: - Make content_add_metadata require a ctime argument. - This makes Python set the ctime instead of pgsql. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 05 Apr 2019 14:43:28 +0000 swh-storage (0.0.134-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.134 - (tagged by Valentin Lorentz on 2019-04-03 13:38:58 +0200) * Upstream changes: - Don't leak origin ids to the journal. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Apr 2019 10:16:09 +0000 swh-storage (0.0.132-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.132 - (tagged by Valentin Lorentz on 2019-04-01 11:50:30 +0200) * Upstream changes: - Use sha1 instead of bigint as FK from origin_visit to snapshot (part 1: add new column) -- Software Heritage autobuilder (on jenkins-debian1) Mon, 01 Apr 2019 13:30:48 +0000 swh-storage (0.0.131-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.131 - (tagged by Nicolas Dandrimont on 2019-03-28 17:24:44 +0100) * Upstream changes: - Release swh.storage v0.0.131 - Add statsd metrics to storage RPC backend - Clean up snapshot_add/origin_visit_update - Uniformize RPC backend to use POSTs everywhere -- Software Heritage autobuilder (on jenkins-debian1) Thu, 28 Mar 2019 16:34:07 +0000 swh-storage (0.0.130-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.130 - (tagged by Valentin Lorentz on 2019-02-26 10:50:44 +0100) * Upstream changes: - Add an helper function to list all origins in the storage. 
-- Software Heritage autobuilder (on jenkins-debian1) Wed, 13 Mar 2019 14:01:04 +0000 swh-storage (0.0.129-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.129 - (tagged by Valentin Lorentz on 2019-02-27 10:42:29 +0100) * Upstream changes: - Double the timeout of revision_get. - Metadata indexers often hit the limit. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 01 Mar 2019 10:11:28 +0000 swh-storage (0.0.128-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.128 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-21 14:59:22 +0100) * Upstream changes: - v0.0.128 - api.server: Fix wrong exception type - storage.cli: Fix cli entry point name to the expected name (setup.py) -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 14:07:23 +0000 swh-storage (0.0.127-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.127 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-21 13:34:19 +0100) * Upstream changes: - v0.0.127 - api.wsgi: Open wsgi entrypoint and check config at startup time - api.server: Make the api server load and check its configuration - swh.storage.cli: Migrate the api server startup in swh.storage.cli -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 12:59:48 +0000 swh-storage (0.0.126-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.126 - (tagged by Valentin Lorentz on 2019-02-21 10:18:26 +0100) * Upstream changes: - Double the timeout of snapshot_get_latest. - Metadata indexers often hit the limit. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 11:24:52 +0000 swh-storage (0.0.125-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.125 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-14 10:13:31 +0100) * Upstream changes: - v0.0.125 - api/server: Do not read configuration at each request -- Software Heritage autobuilder (on jenkins-debian1) Thu, 14 Feb 2019 16:57:01 +0000 swh-storage (0.0.124-1~swh3) unstable-swh; urgency=low * New upstream release, fixing the distribution this time -- Antoine R. Dumont (@ardumont) Thu, 14 Feb 2019 17:51:29 +0100 swh-storage (0.0.124-1~swh2) unstable; urgency=medium * New upstream release for dependency fix reasons -- Antoine R. Dumont (@ardumont) Thu, 14 Feb 2019 09:27:55 +0100 swh-storage (0.0.124-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.124 - (tagged by Antoine Lambert on 2019-02-12 14:40:53 +0100) * Upstream changes: - version 0.0.124 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 12 Feb 2019 13:46:08 +0000 swh-storage (0.0.123-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.123 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-08 15:06:49 +0100) * Upstream changes: - v0.0.123 - Make Storage.origin_get support a list of origins, like other - Storage.*_get methods. - Stop using _to_bytes functions. 
- Use the BaseDb (and friends) from swh-core -- Software Heritage autobuilder (on jenkins-debian1) Fri, 08 Feb 2019 14:14:18 +0000 swh-storage (0.0.122-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.122 - (tagged by Antoine Lambert on 2019-01-28 11:57:27 +0100) * Upstream changes: - version 0.0.122 -- Software Heritage autobuilder (on jenkins-debian1) Mon, 28 Jan 2019 11:02:45 +0000 swh-storage (0.0.121-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.121 - (tagged by Antoine Lambert on 2019-01-28 11:31:48 +0100) * Upstream changes: - version 0.0.121 -- Software Heritage autobuilder (on jenkins-debian1) Mon, 28 Jan 2019 10:36:40 +0000 swh-storage (0.0.120-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.120 - (tagged by Antoine Lambert on 2019-01-17 12:04:27 +0100) * Upstream changes: - version 0.0.120 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Jan 2019 11:12:47 +0000 swh-storage (0.0.119-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.119 - (tagged by Antoine R. Dumont (@ardumont) on 2019-01-11 11:57:13 +0100) * Upstream changes: - v0.0.119 - listener: Notify Kafka when an origin visit is updated -- Software Heritage autobuilder (on jenkins-debian1) Fri, 11 Jan 2019 11:02:07 +0000 swh-storage (0.0.118-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.118 - (tagged by Antoine Lambert on 2019-01-09 16:59:15 +0100) * Upstream changes: - version 0.0.118 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 09 Jan 2019 18:51:34 +0000 swh-storage (0.0.117-1~swh1) unstable-swh; urgency=medium * v0.0.117 * listener: Adapt decoding behavior depending on the object type -- Antoine R. Dumont (@ardumont) Thu, 20 Dec 2018 14:48:44 +0100 swh-storage (0.0.116-1~swh1) unstable-swh; urgency=medium * v0.0.116 * Update requirements to latest swh.core -- Antoine R. Dumont (@ardumont) Fri, 14 Dec 2018 15:57:04 +0100 swh-storage (0.0.115-1~swh1) unstable-swh; urgency=medium * version 0.0.115 -- Antoine Lambert Fri, 14 Dec 2018 15:47:52 +0100 swh-storage (0.0.114-1~swh1) unstable-swh; urgency=medium * version 0.0.114 -- Antoine Lambert Wed, 05 Dec 2018 10:59:49 +0100 swh-storage (0.0.113-1~swh1) unstable-swh; urgency=medium * v0.0.113 * in-memory storage: Add recursive argument to directory_ls endpoint -- Antoine R. Dumont (@ardumont) Fri, 30 Nov 2018 11:56:44 +0100 swh-storage (0.0.112-1~swh1) unstable-swh; urgency=medium * v0.0.112 * in-memory storage: Align with existing storage * docstring: Improvements and adapt according to api * doc: update index to match new swh-doc format * Increase test coverage for stat_counters + fix its bugs. -- Antoine R. Dumont (@ardumont) Fri, 30 Nov 2018 10:28:02 +0100 swh-storage (0.0.111-1~swh1) unstable-swh; urgency=medium * v0.0.111 * Move generative tests in their own module * Open in-memory storage implementation -- Antoine R. Dumont (@ardumont) Wed, 21 Nov 2018 08:55:14 +0100 swh-storage (0.0.110-1~swh1) unstable-swh; urgency=medium * v0.0.110 * storage: Open content_get_range endpoint * tests: Start using hypothesis for tests generation * Improvements: Remove SQLisms from the tests and API * docs: Document metadata providers -- Antoine R. 
Dumont (@ardumont) Fri, 16 Nov 2018 11:53:14 +0100 swh-storage (0.0.109-1~swh1) unstable-swh; urgency=medium * version 0.0.109 -- Antoine Lambert Mon, 12 Nov 2018 14:11:09 +0100 swh-storage (0.0.108-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.108 * Add a function to get a full snapshot from the paginated view -- Nicolas Dandrimont Thu, 18 Oct 2018 18:32:10 +0200 swh-storage (0.0.107-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.107 * Enable pagination of snapshot branches * Drop occurrence-related tables * Drop entity-related tables -- Nicolas Dandrimont Wed, 17 Oct 2018 15:06:07 +0200 swh-storage (0.0.106-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.106 * Fix origin_visit_get_latest_snapshot logic * Improve directory iterator * Drop backwards compatibility between snapshots and occurrences * Drop the occurrence table -- Nicolas Dandrimont Mon, 08 Oct 2018 17:03:54 +0200 swh-storage (0.0.105-1~swh1) unstable-swh; urgency=medium * v0.0.105 * Increase directory_ls endpoint to 20 seconds * Add snapshot to the stats endpoint * Improve documentation -- Antoine R. Dumont (@ardumont) Mon, 10 Sep 2018 11:36:27 +0200 swh-storage (0.0.104-1~swh1) unstable-swh; urgency=medium * version 0.0.104 -- Antoine Lambert Wed, 29 Aug 2018 15:55:37 +0200 swh-storage (0.0.103-1~swh1) unstable-swh; urgency=medium * v0.0.103 * swh.storage.storage: origin_add returns updated list of dict with id -- Antoine R. Dumont (@ardumont) Mon, 30 Jul 2018 11:47:53 +0200 swh-storage (0.0.102-1~swh1) unstable-swh; urgency=medium * Release swh-storage v0.0.102 * Stop using temporary tables for read-only queries * Add timeouts for some read-only queries -- Nicolas Dandrimont Tue, 05 Jun 2018 14:06:54 +0200 swh-storage (0.0.101-1~swh1) unstable-swh; urgency=medium * v0.0.101 * swh.storage.api.client: Permit to specify the query timeout option -- Antoine R. Dumont (@ardumont) Thu, 24 May 2018 12:13:51 +0200 swh-storage (0.0.100-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.100 * remote api: only instantiate storage once per import * add thread-awareness to the storage implementation * properly cleanup after tests * parallelize objstorage and storage additions -- Nicolas Dandrimont Sat, 12 May 2018 18:12:40 +0200 swh-storage (0.0.99-1~swh1) unstable-swh; urgency=medium * v0.0.99 * storage: Add methods to compute directories/revisions diff * Add a new table for "bucketed" object counts * doc: update table clusters in SQL diagram * swh.storage.content_missing: Improve docstring -- Antoine R. Dumont (@ardumont) Tue, 20 Feb 2018 13:32:25 +0100 swh-storage (0.0.98-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.98 * Switch backwards compatibility for snapshots off -- Nicolas Dandrimont Tue, 06 Feb 2018 15:27:15 +0100 swh-storage (0.0.97-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.97 * refactor database initialization * use a separate thread instead of a temporary file for COPY operations * add more snapshot-related endpoints -- Nicolas Dandrimont Tue, 06 Feb 2018 14:07:07 +0100 swh-storage (0.0.96-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.96 * Add snapshot models * Add support for hg revision type -- Nicolas Dandrimont Tue, 19 Dec 2017 16:25:57 +0100 swh-storage (0.0.95-1~swh1) unstable-swh; urgency=medium * v0.0.95 * swh.storage: Rename indexer_configuration to tool * swh.storage: Migrate indexer model to its own model -- Antoine R. 
Dumont (@ardumont) Thu, 07 Dec 2017 09:56:31 +0100 swh-storage (0.0.94-1~swh1) unstable-swh; urgency=medium * v0.0.94 * Open searching origins methods to storage -- Antoine R. Dumont (@ardumont) Tue, 05 Dec 2017 12:32:57 +0100 swh-storage (0.0.93-1~swh1) unstable-swh; urgency=medium * v0.0.93 * swh.storage: Open indexer_configuration_add endpoint * swh-data: Update content mimetype indexer configuration * origin_visit_get: make order repeatable * db: Make unique indices actually unique and vice versa * Add origin_metadata endpoints (add, get, etc...) * cleanup: Remove unused content provenance cache tables -- Antoine R. Dumont (@ardumont) Fri, 24 Nov 2017 11:14:11 +0100 swh-storage (0.0.92-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.92 * make swh.storage.schemata work on SQLAlchemy 1.0 -- Nicolas Dandrimont Thu, 12 Oct 2017 19:51:24 +0200 swh-storage (0.0.91-1~swh1) unstable-swh; urgency=medium * Release swh.storage version 0.0.91 * Update packaging runes -- Nicolas Dandrimont Thu, 12 Oct 2017 18:41:46 +0200 swh-storage (0.0.90-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.90 * Remove leaky dependency on python3-kafka -- Nicolas Dandrimont Wed, 11 Oct 2017 18:53:22 +0200 swh-storage (0.0.89-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.89 * Add new package for ancillary schemata * Add new metadata-related entry points * Update for new swh.model -- Nicolas Dandrimont Wed, 11 Oct 2017 17:39:29 +0200 swh-storage (0.0.88-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.88 * Move the archiver to its own module * Prepare building for stretch -- Nicolas Dandrimont Fri, 30 Jun 2017 14:52:12 +0200 swh-storage (0.0.87-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.87 * update tasks to new swh.scheduler api -- Nicolas Dandrimont Mon, 12 Jun 2017 17:54:11 +0200 swh-storage (0.0.86-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.86 * archiver updates -- Nicolas Dandrimont Tue, 06 Jun 2017 18:43:43 +0200 swh-storage (0.0.85-1~swh1) unstable-swh; urgency=medium * v0.0.85 * Improve license endpoint's unknown license policy -- Antoine R. Dumont (@ardumont) Tue, 06 Jun 2017 17:55:40 +0200 swh-storage (0.0.84-1~swh1) unstable-swh; urgency=medium * v0.0.84 * Update indexer endpoints to use indexer configuration id * Add indexer configuration endpoint -- Antoine R. Dumont (@ardumont) Fri, 02 Jun 2017 16:16:47 +0200 swh-storage (0.0.83-1~swh1) unstable-swh; urgency=medium * v0.0.83 * Add blake2s256 new hash computation on content -- Antoine R. Dumont (@ardumont) Fri, 31 Mar 2017 12:27:09 +0200 swh-storage (0.0.82-1~swh1) unstable-swh; urgency=medium * v0.0.82 * swh.storage.listener: Subscribe to new origin notifications * sql/swh-func: improve equality check on the three columns for swh_content_missing * swh.storage: add length to directory listing primitives * refactoring: Migrate from swh.core.hashutil to swh.model.hashutil * swh.storage.archiver.updater: Create a content updater journal client * vault: add a git fast-import cooker * vault: generic cache to allow multiple cooker types and formats -- Antoine R. 
Dumont (@ardumont) Tue, 21 Mar 2017 14:50:16 +0100 swh-storage (0.0.81-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.81 * archiver improvements for mass injection in azure -- Nicolas Dandrimont Thu, 09 Mar 2017 11:15:28 +0100 swh-storage (0.0.80-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.80 * archiver improvements related to the mass injection of contents in azure * updates to the vault cooker -- Nicolas Dandrimont Tue, 07 Mar 2017 15:12:35 +0100 swh-storage (0.0.79-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.79 * archiver: keep counts of objects in each archive * converters: normalize timestamps using swh.model -- Nicolas Dandrimont Tue, 14 Feb 2017 19:37:36 +0100 swh-storage (0.0.78-1~swh1) unstable-swh; urgency=medium * v0.0.78 * Refactoring some common code into swh.core + adaptation api calls in * swh.objstorage and swh.storage (storage and vault) -- Antoine R. Dumont (@ardumont) Thu, 26 Jan 2017 15:08:03 +0100 swh-storage (0.0.77-1~swh1) unstable-swh; urgency=medium * v0.0.77 * Paginate results for origin_visits endpoint -- Antoine R. Dumont (@ardumont) Thu, 19 Jan 2017 14:41:49 +0100 swh-storage (0.0.76-1~swh1) unstable-swh; urgency=medium * v0.0.76 * Unify storage and objstorage configuration and instantiation functions -- Antoine R. Dumont (@ardumont) Thu, 15 Dec 2016 18:25:58 +0100 swh-storage (0.0.75-1~swh1) unstable-swh; urgency=medium * v0.0.75 * Add information on indexer tools (T610) -- Antoine R. Dumont (@ardumont) Fri, 02 Dec 2016 18:21:36 +0100 swh-storage (0.0.74-1~swh1) unstable-swh; urgency=medium * v0.0.74 * Use strict equality for content ctags' symbols search -- Antoine R. Dumont (@ardumont) Tue, 29 Nov 2016 17:25:29 +0100 swh-storage (0.0.73-1~swh1) unstable-swh; urgency=medium * v0.0.73 * Improve ctags search query for edge cases -- Antoine R. Dumont (@ardumont) Mon, 28 Nov 2016 16:34:55 +0100 swh-storage (0.0.72-1~swh1) unstable-swh; urgency=medium * v0.0.72 * Permit pagination on content_ctags_search api endpoint -- Antoine R. Dumont (@ardumont) Thu, 24 Nov 2016 14:19:29 +0100 swh-storage (0.0.71-1~swh1) unstable-swh; urgency=medium * v0.0.71 * Open full-text search endpoint on ctags -- Antoine R. Dumont (@ardumont) Wed, 23 Nov 2016 17:33:51 +0100 swh-storage (0.0.70-1~swh1) unstable-swh; urgency=medium * v0.0.70 * Add new license endpoints (add/get) * Update ctags endpoints to align update conflict policy -- Antoine R. Dumont (@ardumont) Thu, 10 Nov 2016 17:27:49 +0100 swh-storage (0.0.69-1~swh1) unstable-swh; urgency=medium * v0.0.69 * storage: Open ctags entry points (missing, add, get) * storage: allow adding several origins at once -- Antoine R. Dumont (@ardumont) Thu, 20 Oct 2016 16:07:07 +0200 swh-storage (0.0.68-1~swh1) unstable-swh; urgency=medium * v0.0.68 * indexer: Open mimetype/language get endpoints * indexer: Add the mimetype/language add function with conflict_update flag * archiver: Extend worker-to-backend to transmit messages to another * queue (once done) -- Antoine R. Dumont (@ardumont) Thu, 13 Oct 2016 15:30:21 +0200 swh-storage (0.0.67-1~swh1) unstable-swh; urgency=medium * v0.0.67 * Fix provenance storage init function -- Antoine R. Dumont (@ardumont) Wed, 12 Oct 2016 02:24:12 +0200 swh-storage (0.0.66-1~swh1) unstable-swh; urgency=medium * v0.0.66 * Improve provenance configuration format -- Antoine R. 
Dumont (@ardumont) Wed, 12 Oct 2016 01:39:26 +0200 swh-storage (0.0.65-1~swh1) unstable-swh; urgency=medium * v0.0.65 * Open api entry points for swh.indexer about content mimetype and * language * Update schema graph to latest version -- Antoine R. Dumont (@ardumont) Sat, 08 Oct 2016 10:00:30 +0200 swh-storage (0.0.64-1~swh1) unstable-swh; urgency=medium * v0.0.64 * Fix: Missing incremented version 5 for archiver.dbversion * Retrieve information on a content cached * sql/swh-func: content cache populates lines in deterministic order -- Antoine R. Dumont (@ardumont) Thu, 29 Sep 2016 21:50:59 +0200 swh-storage (0.0.63-1~swh1) unstable-swh; urgency=medium * v0.0.63 * Make the 'worker to backend' destination agnostic (message parameter) * Improve 'unknown sha1' policy (archiver db can lag behind swh db) * Improve 'force copy' policy -- Antoine R. Dumont (@ardumont) Fri, 23 Sep 2016 12:29:50 +0200 swh-storage (0.0.62-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.62 * Updates to the provenance cache to reduce churn on the main tables -- Nicolas Dandrimont Thu, 22 Sep 2016 18:54:52 +0200 swh-storage (0.0.61-1~swh1) unstable-swh; urgency=medium * v0.0.61 * Handle copies of unregistered sha1 in archiver db * Fix copy to only the targeted destination * Update to latest python3-swh.core dependency -- Antoine R. Dumont (@ardumont) Thu, 22 Sep 2016 13:44:05 +0200 swh-storage (0.0.60-1~swh1) unstable-swh; urgency=medium * v0.0.60 * Update archiver dependencies -- Antoine R. Dumont (@ardumont) Tue, 20 Sep 2016 16:46:48 +0200 swh-storage (0.0.59-1~swh1) unstable-swh; urgency=medium * v0.0.59 * Unify configuration property between director/worker * Deal with potential missing contents in the archiver db * Improve get_contents_error implementation * Remove dead code in swh.storage.db about archiver -- Antoine R. Dumont (@ardumont) Sat, 17 Sep 2016 12:50:14 +0200 swh-storage (0.0.58-1~swh1) unstable-swh; urgency=medium * v0.0.58 * ArchiverDirectorToBackend reads sha1 from stdin and sends chunks of sha1 * for archival. -- Antoine R. Dumont (@ardumont) Fri, 16 Sep 2016 22:17:14 +0200 swh-storage (0.0.57-1~swh1) unstable-swh; urgency=medium * v0.0.57 * Update swh.storage.archiver -- Antoine R. Dumont (@ardumont) Thu, 15 Sep 2016 16:30:11 +0200 swh-storage (0.0.56-1~swh1) unstable-swh; urgency=medium * v0.0.56 * Vault: Add vault implementation (directory cooker & cache * implementation + its api) * Archiver: Add another archiver implementation (direct to backend) -- Antoine R. Dumont (@ardumont) Thu, 15 Sep 2016 10:56:35 +0200 swh-storage (0.0.55-1~swh1) unstable-swh; urgency=medium * v0.0.55 * Fix origin_visit endpoint -- Antoine R. Dumont (@ardumont) Thu, 08 Sep 2016 15:21:28 +0200 swh-storage (0.0.54-1~swh1) unstable-swh; urgency=medium * v0.0.54 * Open origin_visit_get_by entry point -- Antoine R. Dumont (@ardumont) Mon, 05 Sep 2016 12:36:34 +0200 swh-storage (0.0.53-1~swh1) unstable-swh; urgency=medium * v0.0.53 * Add cache about content provenance * debian: fix python3-swh.storage.archiver runtime dependency * debian: create new package python3-swh.storage.provenance -- Antoine R. Dumont (@ardumont) Fri, 02 Sep 2016 11:14:09 +0200 swh-storage (0.0.52-1~swh1) unstable-swh; urgency=medium * v0.0.52 * Package python3-swh.storage.archiver -- Antoine R. 
Dumont (@ardumont) Thu, 25 Aug 2016 14:55:23 +0200 swh-storage (0.0.51-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.51 * Add new metadata column to origin_visit * Update swh-add-directory script for updated API -- Nicolas Dandrimont Wed, 24 Aug 2016 14:36:03 +0200 swh-storage (0.0.50-1~swh1) unstable-swh; urgency=medium * v0.0.50 * Add a function to pull (only) metadata for a list of contents * Update occurrence_add api entry point to properly deal with origin_visit * Add origin_visit api entry points to create/update origin_visit -- Antoine R. Dumont (@ardumont) Tue, 23 Aug 2016 16:29:26 +0200 swh-storage (0.0.49-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.49 * Proper dependency on python3-kafka -- Nicolas Dandrimont Fri, 19 Aug 2016 13:45:52 +0200 swh-storage (0.0.48-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.48 * Updates to the archiver * Notification support for new object creations -- Nicolas Dandrimont Fri, 19 Aug 2016 12:13:50 +0200 swh-storage (0.0.47-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.47 * Update storage archiver to new schemaless schema -- Nicolas Dandrimont Fri, 22 Jul 2016 16:59:19 +0200 swh-storage (0.0.46-1~swh1) unstable-swh; urgency=medium * v0.0.46 * Update archiver bootstrap -- Antoine R. Dumont (@ardumont) Wed, 20 Jul 2016 19:04:42 +0200 swh-storage (0.0.45-1~swh1) unstable-swh; urgency=medium * v0.0.45 * Separate swh.storage.archiver's db from swh.storage.storage -- Antoine R. Dumont (@ardumont) Tue, 19 Jul 2016 15:05:36 +0200 swh-storage (0.0.44-1~swh1) unstable-swh; urgency=medium * v0.0.44 * Open listing visits per origin api -- Quentin Campos Fri, 08 Jul 2016 11:27:10 +0200 swh-storage (0.0.43-1~swh1) unstable-swh; urgency=medium * v0.0.43 * Extract objstorage to its own package swh.objstorage -- Quentin Campos Mon, 27 Jun 2016 14:57:12 +0200 swh-storage (0.0.42-1~swh1) unstable-swh; urgency=medium * Add an object storage multiplexer to allow transition between multiple versions of object storages. -- Quentin Campos Tue, 21 Jun 2016 15:03:52 +0200 swh-storage (0.0.41-1~swh1) unstable-swh; urgency=medium * Refactoring of the object storage in order to allow multiple versions of it, as well as a multiplexer for version transition. -- Quentin Campos Thu, 16 Jun 2016 15:54:16 +0200 swh-storage (0.0.40-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.40: * Refactor objstorage to allow for different implementations * Updates to the checker functionality * Bump swh.core dependency to v0.0.20 -- Nicolas Dandrimont Tue, 14 Jun 2016 17:25:42 +0200 swh-storage (0.0.39-1~swh1) unstable-swh; urgency=medium * v0.0.39 * Add run_from_webserver function for objstorage api server * Add unique identifier message on default api server route endpoints -- Antoine R. 
Dumont (@ardumont) Fri, 20 May 2016 15:27:34 +0200 swh-storage (0.0.38-1~swh1) unstable-swh; urgency=medium * v0.0.38 * Add an http api for object storage * Implement an archiver to perform backup copies -- Quentin Campos Fri, 20 May 2016 14:40:14 +0200 swh-storage (0.0.37-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.37 * Add fullname to person table * Add svn as a revision type -- Nicolas Dandrimont Fri, 08 Apr 2016 16:44:24 +0200 swh-storage (0.0.36-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.36 * Add json-schema documentation for the jsonb fields * Overhaul entity handling -- Nicolas Dandrimont Wed, 16 Mar 2016 17:27:17 +0100 swh-storage (0.0.35-1~swh1) unstable-swh; urgency=medium * Release swh-storage v0.0.35 * Factor in temporary tables with only an id (db v059) * Allow generic object search by sha1_git (db v060) -- Nicolas Dandrimont Thu, 25 Feb 2016 16:21:01 +0100 swh-storage (0.0.34-1~swh1) unstable-swh; urgency=medium * Release swh.storage version 0.0.34 * occurrence improvements * commit metadata improvements -- Nicolas Dandrimont Fri, 19 Feb 2016 18:20:07 +0100 swh-storage (0.0.33-1~swh1) unstable-swh; urgency=medium * Bump swh.storage to version 0.0.33 -- Nicolas Dandrimont Fri, 05 Feb 2016 11:17:00 +0100 swh-storage (0.0.32-1~swh1) unstable-swh; urgency=medium * v0.0.32 * Let the person's id flow * sql/upgrades/051: 050->051 schema change * sql/upgrades/050: 049->050 schema change - Clean up obsolete functions * sql/upgrades/049: Final take for 048->049 schema change. * sql: Use a new schema for occurrences -- Antoine R. Dumont (@ardumont) Fri, 29 Jan 2016 17:44:27 +0100 swh-storage (0.0.31-1~swh1) unstable-swh; urgency=medium * v0.0.31 * Deal with occurrence_history.branch, occurrence.branch, release.name as bytes -- Antoine R. Dumont (@ardumont) Wed, 27 Jan 2016 15:45:53 +0100 swh-storage (0.0.30-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage v0.0.30 release * type-agnostic occurrences and revisions -- Nicolas Dandrimont Tue, 26 Jan 2016 07:36:43 +0100 swh-storage (0.0.29-1~swh1) unstable-swh; urgency=medium * v0.0.29 * New: * Upgrade sql schema to 041→043 * Deal with communication downtime between clients and storage * Open occurrence_get(origin_id) to retrieve latest occurrences per origin * Open release_get_by to retrieve a release by origin * Open directory_get to retrieve information on directory by id * Open entity_get to retrieve information on entity + hierarchy from its uuid * Open directory_get that retrieve information on directory per id * Update: * directory_get/directory_ls: Rename to directory_ls * revision_log: update to retrieve logs from multiple root revisions * revision_get_by: branch name filtering is now optional -- Antoine R. Dumont (@ardumont) Wed, 20 Jan 2016 16:15:50 +0100 swh-storage (0.0.28-1~swh1) unstable-swh; urgency=medium * v0.0.28 * Open entity_get api -- Antoine R. Dumont (@ardumont) Fri, 15 Jan 2016 16:37:27 +0100 swh-storage (0.0.27-1~swh1) unstable-swh; urgency=medium * v0.0.27 * Open directory_entry_get_by_path api * Improve get_revision_by api performance * sql/swh-schema: add index on origin(type, url) --> improve origin lookup api * Bump to 039 db version -- Antoine R. Dumont (@ardumont) Fri, 15 Jan 2016 12:42:47 +0100 swh-storage (0.0.26-1~swh1) unstable-swh; urgency=medium * v0.0.26 * Open revision_get_by to retrieve a revision by occurrence criterion filtering * sql/upgrades/036: add 035→036 upgrade script -- Antoine R. 
Dumont (@ardumont) Wed, 13 Jan 2016 12:46:44 +0100 swh-storage (0.0.25-1~swh1) unstable-swh; urgency=medium * v0.0.25 * Limit results in swh_revision_list* * Create the package to align the current db production version on https://archive.softwareheritage.org/ -- Antoine R. Dumont (@ardumont) Fri, 08 Jan 2016 11:33:08 +0100 swh-storage (0.0.24-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage release v0.0.24 * Add a limit argument to revision_log -- Nicolas Dandrimont Wed, 06 Jan 2016 15:12:53 +0100 swh-storage (0.0.23-1~swh1) unstable-swh; urgency=medium * v0.0.23 * Protect against overflow, wrapped in ValueError for client * Fix relative path import for remote storage. * api to retrieve revision_log is now 'parents' aware -- Antoine R. Dumont (@ardumont) Wed, 06 Jan 2016 11:30:58 +0100 swh-storage (0.0.22-1~swh1) unstable-swh; urgency=medium * Release v0.0.22 * Fix relative import for remote storage -- Nicolas Dandrimont Wed, 16 Dec 2015 16:04:48 +0100 swh-storage (0.0.21-1~swh1) unstable-swh; urgency=medium * Prepare release v0.0.21 * Protect the storage api client from overflows * Add a get_storage function mapping to local or remote storage -- Nicolas Dandrimont Wed, 16 Dec 2015 13:34:46 +0100 swh-storage (0.0.20-1~swh1) unstable-swh; urgency=medium * v0.0.20 * allow numeric timestamps with offset * Open revision_log api * start migration to swh.model -- Antoine R. Dumont (@ardumont) Mon, 07 Dec 2015 15:20:36 +0100 swh-storage (0.0.19-1~swh1) unstable-swh; urgency=medium * v0.0.19 * Improve directory listing with content data * Open person_get * Open release_get data reading * Improve origin_get api * Effort to unify api output on dict (for read) * Migrate backend to 032 -- Antoine R. Dumont (@ardumont) Fri, 27 Nov 2015 13:33:34 +0100 swh-storage (0.0.18-1~swh1) unstable-swh; urgency=medium * v0.0.18 * Improve origin_get to permit retrieval per id * Update directory_get implementation (add join from * directory_entry_file to content) * Open release_get : [sha1] -> [Release] -- Antoine R. Dumont (@ardumont) Thu, 19 Nov 2015 11:18:35 +0100 swh-storage (0.0.17-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.17 * Add some entity related entry points -- Nicolas Dandrimont Tue, 03 Nov 2015 16:40:59 +0100 swh-storage (0.0.16-1~swh1) unstable-swh; urgency=medium * v0.0.16 * Add metadata column in revision (db version 29) * cache http connection for remote storage client -- Antoine R. 
Dumont (@ardumont) Thu, 29 Oct 2015 10:29:00 +0100 swh-storage (0.0.15-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.15 * Allow population of fetch_history * Update organizations / projects as entities * Use schema v028 for directory addition -- Nicolas Dandrimont Tue, 27 Oct 2015 11:43:39 +0100 swh-storage (0.0.14-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage v0.0.14 deployment -- Nicolas Dandrimont Fri, 16 Oct 2015 15:34:08 +0200 swh-storage (0.0.13-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.13 -- Nicolas Dandrimont Fri, 16 Oct 2015 14:51:44 +0200 swh-storage (0.0.12-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.12 -- Nicolas Dandrimont Tue, 13 Oct 2015 12:39:18 +0200 swh-storage (0.0.11-1~swh1) unstable-swh; urgency=medium * Preparing deployment of swh.storage v0.0.11 -- Nicolas Dandrimont Fri, 09 Oct 2015 17:44:51 +0200 swh-storage (0.0.10-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.10 -- Nicolas Dandrimont Tue, 06 Oct 2015 17:37:00 +0200 swh-storage (0.0.9-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.9 -- Nicolas Dandrimont Thu, 01 Oct 2015 19:03:00 +0200 swh-storage (0.0.8-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.8 -- Nicolas Dandrimont Thu, 01 Oct 2015 11:32:46 +0200 swh-storage (0.0.7-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.7 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:52:54 +0200 swh-storage (0.0.6-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.6 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:43:24 +0200 swh-storage (0.0.5-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.5 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:27:00 +0200 swh-storage (0.0.1-1~swh1) unstable-swh; urgency=medium * Initial release * swh.storage.api: Properly escape arbitrary byte sequences in arguments -- Nicolas Dandrimont Tue, 22 Sep 2015 17:02:34 +0200 diff --git a/swh.storage.egg-info/PKG-INFO b/swh.storage.egg-info/PKG-INFO index 97e2f192..6bf581ad 100644 --- a/swh.storage.egg-info/PKG-INFO +++ b/swh.storage.egg-info/PKG-INFO @@ -1,250 +1,250 @@ Metadata-Version: 2.1 Name: swh.storage -Version: 0.35.0 +Version: 0.35.1 Summary: Software Heritage storage manager Home-page: https://forge.softwareheritage.org/diffusion/DSTO/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/ Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing Provides-Extra: journal License-File: LICENSE License-File: AUTHORS swh-storage =========== Abstraction layer over the archive, allowing to access all stored source code artifacts as well as their metadata. See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details. 
## Quick start ### Dependencies Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a cassandra server. #### Debian-like host ``` $ sudo apt install libpq-dev postgresql-11 cassandra ``` #### Non Debian-like host The tests expects the path to `cassandra` to either be unspecified, it is then looked up at `/usr/sbin/cassandra`, either specified through the environment variable `SWH_CASSANDRA_BIN`. Optionally, you can avoid running the cassandra tests. ``` (swh) :~/swh-storage$ tox -- -m 'not cassandra' ``` ### Installation It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for a more details on how to setup a working environment. You can install the package directly from [pypi](https://pypi.org/p/swh.storage): ``` (swh) :~$ pip install swh.storage [...] ``` Or from sources: ``` (swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git [...] (swh) :~$ cd swh-storage (swh) :~/swh-storage$ pip install . [...] ``` Then you can check it's properly installed: ``` (swh) :~$ swh storage --help Usage: swh storage [OPTIONS] COMMAND [ARGS]... Software Heritage Storage tools. Options: -h, --help Show this message and exit. Commands: rpc-serve Software Heritage Storage RPC server. ``` ## Tests The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/). ``` (swh) :~$ pip install tox ``` ### tox From the sources directory, simply use tox: ``` (swh) :~/swh-storage$ tox [...] ========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ========== _______________________________ summary ________________________________ flake8: commands succeeded py3: commands succeeded congratulations :) ``` Note: it is possible to set the `JAVA_HOME` environment variable to specify the version of the JVM to be used by Cassandra. For example, at the time of writing this, Cassandra does not support java 14, so one may want to use for example java 11: ``` (swh) :~/swh-storage$ export JAVA_HOME=/usr/lib/jvm/java-14-openjdk-amd64/bin/java (swh) :~/swh-storage$ tox [...] ``` ## Development The storage server can be locally started. It requires a configuration file and a running Postgresql database. ### Sample configuration A typical configuration `storage.yml` file is: ``` storage: cls: postgresql db: "dbname=softwareheritage-dev user= password=" objstorage: cls: pathslicing root: /tmp/swh-storage/ slicing: 0:2/2:4/4:6 ``` which means, this uses: - a local storage instance whose db connection is to `softwareheritage-dev` local instance, - the objstorage uses a local objstorage instance whose: - `root` path is /tmp/swh-storage, - slicing scheme is `0:2/2:4/4:6`. This means that the identifier of the content (sha1) which will be stored on disk at first level with the first 2 hex characters, the second level with the next 2 hex characters and the third level with the next 2 hex characters. And finally the complete hash file holding the raw content. For example: 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c Note that the `root` path should exist on disk before starting the server. ### Starting the storage server If the python package has been properly installed (e.g. 
in a virtual env), you should be able to use the command: ``` (swh) :~/swh-storage$ swh storage rpc-serve storage.yml ``` This runs a local swh-storage api at 5002 port. ``` (swh) :~/swh-storage$ curl http://127.0.0.1:5002 Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

```

### And then what?

In your upper layer
([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/),
[loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc...),
you can define a remote storage with this snippet of yaml configuration.

```
storage:
  cls: remote
  url: http://localhost:5002/
```

You could directly define a postgresql storage with the following snippet:

```
storage:
  cls: postgresql
  db: service=swh-dev
  objstorage:
    cls: pathslicing
    root: /home/storage/swh-storage/
    slicing: 0:2/2:4/4:6
```

## Cassandra

As an alternative to PostgreSQL, swh-storage can use Cassandra as a database
backend. It can be used like this:

```
storage:
  cls: cassandra
  hosts:
  - localhost
  objstorage:
    cls: pathslicing
    root: /home/storage/swh-storage/
    slicing: 0:2/2:4/4:6
```

The Cassandra swh-storage implementation supports both Cassandra >= 4.0-alpha2
and ScyllaDB >= 4.4 (and possibly earlier versions, but this is untested).

While the main code supports both transparently, running tests or configuring
the schema requires specific code when using ScyllaDB, enabled by setting the
`SWH_USE_SCYLLADB=1` environment variable.

diff --git a/swh/storage/cassandra/cql.py b/swh/storage/cassandra/cql.py
index 10c5ba86..89662459 100644
--- a/swh/storage/cassandra/cql.py
+++ b/swh/storage/cassandra/cql.py
@@ -1,1278 +1,1292 @@
# Copyright (C) 2019-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

from collections import Counter
import dataclasses
import datetime
import functools
import logging
import random
from typing import (
    Any,
    Callable,
    Dict,
    Iterable,
    Iterator,
    List,
    Optional,
    Tuple,
    Type,
    TypeVar,
    Union,
)

from cassandra import ConsistencyLevel, CoordinationFailure
from cassandra.cluster import EXEC_PROFILE_DEFAULT, Cluster, ExecutionProfile, ResultSet
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy
from cassandra.query import BoundStatement, PreparedStatement, dict_factory
from mypy_extensions import NamedArg
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_random_exponential,
)

+from swh.core.utils import grouper
from swh.model.identifiers import CoreSWHID
from swh.model.model import (
    Content,
    Person,
    Sha1Git,
    SkippedContent,
    Timestamp,
    TimestampWithTimezone,
)
from swh.storage.interface import ListOrder

from ..utils import remove_keys
from .common import TOKEN_BEGIN, TOKEN_END, hash_url
from .model import (
    MAGIC_NULL_PK,
    BaseRow,
    ContentRow,
    DirectoryEntryRow,
    DirectoryRow,
    ExtIDByTargetRow,
    ExtIDRow,
    MetadataAuthorityRow,
    MetadataFetcherRow,
    ObjectCountRow,
    OriginRow,
    OriginVisitRow,
    OriginVisitStatusRow,
    RawExtrinsicMetadataByIdRow,
    RawExtrinsicMetadataRow,
    ReleaseRow,
    RevisionParentRow,
    RevisionRow,
    SkippedContentRow,
    SnapshotBranchRow,
    SnapshotRow,
    content_index_table_name,
)
from .schema import CREATE_TABLES_QUERIES, HASH_ALGORITHMS

+PARTITION_KEY_RESTRICTION_MAX_SIZE = 100
+"""Maximum number of restrictions in a single query.
+Usually this is a very low number (eg. SELECT ... FROM ... WHERE x=?),
+but some queries can request arbitrarily many (eg. SELECT ... FROM ... WHERE x IN ?).
+
+This can cause performance issues, as the node getting the query need to
+coordinate with other nodes to get the complete results.
+See for details and rationale.
+""" + + logger = logging.getLogger(__name__) def get_execution_profiles( consistency_level: str = "ONE", ) -> Dict[object, ExecutionProfile]: if consistency_level not in ConsistencyLevel.name_to_value: raise ValueError( f"Configuration error: Unknown consistency level '{consistency_level}'" ) return { EXEC_PROFILE_DEFAULT: ExecutionProfile( load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy()), row_factory=dict_factory, consistency_level=ConsistencyLevel.name_to_value[consistency_level], ) } # Configuration for cassandra-driver's access to servers: # * hit the right server directly when sending a query (TokenAwarePolicy), # * if there's more than one, then pick one at random that's in the same # datacenter as the client (DCAwareRoundRobinPolicy) def create_keyspace( hosts: List[str], keyspace: str, port: int = 9042, *, durable_writes=True ): cluster = Cluster(hosts, port=port, execution_profiles=get_execution_profiles()) session = cluster.connect() extra_params = "" if not durable_writes: extra_params = "AND durable_writes = false" session.execute( """CREATE KEYSPACE IF NOT EXISTS "%s" WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 } %s; """ % (keyspace, extra_params) ) session.execute('USE "%s"' % keyspace) for query in CREATE_TABLES_QUERIES: session.execute(query) TRet = TypeVar("TRet") def _prepared_statement( query: str, ) -> Callable[[Callable[..., TRet]], Callable[..., TRet]]: """Returns a decorator usable on methods of CqlRunner, to inject them with a 'statement' argument, that is a prepared statement corresponding to the query. This only works on methods of CqlRunner, as preparing a statement requires a connection to a Cassandra server.""" def decorator(f): @functools.wraps(f) def newf(self, *args, **kwargs) -> TRet: if f.__name__ not in self._prepared_statements: statement: PreparedStatement = self._session.prepare(query) self._prepared_statements[f.__name__] = statement return f( self, *args, **kwargs, statement=self._prepared_statements[f.__name__] ) return newf return decorator TArg = TypeVar("TArg") TSelf = TypeVar("TSelf") def _prepared_insert_statement( row_class: Type[BaseRow], ) -> Callable[ [Callable[[TSelf, TArg, NamedArg(Any, "statement")], TRet]], # noqa Callable[[TSelf, TArg], TRet], ]: """Shorthand for using `_prepared_statement` for `INSERT INTO` statements.""" columns = row_class.cols() return _prepared_statement( "INSERT INTO %s (%s) VALUES (%s)" % (row_class.TABLE, ", ".join(columns), ", ".join("?" 
for _ in columns),) ) def _prepared_exists_statement( table_name: str, ) -> Callable[ [Callable[[TSelf, TArg, NamedArg(Any, "statement")], TRet]], # noqa Callable[[TSelf, TArg], TRet], ]: """Shorthand for using `_prepared_statement` for queries that only check which ids in a list exist in the table.""" return _prepared_statement(f"SELECT id FROM {table_name} WHERE id IN ?") def _prepared_select_statement( row_class: Type[BaseRow], clauses: str = "", cols: Optional[List[str]] = None, ) -> Callable[[Callable[..., TRet]], Callable[..., TRet]]: if cols is None: cols = row_class.cols() return _prepared_statement( f"SELECT {', '.join(cols)} FROM {row_class.TABLE} {clauses}" ) def _prepared_select_statements( row_class: Type[BaseRow], queries: Dict[Any, str], ) -> Callable[[Callable[..., TRet]], Callable[..., TRet]]: """Like _prepared_statement, but supports multiple statements, passed a dict, and passes a dict of prepared statements to the decorated method""" cols = row_class.cols() statement_start = f"SELECT {', '.join(cols)} FROM {row_class.TABLE} " def decorator(f): @functools.wraps(f) def newf(self, *args, **kwargs) -> TRet: if f.__name__ not in self._prepared_statements: self._prepared_statements[f.__name__] = { key: self._session.prepare(statement_start + query) for (key, query) in queries.items() } return f( self, *args, **kwargs, statements=self._prepared_statements[f.__name__] ) return newf return decorator def _next_bytes_value(value: bytes) -> bytes: """Returns the next bytes value by incrementing the integer representation of the provided value and converting it back to bytes. For instance when prefix is b"abcd", it returns b"abce". """ next_value_int = int.from_bytes(value, byteorder="big") + 1 return next_value_int.to_bytes( (next_value_int.bit_length() + 7) // 8, byteorder="big" ) class CqlRunner: """Class managing prepared statements and building queries to be sent to Cassandra.""" def __init__( self, hosts: List[str], keyspace: str, port: int, consistency_level: str ): self._cluster = Cluster( hosts, port=port, execution_profiles=get_execution_profiles(consistency_level), ) self._session = self._cluster.connect(keyspace) self._cluster.register_user_type( keyspace, "microtimestamp_with_timezone", TimestampWithTimezone ) self._cluster.register_user_type(keyspace, "microtimestamp", Timestamp) self._cluster.register_user_type(keyspace, "person", Person) # directly a PreparedStatement for methods decorated with # @_prepared_statements (and its wrappers, _prepared_insert_statement, # _prepared_exists_statement, and _prepared_select_statement); # and a dict of PreparedStatements with @_prepared_select_statements self._prepared_statements: Dict[ str, Union[PreparedStatement, Dict[Any, PreparedStatement]] ] = {} ########################## # Common utility functions ########################## MAX_RETRIES = 3 @retry( wait=wait_random_exponential(multiplier=1, max=10), stop=stop_after_attempt(MAX_RETRIES), retry=retry_if_exception_type(CoordinationFailure), ) def _execute_with_retries(self, statement, args) -> ResultSet: return self._session.execute(statement, args, timeout=1000.0) @_prepared_statement( "UPDATE object_count SET count = count + ? " "WHERE partition_key = 0 AND object_type = ?" 
) def _increment_counter( self, object_type: str, nb: int, *, statement: PreparedStatement ) -> None: self._execute_with_retries(statement, [nb, object_type]) def _add_one(self, statement, obj: BaseRow) -> None: self._increment_counter(obj.TABLE, 1) self._execute_with_retries(statement, dataclasses.astuple(obj)) _T = TypeVar("_T", bound=BaseRow) def _get_random_row(self, row_class: Type[_T], statement) -> Optional[_T]: # noqa """Takes a prepared statement of the form "SELECT * FROM WHERE token() > ? LIMIT 1" and uses it to return a random row""" token = random.randint(TOKEN_BEGIN, TOKEN_END) rows = self._execute_with_retries(statement, [token]) if not rows: # There are no row with a greater token; wrap around to get # the row with the smallest token rows = self._execute_with_retries(statement, [TOKEN_BEGIN]) if rows: return row_class.from_dict(rows.one()) # type: ignore else: return None def _missing(self, statement, ids): - rows = self._execute_with_retries(statement, [ids]) - found_ids = {row["id"] for row in rows} + found_ids = set() + for id_group in grouper(ids, PARTITION_KEY_RESTRICTION_MAX_SIZE): + rows = self._execute_with_retries(statement, [list(id_group)]) + found_ids.update(row["id"] for row in rows) return [id_ for id_ in ids if id_ not in found_ids] ########################## # 'content' table ########################## def _content_add_finalize(self, statement: BoundStatement) -> None: """Returned currified by content_add_prepare, to be called when the content row should be added to the primary table.""" self._execute_with_retries(statement, None) self._increment_counter("content", 1) @_prepared_insert_statement(ContentRow) def content_add_prepare( self, content: ContentRow, *, statement ) -> Tuple[int, Callable[[], None]]: """Prepares insertion of a Content to the main 'content' table. Returns a token (to be used in secondary tables), and a function to be called to perform the insertion in the main table.""" statement = statement.bind(dataclasses.astuple(content)) # Type used for hashing keys (usually, it will be # cassandra.metadata.Murmur3Token) token_class = self._cluster.metadata.token_map.token_class # Token of the row when it will be inserted. This is equivalent to # "SELECT token({', '.join(ContentRow.PARTITION_KEY)}) FROM content WHERE ..." # after the row is inserted; but we need the token to insert in the # index tables *before* inserting to the main 'content' table token = token_class.from_key(statement.routing_key).value assert TOKEN_BEGIN <= token <= TOKEN_END # Function to be called after the indexes contain their respective # row finalizer = functools.partial(self._content_add_finalize, statement) return (token, finalizer) @_prepared_select_statement( ContentRow, f"WHERE {' AND '.join(map('%s = ?'.__mod__, HASH_ALGORITHMS))}" ) def content_get_from_pk( self, content_hashes: Dict[str, bytes], *, statement ) -> Optional[ContentRow]: rows = list( self._execute_with_retries( statement, [content_hashes[algo] for algo in HASH_ALGORITHMS] ) ) assert len(rows) <= 1 if rows: return ContentRow(**rows[0]) else: return None @_prepared_select_statement( ContentRow, f"WHERE token({', '.join(ContentRow.PARTITION_KEY)}) = ?" ) def content_get_from_token(self, token, *, statement) -> Iterable[ContentRow]: return map(ContentRow.from_dict, self._execute_with_retries(statement, [token])) @_prepared_select_statement( ContentRow, f"WHERE token({', '.join(ContentRow.PARTITION_KEY)}) > ? 
LIMIT 1" ) def content_get_random(self, *, statement) -> Optional[ContentRow]: return self._get_random_row(ContentRow, statement) @_prepared_statement( """ SELECT token({pk}) AS tok, {cols} FROM {table} WHERE token({pk}) >= ? AND token({pk}) <= ? LIMIT ? """.format( pk=", ".join(ContentRow.PARTITION_KEY), cols=", ".join(ContentRow.cols()), table=ContentRow.TABLE, ) ) def content_get_token_range( self, start: int, end: int, limit: int, *, statement ) -> Iterable[Tuple[int, ContentRow]]: """Returns an iterable of (token, row)""" return ( (row["tok"], ContentRow.from_dict(remove_keys(row, ("tok",)))) for row in self._execute_with_retries(statement, [start, end, limit]) ) ########################## # 'content_by_*' tables ########################## @_prepared_statement( f""" SELECT sha1_git AS id FROM {content_index_table_name("sha1_git", skipped_content=False)} WHERE sha1_git IN ? """ ) def content_missing_by_sha1_git( self, ids: List[bytes], *, statement ) -> List[bytes]: return self._missing(statement, ids) def content_index_add_one(self, algo: str, content: Content, token: int) -> None: """Adds a row mapping content[algo] to the token of the Content in the main 'content' table.""" query = f""" INSERT INTO {content_index_table_name(algo, skipped_content=False)} ({algo}, target_token) VALUES (%s, %s) """ self._execute_with_retries(query, [content.get_hash(algo), token]) def content_get_tokens_from_single_hash( self, algo: str, hash_: bytes ) -> Iterable[int]: assert algo in HASH_ALGORITHMS query = f""" SELECT target_token FROM {content_index_table_name(algo, skipped_content=False)} WHERE {algo} = %s """ return ( row["target_token"] for row in self._execute_with_retries(query, [hash_]) ) ########################## # 'skipped_content' table ########################## def _skipped_content_add_finalize(self, statement: BoundStatement) -> None: """Returned currified by skipped_content_add_prepare, to be called when the content row should be added to the primary table.""" self._execute_with_retries(statement, None) self._increment_counter("skipped_content", 1) @_prepared_insert_statement(SkippedContentRow) def skipped_content_add_prepare( self, content, *, statement ) -> Tuple[int, Callable[[], None]]: """Prepares insertion of a Content to the main 'skipped_content' table. Returns a token (to be used in secondary tables), and a function to be called to perform the insertion in the main table.""" # Replace NULLs (which are not allowed in the partition key) with # an empty byte string for key in SkippedContentRow.PARTITION_KEY: if getattr(content, key) is None: setattr(content, key, MAGIC_NULL_PK) statement = statement.bind(dataclasses.astuple(content)) # Type used for hashing keys (usually, it will be # cassandra.metadata.Murmur3Token) token_class = self._cluster.metadata.token_map.token_class # Token of the row when it will be inserted. This is equivalent to # "SELECT token({', '.join(SkippedContentRow.PARTITION_KEY)}) # FROM skipped_content WHERE ..." 
# after the row is inserted; but we need the token to insert in the # index tables *before* inserting to the main 'skipped_content' table token = token_class.from_key(statement.routing_key).value assert TOKEN_BEGIN <= token <= TOKEN_END # Function to be called after the indexes contain their respective # row finalizer = functools.partial(self._skipped_content_add_finalize, statement) return (token, finalizer) @_prepared_select_statement( SkippedContentRow, f"WHERE {' AND '.join(map('%s = ?'.__mod__, HASH_ALGORITHMS))}", ) def skipped_content_get_from_pk( self, content_hashes: Dict[str, bytes], *, statement ) -> Optional[SkippedContentRow]: rows = list( self._execute_with_retries( statement, [content_hashes[algo] or MAGIC_NULL_PK for algo in HASH_ALGORITHMS], ) ) assert len(rows) <= 1 if rows: return SkippedContentRow.from_dict(rows[0]) else: return None @_prepared_select_statement( SkippedContentRow, f"WHERE token({', '.join(SkippedContentRow.PARTITION_KEY)}) = ?", ) def skipped_content_get_from_token( self, token, *, statement ) -> Iterable[SkippedContentRow]: return map( SkippedContentRow.from_dict, self._execute_with_retries(statement, [token]) ) ########################## # 'skipped_content_by_*' tables ########################## def skipped_content_index_add_one( self, algo: str, content: SkippedContent, token: int ) -> None: """Adds a row mapping content[algo] to the token of the SkippedContent in the main 'skipped_content' table.""" query = ( f"INSERT INTO skipped_content_by_{algo} ({algo}, target_token) " f"VALUES (%s, %s)" ) self._execute_with_retries( query, [content.get_hash(algo) or MAGIC_NULL_PK, token] ) def skipped_content_get_tokens_from_single_hash( self, algo: str, hash_: bytes ) -> Iterable[int]: assert algo in HASH_ALGORITHMS query = f""" SELECT target_token FROM {content_index_table_name(algo, skipped_content=True)} WHERE {algo} = %s """ return ( row["target_token"] for row in self._execute_with_retries(query, [hash_]) ) ########################## # 'revision' table ########################## @_prepared_exists_statement("revision") def revision_missing(self, ids: List[bytes], *, statement) -> List[bytes]: return self._missing(statement, ids) @_prepared_insert_statement(RevisionRow) def revision_add_one(self, revision: RevisionRow, *, statement) -> None: self._add_one(statement, revision) @_prepared_statement(f"SELECT id FROM {RevisionRow.TABLE} WHERE id IN ?") def revision_get_ids(self, revision_ids, *, statement) -> Iterable[int]: return ( row["id"] for row in self._execute_with_retries(statement, [revision_ids]) ) @_prepared_select_statement(RevisionRow, "WHERE id IN ?") def revision_get( self, revision_ids: List[Sha1Git], *, statement ) -> Iterable[RevisionRow]: return map( RevisionRow.from_dict, self._execute_with_retries(statement, [revision_ids]) ) @_prepared_select_statement(RevisionRow, "WHERE token(id) > ? LIMIT 1") def revision_get_random(self, *, statement) -> Optional[RevisionRow]: return self._get_random_row(RevisionRow, statement) ########################## # 'revision_parent' table ########################## @_prepared_insert_statement(RevisionParentRow) def revision_parent_add_one( self, revision_parent: RevisionParentRow, *, statement ) -> None: self._add_one(statement, revision_parent) @_prepared_statement( f"SELECT parent_id FROM {RevisionParentRow.TABLE} WHERE id = ?" 
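    # Usage sketch (hypothetical caller code, assuming a CqlRunner instance
    # named `cql` and a revision id `rev_id`): parents live in their own
    # table, so a caller re-attaches them itself, e.g.:
    #   row = next(iter(cql.revision_get([rev_id])), None)
    #   parents = tuple(cql.revision_parent_get(rev_id))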
) def revision_parent_get( self, revision_id: Sha1Git, *, statement ) -> Iterable[bytes]: return ( row["parent_id"] for row in self._execute_with_retries(statement, [revision_id]) ) ########################## # 'release' table ########################## @_prepared_exists_statement("release") def release_missing(self, ids: List[bytes], *, statement) -> List[bytes]: return self._missing(statement, ids) @_prepared_insert_statement(ReleaseRow) def release_add_one(self, release: ReleaseRow, *, statement) -> None: self._add_one(statement, release) @_prepared_select_statement(ReleaseRow, "WHERE id in ?") def release_get(self, release_ids: List[str], *, statement) -> Iterable[ReleaseRow]: return map( ReleaseRow.from_dict, self._execute_with_retries(statement, [release_ids]) ) @_prepared_select_statement(ReleaseRow, "WHERE token(id) > ? LIMIT 1") def release_get_random(self, *, statement) -> Optional[ReleaseRow]: return self._get_random_row(ReleaseRow, statement) ########################## # 'directory' table ########################## @_prepared_exists_statement("directory") def directory_missing(self, ids: List[bytes], *, statement) -> List[bytes]: return self._missing(statement, ids) @_prepared_insert_statement(DirectoryRow) def directory_add_one(self, directory: DirectoryRow, *, statement) -> None: """Called after all calls to directory_entry_add_one, to commit/finalize the directory.""" self._add_one(statement, directory) @_prepared_select_statement(DirectoryRow, "WHERE token(id) > ? LIMIT 1") def directory_get_random(self, *, statement) -> Optional[DirectoryRow]: return self._get_random_row(DirectoryRow, statement) ########################## # 'directory_entry' table ########################## @_prepared_insert_statement(DirectoryEntryRow) def directory_entry_add_one(self, entry: DirectoryEntryRow, *, statement) -> None: self._add_one(statement, entry) @_prepared_select_statement(DirectoryEntryRow, "WHERE directory_id IN ?") def directory_entry_get( self, directory_ids, *, statement ) -> Iterable[DirectoryEntryRow]: return map( DirectoryEntryRow.from_dict, self._execute_with_retries(statement, [directory_ids]), ) @_prepared_select_statement( DirectoryEntryRow, "WHERE directory_id = ? AND name >= ? LIMIT ?" ) def directory_entry_get_from_name( self, directory_id: Sha1Git, from_: bytes, limit: int, *, statement ) -> Iterable[DirectoryEntryRow]: return map( DirectoryEntryRow.from_dict, self._execute_with_retries(statement, [directory_id, from_, limit]), ) ########################## # 'snapshot' table ########################## @_prepared_exists_statement("snapshot") def snapshot_missing(self, ids: List[bytes], *, statement) -> List[bytes]: return self._missing(statement, ids) @_prepared_insert_statement(SnapshotRow) def snapshot_add_one(self, snapshot: SnapshotRow, *, statement) -> None: self._add_one(statement, snapshot) @_prepared_select_statement(SnapshotRow, "WHERE token(id) > ? LIMIT 1") def snapshot_get_random(self, *, statement) -> Optional[SnapshotRow]: return self._get_random_row(SnapshotRow, statement) ########################## # 'snapshot_branch' table ########################## @_prepared_insert_statement(SnapshotBranchRow) def snapshot_branch_add_one(self, branch: SnapshotBranchRow, *, statement) -> None: self._add_one(statement, branch) @_prepared_statement( f""" SELECT ascii_bins_count(target_type) AS counts FROM {SnapshotBranchRow.TABLE} WHERE snapshot_id = ? AND name >= ? 
""" ) def snapshot_count_branches_from_name( self, snapshot_id: Sha1Git, from_: bytes, *, statement ) -> Dict[Optional[str], int]: row = self._execute_with_retries(statement, [snapshot_id, from_]).one() (nb_none, counts) = row["counts"] return {None: nb_none, **counts} @_prepared_statement( f""" SELECT ascii_bins_count(target_type) AS counts FROM {SnapshotBranchRow.TABLE} WHERE snapshot_id = ? AND name < ? """ ) def snapshot_count_branches_before_name( self, snapshot_id: Sha1Git, before: bytes, *, statement, ) -> Dict[Optional[str], int]: row = self._execute_with_retries(statement, [snapshot_id, before]).one() (nb_none, counts) = row["counts"] return {None: nb_none, **counts} def snapshot_count_branches( self, snapshot_id: Sha1Git, branch_name_exclude_prefix: Optional[bytes] = None, ) -> Dict[Optional[str], int]: """Returns a dictionary from type names to the number of branches of that type.""" prefix = branch_name_exclude_prefix if prefix is None: return self.snapshot_count_branches_from_name(snapshot_id, b"") else: # counts branches before exclude prefix counts = Counter( self.snapshot_count_branches_before_name(snapshot_id, prefix) ) # no need to execute that part if each bit of the prefix equals 1 if prefix.replace(b"\xff", b"") != b"": # counts branches after exclude prefix and update counters counts.update( self.snapshot_count_branches_from_name( snapshot_id, _next_bytes_value(prefix) ) ) return counts @_prepared_select_statement( SnapshotBranchRow, "WHERE snapshot_id = ? AND name >= ? LIMIT ?" ) def snapshot_branch_get_from_name( self, snapshot_id: Sha1Git, from_: bytes, limit: int, *, statement ) -> Iterable[SnapshotBranchRow]: return map( SnapshotBranchRow.from_dict, self._execute_with_retries(statement, [snapshot_id, from_, limit]), ) @_prepared_select_statement( SnapshotBranchRow, "WHERE snapshot_id = ? AND name >= ? AND name < ? LIMIT ?" 
) def snapshot_branch_get_range( self, snapshot_id: Sha1Git, from_: bytes, before: bytes, limit: int, *, statement, ) -> Iterable[SnapshotBranchRow]: return map( SnapshotBranchRow.from_dict, self._execute_with_retries(statement, [snapshot_id, from_, before, limit]), ) def snapshot_branch_get( self, snapshot_id: Sha1Git, from_: bytes, limit: int, branch_name_exclude_prefix: Optional[bytes] = None, ) -> Iterable[SnapshotBranchRow]: prefix = branch_name_exclude_prefix if prefix is None: return self.snapshot_branch_get_from_name(snapshot_id, from_, limit) else: # get branches before the exclude prefix branches = list( self.snapshot_branch_get_range(snapshot_id, from_, prefix, limit) ) nb_branches = len(branches) # no need to execute that part if limit is reached # or if each bit of the prefix equals 1 if nb_branches < limit and prefix.replace(b"\xff", b"") != b"": # get branches after the exclude prefix and update list to return branches.extend( self.snapshot_branch_get_from_name( snapshot_id, _next_bytes_value(prefix), limit - nb_branches ) ) return branches ########################## # 'origin' table ########################## @_prepared_insert_statement(OriginRow) def origin_add_one(self, origin: OriginRow, *, statement) -> None: self._add_one(statement, origin) @_prepared_select_statement(OriginRow, "WHERE sha1 = ?") def origin_get_by_sha1(self, sha1: bytes, *, statement) -> Iterable[OriginRow]: return map(OriginRow.from_dict, self._execute_with_retries(statement, [sha1])) def origin_get_by_url(self, url: str) -> Iterable[OriginRow]: return self.origin_get_by_sha1(hash_url(url)) @_prepared_statement( f""" SELECT token(sha1) AS tok, {", ".join(OriginRow.cols())} FROM {OriginRow.TABLE} WHERE token(sha1) >= ? LIMIT ? """ ) def origin_list( self, start_token: int, limit: int, *, statement ) -> Iterable[Tuple[int, OriginRow]]: """Returns an iterable of (token, origin)""" return ( (row["tok"], OriginRow.from_dict(remove_keys(row, ("tok",)))) for row in self._execute_with_retries(statement, [start_token, limit]) ) @_prepared_select_statement(OriginRow) def origin_iter_all(self, *, statement) -> Iterable[OriginRow]: return map(OriginRow.from_dict, self._execute_with_retries(statement, [])) @_prepared_statement(f"SELECT next_visit_id FROM {OriginRow.TABLE} WHERE sha1 = ?") def _origin_get_next_visit_id(self, origin_sha1: bytes, *, statement) -> int: rows = list(self._execute_with_retries(statement, [origin_sha1])) assert len(rows) == 1 # TODO: error handling return rows[0]["next_visit_id"] @_prepared_statement( f""" UPDATE {OriginRow.TABLE} SET next_visit_id=? WHERE sha1 = ? IF next_visit_id=? """ ) def origin_generate_unique_visit_id(self, origin_url: str, *, statement) -> int: origin_sha1 = hash_url(origin_url) next_id = self._origin_get_next_visit_id(origin_sha1) while True: res = list( self._execute_with_retries( statement, [next_id + 1, origin_sha1, next_id] ) ) assert len(res) == 1 if res[0]["[applied]"]: # No data race return next_id else: # Someone else updated it before we did, let's try again next_id = res[0]["next_visit_id"] # TODO: abort after too many attempts return next_id ########################## # 'origin_visit' table ########################## @_prepared_select_statements( OriginVisitRow, { (True, ListOrder.ASC): ( "WHERE origin = ? AND visit > ? ORDER BY visit ASC LIMIT ?" ), (True, ListOrder.DESC): ( "WHERE origin = ? AND visit < ? ORDER BY visit DESC LIMIT ?" ), (False, ListOrder.ASC): "WHERE origin = ? ORDER BY visit ASC LIMIT ?", (False, ListOrder.DESC): "WHERE origin = ? 
ORDER BY visit DESC LIMIT ?", }, ) def origin_visit_get( self, origin_url: str, last_visit: Optional[int], limit: int, order: ListOrder, *, statements, ) -> Iterable[OriginVisitRow]: args: List[Any] = [origin_url] if last_visit is not None: args.append(last_visit) args.append(limit) statement = statements[(last_visit is not None, order)] return map( OriginVisitRow.from_dict, self._execute_with_retries(statement, args) ) @_prepared_insert_statement(OriginVisitRow) def origin_visit_add_one(self, visit: OriginVisitRow, *, statement) -> None: self._add_one(statement, visit) @_prepared_select_statement(OriginVisitRow, "WHERE origin = ? AND visit = ?") def origin_visit_get_one( self, origin_url: str, visit_id: int, *, statement ) -> Optional[OriginVisitRow]: # TODO: error handling rows = list(self._execute_with_retries(statement, [origin_url, visit_id])) if rows: return OriginVisitRow.from_dict(rows[0]) else: return None @_prepared_select_statement(OriginVisitRow, "WHERE origin = ?") def origin_visit_get_all( self, origin_url: str, *, statement ) -> Iterable[OriginVisitRow]: return map( OriginVisitRow.from_dict, self._execute_with_retries(statement, [origin_url]), ) @_prepared_select_statement(OriginVisitRow, "WHERE token(origin) >= ?") def _origin_visit_iter_from( self, min_token: int, *, statement ) -> Iterable[OriginVisitRow]: return map( OriginVisitRow.from_dict, self._execute_with_retries(statement, [min_token]) ) @_prepared_select_statement(OriginVisitRow, "WHERE token(origin) < ?") def _origin_visit_iter_to( self, max_token: int, *, statement ) -> Iterable[OriginVisitRow]: return map( OriginVisitRow.from_dict, self._execute_with_retries(statement, [max_token]) ) def origin_visit_iter(self, start_token: int) -> Iterator[OriginVisitRow]: """Returns all origin visits in order from this token, and wraps around the token space.""" yield from self._origin_visit_iter_from(start_token) yield from self._origin_visit_iter_to(start_token) ########################## # 'origin_visit_status' table ########################## @_prepared_select_statements( OriginVisitStatusRow, { (True, ListOrder.ASC): ( "WHERE origin = ? AND visit = ? AND date >= ? " "ORDER BY visit ASC LIMIT ?" ), (True, ListOrder.DESC): ( "WHERE origin = ? AND visit = ? AND date <= ? " "ORDER BY visit DESC LIMIT ?" ), (False, ListOrder.ASC): ( "WHERE origin = ? AND visit = ? ORDER BY visit ASC LIMIT ?" ), (False, ListOrder.DESC): ( "WHERE origin = ? AND visit = ? ORDER BY visit DESC LIMIT ?" ), }, ) def origin_visit_status_get_range( self, origin: str, visit: int, date_from: Optional[datetime.datetime], limit: int, order: ListOrder, *, statements, ) -> Iterable[OriginVisitStatusRow]: args: List[Any] = [origin, visit] if date_from is not None: args.append(date_from) args.append(limit) statement = statements[(date_from is not None, order)] return map( OriginVisitStatusRow.from_dict, self._execute_with_retries(statement, args) ) @_prepared_insert_statement(OriginVisitStatusRow) def origin_visit_status_add_one( self, visit_update: OriginVisitStatusRow, *, statement ) -> None: self._add_one(statement, visit_update) def origin_visit_status_get_latest( self, origin: str, visit: int, ) -> Optional[OriginVisitStatusRow]: """Given an origin visit id, return its latest origin_visit_status """ return next(self.origin_visit_status_get(origin, visit), None) @_prepared_select_statement( OriginVisitStatusRow, # 'visit DESC,' is optional with Cassandra 4, but ScyllaDB needs it "WHERE origin = ? AND visit = ? 
ORDER BY visit DESC, date DESC", ) def origin_visit_status_get( self, origin: str, visit: int, *, statement, ) -> Iterator[OriginVisitStatusRow]: """Return all origin visit statuses for a given visit """ return map( OriginVisitStatusRow.from_dict, self._execute_with_retries(statement, [origin, visit]), ) ########################## # 'metadata_authority' table ########################## @_prepared_insert_statement(MetadataAuthorityRow) def metadata_authority_add(self, authority: MetadataAuthorityRow, *, statement): self._add_one(statement, authority) @_prepared_select_statement(MetadataAuthorityRow, "WHERE type = ? AND url = ?") def metadata_authority_get( self, type, url, *, statement ) -> Optional[MetadataAuthorityRow]: rows = list(self._execute_with_retries(statement, [type, url])) if rows: return MetadataAuthorityRow.from_dict(rows[0]) else: return None ########################## # 'metadata_fetcher' table ########################## @_prepared_insert_statement(MetadataFetcherRow) def metadata_fetcher_add(self, fetcher, *, statement): self._add_one(statement, fetcher) @_prepared_select_statement(MetadataFetcherRow, "WHERE name = ? AND version = ?") def metadata_fetcher_get( self, name, version, *, statement ) -> Optional[MetadataFetcherRow]: rows = list(self._execute_with_retries(statement, [name, version])) if rows: return MetadataFetcherRow.from_dict(rows[0]) else: return None ######################### # 'raw_extrinsic_metadata_by_id' table ######################### @_prepared_insert_statement(RawExtrinsicMetadataByIdRow) def raw_extrinsic_metadata_by_id_add(self, row, *, statement): self._add_one(statement, row) @_prepared_select_statement(RawExtrinsicMetadataByIdRow, "WHERE id IN ?") def raw_extrinsic_metadata_get_by_ids( self, ids: List[Sha1Git], *, statement ) -> Iterable[RawExtrinsicMetadataByIdRow]: return map( RawExtrinsicMetadataByIdRow.from_dict, self._execute_with_retries(statement, [ids]), ) ######################### # 'raw_extrinsic_metadata' table ######################### @_prepared_insert_statement(RawExtrinsicMetadataRow) def raw_extrinsic_metadata_add(self, raw_extrinsic_metadata, *, statement): self._add_one(statement, raw_extrinsic_metadata) @_prepared_select_statement( RawExtrinsicMetadataRow, "WHERE target=? AND authority_url=? AND discovery_date>? AND authority_type=?", ) def raw_extrinsic_metadata_get_after_date( self, target: str, authority_type: str, authority_url: str, after: datetime.datetime, *, statement, ) -> Iterable[RawExtrinsicMetadataRow]: return map( RawExtrinsicMetadataRow.from_dict, self._execute_with_retries( statement, [target, authority_url, after, authority_type] ), ) @_prepared_select_statement( RawExtrinsicMetadataRow, # This is equivalent to: # WHERE target=? AND authority_type = ? AND authority_url = ? " # AND (discovery_date, id) > (?, ?)" # but it needs to be written this way to work with ScyllaDB. "WHERE target=? AND (authority_type, authority_url) <= (?, ?) " "AND (authority_type, authority_url, discovery_date, id) > (?, ?, ?, ?)", ) def raw_extrinsic_metadata_get_after_date_and_id( self, target: str, authority_type: str, authority_url: str, after_date: datetime.datetime, after_id: bytes, *, statement, ) -> Iterable[RawExtrinsicMetadataRow]: return map( RawExtrinsicMetadataRow.from_dict, self._execute_with_retries( statement, [ target, authority_type, authority_url, authority_type, authority_url, after_date, after_id, ], ), ) @_prepared_select_statement( RawExtrinsicMetadataRow, "WHERE target=? AND authority_url=? 
AND authority_type=?", ) def raw_extrinsic_metadata_get( self, target: str, authority_type: str, authority_url: str, *, statement ) -> Iterable[RawExtrinsicMetadataRow]: return map( RawExtrinsicMetadataRow.from_dict, self._execute_with_retries( statement, [target, authority_url, authority_type] ), ) @_prepared_statement( "SELECT authority_type, authority_url FROM raw_extrinsic_metadata " "WHERE target = ?" ) def raw_extrinsic_metadata_get_authorities( self, target: str, *, statement ) -> Iterable[Tuple[str, str]]: return ( (entry["authority_type"], entry["authority_url"]) for entry in self._execute_with_retries(statement, [target]) ) ########################## # 'extid' table ########################## def _extid_add_finalize(self, statement: BoundStatement) -> None: """Returned currified by extid_add_prepare, to be called when the extid row should be added to the primary table.""" self._execute_with_retries(statement, None) self._increment_counter("extid", 1) @_prepared_insert_statement(ExtIDRow) def extid_add_prepare( self, extid: ExtIDRow, *, statement ) -> Tuple[int, Callable[[], None]]: statement = statement.bind(dataclasses.astuple(extid)) token_class = self._cluster.metadata.token_map.token_class token = token_class.from_key(statement.routing_key).value assert TOKEN_BEGIN <= token <= TOKEN_END # Function to be called after the indexes contain their respective # row finalizer = functools.partial(self._extid_add_finalize, statement) return (token, finalizer) @_prepared_select_statement( ExtIDRow, "WHERE extid_type=? AND extid=? AND extid_version=? " "AND target_type=? AND target=?", ) def extid_get_from_pk( self, extid_type: str, extid: bytes, extid_version: int, target: CoreSWHID, *, statement, ) -> Optional[ExtIDRow]: rows = list( self._execute_with_retries( statement, [ extid_type, extid, extid_version, target.object_type.value, target.object_id, ], ), ) assert len(rows) <= 1 if rows: return ExtIDRow(**rows[0]) else: return None @_prepared_select_statement( ExtIDRow, "WHERE token(extid_type, extid) = ?", ) def extid_get_from_token(self, token: int, *, statement) -> Iterable[ExtIDRow]: return map(ExtIDRow.from_dict, self._execute_with_retries(statement, [token]),) @_prepared_select_statement( ExtIDRow, "WHERE extid_type=? AND extid=?", ) def extid_get_from_extid( self, extid_type: str, extid: bytes, *, statement ) -> Iterable[ExtIDRow]: return map( ExtIDRow.from_dict, self._execute_with_retries(statement, [extid_type, extid]), ) def extid_get_from_target( self, target_type: str, target: bytes ) -> Iterable[ExtIDRow]: for token in self._extid_get_tokens_from_target(target_type, target): if token is not None: for extid in self.extid_get_from_token(token): # re-check the extid against target (in case of murmur3 collision) if ( extid is not None and extid.target_type == target_type and extid.target == target ): yield extid ########################## # 'extid_by_target' table ########################## @_prepared_insert_statement(ExtIDByTargetRow) def extid_index_add_one(self, row: ExtIDByTargetRow, *, statement) -> None: """Adds a row mapping extid[target_type, target] to the token of the ExtID in the main 'extid' table.""" self._add_one(statement, row) @_prepared_statement( f""" SELECT target_token FROM {ExtIDByTargetRow.TABLE} WHERE target_type = ? AND target = ? 
""" ) def _extid_get_tokens_from_target( self, target_type: str, target: bytes, *, statement ) -> Iterable[int]: return ( row["target_token"] for row in self._execute_with_retries(statement, [target_type, target]) ) ########################## # Miscellaneous ########################## @_prepared_statement("SELECT uuid() FROM revision LIMIT 1;") def check_read(self, *, statement): self._execute_with_retries(statement, []) @_prepared_select_statement(ObjectCountRow, "WHERE partition_key=0") def stat_counters(self, *, statement) -> Iterable[ObjectCountRow]: return map(ObjectCountRow.from_dict, self._execute_with_retries(statement, [])) diff --git a/swh/storage/tests/storage_tests.py b/swh/storage/tests/storage_tests.py index 0d9686fb..a8906cbb 100644 --- a/swh/storage/tests/storage_tests.py +++ b/swh/storage/tests/storage_tests.py @@ -1,4531 +1,4547 @@ # Copyright (C) 2015-2021 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from collections import defaultdict import datetime from datetime import timedelta import inspect import itertools import math import random from typing import Any, ClassVar, Dict, Iterator, Optional from unittest.mock import MagicMock import attr from hypothesis import HealthCheck, given, settings, strategies import pytest from swh.core.api.classes import stream_results from swh.model import from_disk from swh.model.hashutil import DEFAULT_ALGORITHMS, hash_to_bytes from swh.model.hypothesis_strategies import objects from swh.model.identifiers import CoreSWHID, ObjectType from swh.model.model import ( Content, Directory, ExtID, Origin, OriginVisit, OriginVisitStatus, Person, RawExtrinsicMetadata, Revision, SkippedContent, Snapshot, SnapshotBranch, TargetType, ) from swh.storage import get_storage from swh.storage.common import origin_url_to_sha1 as sha1 from swh.storage.exc import HashCollision, StorageArgumentException from swh.storage.interface import ListOrder, PagedResult, StorageInterface from swh.storage.tests.conftest import function_scoped_fixture_check from swh.storage.utils import ( content_hex_hashes, now, remove_keys, round_to_milliseconds, ) def transform_entries( storage: StorageInterface, dir_: Directory, *, prefix: bytes = b"" ) -> Iterator[Dict[str, Any]]: """Iterate through a directory's entries, and yields the items 'directory_ls' is expected to return; including content metadata for file entries.""" for ent in dir_.entries: if ent.type == "dir": yield { "dir_id": dir_.id, "type": ent.type, "target": ent.target, "name": prefix + ent.name, "perms": ent.perms, "status": None, "sha1": None, "sha1_git": None, "sha256": None, "length": None, } elif ent.type == "file": contents = storage.content_find({"sha1_git": ent.target}) assert contents ent_dict = contents[0].to_dict() for key in ["ctime", "blake2s256"]: ent_dict.pop(key, None) ent_dict.update( { "dir_id": dir_.id, "type": ent.type, "target": ent.target, "name": prefix + ent.name, "perms": ent.perms, } ) yield ent_dict def assert_contents_ok( expected_contents, actual_contents, keys_to_check={"sha1", "data"} ): """Assert that a given list of contents matches on a given set of keys. 
""" for k in keys_to_check: expected_list = set([c.get(k) for c in expected_contents]) actual_list = set([c.get(k) for c in actual_contents]) assert actual_list == expected_list, k class LazyContent(Content): def with_data(self): return Content.from_dict({**self.to_dict(), "data": b"42\n"}) class TestStorage: """Main class for Storage testing. This class is used as-is to test local storage (see TestLocalStorage below) and remote storage (see TestRemoteStorage in test_remote_storage.py. We need to have the two classes inherit from this base class separately to avoid nosetests running the tests from the base class twice. """ maxDiff = None # type: ClassVar[Optional[int]] def test_types(self, swh_storage_backend_config): """Checks all methods of StorageInterface are implemented by this backend, and that they have the same signature.""" # Create an instance of the protocol (which cannot be instantiated # directly, so this creates a subclass, then instantiates it) interface = type("_", (StorageInterface,), {})() storage = get_storage(**swh_storage_backend_config) assert "content_add" in dir(interface) missing_methods = [] for meth_name in dir(interface): if meth_name.startswith("_"): continue interface_meth = getattr(interface, meth_name) try: concrete_meth = getattr(storage, meth_name) except AttributeError: if not getattr(interface_meth, "deprecated_endpoint", False): # The backend is missing a (non-deprecated) endpoint missing_methods.append(meth_name) continue expected_signature = inspect.signature(interface_meth) actual_signature = inspect.signature(concrete_meth) assert expected_signature == actual_signature, meth_name assert missing_methods == [] # If all the assertions above succeed, then this one should too. # But there's no harm in double-checking. # And we could replace the assertions above by this one, but unlike # the assertions above, it doesn't explain what is missing. 
assert isinstance(storage, StorageInterface) def test_check_config(self, swh_storage): assert swh_storage.check_config(check_write=True) assert swh_storage.check_config(check_write=False) def test_content_add(self, swh_storage, sample_data): cont = sample_data.content insertion_start_time = now() actual_result = swh_storage.content_add([cont]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert swh_storage.content_get_data(cont.sha1) == cont.data expected_cont = attr.evolve(cont, data=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert obj == expected_cont swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 def test_content_add_from_lazy_content(self, swh_storage, sample_data): cont = sample_data.content lazy_content = LazyContent.from_dict(cont.to_dict()) insertion_start_time = now() actual_result = swh_storage.content_add([lazy_content]) insertion_end_time = now() assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } # the fact that we retrieve the content object from the storage with # the correct 'data' field ensures it has been 'called' assert swh_storage.content_get_data(cont.sha1) == cont.data expected_cont = attr.evolve(lazy_content, data=None, ctime=None) contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: assert insertion_start_time <= obj.ctime assert obj.ctime <= insertion_end_time assert attr.evolve(obj, ctime=None).to_dict() == expected_cont.to_dict() swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["content"] == 1 def test_content_get_data_missing(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] swh_storage.content_add([cont]) # Query a single missing content actual_content_data = swh_storage.content_get_data(cont2.sha1) assert actual_content_data is None # Check content_get does not abort after finding a missing content actual_content_data = swh_storage.content_get_data(cont.sha1) assert actual_content_data == cont.data actual_content_data = swh_storage.content_get_data(cont2.sha1) assert actual_content_data is None def test_content_add_different_input(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 2, "content:add:bytes": cont.length + cont2.length, } def test_content_add_twice(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] actual_result = swh_storage.content_add([cont]) assert actual_result == { "content:add": 1, "content:add:bytes": cont.length, } assert len(swh_storage.journal_writer.journal.objects) == 1 actual_result = swh_storage.content_add([cont, cont2]) assert actual_result == { "content:add": 1, "content:add:bytes": cont2.length, } assert 2 <= len(swh_storage.journal_writer.journal.objects) <= 3 assert len(swh_storage.content_find(cont.to_dict())) == 1 assert len(swh_storage.content_find(cont2.to_dict())) == 1 def test_content_add_collision(self, swh_storage, sample_data): cont1 = sample_data.content # create (corrupted) content with same sha1{,_git} but != sha256 sha256_array = bytearray(cont1.sha256) sha256_array[0] += 1 cont1b = attr.evolve(cont1, 
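            # cont1b keeps cont1's sha1 and sha1_git but gets a different
            # sha256 (first byte bumped above), so adding both objects in one
            # call should trip the sha1/sha1_git collision detection; the
            # HashCollision assertions that follow check exactly that.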
sha256=bytes(sha256_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] def test_content_add_duplicate(self, swh_storage, sample_data): cont = sample_data.content swh_storage.content_add([cont, cont]) assert swh_storage.content_get_data(cont.sha1) == cont.data def test_content_update(self, swh_storage, sample_data): cont1 = sample_data.content if hasattr(swh_storage, "journal_writer"): swh_storage.journal_writer.journal = None # TODO, not supported swh_storage.content_add([cont1]) # alter the sha1_git for example cont1b = attr.evolve( cont1, sha1_git=hash_to_bytes("3a60a5275d0333bf13468e8b3dcab90f4046e654") ) swh_storage.content_update([cont1b.to_dict()], keys=["sha1_git"]) actual_contents = swh_storage.content_get([cont1.sha1]) expected_content = attr.evolve(cont1b, data=None) assert actual_contents == [expected_content] def test_content_add_metadata(self, swh_storage, sample_data): cont = attr.evolve(sample_data.content, data=None, ctime=now()) actual_result = swh_storage.content_add_metadata([cont]) assert actual_result == { "content:add": 1, } expected_cont = cont assert swh_storage.content_get([cont.sha1]) == [expected_cont] contents = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "content" ] assert len(contents) == 1 for obj in contents: obj = attr.evolve(obj, ctime=None) assert obj == cont def test_content_add_metadata_different_input(self, swh_storage, sample_data): contents = sample_data.contents[:2] cont = attr.evolve(contents[0], data=None, ctime=now()) cont2 = attr.evolve(contents[1], data=None, ctime=now()) actual_result = swh_storage.content_add_metadata([cont, cont2]) assert actual_result == { "content:add": 2, } def test_content_add_metadata_collision(self, swh_storage, sample_data): cont1 = attr.evolve(sample_data.content, data=None, ctime=now()) # create (corrupted) content with same sha1{,_git} but != sha256 sha1_git_array = bytearray(cont1.sha256) sha1_git_array[0] += 1 cont1b = attr.evolve(cont1, sha256=bytes(sha1_git_array)) with pytest.raises(HashCollision) as cm: swh_storage.content_add_metadata([cont1, cont1b]) exc = cm.value actual_algo = exc.algo assert actual_algo in ["sha1", "sha1_git", "blake2s256"] actual_id = exc.hash_id assert actual_id == getattr(cont1, actual_algo).hex() collisions = exc.args[2] assert len(collisions) == 2 assert collisions == [ content_hex_hashes(cont1.hashes()), content_hex_hashes(cont1b.hashes()), ] assert exc.colliding_content_hashes() == [ cont1.hashes(), cont1b.hashes(), ] def test_content_add_objstorage_first(self, swh_storage, sample_data): """Tests the objstorage is written to before the DB and journal""" cont = sample_data.content swh_storage.objstorage.content_add = MagicMock(side_effect=Exception("Oops")) # Try to add, but the objstorage crashes try: swh_storage.content_add([cont]) except Exception: pass # The DB must be written to after the objstorage, so the DB should be # unchanged if the objstorage crashed assert swh_storage.content_get_data(cont.sha1) is None # The journal too assert list(swh_storage.journal_writer.journal.objects) == [] def 
test_skipped_content_add(self, swh_storage, sample_data): contents = sample_data.skipped_contents[:2] cont = contents[0] cont2 = attr.evolve(contents[1], blake2s256=None) contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [cont.hashes(), cont2.hashes()] actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] def test_skipped_content_add_missing_hashes(self, swh_storage, sample_data): cont, cont2 = [ attr.evolve(c, sha1_git=None) for c in sample_data.skipped_contents[:2] ] contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 actual_result = swh_storage.skipped_content_add([cont, cont, cont2]) assert 2 <= actual_result.pop("skipped_content:add") <= 3 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [] def test_skipped_content_missing_partial_hash(self, swh_storage, sample_data): cont = sample_data.skipped_content cont2 = attr.evolve(cont, sha1_git=None) contents_dict = [c.to_dict() for c in [cont, cont2]] missing = list(swh_storage.skipped_content_missing(contents_dict)) assert len(missing) == 2 actual_result = swh_storage.skipped_content_add([cont]) assert actual_result.pop("skipped_content:add") == 1 assert actual_result == {} missing = list(swh_storage.skipped_content_missing(contents_dict)) assert missing == [cont2.hashes()] @pytest.mark.property_based @settings( deadline=None, # this test is very slow suppress_health_check=function_scoped_fixture_check, ) @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) def test_content_missing(self, swh_storage, sample_data, algos): algos |= {"sha1"} content, missing_content = [sample_data.content2, sample_data.skipped_content] swh_storage.content_add([content]) test_contents = [content.to_dict()] missing_per_hash = defaultdict(list) for i in range(256): test_content = missing_content.to_dict() for hash in algos: test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, key_hash=hash) ) == set(missing_per_hash[hash]) @pytest.mark.property_based @settings(suppress_health_check=function_scoped_fixture_check,) @given( strategies.sets( elements=strategies.sampled_from(["sha256", "sha1_git", "blake2s256"]), min_size=0, ) ) def test_content_missing_unknown_algo(self, swh_storage, sample_data, algos): algos |= {"sha1"} content, missing_content = [sample_data.content2, sample_data.skipped_content] swh_storage.content_add([content]) test_contents = [content.to_dict()] missing_per_hash = defaultdict(list) for i in range(16): test_content = missing_content.to_dict() for hash in algos: test_content[hash] = bytes([i]) + test_content[hash][1:] missing_per_hash[hash].append(test_content[hash]) test_content["nonexisting_algo"] = b"\x00" test_contents.append(test_content) assert set(swh_storage.content_missing(test_contents)) == set( missing_per_hash["sha1"] ) for hash in algos: assert set( swh_storage.content_missing(test_contents, 
key_hash=hash) ) == set(missing_per_hash[hash]) def test_content_missing_per_sha1(self, swh_storage, sample_data): # given cont = sample_data.content cont2 = sample_data.content2 missing_cont = sample_data.skipped_content missing_cont2 = sample_data.skipped_content2 swh_storage.content_add([cont, cont2]) # when gen = swh_storage.content_missing_per_sha1( [cont.sha1, missing_cont.sha1, cont2.sha1, missing_cont2.sha1] ) # then assert list(gen) == [missing_cont.sha1, missing_cont2.sha1] def test_content_missing_per_sha1_git(self, swh_storage, sample_data): cont, cont2 = sample_data.contents[:2] missing_cont = sample_data.skipped_content missing_cont2 = sample_data.skipped_content2 swh_storage.content_add([cont, cont2]) contents = [ cont.sha1_git, cont2.sha1_git, missing_cont.sha1_git, missing_cont2.sha1_git, ] missing_contents = swh_storage.content_missing_per_sha1_git(contents) assert list(missing_contents) == [missing_cont.sha1_git, missing_cont2.sha1_git] missing_contents = swh_storage.content_missing_per_sha1_git([]) assert list(missing_contents) == [] def test_content_get_partition(self, swh_storage, swh_contents): """content_get_partition paginates results if limit exceeded""" expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] actual_contents = [] for i in range(16): actual_result = swh_storage.content_get_partition(i, 16) assert actual_result.next_page_token is None actual_contents.extend(actual_result.results) assert len(actual_contents) == len(expected_contents) for content in actual_contents: assert content in expected_contents assert content.ctime is None def test_content_get_partition_full(self, swh_storage, swh_contents): """content_get_partition for a single partition returns all available contents """ expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] actual_result = swh_storage.content_get_partition(0, 1) assert actual_result.next_page_token is None actual_contents = actual_result.results assert len(actual_contents) == len(expected_contents) for content in actual_contents: assert content in expected_contents def test_content_get_partition_empty(self, swh_storage, swh_contents): """content_get_partition when at least one of the partitions is empty""" expected_contents = { cont.sha1 for cont in swh_contents if cont.status != "absent" } # nb_partitions = smallest power of 2 such that at least one of # the partitions is empty nb_partitions = 1 << math.floor(math.log2(len(swh_contents)) + 1) seen_sha1s = [] for i in range(nb_partitions): actual_result = swh_storage.content_get_partition( i, nb_partitions, limit=len(swh_contents) + 1 ) for content in actual_result.results: seen_sha1s.append(content.sha1) # Limit is higher than the max number of results assert actual_result.next_page_token is None assert set(seen_sha1s) == expected_contents def test_content_get_partition_limit_none(self, swh_storage): """content_get_partition call with wrong limit input should fail""" with pytest.raises(StorageArgumentException, match="limit should not be None"): swh_storage.content_get_partition(1, 16, limit=None) def test_content_get_partition_pagination_generate(self, swh_storage, swh_contents): """content_get_partition returns contents within range provided""" expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] # retrieve contents actual_contents = [] for i in range(4): page_token = None while True: actual_result = swh_storage.content_get_partition( i, 4, limit=3, 
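                    # Pagination sketch: with limit=3 each call returns at
                    # most 3 contents plus a next_page_token; the enclosing
                    # while-loop keeps passing that token back until it comes
                    # back as None, at which point partition i is exhausted.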
page_token=page_token ) actual_contents.extend(actual_result.results) page_token = actual_result.next_page_token if page_token is None: break assert len(actual_contents) == len(expected_contents) for content in actual_contents: assert content in expected_contents @pytest.mark.parametrize("algo", sorted(DEFAULT_ALGORITHMS)) def test_content_get(self, swh_storage, sample_data, algo): cont1, cont2 = sample_data.contents[:2] swh_storage.content_add([cont1, cont2]) actual_contents = swh_storage.content_get( [getattr(cont1, algo), getattr(cont2, algo)], algo ) # we only retrieve the metadata so no data nor ctime within expected_contents = [attr.evolve(c, data=None) for c in [cont1, cont2]] assert actual_contents == expected_contents for content in actual_contents: assert content.ctime is None @pytest.mark.parametrize("algo", sorted(DEFAULT_ALGORITHMS)) def test_content_get_missing(self, swh_storage, sample_data, algo): cont1, cont2 = sample_data.contents[:2] assert cont1.sha1 != cont2.sha1 missing_cont = sample_data.skipped_content swh_storage.content_add([cont1, cont2]) actual_contents = swh_storage.content_get( [getattr(cont1, algo), getattr(cont2, algo), getattr(missing_cont, algo)], algo, ) expected_contents = [ attr.evolve(c, data=None) if c else None for c in [cont1, cont2, None] ] assert actual_contents == expected_contents def test_content_get_random(self, swh_storage, sample_data): cont, cont2, cont3 = sample_data.contents[:3] swh_storage.content_add([cont, cont2, cont3]) assert swh_storage.content_get_random() in { cont.sha1_git, cont2.sha1_git, cont3.sha1_git, } def test_directory_add(self, swh_storage, sample_data): content = sample_data.content directory = sample_data.directories[1] assert directory.entries[0].target == content.sha1_git swh_storage.content_add([content]) init_missing = list(swh_storage.directory_missing([directory.id])) assert [directory.id] == init_missing actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert ("directory", directory) in list( swh_storage.journal_writer.journal.objects ) actual_data = list(swh_storage.directory_ls(directory.id)) expected_data = list(transform_entries(swh_storage, directory)) for data in actual_data: assert data in expected_data after_missing = list(swh_storage.directory_missing([directory.id])) assert after_missing == [] swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["directory"] == 1 def test_directory_add_twice(self, swh_storage, sample_data): directory = sample_data.directories[1] actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", directory) ] actual_result = swh_storage.directory_add([directory]) assert actual_result == {"directory:add": 0} assert list(swh_storage.journal_writer.journal.objects) == [ ("directory", directory) ] def test_directory_ls_recursive(self, swh_storage, sample_data): # create consistent dataset regarding the directories we want to list content, content2 = sample_data.contents[:2] swh_storage.content_add([content, content2]) dir1, dir2, dir3 = sample_data.directories[:3] dir_ids = [d.id for d in [dir1, dir2, dir3]] init_missing = list(swh_storage.directory_missing(dir_ids)) assert init_missing == dir_ids actual_result = swh_storage.directory_add([dir1, dir2, dir3]) assert actual_result == {"directory:add": 3} # List directory containing one file actual_data = list(swh_storage.directory_ls(dir1.id, 
recursive=True)) expected_data = list(transform_entries(swh_storage, dir1)) for data in actual_data: assert data in expected_data # List directory containing a file and an unknown subdirectory actual_data = list(swh_storage.directory_ls(dir2.id, recursive=True)) expected_data = list(transform_entries(swh_storage, dir2)) for data in actual_data: assert data in expected_data # List directory containing both a known and unknown subdirectory, entries # should be both those of the directory and of the known subdir (up to contents) actual_data = list(swh_storage.directory_ls(dir3.id, recursive=True)) expected_data = list( itertools.chain( transform_entries(swh_storage, dir3), transform_entries(swh_storage, dir2, prefix=b"subdir/"), ) ) for data in actual_data: assert data in expected_data def test_directory_ls_non_recursive(self, swh_storage, sample_data): # create consistent dataset regarding the directories we want to list content, content2 = sample_data.contents[:2] swh_storage.content_add([content, content2]) dir1, dir2, dir3, _, dir5 = sample_data.directories[:5] dir_ids = [d.id for d in [dir1, dir2, dir3, dir5]] init_missing = list(swh_storage.directory_missing(dir_ids)) assert init_missing == dir_ids actual_result = swh_storage.directory_add([dir1, dir2, dir3, dir5]) assert actual_result == {"directory:add": 4} # List directory containing a file and an unknown subdirectory actual_data = list(swh_storage.directory_ls(dir1.id)) expected_data = list(transform_entries(swh_storage, dir1)) for data in actual_data: assert data in expected_data # List directory containing a single file actual_data = list(swh_storage.directory_ls(dir2.id)) expected_data = list(transform_entries(swh_storage, dir2)) for data in actual_data: assert data in expected_data # List directory containing a known subdirectory, entries should # only be those of the parent directory, not of the subdir actual_data = list(swh_storage.directory_ls(dir3.id)) expected_data = list(transform_entries(swh_storage, dir3)) for data in actual_data: assert data in expected_data def test_directory_ls_missing_content(self, swh_storage, sample_data): swh_storage.directory_add([sample_data.directory2]) assert list(swh_storage.directory_ls(sample_data.directory2.id)) == [ { "dir_id": sample_data.directory2.id, "length": None, "name": b"oof", "perms": 33188, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "target": sample_data.directory2.entries[0].target, "type": "file", }, ] def test_directory_ls_skipped_content(self, swh_storage, sample_data): swh_storage.directory_add([sample_data.directory2]) cont = SkippedContent( sha1_git=sample_data.directory2.entries[0].target, sha1=b"c" * 20, sha256=None, blake2s256=None, length=42, status="absent", reason="You need a premium subscription to access this content", ) swh_storage.skipped_content_add([cont]) assert list(swh_storage.directory_ls(sample_data.directory2.id)) == [ { "dir_id": sample_data.directory2.id, "length": 42, "name": b"oof", "perms": 33188, "sha1": b"c" * 20, "sha1_git": sample_data.directory2.entries[0].target, "sha256": None, "status": "absent", "target": sample_data.directory2.entries[0].target, "type": "file", }, ] def test_directory_entry_get_by_path(self, swh_storage, sample_data): cont, content2 = sample_data.contents[:2] dir1, dir2, dir3, dir4, dir5 = sample_data.directories[:5] # given dir_ids = [d.id for d in [dir1, dir2, dir3, dir4, dir5]] init_missing = list(swh_storage.directory_missing(dir_ids)) assert init_missing == dir_ids actual_result = 
swh_storage.directory_add([dir3, dir4]) assert actual_result == {"directory:add": 2} expected_entries = [ { "dir_id": dir3.id, "name": b"foo", "type": "file", "target": cont.sha1_git, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.content, "length": None, }, { "dir_id": dir3.id, "name": b"subdir", "type": "dir", "target": dir2.id, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.directory, "length": None, }, { "dir_id": dir3.id, "name": b"hello", "type": "file", "target": content2.sha1_git, "sha1": None, "sha1_git": None, "sha256": None, "status": None, "perms": from_disk.DentryPerms.content, "length": None, }, ] # when (all must be found here) for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( dir3.id, [entry.name] ) assert actual_entry == expected_entry # same, but deeper for entry, expected_entry in zip(dir3.entries, expected_entries): actual_entry = swh_storage.directory_entry_get_by_path( dir4.id, [b"subdir1", entry.name] ) expected_entry = expected_entry.copy() expected_entry["name"] = b"subdir1/" + expected_entry["name"] assert actual_entry == expected_entry # when (nothing should be found here since `dir` is not persisted.) for entry in dir2.entries: actual_entry = swh_storage.directory_entry_get_by_path( dir2.id, [entry.name] ) assert actual_entry is None def test_directory_get_entries_pagination(self, swh_storage, sample_data): # Note: this test assumes entries are returned in lexicographic order, # which is not actually guaranteed by the interface. dir_ = sample_data.directory3 entries = sorted(dir_.entries, key=lambda entry: entry.name) swh_storage.directory_add(sample_data.directories) # No pagination needed actual_data = swh_storage.directory_get_entries(dir_.id) assert actual_data == PagedResult(results=entries, next_page_token=None) # A little pagination actual_data = swh_storage.directory_get_entries(dir_.id, limit=2) assert actual_data.results == entries[0:2] assert actual_data.next_page_token is not None actual_data = swh_storage.directory_get_entries( dir_.id, page_token=actual_data.next_page_token ) assert actual_data == PagedResult(results=entries[2:], next_page_token=None) @pytest.mark.parametrize("limit", [1, 2, 3, 4, 5]) def test_directory_get_entries(self, swh_storage, sample_data, limit): dir_ = sample_data.directory3 swh_storage.directory_add(sample_data.directories) actual_data = list( stream_results(swh_storage.directory_get_entries, dir_.id, limit=limit,) ) assert sorted(actual_data) == sorted(dir_.entries) def test_directory_get_random(self, swh_storage, sample_data): dir1, dir2, dir3 = sample_data.directories[:3] swh_storage.directory_add([dir1, dir2, dir3]) assert swh_storage.directory_get_random() in { dir1.id, dir2.id, dir3.id, } def test_revision_add(self, swh_storage, sample_data): revision = sample_data.revision init_missing = swh_storage.revision_missing([revision.id]) assert list(init_missing) == [revision.id] actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 1} end_missing = swh_storage.revision_missing([revision.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", revision) ] # already there so nothing added actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["revision"] 
== 1 def test_revision_add_twice(self, swh_storage, sample_data): revision, revision2 = sample_data.revisions[:2] actual_result = swh_storage.revision_add([revision]) assert actual_result == {"revision:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", revision) ] actual_result = swh_storage.revision_add([revision, revision2]) assert actual_result == {"revision:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("revision", revision), ("revision", revision2), ] def test_revision_add_name_clash(self, swh_storage, sample_data): revision, revision2 = sample_data.revisions[:2] revision1 = attr.evolve( revision, author=Person( fullname=b"John Doe ", name=b"John Doe", email=b"john.doe@example.com", ), ) revision2 = attr.evolve( revision2, author=Person( fullname=b"John Doe ", name=b"John Doe ", email=b"john.doe@example.com ", ), ) actual_result = swh_storage.revision_add([revision1, revision2]) assert actual_result == {"revision:add": 2} def test_revision_get_order(self, swh_storage, sample_data): revision, revision2 = sample_data.revisions[:2] add_result = swh_storage.revision_add([revision, revision2]) assert add_result == {"revision:add": 2} # order 1 actual_revisions = swh_storage.revision_get([revision.id, revision2.id]) assert actual_revisions == [revision, revision2] # order 2 actual_revisions2 = swh_storage.revision_get([revision2.id, revision.id]) assert actual_revisions2 == [revision2, revision] def test_revision_log(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # rev4 -is-child-of-> rev3 -> rev1, (rev2 -> rev1) swh_storage.revision_add([revision1, revision2, revision3, revision4]) # when results = list(swh_storage.revision_log([revision4.id])) # for comparison purposes actual_results = [Revision.from_dict(r) for r in results] assert len(actual_results) == 4 # rev4 -child-> rev3 -> rev1, (rev2 -> rev1) assert actual_results == [revision4, revision3, revision1, revision2] def test_revision_log_with_limit(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # revision4 -is-child-of-> revision3 swh_storage.revision_add([revision3, revision4]) results = list(swh_storage.revision_log([revision4.id], 1)) actual_results = [Revision.from_dict(r) for r in results] assert len(actual_results) == 1 assert actual_results[0] == revision4 def test_revision_log_unknown_revision(self, swh_storage, sample_data): revision = sample_data.revision rev_log = list(swh_storage.revision_log([revision.id])) assert rev_log == [] def test_revision_shortlog(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # rev4 -is-child-of-> rev3 -> (rev1, rev2); rev2 -> rev1 swh_storage.revision_add([revision1, revision2, revision3, revision4]) results = list(swh_storage.revision_shortlog([revision4.id])) actual_results = [[id, tuple(parents)] for (id, parents) in results] assert len(actual_results) == 4 assert actual_results == [ [revision4.id, revision4.parents], [revision3.id, revision3.parents], [revision1.id, revision1.parents], [revision2.id, revision2.parents], ] def test_revision_shortlog_with_limit(self, swh_storage, sample_data): revision1, revision2, revision3, revision4 = sample_data.revisions[:4] # revision4 -is-child-of-> revision3 swh_storage.revision_add([revision1, revision2, revision3, revision4]) results = list(swh_storage.revision_shortlog([revision4.id], 1)) actual_results = [[id, tuple(parents)] 
            for (id, parents) in results]
        assert len(actual_results) == 1
        assert list(actual_results[0]) == [revision4.id, revision4.parents]

    def test_revision_get(self, swh_storage, sample_data):
        revision, revision2 = sample_data.revisions[:2]
        swh_storage.revision_add([revision])

        actual_revisions = swh_storage.revision_get([revision.id, revision2.id])

        assert len(actual_revisions) == 2
        assert actual_revisions == [revision, None]

    def test_revision_get_no_parents(self, swh_storage, sample_data):
        revision = sample_data.revision
        swh_storage.revision_add([revision])

        actual_revision = swh_storage.revision_get([revision.id])[0]

        assert revision.parents == ()
        assert actual_revision.parents == ()  # no parents on this one

    def test_revision_get_random(self, swh_storage, sample_data):
        revision1, revision2, revision3 = sample_data.revisions[:3]

        swh_storage.revision_add([revision1, revision2, revision3])

        assert swh_storage.revision_get_random() in {
            revision1.id,
            revision2.id,
            revision3.id,
        }

+    def test_revision_missing_many(self, swh_storage, sample_data):
+        """Large number of revision ids to check can cause ScyllaDB to reject
+        queries."""
+        revision = sample_data.revision
+        ids = [bytes([b1, b2]) * 10 for b1 in range(256) for b2 in range(10)]
+        ids.append(revision.id)
+        ids.sort()
+        init_missing = swh_storage.revision_missing(ids)
+        assert set(init_missing) == set(ids)
+
+        actual_result = swh_storage.revision_add([revision])
+        assert actual_result == {"revision:add": 1}
+
+        end_missing = swh_storage.revision_missing(ids)
+        assert set(end_missing) == set(ids) - {revision.id}
+
    def test_extid_add_git(self, swh_storage, sample_data):

        gitids = [
            revision.id
            for revision in sample_data.revisions
            if revision.type.value == "git"
        ]

        extids = [
            ExtID(
                extid=gitid,
                extid_type="git",
                target=CoreSWHID(object_id=gitid, object_type=ObjectType.REVISION,),
            )
            for gitid in gitids
        ]

        assert swh_storage.extid_get_from_extid("git", gitids) == []
        assert swh_storage.extid_get_from_target(ObjectType.REVISION, gitids) == []

        summary = swh_storage.extid_add(extids)
        assert summary == {"extid:add": len(gitids)}

        assert swh_storage.extid_get_from_extid("git", gitids) == extids
        assert swh_storage.extid_get_from_target(ObjectType.REVISION, gitids) == extids

        assert swh_storage.extid_get_from_extid("hg", gitids) == []
        assert swh_storage.extid_get_from_target(ObjectType.RELEASE, gitids) == []

        # check ExtIDs have been added to the journal
        extids_in_journal = [
            obj
            for (obj_type, obj) in swh_storage.journal_writer.journal.objects
            if obj_type == "extid"
        ]
        assert extids == extids_in_journal

    def test_extid_add_hg(self, swh_storage, sample_data):
        def get_node(revision):
            node = None
            if revision.extra_headers:
                node = dict(revision.extra_headers).get(b"node")
            if node is None and revision.metadata:
                node = hash_to_bytes(revision.metadata.get("node"))
            return node

        swhids = [
            revision.id
            for revision in sample_data.revisions
            if revision.type.value == "hg"
        ]
        extids = [
            get_node(revision)
            for revision in sample_data.revisions
            if revision.type.value == "hg"
        ]

        assert swh_storage.extid_get_from_extid("hg", extids) == []
        assert swh_storage.extid_get_from_target(ObjectType.REVISION, swhids) == []

        extid_objs = [
            ExtID(
                extid=hgid,
                extid_type="hg",
                extid_version=1,
                target=CoreSWHID(object_id=swhid, object_type=ObjectType.REVISION,),
            )
            for hgid, swhid in zip(extids, swhids)
        ]
        summary = swh_storage.extid_add(extid_objs)
        assert summary == {"extid:add": len(swhids)}

        assert swh_storage.extid_get_from_extid("hg", extids) == extid_objs
        assert (
swh_storage.extid_get_from_target(ObjectType.REVISION, swhids) == extid_objs ) assert swh_storage.extid_get_from_extid("git", extids) == [] assert swh_storage.extid_get_from_target(ObjectType.RELEASE, swhids) == [] # check ExtIDs have been added to the journal extids_in_journal = [ obj for (obj_type, obj) in swh_storage.journal_writer.journal.objects if obj_type == "extid" ] assert extid_objs == extids_in_journal def test_extid_add_twice(self, swh_storage, sample_data): gitids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] extids = [ ExtID( extid=gitid, extid_type="git", target=CoreSWHID(object_id=gitid, object_type=ObjectType.REVISION,), ) for gitid in gitids ] summary = swh_storage.extid_add(extids) assert summary == {"extid:add": len(gitids)} # add them again, should be noop summary = swh_storage.extid_add(extids) # assert summary == {"extid:add": 0} assert swh_storage.extid_get_from_extid("git", gitids) == extids assert swh_storage.extid_get_from_target(ObjectType.REVISION, gitids) == extids def test_extid_add_extid_multicity(self, swh_storage, sample_data): ids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] extids = [ ExtID( extid=extid, extid_type="git", extid_version=2, target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] swh_storage.extid_add(extids) # try to add "modified-extid" versions, should be added extids2 = [ ExtID( extid=extid, extid_type="hg", extid_version=2, target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] swh_storage.extid_add(extids2) assert swh_storage.extid_get_from_extid("git", ids) == extids assert swh_storage.extid_get_from_extid("hg", ids) == extids2 assert set(swh_storage.extid_get_from_target(ObjectType.REVISION, ids)) == { *extids, *extids2, } def test_extid_add_target_multicity(self, swh_storage, sample_data): ids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] extids = [ ExtID( extid=extid, extid_type="git", target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] swh_storage.extid_add(extids) # try to add "modified" versions, should be added extids2 = [ ExtID( extid=extid, extid_type="git", target=CoreSWHID(object_id=extid, object_type=ObjectType.RELEASE,), ) for extid in ids ] swh_storage.extid_add(extids2) assert set(swh_storage.extid_get_from_extid("git", ids)) == {*extids, *extids2} assert swh_storage.extid_get_from_target(ObjectType.REVISION, ids) == extids assert swh_storage.extid_get_from_target(ObjectType.RELEASE, ids) == extids2 def test_extid_version_behavior(self, swh_storage, sample_data): ids = [ revision.id for revision in sample_data.revisions if revision.type.value == "git" ] # Insert extids with several different versions extids = [ ExtID( extid=extid, extid_type="git", target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] + [ ExtID( extid=extid, extid_type="git", extid_version=1, target=CoreSWHID(object_id=extid, object_type=ObjectType.REVISION,), ) for extid in ids ] swh_storage.extid_add(extids) # Check that both versions get returned for git_id in ids: objs = swh_storage.extid_get_from_extid("git", [git_id]) assert len(objs) == 2 assert set(obj.extid_version for obj in objs) == {0, 1} for swhid in ids: objs = swh_storage.extid_get_from_target(ObjectType.REVISION, [swhid]) assert len(objs) == 2 assert set(obj.extid_version for obj in objs) == {0, 1} def 
test_release_add(self, swh_storage, sample_data): release, release2 = sample_data.releases[:2] init_missing = swh_storage.release_missing([release.id, release2.id]) assert list(init_missing) == [release.id, release2.id] actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 2} end_missing = swh_storage.release_missing([release.id, release2.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("release", release), ("release", release2), ] # already present so nothing added actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 0} swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["release"] == 2 def test_release_add_no_author_date(self, swh_storage, sample_data): full_release = sample_data.release release = attr.evolve(full_release, author=None, date=None) actual_result = swh_storage.release_add([release]) assert actual_result == {"release:add": 1} end_missing = swh_storage.release_missing([release.id]) assert list(end_missing) == [] assert list(swh_storage.journal_writer.journal.objects) == [ ("release", release) ] def test_release_add_twice(self, swh_storage, sample_data): release, release2 = sample_data.releases[:2] actual_result = swh_storage.release_add([release]) assert actual_result == {"release:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("release", release) ] actual_result = swh_storage.release_add([release, release2, release, release2]) assert actual_result == {"release:add": 1} assert set(swh_storage.journal_writer.journal.objects) == set( [("release", release), ("release", release2),] ) def test_release_add_name_clash(self, swh_storage, sample_data): release, release2 = [ attr.evolve( c, author=Person( fullname=b"John Doe ", name=b"John Doe", email=b"john.doe@example.com", ), ) for c in sample_data.releases[:2] ] actual_result = swh_storage.release_add([release, release2]) assert actual_result == {"release:add": 2} def test_release_get(self, swh_storage, sample_data): release, release2, release3 = sample_data.releases[:3] # given swh_storage.release_add([release, release2]) # when actual_releases = swh_storage.release_get([release.id, release2.id]) # then assert actual_releases == [release, release2] unknown_releases = swh_storage.release_get([release3.id]) assert unknown_releases[0] is None def test_release_get_order(self, swh_storage, sample_data): release, release2 = sample_data.releases[:2] add_result = swh_storage.release_add([release, release2]) assert add_result == {"release:add": 2} # order 1 actual_releases = swh_storage.release_get([release.id, release2.id]) assert actual_releases == [release, release2] # order 2 actual_releases2 = swh_storage.release_get([release2.id, release.id]) assert actual_releases2 == [release2, release] def test_release_get_random(self, swh_storage, sample_data): release, release2, release3 = sample_data.releases[:3] swh_storage.release_add([release, release2, release3]) assert swh_storage.release_get_random() in { release.id, release2.id, release3.id, } def test_origin_add(self, swh_storage, sample_data): origins = list(sample_data.origins) origin_urls = [o.url for o in origins] assert swh_storage.origin_get(origin_urls) == [None] * len(origins) stats = swh_storage.origin_add(origins) assert stats == {"origin:add": len(origin_urls)} actual_origins = swh_storage.origin_get(origin_urls) assert actual_origins == origins assert 
set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin) for origin in origins] ) swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["origin"] == len(origins) def test_origin_add_twice(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] add1 = swh_storage.origin_add([origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add1 == {"origin:add": 2} add2 = swh_storage.origin_add([origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add2 == {"origin:add": 0} def test_origin_add_twice_at_once(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] add1 = swh_storage.origin_add([origin, origin2, origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add1 == {"origin:add": 2} add2 = swh_storage.origin_add([origin, origin2, origin, origin2]) assert set(swh_storage.journal_writer.journal.objects) == set( [("origin", origin), ("origin", origin2),] ) assert add2 == {"origin:add": 0} def test_origin_get(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] assert swh_storage.origin_get([origin.url]) == [None] swh_storage.origin_add([origin]) actual_origins = swh_storage.origin_get([origin.url]) assert actual_origins == [origin] actual_origins = swh_storage.origin_get([origin.url, "not://exists"]) assert actual_origins == [origin, None] def _generate_random_visits(self, nb_visits=100, start=0, end=7): """Generate random visits within the last 2 months (to avoid computations) """ visits = [] today = now() for weeks in range(nb_visits, 0, -1): hours = random.randint(0, 24) minutes = random.randint(0, 60) seconds = random.randint(0, 60) days = random.randint(0, 28) weeks = random.randint(start, end) date_visit = today - timedelta( weeks=weeks, hours=hours, minutes=minutes, seconds=seconds, days=days ) visits.append(date_visit) return visits def test_origin_visit_get__unknown_origin(self, swh_storage): actual_page = swh_storage.origin_visit_get("foo") assert actual_page.next_page_token is None assert actual_page.results == [] assert actual_page == PagedResult() def test_origin_visit_get__validation_failure(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) with pytest.raises( StorageArgumentException, match="page_token must be a string" ): swh_storage.origin_visit_get(origin.url, page_token=10) # not bytes with pytest.raises( StorageArgumentException, match="order must be a ListOrder value" ): swh_storage.origin_visit_get(origin.url, order="foobar") # wrong order def test_origin_visit_get_all(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) ov1, ov2, ov3 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), ] ) # order asc, no token, no limit actual_page = swh_storage.origin_visit_get(origin.url) assert actual_page.next_page_token is None assert actual_page.results == [ov1, ov2, ov3] # order asc, no token, limit actual_page = swh_storage.origin_visit_get(origin.url, limit=2) next_page_token = actual_page.next_page_token assert 
next_page_token is not None assert actual_page.results == [ov1, ov2] # order asc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ov3] # order asc, no token, limit actual_page = swh_storage.origin_visit_get(origin.url, limit=1) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov1] # order asc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ov2, ov3] # order asc, token, limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, limit=2 ) assert actual_page.next_page_token is None assert actual_page.results == [ov2, ov3] actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov2] actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, limit=1 ) assert actual_page.next_page_token is None assert actual_page.results == [ov3] # order desc, no token, no limit actual_page = swh_storage.origin_visit_get(origin.url, order=ListOrder.DESC) assert actual_page.next_page_token is None assert actual_page.results == [ov3, ov2, ov1] # order desc, no token, limit actual_page = swh_storage.origin_visit_get( origin.url, limit=2, order=ListOrder.DESC ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov3, ov2] # order desc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ov1] # order desc, no token, limit actual_page = swh_storage.origin_visit_get( origin.url, limit=1, order=ListOrder.DESC ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov3] # order desc, token, no limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ov2, ov1] # order desc, token, limit actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ov2] actual_page = swh_storage.origin_visit_get( origin.url, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ov1] def test_origin_visit_status_get__unknown_cases(self, swh_storage, sample_data): origin = sample_data.origin actual_page = swh_storage.origin_visit_status_get("foobar", 1) assert actual_page.next_page_token is None assert actual_page.results == [] actual_page = swh_storage.origin_visit_status_get(origin.url, 1) assert actual_page.next_page_token is None assert actual_page.results == [] origin = sample_data.origin swh_storage.origin_add([origin]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), ] )[0] actual_page = swh_storage.origin_visit_status_get(origin.url, ov1.visit + 10) assert actual_page.next_page_token is None assert actual_page.results 
== [] def test_origin_visit_status_add_unknown_type(self, swh_storage, sample_data): ov = OriginVisit( origin=sample_data.origin.url, date=now(), type=sample_data.type_visit1, visit=42, ) ovs = OriginVisitStatus( origin=ov.origin, visit=ov.visit, date=now(), status="created", snapshot=None, ) with pytest.raises(StorageArgumentException): swh_storage.origin_visit_status_add([ovs]) swh_storage.origin_add([sample_data.origin]) with pytest.raises(StorageArgumentException): swh_storage.origin_visit_status_add([ovs]) swh_storage.origin_visit_add([ov]) swh_storage.origin_visit_status_add([ovs]) def test_origin_visit_status_get_all(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) date_visit3 = round_to_milliseconds(now()) date_visit1 = date_visit3 - datetime.timedelta(hours=2) date_visit2 = date_visit3 - datetime.timedelta(hours=1) assert date_visit1 < date_visit2 < date_visit3 ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=date_visit1, type=sample_data.type_visit1, ), ] )[0] ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit1, type=ov1.type, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit2, type=ov1.type, status="partial", snapshot=None, ) ovs3 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit3, type=ov1.type, status="full", snapshot=sample_data.snapshot.id, metadata={}, ) swh_storage.origin_visit_status_add([ovs2, ovs3]) # order asc, no token, no limit actual_page = swh_storage.origin_visit_status_get(origin.url, ov1.visit) assert actual_page.next_page_token is None assert actual_page.results == [ovs1, ovs2, ovs3] # order asc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, limit=2 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs1, ovs2] # order asc, token, no limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ovs3] # order asc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs1] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token ) assert actual_page.next_page_token is None assert actual_page.results == [ovs2, ovs3] # order asc, token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, limit=2 ) assert actual_page.next_page_token is None assert actual_page.results == [ovs2, ovs3] # order asc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, limit=2 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs1, ovs2] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, limit=1 ) assert actual_page.next_page_token is None assert actual_page.results == [ovs3] # order desc, no token, no limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs3, ovs2, ovs1] # order desc, no token, limit actual_page = swh_storage.origin_visit_status_get( 
origin.url, ov1.visit, limit=2, order=ListOrder.DESC ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs3, ovs2] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs1] # order desc, no token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, order=ListOrder.DESC, limit=1 ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs3] # order desc, token, no limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs2, ovs1] # order desc, token, limit actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC, limit=1, ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [ovs2] actual_page = swh_storage.origin_visit_status_get( origin.url, ov1.visit, page_token=next_page_token, order=ListOrder.DESC ) assert actual_page.next_page_token is None assert actual_page.results == [ovs1] def test_origin_visit_status_get_random(self, swh_storage, sample_data): origins = sample_data.origins[:2] swh_storage.origin_add(origins) # Add some random visits within the selection range visits = self._generate_random_visits() visit_type = "git" # Add visits to those origins for origin in origins: for date_visit in visits: visit = swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) swh_storage.refresh_stat_counters() stats = swh_storage.stat_counters() assert stats["origin"] == len(origins) assert stats["origin_visit"] == len(origins) * len(visits) random_ovs = swh_storage.origin_visit_status_get_random(visit_type) assert random_ovs assert random_ovs.origin is not None assert random_ovs.origin in [o.url for o in origins] assert random_ovs.type is not None def test_origin_visit_status_get_random_nothing_found( self, swh_storage, sample_data ): origins = sample_data.origins swh_storage.origin_add(origins) visit_type = "hg" # Add some visits outside of the random generation selection so nothing # will be found by the random selection visits = self._generate_random_visits(nb_visits=3, start=13, end=24) for origin in origins: for date_visit in visits: visit = swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=date_visit, type=visit_type,)] )[0] swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=visit.visit, date=now(), status="full", snapshot=None, ) ] ) random_origin_visit = swh_storage.origin_visit_status_get_random(visit_type) assert random_origin_visit is None def test_origin_get_by_sha1(self, swh_storage, sample_data): origin = sample_data.origin assert swh_storage.origin_get([origin.url])[0] is None swh_storage.origin_add([origin]) origins = list(swh_storage.origin_get_by_sha1([sha1(origin.url)])) assert len(origins) == 1 assert origins[0]["url"] == origin.url def test_origin_get_by_sha1_not_found(self, swh_storage, sample_data): unknown_origin = sample_data.origin assert 
swh_storage.origin_get([unknown_origin.url])[0] is None origins = list(swh_storage.origin_get_by_sha1([sha1(unknown_origin.url)])) assert len(origins) == 1 assert origins[0] is None def test_origin_search_single_result(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] actual_page = swh_storage.origin_search(origin.url) assert actual_page.next_page_token is None assert actual_page.results == [] actual_page = swh_storage.origin_search(origin.url, regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [] swh_storage.origin_add([origin]) actual_page = swh_storage.origin_search(origin.url) assert actual_page.next_page_token is None assert actual_page.results == [origin] actual_page = swh_storage.origin_search(f".{origin.url[1:-1]}.", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin] swh_storage.origin_add([origin2]) actual_page = swh_storage.origin_search(origin2.url) assert actual_page.next_page_token is None assert actual_page.results == [origin2] actual_page = swh_storage.origin_search(f".{origin2.url[1:-1]}.", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_no_regexp(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) # no pagination actual_page = swh_storage.origin_search("/") assert actual_page.next_page_token is None assert actual_page.results == [origin, origin2] # offset=0 actual_page = swh_storage.origin_search("/", page_token=None, limit=1) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [origin] # offset=1 actual_page = swh_storage.origin_search( "/", page_token=next_page_token, limit=1 ) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_regexp_substring(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) # no pagination actual_page = swh_storage.origin_search("/", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin, origin2] # offset=0 actual_page = swh_storage.origin_search( "/", page_token=None, limit=1, regexp=True ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [origin] # offset=1 actual_page = swh_storage.origin_search( "/", page_token=next_page_token, limit=1, regexp=True ) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_regexp_fullstring(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) # no pagination actual_page = swh_storage.origin_search(".*/.*", regexp=True) assert actual_page.next_page_token is None assert actual_page.results == [origin, origin2] # offset=0 actual_page = swh_storage.origin_search( ".*/.*", page_token=None, limit=1, regexp=True ) next_page_token = actual_page.next_page_token assert next_page_token is not None assert actual_page.results == [origin] # offset=1 actual_page = swh_storage.origin_search( ".*/.*", page_token=next_page_token, limit=1, regexp=True ) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_no_visit_types(self, swh_storage, sample_data): origin = sample_data.origins[0] swh_storage.origin_add([origin]) actual_page = 
swh_storage.origin_search(origin.url, visit_types=["git"]) assert actual_page.next_page_token is None assert actual_page.results == [] def test_origin_search_with_visit_types(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] swh_storage.origin_add([origin, origin2]) swh_storage.origin_visit_add( [ OriginVisit(origin=origin.url, date=now(), type="git"), OriginVisit(origin=origin2.url, date=now(), type="svn"), ] ) actual_page = swh_storage.origin_search(origin.url, visit_types=["git"]) assert actual_page.next_page_token is None assert actual_page.results == [origin] actual_page = swh_storage.origin_search(origin2.url, visit_types=["svn"]) assert actual_page.next_page_token is None assert actual_page.results == [origin2] def test_origin_search_multiple_visit_types(self, swh_storage, sample_data): origin = sample_data.origins[0] swh_storage.origin_add([origin]) def _add_visit_type(visit_type): swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=now(), type=visit_type)] ) def _check_visit_types(visit_types): actual_page = swh_storage.origin_search(origin.url, visit_types=visit_types) assert actual_page.next_page_token is None assert actual_page.results == [origin] _add_visit_type("git") _check_visit_types(["git"]) _check_visit_types(["git", "hg"]) _add_visit_type("hg") _check_visit_types(["hg"]) _check_visit_types(["git", "hg"]) def test_origin_visit_add(self, swh_storage, sample_data): origin1 = sample_data.origins[1] swh_storage.origin_add([origin1]) date_visit = now() date_visit2 = date_visit + datetime.timedelta(minutes=1) date_visit = round_to_milliseconds(date_visit) date_visit2 = round_to_milliseconds(date_visit2) visit1 = OriginVisit( origin=origin1.url, date=date_visit, type=sample_data.type_visit1, ) visit2 = OriginVisit( origin=origin1.url, date=date_visit2, type=sample_data.type_visit2, ) # add once ov1, ov2 = swh_storage.origin_visit_add([visit1, visit2]) # then again (will be ignored as they already exist) origin_visit1, origin_visit2 = swh_storage.origin_visit_add([ov1, ov2]) assert ov1 == origin_visit1 assert ov2 == origin_visit2 ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit, type=ov1.type, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=date_visit2, type=ov2.type, status="created", snapshot=None, ) actual_visits = swh_storage.origin_visit_get(origin1.url).results expected_visits = [ov1, ov2] assert len(expected_visits) == len(actual_visits) for visit in expected_visits: assert visit in actual_visits actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = list( [("origin", origin1)] + [("origin_visit", visit) for visit in expected_visits] * 2 + [("origin_visit_status", ovs) for ovs in [ovs1, ovs2]] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_add_validation(self, swh_storage, sample_data): """Unknown origin when adding visits should raise""" visit = attr.evolve(sample_data.origin_visit, origin="something-unknonw") with pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_add([visit]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects def test_origin_visit_status_add_validation(self, swh_storage): """Wrong origin_visit_status input should raise storage argument error""" date_visit = now() visit_status1 = OriginVisitStatus( origin="unknown-origin-url", visit=10, date=date_visit, status="full", snapshot=None, ) with 
pytest.raises(StorageArgumentException, match="Unknown origin"): swh_storage.origin_visit_status_add([visit_status1]) objects = list(swh_storage.journal_writer.journal.objects) assert not objects def test_origin_visit_status_add(self, swh_storage, sample_data): """Correct origin visit statuses should add a new visit status """ snapshot = sample_data.snapshot origin1 = sample_data.origins[1] origin2 = Origin(url="new-origin") swh_storage.origin_add([origin1, origin2]) ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), OriginVisit( origin=origin2.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), ] ) ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit1, type=ov1.type, status="created", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=sample_data.date_visit2, type=ov2.type, status="created", snapshot=None, ) date_visit_now = round_to_milliseconds(now()) visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, type=ov1.type, status="full", snapshot=snapshot.id, ) date_visit_now = round_to_milliseconds(now()) visit_status2 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=date_visit_now, type=ov2.type, status="ongoing", snapshot=None, metadata={"intrinsic": "something"}, ) stats = swh_storage.origin_visit_status_add([visit_status1, visit_status2]) assert stats == {"origin_visit_status:add": 2} visit = swh_storage.origin_visit_get_latest(origin1.url, require_snapshot=True) visit_status = swh_storage.origin_visit_status_get_latest( origin1.url, visit.visit, require_snapshot=True ) assert visit_status == visit_status1 visit = swh_storage.origin_visit_get_latest(origin2.url, require_snapshot=False) visit_status = swh_storage.origin_visit_status_get_latest( origin2.url, visit.visit, require_snapshot=False ) assert origin2.url != origin1.url assert visit_status == visit_status2 actual_objects = list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1, origin2] expected_visits = [ov1, ov2] expected_visit_statuses = [ovs1, ovs2, visit_status1, visit_status2] expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_status_add_twice(self, swh_storage, sample_data): """Correct origin visit statuses should add a new visit status """ snapshot = sample_data.snapshot origin1 = sample_data.origins[1] swh_storage.origin_add([origin1]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), ] )[0] ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit1, type=ov1.type, status="created", snapshot=None, ) date_visit_now = round_to_milliseconds(now()) visit_status1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_visit_now, type=ov1.type, status="full", snapshot=snapshot.id, ) stats = swh_storage.origin_visit_status_add([visit_status1]) assert stats == {"origin_visit_status:add": 1} # second call will ignore existing entries (will send to storage though) stats = swh_storage.origin_visit_status_add([visit_status1]) # ...so the storage still returns it as an addition assert stats == {"origin_visit_status:add": 1} visit_status = 
swh_storage.origin_visit_status_get_latest(ov1.origin, ov1.visit) assert visit_status == visit_status1 actual_objects = list(swh_storage.journal_writer.journal.objects) expected_origins = [origin1] expected_visits = [ov1] expected_visit_statuses = [ovs1, visit_status1, visit_status1] # write twice in the journal expected_objects = ( [("origin", o) for o in expected_origins] + [("origin_visit", v) for v in expected_visits] + [("origin_visit_status", ovs) for ovs in expected_visit_statuses] ) for obj in expected_objects: assert obj in actual_objects def test_origin_visit_find_by_date(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit1, ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit3, type=sample_data.type_visit2, ) visit3 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit3, ) ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) ovs1 = OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=sample_data.date_visit2, status="ongoing", snapshot=None, ) ovs2 = OriginVisitStatus( origin=origin.url, visit=ov2.visit, date=sample_data.date_visit3, status="ongoing", snapshot=None, ) ovs3 = OriginVisitStatus( origin=origin.url, visit=ov3.visit, date=sample_data.date_visit2, status="ongoing", snapshot=None, ) swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3]) # Simple case actual_visit = swh_storage.origin_visit_find_by_date( origin.url, sample_data.date_visit3 ) assert actual_visit == ov2 # There are two visits at the same date, the latest must be returned actual_visit = swh_storage.origin_visit_find_by_date( origin.url, sample_data.date_visit2 ) assert actual_visit == ov3 def test_origin_visit_find_by_date__unknown_origin(self, swh_storage, sample_data): actual_visit = swh_storage.origin_visit_find_by_date( "foo", sample_data.date_visit2 ) assert actual_visit is None def test_origin_visit_get_by(self, swh_storage, sample_data): snapshot = sample_data.snapshot origins = sample_data.origins[:2] swh_storage.origin_add(origins) origin_url, origin_url2 = [o.url for o in origins] visit = OriginVisit( origin=origin_url, date=sample_data.date_visit2, type=sample_data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=snapshot.id, ) ] ) # Add some other {origin, visit} entries visit2 = OriginVisit( origin=origin_url, date=sample_data.date_visit3, type=sample_data.type_visit3, ) visit3 = OriginVisit( origin=origin_url2, date=sample_data.date_visit3, type=sample_data.type_visit3, ) swh_storage.origin_visit_add([visit2, visit3]) # when visit1_metadata = { "contents": 42, "directories": 22, } swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin_url, visit=origin_visit1.visit, date=now(), status="full", snapshot=snapshot.id, metadata=visit1_metadata, ) ] ) actual_visit = swh_storage.origin_visit_get_by(origin_url, origin_visit1.visit) assert actual_visit == origin_visit1 def test_origin_visit_get_by__no_result(self, swh_storage, sample_data): actual_visit = swh_storage.origin_visit_get_by("unknown", 10) # unknown origin assert actual_visit is None origin = sample_data.origin swh_storage.origin_add([origin]) actual_visit = swh_storage.origin_visit_get_by(origin.url, 999) 
# unknown visit assert actual_visit is None def test_origin_visit_get_latest_edge_cases(self, swh_storage, sample_data): # unknown origin so no result assert swh_storage.origin_visit_get_latest("unknown-origin") is None # unknown type so no result origin = sample_data.origin swh_storage.origin_add([origin]) assert swh_storage.origin_visit_get_latest(origin.url, type="unknown") is None # unknown allowed statuses should raise with pytest.raises(StorageArgumentException, match="Unknown allowed statuses"): swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["unknown"] ) def test_origin_visit_get_latest_filter_type(self, swh_storage, sample_data): """Filtering origin visit get latest with filter type should be ok """ origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type="hg", ) date_now = round_to_milliseconds(now()) visit3 = OriginVisit(origin=origin.url, date=date_now, type="hg",) assert sample_data.date_visit1 < sample_data.date_visit2 assert sample_data.date_visit2 < date_now ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) # Check type filter is ok actual_visit = swh_storage.origin_visit_get_latest(origin.url, type="git") assert actual_visit == ov1 actual_visit = swh_storage.origin_visit_get_latest(origin.url, type="hg") assert actual_visit == ov3 actual_visit_unknown_type = swh_storage.origin_visit_get_latest( origin.url, type="npm", # no visit matching that type ) assert actual_visit_unknown_type is None def test_origin_visit_get_latest(self, swh_storage, sample_data): empty_snapshot, complete_snapshot = sample_data.snapshots[1:3] origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type="hg", ) date_now = round_to_milliseconds(now()) visit3 = OriginVisit(origin=origin.url, date=date_now, type="hg",) assert visit1.date < visit2.date assert visit2.date < visit3.date ov1, ov2, ov3 = swh_storage.origin_visit_add([visit1, visit2, visit3]) # no filters, latest visit is the last one (whose date is most recent) actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # 3 visits, none has snapshot so nothing is returned actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) assert actual_visit is None # visit are created with "created" status, so nothing will get returned actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["partial"] ) assert actual_visit is None # visit are created with "created" status, so most recent again actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["created"] ) assert actual_visit == ov3 # Add snapshot to visit1; require_snapshot=True makes it return first visit swh_storage.snapshot_add([complete_snapshot]) visit_status_with_snapshot = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=round_to_milliseconds(now()), type=ov1.type, status="ongoing", snapshot=complete_snapshot.id, ) swh_storage.origin_visit_status_add([visit_status_with_snapshot]) # only the first visit has a snapshot now actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) assert actual_visit == ov1 # only the first visit has a status ongoing now actual_visit = 
swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["ongoing"] ) assert actual_visit == ov1 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov1.visit, require_snapshot=True ) assert actual_visit_status == visit_status_with_snapshot # ... and require_snapshot=False (defaults) still returns latest visit (3rd) actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=False ) assert actual_visit == ov3 # no specific filter, this returns as before the latest visit actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # Status filter: all three visits are status=ongoing, so no visit # returned actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"] ) assert actual_visit is None visit_status1_full = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=round_to_milliseconds(now()), type=ov1.type, status="full", snapshot=complete_snapshot.id, ) # Mark the first visit as completed and check status filter again swh_storage.origin_visit_status_add([visit_status1_full]) # only the first visit has the full status actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"] ) assert actual_visit == ov1 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov1.visit, allowed_statuses=["full"] ) assert actual_visit_status == visit_status1_full # no specific filter, this returns as before the latest visit actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # Add snapshot to visit2 and check that the new snapshot is returned swh_storage.snapshot_add([empty_snapshot]) visit_status2_full = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=round_to_milliseconds(now()), type=ov2.type, status="ongoing", snapshot=empty_snapshot.id, ) swh_storage.origin_visit_status_add([visit_status2_full]) actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) # 2nd visit is most recent with a snapshot assert actual_visit == ov2 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov2.visit, require_snapshot=True ) assert actual_visit_status == visit_status2_full # no specific filter, this returns as before the latest visit, 3rd one actual_origin = swh_storage.origin_visit_get_latest(origin.url) assert actual_origin == ov3 # full status is still the first visit actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"] ) assert actual_visit == ov1 # Add snapshot to visit3 (same date as visit2) visit_status3_with_snapshot = OriginVisitStatus( origin=ov3.origin, visit=ov3.visit, date=round_to_milliseconds(now()), type=ov3.type, status="ongoing", snapshot=complete_snapshot.id, ) swh_storage.origin_visit_status_add([visit_status3_with_snapshot]) # full status is still the first visit actual_visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["full"], require_snapshot=True, ) assert actual_visit == ov1 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, visit=actual_visit.visit, allowed_statuses=["full"], require_snapshot=True, ) assert actual_visit_status == visit_status1_full # most recent is still the 3rd visit actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov3 # 3rd visit has a snapshot now, so it's elected actual_visit = swh_storage.origin_visit_get_latest( origin.url, require_snapshot=True ) assert actual_visit == 
ov3 actual_visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov3.visit, require_snapshot=True ) assert actual_visit_status == visit_status3_with_snapshot def test_origin_visit_get_latest__same_date(self, swh_storage, sample_data): empty_snapshot, complete_snapshot = sample_data.snapshots[1:3] origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) visit2 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="hg", ) ov1, ov2 = swh_storage.origin_visit_add([visit1, visit2]) # ties should be broken by using the visit id actual_visit = swh_storage.origin_visit_get_latest(origin.url) assert actual_visit == ov2 def test_origin_visit_get_latest__not_last(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) visit1, visit2 = sample_data.origin_visits[:2] assert visit1.origin == origin.url swh_storage.origin_visit_add([visit1]) ov1 = swh_storage.origin_visit_get_latest(origin.url) # Add snapshot to visit1, latest snapshot = visit 1 snapshot complete_snapshot = sample_data.snapshots[2] swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=visit2.date, status="partial", snapshot=None, ) ] ) assert visit1.date < visit2.date # no snapshot associated to the visit, so None visit = swh_storage.origin_visit_get_latest( origin.url, allowed_statuses=["partial"], require_snapshot=True, ) assert visit is None date_now = now() assert visit2.date < date_now swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=date_now, status="full", snapshot=complete_snapshot.id, ) ] ) swh_storage.origin_visit_add( [OriginVisit(origin=origin.url, date=now(), type=visit1.type,)] ) visit = swh_storage.origin_visit_get_latest(origin.url, require_snapshot=True) assert visit is not None def test_origin_visit_status_get_latest__validation(self, swh_storage, sample_data): origin = sample_data.origin swh_storage.origin_add([origin]) visit1 = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type="git", ) # unknown allowed statuses should raise with pytest.raises(StorageArgumentException, match="Unknown allowed statuses"): swh_storage.origin_visit_status_get_latest( origin.url, visit1.visit, allowed_statuses=["unknown"] ) def test_origin_visit_status_get_latest(self, swh_storage, sample_data): snapshot = sample_data.snapshots[2] origin1 = sample_data.origin swh_storage.origin_add([origin1]) # to have some reference visits ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ), OriginVisit( origin=origin1.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ), ] ) swh_storage.snapshot_add([snapshot]) date_now = round_to_milliseconds(now()) assert sample_data.date_visit1 < sample_data.date_visit2 assert sample_data.date_visit2 < date_now ovs1 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit1, type=ov1.type, status="partial", snapshot=None, ) ovs2 = OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=sample_data.date_visit2, type=ov1.type, status="ongoing", snapshot=None, ) ovs3 = OriginVisitStatus( origin=ov2.origin, visit=ov2.visit, date=sample_data.date_visit2 + datetime.timedelta(minutes=1), # to not be ignored type=ov2.type, status="ongoing", snapshot=None, ) ovs4 = OriginVisitStatus( 
origin=ov2.origin, visit=ov2.visit, date=date_now, type=ov2.type, status="full", snapshot=snapshot.id, metadata={"something": "wicked"}, ) swh_storage.origin_visit_status_add([ovs1, ovs2, ovs3, ovs4]) # unknown origin so no result actual_origin_visit = swh_storage.origin_visit_status_get_latest( "unknown-origin", ov1.visit ) assert actual_origin_visit is None # unknown visit so no result actual_origin_visit = swh_storage.origin_visit_status_get_latest( ov1.origin, ov1.visit + 10 ) assert actual_origin_visit is None # Two visits, both with no snapshot, take the most recent actual_origin_visit2 = swh_storage.origin_visit_status_get_latest( origin1.url, ov1.visit ) assert isinstance(actual_origin_visit2, OriginVisitStatus) assert actual_origin_visit2 == ovs2 assert ovs2.origin == origin1.url assert ovs2.visit == ov1.visit actual_origin_visit = swh_storage.origin_visit_status_get_latest( origin1.url, ov1.visit, require_snapshot=True ) # there is no visit with snapshot yet for that visit assert actual_origin_visit is None actual_origin_visit2 = swh_storage.origin_visit_status_get_latest( origin1.url, ov1.visit, allowed_statuses=["partial", "ongoing"] ) # visit status with partial status visit elected assert actual_origin_visit2 == ovs2 assert actual_origin_visit2.status == "ongoing" actual_origin_visit4 = swh_storage.origin_visit_status_get_latest( origin1.url, ov2.visit, require_snapshot=True ) assert actual_origin_visit4 == ovs4 assert actual_origin_visit4.snapshot == snapshot.id actual_origin_visit = swh_storage.origin_visit_status_get_latest( origin1.url, ov2.visit, require_snapshot=True, allowed_statuses=["ongoing"] ) # nothing matches so nothing assert actual_origin_visit is None # there is no visit with status full actual_origin_visit3 = swh_storage.origin_visit_status_get_latest( origin1.url, ov2.visit, allowed_statuses=["ongoing"] ) assert actual_origin_visit3 == ovs3 def test_person_fullname_unicity(self, swh_storage, sample_data): revision, rev2 = sample_data.revisions[0:2] # create a revision with same committer fullname but wo name and email revision2 = attr.evolve( rev2, committer=Person( fullname=revision.committer.fullname, name=None, email=None ), ) swh_storage.revision_add([revision, revision2]) # when getting added revisions revisions = swh_storage.revision_get([revision.id, revision2.id]) # then check committers are the same assert revisions[0].committer == revisions[1].committer def test_snapshot_add_get_empty(self, swh_storage, sample_data): empty_snapshot = sample_data.snapshots[1] empty_snapshot_dict = empty_snapshot.to_dict() origin = sample_data.origin swh_storage.origin_add([origin]) ov1 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ) ] )[0] actual_result = swh_storage.snapshot_add([empty_snapshot]) assert actual_result == {"snapshot:add": 1} date_now = now() swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=ov1.origin, visit=ov1.visit, date=date_now, type=ov1.type, status="full", snapshot=empty_snapshot.id, ) ] ) by_id = swh_storage.snapshot_get(empty_snapshot.id) assert by_id == {**empty_snapshot_dict, "next_branch": None} ovs1 = OriginVisitStatus.from_dict( { "origin": ov1.origin, "date": sample_data.date_visit1, "type": ov1.type, "visit": ov1.visit, "status": "created", "snapshot": None, "metadata": None, } ) ovs2 = OriginVisitStatus.from_dict( { "origin": ov1.origin, "date": date_now, "type": ov1.type, "visit": ov1.visit, "status": "full", "metadata": None, "snapshot": 
empty_snapshot.id, } ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("origin", origin), ("origin_visit", ov1), ("origin_visit_status", ovs1,), ("snapshot", empty_snapshot), ("origin_visit_status", ovs2,), ] for obj in expected_objects: assert obj in actual_objects def test_snapshot_add_get_complete(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] complete_snapshot_dict = complete_snapshot.to_dict() origin = sample_data.origin swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] actual_result = swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=complete_snapshot.id, ) ] ) assert actual_result == {"snapshot:add": 1} by_id = swh_storage.snapshot_get(complete_snapshot.id) assert by_id == {**complete_snapshot_dict, "next_branch": None} def test_snapshot_add_many(self, swh_storage, sample_data): snapshot, _, complete_snapshot = sample_data.snapshots[:3] actual_result = swh_storage.snapshot_add([snapshot, complete_snapshot]) assert actual_result == {"snapshot:add": 2} assert swh_storage.snapshot_get(complete_snapshot.id) == { **complete_snapshot.to_dict(), "next_branch": None, } assert swh_storage.snapshot_get(snapshot.id) == { **snapshot.to_dict(), "next_branch": None, } swh_storage.refresh_stat_counters() assert swh_storage.stat_counters()["snapshot"] == 2 def test_snapshot_add_many_incremental(self, swh_storage, sample_data): snapshot, _, complete_snapshot = sample_data.snapshots[:3] actual_result = swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} actual_result2 = swh_storage.snapshot_add([snapshot, complete_snapshot]) assert actual_result2 == {"snapshot:add": 1} assert swh_storage.snapshot_get(complete_snapshot.id) == { **complete_snapshot.to_dict(), "next_branch": None, } assert swh_storage.snapshot_get(snapshot.id) == { **snapshot.to_dict(), "next_branch": None, } def test_snapshot_add_twice(self, swh_storage, sample_data): snapshot, empty_snapshot = sample_data.snapshots[:2] actual_result = swh_storage.snapshot_add([empty_snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("snapshot", empty_snapshot) ] actual_result = swh_storage.snapshot_add([snapshot]) assert actual_result == {"snapshot:add": 1} assert list(swh_storage.journal_writer.journal.objects) == [ ("snapshot", empty_snapshot), ("snapshot", snapshot), ] def test_snapshot_add_count_branches(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] actual_result = swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} snp_size = swh_storage.snapshot_count_branches(complete_snapshot.id) expected_snp_size = { "alias": 1, "content": 1, "directory": 2, "release": 1, "revision": 1, "snapshot": 1, None: 1, } assert snp_size == expected_snp_size def test_snapshot_add_count_branches_with_filtering(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] actual_result = swh_storage.snapshot_add([complete_snapshot]) assert actual_result == {"snapshot:add": 1} snp_size = swh_storage.snapshot_count_branches( complete_snapshot.id, branch_name_exclude_prefix=b"release" ) expected_snp_size = { "alias": 1, "content": 1, 
"directory": 2, "revision": 1, "snapshot": 1, None: 1, } assert snp_size == expected_snp_size def test_snapshot_add_count_branches_with_filtering_edge_cases( self, swh_storage, sample_data ): snapshot = Snapshot( branches={ b"\xaa\xff": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xaa\xff\x00": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xff\xff": SnapshotBranch( target=sample_data.release.id, target_type=TargetType.RELEASE, ), b"\xff\xff\x00": SnapshotBranch( target=sample_data.release.id, target_type=TargetType.RELEASE, ), b"dangling": None, }, ) swh_storage.snapshot_add([snapshot]) assert swh_storage.snapshot_count_branches( snapshot.id, branch_name_exclude_prefix=b"\xaa\xff" ) == {None: 1, "release": 2} assert swh_storage.snapshot_count_branches( snapshot.id, branch_name_exclude_prefix=b"\xff\xff" ) == {None: 1, "revision": 2} def test_snapshot_add_get_paginated(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] swh_storage.snapshot_add([complete_snapshot]) snp_id = complete_snapshot.id branches = complete_snapshot.branches branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches(snp_id, branches_from=b"release") rel_idx = branch_names.index(b"release") expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in branch_names[rel_idx:]}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches(snp_id, branches_count=1) expected_snapshot = { "id": snp_id, "branches": {branch_names[0]: branches[branch_names[0]],}, "next_branch": b"content", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, branches_from=b"directory", branches_count=3 ) dir_idx = branch_names.index(b"directory") expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in branch_names[dir_idx : dir_idx + 3] }, "next_branch": branch_names[dir_idx + 3], } assert snapshot == expected_snapshot def test_snapshot_add_get_filtered(self, swh_storage, sample_data): origin = sample_data.origin complete_snapshot = sample_data.snapshots[2] swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=complete_snapshot.id, ) ] ) snp_id = complete_snapshot.id branches = complete_snapshot.branches snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["release", "revision"] ) expected_snapshot = { "id": snp_id, "branches": { name: tgt for name, tgt in branches.items() if tgt and tgt.target_type in [TargetType.RELEASE, TargetType.REVISION] }, "next_branch": None, } assert snapshot == expected_snapshot snapshot = swh_storage.snapshot_get_branches(snp_id, target_types=["alias"]) expected_snapshot = { "id": snp_id, "branches": { name: tgt for name, tgt in branches.items() if tgt and tgt.target_type == TargetType.ALIAS }, "next_branch": None, } assert snapshot == expected_snapshot def test_snapshot_add_get_filtered_and_paginated(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] swh_storage.snapshot_add([complete_snapshot]) snp_id = 
complete_snapshot.id branches = complete_snapshot.branches branch_names = list(sorted(branches)) # Test branch_from snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2" ) expected_snapshot = { "id": snp_id, "branches": {name: branches[name] for name in (b"directory2", b"release")}, "next_branch": None, } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=1 ) expected_snapshot = { "id": snp_id, "branches": {b"directory": branches[b"directory"]}, "next_branch": b"directory2", } assert snapshot == expected_snapshot # Test branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_count=2 ) expected_snapshot = { "id": snp_id, "branches": { name: branches[name] for name in (b"directory", b"directory2") }, "next_branch": b"release", } assert snapshot == expected_snapshot # test branch_from + branches_count snapshot = swh_storage.snapshot_get_branches( snp_id, target_types=["directory", "release"], branches_from=b"directory2", branches_count=1, ) dir_idx = branch_names.index(b"directory2") expected_snapshot = { "id": snp_id, "branches": {branch_names[dir_idx]: branches[branch_names[dir_idx]],}, "next_branch": b"release", } assert snapshot == expected_snapshot def test_snapshot_add_get_branch_by_type(self, swh_storage, sample_data): complete_snapshot = sample_data.snapshots[2] snapshot = complete_snapshot.to_dict() alias1 = b"alias1" alias2 = b"alias2" target1 = random.choice(list(snapshot["branches"].keys())) target2 = random.choice(list(snapshot["branches"].keys())) snapshot["branches"][alias2] = { "target": target2, "target_type": "alias", } snapshot["branches"][alias1] = { "target": target1, "target_type": "alias", } new_snapshot = Snapshot.from_dict(snapshot) swh_storage.snapshot_add([new_snapshot]) branches = swh_storage.snapshot_get_branches( new_snapshot.id, target_types=["alias"], branches_from=alias1, branches_count=1, )["branches"] assert len(branches) == 1 assert alias1 in branches def test_snapshot_add_get_by_branches_name_pattern(self, swh_storage, sample_data): snapshot = Snapshot( branches={ b"refs/heads/master": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"refs/heads/incoming": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"refs/pull/1": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"refs/pull/2": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"dangling": None, b"\xaa\xff": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xaa\xff\x00": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xff\xff": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), b"\xff\xff\x00": SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ), }, ) swh_storage.snapshot_add([snapshot]) for include_pattern, exclude_prefix, nb_results in ( (b"pull", None, 2), (b"incoming", None, 1), (b"dangling", None, 1), (None, b"refs/heads/", 7), (b"refs", b"refs/heads/master", 3), (b"refs", b"refs/heads/master", 3), (None, b"\xaa\xff", 7), (None, b"\xff\xff", 7), ): branches = swh_storage.snapshot_get_branches( snapshot.id, branch_name_include_substring=include_pattern, 
branch_name_exclude_prefix=exclude_prefix, )["branches"] expected_branches = [ branch_name for branch_name in snapshot.branches if (include_pattern is None or include_pattern in branch_name) and ( exclude_prefix is None or not branch_name.startswith(exclude_prefix) ) ] assert sorted(branches) == sorted(expected_branches) assert len(branches) == nb_results def test_snapshot_add_get_by_branches_name_pattern_filtered_paginated( self, swh_storage, sample_data ): pattern = b"foo" nb_branches_by_target_type = 10 branches = {} for i in range(nb_branches_by_target_type): branches[f"branch/directory/bar{i}".encode()] = SnapshotBranch( target=sample_data.directory.id, target_type=TargetType.DIRECTORY, ) branches[f"branch/revision/bar{i}".encode()] = SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ) branches[f"branch/directory/{pattern}{i}".encode()] = SnapshotBranch( target=sample_data.directory.id, target_type=TargetType.DIRECTORY, ) branches[f"branch/revision/{pattern}{i}".encode()] = SnapshotBranch( target=sample_data.revision.id, target_type=TargetType.REVISION, ) snapshot = Snapshot(branches=branches) swh_storage.snapshot_add([snapshot]) branches_count = nb_branches_by_target_type // 2 for target_type in ( TargetType.DIRECTORY, TargetType.REVISION, ): target_type_str = target_type.value partial_branches = swh_storage.snapshot_get_branches( snapshot.id, branch_name_include_substring=pattern, target_types=[target_type_str], branches_count=branches_count, ) branches = partial_branches["branches"] expected_branches = [ branch_name for branch_name, branch_data in snapshot.branches.items() if pattern in branch_name and branch_data.target_type == target_type ][:branches_count] assert sorted(branches) == sorted(expected_branches) assert ( partial_branches["next_branch"] == f"branch/{target_type_str}/{pattern}{branches_count}".encode() ) partial_branches = swh_storage.snapshot_get_branches( snapshot.id, branch_name_include_substring=pattern, target_types=[target_type_str], branches_from=partial_branches["next_branch"], ) branches = partial_branches["branches"] expected_branches = [ branch_name for branch_name, branch_data in snapshot.branches.items() if pattern in branch_name and branch_data.target_type == target_type ][branches_count:] assert sorted(branches) == sorted(expected_branches) assert partial_branches["next_branch"] is None def test_snapshot_add_get(self, swh_storage, sample_data): snapshot = sample_data.snapshot origin = sample_data.origin swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit1, type=sample_data.type_visit1, ) ov1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=ov1.visit, date=now(), status="ongoing", snapshot=snapshot.id, ) ] ) expected_snapshot = {**snapshot.to_dict(), "next_branch": None} by_id = swh_storage.snapshot_get(snapshot.id) assert by_id == expected_snapshot actual_visit = swh_storage.origin_visit_get_by(origin.url, ov1.visit) assert actual_visit == ov1 visit_status = swh_storage.origin_visit_status_get_latest( origin.url, ov1.visit, require_snapshot=True ) assert visit_status.snapshot == snapshot.id def test_snapshot_get_random(self, swh_storage, sample_data): snapshot, empty_snapshot, complete_snapshot = sample_data.snapshots[:3] swh_storage.snapshot_add([snapshot, empty_snapshot, complete_snapshot]) assert swh_storage.snapshot_get_random() in { snapshot.id, 
empty_snapshot.id, complete_snapshot.id, } def test_snapshot_missing(self, swh_storage, sample_data): snapshot, missing_snapshot = sample_data.snapshots[:2] snapshots = [snapshot.id, missing_snapshot.id] swh_storage.snapshot_add([snapshot]) missing_snapshots = swh_storage.snapshot_missing(snapshots) assert list(missing_snapshots) == [missing_snapshot.id] def test_stat_counters(self, swh_storage, sample_data): origin = sample_data.origin snapshot = sample_data.snapshot revision = sample_data.revision release = sample_data.release directory = sample_data.directory content = sample_data.content expected_keys = ["content", "directory", "origin", "revision"] # Initially, all counters are 0 swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: assert counters[key] == 0 # Add a content. Only the content counter should increase. swh_storage.content_add([content]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert set(expected_keys) <= set(counters) for key in expected_keys: if key != "content": assert counters[key] == 0 assert counters["content"] == 1 # Add other objects. Check their counter increased as well. swh_storage.origin_add([origin]) visit = OriginVisit( origin=origin.url, date=sample_data.date_visit2, type=sample_data.type_visit2, ) origin_visit1 = swh_storage.origin_visit_add([visit])[0] swh_storage.snapshot_add([snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, visit=origin_visit1.visit, date=now(), status="ongoing", snapshot=snapshot.id, ) ] ) swh_storage.directory_add([directory]) swh_storage.revision_add([revision]) swh_storage.release_add([release]) swh_storage.refresh_stat_counters() counters = swh_storage.stat_counters() assert counters["content"] == 1 assert counters["directory"] == 1 assert counters["snapshot"] == 1 assert counters["origin"] == 1 assert counters["origin_visit"] == 1 assert counters["revision"] == 1 assert counters["release"] == 1 assert counters["snapshot"] == 1 if "person" in counters: assert counters["person"] == 3 def test_content_find_ctime(self, swh_storage, sample_data): origin_content = sample_data.content ctime = round_to_milliseconds(now()) content = attr.evolve(origin_content, data=None, ctime=ctime) swh_storage.content_add_metadata([content]) actually_present = swh_storage.content_find({"sha1": content.sha1}) assert actually_present[0] == content assert actually_present[0].ctime is not None assert actually_present[0].ctime.tzinfo is not None def test_content_find_with_present_content(self, swh_storage, sample_data): content = sample_data.content expected_content = attr.evolve(content, data=None) # 1. with something to find swh_storage.content_add([content]) actually_present = swh_storage.content_find({"sha1": content.sha1}) assert 1 == len(actually_present) assert actually_present[0] == expected_content # 2. with something to find actually_present = swh_storage.content_find({"sha1_git": content.sha1_git}) assert 1 == len(actually_present) assert actually_present[0] == expected_content # 3. with something to find actually_present = swh_storage.content_find({"sha256": content.sha256}) assert 1 == len(actually_present) assert actually_present[0] == expected_content # 4. 
with something to find actually_present = swh_storage.content_find(content.hashes()) assert 1 == len(actually_present) assert actually_present[0] == expected_content def test_content_find_with_non_present_content(self, swh_storage, sample_data): missing_content = sample_data.skipped_content # 1. with something that does not exist actually_present = swh_storage.content_find({"sha1": missing_content.sha1}) assert actually_present == [] # 2. with something that does not exist actually_present = swh_storage.content_find( {"sha1_git": missing_content.sha1_git} ) assert actually_present == [] # 3. with something that does not exist actually_present = swh_storage.content_find({"sha256": missing_content.sha256}) assert actually_present == [] def test_content_find_with_duplicate_input(self, swh_storage, sample_data): content = sample_data.content # Create fake data with colliding sha256 and blake2s256 sha1_array = bytearray(content.sha1) sha1_array[0] += 1 sha1git_array = bytearray(content.sha1_git) sha1git_array[0] += 1 duplicated_content = attr.evolve( content, sha1=bytes(sha1_array), sha1_git=bytes(sha1git_array) ) # Inject the data swh_storage.content_add([content, duplicated_content]) actual_result = swh_storage.content_find( { "blake2s256": duplicated_content.blake2s256, "sha256": duplicated_content.sha256, } ) expected_content = attr.evolve(content, data=None) expected_duplicated_content = attr.evolve(duplicated_content, data=None) for result in actual_result: assert result in [expected_content, expected_duplicated_content] def test_content_find_with_duplicate_sha256(self, swh_storage, sample_data): content = sample_data.content hashes = {} # Create fake data with colliding sha256 for hashalgo in ("sha1", "sha1_git", "blake2s256"): value = bytearray(getattr(content, hashalgo)) value[0] += 1 hashes[hashalgo] = bytes(value) duplicated_content = attr.evolve( content, sha1=hashes["sha1"], sha1_git=hashes["sha1_git"], blake2s256=hashes["blake2s256"], ) swh_storage.content_add([content, duplicated_content]) actual_result = swh_storage.content_find({"sha256": duplicated_content.sha256}) assert len(actual_result) == 2 expected_content = attr.evolve(content, data=None) expected_duplicated_content = attr.evolve(duplicated_content, data=None) for result in actual_result: assert result in [expected_content, expected_duplicated_content] # Find with both sha256 and blake2s256 actual_result = swh_storage.content_find( { "sha256": duplicated_content.sha256, "blake2s256": duplicated_content.blake2s256, } ) assert len(actual_result) == 1 assert actual_result == [expected_duplicated_content] def test_content_find_with_duplicate_blake2s256(self, swh_storage, sample_data): content = sample_data.content # Create fake data with colliding sha256 and blake2s256 sha1_array = bytearray(content.sha1) sha1_array[0] += 1 sha1git_array = bytearray(content.sha1_git) sha1git_array[0] += 1 sha256_array = bytearray(content.sha256) sha256_array[0] += 1 duplicated_content = attr.evolve( content, sha1=bytes(sha1_array), sha1_git=bytes(sha1git_array), sha256=bytes(sha256_array), ) swh_storage.content_add([content, duplicated_content]) actual_result = swh_storage.content_find( {"blake2s256": duplicated_content.blake2s256} ) expected_content = attr.evolve(content, data=None) expected_duplicated_content = attr.evolve(duplicated_content, data=None) for result in actual_result: assert result in [expected_content, expected_duplicated_content] # Find with both sha256 and blake2s256 actual_result = swh_storage.content_find( { "sha256": 
duplicated_content.sha256, "blake2s256": duplicated_content.blake2s256, } ) assert actual_result == [expected_duplicated_content] def test_content_find_bad_input(self, swh_storage): # 1. with no hash to lookup with pytest.raises(StorageArgumentException): swh_storage.content_find({}) # need at least one hash # 2. with bad hash with pytest.raises(StorageArgumentException): swh_storage.content_find({"unknown-sha1": "something"}) # not the right key def test_object_find_by_sha1_git(self, swh_storage, sample_data): content = sample_data.content directory = sample_data.directory revision = sample_data.revision release = sample_data.release sha1_gits = [b"00000000000000000000"] expected = { b"00000000000000000000": [], } swh_storage.content_add([content]) sha1_gits.append(content.sha1_git) expected[content.sha1_git] = [ {"sha1_git": content.sha1_git, "type": "content",} ] swh_storage.directory_add([directory]) sha1_gits.append(directory.id) expected[directory.id] = [{"sha1_git": directory.id, "type": "directory",}] swh_storage.revision_add([revision]) sha1_gits.append(revision.id) expected[revision.id] = [{"sha1_git": revision.id, "type": "revision",}] swh_storage.release_add([release]) sha1_gits.append(release.id) expected[release.id] = [{"sha1_git": release.id, "type": "release",}] ret = swh_storage.object_find_by_sha1_git(sha1_gits) assert expected == ret def test_metadata_fetcher_add_get(self, swh_storage, sample_data): fetcher = sample_data.metadata_fetcher actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert actual_fetcher is None # does not exist swh_storage.metadata_fetcher_add([fetcher]) res = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert res == fetcher actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_fetcher", fetcher), ] for obj in expected_objects: assert obj in actual_objects def test_metadata_fetcher_add_zero(self, swh_storage, sample_data): fetcher = sample_data.metadata_fetcher actual_fetcher = swh_storage.metadata_fetcher_get(fetcher.name, fetcher.version) assert actual_fetcher is None # does not exist swh_storage.metadata_fetcher_add([]) def test_metadata_authority_add_get(self, swh_storage, sample_data): authority = sample_data.metadata_authority actual_authority = swh_storage.metadata_authority_get( authority.type, authority.url ) assert actual_authority is None # does not exist swh_storage.metadata_authority_add([authority]) res = swh_storage.metadata_authority_get(authority.type, authority.url) assert res == authority actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_authority", authority), ] for obj in expected_objects: assert obj in actual_objects def test_metadata_authority_add_zero(self, swh_storage, sample_data): authority = sample_data.metadata_authority actual_authority = swh_storage.metadata_authority_get( authority.type, authority.url ) assert actual_authority is None # does not exist swh_storage.metadata_authority_add([]) def test_content_metadata_add(self, swh_storage, sample_data): content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add(content_metadata) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority ) assert result.next_page_token 
is None assert list(sorted(result.results, key=lambda x: x.discovery_date,)) == list( content_metadata ) actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_authority", authority), ("metadata_fetcher", fetcher), ] + [("raw_extrinsic_metadata", item) for item in content_metadata] for obj in expected_objects: assert obj in actual_objects def test_content_metadata_add_duplicate(self, swh_storage, sample_data): """Duplicates should be silently ignored.""" content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([content_metadata, content_metadata2]) swh_storage.raw_extrinsic_metadata_add([content_metadata2, content_metadata]) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority ) assert result.next_page_token is None expected_results = (content_metadata, content_metadata2) assert ( tuple(sorted(result.results, key=lambda x: x.discovery_date,)) == expected_results ) def test_content_metadata_get(self, swh_storage, sample_data): content, content2 = sample_data.contents[:2] fetcher, fetcher2 = sample_data.fetchers[:2] authority, authority2 = sample_data.authorities[:2] ( content1_metadata1, content1_metadata2, content1_metadata3, ) = sample_data.content_metadata[:3] content2_metadata = RawExtrinsicMetadata.from_dict( { **remove_keys(content1_metadata2.to_dict(), ("id",)), # recompute id "target": str(content2.swhid()), } ) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.raw_extrinsic_metadata_add( [ content1_metadata1, content1_metadata2, content1_metadata3, content2_metadata, ] ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority ) assert result.next_page_token is None assert [content1_metadata1, content1_metadata2] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority2 ) assert result.next_page_token is None assert [content1_metadata3] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( content2.swhid().to_extended(), authority ) assert result.next_page_token is None assert [content2_metadata] == list(result.results,) def test_content_metadata_get_after(self, swh_storage, sample_data): content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([content_metadata, content_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, after=content_metadata.discovery_date - timedelta(seconds=1), ) assert result.next_page_token is None assert [content_metadata, content_metadata2] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, after=content_metadata.discovery_date, ) assert result.next_page_token is None assert result.results == [content_metadata2] result = 
swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, after=content_metadata2.discovery_date, ) assert result.next_page_token is None assert result.results == [] def test_content_metadata_get_paginate(self, swh_storage, sample_data): content = sample_data.content fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([content_metadata, content_metadata2]) swh_storage.raw_extrinsic_metadata_get(content.swhid().to_extended(), authority) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [content_metadata] result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results == [content_metadata2] def test_content_metadata_get_paginate_same_date(self, swh_storage, sample_data): content = sample_data.content fetcher1, fetcher2 = sample_data.fetchers[:2] authority = sample_data.metadata_authority content_metadata, content_metadata2 = sample_data.content_metadata[:2] swh_storage.metadata_fetcher_add([fetcher1, fetcher2]) swh_storage.metadata_authority_add([authority]) new_content_metadata2 = RawExtrinsicMetadata.from_dict( { **remove_keys(content_metadata2.to_dict(), ("id",)), # recompute id "discovery_date": content_metadata2.discovery_date, "fetcher": attr.evolve(fetcher2, metadata=None).to_dict(), } ) swh_storage.raw_extrinsic_metadata_add( [content_metadata, new_content_metadata2] ) result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [content_metadata] result = swh_storage.raw_extrinsic_metadata_get( content.swhid().to_extended(), authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results[0].to_dict() == new_content_metadata2.to_dict() assert result.results == [new_content_metadata2] def test_content_metadata_get_by_ids(self, swh_storage, sample_data): content, content2 = sample_data.contents[:2] fetcher, fetcher2 = sample_data.fetchers[:2] authority, authority2 = sample_data.authorities[:2] ( content1_metadata1, content1_metadata2, content1_metadata3, ) = sample_data.content_metadata[:3] content2_metadata = RawExtrinsicMetadata.from_dict( { **remove_keys(content1_metadata2.to_dict(), ("id",)), # recompute id "target": str(content2.swhid()), } ) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.raw_extrinsic_metadata_add( [ content1_metadata1, content1_metadata2, content1_metadata3, content2_metadata, ] ) assert set( swh_storage.raw_extrinsic_metadata_get_by_ids( [content1_metadata1.id, b"\x00" * 20, content2_metadata.id] ) ) == {content1_metadata1, content2_metadata} def test_content_metadata_get_authorities(self, swh_storage, sample_data): content1, content2, content3 = sample_data.contents[:3] fetcher, fetcher2 = sample_data.fetchers[:2] authority, authority2 = sample_data.authorities[:2] ( content1_metadata1, content1_metadata2, content1_metadata3, ) = sample_data.content_metadata[:3] content2_metadata = RawExtrinsicMetadata.from_dict( { 
**remove_keys(content1_metadata2.to_dict(), ("id",)), # recompute id "target": str(content2.swhid()), } ) content1_metadata2 = RawExtrinsicMetadata.from_dict( { **remove_keys(content1_metadata2.to_dict(), ("id",)), # recompute id "authority": authority2.to_dict(), } ) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.raw_extrinsic_metadata_add( [ content1_metadata1, content1_metadata2, content1_metadata3, content2_metadata, ] ) assert swh_storage.raw_extrinsic_metadata_get_authorities(content1.swhid()) in ( [authority, authority2], [authority2, authority], ) assert swh_storage.raw_extrinsic_metadata_get_authorities(content2.swhid()) == [ authority ] assert ( swh_storage.raw_extrinsic_metadata_get_authorities(content3.swhid()) == [] ) def test_origin_metadata_add(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority ) assert result.next_page_token is None assert list(sorted(result.results, key=lambda x: x.discovery_date)) == [ origin_metadata, origin_metadata2, ] actual_objects = list(swh_storage.journal_writer.journal.objects) expected_objects = [ ("metadata_authority", authority), ("metadata_fetcher", fetcher), ("raw_extrinsic_metadata", origin_metadata), ("raw_extrinsic_metadata", origin_metadata2), ] for obj in expected_objects: assert obj in actual_objects def test_origin_metadata_add_duplicate(self, swh_storage, sample_data): """Duplicates should be silently updated.""" origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) swh_storage.raw_extrinsic_metadata_add([origin_metadata2, origin_metadata]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority ) assert result.next_page_token is None # which of the two behavior happens is backend-specific. 
expected_results = (origin_metadata, origin_metadata2) assert ( tuple(sorted(result.results, key=lambda x: x.discovery_date,)) == expected_results ) def test_origin_metadata_get(self, swh_storage, sample_data): origin, origin2 = sample_data.origins[:2] fetcher, fetcher2 = sample_data.fetchers[:2] authority, authority2 = sample_data.authorities[:2] ( origin1_metadata1, origin1_metadata2, origin1_metadata3, ) = sample_data.origin_metadata[:3] assert swh_storage.origin_add([origin, origin2]) == {"origin:add": 2} origin2_metadata = RawExtrinsicMetadata.from_dict( { **remove_keys(origin1_metadata2.to_dict(), ("id",)), # recompute id "target": str(Origin(origin2.url).swhid()), } ) swh_storage.metadata_authority_add([authority, authority2]) swh_storage.metadata_fetcher_add([fetcher, fetcher2]) swh_storage.raw_extrinsic_metadata_add( [origin1_metadata1, origin1_metadata2, origin1_metadata3, origin2_metadata] ) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority ) assert result.next_page_token is None assert [origin1_metadata1, origin1_metadata2] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority2 ) assert result.next_page_token is None assert [origin1_metadata3] == list( sorted(result.results, key=lambda x: x.discovery_date,) ) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin2.url).swhid(), authority ) assert result.next_page_token is None assert [origin2_metadata] == list(result.results,) def test_origin_metadata_get_after(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, after=origin_metadata.discovery_date - timedelta(seconds=1), ) assert result.next_page_token is None assert list(sorted(result.results, key=lambda x: x.discovery_date,)) == [ origin_metadata, origin_metadata2, ] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, after=origin_metadata.discovery_date, ) assert result.next_page_token is None assert result.results == [origin_metadata2] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, after=origin_metadata2.discovery_date, ) assert result.next_page_token is None assert result.results == [] def test_origin_metadata_get_paginate(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) swh_storage.metadata_authority_add([authority]) swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) swh_storage.raw_extrinsic_metadata_get(Origin(origin.url).swhid(), authority) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [origin_metadata] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), 
authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results == [origin_metadata2] def test_origin_metadata_get_paginate_same_date(self, swh_storage, sample_data): origin = sample_data.origin fetcher1, fetcher2 = sample_data.fetchers[:2] authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher1, fetcher2]) swh_storage.metadata_authority_add([authority]) new_origin_metadata2 = RawExtrinsicMetadata.from_dict( { **remove_keys(origin_metadata2.to_dict(), ("id",)), # recompute id "discovery_date": origin_metadata2.discovery_date, "fetcher": attr.evolve(fetcher2, metadata=None).to_dict(), } ) swh_storage.raw_extrinsic_metadata_add([origin_metadata, new_origin_metadata2]) result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, limit=1 ) assert result.next_page_token is not None assert result.results == [origin_metadata] result = swh_storage.raw_extrinsic_metadata_get( Origin(origin.url).swhid(), authority, limit=1, page_token=result.next_page_token, ) assert result.next_page_token is None assert result.results == [new_origin_metadata2] def test_origin_metadata_add_missing_authority(self, swh_storage, sample_data): origin = sample_data.origin fetcher = sample_data.metadata_fetcher origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_fetcher_add([fetcher]) with pytest.raises(StorageArgumentException, match="authority"): swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) def test_origin_metadata_add_missing_fetcher(self, swh_storage, sample_data): origin = sample_data.origin authority = sample_data.metadata_authority origin_metadata, origin_metadata2 = sample_data.origin_metadata[:2] assert swh_storage.origin_add([origin]) == {"origin:add": 1} swh_storage.metadata_authority_add([authority]) with pytest.raises(StorageArgumentException, match="fetcher"): swh_storage.raw_extrinsic_metadata_add([origin_metadata, origin_metadata2]) class TestStorageGeneratedData: def test_generate_content_get_data(self, swh_storage, swh_contents): contents_with_data = [c for c in swh_contents if c.status != "absent"] # retrieve contents for content in contents_with_data: actual_content_data = swh_storage.content_get_data(content.sha1) assert actual_content_data is not None assert actual_content_data == content.data def test_generate_content_get(self, swh_storage, swh_contents): expected_contents = [ attr.evolve(c, data=None) for c in swh_contents if c.status != "absent" ] actual_contents = swh_storage.content_get([c.sha1 for c in expected_contents]) assert len(actual_contents) == len(expected_contents) assert actual_contents == expected_contents @pytest.mark.parametrize("limit", [1, 7, 10, 100, 1000]) def test_origin_list(self, swh_storage, swh_origins, limit): returned_origins = [] page_token = None i = 0 while True: actual_page = swh_storage.origin_list(page_token=page_token, limit=limit) assert len(actual_page.results) <= limit returned_origins.extend(actual_page.results) i += 1 page_token = actual_page.next_page_token if page_token is None: assert i * limit >= len(swh_origins) break else: assert len(actual_page.results) == limit assert sorted(returned_origins) == sorted(swh_origins) def test_origin_count(self, swh_storage, sample_data): 
        swh_storage.origin_add(sample_data.origins)

        assert swh_storage.origin_count("github") == 3
        assert swh_storage.origin_count("gitlab") == 2
        assert swh_storage.origin_count(".*user.*", regexp=True) == 5
        assert swh_storage.origin_count(".*user.*", regexp=False) == 0
        assert swh_storage.origin_count(".*user1.*", regexp=True) == 2
        assert swh_storage.origin_count(".*user1.*", regexp=False) == 0

    def test_origin_count_with_visit_no_visits(self, swh_storage, sample_data):
        swh_storage.origin_add(sample_data.origins)

        # none of them have visits, so with_visit=True => 0
        assert swh_storage.origin_count("github", with_visit=True) == 0
        assert swh_storage.origin_count("gitlab", with_visit=True) == 0
        assert swh_storage.origin_count(".*user.*", regexp=True, with_visit=True) == 0
        assert swh_storage.origin_count(".*user.*", regexp=False, with_visit=True) == 0
        assert swh_storage.origin_count(".*user1.*", regexp=True, with_visit=True) == 0
        assert swh_storage.origin_count(".*user1.*", regexp=False, with_visit=True) == 0

    def test_origin_count_with_visit_with_visits_no_snapshot(
        self, swh_storage, sample_data
    ):
        swh_storage.origin_add(sample_data.origins)

        origin_url = "https://github.com/user1/repo1"
        visit = OriginVisit(origin=origin_url, date=now(), type="git")
        swh_storage.origin_visit_add([visit])

        assert swh_storage.origin_count("github", with_visit=False) == 3
        # it has a visit, but no snapshot, so with_visit=True => 0
        assert swh_storage.origin_count("github", with_visit=True) == 0

        assert swh_storage.origin_count("gitlab", with_visit=False) == 2
        # these gitlab origins have no visit
        assert swh_storage.origin_count("gitlab", with_visit=True) == 0

        assert (
            swh_storage.origin_count("github.*user1", regexp=True, with_visit=False)
            == 1
        )
        assert (
            swh_storage.origin_count("github.*user1", regexp=True, with_visit=True)
            == 0
        )
        assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 0

    def test_origin_count_with_visit_with_visits_and_snapshot(
        self, swh_storage, sample_data
    ):
        snapshot = sample_data.snapshot
        swh_storage.origin_add(sample_data.origins)

        swh_storage.snapshot_add([snapshot])
        origin_url = "https://github.com/user1/repo1"
        visit = OriginVisit(origin=origin_url, date=now(), type="git")
        visit = swh_storage.origin_visit_add([visit])[0]
        swh_storage.origin_visit_status_add(
            [
                OriginVisitStatus(
                    origin=origin_url,
                    visit=visit.visit,
                    date=now(),
                    status="ongoing",
                    snapshot=snapshot.id,
                )
            ]
        )

        assert swh_storage.origin_count("github", with_visit=False) == 3
        # github/user1 has a visit and a snapshot, so with_visit=True => 1
        assert swh_storage.origin_count("github", with_visit=True) == 1

        assert (
            swh_storage.origin_count("github.*user1", regexp=True, with_visit=False)
            == 1
        )
        assert (
            swh_storage.origin_count("github.*user1", regexp=True, with_visit=True)
            == 1
        )
        assert swh_storage.origin_count("github", regexp=True, with_visit=True) == 1

    @settings(
        suppress_health_check=[HealthCheck.too_slow] + function_scoped_fixture_check,
    )
    @given(strategies.lists(objects(split_content=True), max_size=2))
    def test_add_arbitrary(self, swh_storage, objects):
        for (obj_type, obj) in objects:
            if obj.object_type == "origin_visit":
                swh_storage.origin_add([Origin(url=obj.origin)])
                visit = OriginVisit(origin=obj.origin, date=obj.date, type=obj.type)
                swh_storage.origin_visit_add([visit])
            elif obj.object_type == "raw_extrinsic_metadata":
                swh_storage.metadata_authority_add([obj.authority])
                swh_storage.metadata_fetcher_add([obj.fetcher])
                swh_storage.raw_extrinsic_metadata_add([obj])
            else:
                method = getattr(swh_storage, obj_type + "_add")
                try:
                    method([obj])
                except HashCollision:
                    pass