diff --git a/PKG-INFO b/PKG-INFO
index 65f9b8e5..389d1fc8 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,216 +1,216 @@
Metadata-Version: 2.1
Name: swh.storage
-Version: 0.6.0
+Version: 0.7.0
Summary: Software Heritage storage manager
Home-page: https://forge.softwareheritage.org/diffusion/DSTO/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/
Description: swh-storage
===========

Abstraction layer over the archive, allowing access to all stored source code
artifacts as well as their metadata.

See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html)
for more details.

## Quick start

### Dependencies

Python tests for this module include tests that cannot be run without a local
PostgreSQL database, so you need the PostgreSQL server executable on your
machine (no need to have a running PostgreSQL server). They also expect a
Cassandra server.

#### Debian-like host

```
$ sudo apt install libpq-dev postgresql-11 cassandra
```

#### Non Debian-like host

The tests expect `/usr/sbin/cassandra` to exist.

Optionally, you can skip the cassandra tests:

```
(swh) :~/swh-storage$ tox -- -m 'not cassandra'
```

### Installation

It is strongly recommended to use a virtualenv. In the following, we assume
you work in a virtualenv named `swh`. See the
[developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup)
for more details on how to set up a working environment.

You can install the package directly from [pypi](https://pypi.org/p/swh.storage):

```
(swh) :~$ pip install swh.storage
[...]
```

Or from sources:

```
(swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git
[...]
(swh) :~$ cd swh-storage
(swh) :~/swh-storage$ pip install .
[...]
```

Then you can check that it is properly installed:

```
(swh) :~$ swh storage --help
Usage: swh storage [OPTIONS] COMMAND [ARGS]...

  Software Heritage Storage tools.

Options:
  -h, --help  Show this message and exit.

Commands:
  rpc-serve  Software Heritage Storage RPC server.
```

## Tests

The best way to run the Python tests for this module is to use
[tox](https://tox.readthedocs.io/).

```
(swh) :~$ pip install tox
```

### tox

From the sources directory, simply use tox:

```
(swh) :~/swh-storage$ tox
[...]
========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ==========
_______________________________ summary ________________________________
flake8: commands succeeded
py3: commands succeeded
congratulations :)
```

## Development

The storage server can be started locally. It requires a configuration file
and a running PostgreSQL database.

### Sample configuration

A typical `storage.yml` configuration file is:

```
storage:
  cls: local
  args:
    db: "dbname=softwareheritage-dev user= password="
    objstorage:
      cls: pathslicing
      args:
        root: /tmp/swh-storage/
        slicing: 0:2/2:4/4:6
```

which means this uses:

- a local storage instance whose db connection targets the local
  `softwareheritage-dev` database,
- a local pathslicing objstorage instance whose:
  - `root` path is `/tmp/swh-storage`,
  - slicing scheme is `0:2/2:4/4:6`.
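For illustration only, here is a minimal sketch in plain Python (no swh-storage
dependency; the `sliced_path` helper is made up for this example, not part of
the package) of how such a slicing scheme maps a content sha1 to its on-disk
path, as described in the next paragraph:

```python
import os


def sliced_path(root: str, hex_sha1: str, slicing: str = "0:2/2:4/4:6") -> str:
    """Build the on-disk path of a content from its hex sha1.

    Each "start:stop" part of the slicing string names one directory level,
    taken from the corresponding slice of the hex hash; the file itself is
    named after the complete hash.
    """
    bounds = [tuple(map(int, part.split(":"))) for part in slicing.split("/")]
    dirs = [hex_sha1[start:stop] for start, stop in bounds]
    return os.path.join(root, *dirs, hex_sha1)


print(sliced_path("/tmp/swh-storage", "00062f8bd330715c4f819373653d97b3cd34394c"))
# -> /tmp/swh-storage/00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c
```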
In other words, the identifier of the content (its sha1) determines where it
is stored on disk: the first directory level is named after the first 2 hex
characters of the hash, the second level after the next 2, the third level
after the next 2, and the file holding the raw content is named after the
complete hash. For example, the content whose sha1 is
00062f8bd330715c4f819373653d97b3cd34394c will be stored at
00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c.

Note that the `root` path should exist on disk before starting the server.

### Starting the storage server

If the python package has been properly installed (e.g. in a virtualenv), you
should be able to use the command:

```
(swh) :~/swh-storage$ swh storage rpc-serve storage.yml
```

This runs a local swh-storage API on port 5002.

```
(swh) :~/swh-storage$ curl http://127.0.0.1:5002
Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

```

### And then what?

In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/),
[loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc.),
you can define a remote storage with this snippet of yaml configuration:

```
storage:
  cls: remote
  args:
    url: http://localhost:5002/
```

You could also directly define a local storage with the following snippet:

```
storage:
  cls: local
  args:
    db: service=swh-dev
    objstorage:
      cls: pathslicing
      args:
        root: /home/storage/swh-storage/
        slicing: 0:2/2:4/4:6
```

Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
Provides-Extra: schemata
Provides-Extra: journal
diff --git a/debian/changelog b/debian/changelog
index 59dbe532..1cf06f77 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,1859 +1,1870 @@
-swh-storage (0.6.0-1~swh1~bpo10+1) buster-swh; urgency=medium
+swh-storage (0.7.0-1~swh1) unstable-swh; urgency=medium
- * Rebuild for buster-swh
-
- -- Software Heritage autobuilder (on jenkins-debian1) Fri, 19 Jun 2020 12:52:54 +0000
+ * New upstream release 0.7.0 - (tagged by Antoine R. Dumont
+ (@ardumont) on 2020-06-22 15:42:25
+ +0200)
+ * Upstream changes: - v0.7.0 - test_origin: Rename
+ appropriately tests - algos: Improve origin visit get latest
+ visit status algorithm - test_snapshot: Do not use
+ origin_visit_add returned result - algos.snapshot: Fix edge case
+ when snapshot is not resolved - Ensure ids are correct in tests'
+ storage_data - Fix tests' storage_data revisions - SQL:
+ replace the hash(url) index by a unique btree(url) on the origin
+ table - Make sure the pagination in swh_snapshot_get_by_id uses
+ the proper indexes
+
+ -- Software Heritage autobuilder (on jenkins-debian1) Mon, 22 Jun 2020 14:09:33 +0000
swh-storage (0.6.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.6.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-19 11:29:42 +0200) * Upstream changes: - v0.6.0 - Move deprecated endpoint snapshot_get_latest from api endpoint to algos - algos.origin: Open origin-get-latest-visit-status function - storage*: Allow origin-visit-get-latest to filter on type - test_origin: Align storage initialization within tests -- Software Heritage autobuilder (on jenkins-debian1) Fri, 19 Jun 2020 12:45:32 +0000
swh-storage (0.5.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.5.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-17 16:03:15 +0200) * Upstream changes: - v0.5.0 - test_storage: Fix flakiness in round to milliseconds test util method - storage*: Add origin- visit-status-get-latest endpoint - Fix/update the backfiller - validate: accept model objects as well as dicts on all add endpoints - cql: Fix blackified strings - storage: Add missing cur parameter - Fix db_to_author() converter to return None is all fields are None -- Software Heritage autobuilder (on jenkins-debian1) Wed, 17 Jun 2020 14:19:37 +0000
swh-storage (0.4.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.4.0 - (tagged by Antoine R.
Dumont (@ardumont) on 2020-06-16 09:50:25 +0200) * Upstream changes: - v0.4.0 - ardumont/master storage*: Drop leftover code - storage*: Drop origin_visit_upsert endpoint - storage*: Remove origin-visit-update endpoint - replay: Replay origin-visit and origin-visit-status - in_memory: Make origin- visit-status-add respect "on conflict ignore" policy - test_storage: Add journal behavior coverage for origin-visit-*add - Start migrating the validate proxy toward using BaseModel objects - storage*: Do not write twice origin-visit-status in journal -- Software Heritage autobuilder (on jenkins-debian1) Tue, 16 Jun 2020 07:58:23 +0000 swh-storage (0.3.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.3.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-12 09:08:23 +0200) * Upstream changes: - v0.3.0 - origin-visit-add storage*: Align origin-visit-add to take iterable of OriginVisit objects -- Software Heritage autobuilder (on jenkins-debian1) Fri, 12 Jun 2020 07:22:03 +0000 swh-storage (0.2.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.2.0 - (tagged by Antoine R. Dumont (@ardumont) on 2020-06-10 11:51:30 +0200) * Upstream changes: - v0.2.0 - origin-visit-upsert: Write visit status objects to the journal - origin-visit-update: Write visit status objects to the journal - origin-visit-add: Write visit status to the journal - Add pagination to origin_metadata_get. - Deduplicate origin-metadata when they have the same authority + discovery_date + fetcher. - Open `origin_visit_status_add` endpoint to add origin visit statuses - Add a replayer test for anonymized journal topics - Small refactoring of the InMemoryStorage to make it more consistent -- Software Heritage autobuilder (on jenkins-debian1) Wed, 10 Jun 2020 10:02:45 +0000 swh-storage (0.1.1-1~swh1) unstable-swh; urgency=medium * New upstream release 0.1.1 - (tagged by Nicolas Dandrimont on 2020-06-04 16:49:22 +0200) * Upstream changes: - Release swh.storage v0.1.1 - Work around tests hanging during Debian build -- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Jun 2020 14:56:54 +0000 swh-storage (0.1.0-2~swh1) unstable-swh; urgency=medium * Update dependencies. -- David Douard Thu, 04 Jun 2020 13:40:52 +0200 swh-storage (0.1.0-1~swh1) unstable-swh; urgency=medium * New upstream release 0.1.0 - (tagged by David Douard on 2020-06-04 12:08:46 +0200) * Upstream changes: - v0.1.0 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Jun 2020 10:28:43 +0000 swh-storage (0.0.193-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.193 - (tagged by Antoine R. Dumont (@ardumont) on 2020-05-28 14:28:54 +0200) * Upstream changes: - v0.0.193 - pg: Write origin visit updates & status, read from origin_visit_status - Make content.blake2s256 not null. - Remove unused SQL functions. - README: Update necessary dependencies for test purposes - Add a pre-commit hook to check there are version bumps in sql/upgrades/*.sql - Add missing dbversion bump in 150.sql. - Add artifact metadata to the extrinsic metadata storage specification. - Add not null constraints to metadata_authority/origin_metadata - Realign schema with latest 149 migration script -- Software Heritage autobuilder (on jenkins-debian1) Thu, 28 May 2020 12:37:58 +0000 swh-storage (0.0.192-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.192 - (tagged by Valentin Lorentz on 2020-05-19 18:42:00 +0200) * Upstream changes: - v0.0.192 - * origin_metadata_add: Reject non-bytes types for 'metadata'. 
-- Software Heritage autobuilder (on jenkins-debian1) Tue, 19 May 2020 16:54:00 +0000 swh-storage (0.0.191-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.191 - (tagged by Valentin Lorentz on 2020-05-19 13:43:35 +0200) * Upstream changes: - v0.0.191 - * Implement the new extrinsic metadata specification/vocabulary. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 19 May 2020 11:52:00 +0000 swh-storage (0.0.190-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.190 - (tagged by Antoine R. Dumont (@ardumont) on 2020-05-18 14:10:39 +0200) * Upstream changes: - v0.0.190 - storage: metadata_provider: Ensure idempotency when creating provider - journal: add a skipped_content topic dedicated to SkippedContent objects - Add missing return annotations on JournalWriter methods - Improve a bit the exception message of JournalWriter.content_update - Refactor the JournalWriter class to normalize its methods - tests: fix test_replay; do only use aware datetime objects - test_kafka_writer: Add missing object type skipped_content -- Software Heritage autobuilder (on jenkins-debian1) Mon, 18 May 2020 12:18:09 +0000 swh-storage (0.0.189-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.189 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-30 14:50:54 +0200) * Upstream changes: - v0.0.189 - pg: Write both origin visit updates & status, read from origin_visit - pg-storage: Add new created state - setup.py: add documentation link - metadata spec: Fix title hierarchy - tests: Use aware datetimes instead of naive ones. - cassandra: Adapt internal implementations to use origin visit status - in_memory: Adapt internal implementations to use origin visit status -- Software Heritage autobuilder (on jenkins-debian1) Thu, 30 Apr 2020 12:58:57 +0000 swh-storage (0.0.188-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.188 - (tagged by David Douard on 2020-04-28 13:44:20 +0200) * Upstream changes: - v0.0.188 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 28 Apr 2020 11:52:08 +0000 swh-storage (0.0.187-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.187 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-14 18:13:08 +0200) * Upstream changes: - v0.0.187 - storage.interface: Actually define the remote flush operation -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Apr 2020 16:23:41 +0000 swh-storage (0.0.186-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.186 - (tagged by Nicolas Dandrimont on 2020-04-14 17:09:22 +0200) * Upstream changes: - Release swh.storage v0.0.186 - Drop backwards-compatibility code with swh.journal < 0.0.30 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Apr 2020 15:20:57 +0000 swh-storage (0.0.185-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.185 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-14 14:15:32 +0200) * Upstream changes: - v0.0.185 - storage.filter: Remove internal state - test: update storage tests to (future) swh.journal 0.0.30 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Apr 2020 12:22:06 +0000 swh-storage (0.0.184-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.184 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-04-10 16:07:32 +0200) * Upstream changes: - v0.0.184 - storage*: Add flush endpoints to storage implems (backend, proxy) - test_retry: Add missing skipped_content_add tests -- Software Heritage autobuilder (on jenkins-debian1) Fri, 10 Apr 2020 14:14:20 +0000 swh-storage (0.0.183-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.183 - (tagged by Antoine R. Dumont (@ardumont) on 2020-04-09 12:35:53 +0200) * Upstream changes: - v0.0.183 - proxy storage: Add a clear_buffers endpoint - buffer proxy storage: Filter out duplicate objects prior to storage write - storage: Prevent erroneous HashCollisions by using the same ctime for all rows. - Enable black - origin_visit_update: ensure it raises a StorageArgumentException - Adapt cassandra backend to validating model types - tests: many refactoring improvments - tests: Shut down cassandra connection before closing the fixture down - Add more type annotations -- Software Heritage autobuilder (on jenkins-debian1) Thu, 09 Apr 2020 10:46:29 +0000 swh-storage (0.0.182-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.182 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-27 07:02:13 +0100) * Upstream changes: - v0.0.182 - storage*: Update origin_visit_update to make status parameter mandatory - test: Adapt origin validation test according to latest model changes - Respec discovery_date as a Python datetime instead of an ISO string. - origin_visit_add: Add missing db/cur argument to call to origin_get. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 27 Mar 2020 06:13:17 +0000 swh-storage (0.0.181-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.181 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-25 09:50:49 +0100) * Upstream changes: - v0.0.181 - storage*: Hex encode content hashes in HashCollision exception - Add format of discovery_date in the metadata specification. - Store the value of token(partition_key) in skipped_content_by_* table, instead of three hashes. - Store the value of token(partition_key) in content_by_* table, instead of three hashes. -- Software Heritage autobuilder (on jenkins-debian1) Wed, 25 Mar 2020 09:03:43 +0000 swh-storage (0.0.180-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.180 - (tagged by Nicolas Dandrimont on 2020-03-18 18:24:41 +0100) * Upstream changes: - Release swh.storage v0.0.180 - Stop counting origin additions multiple times in statsd -- Software Heritage autobuilder (on jenkins-debian1) Wed, 18 Mar 2020 17:45:36 +0000 swh-storage (0.0.179-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.179 - (tagged by Nicolas Dandrimont on 2020-03-18 16:05:13 +0100) * Upstream changes: - Release swh.storage v0.0.179. - fix requirements-swh.txt to use proper version restriction - reduce the transaction load for content writes and reads -- Software Heritage autobuilder (on jenkins-debian1) Wed, 18 Mar 2020 15:50:50 +0000 swh-storage (0.0.178-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.178 - (tagged by Antoine R. Dumont (@ardumont) on 2020-03-16 12:51:28 +0100) * Upstream changes: - v0.0.178 - origin_visit_add: Adapt endpoint signature to return OriginVisit - origin_visit_upsert: Use OriginVisit object as input - storage/writer: refactor JournalWriter.content_add to send model objects -- Software Heritage autobuilder (on jenkins-debian1) Mon, 16 Mar 2020 11:59:18 +0000 swh-storage (0.0.177-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.177 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-03-10 11:37:33 +0100) * Upstream changes: - v0.0.177 - storage: Identify and provide the collision hashes in exception - Guarantee the order of results for revision_get and release_get - tests: Improve test speed - sql: do not attempt to create the plpgsql lang if already exists - Update requirement on swh.core for RPCClient method overrides -- Software Heritage autobuilder (on jenkins-debian1) Tue, 10 Mar 2020 10:48:11 +0000 swh-storage (0.0.176-1~swh2) unstable-swh; urgency=medium * Update build dependencies -- Antoine R. Dumont (@ardumont) Mon, 02 Mar 2020 14:36:00 +0100 swh-storage (0.0.176-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.176 - (tagged by Valentin Lorentz on 2020-02-28 14:44:10 +0100) * Upstream changes: - v0.0.176 - * Accept cassandra-driver >= 3.22. - * Make the RPC client and objstorage helper fetch Content.data from lazy - contents. - * Move ctime out of the validation proxy. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 28 Feb 2020 15:21:27 +0000 swh-storage (0.0.175-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.175 - (tagged by Antoine Lambert on 2020-02-20 13:51:40 +0100) * Upstream changes: - version 0.0.175 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 20 Feb 2020 13:18:34 +0000 swh-storage (0.0.174-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.174 - (tagged by Valentin Lorentz on 2020-02-19 14:18:59 +0100) * Upstream changes: - v0.0.174 - * Fix inconsistent behavior of skipped_content_missing across backends. - * Fix FilteringProxy to not drop skipped-contents with a missing sha1_git. - * Make storage proxies use swh-model objects instead of dicts. - * Add support for (de)serializing swh-model in RPC calls. -- Software Heritage autobuilder (on jenkins-debian1) Wed, 19 Feb 2020 15:00:32 +0000 swh-storage (0.0.172-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.172 - (tagged by Valentin Lorentz on 2020-02-12 14:00:04 +0100) * Upstream changes: - v0.0.172 - * Unify exception raised by invalid input to API endpoints. - * Add a validation proxy for _add() methods. This proxy is *required* - in front of all backends whose _add() methods may be called or they'll - crash at runtime. - * Fix RecursionError when storage proxies are deepcopied or unpickled. - * storages: Refactor objstorage operations with a dedicated collaborator - * storages: Refactor journal operations with a dedicated writer collab -- Software Heritage autobuilder (on jenkins-debian1) Wed, 12 Feb 2020 13:13:47 +0000 swh-storage (0.0.171-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.171 - (tagged by Valentin Lorentz on 2020-02-06 14:46:05 +0100) * Upstream changes: - v0.0.171 - * Split 'content_add' method into 'content_add' and 'skipped_content_add'. - * Increase Cassandra requests timeout to 1 second. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 06 Feb 2020 14:07:37 +0000 swh-storage (0.0.170-1~swh3) unstable-swh; urgency=medium * Update build dependencies -- Antoine R. Dumont (@ardumont) Mon, 03 Feb 2020 17:30:38 +0100 swh-storage (0.0.170-1~swh2) unstable-swh; urgency=medium * Update build dependencies -- Antoine R. Dumont (@ardumont) Mon, 03 Feb 2020 16:00:39 +0100 swh-storage (0.0.170-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.170 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-02-03 14:11:53 +0100) * Upstream changes: - v0.0.170 - swh.storage.cassandra: Add Cassandra backend implementation -- Software Heritage autobuilder (on jenkins-debian1) Mon, 03 Feb 2020 13:23:48 +0000 swh-storage (0.0.169-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.169 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-30 13:40:00 +0100) * Upstream changes: - v0.0.169 - retry: Add retry behavior on pipeline storage with flushing failure -- Software Heritage autobuilder (on jenkins-debian1) Thu, 30 Jan 2020 13:26:23 +0000 swh-storage (0.0.168-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.168 - (tagged by Valentin Lorentz on 2020-01-30 11:19:31 +0100) * Upstream changes: - v0.0.168 - * Implement content_update for the in-mem storage. - * Remove cur/db arguments from the in- mem storage. - * Move Storage documentation and endpoint paths to a new StorageInterface class - * Rename in_memory.Storage to in_memory.InMemoryStorage. - * CONTRIBUTORS: add Daniele Serafini -- Software Heritage autobuilder (on jenkins-debian1) Thu, 30 Jan 2020 10:25:30 +0000 swh-storage (0.0.167-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.167 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-24 14:55:57 +0100) * Upstream changes: - v0.0.167 - pgstorage: Empty temp tables instead of dropping them -- Software Heritage autobuilder (on jenkins-debian1) Fri, 24 Jan 2020 14:01:57 +0000 swh-storage (0.0.166-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.166 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-24 09:51:52 +0100) * Upstream changes: - v0.0.166 - storage: Add endpoint to get missing content (by sha1_git) and missing snapshot - Remove redundant config checks in load_and_check_config - Remove 'id' and 'object_id' from the output of object_find_by_sha1_git - Make origin_visit_get_random return None instead of {} if there are no results - docs: Fix sphinx warnings -- Software Heritage autobuilder (on jenkins-debian1) Fri, 24 Jan 2020 09:00:12 +0000 swh-storage (0.0.165-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.165 - (tagged by Antoine R. Dumont (@ardumont) on 2020-01-17 14:04:53 +0100) * Upstream changes: - v0.0.165 - storage.retry: Fix objects loading when using generator parameters -- Software Heritage autobuilder (on jenkins-debian1) Fri, 17 Jan 2020 13:09:39 +0000 swh-storage (0.0.164-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.164 - (tagged by Antoine Lambert on 2020-01-16 17:54:40 +0100) * Upstream changes: - version 0.0.164 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 Jan 2020 17:05:02 +0000 swh-storage (0.0.163-1~swh2) unstable-swh; urgency=medium * Fix test dependency -- Antoine R. Dumont (@ardumont) Tue, 14 Jan 2020 17:26:08 +0100 swh-storage (0.0.163-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.163 - (tagged by Antoine R. 
Dumont (@ardumont) on 2020-01-14 17:12:03 +0100) * Upstream changes: - v0.0.163 - retry: Improve proxy storage for add endpoints - in_memory: Make directory_get_random return None when storage empty - storage: Change content_get_metadata api to return Dict[bytes, List[Dict]] - storage: Add content_get_partition endpoint to replace content_get_range - storage: Add endpoint origin_list to replace origin_get_range -- Software Heritage autobuilder (on jenkins-debian1) Tue, 14 Jan 2020 16:17:45 +0000 swh-storage (0.0.162-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.162 - (tagged by Valentin Lorentz on 2019-12-16 14:37:44 +0100) * Upstream changes: - v0.0.162 - Add {content,directory,revision,release,snapshot}_get_random. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 16 Dec 2019 13:41:39 +0000 swh-storage (0.0.161-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.161 - (tagged by Antoine R. Dumont (@ardumont) on 2019-12-10 15:03:28 +0100) * Upstream changes: - v0.0.161 - storage: Add endpoint to randomly pick an origin -- Software Heritage autobuilder (on jenkins-debian1) Tue, 10 Dec 2019 14:08:15 +0000 swh-storage (0.0.160-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.160 - (tagged by Antoine R. Dumont (@ardumont) on 2019-12-06 11:15:48 +0100) * Upstream changes: - v0.0.160 - storage.buffer: Buffer release objects as well - storage.tests: Unify tests sample data - Implement origin lookup by sha1 -- Software Heritage autobuilder (on jenkins-debian1) Fri, 06 Dec 2019 10:23:44 +0000 swh-storage (0.0.159-1~swh2) unstable-swh; urgency=medium * Force fast hypothesis profile when running tests -- Antoine R. Dumont (@ardumont) Tue, 26 Nov 2019 17:08:16 +0100 swh-storage (0.0.159-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.159 - (tagged by Antoine R. Dumont (@ardumont) on 2019-11-22 11:05:41 +0100) * Upstream changes: - v0.0.159 - Add 'pipeline' storage "class" for more readable configurations. - tests: Improve tests environments configuration - Fix a few typos reported by codespell - Add a pre-commit-hooks.yaml config file - Remove utils/(dump|fix)_revisions scripts -- Software Heritage autobuilder (on jenkins-debian1) Fri, 22 Nov 2019 10:10:31 +0000 swh-storage (0.0.158-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.158 - (tagged by Antoine R. Dumont (@ardumont) on 2019-11-14 13:33:00 +0100) * Upstream changes: - v0.0.158 - Drop schemata module (migrated back to swh-lister) -- Software Heritage autobuilder (on jenkins-debian1) Thu, 14 Nov 2019 12:37:18 +0000 swh-storage (0.0.157-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.157 - (tagged by Nicolas Dandrimont on 2019-11-13 13:22:39 +0100) * Upstream changes: - Release swh.storage 0.0.157 - schemata.distribution: Fix bogus NotImplementedError on Area.index_uris -- Software Heritage autobuilder (on jenkins-debian1) Wed, 13 Nov 2019 12:27:07 +0000 swh-storage (0.0.156-1~swh2) unstable-swh; urgency=medium * Add version constraint on psycopg2 -- Nicolas Dandrimont Wed, 30 Oct 2019 18:21:34 +0100 swh-storage (0.0.156-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.156 - (tagged by Valentin Lorentz on 2019-10-30 15:12:10 +0100) * Upstream changes: - v0.0.156 - * Stop supporting origin ids in API (except in origin_get_range). - * Make visit['origin'] a string everywhere (instead of a dict). 
-- Software Heritage autobuilder (on jenkins-debian1) Wed, 30 Oct 2019 14:29:28 +0000 swh-storage (0.0.155-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.155 - (tagged by David Douard on 2019-10-30 12:14:14 +0100) * Upstream changes: - v0.0.155 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 30 Oct 2019 11:18:37 +0000 swh-storage (0.0.154-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.154 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-17 13:47:57 +0200) * Upstream changes: - v0.0.154 - Fix tests in debian build -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Oct 2019 11:52:46 +0000 swh-storage (0.0.153-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.153 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-17 13:21:00 +0200) * Upstream changes: - v0.0.153 - Deploy new test fixture -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Oct 2019 11:26:12 +0000 swh-storage (0.0.152-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.152 - (tagged by Antoine R. Dumont (@ardumont) on 2019-10-08 16:55:43 +0200) * Upstream changes: - v0.0.152 - swh.storage.buffer: Add buffering proxy storage implementation - swh.storage.filter: Add filtering storage implementation - swh.storage.tests: Improve db transaction handling - swh.storage.tests: Add more tests - swh.storage.storage: introduce a db() context manager -- Software Heritage autobuilder (on jenkins-debian1) Tue, 08 Oct 2019 15:03:16 +0000 swh-storage (0.0.151-1~swh2) unstable-swh; urgency=medium * Add missing build-dependency on python3-swh.journal -- Nicolas Dandrimont Tue, 01 Oct 2019 18:28:19 +0200 swh-storage (0.0.151-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.151 - (tagged by Stefano Zacchiroli on 2019-10-01 10:04:36 +0200) * Upstream changes: - v0.0.151 - * tox: anticipate mypy run to just after flake8 - * mypy.ini: be less flaky w.r.t. the packages installed in tox - * storage.py: ignore typing of optional get_journal_writer import - * mypy: ignore swh.journal to work-around dependency loop - * init.py: switch to documented way of extending path - * typing: minimal changes to make a no- op mypy run pass - * Write objects to the journal only if they don't exist yet. - * Use origin URLs for skipped_content['origin'] instead of origin ids. - * Properly mock get_journal_writer for the remote-pg-storage tests. - * journal_writer: use journal writer from swh.journal - * fix typos in docstrings and sample paths - * storage.origin_visit_add: Remove deprecated 'ts' parameter - * click "required" param wants bool, not int -- Software Heritage autobuilder (on jenkins-debian1) Tue, 01 Oct 2019 08:09:53 +0000 swh-storage (0.0.150-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.150 - (tagged by Antoine R. Dumont (@ardumont) on 2019-09-04 16:09:59 +0200) * Upstream changes: - v0.0.150 - tests/test_storage: Remove failing assertion after swh-model update - tests/test_storage: Fix tests execution with psycopg2 < 2.8 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 04 Sep 2019 14:16:09 +0000 swh-storage (0.0.149-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.149 - (tagged by Antoine R. 
Dumont (@ardumont) on 2019-09-03 14:00:57 +0200) * Upstream changes: - v0.0.149 - Add support for origin_url in origin_metadata_* - Make origin_add/origin_visit_update validate their input - Make snapshot_add validate its input - Make revision_add and release_add validate their input - Make directory_add validate its input - Make content_add validate its input using swh-model -- Software Heritage autobuilder (on jenkins-debian1) Tue, 03 Sep 2019 12:27:51 +0000 swh-storage (0.0.148-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.148 - (tagged by Valentin Lorentz on 2019-08-23 10:33:02 +0200) * Upstream changes: - v0.0.148 - Tests improvements: - * Remove 'next_branch' from test input data. - * Fix off-by-one error when using origin_visit_upsert on with an unknown visit id. - * Use explicit arguments for origin_visit_add. - * Remove test_content_missing__marked_missing, it makes no sense. - Drop person ids: - * Stop leaking person ids. - * Remove person_get endpoint. - Logging fixes: - * Enforce log level for the werkzeug logger. - * Eliminate warnings about %TYPE. - * api: use RPCServerApp and RPCClient instead of deprecated classes - Other: - * Add support for skipped content in in- memory storage -- Software Heritage autobuilder (on jenkins-debian1) Fri, 23 Aug 2019 08:48:21 +0000 swh-storage (0.0.147-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.147 - (tagged by Valentin Lorentz on 2019-07-18 12:11:37 +0200) * Upstream changes: - Make origin_get ignore the `type` argument -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Jul 2019 10:16:16 +0000 swh-storage (0.0.146-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.146 - (tagged by Valentin Lorentz on 2019-07-18 10:46:21 +0200) * Upstream changes: - Progress toward getting rid of origin ids - * Less dependency on origin ids in the in-mem storage - * add the SWH_STORAGE_IN_MEMORY_ENABLE_ORIGIN_IDS env var - * Remove legacy behavior of snapshot_add -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Jul 2019 08:52:09 +0000 swh-storage (0.0.145-1~swh3) unstable-swh; urgency=medium * Properly rebuild for unstable-swh -- Nicolas Dandrimont Thu, 11 Jul 2019 14:03:30 +0200 swh-storage (0.0.145-1~swh2) buster-swh; urgency=medium * Remove useless swh.scheduler dependency -- Nicolas Dandrimont Thu, 11 Jul 2019 13:53:45 +0200 swh-storage (0.0.145-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.145 - (tagged by Valentin Lorentz on 2019-07-02 12:00:53 +0200) * Upstream changes: - v0.0.145 - Add an 'origin_visit_find_by_date' endpoint. - Add support for origin urls in all endpoints -- Software Heritage autobuilder (on jenkins-debian1) Tue, 02 Jul 2019 10:19:19 +0000 swh-storage (0.0.143-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.143 - (tagged by Valentin Lorentz on 2019-06-05 13:18:14 +0200) * Upstream changes: - Add test for snapshot/release counters. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 01 Jul 2019 12:38:40 +0000 swh-storage (0.0.142-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.142 - (tagged by Valentin Lorentz on 2019-06-11 15:24:49 +0200) * Upstream changes: - Mark network tests, so they can be disabled. 
-- Software Heritage autobuilder (on jenkins-debian1) Tue, 11 Jun 2019 13:44:19 +0000 swh-storage (0.0.141-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.141 - (tagged by Valentin Lorentz on 2019-06-06 17:05:03 +0200) * Upstream changes: - Add support for using URL instead of ID in snapshot_get_latest. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 11 Jun 2019 10:36:32 +0000 swh-storage (0.0.140-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.140 - (tagged by mihir(faux__) on 2019-03-24 21:47:31 +0530) * Upstream changes: - Changes the output of content_find method to a list in case of hash collisions and makes the sql query on python side and added test duplicate input, colliding sha256 and colliding blake2s256 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 16 May 2019 12:09:04 +0000 swh-storage (0.0.139-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.139 - (tagged by Nicolas Dandrimont on 2019-04-18 17:57:57 +0200) * Upstream changes: - Release swh.storage v0.0.139 - Backwards- compatibility improvements for snapshot_add - Better transactionality in revision_add/release_add - Fix backwards metric names - Handle shallow histories properly -- Software Heritage autobuilder (on jenkins-debian1) Thu, 18 Apr 2019 16:08:28 +0000 swh-storage (0.0.138-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.138 - (tagged by Valentin Lorentz on 2019-04-09 16:40:49 +0200) * Upstream changes: - Use the db_transaction decorator on all _add() methods. - So they gracefully release the connection on error instead - of relying on reference-counting to call the Db's `__del__` - (which does not happen in Hypothesis tests) because a ref - to it is kept via the traceback object. -- Software Heritage autobuilder (on jenkins-debian1) Tue, 09 Apr 2019 16:50:48 +0000 swh-storage (0.0.137-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.137 - (tagged by Valentin Lorentz on 2019-04-08 15:40:24 +0200) * Upstream changes: - Make test_origin_get_range run faster. -- Software Heritage autobuilder (on jenkins-debian1) Mon, 08 Apr 2019 13:56:16 +0000 swh-storage (0.0.135-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.135 - (tagged by Valentin Lorentz on 2019-04-04 20:42:32 +0200) * Upstream changes: - Make content_add_metadata require a ctime argument. - This makes Python set the ctime instead of pgsql. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 05 Apr 2019 14:43:28 +0000 swh-storage (0.0.134-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.134 - (tagged by Valentin Lorentz on 2019-04-03 13:38:58 +0200) * Upstream changes: - Don't leak origin ids to the journal. 
-- Software Heritage autobuilder (on jenkins-debian1) Thu, 04 Apr 2019 10:16:09 +0000 swh-storage (0.0.132-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.132 - (tagged by Valentin Lorentz on 2019-04-01 11:50:30 +0200) * Upstream changes: - Use sha1 instead of bigint as FK from origin_visit to snapshot (part 1: add new column) -- Software Heritage autobuilder (on jenkins-debian1) Mon, 01 Apr 2019 13:30:48 +0000 swh-storage (0.0.131-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.131 - (tagged by Nicolas Dandrimont on 2019-03-28 17:24:44 +0100) * Upstream changes: - Release swh.storage v0.0.131 - Add statsd metrics to storage RPC backend - Clean up snapshot_add/origin_visit_update - Uniformize RPC backend to use POSTs everywhere -- Software Heritage autobuilder (on jenkins-debian1) Thu, 28 Mar 2019 16:34:07 +0000 swh-storage (0.0.130-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.130 - (tagged by Valentin Lorentz on 2019-02-26 10:50:44 +0100) * Upstream changes: - Add an helper function to list all origins in the storage. -- Software Heritage autobuilder (on jenkins-debian1) Wed, 13 Mar 2019 14:01:04 +0000 swh-storage (0.0.129-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.129 - (tagged by Valentin Lorentz on 2019-02-27 10:42:29 +0100) * Upstream changes: - Double the timeout of revision_get. - Metadata indexers often hit the limit. -- Software Heritage autobuilder (on jenkins-debian1) Fri, 01 Mar 2019 10:11:28 +0000 swh-storage (0.0.128-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.128 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-21 14:59:22 +0100) * Upstream changes: - v0.0.128 - api.server: Fix wrong exception type - storage.cli: Fix cli entry point name to the expected name (setup.py) -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 14:07:23 +0000 swh-storage (0.0.127-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.127 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-21 13:34:19 +0100) * Upstream changes: - v0.0.127 - api.wsgi: Open wsgi entrypoint and check config at startup time - api.server: Make the api server load and check its configuration - swh.storage.cli: Migrate the api server startup in swh.storage.cli -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 12:59:48 +0000 swh-storage (0.0.126-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.126 - (tagged by Valentin Lorentz on 2019-02-21 10:18:26 +0100) * Upstream changes: - Double the timeout of snapshot_get_latest. - Metadata indexers often hit the limit. -- Software Heritage autobuilder (on jenkins-debian1) Thu, 21 Feb 2019 11:24:52 +0000 swh-storage (0.0.125-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.125 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-14 10:13:31 +0100) * Upstream changes: - v0.0.125 - api/server: Do not read configuration at each request -- Software Heritage autobuilder (on jenkins-debian1) Thu, 14 Feb 2019 16:57:01 +0000 swh-storage (0.0.124-1~swh3) unstable-swh; urgency=low * New upstream release, fixing the distribution this time -- Antoine R. Dumont (@ardumont) Thu, 14 Feb 2019 17:51:29 +0100 swh-storage (0.0.124-1~swh2) unstable; urgency=medium * New upstream release for dependency fix reasons -- Antoine R. 
Dumont (@ardumont) Thu, 14 Feb 2019 09:27:55 +0100 swh-storage (0.0.124-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.124 - (tagged by Antoine Lambert on 2019-02-12 14:40:53 +0100) * Upstream changes: - version 0.0.124 -- Software Heritage autobuilder (on jenkins-debian1) Tue, 12 Feb 2019 13:46:08 +0000 swh-storage (0.0.123-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.123 - (tagged by Antoine R. Dumont (@ardumont) on 2019-02-08 15:06:49 +0100) * Upstream changes: - v0.0.123 - Make Storage.origin_get support a list of origins, like other - Storage.*_get methods. - Stop using _to_bytes functions. - Use the BaseDb (and friends) from swh-core -- Software Heritage autobuilder (on jenkins-debian1) Fri, 08 Feb 2019 14:14:18 +0000 swh-storage (0.0.122-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.122 - (tagged by Antoine Lambert on 2019-01-28 11:57:27 +0100) * Upstream changes: - version 0.0.122 -- Software Heritage autobuilder (on jenkins-debian1) Mon, 28 Jan 2019 11:02:45 +0000 swh-storage (0.0.121-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.121 - (tagged by Antoine Lambert on 2019-01-28 11:31:48 +0100) * Upstream changes: - version 0.0.121 -- Software Heritage autobuilder (on jenkins-debian1) Mon, 28 Jan 2019 10:36:40 +0000 swh-storage (0.0.120-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.120 - (tagged by Antoine Lambert on 2019-01-17 12:04:27 +0100) * Upstream changes: - version 0.0.120 -- Software Heritage autobuilder (on jenkins-debian1) Thu, 17 Jan 2019 11:12:47 +0000 swh-storage (0.0.119-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.119 - (tagged by Antoine R. Dumont (@ardumont) on 2019-01-11 11:57:13 +0100) * Upstream changes: - v0.0.119 - listener: Notify Kafka when an origin visit is updated -- Software Heritage autobuilder (on jenkins-debian1) Fri, 11 Jan 2019 11:02:07 +0000 swh-storage (0.0.118-1~swh1) unstable-swh; urgency=medium * New upstream release 0.0.118 - (tagged by Antoine Lambert on 2019-01-09 16:59:15 +0100) * Upstream changes: - version 0.0.118 -- Software Heritage autobuilder (on jenkins-debian1) Wed, 09 Jan 2019 18:51:34 +0000 swh-storage (0.0.117-1~swh1) unstable-swh; urgency=medium * v0.0.117 * listener: Adapt decoding behavior depending on the object type -- Antoine R. Dumont (@ardumont) Thu, 20 Dec 2018 14:48:44 +0100 swh-storage (0.0.116-1~swh1) unstable-swh; urgency=medium * v0.0.116 * Update requirements to latest swh.core -- Antoine R. Dumont (@ardumont) Fri, 14 Dec 2018 15:57:04 +0100 swh-storage (0.0.115-1~swh1) unstable-swh; urgency=medium * version 0.0.115 -- Antoine Lambert Fri, 14 Dec 2018 15:47:52 +0100 swh-storage (0.0.114-1~swh1) unstable-swh; urgency=medium * version 0.0.114 -- Antoine Lambert Wed, 05 Dec 2018 10:59:49 +0100 swh-storage (0.0.113-1~swh1) unstable-swh; urgency=medium * v0.0.113 * in-memory storage: Add recursive argument to directory_ls endpoint -- Antoine R. Dumont (@ardumont) Fri, 30 Nov 2018 11:56:44 +0100 swh-storage (0.0.112-1~swh1) unstable-swh; urgency=medium * v0.0.112 * in-memory storage: Align with existing storage * docstring: Improvments and adapt according to api * doc: update index to match new swh-doc format * Increase test coverage for stat_counters + fix its bugs. -- Antoine R. Dumont (@ardumont) Fri, 30 Nov 2018 10:28:02 +0100 swh-storage (0.0.111-1~swh1) unstable-swh; urgency=medium * v0.0.111 * Move generative tests in their own module * Open in-memory storage implementation -- Antoine R. 
Dumont (@ardumont) Wed, 21 Nov 2018 08:55:14 +0100 swh-storage (0.0.110-1~swh1) unstable-swh; urgency=medium * v0.0.110 * storage: Open content_get_range endpoint * tests: Start using hypothesis for tests generation * Improvments: Remove SQLisms from the tests and API * docs: Document metadata providers -- Antoine R. Dumont (@ardumont) Fri, 16 Nov 2018 11:53:14 +0100 swh-storage (0.0.109-1~swh1) unstable-swh; urgency=medium * version 0.0.109 -- Antoine Lambert Mon, 12 Nov 2018 14:11:09 +0100 swh-storage (0.0.108-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.108 * Add a function to get a full snapshot from the paginated view -- Nicolas Dandrimont Thu, 18 Oct 2018 18:32:10 +0200 swh-storage (0.0.107-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.107 * Enable pagination of snapshot branches * Drop occurrence-related tables * Drop entity-related tables -- Nicolas Dandrimont Wed, 17 Oct 2018 15:06:07 +0200 swh-storage (0.0.106-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.106 * Fix origin_visit_get_latest_snapshot logic * Improve directory iterator * Drop backwards compatibility between snapshots and occurrences * Drop the occurrence table -- Nicolas Dandrimont Mon, 08 Oct 2018 17:03:54 +0200 swh-storage (0.0.105-1~swh1) unstable-swh; urgency=medium * v0.0.105 * Increase directory_ls endpoint to 20 seconds * Add snapshot to the stats endpoint * Improve documentation -- Antoine R. Dumont (@ardumont) Mon, 10 Sep 2018 11:36:27 +0200 swh-storage (0.0.104-1~swh1) unstable-swh; urgency=medium * version 0.0.104 -- Antoine Lambert Wed, 29 Aug 2018 15:55:37 +0200 swh-storage (0.0.103-1~swh1) unstable-swh; urgency=medium * v0.0.103 * swh.storage.storage: origin_add returns updated list of dict with id -- Antoine R. Dumont (@ardumont) Mon, 30 Jul 2018 11:47:53 +0200 swh-storage (0.0.102-1~swh1) unstable-swh; urgency=medium * Release swh-storage v0.0.102 * Stop using temporary tables for read-only queries * Add timeouts for some read-only queries -- Nicolas Dandrimont Tue, 05 Jun 2018 14:06:54 +0200 swh-storage (0.0.101-1~swh1) unstable-swh; urgency=medium * v0.0.101 * swh.storage.api.client: Permit to specify the query timeout option -- Antoine R. Dumont (@ardumont) Thu, 24 May 2018 12:13:51 +0200 swh-storage (0.0.100-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.100 * remote api: only instantiate storage once per import * add thread-awareness to the storage implementation * properly cleanup after tests * parallelize objstorage and storage additions -- Nicolas Dandrimont Sat, 12 May 2018 18:12:40 +0200 swh-storage (0.0.99-1~swh1) unstable-swh; urgency=medium * v0.0.99 * storage: Add methods to compute directories/revisions diff * Add a new table for "bucketed" object counts * doc: update table clusters in SQL diagram * swh.storage.content_missing: Improve docstring -- Antoine R. 
Dumont (@ardumont) Tue, 20 Feb 2018 13:32:25 +0100 swh-storage (0.0.98-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.98 * Switch backwards compatibility for snapshots off -- Nicolas Dandrimont Tue, 06 Feb 2018 15:27:15 +0100 swh-storage (0.0.97-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.97 * refactor database initialization * use a separate thread instead of a temporary file for COPY operations * add more snapshot-related endpoints -- Nicolas Dandrimont Tue, 06 Feb 2018 14:07:07 +0100 swh-storage (0.0.96-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.96 * Add snapshot models * Add support for hg revision type -- Nicolas Dandrimont Tue, 19 Dec 2017 16:25:57 +0100 swh-storage (0.0.95-1~swh1) unstable-swh; urgency=medium * v0.0.95 * swh.storage: Rename indexer_configuration to tool * swh.storage: Migrate indexer model to its own model -- Antoine R. Dumont (@ardumont) Thu, 07 Dec 2017 09:56:31 +0100 swh-storage (0.0.94-1~swh1) unstable-swh; urgency=medium * v0.0.94 * Open searching origins methods to storage -- Antoine R. Dumont (@ardumont) Tue, 05 Dec 2017 12:32:57 +0100 swh-storage (0.0.93-1~swh1) unstable-swh; urgency=medium * v0.0.93 * swh.storage: Open indexer_configuration_add endpoint * swh-data: Update content mimetype indexer configuration * origin_visit_get: make order repeatable * db: Make unique indices actually unique and vice versa * Add origin_metadata endpoints (add, get, etc...) * cleanup: Remove unused content provenance cache tables -- Antoine R. Dumont (@ardumont) Fri, 24 Nov 2017 11:14:11 +0100 swh-storage (0.0.92-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.92 * make swh.storage.schemata work on SQLAlchemy 1.0 -- Nicolas Dandrimont Thu, 12 Oct 2017 19:51:24 +0200 swh-storage (0.0.91-1~swh1) unstable-swh; urgency=medium * Release swh.storage version 0.0.91 * Update packaging runes -- Nicolas Dandrimont Thu, 12 Oct 2017 18:41:46 +0200 swh-storage (0.0.90-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.90 * Remove leaky dependency on python3-kafka -- Nicolas Dandrimont Wed, 11 Oct 2017 18:53:22 +0200 swh-storage (0.0.89-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.89 * Add new package for ancillary schemata * Add new metadata-related entry points * Update for new swh.model -- Nicolas Dandrimont Wed, 11 Oct 2017 17:39:29 +0200 swh-storage (0.0.88-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.88 * Move the archiver to its own module * Prepare building for stretch -- Nicolas Dandrimont Fri, 30 Jun 2017 14:52:12 +0200 swh-storage (0.0.87-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.87 * update tasks to new swh.scheduler api -- Nicolas Dandrimont Mon, 12 Jun 2017 17:54:11 +0200 swh-storage (0.0.86-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.86 * archiver updates -- Nicolas Dandrimont Tue, 06 Jun 2017 18:43:43 +0200 swh-storage (0.0.85-1~swh1) unstable-swh; urgency=medium * v0.0.85 * Improve license endpoint's unknown license policy -- Antoine R. Dumont (@ardumont) Tue, 06 Jun 2017 17:55:40 +0200 swh-storage (0.0.84-1~swh1) unstable-swh; urgency=medium * v0.0.84 * Update indexer endpoints to use indexer configuration id * Add indexer configuration endpoint -- Antoine R. Dumont (@ardumont) Fri, 02 Jun 2017 16:16:47 +0200 swh-storage (0.0.83-1~swh1) unstable-swh; urgency=medium * v0.0.83 * Add blake2s256 new hash computation on content -- Antoine R. 
Dumont (@ardumont) Fri, 31 Mar 2017 12:27:09 +0200 swh-storage (0.0.82-1~swh1) unstable-swh; urgency=medium * v0.0.82 * swh.storage.listener: Subscribe to new origin notifications * sql/swh-func: improve equality check on the three columns for swh_content_missing * swh.storage: add length to directory listing primitives * refactoring: Migrate from swh.core.hashutil to swh.model.hashutil * swh.storage.archiver.updater: Create a content updater journal client * vault: add a git fast-import cooker * vault: generic cache to allow multiple cooker types and formats -- Antoine R. Dumont (@ardumont) Tue, 21 Mar 2017 14:50:16 +0100 swh-storage (0.0.81-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.81 * archiver improvements for mass injection in azure -- Nicolas Dandrimont Thu, 09 Mar 2017 11:15:28 +0100 swh-storage (0.0.80-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.80 * archiver improvements related to the mass injection of contents in azure * updates to the vault cooker -- Nicolas Dandrimont Tue, 07 Mar 2017 15:12:35 +0100 swh-storage (0.0.79-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.79 * archiver: keep counts of objects in each archive * converters: normalize timestamps using swh.model -- Nicolas Dandrimont Tue, 14 Feb 2017 19:37:36 +0100 swh-storage (0.0.78-1~swh1) unstable-swh; urgency=medium * v0.0.78 * Refactoring some common code into swh.core + adaptation api calls in * swh.objstorage and swh.storage (storage and vault) -- Antoine R. Dumont (@ardumont) Thu, 26 Jan 2017 15:08:03 +0100 swh-storage (0.0.77-1~swh1) unstable-swh; urgency=medium * v0.0.77 * Paginate results for origin_visits endpoint -- Antoine R. Dumont (@ardumont) Thu, 19 Jan 2017 14:41:49 +0100 swh-storage (0.0.76-1~swh1) unstable-swh; urgency=medium * v0.0.76 * Unify storage and objstorage configuration and instantiation functions -- Antoine R. Dumont (@ardumont) Thu, 15 Dec 2016 18:25:58 +0100 swh-storage (0.0.75-1~swh1) unstable-swh; urgency=medium * v0.0.75 * Add information on indexer tools (T610) -- Antoine R. Dumont (@ardumont) Fri, 02 Dec 2016 18:21:36 +0100 swh-storage (0.0.74-1~swh1) unstable-swh; urgency=medium * v0.0.74 * Use strict equality for content ctags' symbols search -- Antoine R. Dumont (@ardumont) Tue, 29 Nov 2016 17:25:29 +0100 swh-storage (0.0.73-1~swh1) unstable-swh; urgency=medium * v0.0.73 * Improve ctags search query for edge cases -- Antoine R. Dumont (@ardumont) Mon, 28 Nov 2016 16:34:55 +0100 swh-storage (0.0.72-1~swh1) unstable-swh; urgency=medium * v0.0.72 * Permit pagination on content_ctags_search api endpoint -- Antoine R. Dumont (@ardumont) Thu, 24 Nov 2016 14:19:29 +0100 swh-storage (0.0.71-1~swh1) unstable-swh; urgency=medium * v0.0.71 * Open full-text search endpoint on ctags -- Antoine R. Dumont (@ardumont) Wed, 23 Nov 2016 17:33:51 +0100 swh-storage (0.0.70-1~swh1) unstable-swh; urgency=medium * v0.0.70 * Add new license endpoints (add/get) * Update ctags endpoints to align update conflict policy -- Antoine R. Dumont (@ardumont) Thu, 10 Nov 2016 17:27:49 +0100 swh-storage (0.0.69-1~swh1) unstable-swh; urgency=medium * v0.0.69 * storage: Open ctags entry points (missing, add, get) * storage: allow adding several origins at once -- Antoine R. 
Dumont (@ardumont) Thu, 20 Oct 2016 16:07:07 +0200 swh-storage (0.0.68-1~swh1) unstable-swh; urgency=medium * v0.0.68 * indexer: Open mimetype/language get endpoints * indexer: Add the mimetype/language add function with conflict_update flag * archiver: Extend worker-to-backend to transmit messages to another * queue (once done) -- Antoine R. Dumont (@ardumont) Thu, 13 Oct 2016 15:30:21 +0200 swh-storage (0.0.67-1~swh1) unstable-swh; urgency=medium * v0.0.67 * Fix provenance storage init function -- Antoine R. Dumont (@ardumont) Wed, 12 Oct 2016 02:24:12 +0200 swh-storage (0.0.66-1~swh1) unstable-swh; urgency=medium * v0.0.66 * Improve provenance configuration format -- Antoine R. Dumont (@ardumont) Wed, 12 Oct 2016 01:39:26 +0200 swh-storage (0.0.65-1~swh1) unstable-swh; urgency=medium * v0.0.65 * Open api entry points for swh.indexer about content mimetype and * language * Update schema graph to latest version -- Antoine R. Dumont (@ardumont) Sat, 08 Oct 2016 10:00:30 +0200 swh-storage (0.0.64-1~swh1) unstable-swh; urgency=medium * v0.0.64 * Fix: Missing incremented version 5 for archiver.dbversion * Retrieve information on a content cached * sql/swh-func: content cache populates lines in deterministic order -- Antoine R. Dumont (@ardumont) Thu, 29 Sep 2016 21:50:59 +0200 swh-storage (0.0.63-1~swh1) unstable-swh; urgency=medium * v0.0.63 * Make the 'worker to backend' destination agnostic (message parameter) * Improve 'unknown sha1' policy (archiver db can lag behind swh db) * Improve 'force copy' policy -- Antoine R. Dumont (@ardumont) Fri, 23 Sep 2016 12:29:50 +0200 swh-storage (0.0.62-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.62 * Updates to the provenance cache to reduce churn on the main tables -- Nicolas Dandrimont Thu, 22 Sep 2016 18:54:52 +0200 swh-storage (0.0.61-1~swh1) unstable-swh; urgency=medium * v0.0.61 * Handle copies of unregistered sha1 in archiver db * Fix copy to only the targeted destination * Update to latest python3-swh.core dependency -- Antoine R. Dumont (@ardumont) Thu, 22 Sep 2016 13:44:05 +0200 swh-storage (0.0.60-1~swh1) unstable-swh; urgency=medium * v0.0.60 * Update archiver dependencies -- Antoine R. Dumont (@ardumont) Tue, 20 Sep 2016 16:46:48 +0200 swh-storage (0.0.59-1~swh1) unstable-swh; urgency=medium * v0.0.59 * Unify configuration property between director/worker * Deal with potential missing contents in the archiver db * Improve get_contents_error implementation * Remove dead code in swh.storage.db about archiver -- Antoine R. Dumont (@ardumont) Sat, 17 Sep 2016 12:50:14 +0200 swh-storage (0.0.58-1~swh1) unstable-swh; urgency=medium * v0.0.58 * ArchiverDirectorToBackend reads sha1 from stdin and sends chunks of sha1 * for archival. -- Antoine R. Dumont (@ardumont) Fri, 16 Sep 2016 22:17:14 +0200 swh-storage (0.0.57-1~swh1) unstable-swh; urgency=medium * v0.0.57 * Update swh.storage.archiver -- Antoine R. Dumont (@ardumont) Thu, 15 Sep 2016 16:30:11 +0200 swh-storage (0.0.56-1~swh1) unstable-swh; urgency=medium * v0.0.56 * Vault: Add vault implementation (directory cooker & cache * implementation + its api) * Archiver: Add another archiver implementation (direct to backend) -- Antoine R. Dumont (@ardumont) Thu, 15 Sep 2016 10:56:35 +0200 swh-storage (0.0.55-1~swh1) unstable-swh; urgency=medium * v0.0.55 * Fix origin_visit endpoint -- Antoine R. Dumont (@ardumont) Thu, 08 Sep 2016 15:21:28 +0200 swh-storage (0.0.54-1~swh1) unstable-swh; urgency=medium * v0.0.54 * Open origin_visit_get_by entry point -- Antoine R. 
Dumont (@ardumont) Mon, 05 Sep 2016 12:36:34 +0200 swh-storage (0.0.53-1~swh1) unstable-swh; urgency=medium * v0.0.53 * Add cache about content provenance * debian: fix python3-swh.storage.archiver runtime dependency * debian: create new package python3-swh.storage.provenance -- Antoine R. Dumont (@ardumont) Fri, 02 Sep 2016 11:14:09 +0200 swh-storage (0.0.52-1~swh1) unstable-swh; urgency=medium * v0.0.52 * Package python3-swh.storage.archiver -- Antoine R. Dumont (@ardumont) Thu, 25 Aug 2016 14:55:23 +0200 swh-storage (0.0.51-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.51 * Add new metadata column to origin_visit * Update swh-add-directory script for updated API -- Nicolas Dandrimont Wed, 24 Aug 2016 14:36:03 +0200 swh-storage (0.0.50-1~swh1) unstable-swh; urgency=medium * v0.0.50 * Add a function to pull (only) metadata for a list of contents * Update occurrence_add api entry point to properly deal with origin_visit * Add origin_visit api entry points to create/update origin_visit -- Antoine R. Dumont (@ardumont) Tue, 23 Aug 2016 16:29:26 +0200 swh-storage (0.0.49-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.49 * Proper dependency on python3-kafka -- Nicolas Dandrimont Fri, 19 Aug 2016 13:45:52 +0200 swh-storage (0.0.48-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.48 * Updates to the archiver * Notification support for new object creations -- Nicolas Dandrimont Fri, 19 Aug 2016 12:13:50 +0200 swh-storage (0.0.47-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.47 * Update storage archiver to new schemaless schema -- Nicolas Dandrimont Fri, 22 Jul 2016 16:59:19 +0200 swh-storage (0.0.46-1~swh1) unstable-swh; urgency=medium * v0.0.46 * Update archiver bootstrap -- Antoine R. Dumont (@ardumont) Wed, 20 Jul 2016 19:04:42 +0200 swh-storage (0.0.45-1~swh1) unstable-swh; urgency=medium * v0.0.45 * Separate swh.storage.archiver's db from swh.storage.storage -- Antoine R. Dumont (@ardumont) Tue, 19 Jul 2016 15:05:36 +0200 swh-storage (0.0.44-1~swh1) unstable-swh; urgency=medium * v0.0.44 * Open listing visits per origin api -- Quentin Campos Fri, 08 Jul 2016 11:27:10 +0200 swh-storage (0.0.43-1~swh1) unstable-swh; urgency=medium * v0.0.43 * Extract objstorage to its own package swh.objstorage -- Quentin Campos Mon, 27 Jun 2016 14:57:12 +0200 swh-storage (0.0.42-1~swh1) unstable-swh; urgency=medium * Add an object storage multiplexer to allow transition between multiple versions of objet storages. -- Quentin Campos Tue, 21 Jun 2016 15:03:52 +0200 swh-storage (0.0.41-1~swh1) unstable-swh; urgency=medium * Refactoring of the object storage in order to allow multiple versions of it, as well as a multiplexer for version transition. -- Quentin Campos Thu, 16 Jun 2016 15:54:16 +0200 swh-storage (0.0.40-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.40: * Refactor objstorage to allow for different implementations * Updates to the checker functionality * Bump swh.core dependency to v0.0.20 -- Nicolas Dandrimont Tue, 14 Jun 2016 17:25:42 +0200 swh-storage (0.0.39-1~swh1) unstable-swh; urgency=medium * v0.0.39 * Add run_from_webserver function for objstorage api server * Add unique identifier message on default api server route endpoints -- Antoine R. 
Dumont (@ardumont) Fri, 20 May 2016 15:27:34 +0200 swh-storage (0.0.38-1~swh1) unstable-swh; urgency=medium * v0.0.38 * Add an http api for object storage * Implement an archiver to perform backup copies -- Quentin Campos Fri, 20 May 2016 14:40:14 +0200 swh-storage (0.0.37-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.37 * Add fullname to person table * Add svn as a revision type -- Nicolas Dandrimont Fri, 08 Apr 2016 16:44:24 +0200 swh-storage (0.0.36-1~swh1) unstable-swh; urgency=medium * Release swh.storage v0.0.36 * Add json-schema documentation for the jsonb fields * Overhaul entity handling -- Nicolas Dandrimont Wed, 16 Mar 2016 17:27:17 +0100 swh-storage (0.0.35-1~swh1) unstable-swh; urgency=medium * Release swh-storage v0.0.35 * Factor in temporary tables with only an id (db v059) * Allow generic object search by sha1_git (db v060) -- Nicolas Dandrimont Thu, 25 Feb 2016 16:21:01 +0100 swh-storage (0.0.34-1~swh1) unstable-swh; urgency=medium * Release swh.storage version 0.0.34 * occurrence improvements * commit metadata improvements -- Nicolas Dandrimont Fri, 19 Feb 2016 18:20:07 +0100 swh-storage (0.0.33-1~swh1) unstable-swh; urgency=medium * Bump swh.storage to version 0.0.33 -- Nicolas Dandrimont Fri, 05 Feb 2016 11:17:00 +0100 swh-storage (0.0.32-1~swh1) unstable-swh; urgency=medium * v0.0.32 * Let the person's id flow * sql/upgrades/051: 050->051 schema change * sql/upgrades/050: 049->050 schema change - Clean up obsolete functions * sql/upgrades/049: Final take for 048->049 schema change. * sql: Use a new schema for occurrences -- Antoine R. Dumont (@ardumont) Fri, 29 Jan 2016 17:44:27 +0100 swh-storage (0.0.31-1~swh1) unstable-swh; urgency=medium * v0.0.31 * Deal with occurrence_history.branch, occurrence.branch, release.name as bytes -- Antoine R. Dumont (@ardumont) Wed, 27 Jan 2016 15:45:53 +0100 swh-storage (0.0.30-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage v0.0.30 release * type-agnostic occurrences and revisions -- Nicolas Dandrimont Tue, 26 Jan 2016 07:36:43 +0100 swh-storage (0.0.29-1~swh1) unstable-swh; urgency=medium * v0.0.29 * New: * Upgrade sql schema to 041→043 * Deal with communication downtime between clients and storage * Open occurrence_get(origin_id) to retrieve latest occurrences per origin * Open release_get_by to retrieve a release by origin * Open directory_get to retrieve information on directory by id * Open entity_get to retrieve information on entity + hierarchy from its uuid * Open directory_get that retrieve information on directory per id * Update: * directory_get/directory_ls: Rename to directory_ls * revision_log: update to retrieve logs from multiple root revisions * revision_get_by: branch name filtering is now optional -- Antoine R. Dumont (@ardumont) Wed, 20 Jan 2016 16:15:50 +0100 swh-storage (0.0.28-1~swh1) unstable-swh; urgency=medium * v0.0.28 * Open entity_get api -- Antoine R. Dumont (@ardumont) Fri, 15 Jan 2016 16:37:27 +0100 swh-storage (0.0.27-1~swh1) unstable-swh; urgency=medium * v0.0.27 * Open directory_entry_get_by_path api * Improve get_revision_by api performance * sql/swh-schema: add index on origin(type, url) --> improve origin lookup api * Bump to 039 db version -- Antoine R. Dumont (@ardumont) Fri, 15 Jan 2016 12:42:47 +0100 swh-storage (0.0.26-1~swh1) unstable-swh; urgency=medium * v0.0.26 * Open revision_get_by to retrieve a revision by occurrence criterion filtering * sql/upgrades/036: add 035→036 upgrade script -- Antoine R. 
Dumont (@ardumont) Wed, 13 Jan 2016 12:46:44 +0100 swh-storage (0.0.25-1~swh1) unstable-swh; urgency=medium * v0.0.25 * Limit results in swh_revision_list* * Create the package to align the current db production version on https://archive.softwareheritage.org/ -- Antoine R. Dumont (@ardumont) Fri, 08 Jan 2016 11:33:08 +0100 swh-storage (0.0.24-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage release v0.0.24 * Add a limit argument to revision_log -- Nicolas Dandrimont Wed, 06 Jan 2016 15:12:53 +0100 swh-storage (0.0.23-1~swh1) unstable-swh; urgency=medium * v0.0.23 * Protect against overflow, wrapped in ValueError for client * Fix relative path import for remote storage. * api to retrieve revision_log is now 'parents' aware -- Antoine R. Dumont (@ardumont) Wed, 06 Jan 2016 11:30:58 +0100 swh-storage (0.0.22-1~swh1) unstable-swh; urgency=medium * Release v0.0.22 * Fix relative import for remote storage -- Nicolas Dandrimont Wed, 16 Dec 2015 16:04:48 +0100 swh-storage (0.0.21-1~swh1) unstable-swh; urgency=medium * Prepare release v0.0.21 * Protect the storage api client from overflows * Add a get_storage function mapping to local or remote storage -- Nicolas Dandrimont Wed, 16 Dec 2015 13:34:46 +0100 swh-storage (0.0.20-1~swh1) unstable-swh; urgency=medium * v0.0.20 * allow numeric timestamps with offset * Open revision_log api * start migration to swh.model -- Antoine R. Dumont (@ardumont) Mon, 07 Dec 2015 15:20:36 +0100 swh-storage (0.0.19-1~swh1) unstable-swh; urgency=medium * v0.0.19 * Improve directory listing with content data * Open person_get * Open release_get data reading * Improve origin_get api * Effort to unify api output on dict (for read) * Migrate backend to 032 -- Antoine R. Dumont (@ardumont) Fri, 27 Nov 2015 13:33:34 +0100 swh-storage (0.0.18-1~swh1) unstable-swh; urgency=medium * v0.0.18 * Improve origin_get to permit retrieval per id * Update directory_get implementation (add join from * directory_entry_file to content) * Open release_get : [sha1] -> [Release] -- Antoine R. Dumont (@ardumont) Thu, 19 Nov 2015 11:18:35 +0100 swh-storage (0.0.17-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.17 * Add some entity related entry points -- Nicolas Dandrimont Tue, 03 Nov 2015 16:40:59 +0100 swh-storage (0.0.16-1~swh1) unstable-swh; urgency=medium * v0.0.16 * Add metadata column in revision (db version 29) * cache http connection for remote storage client -- Antoine R. 
Dumont (@ardumont) Thu, 29 Oct 2015 10:29:00 +0100 swh-storage (0.0.15-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.15 * Allow population of fetch_history * Update organizations / projects as entities * Use schema v028 for directory addition -- Nicolas Dandrimont Tue, 27 Oct 2015 11:43:39 +0100 swh-storage (0.0.14-1~swh1) unstable-swh; urgency=medium * Prepare swh.storage v0.0.14 deployment -- Nicolas Dandrimont Fri, 16 Oct 2015 15:34:08 +0200 swh-storage (0.0.13-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.13 -- Nicolas Dandrimont Fri, 16 Oct 2015 14:51:44 +0200 swh-storage (0.0.12-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.12 -- Nicolas Dandrimont Tue, 13 Oct 2015 12:39:18 +0200 swh-storage (0.0.11-1~swh1) unstable-swh; urgency=medium * Preparing deployment of swh.storage v0.0.11 -- Nicolas Dandrimont Fri, 09 Oct 2015 17:44:51 +0200 swh-storage (0.0.10-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.10 -- Nicolas Dandrimont Tue, 06 Oct 2015 17:37:00 +0200 swh-storage (0.0.9-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.9 -- Nicolas Dandrimont Thu, 01 Oct 2015 19:03:00 +0200 swh-storage (0.0.8-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.8 -- Nicolas Dandrimont Thu, 01 Oct 2015 11:32:46 +0200 swh-storage (0.0.7-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.7 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:52:54 +0200 swh-storage (0.0.6-1~swh1) unstable-swh; urgency=medium * Prepare deployment of swh.storage v0.0.6 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:43:24 +0200 swh-storage (0.0.5-1~swh1) unstable-swh; urgency=medium * Prepare deploying swh.storage v0.0.5 -- Nicolas Dandrimont Tue, 29 Sep 2015 16:27:00 +0200 swh-storage (0.0.1-1~swh1) unstable-swh; urgency=medium * Initial release * swh.storage.api: Properly escape arbitrary byte sequences in arguments -- Nicolas Dandrimont Tue, 22 Sep 2015 17:02:34 +0200 diff --git a/sql/upgrades/154.sql b/sql/upgrades/154.sql new file mode 100644 index 00000000..54815d5b --- /dev/null +++ b/sql/upgrades/154.sql @@ -0,0 +1,35 @@ +-- SWH DB schema upgrade +-- from_version: 153 +-- to_version: 154 +-- description: make sure swh_snapshot_get_by_id doesn't degenerate into a very large index scan + +insert into dbversion(version, release, description) + values(154, now(), 'Work In Progress'); + +create or replace function swh_snapshot_get_by_id(id sha1_git, + branches_from bytea default '', branches_count bigint default null, + target_types snapshot_target[] default NULL) + returns setof snapshot_result + language sql + stable +as $$ + -- with small limits, the "naive" version of this query can degenerate into + -- using the deduplication index on snapshot_branch (name, target, + -- target_type); The planner happily scans several hundred million rows. + + -- Do the query in two steps: first pull the relevant branches for the given + -- snapshot (filtering them by type), then do the limiting. This two-step + -- process guides the planner into using the proper index. 
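-- [Illustrative note, not from the original migration: a quick way to check the
-- intended behaviour against a populated development database is to call the
-- function with a small branch limit and inspect the plan, e.g.
--
--   explain analyze
--   select *
--   from swh_snapshot_get_by_id(
--     '\x0000000000000000000000000000000000000000'::sha1_git,  -- placeholder id
--     branches_count => 5);
--
-- The snapshot id above is a placeholder. With the two-step rewrite below, the
-- plan is expected to start from the snapshot_branches join for the requested
-- snapshot instead of scanning the (name, target, target_type) deduplication
-- index on snapshot_branch.]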
+ with filtered_snapshot_branches as ( + select swh_snapshot_get_by_id.id as snapshot_id, name, target, target_type + from snapshot_branches + inner join snapshot_branch on snapshot_branches.branch_id = snapshot_branch.object_id + where snapshot_id = (select object_id from snapshot where snapshot.id = swh_snapshot_get_by_id.id) + and (target_types is null or target_type = any(target_types)) + order by name + ) + select snapshot_id, name, target, target_type + from filtered_snapshot_branches + where name >= branches_from + order by name limit branches_count; +$$; diff --git a/swh.storage.egg-info/PKG-INFO b/swh.storage.egg-info/PKG-INFO index 65f9b8e5..389d1fc8 100644 --- a/swh.storage.egg-info/PKG-INFO +++ b/swh.storage.egg-info/PKG-INFO @@ -1,216 +1,216 @@ Metadata-Version: 2.1 Name: swh.storage -Version: 0.6.0 +Version: 0.7.0 Summary: Software Heritage storage manager Home-page: https://forge.softwareheritage.org/diffusion/DSTO/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-storage/ Description: swh-storage =========== Abstraction layer over the archive, allowing to access all stored source code artifacts as well as their metadata. See the [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html) for more details. ## Quick start ### Dependencies Python tests for this module include tests that cannot be run without a local Postgresql database, so you need the Postgresql server executable on your machine (no need to have a running Postgresql server). They also expect a cassandra server. #### Debian-like host ``` $ sudo apt install libpq-dev postgresql-11 cassandra ``` #### Non Debian-like host The tests expects `/usr/sbin/cassandra` to exist. Optionally, you can avoid running the cassandra tests. ``` (swh) :~/swh-storage$ tox -- -m 'not cassandra' ``` ### Installation It is strongly recommended to use a virtualenv. In the following, we consider you work in a virtualenv named `swh`. See the [developer setup guide](https://docs.softwareheritage.org/devel/developer-setup.html#developer-setup) for a more details on how to setup a working environment. You can install the package directly from [pypi](https://pypi.org/p/swh.storage): ``` (swh) :~$ pip install swh.storage [...] ``` Or from sources: ``` (swh) :~$ git clone https://forge.softwareheritage.org/source/swh-storage.git [...] (swh) :~$ cd swh-storage (swh) :~/swh-storage$ pip install . [...] ``` Then you can check it's properly installed: ``` (swh) :~$ swh storage --help Usage: swh storage [OPTIONS] COMMAND [ARGS]... Software Heritage Storage tools. Options: -h, --help Show this message and exit. Commands: rpc-serve Software Heritage Storage RPC server. ``` ## Tests The best way of running Python tests for this module is to use [tox](https://tox.readthedocs.io/). ``` (swh) :~$ pip install tox ``` ### tox From the sources directory, simply use tox: ``` (swh) :~/swh-storage$ tox [...] ========= 315 passed, 6 skipped, 15 warnings in 40.86 seconds ========== _______________________________ summary ________________________________ flake8: commands succeeded py3: commands succeeded congratulations :) ``` ## Development The storage server can be locally started. 
It requires a configuration file and a running Postgresql database. ### Sample configuration A typical configuration `storage.yml` file is: ``` storage: cls: local args: db: "dbname=softwareheritage-dev user= password=" objstorage: cls: pathslicing args: root: /tmp/swh-storage/ slicing: 0:2/2:4/4:6 ``` which means, this uses: - a local storage instance whose db connection is to `softwareheritage-dev` local instance, - the objstorage uses a local objstorage instance whose: - `root` path is /tmp/swh-storage, - slicing scheme is `0:2/2:4/4:6`. This means that the identifier of the content (sha1) which will be stored on disk at first level with the first 2 hex characters, the second level with the next 2 hex characters and the third level with the next 2 hex characters. And finally the complete hash file holding the raw content. For example: 00062f8bd330715c4f819373653d97b3cd34394c will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c Note that the `root` path should exist on disk before starting the server. ### Starting the storage server If the python package has been properly installed (e.g. in a virtual env), you should be able to use the command: ``` (swh) :~/swh-storage$ swh storage rpc-serve storage.yml ``` This runs a local swh-storage api at 5002 port. ``` (swh) :~/swh-storage$ curl http://127.0.0.1:5002 Software Heritage storage server

You have reached the Software Heritage storage server.
See its documentation and API for more information

``` ### And then what? In your upper layer ([loader-git](https://forge.softwareheritage.org/source/swh-loader-git/), [loader-svn](https://forge.softwareheritage.org/source/swh-loader-svn/), etc...), you can define a remote storage with this snippet of yaml configuration. ``` storage: cls: remote args: url: http://localhost:5002/ ``` You could directly define a local storage with the following snippet: ``` storage: cls: local args: db: service=swh-dev objstorage: cls: pathslicing args: root: /home/storage/swh-storage/ slicing: 0:2/2:4/4:6 ``` Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing Provides-Extra: schemata Provides-Extra: journal diff --git a/swh.storage.egg-info/SOURCES.txt b/swh.storage.egg-info/SOURCES.txt index 65c3881f..2f837984 100644 --- a/swh.storage.egg-info/SOURCES.txt +++ b/swh.storage.egg-info/SOURCES.txt @@ -1,259 +1,260 @@ MANIFEST.in Makefile Makefile.local README.md pyproject.toml pytest.ini setup.cfg setup.py tox.ini version.txt ./requirements-swh-journal.txt ./requirements-swh.txt ./requirements-test.txt ./requirements.txt bin/swh-storage-add-dir sql/.gitignore sql/Makefile sql/TODO sql/clusters.dot sql/bin/db-upgrade sql/bin/dot_add_content sql/doc/json/.gitignore sql/doc/json/Makefile sql/doc/json/entity.lister_metadata.schema.json sql/doc/json/entity.metadata.schema.json sql/doc/json/entity_history.lister_metadata.schema.json sql/doc/json/entity_history.metadata.schema.json sql/doc/json/fetch_history.result.schema.json sql/doc/json/list_history.result.schema.json sql/doc/json/listable_entity.list_params.schema.json sql/doc/json/origin_visit.metadata.json sql/doc/json/tool.tool_configuration.schema.json sql/json/.gitignore sql/json/Makefile sql/json/entity.lister_metadata.schema.json sql/json/entity.metadata.schema.json sql/json/entity_history.lister_metadata.schema.json sql/json/entity_history.metadata.schema.json sql/json/fetch_history.result.schema.json sql/json/list_history.result.schema.json sql/json/listable_entity.list_params.schema.json sql/json/origin_visit.metadata.json sql/json/tool.tool_configuration.schema.json sql/upgrades/015.sql sql/upgrades/016.sql sql/upgrades/017.sql sql/upgrades/018.sql sql/upgrades/019.sql sql/upgrades/020.sql sql/upgrades/021.sql sql/upgrades/022.sql sql/upgrades/023.sql sql/upgrades/024.sql sql/upgrades/025.sql sql/upgrades/026.sql sql/upgrades/027.sql sql/upgrades/028.sql sql/upgrades/029.sql sql/upgrades/030.sql sql/upgrades/032.sql sql/upgrades/033.sql sql/upgrades/034.sql sql/upgrades/035.sql sql/upgrades/036.sql sql/upgrades/037.sql sql/upgrades/038.sql sql/upgrades/039.sql sql/upgrades/040.sql sql/upgrades/041.sql sql/upgrades/042.sql sql/upgrades/043.sql sql/upgrades/044.sql sql/upgrades/045.sql sql/upgrades/046.sql sql/upgrades/047.sql sql/upgrades/048.sql sql/upgrades/049.sql sql/upgrades/050.sql sql/upgrades/051.sql sql/upgrades/052.sql sql/upgrades/053.sql sql/upgrades/054.sql sql/upgrades/055.sql sql/upgrades/056.sql sql/upgrades/057.sql sql/upgrades/058.sql sql/upgrades/059.sql sql/upgrades/060.sql sql/upgrades/061.sql sql/upgrades/062.sql sql/upgrades/063.sql sql/upgrades/064.sql sql/upgrades/065.sql sql/upgrades/066.sql sql/upgrades/067.sql sql/upgrades/068.sql sql/upgrades/069.sql 
sql/upgrades/070.sql sql/upgrades/071.sql sql/upgrades/072.sql sql/upgrades/073.sql sql/upgrades/074.sql sql/upgrades/075.sql sql/upgrades/076.sql sql/upgrades/077.sql sql/upgrades/078.sql sql/upgrades/079.sql sql/upgrades/080.sql sql/upgrades/081.sql sql/upgrades/082.sql sql/upgrades/083.sql sql/upgrades/084.sql sql/upgrades/085.sql sql/upgrades/086.sql sql/upgrades/087.sql sql/upgrades/088.sql sql/upgrades/089.sql sql/upgrades/090.sql sql/upgrades/091.sql sql/upgrades/092.sql sql/upgrades/093.sql sql/upgrades/094.sql sql/upgrades/095.sql sql/upgrades/096.sql sql/upgrades/097.sql sql/upgrades/098.sql sql/upgrades/099.sql sql/upgrades/100.sql sql/upgrades/101.sql sql/upgrades/102.sql sql/upgrades/103.sql sql/upgrades/104.sql sql/upgrades/105.sql sql/upgrades/106.sql sql/upgrades/107.sql sql/upgrades/108.sql sql/upgrades/109.sql sql/upgrades/110.sql sql/upgrades/111.sql sql/upgrades/112.sql sql/upgrades/113.sql sql/upgrades/114.sql sql/upgrades/115.sql sql/upgrades/116.sql sql/upgrades/117.sql sql/upgrades/118.sql sql/upgrades/119.sql sql/upgrades/120.sql sql/upgrades/121.sql sql/upgrades/122.sql sql/upgrades/123.sql sql/upgrades/124.sql sql/upgrades/125.sql sql/upgrades/126.sql sql/upgrades/127.sql sql/upgrades/128.sql sql/upgrades/129.sql sql/upgrades/130.sql sql/upgrades/131.sql sql/upgrades/132.sql sql/upgrades/133.sql sql/upgrades/134.sql sql/upgrades/135.sql sql/upgrades/136.sql sql/upgrades/137.sql sql/upgrades/138.sql sql/upgrades/139.sql sql/upgrades/140.sql sql/upgrades/141.sql sql/upgrades/142.sql sql/upgrades/143.sql sql/upgrades/144.sql sql/upgrades/145.sql sql/upgrades/146.sql sql/upgrades/147.sql sql/upgrades/148.sql sql/upgrades/149.sql sql/upgrades/150.sql sql/upgrades/151.sql sql/upgrades/152.sql sql/upgrades/153.sql +sql/upgrades/154.sql swh/__init__.py swh.storage.egg-info/PKG-INFO swh.storage.egg-info/SOURCES.txt swh.storage.egg-info/dependency_links.txt swh.storage.egg-info/entry_points.txt swh.storage.egg-info/requires.txt swh.storage.egg-info/top_level.txt swh/storage/__init__.py swh/storage/backfill.py swh/storage/buffer.py swh/storage/cli.py swh/storage/common.py swh/storage/converters.py swh/storage/db.py swh/storage/exc.py swh/storage/filter.py swh/storage/fixer.py swh/storage/in_memory.py swh/storage/interface.py swh/storage/metrics.py swh/storage/objstorage.py swh/storage/py.typed swh/storage/replay.py swh/storage/retry.py swh/storage/storage.py swh/storage/utils.py swh/storage/validate.py swh/storage/writer.py swh/storage/algos/__init__.py swh/storage/algos/diff.py swh/storage/algos/dir_iterators.py swh/storage/algos/origin.py swh/storage/algos/revisions_walker.py swh/storage/algos/snapshot.py swh/storage/api/__init__.py swh/storage/api/client.py swh/storage/api/serializers.py swh/storage/api/server.py swh/storage/cassandra/__init__.py swh/storage/cassandra/common.py swh/storage/cassandra/converters.py swh/storage/cassandra/cql.py swh/storage/cassandra/schema.py swh/storage/cassandra/storage.py swh/storage/sql/10-swh-init.sql swh/storage/sql/20-swh-enums.sql swh/storage/sql/30-swh-schema.sql swh/storage/sql/40-swh-func.sql swh/storage/sql/60-swh-indexes.sql swh/storage/tests/__init__.py swh/storage/tests/conftest.py swh/storage/tests/generate_data_test.py swh/storage/tests/storage_data.py swh/storage/tests/test_api_client.py swh/storage/tests/test_backfill.py swh/storage/tests/test_buffer.py swh/storage/tests/test_cassandra.py swh/storage/tests/test_cassandra_converters.py swh/storage/tests/test_cli.py swh/storage/tests/test_converters.py 
swh/storage/tests/test_db.py swh/storage/tests/test_exception.py swh/storage/tests/test_filter.py swh/storage/tests/test_in_memory.py swh/storage/tests/test_init.py swh/storage/tests/test_kafka_writer.py swh/storage/tests/test_metrics.py swh/storage/tests/test_replay.py swh/storage/tests/test_retry.py swh/storage/tests/test_server.py swh/storage/tests/test_storage.py swh/storage/tests/test_utils.py swh/storage/tests/algos/__init__.py swh/storage/tests/algos/test_diff.py swh/storage/tests/algos/test_dir_iterator.py swh/storage/tests/algos/test_origin.py swh/storage/tests/algos/test_revisions_walker.py swh/storage/tests/algos/test_snapshot.py \ No newline at end of file diff --git a/swh/storage/algos/origin.py b/swh/storage/algos/origin.py index 5f7e51df..43050fc2 100644 --- a/swh/storage/algos/origin.py +++ b/swh/storage/algos/origin.py @@ -1,83 +1,93 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information -from typing import Optional, Iterable, Tuple +from typing import Any, Dict, Optional, Iterable, Tuple from swh.model.model import OriginVisit, OriginVisitStatus def iter_origins(storage, origin_from=1, origin_to=None, batch_size=10000): """Iterates over all origins in the storage. Args: storage: the storage object used for queries. batch_size: number of origins per query Yields: dict: the origin dictionary with the keys: - type: origin's type - url: origin's url """ start = origin_from while True: if origin_to: origin_count = min(origin_to - start, batch_size) else: origin_count = batch_size origins = list( storage.origin_get_range(origin_from=start, origin_count=origin_count) ) if not origins: break start = origins[-1]["id"] + 1 for origin in origins: del origin["id"] yield origin if origin_to and start > origin_to: break def origin_get_latest_visit_status( storage, origin_url: str, type: Optional[str] = None, allowed_statuses: Optional[Iterable[str]] = None, require_snapshot: bool = False, ) -> Optional[Tuple[OriginVisit, OriginVisitStatus]]: - """Get the latest origin visit and visit status information for a given origin, - optionally looking only for those with one of the given allowed_statuses or for - those with a snapshot. + """Get the latest origin visit (and status) of an origin. Optionally, a combination of + criteria can be provided, origin type, allowed statuses or if a visit has a + snapshot. - If nothing matches the criteria, this returns None. + If no visit matching the criteria is found, returns None. Otherwise, returns a tuple + of origin visit, origin visit status. Args: storage: A storage backend origin: origin URL type: Optional visit type to filter on (e.g git, tar, dsc, svn, hg, npm, pypi, ...) allowed_statuses: list of visit statuses considered to find the latest visit. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. require_snapshot: If True, only a visit with a snapshot will be returned. Returns: a tuple of (visit, visit_status) model object if the visit *and* the visit - status exist, None otherwise. + status exist (and match the search criteria), None otherwise. """ - visit_d = storage.origin_visit_get_latest(origin_url, type=type) - if not visit_d: + # visits order are from older visit to most recent. 
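# [Illustrative note, not from the original patch: origin_visit_get() yields the
#  origin's visits oldest-first (per the comment above), so the next two lines
#  materialise them and flip the order; the loop below can then stop at the first,
#  i.e. most recent, visit whose type and latest status match the criteria. Note
#  that this loads every visit of the origin into memory.]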
+ visits = list(storage.origin_visit_get(origin_url)) + visits.reverse() + if not visits: return None - visit = OriginVisit.from_dict(visit_d) - visit_status = storage.origin_visit_status_get_latest( - origin_url, - visit.visit, - allowed_statuses=allowed_statuses, - require_snapshot=require_snapshot, - ) - if not visit_status: + visit_status: Optional[OriginVisitStatus] = None + visit: Dict[str, Any] + # Iterate over the visits in reverse order, so the most recent match is found first + for visit in visits: + if type is not None and visit["type"] != type: + continue + visit_status = storage.origin_visit_status_get_latest( + origin_url, + visit["visit"], + allowed_statuses=allowed_statuses, + require_snapshot=require_snapshot, + ) + if visit_status is not None: + break + if visit_status is None: return None - return (visit, visit_status) + return (OriginVisit.from_dict(visit), visit_status) diff --git a/swh/storage/algos/snapshot.py b/swh/storage/algos/snapshot.py index 5c9aeef7..5e965f4e 100644 --- a/swh/storage/algos/snapshot.py +++ b/swh/storage/algos/snapshot.py @@ -1,94 +1,95 @@ # Copyright (C) 2018-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from typing import Iterable, Optional from swh.model.model import Snapshot from swh.storage.algos.origin import origin_get_latest_visit_status def snapshot_get_all_branches(storage, snapshot_id): """Get all the branches for a given snapshot Args: storage (swh.storage.storage.Storage): the storage instance snapshot_id (bytes): the snapshot's identifier Returns: dict: a dict with two keys: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. """ ret = storage.snapshot_get(snapshot_id) if not ret: return next_branch = ret.pop("next_branch", None) while next_branch: data = storage.snapshot_get_branches(snapshot_id, branches_from=next_branch) ret["branches"].update(data["branches"]) next_branch = data.get("next_branch") return ret def snapshot_get_latest( storage, origin: str, allowed_statuses: Optional[Iterable[str]] = None, branches_count: Optional[int] = None, ) -> Optional[Snapshot]: """Get the latest snapshot for the given origin, optionally only from visits that have one of the given allowed_statuses. The branches of the snapshot are iterated in the lexicographical order of their names. Args: storage: Storage instance origin: the origin's URL allowed_statuses: list of visit statuses considered to find the latest snapshot for the visit. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. branches_count: Optional parameter to retrieve snapshot with all branches (default behavior when None) or not. If set to positive number, the snapshot will be partial with only that number of branches. Raises: ValueError if branches_count is not a positive value Returns: The snapshot object if one is found matching the criteria or None. 
""" visit_and_status = origin_get_latest_visit_status( storage, origin, allowed_statuses=allowed_statuses, require_snapshot=True ) if not visit_and_status: return None _, visit_status = visit_and_status snapshot_id = visit_status.snapshot if not snapshot_id: return None if branches_count: # partial snapshot if not isinstance(branches_count, int) or branches_count <= 0: raise ValueError( "Parameter branches_count must be a positive integer. " f"Current value is {branches_count}" ) snapshot = storage.snapshot_get_branches( snapshot_id, branches_count=branches_count ) + if snapshot is None: + return None snapshot.pop("next_branch") else: snapshot = snapshot_get_all_branches(storage, snapshot_id) - - return Snapshot.from_dict(snapshot) + return Snapshot.from_dict(snapshot) if snapshot else None diff --git a/swh/storage/sql/30-swh-schema.sql b/swh/storage/sql/30-swh-schema.sql index fea0c961..56c94cd9 100644 --- a/swh/storage/sql/30-swh-schema.sql +++ b/swh/storage/sql/30-swh-schema.sql @@ -1,489 +1,489 @@ --- --- SQL implementation of the Software Heritage data model --- -- schema versions create table dbversion ( version int primary key, release timestamptz, description text ); comment on table dbversion is 'Details of current db version'; comment on column dbversion.version is 'SQL schema version'; comment on column dbversion.release is 'Version deployment timestamp'; comment on column dbversion.description is 'Release description'; -- latest schema version insert into dbversion(version, release, description) - values(152, now(), 'Work In Progress'); + values(154, now(), 'Work In Progress'); -- a SHA1 checksum create domain sha1 as bytea check (length(value) = 20); -- a Git object ID, i.e., a Git-style salted SHA1 checksum create domain sha1_git as bytea check (length(value) = 20); -- a SHA256 checksum create domain sha256 as bytea check (length(value) = 32); -- a blake2 checksum create domain blake2s256 as bytea check (length(value) = 32); -- UNIX path (absolute, relative, individual path component, etc.) create domain unix_path as bytea; -- a set of UNIX-like access permissions, as manipulated by, e.g., chmod create domain file_perms as int; -- Checksums about actual file content. Note that the content itself is not -- stored in the DB, but on external (key-value) storage. A single checksum is -- used as key there, but the other can be used to verify that we do not inject -- content collisions not knowingly. create table content ( sha1 sha1 not null, sha1_git sha1_git not null, sha256 sha256 not null, blake2s256 blake2s256 not null, length bigint not null, ctime timestamptz not null default now(), -- creation time, i.e. time of (first) injection into the storage status content_status not null default 'visible', object_id bigserial ); comment on table content is 'Checksums of file content which is actually stored externally'; comment on column content.sha1 is 'Content sha1 hash'; comment on column content.sha1_git is 'Git object sha1 hash'; comment on column content.sha256 is 'Content Sha256 hash'; comment on column content.blake2s256 is 'Content blake2s hash'; comment on column content.length is 'Content length'; comment on column content.ctime is 'First seen time'; comment on column content.status is 'Content status (absent, visible, hidden)'; comment on column content.object_id is 'Content identifier'; -- An origin is a place, identified by an URL, where software source code -- artifacts can be found. 
We support different kinds of origins, e.g., git and -- other VCS repositories, web pages that list tarballs URLs (e.g., -- http://www.kernel.org), indirect tarball URLs (e.g., -- http://www.example.org/latest.tar.gz), etc. The key feature of an origin is -- that it can be *fetched* from (wget, git clone, svn checkout, etc.) to -- retrieve all the contained software. create table origin ( id bigserial not null, url text not null ); comment on column origin.id is 'Artifact origin id'; comment on column origin.url is 'URL of origin'; -- Content blobs observed somewhere, but not ingested into the archive for -- whatever reason. This table is separate from the content table as we might -- not have the sha1 checksum of skipped contents (for instance when we inject -- git repositories, objects that are too big will be skipped here, and we will -- only know their sha1_git). 'reason' contains the reason the content was -- skipped. origin is a nullable column allowing to find out which origin -- contains that skipped content. create table skipped_content ( sha1 sha1, sha1_git sha1_git, sha256 sha256, blake2s256 blake2s256, length bigint not null, ctime timestamptz not null default now(), status content_status not null default 'absent', reason text not null, origin bigint, object_id bigserial ); comment on table skipped_content is 'Content blobs observed, but not ingested in the archive'; comment on column skipped_content.sha1 is 'Skipped content sha1 hash'; comment on column skipped_content.sha1_git is 'Git object sha1 hash'; comment on column skipped_content.sha256 is 'Skipped content sha256 hash'; comment on column skipped_content.blake2s256 is 'Skipped content blake2s hash'; comment on column skipped_content.length is 'Skipped content length'; comment on column skipped_content.ctime is 'First seen time'; comment on column skipped_content.status is 'Skipped content status (absent, visible, hidden)'; comment on column skipped_content.reason is 'Reason for skipping'; comment on column skipped_content.origin is 'Origin table identifier'; comment on column skipped_content.object_id is 'Skipped content identifier'; -- A file-system directory. A directory is a list of directory entries (see -- tables: directory_entry_{dir,file}). -- -- To list the contents of a directory: -- 1. list the contained directory_entry_dir using array dir_entries -- 2. list the contained directory_entry_file using array file_entries -- 3. list the contained directory_entry_rev using array rev_entries -- 4. UNION -- -- Synonyms/mappings: -- * git: tree create table directory ( id sha1_git not null, dir_entries bigint[], -- sub-directories, reference directory_entry_dir file_entries bigint[], -- contained files, reference directory_entry_file rev_entries bigint[], -- mounted revisions, reference directory_entry_rev object_id bigserial -- short object identifier ); comment on table directory is 'Contents of a directory, synonymous to tree (git)'; comment on column directory.id is 'Git object sha1 hash'; comment on column directory.dir_entries is 'Sub-directories, reference directory_entry_dir'; comment on column directory.file_entries is 'Contained files, reference directory_entry_file'; comment on column directory.rev_entries is 'Mounted revisions, reference directory_entry_rev'; comment on column directory.object_id is 'Short object identifier'; -- A directory entry pointing to a (sub-)directory. 
create table directory_entry_dir ( id bigserial, target sha1_git not null, -- id of target directory name unix_path not null, -- path name, relative to containing dir perms file_perms not null -- unix-like permissions ); comment on table directory_entry_dir is 'Directory entry for directory'; comment on column directory_entry_dir.id is 'Directory identifier'; comment on column directory_entry_dir.target is 'Target directory identifier'; comment on column directory_entry_dir.name is 'Path name, relative to containing directory'; comment on column directory_entry_dir.perms is 'Unix-like permissions'; -- A directory entry pointing to a file content. create table directory_entry_file ( id bigserial, target sha1_git not null, -- id of target file name unix_path not null, -- path name, relative to containing dir perms file_perms not null -- unix-like permissions ); comment on table directory_entry_file is 'Directory entry for file'; comment on column directory_entry_file.id is 'File identifier'; comment on column directory_entry_file.target is 'Target file identifier'; comment on column directory_entry_file.name is 'Path name, relative to containing directory'; comment on column directory_entry_file.perms is 'Unix-like permissions'; -- A directory entry pointing to a revision. create table directory_entry_rev ( id bigserial, target sha1_git not null, -- id of target revision name unix_path not null, -- path name, relative to containing dir perms file_perms not null -- unix-like permissions ); comment on table directory_entry_rev is 'Directory entry for revision'; comment on column directory_entry_dir.id is 'Revision identifier'; comment on column directory_entry_dir.target is 'Target revision in identifier'; comment on column directory_entry_dir.name is 'Path name, relative to containing directory'; comment on column directory_entry_dir.perms is 'Unix-like permissions'; -- A person referenced by some source code artifacts, e.g., a VCS revision or -- release metadata. create table person ( id bigserial, name bytea, -- advisory: not null if we managed to parse a name email bytea, -- advisory: not null if we managed to parse an email fullname bytea not null -- freeform specification; what is actually used in the checksums -- will usually be of the form 'name ' ); comment on table person is 'Person referenced in code artifact release metadata'; comment on column person.id is 'Person identifier'; comment on column person.name is 'Name'; comment on column person.email is 'Email'; comment on column person.fullname is 'Full name (raw name)'; -- The state of a source code tree at a specific point in time. -- -- Synonyms/mappings: -- * git / subversion / etc: commit -- * tarball: a specific tarball -- -- Revisions are organized as DAGs. Each revision points to 0, 1, or more (in -- case of merges) parent revisions. Each revision points to a directory, i.e., -- a file-system tree containing files and directories. create table revision ( id sha1_git not null, date timestamptz, date_offset smallint, committer_date timestamptz, committer_date_offset smallint, type revision_type not null, directory sha1_git, -- source code 'root' directory message bytea, author bigint, committer bigint, synthetic boolean not null default false, -- true iff revision has been created by Software Heritage metadata jsonb, -- extra metadata (tarball checksums, extra commit information, etc...) 
object_id bigserial, date_neg_utc_offset boolean, committer_date_neg_utc_offset boolean ); comment on table revision is 'A revision represents the state of a source code tree at a specific point in time'; comment on column revision.id is 'Git-style SHA1 commit identifier'; comment on column revision.date is 'Author timestamp as UNIX epoch'; comment on column revision.date_offset is 'Author timestamp timezone, as minute offsets from UTC'; comment on column revision.date_neg_utc_offset is 'True indicates a -0 UTC offset on author timestamp'; comment on column revision.committer_date is 'Committer timestamp as UNIX epoch'; comment on column revision.committer_date_offset is 'Committer timestamp timezone, as minute offsets from UTC'; comment on column revision.committer_date_neg_utc_offset is 'True indicates a -0 UTC offset on committer timestamp'; comment on column revision.type is 'Type of revision'; comment on column revision.directory is 'Directory identifier'; comment on column revision.message is 'Commit message'; comment on column revision.author is 'Author identity'; comment on column revision.committer is 'Committer identity'; comment on column revision.synthetic is 'True iff revision has been synthesized by Software Heritage'; comment on column revision.metadata is 'Extra revision metadata'; comment on column revision.object_id is 'Non-intrinsic, sequential object identifier'; -- either this table or the sha1_git[] column on the revision table create table revision_history ( id sha1_git not null, parent_id sha1_git not null, parent_rank int not null default 0 -- parent position in merge commits, 0-based ); comment on table revision_history is 'Sequence of revision history with parent and position in history'; comment on column revision_history.id is 'Revision history git object sha1 checksum'; comment on column revision_history.parent_id is 'Parent revision git object identifier'; comment on column revision_history.parent_rank is 'Parent position in merge commits, 0-based'; -- Crawling history of software origins visited by Software Heritage. Each -- visit is a 3-way mapping between a software origin, a timestamp, and a -- snapshot object capturing the full-state of the origin at visit time. create table origin_visit ( origin bigint not null, visit bigint not null, date timestamptz not null, type text not null, -- remove those when done migrating the schema status origin_visit_state not null, metadata jsonb, snapshot sha1_git ); comment on column origin_visit.origin is 'Visited origin'; comment on column origin_visit.visit is 'Sequential visit number for the origin'; comment on column origin_visit.date is 'Visit timestamp'; comment on column origin_visit.type is 'Type of loader that did the visit (hg, git, ...)'; comment on column origin_visit.status is '(Deprecated) Visit status'; comment on column origin_visit.metadata is '(Deprecated) Optional origin visit metadata'; comment on column origin_visit.snapshot is '(Deprecated) Optional snapshot of the origin visit. It can be partial.'; -- Crawling history of software origin visits by Software Heritage. 
Each -- visit see its history change through new origin visit status updates create table origin_visit_status ( origin bigint not null, visit bigint not null, date timestamptz not null, status origin_visit_state not null, metadata jsonb, snapshot sha1_git ); comment on column origin_visit_status.origin is 'Origin concerned by the visit update'; comment on column origin_visit_status.visit is 'Visit concerned by the visit update'; comment on column origin_visit_status.date is 'Visit update timestamp'; comment on column origin_visit_status.status is 'Visit status (ongoing, failed, full)'; comment on column origin_visit_status.metadata is 'Optional origin visit metadata'; comment on column origin_visit_status.snapshot is 'Optional, possibly partial, snapshot of the origin visit. It can be partial.'; -- A snapshot represents the entire state of a software origin as crawled by -- Software Heritage. This table is a simple mapping between (public) intrinsic -- snapshot identifiers and (private) numeric sequential identifiers. create table snapshot ( object_id bigserial not null, -- PK internal object identifier id sha1_git not null -- snapshot intrinsic identifier ); comment on table snapshot is 'State of a software origin as crawled by Software Heritage'; comment on column snapshot.object_id is 'Internal object identifier'; comment on column snapshot.id is 'Intrinsic snapshot identifier'; -- Each snapshot associate "branch" names to other objects in the Software -- Heritage Merkle DAG. This table describes branches as mappings between names -- and target typed objects. create table snapshot_branch ( object_id bigserial not null, -- PK internal object identifier name bytea not null, -- branch name, e.g., "master" or "feature/drag-n-drop" target bytea, -- target object identifier, e.g., a revision identifier target_type snapshot_target -- target object type, e.g., "revision" ); comment on table snapshot_branch is 'Associates branches with objects in Heritage Merkle DAG'; comment on column snapshot_branch.object_id is 'Internal object identifier'; comment on column snapshot_branch.name is 'Branch name'; comment on column snapshot_branch.target is 'Target object identifier'; comment on column snapshot_branch.target_type is 'Target object type'; -- Mapping between snapshots and their branches. create table snapshot_branches ( snapshot_id bigint not null, -- snapshot identifier, ref. snapshot.object_id branch_id bigint not null -- branch identifier, ref. snapshot_branch.object_id ); comment on table snapshot_branches is 'Mapping between snapshot and their branches'; comment on column snapshot_branches.snapshot_id is 'Snapshot identifier'; comment on column snapshot_branches.branch_id is 'Branch identifier'; -- A "memorable" point in time in the development history of a software -- project. 
-- -- Synonyms/mappings: -- * git: tag (of the annotated kind, otherwise they are just references) -- * tarball: the release version number create table release ( id sha1_git not null, target sha1_git, date timestamptz, date_offset smallint, name bytea, comment bytea, author bigint, synthetic boolean not null default false, -- true iff release has been created by Software Heritage object_id bigserial, target_type object_type not null, date_neg_utc_offset boolean ); comment on table release is 'Details of a software release, synonymous with a tag (git) or version number (tarball)'; comment on column release.id is 'Release git identifier'; comment on column release.target is 'Target git identifier'; comment on column release.date is 'Release timestamp'; comment on column release.date_offset is 'Timestamp offset from UTC'; comment on column release.name is 'Name'; comment on column release.comment is 'Comment'; comment on column release.author is 'Author'; comment on column release.synthetic is 'Indicates if created by Software Heritage'; comment on column release.object_id is 'Object identifier'; comment on column release.target_type is 'Object type (''content'', ''directory'', ''revision'', ''release'', ''snapshot'')'; comment on column release.date_neg_utc_offset is 'True indicates -0 UTC offset for release timestamp'; -- Tools create table metadata_fetcher ( id serial not null, name text not null, version text not null, metadata jsonb not null ); comment on table metadata_fetcher is 'Tools used to retrieve metadata'; comment on column metadata_fetcher.id is 'Internal identifier of the fetcher'; comment on column metadata_fetcher.name is 'Fetcher name'; comment on column metadata_fetcher.version is 'Fetcher version'; comment on column metadata_fetcher.metadata is 'Extra information about the fetcher'; create table metadata_authority ( id serial not null, type text not null, url text not null, metadata jsonb not null ); comment on table metadata_authority is 'Metadata authority information'; comment on column metadata_authority.id is 'Internal identifier of the authority'; comment on column metadata_authority.type is 'Type of authority (deposit/forge/registry)'; comment on column metadata_authority.url is 'Authority''s uri'; comment on column metadata_authority.metadata is 'Other metadata about authority'; -- Discovery of metadata during a listing, loading, deposit or external_catalog of an origin -- also provides a translation to a defined json schema using a translation tool (tool_id) create table origin_metadata ( id bigserial not null, -- PK internal object identifier origin_id bigint not null, -- references origin(id) discovery_date timestamptz not null, -- when it was extracted authority_id bigint not null, fetcher_id bigint not null, format text not null default 'sword-v2-atom-codemeta-v2-in-json', metadata bytea not null ); comment on table origin_metadata is 'keeps all metadata found concerning an origin'; comment on column origin_metadata.id is 'the origin_metadata object''s id'; comment on column origin_metadata.origin_id is 'the origin id for which the metadata was found'; comment on column origin_metadata.discovery_date is 'the date of retrieval'; comment on column origin_metadata.authority_id is 'the metadata provider: github, openhub, deposit, etc.'; comment on column origin_metadata.fetcher_id is 'the tool used for extracting metadata: loaders, crawlers, etc.'; comment on column origin_metadata.format is 'name of the format of metadata, used by readers to interpret it.'; 
comment on column origin_metadata.metadata is 'original metadata in opaque format'; -- Keep a cache of object counts create table object_counts ( object_type text, -- table for which we're counting objects (PK) value bigint, -- count of objects in the table last_update timestamptz, -- last update for the object count in this table single_update boolean -- whether we update this table standalone (true) or through bucketed counts (false) ); comment on table object_counts is 'Cache of object counts'; comment on column object_counts.object_type is 'Object type (''content'', ''directory'', ''revision'', ''release'', ''snapshot'')'; comment on column object_counts.value is 'Count of objects in the table'; comment on column object_counts.last_update is 'Last update for object count'; comment on column object_counts.single_update is 'standalone (true) or bucketed counts (false)'; create table object_counts_bucketed ( line serial not null, -- PK object_type text not null, -- table for which we're counting objects identifier text not null, -- identifier across which we're bucketing objects bucket_start bytea, -- lower bound (inclusive) for the bucket bucket_end bytea, -- upper bound (exclusive) for the bucket value bigint, -- count of objects in the bucket last_update timestamptz -- last update for the object count in this bucket ); comment on table object_counts_bucketed is 'Bucketed count for objects ordered by type'; comment on column object_counts_bucketed.line is 'Auto incremented idenitfier value'; comment on column object_counts_bucketed.object_type is 'Object type (''content'', ''directory'', ''revision'', ''release'', ''snapshot'')'; comment on column object_counts_bucketed.identifier is 'Common identifier for bucketed objects'; comment on column object_counts_bucketed.bucket_start is 'Lower bound (inclusive) for the bucket'; comment on column object_counts_bucketed.bucket_end is 'Upper bound (exclusive) for the bucket'; comment on column object_counts_bucketed.value is 'Count of objects in the bucket'; comment on column object_counts_bucketed.last_update is 'Last update for the object count in this bucket'; diff --git a/swh/storage/sql/40-swh-func.sql b/swh/storage/sql/40-swh-func.sql index b4e4997a..40c64e9f 100644 --- a/swh/storage/sql/40-swh-func.sql +++ b/swh/storage/sql/40-swh-func.sql @@ -1,937 +1,948 @@ create or replace function hash_sha1(text) returns text as $$ select encode(digest($1, 'sha1'), 'hex') $$ language sql strict immutable; comment on function hash_sha1(text) is 'Compute SHA1 hash as text'; -- create a temporary table called tmp_TBLNAME, mimicking existing table -- TBLNAME -- -- Args: -- tblname: name of the table to mimic create or replace function swh_mktemp(tblname regclass) returns void language plpgsql as $$ begin execute format(' create temporary table if not exists tmp_%1$I (like %1$I including defaults) on commit delete rows; alter table tmp_%1$I drop column if exists object_id; ', tblname); return; end $$; -- create a temporary table for directory entries called tmp_TBLNAME, -- mimicking existing table TBLNAME with an extra dir_id (sha1_git) -- column, and dropping the id column. -- -- This is used to create the tmp_directory_entry_ tables. 
-- -- Args: -- tblname: name of the table to mimic create or replace function swh_mktemp_dir_entry(tblname regclass) returns void language plpgsql as $$ begin execute format(' create temporary table if not exists tmp_%1$I (like %1$I including defaults, dir_id sha1_git) on commit delete rows; alter table tmp_%1$I drop column if exists id; ', tblname); return; end $$; -- create a temporary table for revisions called tmp_revisions, -- mimicking existing table revision, replacing the foreign keys to -- people with an email and name field -- create or replace function swh_mktemp_revision() returns void language sql as $$ create temporary table if not exists tmp_revision ( like revision including defaults, author_fullname bytea, author_name bytea, author_email bytea, committer_fullname bytea, committer_name bytea, committer_email bytea ) on commit delete rows; alter table tmp_revision drop column if exists author; alter table tmp_revision drop column if exists committer; alter table tmp_revision drop column if exists object_id; $$; -- create a temporary table for releases called tmp_release, -- mimicking existing table release, replacing the foreign keys to -- people with an email and name field -- create or replace function swh_mktemp_release() returns void language sql as $$ create temporary table if not exists tmp_release ( like release including defaults, author_fullname bytea, author_name bytea, author_email bytea ) on commit delete rows; alter table tmp_release drop column if exists author; alter table tmp_release drop column if exists object_id; $$; -- create a temporary table for the branches of a snapshot create or replace function swh_mktemp_snapshot_branch() returns void language sql as $$ create temporary table if not exists tmp_snapshot_branch ( name bytea not null, target bytea, target_type snapshot_target ) on commit delete rows; $$; -- a content signature is a set of cryptographic checksums that we use to -- uniquely identify content, for the purpose of verifying if we already have -- some content or not during content injection create type content_signature as ( sha1 sha1, sha1_git sha1_git, sha256 sha256, blake2s256 blake2s256 ); -- check which entries of tmp_skipped_content are missing from skipped_content -- -- operates in bulk: 0. swh_mktemp(skipped_content), 1. COPY to tmp_skipped_content, -- 2. call this function create or replace function swh_skipped_content_missing() returns setof content_signature language plpgsql as $$ begin return query select sha1, sha1_git, sha256, blake2s256 from tmp_skipped_content t where not exists (select 1 from skipped_content s where s.sha1 is not distinct from t.sha1 and s.sha1_git is not distinct from t.sha1_git and s.sha256 is not distinct from t.sha256); return; end $$; -- add tmp_content entries to content, skipping duplicates -- -- operates in bulk: 0. swh_mktemp(content), 1. COPY to tmp_content, -- 2. call this function create or replace function swh_content_add() returns void language plpgsql as $$ begin insert into content (sha1, sha1_git, sha256, blake2s256, length, status, ctime) select distinct sha1, sha1_git, sha256, blake2s256, length, status, ctime from tmp_content; return; end $$; -- add tmp_skipped_content entries to skipped_content, skipping duplicates -- -- operates in bulk: 0. swh_mktemp(skipped_content), 1. COPY to tmp_skipped_content, -- 2. 
call this function create or replace function swh_skipped_content_add() returns void language plpgsql as $$ begin insert into skipped_content (sha1, sha1_git, sha256, blake2s256, length, status, reason, origin) select distinct sha1, sha1_git, sha256, blake2s256, length, status, reason, origin from tmp_skipped_content where (coalesce(sha1, ''), coalesce(sha1_git, ''), coalesce(sha256, '')) in ( select coalesce(sha1, ''), coalesce(sha1_git, ''), coalesce(sha256, '') from swh_skipped_content_missing() ); -- TODO XXX use postgres 9.5 "UPSERT" support here, when available. -- Specifically, using "INSERT .. ON CONFLICT IGNORE" we can avoid -- the extra swh_skipped_content_missing() query here. return; end $$; -- Update content entries from temporary table. -- (columns are potential new columns added to the schema, this cannot be empty) -- create or replace function swh_content_update(columns_update text[]) returns void language plpgsql as $$ declare query text; tmp_array text[]; begin if array_length(columns_update, 1) = 0 then raise exception 'Please, provide the list of column names to update.'; end if; tmp_array := array(select format('%1$s=t.%1$s', unnest) from unnest(columns_update)); query = format('update content set %s from tmp_content t where t.sha1 = content.sha1', array_to_string(tmp_array, ', ')); execute query; return; end $$; comment on function swh_content_update(text[]) IS 'Update existing content''s columns'; create type directory_entry_type as enum('file', 'dir', 'rev'); -- Add tmp_directory_entry_* entries to directory_entry_* and directory, -- skipping duplicates in directory_entry_*. This is a generic function that -- works on all kind of directory entries. -- -- operates in bulk: 0. swh_mktemp_dir_entry('directory_entry_*'), 1 COPY to -- tmp_directory_entry_*, 2. call this function -- -- Assumption: this function is used in the same transaction that inserts the -- context directory in table "directory". create or replace function swh_directory_entry_add(typ directory_entry_type) returns void language plpgsql as $$ begin execute format(' insert into directory_entry_%1$s (target, name, perms) select distinct t.target, t.name, t.perms from tmp_directory_entry_%1$s t where not exists ( select 1 from directory_entry_%1$s i where t.target = i.target and t.name = i.name and t.perms = i.perms) ', typ); execute format(' with new_entries as ( select t.dir_id, array_agg(i.id) as entries from tmp_directory_entry_%1$s t inner join directory_entry_%1$s i using (target, name, perms) group by t.dir_id ) update tmp_directory as d set %1$s_entries = new_entries.entries from new_entries where d.id = new_entries.dir_id ', typ); return; end $$; -- Insert the data from tmp_directory, tmp_directory_entry_file, -- tmp_directory_entry_dir, tmp_directory_entry_rev into their final -- tables. -- -- Prerequisites: -- directory ids in tmp_directory -- entries in tmp_directory_entry_{file,dir,rev} -- create or replace function swh_directory_add() returns void language plpgsql as $$ begin perform swh_directory_entry_add('file'); perform swh_directory_entry_add('dir'); perform swh_directory_entry_add('rev'); insert into directory select * from tmp_directory t where not exists ( select 1 from directory d where d.id = t.id); return; end $$; -- a directory listing entry with all the metadata -- -- can be used to list a directory, and retrieve all the data in one go. 
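-- For example, directory_entry rows can be consumed directly from SQL (an
-- illustrative sketch only; swh_directory_walk_one is defined below and
-- returns setof directory_entry, and the all-zero hash stands for any known
-- directory id):
--
--   select type, name, target, perms, status, length
--   from swh_directory_walk_one('\x0000000000000000000000000000000000000000'::sha1_git);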
create type directory_entry as ( dir_id sha1_git, -- id of the parent directory type directory_entry_type, -- type of entry target sha1_git, -- id of target name unix_path, -- path name, relative to containing dir perms file_perms, -- unix-like permissions status content_status, -- visible or absent sha1 sha1, -- content's sha1 if type is not dir sha1_git sha1_git, -- content's sha1 git if type is not dir sha256 sha256, -- content's sha256 if type is not dir length bigint -- content length if type is not dir ); -- List a single level of directory walked_dir_id -- FIXME: order by name is not correct. For git, we need to order by -- lexicographic order but as if a trailing / is present in directory -- name create or replace function swh_directory_walk_one(walked_dir_id sha1_git) returns setof directory_entry language sql stable as $$ with dir as ( select id as dir_id, dir_entries, file_entries, rev_entries from directory where id = walked_dir_id), ls_d as (select dir_id, unnest(dir_entries) as entry_id from dir), ls_f as (select dir_id, unnest(file_entries) as entry_id from dir), ls_r as (select dir_id, unnest(rev_entries) as entry_id from dir) (select dir_id, 'dir'::directory_entry_type as type, e.target, e.name, e.perms, NULL::content_status, NULL::sha1, NULL::sha1_git, NULL::sha256, NULL::bigint from ls_d left join directory_entry_dir e on ls_d.entry_id = e.id) union (select dir_id, 'file'::directory_entry_type as type, e.target, e.name, e.perms, c.status, c.sha1, c.sha1_git, c.sha256, c.length from ls_f left join directory_entry_file e on ls_f.entry_id = e.id left join content c on e.target = c.sha1_git) union (select dir_id, 'rev'::directory_entry_type as type, e.target, e.name, e.perms, NULL::content_status, NULL::sha1, NULL::sha1_git, NULL::sha256, NULL::bigint from ls_r left join directory_entry_rev e on ls_r.entry_id = e.id) order by name; $$; -- Recursively list the directory tree rooted at walked_dir_id create or replace function swh_directory_walk(walked_dir_id sha1_git) returns setof directory_entry language sql stable as $$ with recursive entries as ( select dir_id, type, target, name, perms, status, sha1, sha1_git, sha256, length from swh_directory_walk_one(walked_dir_id) union all select dir_id, type, target, (dirname || '/' || name)::unix_path as name, perms, status, sha1, sha1_git, sha256, length from (select (swh_directory_walk_one(dirs.target)).*, dirs.name as dirname from (select target, name from entries where type = 'dir') as dirs) as with_parent ) select dir_id, type, target, name, perms, status, sha1, sha1_git, sha256, length from entries $$; -- Find a directory entry by its path create or replace function swh_find_directory_entry_by_path( walked_dir_id sha1_git, dir_or_content_path bytea[]) returns directory_entry language plpgsql as $$ declare end_index integer; paths bytea default ''; path bytea; res bytea[]; r record; begin end_index := array_upper(dir_or_content_path, 1); res[1] := walked_dir_id; for i in 1..end_index loop path := dir_or_content_path[i]; -- concatenate path for patching the name in the result record (if we found it) if i = 1 then paths = path; else paths := paths || '/' || path; -- concatenate paths end if; if i <> end_index then select * from swh_directory_walk_one(res[i] :: sha1_git) where name=path and type = 'dir' limit 1 into r; else select * from swh_directory_walk_one(res[i] :: sha1_git) where name=path limit 1 into r; end if; -- find the path if r is null then return null; else -- store the next dir to lookup the next local path from res[i+1]
:= r.target; end if; end loop; -- at this moment, r is the result. Patch its 'name' with the full path before returning it. r.name := paths; return r; end $$; -- List all revision IDs starting from a given revision, going back in time -- -- TODO ordering: should be breadth-first right now (what do we want?) -- TODO ordering: ORDER BY parent_rank somewhere? create or replace function swh_revision_list(root_revisions bytea[], num_revs bigint default NULL) returns table (id sha1_git, parents bytea[]) language sql stable as $$ with recursive full_rev_list(id) as ( (select id from revision where id = ANY(root_revisions)) union (select h.parent_id from revision_history as h join full_rev_list on h.id = full_rev_list.id) ), rev_list as (select id from full_rev_list limit num_revs) select rev_list.id as id, array(select rh.parent_id::bytea from revision_history rh where rh.id = rev_list.id order by rh.parent_rank ) as parent from rev_list; $$; -- Detailed entry for a revision create type revision_entry as ( id sha1_git, date timestamptz, date_offset smallint, date_neg_utc_offset boolean, committer_date timestamptz, committer_date_offset smallint, committer_date_neg_utc_offset boolean, type revision_type, directory sha1_git, message bytea, author_id bigint, author_fullname bytea, author_name bytea, author_email bytea, committer_id bigint, committer_fullname bytea, committer_name bytea, committer_email bytea, metadata jsonb, synthetic boolean, parents bytea[], object_id bigint ); -- "git style" revision log. Similar to swh_revision_list(), but returning all -- information associated to each revision, and expanding authors/committers create or replace function swh_revision_log(root_revisions bytea[], num_revs bigint default NULL) returns setof revision_entry language sql stable as $$ select t.id, r.date, r.date_offset, r.date_neg_utc_offset, r.committer_date, r.committer_date_offset, r.committer_date_neg_utc_offset, r.type, r.directory, r.message, a.id, a.fullname, a.name, a.email, c.id, c.fullname, c.name, c.email, r.metadata, r.synthetic, t.parents, r.object_id from swh_revision_list(root_revisions, num_revs) as t left join revision r on t.id = r.id left join person a on a.id = r.author left join person c on c.id = r.committer; $$; -- Detailed entry for a release create type release_entry as ( id sha1_git, target sha1_git, target_type object_type, date timestamptz, date_offset smallint, date_neg_utc_offset boolean, name bytea, comment bytea, synthetic boolean, author_id bigint, author_fullname bytea, author_name bytea, author_email bytea, object_id bigint ); -- Create entries in person from tmp_revision create or replace function swh_person_add_from_revision() returns void language plpgsql as $$ begin with t as ( select author_fullname as fullname, author_name as name, author_email as email from tmp_revision union select committer_fullname as fullname, committer_name as name, committer_email as email from tmp_revision ) insert into person (fullname, name, email) select distinct on (fullname) fullname, name, email from t where not exists ( select 1 from person p where t.fullname = p.fullname ); return; end $$; -- Create entries in revision from tmp_revision create or replace function swh_revision_add() returns void language plpgsql as $$ begin perform swh_person_add_from_revision(); insert into revision (id, date, date_offset, date_neg_utc_offset, committer_date, committer_date_offset, committer_date_neg_utc_offset, type, directory, message, author, committer, metadata, synthetic) select t.id, 
t.date, t.date_offset, t.date_neg_utc_offset, t.committer_date, t.committer_date_offset, t.committer_date_neg_utc_offset, t.type, t.directory, t.message, a.id, c.id, t.metadata, t.synthetic from tmp_revision t left join person a on a.fullname = t.author_fullname left join person c on c.fullname = t.committer_fullname; return; end $$; -- Create entries in person from tmp_release create or replace function swh_person_add_from_release() returns void language plpgsql as $$ begin with t as ( select distinct author_fullname as fullname, author_name as name, author_email as email from tmp_release where author_fullname is not null ) insert into person (fullname, name, email) select distinct on (fullname) fullname, name, email from t where not exists ( select 1 from person p where t.fullname = p.fullname ); return; end $$; -- Create entries in release from tmp_release create or replace function swh_release_add() returns void language plpgsql as $$ begin perform swh_person_add_from_release(); insert into release (id, target, target_type, date, date_offset, date_neg_utc_offset, name, comment, author, synthetic) select t.id, t.target, t.target_type, t.date, t.date_offset, t.date_neg_utc_offset, t.name, t.comment, a.id, t.synthetic from tmp_release t left join person a on a.fullname = t.author_fullname; return; end $$; -- add a new origin_visit for origin origin_id at date. -- -- Returns the new visit id. create or replace function swh_origin_visit_add(origin_url text, date timestamptz, type text) returns bigint language sql as $$ with origin_id as ( select id from origin where url = origin_url ), last_known_visit as ( select coalesce(max(visit), 0) as visit from origin_visit where origin = (select id from origin_id) ) insert into origin_visit (origin, date, type, visit, status) values ((select id from origin_id), date, type, (select visit from last_known_visit) + 1, 'ongoing') returning visit; $$; create or replace function swh_snapshot_add(snapshot_id sha1_git) returns void language plpgsql as $$ declare snapshot_object_id snapshot.object_id%type; begin select object_id from snapshot where id = snapshot_id into snapshot_object_id; if snapshot_object_id is null then insert into snapshot (id) values (snapshot_id) returning object_id into snapshot_object_id; insert into snapshot_branch (name, target_type, target) select name, target_type, target from tmp_snapshot_branch tmp where not exists ( select 1 from snapshot_branch sb where sb.name = tmp.name and sb.target = tmp.target and sb.target_type = tmp.target_type ) on conflict do nothing; insert into snapshot_branches (snapshot_id, branch_id) select snapshot_object_id, sb.object_id as branch_id from tmp_snapshot_branch tmp join snapshot_branch sb using (name, target, target_type) where tmp.target is not null and tmp.target_type is not null union select snapshot_object_id, sb.object_id as branch_id from tmp_snapshot_branch tmp join snapshot_branch sb using (name) where tmp.target is null and tmp.target_type is null and sb.target is null and sb.target_type is null; end if; truncate table tmp_snapshot_branch; end; $$; create type snapshot_result as ( snapshot_id sha1_git, name bytea, target bytea, target_type snapshot_target ); create or replace function swh_snapshot_get_by_id(id sha1_git, branches_from bytea default '', branches_count bigint default null, target_types snapshot_target[] default NULL) returns setof snapshot_result language sql stable as $$ - select - swh_snapshot_get_by_id.id as snapshot_id, name, target, target_type - from snapshot_branches 
- inner join snapshot_branch on snapshot_branches.branch_id = snapshot_branch.object_id - where snapshot_id = (select object_id from snapshot where snapshot.id = swh_snapshot_get_by_id.id) - and (target_types is null or target_type = any(target_types)) - and name >= branches_from - order by name limit branches_count + -- with small limits, the "naive" version of this query can degenerate into + -- using the deduplication index on snapshot_branch (name, target, + -- target_type); The planner happily scans several hundred million rows. + + -- Do the query in two steps: first pull the relevant branches for the given + -- snapshot (filtering them by type), then do the limiting. This two-step + -- process guides the planner into using the proper index. + with filtered_snapshot_branches as ( + select swh_snapshot_get_by_id.id as snapshot_id, name, target, target_type + from snapshot_branches + inner join snapshot_branch on snapshot_branches.branch_id = snapshot_branch.object_id + where snapshot_id = (select object_id from snapshot where snapshot.id = swh_snapshot_get_by_id.id) + and (target_types is null or target_type = any(target_types)) + order by name + ) + select snapshot_id, name, target, target_type + from filtered_snapshot_branches + where name >= branches_from + order by name limit branches_count; $$; create type snapshot_size as ( target_type snapshot_target, count bigint ); create or replace function swh_snapshot_count_branches(id sha1_git) returns setof snapshot_size language sql stable as $$ SELECT target_type, count(name) from swh_snapshot_get_by_id(swh_snapshot_count_branches.id) group by target_type; $$; -- Absolute path: directory reference + complete path relative to it create type content_dir as ( directory sha1_git, path unix_path ); -- Find the containing directory of a given content, specified by sha1 -- (note: *not* sha1_git). -- -- Return a pair (dir_id, path) where path is a UNIX path that, from the -- directory root, reaches down to a file with the desired content. Return NULL -- if no match is found. -- -- In case of multiple paths (i.e., pretty much always), an arbitrary one is -- chosen. create or replace function swh_content_find_directory(content_id sha1) returns content_dir language sql stable as $$ with recursive path as ( -- Recursively build a path from the requested content to a root -- directory. Each iteration returns a pair (dir_id, filename) where -- filename is relative to dir_id. Stops when no parent directory can -- be found.
(select dir.id as dir_id, dir_entry_f.name as name, 0 as depth from directory_entry_file as dir_entry_f join content on content.sha1_git = dir_entry_f.target join directory as dir on dir.file_entries @> array[dir_entry_f.id] where content.sha1 = content_id limit 1) union all (select dir.id as dir_id, (dir_entry_d.name || '/' || path.name)::unix_path as name, path.depth + 1 from path join directory_entry_dir as dir_entry_d on dir_entry_d.target = path.dir_id join directory as dir on dir.dir_entries @> array[dir_entry_d.id] limit 1) ) select dir_id, name from path order by depth desc limit 1; $$; -- Find the visit of origin closest to date visit_date -- Breaks ties by selecting the largest visit id create or replace function swh_visit_find_by_date(origin_url text, visit_date timestamptz default NOW()) returns setof origin_visit language plpgsql stable as $$ declare origin_id bigint; begin select id into origin_id from origin where url=origin_url; return query with closest_two_visits as (( select ov, (date - visit_date), visit as interval from origin_visit ov where ov.origin = origin_id and ov.date >= visit_date order by ov.date asc, ov.visit desc limit 1 ) union ( select ov, (visit_date - date), visit as interval from origin_visit ov where ov.origin = origin_id and ov.date < visit_date order by ov.date desc, ov.visit desc limit 1 )) select (ov).* from closest_two_visits order by interval, visit limit 1; end $$; -- Object listing by object_id create or replace function swh_content_list_by_object_id( min_excl bigint, max_incl bigint ) returns setof content language sql stable as $$ select * from content where object_id > min_excl and object_id <= max_incl order by object_id; $$; create or replace function swh_revision_list_by_object_id( min_excl bigint, max_incl bigint ) returns setof revision_entry language sql stable as $$ with revs as ( select * from revision where object_id > min_excl and object_id <= max_incl ) select r.id, r.date, r.date_offset, r.date_neg_utc_offset, r.committer_date, r.committer_date_offset, r.committer_date_neg_utc_offset, r.type, r.directory, r.message, a.id, a.fullname, a.name, a.email, c.id, c.fullname, c.name, c.email, r.metadata, r.synthetic, array(select rh.parent_id::bytea from revision_history rh where rh.id = r.id order by rh.parent_rank) as parents, r.object_id from revs r left join person a on a.id = r.author left join person c on c.id = r.committer order by r.object_id; $$; create or replace function swh_release_list_by_object_id( min_excl bigint, max_incl bigint ) returns setof release_entry language sql stable as $$ with rels as ( select * from release where object_id > min_excl and object_id <= max_incl ) select r.id, r.target, r.target_type, r.date, r.date_offset, r.date_neg_utc_offset, r.name, r.comment, r.synthetic, p.id as author_id, p.fullname as author_fullname, p.name as author_name, p.email as author_email, r.object_id from rels r left join person p on p.id = r.author order by r.object_id; $$; -- simple counter mapping a textual label to an integer value create type counter as ( label text, value bigint ); -- return statistics about the number of tuples in various SWH tables -- -- Note: the returned values are based on postgres internal statistics -- (pg_class table), which are only updated daily (by autovacuum) or so create or replace function swh_stat_counters() returns setof counter language sql stable as $$ select object_type as label, value as value from object_counts where object_type in ( 'content', 'directory', 'directory_entry_dir', 
'directory_entry_file', 'directory_entry_rev', 'origin', 'origin_visit', 'person', 'release', 'revision', 'revision_history', 'skipped_content', 'snapshot' ); $$; create or replace function swh_update_counter(object_type text) returns void language plpgsql as $$ begin execute format(' insert into object_counts (value, last_update, object_type) values ((select count(*) from %1$I), NOW(), %1$L) on conflict (object_type) do update set value = excluded.value, last_update = excluded.last_update', object_type); return; end; $$; create or replace function swh_update_counter_bucketed() returns void language plpgsql as $$ declare query text; line_to_update int; new_value bigint; begin select object_counts_bucketed.line, format( 'select count(%I) from %I where %s', coalesce(identifier, '*'), object_type, coalesce( concat_ws( ' and ', case when bucket_start is not null then format('%I >= %L', identifier, bucket_start) -- lower bound condition, inclusive end, case when bucket_end is not null then format('%I < %L', identifier, bucket_end) -- upper bound condition, exclusive end ), 'true' ) ) from object_counts_bucketed order by coalesce(last_update, now() - '1 month'::interval) asc limit 1 into line_to_update, query; execute query into new_value; update object_counts_bucketed set value = new_value, last_update = now() where object_counts_bucketed.line = line_to_update; END $$; create or replace function swh_update_counters_from_buckets() returns trigger language plpgsql as $$ begin with to_update as ( select object_type, sum(value) as value, max(last_update) as last_update from object_counts_bucketed ob1 where not exists ( select 1 from object_counts_bucketed ob2 where ob1.object_type = ob2.object_type and value is null ) group by object_type ) update object_counts set value = to_update.value, last_update = to_update.last_update from to_update where object_counts.object_type = to_update.object_type and object_counts.value != to_update.value; return null; end $$; create trigger update_counts_from_bucketed after insert or update on object_counts_bucketed for each row when (NEW.line % 256 = 0) execute procedure swh_update_counters_from_buckets(); diff --git a/swh/storage/sql/60-swh-indexes.sql b/swh/storage/sql/60-swh-indexes.sql index 7b74b713..d2bbe779 100644 --- a/swh/storage/sql/60-swh-indexes.sql +++ b/swh/storage/sql/60-swh-indexes.sql @@ -1,191 +1,192 @@ -- content create unique index concurrently content_pkey on content(sha1); create unique index concurrently on content(sha1_git); create index concurrently on content(sha256); create index concurrently on content(blake2s256); create index concurrently on content(ctime); -- TODO use a BRIN index here (postgres >= 9.5) create unique index concurrently on content(object_id); alter table content add primary key using index content_pkey; -- origin -create unique index concurrently origin_pkey on origin(id); -alter table origin add primary key using index origin_pkey; +create unique index concurrently origin_pkey on origin(id); +create unique index concurrently on origin using btree(url); create index concurrently on origin using gin (url gin_trgm_ops); -create index concurrently on origin using hash (url); create index concurrently on origin using btree(digest(url, 'sha1')); +alter table origin add primary key using index origin_pkey; + -- skipped_content alter table skipped_content add constraint skipped_content_sha1_sha1_git_sha256_key unique (sha1, sha1_git, sha256); create index concurrently on skipped_content(sha1); create index concurrently on 
skipped_content(sha1_git); create index concurrently on skipped_content(sha256); create index concurrently on skipped_content(blake2s256); create unique index concurrently on skipped_content(object_id); alter table skipped_content add constraint skipped_content_origin_fkey foreign key (origin) references origin(id) not valid; alter table skipped_content validate constraint skipped_content_origin_fkey; -- directory create unique index concurrently directory_pkey on directory(id); alter table directory add primary key using index directory_pkey; create index concurrently on directory using gin (dir_entries); create index concurrently on directory using gin (file_entries); create index concurrently on directory using gin (rev_entries); create unique index concurrently on directory(object_id); -- directory_entry_dir create unique index concurrently directory_entry_dir_pkey on directory_entry_dir(id); alter table directory_entry_dir add primary key using index directory_entry_dir_pkey; create unique index concurrently on directory_entry_dir(target, name, perms); -- directory_entry_file create unique index concurrently directory_entry_file_pkey on directory_entry_file(id); alter table directory_entry_file add primary key using index directory_entry_file_pkey; create unique index concurrently on directory_entry_file(target, name, perms); -- directory_entry_rev create unique index concurrently directory_entry_rev_pkey on directory_entry_rev(id); alter table directory_entry_rev add primary key using index directory_entry_rev_pkey; create unique index concurrently on directory_entry_rev(target, name, perms); -- person create unique index concurrently person_pkey on person(id); alter table person add primary key using index person_pkey; create unique index concurrently on person(fullname); create index concurrently on person(name); create index concurrently on person(email); -- revision create unique index concurrently revision_pkey on revision(id); alter table revision add primary key using index revision_pkey; alter table revision add constraint revision_author_fkey foreign key (author) references person(id) not valid; alter table revision validate constraint revision_author_fkey; alter table revision add constraint revision_committer_fkey foreign key (committer) references person(id) not valid; alter table revision validate constraint revision_committer_fkey; create index concurrently on revision(directory); create unique index concurrently on revision(object_id); -- revision_history create unique index concurrently revision_history_pkey on revision_history(id, parent_rank); alter table revision_history add primary key using index revision_history_pkey; create index concurrently on revision_history(parent_id); alter table revision_history add constraint revision_history_id_fkey foreign key (id) references revision(id) not valid; alter table revision_history validate constraint revision_history_id_fkey; -- snapshot create unique index concurrently snapshot_pkey on snapshot(object_id); alter table snapshot add primary key using index snapshot_pkey; create unique index concurrently on snapshot(id); -- snapshot_branch create unique index concurrently snapshot_branch_pkey on snapshot_branch(object_id); alter table snapshot_branch add primary key using index snapshot_branch_pkey; create unique index concurrently on snapshot_branch (target_type, target, name); alter table snapshot_branch add constraint snapshot_branch_target_check check ((target_type is null) = (target is null)) not valid; alter table 
snapshot_branch validate constraint snapshot_branch_target_check; alter table snapshot_branch add constraint snapshot_target_check check (target_type not in ('content', 'directory', 'revision', 'release', 'snapshot') or length(target) = 20) not valid; alter table snapshot_branch validate constraint snapshot_target_check; create unique index concurrently on snapshot_branch (name) where target_type is null and target is null; -- snapshot_branches create unique index concurrently snapshot_branches_pkey on snapshot_branches(snapshot_id, branch_id); alter table snapshot_branches add primary key using index snapshot_branches_pkey; alter table snapshot_branches add constraint snapshot_branches_snapshot_id_fkey foreign key (snapshot_id) references snapshot(object_id) not valid; alter table snapshot_branches validate constraint snapshot_branches_snapshot_id_fkey; alter table snapshot_branches add constraint snapshot_branches_branch_id_fkey foreign key (branch_id) references snapshot_branch(object_id) not valid; alter table snapshot_branches validate constraint snapshot_branches_branch_id_fkey; -- origin_visit create unique index concurrently origin_visit_pkey on origin_visit(origin, visit); alter table origin_visit add primary key using index origin_visit_pkey; create index concurrently on origin_visit(date); create index concurrently on origin_visit(type, status, date); alter table origin_visit add constraint origin_visit_origin_fkey foreign key (origin) references origin(id) not valid; alter table origin_visit validate constraint origin_visit_origin_fkey; -- origin_visit_status create unique index concurrently origin_visit_status_pkey on origin_visit_status(origin, visit, date); alter table origin_visit_status add primary key using index origin_visit_status_pkey; alter table origin_visit_status add constraint origin_visit_status_origin_visit_fkey foreign key (origin, visit) references origin_visit(origin, visit) not valid; alter table origin_visit_status validate constraint origin_visit_status_origin_visit_fkey; -- release create unique index concurrently release_pkey on release(id); alter table release add primary key using index release_pkey; create index concurrently on release(target, target_type); create unique index concurrently on release(object_id); alter table release add constraint release_author_fkey foreign key (author) references person(id) not valid; alter table release validate constraint release_author_fkey; -- if the author is null, then the date must be null alter table release add constraint release_author_date_check check ((date is null) or (author is not null)) not valid; alter table release validate constraint release_author_date_check; -- metadata_fetcher create unique index metadata_fetcher_pkey on metadata_fetcher(id); alter table metadata_fetcher add primary key using index metadata_fetcher_pkey; create unique index metadata_fetcher_name_version on metadata_fetcher(name, version); -- metadata_authority create unique index concurrently metadata_authority_pkey on metadata_authority(id); alter table metadata_authority add primary key using index metadata_authority_pkey; create unique index metadata_authority_type_url on metadata_authority(type, url); -- origin_metadata create unique index concurrently origin_metadata_pkey on origin_metadata(id); alter table origin_metadata add primary key using index origin_metadata_pkey; create unique index concurrently origin_metadata_origin_authority_date_fetcher on origin_metadata(origin_id, authority_id, discovery_date, fetcher_id); 
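-- Note: as elsewhere in this file, heavy indexes are created "concurrently"
-- and foreign keys are first added "not valid", then validated in a separate
-- statement; both steps avoid holding long exclusive locks on large, live
-- tables. A hedged sketch of the pattern (table, column and constraint names
-- below are illustrative only):
--
--   create index concurrently on some_table (some_column);
--   alter table some_table add constraint some_fkey
--     foreign key (some_column) references other_table (id) not valid;
--   alter table some_table validate constraint some_fkey;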
alter table origin_metadata add constraint origin_metadata_origin_fkey foreign key (origin_id) references origin(id) not valid; alter table origin_metadata validate constraint origin_metadata_origin_fkey; alter table origin_metadata add constraint origin_metadata_authority_fkey foreign key (authority_id) references metadata_authority(id) not valid; alter table origin_metadata validate constraint origin_metadata_authority_fkey; alter table origin_metadata add constraint origin_metadata_fetcher_fkey foreign key (fetcher_id) references metadata_fetcher(id) not valid; alter table origin_metadata validate constraint origin_metadata_fetcher_fkey; -- object_counts create unique index concurrently object_counts_pkey on object_counts(object_type); alter table object_counts add primary key using index object_counts_pkey; -- object_counts_bucketed create unique index concurrently object_counts_bucketed_pkey on object_counts_bucketed(line); alter table object_counts_bucketed add primary key using index object_counts_bucketed_pkey; diff --git a/swh/storage/tests/algos/test_origin.py b/swh/storage/tests/algos/test_origin.py index 4b6749da..3b0d8348 100644 --- a/swh/storage/tests/algos/test_origin.py +++ b/swh/storage/tests/algos/test_origin.py @@ -1,306 +1,330 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import pytest from unittest.mock import patch from swh.model.model import Origin, OriginVisit, OriginVisitStatus, Snapshot from swh.storage.algos.origin import iter_origins, origin_get_latest_visit_status from swh.storage.utils import now from swh.storage.tests.test_storage import round_to_milliseconds from swh.storage.tests.storage_data import data def assert_list_eq(left, right, msg=None): assert list(left) == list(right), msg @pytest.fixture def swh_storage_backend_config(): yield {"cls": "validate", "storage": {"cls": "memory",}} def test_iter_origins(swh_storage): origins = swh_storage.origin_add([{"url": "bar"}, {"url": "qux"}, {"url": "quuz"},]) assert_list_eq(iter_origins(swh_storage), origins) assert_list_eq(iter_origins(swh_storage, batch_size=1), origins) assert_list_eq(iter_origins(swh_storage, batch_size=2), origins) for i in range(1, 5): assert_list_eq(iter_origins(swh_storage, origin_from=i + 1), origins[i:], i) assert_list_eq( iter_origins(swh_storage, origin_from=i + 1, batch_size=1), origins[i:], i ) assert_list_eq( iter_origins(swh_storage, origin_from=i + 1, batch_size=2), origins[i:], i ) for j in range(i, 5): assert_list_eq( iter_origins(swh_storage, origin_from=i + 1, origin_to=j + 1), origins[i:j], (i, j), ) assert_list_eq( iter_origins( swh_storage, origin_from=i + 1, origin_to=j + 1, batch_size=1 ), origins[i:j], (i, j), ) assert_list_eq( iter_origins( swh_storage, origin_from=i + 1, origin_to=j + 1, batch_size=2 ), origins[i:j], (i, j), ) @patch("swh.storage.in_memory.InMemoryStorage.origin_get_range") def test_iter_origins_batch_size(mock_origin_get_range, swh_storage): mock_origin_get_range.return_value = [] list(iter_origins(swh_storage)) mock_origin_get_range.assert_called_with(origin_from=1, origin_count=10000) list(iter_origins(swh_storage, batch_size=42)) mock_origin_get_range.assert_called_with(origin_from=1, origin_count=42) -def test_origin_get_latest_status_none(swh_storage): +def test_origin_get_latest_visit_status_none(swh_storage): """Looking up unknown 
objects should return nothing """ # unknown origin so no result assert origin_get_latest_visit_status(swh_storage, "unknown-origin") is None # unknown type so no result origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(origin) swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=data.date_visit1, type="git", status="ongoing", snapshot=None, ), ] )[0] actual_origin_visit = origin_get_latest_visit_status( swh_storage, origin.url, type="unknown" ) assert actual_origin_visit is None actual_origin_visit = origin_get_latest_visit_status( swh_storage, origin.url, require_snapshot=True ) assert actual_origin_visit is None actual_origin_visit = origin_get_latest_visit_status( swh_storage, origin.url, allowed_statuses=["unknown"] ) assert actual_origin_visit is None def init_storage_with_origin_visits(swh_storage): """Initialize storage with origin/origin-visit/origin-visit-status """ origin1 = Origin.from_dict(data.origin) origin2 = Origin.from_dict(data.origin2) swh_storage.origin_add([origin1, origin2]) ov1, ov2 = swh_storage.origin_visit_add( [ OriginVisit( origin=origin1.url, date=data.date_visit1, type=data.type_visit1, status="ongoing", snapshot=None, ), OriginVisit( origin=origin2.url, date=data.date_visit2, type=data.type_visit2, status="ongoing", snapshot=None, ), ] ) snapshot = Snapshot.from_dict(data.complete_snapshot) swh_storage.snapshot_add([snapshot]) date_now = now() date_now = round_to_milliseconds(date_now) assert data.date_visit1 < data.date_visit2 assert data.date_visit2 < date_now # origin visit status 1 for origin visit 1 ovs11 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, date=data.date_visit1, status="partial", snapshot=None, ) # origin visit status 2 for origin visit 1 ovs12 = OriginVisitStatus( origin=origin1.url, visit=ov1.visit, date=data.date_visit2, status="ongoing", snapshot=None, ) # origin visit status 1 for origin visit 2 ovs21 = OriginVisitStatus( origin=origin2.url, visit=ov2.visit, date=data.date_visit2, status="ongoing", snapshot=None, ) # origin visit status 2 for origin visit 2 ovs22 = OriginVisitStatus( origin=origin2.url, visit=ov2.visit, date=date_now, status="full", snapshot=snapshot.id, metadata={"something": "wicked"}, ) swh_storage.origin_visit_status_add([ovs11, ovs12, ovs21, ovs22]) return { "origin": [origin1, origin2], "origin_visit": [ov1, ov2], "origin_visit_status": [ovs11, ovs12, ovs21, ovs22], } -def test_origin_get_latest_status_filter_type(swh_storage): +def test_origin_get_latest_visit_status_filter_type(swh_storage): """Filtering origin visit per types should yield consistent results """ objects = init_storage_with_origin_visits(swh_storage) origin1, origin2 = objects["origin"] ov1, ov2 = objects["origin_visit"] ovs11, ovs12, _, ovs22 = objects["origin_visit_status"] # no visit for origin1 url with type_visit2 assert ( origin_get_latest_visit_status(swh_storage, origin1.url, type=data.type_visit2) is None ) # no visit for origin2 url with type_visit1 assert ( origin_get_latest_visit_status(swh_storage, origin2.url, type=data.type_visit1) is None ) # Two visits, both with no snapshot, take the most recent actual_ov1, actual_ovs12 = origin_get_latest_visit_status( swh_storage, origin1.url, type=data.type_visit1 ) assert isinstance(actual_ov1, OriginVisit) assert isinstance(actual_ovs12, OriginVisitStatus) assert actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit assert actual_ov1.type == data.type_visit1 assert actual_ovs12 == ovs12 # take the most recent visit with type_visit2 
actual_ov2, actual_ovs22 = origin_get_latest_visit_status( swh_storage, origin2.url, type=data.type_visit2 ) assert isinstance(actual_ov2, OriginVisit) assert isinstance(actual_ovs22, OriginVisitStatus) assert actual_ov2.origin == ov2.origin assert actual_ov2.visit == ov2.visit assert actual_ov2.type == data.type_visit2 assert actual_ovs22 == ovs22 -def test_origin_get_latest_status_filter_status(swh_storage): +def test_origin_get_latest_visit_status_filter_status(swh_storage): objects = init_storage_with_origin_visits(swh_storage) origin1, origin2 = objects["origin"] ov1, ov2 = objects["origin_visit"] ovs11, ovs12, _, ovs22 = objects["origin_visit_status"] # no failed status for that visit assert ( origin_get_latest_visit_status( swh_storage, origin2.url, allowed_statuses=["failed"] ) is None ) # only 1 partial for that visit actual_ov1, actual_ovs11 = origin_get_latest_visit_status( swh_storage, origin1.url, allowed_statuses=["partial"] ) assert actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit assert actual_ov1.type == data.type_visit1 assert actual_ovs11 == ovs11 # both status exist, take the latest one actual_ov1, actual_ovs12 = origin_get_latest_visit_status( swh_storage, origin1.url, allowed_statuses=["partial", "ongoing"] ) assert actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit assert actual_ov1.type == data.type_visit1 assert actual_ovs12 == ovs12 assert isinstance(actual_ov1, OriginVisit) assert isinstance(actual_ovs12, OriginVisitStatus) assert actual_ov1.origin == ov1.origin assert actual_ov1.visit == ov1.visit assert actual_ov1.type == data.type_visit1 assert actual_ovs12 == ovs12 # take the most recent visit with type_visit2 actual_ov2, actual_ovs22 = origin_get_latest_visit_status( swh_storage, origin2.url, allowed_statuses=["full"] ) assert actual_ov2.origin == ov2.origin assert actual_ov2.visit == ov2.visit assert actual_ov2.type == data.type_visit2 assert actual_ovs22 == ovs22 -def test_origin_get_latest_status_filter_snapshot(swh_storage): +def test_origin_get_latest_visit_status_filter_snapshot(swh_storage): objects = init_storage_with_origin_visits(swh_storage) origin1, origin2 = objects["origin"] _, ov2 = objects["origin_visit"] _, _, _, ovs22 = objects["origin_visit_status"] # there is no visit with snapshot yet for that visit assert ( origin_get_latest_visit_status(swh_storage, origin1.url, require_snapshot=True) is None ) # visit status with partial status visit elected actual_ov2, actual_ovs22 = origin_get_latest_visit_status( swh_storage, origin2.url, require_snapshot=True ) assert actual_ov2.origin == ov2.origin assert actual_ov2.visit == ov2.visit assert actual_ov2.type == ov2.type assert actual_ovs22 == ovs22 + + date_now = now() + + # Add another visit + swh_storage.origin_visit_add( + [ + OriginVisit( + origin=origin2.url, + date=date_now, + type=data.type_visit2, + status="ongoing", + snapshot=None, + ), + ] + ) + + # Requiring the latest visit with a snapshot, we still find the previous visit + ov2, ovs22 = origin_get_latest_visit_status( + swh_storage, origin2.url, require_snapshot=True + ) + assert actual_ov2.origin == ov2.origin + assert actual_ov2.visit == ov2.visit + assert actual_ov2.type == ov2.type + assert actual_ovs22 == ovs22 diff --git a/swh/storage/tests/algos/test_snapshot.py b/swh/storage/tests/algos/test_snapshot.py index 7a8da207..eb968ecb 100644 --- a/swh/storage/tests/algos/test_snapshot.py +++ b/swh/storage/tests/algos/test_snapshot.py @@ -1,151 +1,176 @@ # Copyright (C) 2018-2020 The Software 
Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from hypothesis import given import pytest from swh.model.hypothesis_strategies import snapshots, branch_names, branch_targets from swh.model.identifiers import snapshot_identifier, identifier_to_bytes from swh.model.model import Origin, OriginVisit, OriginVisitStatus, Snapshot from swh.storage.algos.snapshot import snapshot_get_all_branches, snapshot_get_latest from swh.storage.utils import now from swh.storage.tests.storage_data import data @pytest.fixture def swh_storage_backend_config(): yield { "cls": "memory", "journal_writer": None, } @given(snapshot=snapshots(min_size=0, max_size=10, only_objects=False)) def test_snapshot_small(swh_storage, snapshot): # noqa snapshot = snapshot.to_dict() swh_storage.snapshot_add([snapshot]) returned_snapshot = snapshot_get_all_branches(swh_storage, snapshot["id"]) assert snapshot == returned_snapshot @given(branch_name=branch_names(), branch_target=branch_targets(only_objects=True)) def test_snapshot_large(swh_storage, branch_name, branch_target): # noqa branch_target = branch_target.to_dict() snapshot = { "branches": {b"%s%05d" % (branch_name, i): branch_target for i in range(10000)} } snapshot["id"] = identifier_to_bytes(snapshot_identifier(snapshot)) swh_storage.snapshot_add([snapshot]) returned_snapshot = snapshot_get_all_branches(swh_storage, snapshot["id"]) assert snapshot == returned_snapshot def test_snapshot_get_latest_none(swh_storage): """Retrieve latest snapshot on unknown origin or origin without snapshot should yield no result """ + # unknown origin so None assert snapshot_get_latest(swh_storage, "unknown-origin") is None - # no snapshot on origin visit then nothing is found + # no snapshot on origin visit so None origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(origin) swh_storage.origin_visit_add( [ OriginVisit( origin=origin.url, date=data.date_visit1, type=data.type_visit1, status="ongoing", snapshot=None, ) ] ) assert snapshot_get_latest(swh_storage, origin.url) is None + ov1 = swh_storage.origin_visit_get_latest(origin.url) + assert ov1 is not None + visit_id = ov1["visit"] + + # visit references a snapshot but the snapshot does not exist in backend for some + # reason + complete_snapshot = Snapshot.from_dict(data.complete_snapshot) + swh_storage.origin_visit_status_add( + [ + OriginVisitStatus( + origin=origin.url, + visit=visit_id, + date=data.date_visit2, + status="partial", + snapshot=complete_snapshot.id, + ) + ] + ) + # so we do not find it + assert snapshot_get_latest(swh_storage, origin.url) is None + assert snapshot_get_latest(swh_storage, origin.url, branches_count=1) is None + def test_snapshot_get_latest(swh_storage): origin = Origin.from_dict(data.origin) swh_storage.origin_add_one(origin) visit1 = OriginVisit( origin=origin.url, date=data.date_visit1, type=data.type_visit1, status="ongoing", snapshot=None, ) - ov1 = swh_storage.origin_visit_add([visit1])[0] + swh_storage.origin_visit_add([visit1]) + ov1 = swh_storage.origin_visit_get_latest(origin.url) + visit_id = ov1["visit"] # Add snapshot to visit1, latest snapshot = visit 1 snapshot complete_snapshot = Snapshot.from_dict(data.complete_snapshot) swh_storage.snapshot_add([complete_snapshot]) swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, - visit=ov1.visit, + visit=visit_id, date=data.date_visit2, 
status="partial", snapshot=None, ) ] ) assert data.date_visit1 < data.date_visit2 # no snapshot associated to the visit, so None actual_snapshot = snapshot_get_latest( swh_storage, origin.url, allowed_statuses=["partial"] ) assert actual_snapshot is None date_now = now() assert data.date_visit2 < date_now swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, - visit=ov1.visit, + visit=visit_id, date=date_now, status="full", snapshot=complete_snapshot.id, ) ] ) actual_snapshot = snapshot_get_latest(swh_storage, origin.url) assert actual_snapshot is not None assert actual_snapshot == complete_snapshot swh_storage.origin_visit_status_add( [ OriginVisitStatus( origin=origin.url, - visit=ov1.visit, + visit=visit_id, date=date_now, status="full", snapshot=complete_snapshot.id, ) ] ) actual_snapshot = snapshot_get_latest(swh_storage, origin.url) assert actual_snapshot is not None assert actual_snapshot == complete_snapshot actual_snapshot = snapshot_get_latest(swh_storage, origin.url, branches_count=1) assert actual_snapshot is not None assert actual_snapshot.id == complete_snapshot.id assert len(actual_snapshot.branches.values()) == 1 with pytest.raises(ValueError, match="branches_count must be a positive integer"): snapshot_get_latest(swh_storage, origin.url, branches_count="something-wrong") diff --git a/swh/storage/tests/storage_data.py b/swh/storage/tests/storage_data.py index 0811da27..a47b9635 100644 --- a/swh/storage/tests/storage_data.py +++ b/swh/storage/tests/storage_data.py @@ -1,522 +1,539 @@ # Copyright (C) 2015-2019 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import datetime from swh.model.hashutil import hash_to_bytes from swh.model import from_disk class StorageData: def __getattr__(self, key): try: v = globals()[key] except KeyError as e: raise AttributeError(e.args[0]) if hasattr(v, "copy"): return v.copy() return v data = StorageData() cont = { "data": b"42\n", "length": 3, "sha1": hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4689"), "sha1_git": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), "sha256": hash_to_bytes( "673650f936cb3b0a2f93ce09d81be10748b1b203c19e8176b4eefc1964a0cf3a" ), "blake2s256": hash_to_bytes( "d5fe1939576527e42cfd76a9455a2432fe7f56669564577dd93c4280e76d661d" ), "status": "visible", } cont2 = { "data": b"4242\n", "length": 5, "sha1": hash_to_bytes("61c2b3a30496d329e21af70dd2d7e097046d07b7"), "sha1_git": hash_to_bytes("36fade77193cb6d2bd826161a0979d64c28ab4fa"), "sha256": hash_to_bytes( "859f0b154fdb2d630f45e1ecae4a862915435e663248bb8461d914696fc047cd" ), "blake2s256": hash_to_bytes( "849c20fad132b7c2d62c15de310adfe87be94a379941bed295e8141c6219810d" ), "status": "visible", } cont3 = { "data": b"424242\n", "length": 7, "sha1": hash_to_bytes("3e21cc4942a4234c9e5edd8a9cacd1670fe59f13"), "sha1_git": hash_to_bytes("c932c7649c6dfa4b82327d121215116909eb3bea"), "sha256": hash_to_bytes( "92fb72daf8c6818288a35137b72155f507e5de8d892712ab96277aaed8cf8a36" ), "blake2s256": hash_to_bytes( "76d0346f44e5a27f6bafdd9c2befd304aff83780f93121d801ab6a1d4769db11" ), "status": "visible", "ctime": "2019-12-01 00:00:00Z", } contents = (cont, cont2, cont3) missing_cont = { "data": b"missing\n", "length": 8, "sha1": hash_to_bytes("f9c24e2abb82063a3ba2c44efd2d3c797f28ac90"), "sha1_git": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8919"), "sha256": hash_to_bytes( 
"6bbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" ), "blake2s256": hash_to_bytes( "306856b8fd879edb7b6f1aeaaf8db9bbecc993cd7f776c333ac3a782fa5c6eba" ), "status": "absent", } skipped_cont = { "length": 1024 * 1024 * 200, "sha1_git": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8920"), "sha1": hash_to_bytes("43e45d56f88993aae6a0198013efa80716fd8920"), "sha256": hash_to_bytes( "7bbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" ), "blake2s256": hash_to_bytes( "ade18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" ), "reason": "Content too long", "status": "absent", "origin": "file:///dev/zero", } skipped_cont2 = { "length": 1024 * 1024 * 300, "sha1_git": hash_to_bytes("44e45d56f88993aae6a0198013efa80716fd8921"), "sha1": hash_to_bytes("54e45d56f88993aae6a0198013efa80716fd8920"), "sha256": hash_to_bytes( "8cbd052ab054ef222c1c87be60cd191addedd24cc882d1f5f7f7be61dc61bb3a" ), "blake2s256": hash_to_bytes( "9ce18b1adecb33f891ca36664da676e12c772cc193778aac9a137b8dc5834b9b" ), "reason": "Content too long", "status": "absent", } +skipped_contents = (skipped_cont, skipped_cont2) + + dir = { - "id": hash_to_bytes("340133423253310030f531e632a733ff37c3a930"), + "id": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), "entries": ( { "name": b"foo", "type": "file", "target": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), # cont "perms": from_disk.DentryPerms.content, }, { "name": b"bar\xc3", "type": "dir", "target": b"12345678901234567890", "perms": from_disk.DentryPerms.directory, }, ), } dir2 = { - "id": hash_to_bytes("340133423253310030f531e632a733ff37c3a935"), + "id": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), "entries": ( { "name": b"oof", "type": "file", "target": hash_to_bytes( # cont2 "36fade77193cb6d2bd826161a0979d64c28ab4fa" ), "perms": from_disk.DentryPerms.content, }, ), } dir3 = { - "id": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8921"), + "id": hash_to_bytes("4ea8c6b2f54445e5dd1a9d5bb2afd875d66f3150"), "entries": ( { "name": b"foo", "type": "file", "target": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"), # cont "perms": from_disk.DentryPerms.content, }, { "name": b"subdir", "type": "dir", - "target": hash_to_bytes("340133423253310030f531e632a733ff37c3a930"), # dir + "target": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir "perms": from_disk.DentryPerms.directory, }, { "name": b"hello", "type": "file", "target": b"12345678901234567890", "perms": from_disk.DentryPerms.content, }, ), } dir4 = { - "id": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8922"), + "id": hash_to_bytes("377aa5fcd944fbabf502dbfda55cd14d33c8c3c6"), "entries": ( { "name": b"subdir1", "type": "dir", - "target": hash_to_bytes("33e45d56f88993aae6a0198013efa80716fd8921"), # dir3 + "target": hash_to_bytes("4ea8c6b2f54445e5dd1a9d5bb2afd875d66f3150"), # dir3 "perms": from_disk.DentryPerms.directory, }, ), } -dierctories = (dir, dir2, dir3, dir4) +directories = (dir, dir2, dir3, dir4) minus_offset = datetime.timezone(datetime.timedelta(minutes=-120)) plus_offset = datetime.timezone(datetime.timedelta(minutes=120)) revision = { - "id": b"56789012345678901234", + "id": hash_to_bytes("066b1b62dbfa033362092af468bf6cfabec230e7"), "message": b"hello", "author": { "name": b"Nicolas Dandrimont", "email": b"nicolas@example.com", "fullname": b"Nicolas Dandrimont ", }, "date": { "timestamp": {"seconds": 1234567890, "microseconds": 0}, "offset": 120, "negative_utc": False, }, "committer": { "name": b"St\xc3fano 
Zacchiroli", "email": b"stefano@example.com", "fullname": b"St\xc3fano Zacchiroli ", }, "committer_date": { "timestamp": {"seconds": 1123456789, "microseconds": 0}, "offset": 0, "negative_utc": True, }, "parents": (b"01234567890123456789", b"23434512345123456789"), "type": "git", - "directory": hash_to_bytes("340133423253310030f531e632a733ff37c3a930"), # dir + "directory": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir "metadata": { "checksums": {"sha1": "tarball-sha1", "sha256": "tarball-sha256",}, "signed-off-by": "some-dude", "extra_headers": [ ["gpgsig", b"test123"], - ["mergetags", [b"foo\\bar", b"\x22\xaf\x89\x80\x01\x00"]], + ["mergetag", b"foo\\bar"], + ["mergetag", b"\x22\xaf\x89\x80\x01\x00"], ], }, "synthetic": True, } revision2 = { - "id": b"87659012345678904321", + "id": hash_to_bytes("df7a6f6a99671fb7f7343641aff983a314ef6161"), "message": b"hello again", "author": { "name": b"Roberto Dicosmo", "email": b"roberto@example.com", "fullname": b"Roberto Dicosmo ", }, "date": { "timestamp": {"seconds": 1234567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "committer": { "name": b"tony", "email": b"ar@dumont.fr", "fullname": b"tony ", }, "committer_date": { "timestamp": {"seconds": 1123456789, "microseconds": 0}, "offset": 0, "negative_utc": False, }, "parents": (b"01234567890123456789",), "type": "git", - "directory": hash_to_bytes("340133423253310030f531e632a733ff37c3a935"), # dir2 + "directory": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), # dir2 "metadata": None, "synthetic": False, } revision3 = { - "id": hash_to_bytes("7026b7c1a2af56521e951c01ed20f255fa054238"), + "id": hash_to_bytes("2cbd7bb22c653bbb23a29657852a50a01b591d46"), "message": b"a simple revision with no parents this time", "author": { "name": b"Roberto Dicosmo", "email": b"roberto@example.com", "fullname": b"Roberto Dicosmo ", }, "date": { "timestamp": {"seconds": 1234567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "committer": { "name": b"tony", "email": b"ar@dumont.fr", "fullname": b"tony ", }, "committer_date": { "timestamp": {"seconds": 1127351742, "microseconds": 0}, "offset": 0, "negative_utc": False, }, "parents": (), "type": "git", - "directory": hash_to_bytes("340133423253310030f531e632a733ff37c3a935"), # dir2 + "directory": hash_to_bytes("8505808532953da7d2581741f01b29c04b1cb9ab"), # dir2 "metadata": None, "synthetic": True, } revision4 = { - "id": hash_to_bytes("368a48fe15b7db2383775f97c6b247011b3f14f4"), + "id": hash_to_bytes("88cd5126fc958ed70089d5340441a1c2477bcc20"), "message": b"parent of self.revision2", "author": { "name": b"me", "email": b"me@soft.heri", "fullname": b"me ", }, "date": { "timestamp": {"seconds": 1244567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "committer": { "name": b"committer-dude", "email": b"committer@dude.com", "fullname": b"committer-dude ", }, "committer_date": { "timestamp": {"seconds": 1244567843, "microseconds": 220000,}, "offset": -720, "negative_utc": False, }, "parents": ( - hash_to_bytes("7026b7c1a2af56521e951c01ed20f255fa054238"), + hash_to_bytes("2cbd7bb22c653bbb23a29657852a50a01b591d46"), ), # revision3 "type": "git", - "directory": hash_to_bytes("340133423253310030f531e632a733ff37c3a930"), # dir + "directory": hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), # dir "metadata": None, "synthetic": False, } revisions = (revision, revision2, revision3, revision4) origin = { "url": "file:///dev/null", } origin2 = { "url": "file:///dev/zero", } 
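# Note (illustrative only): these module-level samples are exposed through the
# StorageData proxy defined above; any value with a .copy() method is copied
# on each attribute access, so tests cannot mutate the shared fixtures:
#
#   from swh.storage.tests.storage_data import data
#   assert data.origin == {"url": "file:///dev/null"}
#   assert data.origin is not data.origin  # a fresh copy on every access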
origins = (origin, origin2) metadata_authority = { "type": "deposit", "url": "http://hal.inria.example.com/", "metadata": {"location": "France"}, } metadata_authority2 = { "type": "registry", "url": "http://wikidata.example.com/", "metadata": {}, } metadata_fetcher = { "name": "swh-deposit", "version": "0.0.1", "metadata": {"sword_version": "2"}, } metadata_fetcher2 = { "name": "swh-example", "version": "0.0.1", "metadata": {}, } date_visit1 = datetime.datetime(2015, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) type_visit1 = "git" date_visit2 = datetime.datetime(2017, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) type_visit2 = "hg" date_visit3 = datetime.datetime(2018, 1, 1, 23, 0, 0, tzinfo=datetime.timezone.utc) type_visit3 = "deb" release = { - "id": b"87659012345678901234", + "id": hash_to_bytes("a673e617fcc6234e29b2cad06b8245f96c415c61"), "name": b"v0.0.1", "author": { "name": b"olasd", "email": b"nic@olasd.fr", "fullname": b"olasd ", }, "date": { "timestamp": {"seconds": 1234567890, "microseconds": 0}, "offset": 42, "negative_utc": False, }, "target": b"43210987654321098765", "target_type": "revision", "message": b"synthetic release", "synthetic": True, } release2 = { - "id": b"56789012348765901234", + "id": hash_to_bytes("6902bd4c82b7d19a421d224aedab2b74197e420d"), "name": b"v0.0.2", "author": { "name": b"tony", "email": b"ar@dumont.fr", "fullname": b"tony ", }, "date": { "timestamp": {"seconds": 1634366813, "microseconds": 0}, "offset": -120, "negative_utc": False, }, "target": b"432109\xa9765432\xc309\x00765", "target_type": "revision", "message": b"v0.0.2\nMisc performance improvements + bug fixes", "synthetic": False, } release3 = { - "id": b"87659012345678904321", + "id": hash_to_bytes("3e9050196aa288264f2a9d279d6abab8b158448b"), "name": b"v0.0.2", "author": { "name": b"tony", "email": b"tony@ardumont.fr", "fullname": b"tony ", }, "date": { "timestamp": {"seconds": 1634336813, "microseconds": 0}, "offset": 0, "negative_utc": False, }, - "target": b"87659012345678904321", # revision2 + "target": hash_to_bytes("df7a6f6a99671fb7f7343641aff983a314ef6161"), "target_type": "revision", "message": b"yet another synthetic release", "synthetic": True, } releases = (release, release2, release3) snapshot = { - "id": hash_to_bytes("2498dbf535f882bc7f9a18fb16c9ad27fda7bab7"), + "id": hash_to_bytes("409ee1ff3f10d166714bc90581debfd0446dda57"), "branches": { b"master": { - "target": b"56789012345678901234", # revision + "target": hash_to_bytes("066b1b62dbfa033362092af468bf6cfabec230e7"), "target_type": "revision", }, }, } empty_snapshot = { "id": hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"), "branches": {}, } complete_snapshot = { - "id": hash_to_bytes("6e65b86363953b780d92b0a928f3e8fcdd10db36"), + "id": hash_to_bytes("a56ce2d81c190023bb99a3a36279307522cb85f6"), "branches": { b"directory": { "target": hash_to_bytes("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"), "target_type": "directory", }, b"directory2": { "target": hash_to_bytes("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"), "target_type": "directory", }, b"content": { "target": hash_to_bytes("fe95a46679d128ff167b7c55df5d02356c5a1ae1"), "target_type": "content", }, b"alias": {"target": b"revision", "target_type": "alias",}, b"revision": { "target": hash_to_bytes("aafb16d69fd30ff58afdd69036a26047f3aebdc6"), "target_type": "revision", }, b"release": { "target": hash_to_bytes("7045404f3d1c54e6473c71bbb716529fbad4be24"), "target_type": "release", }, b"snapshot": { "target": hash_to_bytes("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"), 
"target_type": "snapshot", }, b"dangling": None, }, } +snapshots = (snapshot, empty_snapshot, complete_snapshot) + + origin_metadata = { "origin_url": origin["url"], "discovery_date": datetime.datetime( 2015, 1, 1, 21, 0, 0, tzinfo=datetime.timezone.utc ), "authority": { "type": metadata_authority["type"], "url": metadata_authority["url"], }, "fetcher": { "name": metadata_fetcher["name"], "version": metadata_fetcher["version"], }, "format": "json", "metadata": b'{"foo": "bar"}', } origin_metadata2 = { "origin_url": origin["url"], "discovery_date": datetime.datetime( 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc ), "authority": { "type": metadata_authority["type"], "url": metadata_authority["url"], }, "fetcher": { "name": metadata_fetcher["name"], "version": metadata_fetcher["version"], }, "format": "yaml", "metadata": b"foo: bar", } origin_metadata3 = { "origin_url": origin["url"], "discovery_date": datetime.datetime( 2017, 1, 1, 22, 0, 0, tzinfo=datetime.timezone.utc ), "authority": { "type": metadata_authority2["type"], "url": metadata_authority2["url"], }, "fetcher": { "name": metadata_fetcher2["name"], "version": metadata_fetcher2["version"], }, "format": "yaml", "metadata": b"foo: bar", } person = { "name": b"John Doe", "email": b"john.doe@institute.org", "fullname": b"John Doe ", } + +objects = { + "content": contents, + "skipped_content": skipped_contents, + "directory": directories, + "revision": revisions, + "origin": origins, + "release": releases, + "snapshot": snapshots, +} diff --git a/version.txt b/version.txt index 7994c802..12fe044a 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -v0.6.0-0-g1600907 \ No newline at end of file +v0.7.0-0-g2d497ff \ No newline at end of file