diff --git a/.gitignore b/.gitignore index 43c4b92..6e3db8c 100644 --- a/.gitignore +++ b/.gitignore @@ -1,10 +1,13 @@ *.pyc *.sw? *~ .coverage .eggs/ __pycache__ *.egg-info/ +build/ +dist/ version.txt /sql/createdb-stamp /sql/filldb-stamp +.tox/ diff --git a/MANIFEST.in b/MANIFEST.in index e7c46fc..304d9f7 100644 --- a/MANIFEST.in +++ b/MANIFEST.in @@ -1,4 +1,8 @@ +include README.md include Makefile include requirements.txt include requirements-swh.txt include version.txt +recursive-include sql * +recursive-include swh/indexer/sql *.sql +recursive-include swh/indexer/data * diff --git a/PKG-INFO b/PKG-INFO index e985a5a..53d9ce3 100644 --- a/PKG-INFO +++ b/PKG-INFO @@ -1,10 +1,114 @@ -Metadata-Version: 1.0 +Metadata-Version: 2.1 Name: swh.indexer -Version: 0.0.52 +Version: 0.0.53 Summary: Software Heritage Content Indexer Home-page: https://forge.softwareheritage.org/diffusion/78/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN -Description: UNKNOWN +Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest +Project-URL: Funding, https://www.softwareheritage.org/donate +Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer +Description: swh-indexer + ============ + + Tools to compute multiple indexes on SWH's raw contents: + - content: + - mimetype + - ctags + - language + - fossology-license + - metadata + - revision: + - metadata + + ## Context + + SWH has currently stored around 5B contents. The table `content` + holds their checksums. + + Those contents are physically stored in an object storage (using + disks) and replicated in another. Those object storages are not + destined for reading yet. + + We are in the process to copy those contents over to azure's blob + storages. As such, we will use that opportunity to trigger the + computations on these contents once those have been copied over. 
+ + + ## Workers + + There are two types of workers: + - orchestrators (orchestrator, orchestrator-text) + - indexer (mimetype, language, ctags, fossology-license) + + ### Orchestrator + + + The orchestrator is in charge of dispatching a batch of sha1 hashes to + different indexers. + + Orchestration procedure: + - receive batch of sha1s + - split those batches into groups (according to setup) + - broadcast those group to indexers + + There are two types of orchestrators: + + - orchestrator (swh_indexer_orchestrator_content_all): Receives and + broadcast sha1 ids (of contents) to indexers (currently only the + mimetype indexer) + + - orchestrator-text (swh_indexer_orchestrator_content_text): Receives + batch of sha1 ids (of textual contents) and broadcast those to + indexers (currently language, ctags, and fossology-license + indexers). + + + ### Indexers + + + An indexer is in charge of the content retrieval and indexation of the + extracted information in the swh-indexer db. + + There are two types of indexers: + - content indexer: works with content sha1 hashes + - revision indexer: works with revision sha1 hashes + + Indexation procedure: + - receive batch of ids + - retrieve the associated data depending on object type + - compute for that object some index + - store the result to swh's storage + - (and possibly do some broadcast itself) + + Current content indexers: + + - mimetype (queue swh_indexer_content_mimetype): compute the mimetype, + filter out the textual contents and broadcast the list to the + orchestrator-text + + - language (queue swh_indexer_content_language): detect the programming language + + - ctags (queue swh_indexer_content_ctags): try and compute tags + information + + - fossology-license (queue swh_indexer_fossology_license): try and + compute the license + + - metadata : translate file into translated_metadata dict + + Current revision indexers: + + - metadata: detects files containing metadata and retrieves translated_metadata + in 
content_metadata table in storage or run content indexer to translate + files. + Platform: UNKNOWN +Classifier: Programming Language :: Python :: 3 +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) +Classifier: Operating System :: OS Independent +Classifier: Development Status :: 5 - Production/Stable +Description-Content-Type: text/markdown +Provides-Extra: testing diff --git a/README b/README.md similarity index 73% rename from README rename to README.md index 5c16501..562f028 100644 --- a/README +++ b/README.md @@ -1,82 +1,94 @@ swh-indexer -=========== +============ Tools to compute multiple indexes on SWH's raw contents: - content: - mimetype - ctags - language - fossology-license - metadata - revision: - metadata -# Context +## Context -SWH has currently stored around 3B contents. The table `content` +SWH has currently stored around 5B contents. The table `content` holds their checksums. Those contents are physically stored in an object storage (using disks) and replicated in another. Those object storages are not destined for reading yet. We are in the process to copy those contents over to azure's blob storages. As such, we will use that opportunity to trigger the computations on these contents once those have been copied over. -# Workers +## Workers -There exists 2 kinds: +There are two types of workers: - orchestrators (orchestrator, orchestrator-text) - indexer (mimetype, language, ctags, fossology-license) -## Orchestrator +### Orchestrator -Orchestrators: + +The orchestrator is in charge of dispatching a batch of sha1 hashes to +different indexers. 
+ +Orchestration procedure: - receive batch of sha1s -- split those batches -- broadcast those to indexers +- split those batches into groups (according to setup) +- broadcast those groups to indexers -There are 2 sorts: +There are two types of orchestrators: - orchestrator (swh_indexer_orchestrator_content_all): Receives and broadcast sha1 ids (of contents) to indexers (currently only the mimetype indexer) - orchestrator-text (swh_indexer_orchestrator_content_text): Receives batch of sha1 ids (of textual contents) and broadcast those to indexers (currently language, ctags, and fossology-license indexers). -## Indexers +### Indexers + + +An indexer is in charge of the content retrieval and indexation of the +extracted information in the swh-indexer db. + +There are two types of indexers: + - content indexer: works with content sha1 hashes + - revision indexer: works with revision sha1 hashes -Indexers: +Indexation procedure: - receive batch of ids - retrieve the associated data depending on object type - compute for that object some index - store the result to swh's storage - (and possibly do some broadcast itself) Current content indexers: - mimetype (queue swh_indexer_content_mimetype): compute the mimetype, filter out the textual contents and broadcast the list to the orchestrator-text - language (queue swh_indexer_content_language): detect the programming language - ctags (queue swh_indexer_content_ctags): try and compute tags information - fossology-license (queue swh_indexer_fossology_license): try and compute the license - metadata : translate file into translated_metadata dict Current revision indexers: - metadata: detects files containing metadata and retrieves translated_metadata in content_metadata table in storage or run content indexer to translate files. 
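The indexation procedure described in the README hunks above (receive a batch of ids, retrieve the associated data, compute an index, store the result) can be sketched as a plain loop. This is an illustrative sketch only: `index_batch`, `fetch`, `compute`, and `store` are hypothetical names, not part of the actual swh-indexer API.

```python
# Illustrative sketch of the indexation procedure; all names are
# hypothetical, not the real swh-indexer classes or functions.

def index_batch(ids, fetch, compute, store):
    """Index a batch of object ids: fetch each object, compute an index
    for it, then bulk-store the results. Missing objects are skipped."""
    results = []
    for obj_id in ids:
        data = fetch(obj_id)        # retrieve the associated data
        if data is None:            # object not found: skip it
            continue
        results.append({'id': obj_id, 'index': compute(data)})
    store(results)                  # persist the results in bulk
    return results


# Toy run: a dict stands in for the object storage, a list for the
# indexer storage, and the "index" is a crude text/binary guess.
objects = {'sha1-a': b'#!/bin/sh\necho hi', 'sha1-b': b'\x7fELF\x02'}
stored = []
index_batch(['sha1-a', 'sha1-b', 'sha1-missing'],
            fetch=objects.get,
            compute=lambda d: 'text' if d.startswith(b'#!') else 'binary',
            store=stored.extend)
print(stored)
# → [{'id': 'sha1-a', 'index': 'text'}, {'id': 'sha1-b', 'index': 'binary'}]
```

In the real workers, those three roles are filled by the object storage (retrieval), the per-indexer computation, and the indexer storage (persistence).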
diff --git a/debian/control b/debian/control index 4310c51..dec0b2f 100644 --- a/debian/control +++ b/debian/control @@ -1,47 +1,47 @@ Source: swh-indexer Maintainer: Software Heritage developers Section: python Priority: optional Build-Depends: debhelper (>= 9), dh-python (>= 2), python3-all, python3-chardet (>= 2.3.0~), python3-click, - python3-nose, + python3-pytest, python3-pygments, python3-magic, python3-setuptools, - python3-swh.core (>= 0.0.40~), + python3-swh.core (>= 0.0.44~), python3-swh.model (>= 0.0.15~), python3-swh.objstorage (>= 0.0.13~), - python3-swh.scheduler (>= 0.0.14~), + python3-swh.scheduler (>= 0.0.33~), python3-swh.storage (>= 0.0.102~), python3-vcversioner Standards-Version: 3.9.6 Homepage: https://forge.softwareheritage.org/diffusion/78/ Package: python3-swh.indexer.storage Architecture: all -Depends: python3-swh.core (>= 0.0.40~), +Depends: python3-swh.core (>= 0.0.44~), python3-swh.model (>= 0.0.15~), python3-swh.objstorage (>= 0.0.13~), - python3-swh.scheduler (>= 0.0.14~), + python3-swh.scheduler (>= 0.0.33~), python3-swh.storage (>= 0.0.102~), ${misc:Depends}, ${python3:Depends} Description: Software Heritage Content Indexer Storage Package: python3-swh.indexer Architecture: all Depends: python3-swh.scheduler (>= 0.0.14~), - python3-swh.core (>= 0.0.40~), + python3-swh.core (>= 0.0.44~), python3-swh.model (>= 0.0.15~), python3-swh.objstorage (>= 0.0.13~), - python3-swh.scheduler (>= 0.0.14~), + python3-swh.scheduler (>= 0.0.33~), python3-swh.storage (>= 0.0.102~), python3-swh.indexer.storage (= ${binary:Version}), universal-ctags (>= 0.8~), fossology-nomossa (>= 3.1~), ${misc:Depends}, ${python3:Depends} Description: Software Heritage Content Indexer diff --git a/debian/rules b/debian/rules index 0b0f59f..33bf8bb 100755 --- a/debian/rules +++ b/debian/rules @@ -1,16 +1,16 @@ #!/usr/bin/make -f export PYBUILD_NAME=swh.indexer -export PYBUILD_TEST_ARGS=-sv -a !db,!fs +export PYBUILD_TEST_ARGS=-m 'not db and not fs' %: dh $@ --with 
python3 --buildsystem=pybuild override_dh_install: dh_install rm -v $(CURDIR)/debian/python3-*/usr/lib/python*/dist-packages/swh/__init__.py for pyvers in $(shell py3versions -vr); do \ mkdir -p $(CURDIR)/debian/python3-swh.indexer.storage/usr/lib/python$$pyvers/dist-packages/swh/indexer/storage/ ; \ mv $(CURDIR)/debian/python3-swh.indexer/usr/lib/python$$pyvers/dist-packages/swh/indexer/storage/* \ $(CURDIR)/debian/python3-swh.indexer.storage/usr/lib/python$$pyvers/dist-packages/swh/indexer/storage/ ; \ done diff --git a/docs/.gitignore b/docs/.gitignore index 58a761e..f6b5c55 100644 --- a/docs/.gitignore +++ b/docs/.gitignore @@ -1,3 +1,4 @@ _build/ apidoc/ *-stamp +README.md diff --git a/docs/dev-info.rst b/docs/dev-info.rst new file mode 100644 index 0000000..493b102 --- /dev/null +++ b/docs/dev-info.rst @@ -0,0 +1,206 @@ +Hacking on swh-indexer +====================== + +This tutorial will guide you through hacking on swh-indexer. +If you do not have a local copy of the Software Heritage archive, go to the +`getting started tutorial +`_. + +Configuration files +------------------- +You will need the following YAML configuration files to run the swh-indexer +commands: + +- Orchestrator at + ``~/.config/swh/indexer/orchestrator.yml`` + +.. code-block:: yaml + + indexers: + mimetype: + check_presence: false + batch_size: 100 + +- Orchestrator-text at + ``~/.config/swh/indexer/orchestrator-text.yml`` + +.. code-block:: yaml + + indexers: + # language: + # batch_size: 10 + # check_presence: false + fossology_license: + batch_size: 10 + check_presence: false + # ctags: + # batch_size: 2 + # check_presence: false + +- Mimetype indexer at + ``~/.config/swh/indexer/mimetype.yml`` + +.. 
code-block:: yaml + + # storage to read sha1's metadata (path) + # storage: + # cls: local + # args: + # db: "service=swh-dev" + # objstorage: + # cls: pathslicing + # args: + # root: /home/storage/swh-storage/ + # slicing: 0:1/1:5 + + storage: + cls: remote + args: + url: http://localhost:5002/ + + indexer_storage: + cls: remote + args: + url: http://localhost:5007/ + + # storage to read sha1's content + # adapt this to your need + # locally: this needs to match your storage's setup + objstorage: + cls: pathslicing + args: + slicing: 0:1/1:5 + root: /home/storage/swh-storage/ + + destination_task: swh.indexer.tasks.SWHOrchestratorTextContentsTask + rescheduling_task: swh.indexer.tasks.SWHContentMimetypeTask + + +- Fossology indexer at + ``~/.config/swh/indexer/fossology_license.yml`` + +.. code-block:: yaml + + # storage to read sha1's metadata (path) + # storage: + # cls: local + # args: + # db: "service=swh-dev" + # objstorage: + # cls: pathslicing + # args: + # root: /home/storage/swh-storage/ + # slicing: 0:1/1:5 + + storage: + cls: remote + args: + url: http://localhost:5002/ + + indexer_storage: + cls: remote + args: + url: http://localhost:5007/ + + # storage to read sha1's content + # adapt this to your need + # locally: this needs to match your storage's setup + objstorage: + cls: pathslicing + args: + slicing: 0:1/1:5 + root: /home/storage/swh-storage/ + + workdir: /tmp/swh/worker.indexer/license/ + + tools: + name: 'nomos' + version: '3.1.0rc2-31-ga2cbb8c' + configuration: + command_line: 'nomossa ' + + +- Worker at + ``~/.config/swh/worker.yml`` + +.. 
code-block:: yaml + + task_broker: amqp://guest@localhost// + task_modules: + - swh.loader.svn.tasks + - swh.loader.tar.tasks + - swh.loader.git.tasks + - swh.storage.archiver.tasks + - swh.indexer.tasks + - swh.indexer.orchestrator + task_queues: + - swh_loader_svn + - swh_loader_tar + - swh_reader_git_to_azure_archive + - swh_storage_archive_worker_to_backend + - swh_indexer_orchestrator_content_all + - swh_indexer_orchestrator_content_text + - swh_indexer_content_mimetype + - swh_indexer_content_language + - swh_indexer_content_ctags + - swh_indexer_content_fossology_license + - swh_loader_svn_mount_and_load + - swh_loader_git_express + - swh_loader_git_archive + - swh_loader_svn_archive + task_soft_time_limit: 0 + + +Database +-------- + +swh-indexer uses a database to store the indexed content. The default +db is expected to be called ``swh-indexer-dev``. + +Create or add ``swh-dev`` and ``swh-indexer-dev`` to +the ``~/.pg_service.conf`` and ``~/.pgpass`` files, which are PostgreSQL's +configuration files. 
+ +Add data to local DB +-------------------- +From within the ``swh-environment``, run the following command:: + + make rebuild-testdata + +and fetch some real data to work with, using:: + + python3 -m swh.loader.git.updater --origin-url + +Then you can list all content files using this script:: + + #!/usr/bin/env bash + + psql service=swh-dev -c "copy (select sha1 from content) to stdout" | sed -e 's/^\\\\x//g' + +Run the indexers +----------------- +Use the list of contents to feed the indexers with the +following command:: + + ./list-sha1.sh | python3 -m swh.indexer.producer --batch 100 --task-name orchestrator_all + +Activate the workers +-------------------- +To send messages to different queues using rabbitmq +(which should already be installed through the dependency installation), +run the following command in a dedicated terminal:: + + python3 -m celery worker --app=swh.scheduler.celery_backend.config.app \ + --pool=prefork \ + --concurrency=1 \ + -Ofair \ + --loglevel=info \ + --without-gossip \ + --without-mingle \ + --without-heartbeat 2>&1 + +With this command rabbitmq will consume messages using the worker +configuration file. + +Note: for the fossology_license indexer, you need the package fossology-nomossa +which is in our `public debian repository +`_. diff --git a/docs/index.rst b/docs/index.rst index c4d60d2..9c7dd9b 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -1,17 +1,23 @@ .. _swh-indexer: -Software Heritage - Development Documentation -============================================= +Software Heritage - Indexer +=========================== + +Tools and workers used to mine the content of the archive and extract derived +information from archive source code artifacts. + .. 
toctree:: - :maxdepth: 2 + :maxdepth: 1 :caption: Contents: + README + dev-info.rst Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` diff --git a/pytest.ini b/pytest.ini new file mode 100644 index 0000000..afa4cf3 --- /dev/null +++ b/pytest.ini @@ -0,0 +1,2 @@ +[pytest] +norecursedirs = docs diff --git a/requirements-swh.txt b/requirements-swh.txt index e5b4b25..39e716d 100644 --- a/requirements-swh.txt +++ b/requirements-swh.txt @@ -1,5 +1,5 @@ -swh.core >= 0.0.40 +swh.core >= 0.0.44 swh.model >= 0.0.15 swh.objstorage >= 0.0.13 -swh.scheduler >= 0.0.14 +swh.scheduler >= 0.0.33 swh.storage >= 0.0.102 diff --git a/requirements-test.txt b/requirements-test.txt new file mode 100644 index 0000000..e079f8a --- /dev/null +++ b/requirements-test.txt @@ -0,0 +1 @@ +pytest diff --git a/requirements.txt b/requirements.txt index b97c809..dbade70 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,9 +1,5 @@ -# Add here external Python modules dependencies, one per line. Module names -# should match https://pypi.python.org/pypi names. 
For the full spec or -# dependency lines, see https://pip.readthedocs.org/en/1.1/requirements.html vcversioner pygments click chardet - file_magic diff --git a/setup.py b/setup.py old mode 100644 new mode 100755 index ed1f85e..71bffeb --- a/setup.py +++ b/setup.py @@ -1,28 +1,65 @@ +#!/usr/bin/env python3 +# Copyright (C) 2015-2018 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public License version 3, or any later version +# See top-level LICENSE file for more information + from setuptools import setup, find_packages +from os import path +from io import open + +here = path.abspath(path.dirname(__file__)) + +# Get the long description from the README file +with open(path.join(here, 'README.md'), encoding='utf-8') as f: + long_description = f.read() + + +def parse_requirements(name=None): + if name: + reqf = 'requirements-%s.txt' % name + else: + reqf = 'requirements.txt' -def parse_requirements(): requirements = [] - for reqf in ('requirements.txt', 'requirements-swh.txt'): - with open(reqf) as f: - for line in f.readlines(): - line = line.strip() - if not line or line.startswith('#'): - continue - requirements.append(line) + if not path.exists(reqf): + return requirements + + with open(reqf) as f: + for line in f.readlines(): + line = line.strip() + if not line or line.startswith('#'): + continue + requirements.append(line) return requirements setup( name='swh.indexer', description='Software Heritage Content Indexer', + long_description=long_description, + long_description_content_type='text/markdown', author='Software Heritage developers', author_email='swh-devel@inria.fr', url='https://forge.softwareheritage.org/diffusion/78/', packages=find_packages(), scripts=[], - install_requires=parse_requirements(), + install_requires=parse_requirements() + parse_requirements('swh'), setup_requires=['vcversioner'], + extras_require={'testing': parse_requirements('test')}, vcversioner={}, 
include_package_data=True, + classifiers=[ + "Programming Language :: Python :: 3", + "Intended Audience :: Developers", + "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", + "Operating System :: OS Independent", + "Development Status :: 5 - Production/Stable", + ], + project_urls={ + 'Bug Reports': 'https://forge.softwareheritage.org/maniphest', + 'Funding': 'https://www.softwareheritage.org/donate', + 'Source': 'https://forge.softwareheritage.org/source/swh-indexer', + }, ) diff --git a/sql/Makefile b/sql/Makefile deleted file mode 100644 index d52d181..0000000 --- a/sql/Makefile +++ /dev/null @@ -1,43 +0,0 @@ -# Depends: postgresql-client, postgresql-autodoc - -DBNAME = softwareheritage-indexer-dev -DOCDIR = autodoc - -SQL_INIT = swh-init.sql -SQL_ENUMS = swh-enums.sql -SQL_SCHEMA = swh-schema.sql -SQL_FUNC = swh-func.sql -SQL_DATA = swh-data.sql -SQL_INDEX = swh-indexes.sql -SQLS = $(SQL_INIT) $(SQL_ENUMS) $(SQL_SCHEMA) $(SQL_FUNC) $(SQL_INDEX) $(SQL_DATA) - -PSQL_BIN = psql -PSQL_FLAGS = --echo-all -X -v ON_ERROR_STOP=1 -PSQL = $(PSQL_BIN) $(PSQL_FLAGS) - -all: - -createdb: createdb-stamp -createdb-stamp: $(SQL_INIT) - createdb $(DBNAME) - touch $@ - -filldb: filldb-stamp -filldb-stamp: createdb-stamp - cat $(SQLS) | $(PSQL) $(DBNAME) - touch $@ - -dropdb: - -dropdb $(DBNAME) - -dumpdb: swh-indexer.dump -swh-indexer.dump: filldb-stamp - pg_dump -Fc $(DBNAME) > $@ - -clean: - rm -rf *-stamp $(DOCDIR)/ - -distclean: clean dropdb - rm -f swh-indexer.dump - -.PHONY: all initdb createdb dropdb doc clean diff --git a/swh/indexer/tests/__init__.py b/sql/createdb-stamp similarity index 100% copy from swh/indexer/tests/__init__.py copy to sql/createdb-stamp diff --git a/sql/doc/json/.gitignore b/sql/doc/json/.gitignore new file mode 100644 index 0000000..c337aa9 --- /dev/null +++ b/sql/doc/json/.gitignore @@ -0,0 +1 @@ +*-stamp diff --git a/sql/doc/json/Makefile b/sql/doc/json/Makefile new file mode 100644 index 0000000..5d983b8 --- /dev/null +++ 
b/sql/doc/json/Makefile @@ -0,0 +1,19 @@ +# Depends: json-glib-tools + +JSONVAL = json-glib-validate +JSONS = $(wildcard *.json) + +all: validate +check: validate +test: validate + +validate: validate-stamp +validate-stamp: $(JSONS) + make $(patsubst %,validate/%,$?) + touch $@ + +validate/%: + $(JSONVAL) $* + +clean: + rm -f validate-stamp diff --git a/sql/doc/json/indexer_configuration.tool_configuration.schema.json b/sql/doc/json/indexer_configuration.tool_configuration.schema.json new file mode 100644 index 0000000..28396b4 --- /dev/null +++ b/sql/doc/json/indexer_configuration.tool_configuration.schema.json @@ -0,0 +1,11 @@ +{ + "$schema": "http://json-schema.org/schema#", + "id": "http://softwareheritage.org/schemas/indexer_configuration.tool_configuration.schema.json", + + "type": "object", + "properties": { + "command_line": { + "type": "string" + } + } +} diff --git a/sql/json/revision_metadata.translated_metadata.json b/sql/doc/json/revision_metadata.translated_metadata.json similarity index 95% copy from sql/json/revision_metadata.translated_metadata.json copy to sql/doc/json/revision_metadata.translated_metadata.json index 1806fc7..4b6814d 100644 --- a/sql/json/revision_metadata.translated_metadata.json +++ b/sql/doc/json/revision_metadata.translated_metadata.json @@ -1,59 +1,56 @@ { "$schema": "http://json-schema.org/schema#", "id": "http://softwareheritage.org/schemas/revision_metadata.translated_metadata.schema.json", "type": "object", "properties": { "developmentStatus": { "type": "list" }, "version": { "type": "list" }, "operatingSystem": { "type": "list" }, "description": { "type": "list" }, "keywords": { "type": "list" }, "issueTracker": { "type": "list" }, "name": { "type": "list" }, "author": { "type": "list" }, "relatedLink": { "type": "list" }, "url": { "type": "list" }, - "type": { - "type": "list" - }, "license": { "type": "list" }, "maintainer": { "type": "list" }, "email": { "type": "list" }, "softwareRequirements": { "type": "list" }, 
"identifier": { "type": "list" }, "codeRepository": { "type": "list" }, } } diff --git a/swh/indexer/tests/__init__.py b/sql/filldb-stamp similarity index 100% copy from swh/indexer/tests/__init__.py copy to sql/filldb-stamp diff --git a/sql/json/revision_metadata.translated_metadata.json b/sql/json/revision_metadata.translated_metadata.json index 1806fc7..4b6814d 100644 --- a/sql/json/revision_metadata.translated_metadata.json +++ b/sql/json/revision_metadata.translated_metadata.json @@ -1,59 +1,56 @@ { "$schema": "http://json-schema.org/schema#", "id": "http://softwareheritage.org/schemas/revision_metadata.translated_metadata.schema.json", "type": "object", "properties": { "developmentStatus": { "type": "list" }, "version": { "type": "list" }, "operatingSystem": { "type": "list" }, "description": { "type": "list" }, "keywords": { "type": "list" }, "issueTracker": { "type": "list" }, "name": { "type": "list" }, "author": { "type": "list" }, "relatedLink": { "type": "list" }, "url": { "type": "list" }, - "type": { - "type": "list" - }, "license": { "type": "list" }, "maintainer": { "type": "list" }, "email": { "type": "list" }, "softwareRequirements": { "type": "list" }, "identifier": { "type": "list" }, "codeRepository": { "type": "list" }, } } diff --git a/sql/upgrades/116.sql b/sql/upgrades/116.sql new file mode 100644 index 0000000..991f81f --- /dev/null +++ b/sql/upgrades/116.sql @@ -0,0 +1,81 @@ +-- SWH Indexer DB schema upgrade +-- from_version: 115 +-- to_version: 116 +-- description: + +insert into dbversion(version, release, description) +values(116, now(), 'Work In Progress'); + +drop table origin_metadata_translation; + +create table origin_intrinsic_metadata( + origin_id bigserial not null, + metadata jsonb, + indexer_configuration_id bigint not null, + from_revision sha1_git not null +); + +comment on table origin_intrinsic_metadata is 'keeps intrinsic metadata for an origin'; +comment on column origin_intrinsic_metadata.origin_id is 'the entry id in 
origin'; +comment on column origin_intrinsic_metadata.metadata is 'metadata extracted from a revision'; +comment on column origin_intrinsic_metadata.indexer_configuration_id is 'tool used to generate this metadata'; +comment on column origin_intrinsic_metadata.from_revision is 'sha1 of the revision this metadata was copied from.'; + +-- create a temporary table for retrieving origin_intrinsic_metadata +create or replace function swh_mktemp_origin_intrinsic_metadata() + returns void + language sql +as $$ + create temporary table tmp_origin_intrinsic_metadata ( + like origin_intrinsic_metadata including defaults + ) on commit drop; +$$; + +comment on function swh_mktemp_origin_intrinsic_metadata() is 'Helper table to add origin intrinsic metadata'; + + +-- add tmp_origin_intrinsic_metadata entries to origin_intrinsic_metadata, +-- overwriting duplicates if conflict_update is true, skipping duplicates +-- otherwise. +-- +-- If filtering duplicates is in order, the call to +-- swh_origin_intrinsic_metadata_missing must take place before calling this +-- function. +-- +-- operates in bulk: 0. swh_mktemp_origin_intrinsic_metadata(), 1. COPY to +-- tmp_origin_intrinsic_metadata, 2. 
call this function +create or replace function swh_origin_intrinsic_metadata_add( + conflict_update boolean) + returns void + language plpgsql +as $$ +begin + if conflict_update then + insert into origin_intrinsic_metadata (origin_id, metadata, indexer_configuration_id, from_revision) + select origin_id, metadata, indexer_configuration_id, from_revision + from tmp_origin_intrinsic_metadata + on conflict(origin_id, indexer_configuration_id) + do update set metadata = excluded.metadata; + + else + insert into origin_intrinsic_metadata (origin_id, metadata, indexer_configuration_id, from_revision) + select origin_id, metadata, indexer_configuration_id, from_revision + from tmp_origin_intrinsic_metadata + on conflict(origin_id, indexer_configuration_id) + do nothing; + end if; + return; +end +$$; + +comment on function swh_origin_intrinsic_metadata_add(boolean) IS 'Add new origin intrinsic metadata'; + + +-- origin_intrinsic_metadata +create unique index origin_intrinsic_metadata_pkey on origin_intrinsic_metadata(origin_id, indexer_configuration_id); +alter table origin_intrinsic_metadata add primary key using index origin_intrinsic_metadata_pkey; + +alter table origin_intrinsic_metadata add constraint origin_intrinsic_metadata_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid; +alter table origin_intrinsic_metadata validate constraint origin_intrinsic_metadata_indexer_configuration_id_fkey; +alter table origin_intrinsic_metadata add constraint origin_intrinsic_metadata_revision_metadata_fkey foreign key (from_revision, indexer_configuration_id) references revision_metadata(id, indexer_configuration_id) not valid; +alter table origin_intrinsic_metadata validate constraint origin_intrinsic_metadata_revision_metadata_fkey; diff --git a/swh.indexer.egg-info/PKG-INFO b/swh.indexer.egg-info/PKG-INFO index e985a5a..53d9ce3 100644 --- a/swh.indexer.egg-info/PKG-INFO +++ b/swh.indexer.egg-info/PKG-INFO @@ -1,10 
+1,114 @@ -Metadata-Version: 1.0 +Metadata-Version: 2.1 Name: swh.indexer -Version: 0.0.52 +Version: 0.0.53 Summary: Software Heritage Content Indexer Home-page: https://forge.softwareheritage.org/diffusion/78/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN -Description: UNKNOWN +Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest +Project-URL: Funding, https://www.softwareheritage.org/donate +Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer +Description: swh-indexer + ============ + + Tools to compute multiple indexes on SWH's raw contents: + - content: + - mimetype + - ctags + - language + - fossology-license + - metadata + - revision: + - metadata + + ## Context + + SWH has currently stored around 5B contents. The table `content` + holds their checksums. + + Those contents are physically stored in an object storage (using + disks) and replicated in another. Those object storages are not + destined for reading yet. + + We are in the process to copy those contents over to azure's blob + storages. As such, we will use that opportunity to trigger the + computations on these contents once those have been copied over. + + + ## Workers + + There are two types of workers: + - orchestrators (orchestrator, orchestrator-text) + - indexer (mimetype, language, ctags, fossology-license) + + ### Orchestrator + + + The orchestrator is in charge of dispatching a batch of sha1 hashes to + different indexers. 
+ + Orchestration procedure: + - receive batch of sha1s + - split those batches into groups (according to setup) + - broadcast those group to indexers + + There are two types of orchestrators: + + - orchestrator (swh_indexer_orchestrator_content_all): Receives and + broadcast sha1 ids (of contents) to indexers (currently only the + mimetype indexer) + + - orchestrator-text (swh_indexer_orchestrator_content_text): Receives + batch of sha1 ids (of textual contents) and broadcast those to + indexers (currently language, ctags, and fossology-license + indexers). + + + ### Indexers + + + An indexer is in charge of the content retrieval and indexation of the + extracted information in the swh-indexer db. + + There are two types of indexers: + - content indexer: works with content sha1 hashes + - revision indexer: works with revision sha1 hashes + + Indexation procedure: + - receive batch of ids + - retrieve the associated data depending on object type + - compute for that object some index + - store the result to swh's storage + - (and possibly do some broadcast itself) + + Current content indexers: + + - mimetype (queue swh_indexer_content_mimetype): compute the mimetype, + filter out the textual contents and broadcast the list to the + orchestrator-text + + - language (queue swh_indexer_content_language): detect the programming language + + - ctags (queue swh_indexer_content_ctags): try and compute tags + information + + - fossology-license (queue swh_indexer_fossology_license): try and + compute the license + + - metadata : translate file into translated_metadata dict + + Current revision indexers: + + - metadata: detects files containing metadata and retrieves translated_metadata + in content_metadata table in storage or run content indexer to translate + files. 
+ Platform: UNKNOWN +Classifier: Programming Language :: Python :: 3 +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) +Classifier: Operating System :: OS Independent +Classifier: Development Status :: 5 - Production/Stable +Description-Content-Type: text/markdown +Provides-Extra: testing diff --git a/swh.indexer.egg-info/SOURCES.txt b/swh.indexer.egg-info/SOURCES.txt index a59988b..9f7dfbf 100644 --- a/swh.indexer.egg-info/SOURCES.txt +++ b/swh.indexer.egg-info/SOURCES.txt @@ -1,73 +1,89 @@ .gitignore AUTHORS CONTRIBUTORS LICENSE MANIFEST.in Makefile -README +README.md codemeta.json +pytest.ini requirements-swh.txt +requirements-test.txt requirements.txt setup.py +tox.ini version.txt debian/changelog debian/compat debian/control debian/copyright debian/rules debian/source/format docs/.gitignore docs/Makefile docs/conf.py +docs/dev-info.rst docs/index.rst docs/_static/.placeholder docs/_templates/.placeholder -sql/Makefile -sql/swh-data.sql -sql/swh-enums.sql -sql/swh-func.sql -sql/swh-indexes.sql -sql/swh-init.sql -sql/swh-schema.sql +sql/createdb-stamp +sql/filldb-stamp sql/bin/db-upgrade sql/bin/dot_add_content sql/doc/json +sql/doc/json/.gitignore +sql/doc/json/Makefile +sql/doc/json/indexer_configuration.tool_configuration.schema.json +sql/doc/json/revision_metadata.translated_metadata.json sql/json/.gitignore sql/json/Makefile sql/json/indexer_configuration.tool_configuration.schema.json sql/json/revision_metadata.translated_metadata.json sql/upgrades/115.sql +sql/upgrades/116.sql swh/__init__.py swh.indexer.egg-info/PKG-INFO swh.indexer.egg-info/SOURCES.txt swh.indexer.egg-info/dependency_links.txt swh.indexer.egg-info/requires.txt swh.indexer.egg-info/top_level.txt swh/indexer/__init__.py swh/indexer/ctags.py swh/indexer/fossology_license.py swh/indexer/indexer.py swh/indexer/language.py swh/indexer/metadata.py swh/indexer/metadata_detector.py swh/indexer/metadata_dictionary.py 
swh/indexer/mimetype.py swh/indexer/orchestrator.py +swh/indexer/origin_head.py swh/indexer/producer.py swh/indexer/rehash.py swh/indexer/tasks.py +swh/indexer/data/codemeta/LICENSE +swh/indexer/data/codemeta/crosswalk.csv +swh/indexer/sql/10-swh-init.sql +swh/indexer/sql/20-swh-enums.sql +swh/indexer/sql/30-swh-schema.sql +swh/indexer/sql/40-swh-func.sql +swh/indexer/sql/50-swh-data.sql +swh/indexer/sql/60-swh-indexes.sql swh/indexer/storage/__init__.py swh/indexer/storage/converters.py swh/indexer/storage/db.py swh/indexer/storage/api/__init__.py swh/indexer/storage/api/client.py swh/indexer/storage/api/server.py swh/indexer/tests/__init__.py swh/indexer/tests/test_language.py swh/indexer/tests/test_metadata.py swh/indexer/tests/test_mimetype.py +swh/indexer/tests/test_orchestrator.py +swh/indexer/tests/test_origin_head.py +swh/indexer/tests/test_origin_metadata.py swh/indexer/tests/test_utils.py swh/indexer/tests/storage/__init__.py swh/indexer/tests/storage/test_api_client.py swh/indexer/tests/storage/test_converters.py swh/indexer/tests/storage/test_storage.py \ No newline at end of file diff --git a/swh.indexer.egg-info/requires.txt b/swh.indexer.egg-info/requires.txt index faaa350..cbaafb5 100644 --- a/swh.indexer.egg-info/requires.txt +++ b/swh.indexer.egg-info/requires.txt @@ -1,10 +1,13 @@ chardet click file_magic pygments -swh.core>=0.0.40 +swh.core>=0.0.44 swh.model>=0.0.15 swh.objstorage>=0.0.13 -swh.scheduler>=0.0.14 +swh.scheduler>=0.0.33 swh.storage>=0.0.102 vcversioner + +[testing] +pytest diff --git a/swh/indexer/__init__.py b/swh/indexer/__init__.py index a5f3dfd..6091175 100644 --- a/swh/indexer/__init__.py +++ b/swh/indexer/__init__.py @@ -1,29 +1,32 @@ # Copyright (C) 2016-2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information INDEXER_CLASSES = { 'mimetype': 
'swh.indexer.mimetype.ContentMimetypeIndexer', 'language': 'swh.indexer.language.ContentLanguageIndexer', 'ctags': 'swh.indexer.ctags.CtagsIndexer', 'fossology_license': 'swh.indexer.fossology_license.ContentFossologyLicenseIndexer', } TASK_NAMES = { - 'orchestrator_all': 'swh.indexer.tasks.SWHOrchestratorAllContentsTask', - 'orchestrator_text': 'swh.indexer.tasks.SWHOrchestratorTextContentsTask', - 'mimetype': 'swh.indexer.tasks.SWHContentMimetypeTask', - 'language': 'swh.indexer.tasks.SWHContentLanguageTask', - 'ctags': 'swh.indexer.tasks.SWHCtagsTask', - 'fossology_license': 'swh.indexer.tasks.SWHContentFossologyLicenseTask', - 'rehash': 'swh.indexer.tasks.SWHRecomputeChecksumsTask', + 'orchestrator_all': 'swh.indexer.tasks.OrchestratorAllContents', + 'orchestrator_text': 'swh.indexer.tasks.OrchestratorTextContents', + 'mimetype': 'swh.indexer.tasks.ContentMimetype', + 'language': 'swh.indexer.tasks.ContentLanguage', + 'ctags': 'swh.indexer.tasks.Ctags', + 'fossology_license': 'swh.indexer.tasks.ContentFossologyLicense', + 'rehash': 'swh.indexer.tasks.RecomputeChecksums', + 'revision_metadata': 'swh.indexer.tasks.RevisionMetadata', + 'origin_intrinsic_metadata': + 'swh.indexer.tasks.OriginMetadata', } __all__ = [ 'INDEXER_CLASSES', 'TASK_NAMES', ] diff --git a/swh/indexer/data/codemeta/LICENSE b/swh/indexer/data/codemeta/LICENSE new file mode 100644 index 0000000..b16ce70 --- /dev/null +++ b/swh/indexer/data/codemeta/LICENSE @@ -0,0 +1,178 @@ +Copyright 2014-2018, The CodeMeta contributors https://github.com/codemeta/codemeta/blob/master/CONTRIBUTORS.MD + +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. 
+ + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of 
the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS diff --git a/swh/indexer/data/codemeta/crosswalk.csv b/swh/indexer/data/codemeta/crosswalk.csv new file mode 100644 index 0000000..3fc65de --- /dev/null +++ b/swh/indexer/data/codemeta/crosswalk.csv @@ -0,0 +1,77 @@ +Parent Type,Property,Type,Description,codemeta-V1,DataCite,OntoSoft,Zenodo,GitHub,Figshare,Software Ontology,Software Discovery Index,Dublin Core,R Package Description,Debian Package,Python Distutils (PyPI),Trove Software Map,Perl Module Description (CPAN::Meta),NodeJS,Java (Maven),Octave,Ruby Gem,ASCL,DOAP,Wikidata,Citation File Format Core (CFF-Core) 1.0.2 +schema:SoftwareSourceCode,codeRepository,URL,"Link to the repository where the un-compiled, human readable code and related code is located (SVN, github, CodePlex).",codeRepository,,,relatedLink,html_url,relatedLink,,,,URL,HomePage,url,,resouces.repository,repository,repositories,,homepage,site_list,repository,source code repository,repository-code +schema:SoftwareSourceCode,programmingLanguage,ComputerLanguage or Text,The computer programming language.,programmingLanguage,Format,hasProgrammingLanguage,,languages_url,,programming language,,,,,classifiers['Programming Language'],Programming Language,,,,,,,programming-language,programming language, +schema:SoftwareSourceCode,runtimePlatform,Text,"Runtime platform or script interpreter dependencies (Example - Java v1, Python2.3, .Net Framework 3.0). Supersedes runtime.",,,,,,,,,,,,,,,,,,platform,,platform,, +schema:SoftwareSourceCode,targetProduct,SoftwareApplication,"Target Operating System / Product to which the code applies. If applies to several versions, just the product name can be used.",,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,applicationCategory,Text or URL,"Type of software application, e.g. 
'Game, Multimedia'.",,,hasSoftwareCategory,communities,,categories,,,,,,classifiers['Topic'],Topic,Categories,,,Categories,,,,, +schema:SoftwareApplication,applicationSubCategory,Text or URL,"Subcategory of the application, e.g. 'Arcade Game'.",,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,downloadUrl,URL,"If the file can be downloaded, URL to download the binary.",downloadLink,,,,archive_url,,,,,,,,,,,,,,,download-page,,repository-artifact +schema:SoftwareApplication,fileSize,Text,"Size of the application / package (e.g. 18MB). In the absence of a unit (MB, KB etc.), KB will be assumed.",,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,installUrl,URL,"URL at which the app may be installed, if different from the URL of the item.",,,,,,,,,,,,,,,,,,,,download-mirror,, +schema:SoftwareApplication,memoryRequirements,Text or URL,Minimum memory requirements.,,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,operatingSystem,Text,"Operating systems supported (Windows 7, OSX 10.6, Android 1.6).",operatingSystems,,SupportsOperatingSystem,,,,,,,,,classifiers['Operating System'],Operating System,OSNAMES,os,,,,,os,operating system, +schema:SoftwareApplication,permissions,Text,"Permission(s) required to run the app (for example, a mobile app may require full internet access or may run only on wifi).",,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,processorRequirements,Text,Processor architecture required to run the application (e.g. 
IA64).,,,,,,,,,,,,,,,cpu / engines,,,,,,, +schema:SoftwareApplication,releaseNotes,Text or URL,Description of what changed in this version.,,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,softwareHelp,CreativeWork,Software application help.,,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,softwareRequirements,SoftwareSourceCode,Required software dependencies,depends,,hasDependency->Software,,,,,"""Platform, environment, and dependencies""",,"Depends, SystemRequirements",,install_requires,Database Environment,prereqs,dependencies / bundledDependencies / bundleDependencies / peerDependencies,prerequisites,"Depends, SystemRequirements","requirements, add_runtime_dependency",,,depends on software, +schema:SoftwareApplication,softwareVersion,Text,Version of the software instance.,,,,,,,,,,,,,,,,,,,,release,software version, +schema:SoftwareApplication,storageRequirements,Text or URL,Storage requirements (free space required).,,,,,,,,,,,,,,,,,,,,,, +schema:SoftwareApplication,supportingData,DataFeed,Supporting data for a SoftwareApplication.,,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,author,Organization or Person,The author of this content or rating. Please note that author is special in that HTML 5 provides a special mechanism for indicating authorship via the rel tag. 
That is equivalent to this and may be used interchangeably.,agents,creators,,creators,login,,,,,[aut] in Author,,,,,author,,,author,,developer,,authors +schema:CreativeWork,citation,CreativeWork or URL,"A citation or reference to another creative work, such as another publication, web page, scholarly article, etc.",relatedLink,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,contributor,Organization or Person,A secondary contributor to the CreativeWork or Event.,,,,,,,,,,[ctb] in Author,,,,,contributor,,,,,developer,, +schema:CreativeWork,copyrightHolder,Organization or Person,The party holding the legal copyright to the CreativeWork.,agents [role=copyrightHolder],,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,copyrightYear,Number,The year during which the claimed copyright for the CreativeWork was first asserted.,,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,creator,Organization or Person,The creator/author of this CreativeWork. This is the same as the Author property for CreativeWork.,agent,,,,,,,,creator,[cre] in Author,,,,,author,,,,,,, +schema:CreativeWork,dateCreated,Date or DateTime,The date on which the CreativeWork was created or the item was added to a DataFeed.,dateCreated,date,,,created_at,,,,created,,Date,,,,,,,,,,, +schema:CreativeWork,dateModified,Date or DateTime,The date on which the CreativeWork was most recently modified or when the item's entry was modified within a DataFeed.,dateModified,date,,,updated_at,,,,,,,,last-updated,,,,,,,,, +schema:CreativeWork,datePublished,Date,Date of first broadcast/publication.,datePublished,publicationYear,,date_published,,date_retrieved,,,date,Date,,,,,,,Date,,,,publication date,date-released +schema:CreativeWork,editor,Person,Specifies the Person who edited the CreativeWork.,,,,,,,,,,,,,,,,,,,,,editor, +schema:CreativeWork,encoding,MediaObject,A media object that encodes this CreativeWork. This property is a synonym for associatedMedia. 
Supersedes encodings.,,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,fileFormat,Text or URL,"Media type, typically MIME format (see IANA site) of the content e.g. application/zip of a SoftwareApplication binary. In cases where a CreativeWork has several media type representations, 'encoding' can be used to indicate each MediaObject alongside particular fileFormat information. Unregistered or niche file formats can be indicated instead via the most appropriate URL, e.g. defining Web page or a Wikipedia entry.",,Format,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,funder,Organization or Person,A person or organization that supports (sponsors) something through some kind of financial contribution.,fundingReference.funderName,,,contributors.Funder,,,,,,,,,,,,,,,,,, +schema:CreativeWork,keywords,Text,Keywords or tags used to describe this content. Multiple entries in a keywords list are typically delimited by commas.,controlledTerms,subject,hasDomainKeywords,keywords,,tags,,,,,,keywords,,keywords,keywords,,,,,category,,keywords +schema:CreativeWork,license,CreativeWork or URL,"A license document that applies to this content, typically indicated by URL.",licenseId,rights,License,license,license,License,software license,Software license,license,License,,license,license,license,license,licesnse,License,license/licenses,,license,license,license/license-url +schema:CreativeWork,producer,Organization or Person,"The person or organization who produced the work (e.g. music album, movie, tv/radio series etc.).",,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,provider,Organization or Person,"The service provider, service operator, or service performer; the goods producer. Another party (a seller) may offer those services or goods on behalf of the provider. A provider may also serve as the seller. 
Supersedes carrier.",,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,publisher,Organization or Person,The publisher of the creative work.,publisher,publisher,os:hasPublisher,,,,software publisher organization,,publisher,,,,,,,,,,,vendor,, +schema:CreativeWork,sponsor,Organization or Person,"A person or organization that supports a thing through a pledge, promise, or financial contribution. e.g. a sponsor of a Medical Study or a corporate sponsor of an event.",,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,version,Number or Text,The version of the CreativeWork embodied by a specified resource.,version,version,hasSoftwareVersion,,,,Version,Software version,dcterms:hasVersion,,numeric_version,Version,version,,version,version,version,version,,,,version +schema:CreativeWork,isAccessibleForFree,Boolean,A flag to signal that the publication is accessible for free.,,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,isPartOf,CreativeWork,Indicates a CreativeWork that this CreativeWork is (in some sense) part of. Reverse property hasPart,,,,,,,,,,,,,,,,,,,,,,references +schema:CreativeWork,hasPart,CreativeWork,Indicates a CreativeWork that is (in some sense) a part of this CreativeWork. Reverse property isPartOf,,,,,,,,,,,,,,,,,,,,,, +schema:CreativeWork,position,Integer or Text,"The position of an item in a series or sequence of items. (While schema.org considers this a property of CreativeWork, it is also the way to indicate ordering in any list (e.g. the Authors list). 
By default arrays are unordered in JSON-LD",,,,,,,,,,,,,,,,,,,,,, +schema:Thing,description,Text,A description of the item.,description,description,hasShortDescription,description/notes,description,Description,software,,description,Description,Description,"description, long_description",description,"abstract, description",description,description,Description,"summary, description",abstract,,,abstract +schema:Thing,identifier,PropertyValue or URL,"The identifier property represents any kind of identifier for any kind of Thing, such as ISBNs, GTIN codes, UUIDs etc. Schema.org provides dedicated properties for representing many of these, either as textual strings or as URL (URI) links. See background notes for more details.",identifier,identifier,hasUniqueId,id,id,,,Persistent Identifier,identifier,Package,Package,,,,name,groupId,,,ascl_id,,,doi +schema:Thing,name,Text,"The name of the item (software, Organization)",name,,hasName,title,full_name,Title,SoftwareTitle,Software title,title,Title,,name,Title,name,name,name,name,name,title,,,title +schema:Thing,sameAs,URL,"URL of a reference Web page that unambiguously indicates the item's identity. E.g. the URL of the item's Wikipedia page, Wikidata entry, or official website.",,,,,,,,,,,,,,,,,,,,,, +schema:Thing,url,URL,URL of the item.,URL,,,,,,,,,URL,,,,,homepage,,URL,,,homepage,official website,url +schema:Thing,relatedLink,URL,"A link related to this object, e.g. related web pages",,RelateIdentifier,,,,,,,,,,,,,,,,,,,, +schema:Person,givenName,Text,"Given name. In the U.S., the first name of a Person. This can be used along with familyName instead of the name property",,givenName,,,,,,,,givenName,,,,,,,,,,,,person.given-names +schema:Person,familyName,Text,"Family name. In the U.S., the last name of an Person. 
This can be used along with givenName instead of the name property.",,familyName,,,,,,,,familyName,,,,,,,,,,,,person.name-particle + person.family-names + person.name-suffix +schema:Person,email,Text,Email address,email,,,,,,,,,email,,author_email,,email-address,author.email,,,email,email,,,person.email/entity.email +schema:Person,affiliation,Text,"An organization that this person is affiliated with. For example, a school/university",affiliation,affiliation,,affiliation,,,,,,,,,,,,,,,,,,person.affiliation +schema:Person,identifier,URL,"URL identifer, ideally an ORCID ID for individuals, a FundRef ID for funders",identifier,nameIdentifier,,ORCID,,ORCID,,,,,,,,,,,,,,,,person.orcid / entity.orcid +schema:Person,name,Text,"The name of an Organization, or if separate given and family names cannot be resolved for a Person",,,,name,,name,,,,,,,,author:contact-name,author.name,,,,,,,entity.name +schema:Person,address,PostalAddress or Text,Physical address of the item.,,,,,,,,,,,,,,,,,,,,,,person.address + person.city + person.region + person.post-code + person.country / entity.address + entity.city + entity.region + entity.post-code + entity.country +schema,type,Object Type (from context or URI),"The object type (e.g. ""Person"", ""Organization"", ""ScientificArticle"", ""SoftwareApplication"", etc).",,,,,,,,,,,,,,,,,,,,,,reference.type +schema,id,URL,Primary identifier for an object. Must be a resolvable URL or a string used to refer to this node elsewhere in the same document,,,,,,,,,,,,,,,,,,,,,, +codemeta:SoftwareSourceCode,softwareSuggestions,SoftwareSourceCode,"Optional dependencies , e.g. 
for optional features, code development, etc",suggests,,,,,,,,,Suggests,,,,,devDependencies / optionalDependencies,,BuildDepends,add_development_dependency,,,, +codemeta:SoftwareSourceCode,maintainer,Person,Individual responsible for maintaining the software (usually includes an email contact address),uploadedBy,,,,,,,,,Maintainer,,,,,,,,,,maintainer,, +codemeta:SoftwareSourceCode,contIntegration,URL,link to continuous integration service,contIntegration,,,,,,,,,,,,,,,ciManagement,,,,,, +codemeta:SoftwareSourceCode,buildInstructions,URL,link to installation instructions/documentation,buildInstructions,,,,,,,,,,,,,,,,,,,,, +codemeta:SoftwareSourceCode,developmentStatus,Text,"Description of development status, e.g. Active, inactive, supsended. See repostatus.org",developmentStatus,,activeDevelopment,,,,,,,,,classifiers['Development Status'],Development Status,release_status,,,,,,,, +codemeta:SoftwareSourceCode,embargoDate,Date,"Software may be embargoed from public access until a specified date (e.g. pending publication, 1 year from publication)",embargoDate,,,,,embargo_date,,,,,,,,,,,,,,,, +codemeta:SoftwareSourceCode,funding,Text,Funding source (e.g. 
specific grant),funding,,fundingReference.awardTitle or fundingReference.awardNumber,,,,,,,,,,,,,,,,,,, +codemeta:SoftwareSourceCode,issueTracker,URL,link to software bug reporting or issue tracking system,issueTracker,,,,issues_url,,,,,BugReports,,,,resources.bugtracker,bugs,issuesManagement,Problems,,,bug-database,bug tracking system,repository +codemeta:SoftwareSourceCode,referencePublication,ScholarlyArticle,An academic publication related to the software.,relatedPublications,,,,,,,,,,,,,,,,,,,blog,,references +codemeta:SoftwareSourceCode,readme,URL,link to software Readme file,readme,,,,,,,,,,,,,,,,,,,,, +,,,,relatedIdentifer,,,,,,,,,,,,,,,,,,,,, +,,,,relatedIdentiferType,,,,,,,,,,,,,,,,,,,,, +,,,,relationshipType,,,,,,,,,,,,,,,,,,,,, +,,,,title,,,,,,,,,,,,,,,,,,,,, +,,,,namespace,,,,,,,,,,,,,,,,,,,,, +,,,,role,,,,,,,,,,,,,,,,,,,,, +,,,,roleCode,,,,,,,,,,,,,,,,,,,,, +,,,,softwarePaperCitationIdenifiers,,,,,,,,,,,,,,,,,,,,, diff --git a/swh/indexer/indexer.py b/swh/indexer/indexer.py index 6950a07..3f1fb8c 100644 --- a/swh/indexer/indexer.py +++ b/swh/indexer/indexer.py @@ -1,418 +1,492 @@ # Copyright (C) 2016-2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import abc import os import logging import shutil import tempfile +from swh.storage import get_storage from swh.core.config import SWHConfig from swh.objstorage import get_objstorage from swh.objstorage.exc import ObjNotFoundError from swh.model import hashutil from swh.scheduler.utils import get_task from swh.indexer.storage import get_indexer_storage, INDEXER_CFG_KEY class DiskIndexer: """Mixin intended to be used with other SomethingIndexer classes. Indexers inheriting from this class are a category of indexers which needs the disk for their computations. Note: This expects `self.working_directory` variable defined at runtime. 
""" def write_to_temp(self, filename, data): """Write the sha1's content in a temporary file. Args: sha1 (str): the sha1 name filename (str): one of sha1's many filenames data (bytes): the sha1's content to write in temporary file Returns: The path to the temporary file created. That file is filled in with the raw content's data. """ os.makedirs(self.working_directory, exist_ok=True) temp_dir = tempfile.mkdtemp(dir=self.working_directory) content_path = os.path.join(temp_dir, filename) with open(content_path, 'wb') as f: f.write(data) return content_path def cleanup(self, content_path): """Remove content_path from working directory. Args: content_path (str): the file to remove """ temp_dir = os.path.dirname(content_path) shutil.rmtree(temp_dir) class BaseIndexer(SWHConfig, metaclass=abc.ABCMeta): """Base class for indexers to inherit from. The main entry point is the :func:`run` function which is in charge of triggering the computations on the batch dict/ids received. Indexers can: - filter out ids whose data has already been indexed. - retrieve ids data from storage or objstorage - index this data depending on the object and store the result in storage. To implement a new object type indexer, inherit from the - BaseIndexer and implement the process of indexation: + BaseIndexer and implement indexing: :func:`run`: object_ids are different depending on object. For example: sha1 for content, sha1_git for revision, directory, release, and id for origin To implement a new concrete indexer, inherit from the object level - classes: :class:`ContentIndexer`, :class:`RevisionIndexer` (later - on :class:`OriginIndexer` will also be available) + classes: :class:`ContentIndexer`, :class:`RevisionIndexer`, + :class:`OriginIndexer`. Then you need to implement the following functions: :func:`filter`: filter out data already indexed (in storage). This function is used by the orchestrator and not directly by the indexer (cf. swh.indexer.orchestrator.BaseOrchestratorIndexer). 
:func:`index_object`: compute index on id with data (retrieved from the storage or the objstorage by the id key) and return the resulting index computation. :func:`persist_index_computations`: persist the results of multiple index computations in the storage. The new indexer implementation can also override the following functions: :func:`prepare`: Configuration preparation for the indexer. When overriding, this must call the `super().prepare()` instruction. :func:`check`: Configuration check for the indexer. When overriding, this must call the `super().check()` instruction. :func:`register_tools`: This should return a dict of the tool(s) to use when indexing or filtering. """ CONFIG = 'indexer/base' DEFAULT_CONFIG = { INDEXER_CFG_KEY: ('dict', { 'cls': 'remote', 'args': { - 'db': 'service=swh-indexer-dev' + 'url': 'http://localhost:5007/' } }), # queue to reschedule if problem (none for no rescheduling, # the default) 'rescheduling_task': ('str', None), + 'storage': ('dict', { + 'cls': 'remote', + 'args': { + 'url': 'http://localhost:5002/', + } + }), 'objstorage': ('dict', { 'cls': 'multiplexer', 'args': { 'objstorages': [{ 'cls': 'filtered', 'args': { 'storage_conf': { - 'cls': 'azure-storage', + 'cls': 'azure', 'args': { 'account_name': '0euwestswh', 'api_secret_key': 'secret', 'container_name': 'contents' } }, 'filters_conf': [ {'type': 'readonly'}, {'type': 'prefix', 'prefix': '0'} ] } }, { 'cls': 'filtered', 'args': { 'storage_conf': { - 'cls': 'azure-storage', + 'cls': 'azure', 'args': { 'account_name': '1euwestswh', 'api_secret_key': 'secret', 'container_name': 'contents' } }, 'filters_conf': [ {'type': 'readonly'}, {'type': 'prefix', 'prefix': '1'} ] } }] }, }), } ADDITIONAL_CONFIG = {} def __init__(self): """Prepare and check that the indexer is ready to run. """ super().__init__() self.prepare() self.check() def prepare(self): """Prepare the indexer's needed runtime configuration. Without this step, the indexer cannot possibly run. 
""" self.config = self.parse_config_file( additional_configs=[self.ADDITIONAL_CONFIG]) + if self.config['storage']: + self.storage = get_storage(**self.config['storage']) objstorage = self.config['objstorage'] self.objstorage = get_objstorage(objstorage['cls'], objstorage['args']) idx_storage = self.config[INDEXER_CFG_KEY] self.idx_storage = get_indexer_storage(**idx_storage) rescheduling_task = self.config['rescheduling_task'] if rescheduling_task: self.rescheduling_task = get_task(rescheduling_task) else: self.rescheduling_task = None _log = logging.getLogger('requests.packages.urllib3.connectionpool') _log.setLevel(logging.WARN) self.log = logging.getLogger('swh.indexer') self.tools = list(self.register_tools(self.config['tools'])) def check(self): """Check the indexer's configuration is ok before proceeding. If ok, does nothing. If not raise error. """ if not self.tools: raise ValueError('Tools %s is unknown, cannot continue' % self.tools) def _prepare_tool(self, tool): """Prepare the tool dict to be compliant with the storage api. """ return {'tool_%s' % key: value for key, value in tool.items()} def register_tools(self, tools): """Permit to register tools to the storage. Add a sensible default which can be overridden if not sufficient. (For now, all indexers use only one tool) Expects the self.config['tools'] property to be set with one or more tools. Args: tools (dict/[dict]): Either a dict or a list of dict. Returns: List of dict with additional id key. Raises: ValueError if not a list nor a dict. """ tools = self.config['tools'] if isinstance(tools, list): tools = map(self._prepare_tool, tools) elif isinstance(tools, dict): tools = [self._prepare_tool(tools)] else: raise ValueError('Configuration tool(s) must be a dict or list!') return self.idx_storage.indexer_configuration_add(tools) @abc.abstractmethod def filter(self, ids): """Filter missing ids for that particular indexer. 
Args: ids ([bytes]): list of ids Yields: iterator of missing ids """ pass @abc.abstractmethod def index(self, id, data): """Index computation for the id and associated raw data. Args: id (bytes): identifier data (bytes): id's data from storage or objstorage depending on object type Returns: a dict that makes sense for the persist_index_computations function. """ pass @abc.abstractmethod def persist_index_computations(self, results, policy_update): """Persist the computation resulting from the index. Args: results ([result]): List of results. One result is the result of the index function. policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them Returns: None """ pass def next_step(self, results): """Do something else with computations results (e.g. send to another queue, ...). (This is not an abstractmethod since it is optional). Args: results ([result]): List of results (dict) as returned by index function. Returns: None """ pass @abc.abstractmethod - def run(self, ids, policy_update): + def run(self, ids, policy_update, **kwargs): """Given a list of ids: - retrieves the data from the storage - executes the indexing computations - stores the results (according to policy_update) Args: ids ([bytes]): id's identifier list policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them + **kwargs: passed to the `index` method """ pass class ContentIndexer(BaseIndexer): """An object type indexer, inherits from the :class:`BaseIndexer` and - implements the process of indexation for Contents using the run - method + implements Content indexing using the run method Note: the :class:`ContentIndexer` is not an instantiable object. To use it in another context, one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class. 
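The `filter`/`index`/`persist_index_computations`/`run` contract described above can be illustrated with a toy, fully in-memory indexer — a hypothetical sketch, not a real swh-indexer class, with plain dicts standing in for the objstorage and indexer storage:

```python
class LengthIndexer:
    """Toy content indexer illustrating the abstract contract above
    (hypothetical; dicts stand in for objstorage/indexer storage)."""
    def __init__(self, objstorage):
        self.objstorage = objstorage   # id -> raw bytes
        self.idx_storage = {}          # id -> result dict

    def filter(self, ids):
        # Yield only ids not indexed yet, as the abstract `filter` asks.
        yield from (id_ for id_ in ids if id_ not in self.idx_storage)

    def index(self, id, data):
        # One result dict per content; a falsy result means "skip".
        return {'id': id, 'length': len(data)}

    def persist_index_computations(self, results, policy_update):
        for res in results:
            if policy_update == 'update-dups' \
                    or res['id'] not in self.idx_storage:
                self.idx_storage[res['id']] = res

    def run(self, ids, policy_update):
        results = [self.index(i, self.objstorage[i]) for i in ids]
        self.persist_index_computations(results, policy_update)
        return results
```

Running it shows the pieces fit: `run` computes and persists, after which `filter` no longer yields those ids.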
""" - def run(self, ids, policy_update): + def run(self, ids, policy_update, **kwargs): """Given a list of ids: - retrieve the content from the storage - execute the indexing computations - store the results (according to policy_update) Args: ids ([bytes]): sha1's identifier list policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them + **kwargs: passed to the `index` method """ results = [] try: for sha1 in ids: try: raw_content = self.objstorage.get(sha1) except ObjNotFoundError: self.log.warn('Content %s not found in objstorage' % hashutil.hash_to_hex(sha1)) continue - res = self.index(sha1, raw_content) + res = self.index(sha1, raw_content, **kwargs) if res: # If no results, skip it results.append(res) self.persist_index_computations(results, policy_update) - self.next_step(results) + self.results = results + return self.next_step(results) except Exception: self.log.exception( 'Problem when reading contents metadata.') if self.rescheduling_task: self.log.warn('Rescheduling batch') self.rescheduling_task.delay(ids, policy_update) +class OriginIndexer(BaseIndexer): + """An object type indexer, inherits from the :class:`BaseIndexer` and + implements Origin indexing using the run method + + Note: the :class:`OriginIndexer` is not an instantiable object. + To use it in another context one should inherit from this class + and override the methods mentioned in the :class:`BaseIndexer` + class. + + """ + def run(self, ids, policy_update, parse_ids=False, **kwargs): + """Given a list of origin ids: + + - retrieve origins from storage + - execute the indexing computations + - store the results (according to policy_update) + + Args: + ids ([Union[int, Tuple[str, bytes]]]): list of origin ids or + (type, url) tuples. 
+ policy_update ([str]): either 'update-dups' or 'ignore-dups' to + respectively update duplicates or ignore + them + parse_ids (bool): If `True`, will try to convert `ids` + from a human input to the valid type. + **kwargs: passed to the `index` method + + """ + if parse_ids: + ids = [ + o.split('+', 1) if ':' in o else int(o) # type+url or id + for o in ids] + + results = [] + + for id_ in ids: + if isinstance(id_, (tuple, list)): + if len(id_) != 2: + raise TypeError('Expected a (type, url) tuple.') + (type_, url) = id_ + params = {'type': type_, 'url': url} + elif isinstance(id_, int): + params = {'id': id_} + else: + raise TypeError('Invalid value in "ids": %r' % id_) + origin = self.storage.origin_get(params) + if not origin: + self.log.warn('Origin %s not found in storage' % + (id_,)) + continue + try: + res = self.index(origin, **kwargs) + if res: # If no results, skip it + results.append(res) + except Exception: + self.log.exception( + 'Problem when processing origin %s' % id_) + self.persist_index_computations(results, policy_update) + self.results = results + return self.next_step(results) + + class RevisionIndexer(BaseIndexer): """An object type indexer, inherits from the :class:`BaseIndexer` and - implements the process of indexation for Revisions using the run - method + implements Revision indexing using the run method Note: the :class:`RevisionIndexer` is not an instantiable object. To use it in another context one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class.
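The `parse_ids` conversion in `OriginIndexer.run` above can be exercised on its own. A sketch of that one-liner as a function — the assumption, matching the comment in the diff, is that a string containing `':'` is a `type+url` pair (the url supplies the colon) and anything else is a decimal origin id:

```python
def parse_origin_ids(ids):
    # Mirrors OriginIndexer.run(parse_ids=True): split 'type+url'
    # strings on the first '+', parse plain numbers as origin ids.
    return [o.split('+', 1) if ':' in o else int(o)
            for o in ids]
```

So `'git+https://example.org/repo.git'` becomes the pair `['git', 'https://example.org/repo.git']`, while `'42'` becomes the integer `42`.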
""" def run(self, ids, policy_update): """Given a list of sha1_gits: - retrieve revisions from storage - execute the indexing computations - store the results (according to policy_update) Args: ids ([bytes]): sha1_git's identifier list policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ results = [] revs = self.storage.revision_get(ids) for rev in revs: if not rev: self.log.warn('Revisions %s not found in storage' % list(map(hashutil.hash_to_hex, ids))) continue try: res = self.index(rev) if res: # If no results, skip it results.append(res) except Exception: self.log.exception( 'Problem when processing revision') self.persist_index_computations(results, policy_update) + self.results = results + return self.next_step(results) diff --git a/swh/indexer/metadata.py b/swh/indexer/metadata.py index 25d2332..0e5a5a4 100644 --- a/swh/indexer/metadata.py +++ b/swh/indexer/metadata.py @@ -1,300 +1,337 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import click import logging -from swh.indexer.indexer import ContentIndexer, RevisionIndexer -from swh.indexer.metadata_dictionary import compute_metadata +from swh.indexer.indexer import ContentIndexer, RevisionIndexer, OriginIndexer +from swh.indexer.metadata_dictionary import MAPPINGS from swh.indexer.metadata_detector import detect_metadata from swh.indexer.metadata_detector import extract_minimal_metadata_dict from swh.indexer.storage import INDEXER_CFG_KEY from swh.model import hashutil class ContentMetadataIndexer(ContentIndexer): """Content-level indexer This indexer is in charge of: - filtering out content already indexed in content_metadata - reading content from objstorage with the content's id sha1 - computing translated_metadata by given context - using the 
metadata_dictionary as the 'swh-metadata-translator' tool - store result in content_metadata table """ CONFIG_BASE_FILENAME = 'indexer/metadata' def __init__(self, tool, config): # twisted way to use the exact same config of RevisionMetadataIndexer # object that uses internally ContentMetadataIndexer self.config = config self.config['tools'] = tool super().__init__() - def prepare(self): - self.results = [] - if self.config[INDEXER_CFG_KEY]: - self.idx_storage = self.config[INDEXER_CFG_KEY] - if self.config['objstorage']: - self.objstorage = self.config['objstorage'] - _log = logging.getLogger('requests.packages.urllib3.connectionpool') - _log.setLevel(logging.WARN) - self.log = logging.getLogger('swh.indexer') - self.tools = self.register_tools(self.config['tools']) - # NOTE: only one tool so far, change when no longer true - self.tool = self.tools[0] - def filter(self, ids): """Filter out known sha1s and return only missing ones. """ yield from self.idx_storage.content_metadata_missing(( { 'id': sha1, 'indexer_configuration_id': self.tool['id'], } for sha1 in ids )) def index(self, id, data): """Index sha1s' content and store result. Args: id (bytes): content's identifier data (bytes): raw content in bytes Returns: dict: dictionary representing a content_metadata. 
If the translation wasn't successful the translated_metadata keys will be returned as None """ result = { 'id': id, 'indexer_configuration_id': self.tool['id'], 'translated_metadata': None } try: - context = self.tool['tool_configuration']['context'] - result['translated_metadata'] = compute_metadata(context, data) + mapping_name = self.tool['tool_configuration']['context'] + result['translated_metadata'] = MAPPINGS[mapping_name] \ + .translate(data) # a twisted way to keep result with indexer object for get_results self.results.append(result) except Exception: self.log.exception( "Problem during tool retrieval of metadata translation") return result def persist_index_computations(self, results, policy_update): """Persist the results in storage. Args: results ([dict]): list of content_metadata, dict with the following keys: - id (bytes): content's identifier (sha1) - translated_metadata (jsonb): detected metadata policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ self.idx_storage.content_metadata_add( results, conflict_update=(policy_update == 'update-dups')) def get_results(self): """can be called only if run method was called before Returns: list: list of content_metadata entries calculated by current indexer """ return self.results class RevisionMetadataIndexer(RevisionIndexer): """Revision-level indexer This indexer is in charge of: - filtering revisions already indexed in revision_metadata table with defined computation tool - retrieve all entry_files in root directory - - use metadata_detector for file_names containig metadata + - use metadata_detector for file_names containing metadata - compute metadata translation if necessary and possible (depends on tool) - send sha1s to content indexing if possible - store the results for revision """ CONFIG_BASE_FILENAME = 'indexer/metadata' ADDITIONAL_CONFIG = { - 'storage': ('dict', { - 'cls': 'remote', - 'args': { - 'url': 'http://localhost:5002/', - } - 
}), 'tools': ('dict', { 'name': 'swh-metadata-detector', - 'version': '0.0.1', + 'version': '0.0.2', 'configuration': { 'type': 'local', - 'context': ['npm', 'codemeta'] + 'context': ['NpmMapping', 'CodemetaMapping'] }, }), } + ContentMetadataIndexer = ContentMetadataIndexer + def prepare(self): super().prepare() self.tool = self.tools[0] def filter(self, sha1_gits): """Filter out known sha1s and return only missing ones. """ yield from self.idx_storage.revision_metadata_missing(( { 'id': sha1_git, 'indexer_configuration_id': self.tool['id'], } for sha1_git in sha1_gits )) def index(self, rev): """Index rev by processing it and organizing result. use metadata_detector to iterate on filenames - if one filename detected -> sends file to content indexer - if multiple file detected -> translation needed at revision level Args: rev (bytes): revision artifact from storage Returns: dict: dictionary representing a revision_metadata, with keys: - id (bytes): rev's identifier (sha1_git) - indexer_configuration_id (bytes): tool used - translated_metadata (bytes): dict of retrieved metadata """ try: result = { 'id': rev['id'], 'indexer_configuration_id': self.tool['id'], 'translated_metadata': None } root_dir = rev['directory'] dir_ls = self.storage.directory_ls(root_dir, recursive=False) - files = (entry for entry in dir_ls if entry['type'] == 'file') + files = [entry for entry in dir_ls if entry['type'] == 'file'] detected_files = detect_metadata(files) result['translated_metadata'] = self.translate_revision_metadata( detected_files) - except Exception as e: + except Exception: self.log.exception( 'Problem when indexing rev') return result def persist_index_computations(self, results, policy_update): """Persist the results in storage. 
Args: results ([dict]): list of content_mimetype, dict with the following keys: - id (bytes): content's identifier (sha1) - mimetype (bytes): mimetype in bytes - encoding (bytes): encoding in bytes policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ # TODO: add functions in storage to keep data in revision_metadata self.idx_storage.revision_metadata_add( results, conflict_update=(policy_update == 'update-dups')) def translate_revision_metadata(self, detected_files): """ Determine plan of action to translate metadata when containing one or multiple detected files: Args: detected_files (dict): dictionary mapping context names (e.g., "npm", "authors") to list of sha1 Returns: dict: dict with translated metadata according to the CodeMeta vocabulary """ translated_metadata = [] tool = { 'name': 'swh-metadata-translator', - 'version': '0.0.1', + 'version': '0.0.2', 'configuration': { 'type': 'local', 'context': None }, } # TODO: iterate on each context, on each file # -> get raw_contents # -> translate each content config = { INDEXER_CFG_KEY: self.idx_storage, 'objstorage': self.objstorage } for context in detected_files.keys(): tool['configuration']['context'] = context - c_metadata_indexer = ContentMetadataIndexer(tool, config) + c_metadata_indexer = self.ContentMetadataIndexer(tool, config) # sha1s that are in content_metadata table sha1s_in_storage = [] metadata_generator = self.idx_storage.content_metadata_get( detected_files[context]) for c in metadata_generator: # extracting translated_metadata sha1 = c['id'] sha1s_in_storage.append(sha1) local_metadata = c['translated_metadata'] # local metadata is aggregated if local_metadata: translated_metadata.append(local_metadata) sha1s_filtered = [item for item in detected_files[context] if item not in sha1s_in_storage] if sha1s_filtered: # schedule indexation of content try: c_metadata_indexer.run(sha1s_filtered, policy_update='ignore-dups') # on the fly 
possibility: results = c_metadata_indexer.get_results() for result in results: local_metadata = result['translated_metadata'] translated_metadata.append(local_metadata) except Exception as e: self.log.warn("""Exception while indexing content""", e) # transform translated_metadata into min set with swh-metadata-detector min_metadata = extract_minimal_metadata_dict(translated_metadata) return min_metadata +class OriginMetadataIndexer(OriginIndexer): + def filter(self, ids): + return ids + + def run(self, revisions_metadata, policy_update, *, origin_head_pairs): + """Expected to be called with the result of RevisionMetadataIndexer + as first argument; ie. not a list of ids as other indexers would. + + Args: + + * `revisions_metadata` (List[dict]): contains metadata from + revisions, along with the respective revision ids. It is + passed by RevisionMetadataIndexer via a Celery chain + triggered by OriginIndexer.next_step. + * `policy_update`: `'ignore-dups'` or `'update-dups'` + * `origin_head_pairs` (List[dict]): list of dictionaries with + keys `origin_id` and `revision_id`, which is the result + of OriginHeadIndexer. + """ + origin_head_map = {pair['origin_id']: pair['revision_id'] + for pair in origin_head_pairs} + + # Fix up the argument order. revisions_metadata has to be the + # first argument because of celery.chain; the next line calls + # run() with the usual order, ie. origin ids first. + return super().run(ids=list(origin_head_map), + policy_update=policy_update, + revisions_metadata=revisions_metadata, + origin_head_map=origin_head_map) + + def index(self, origin, *, revisions_metadata, origin_head_map): + # Get the last revision of the origin. 
+ revision_id = origin_head_map[origin['id']] + + # Get the metadata of that revision, and return it + for revision_metadata in revisions_metadata: + if revision_metadata['id'] == revision_id: + return { + 'origin_id': origin['id'], + 'metadata': revision_metadata['translated_metadata'], + 'from_revision': revision_id, + 'indexer_configuration_id': + revision_metadata['indexer_configuration_id'], + } + + # If you get this KeyError with a message like this: + # 'foo' not in [b'foo'] + # you should check you're not using JSON as task serializer + raise KeyError('%r not in %r' % + (revision_id, [r['id'] for r in revisions_metadata])) + + def persist_index_computations(self, results, policy_update): + self.idx_storage.origin_intrinsic_metadata_add( + results, conflict_update=(policy_update == 'update-dups')) + + @click.command() @click.option('--revs', '-i', - default=['8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', - '026040ea79dec1b49b4e3e7beda9132b6b26b51b', - '9699072e21eded4be8d45e3b8d543952533fa190'], help='Default sha1_git to lookup', multiple=True) def main(revs): _git_sha1s = list(map(hashutil.hash_to_bytes, revs)) rev_metadata_indexer = RevisionMetadataIndexer() rev_metadata_indexer.run(_git_sha1s, 'update-dups') if __name__ == '__main__': logging.basicConfig(level=logging.INFO) main() diff --git a/swh/indexer/metadata_detector.py b/swh/indexer/metadata_detector.py index 78599e0..d26a7ef 100644 --- a/swh/indexer/metadata_detector.py +++ b/swh/indexer/metadata_detector.py @@ -1,73 +1,65 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information -mapping_filenames = { - b"package.json": "npm", - b"codemeta.json": "codemeta" -} +from swh.indexer.metadata_dictionary import MAPPINGS def detect_metadata(files): """ Detects files potentially containing metadata Args: - file_entries 
(list): list of files Returns: - empty list if nothing was found - dictionary {mapping_filenames[name]:f['sha1']} """ results = {} - for f in files: - name = f['name'].lower().strip() - # TODO: possibility to detect extensions - if name in mapping_filenames: - tool = mapping_filenames[name] - if tool in results: - results[tool].append(f['sha1']) - else: - results[tool] = [f['sha1']] + for (mapping_name, mapping) in MAPPINGS.items(): + matches = mapping.detect_metadata_files(files) + if matches: + results[mapping_name] = matches return results def extract_minimal_metadata_dict(metadata_list): """ Every item in the metadata_list is a dict of translated_metadata in the CodeMeta vocabulary we wish to extract a minimal set of terms and keep all values corresponding - to this term + to this term without duplication Args: - metadata_list (list): list of dicts of translated_metadata Returns: - minimal_dict (dict): one dict with selected values of metadata """ minimal_dict = { "developmentStatus": [], "version": [], "operatingSystem": [], "description": [], "keywords": [], "issueTracker": [], "name": [], "author": [], "relatedLink": [], "url": [], - "type": [], "license": [], "maintainer": [], "email": [], "softwareRequirements": [], "identifier": [], "codeRepository": [] } for term in minimal_dict.keys(): - for metadata_dict in metadata_list: - if term in metadata_dict: - minimal_dict[term].append(metadata_dict[term]) + for metadata_item in metadata_list: + if term in metadata_item: + if not metadata_item[term] in minimal_dict[term]: + minimal_dict[term].append(metadata_item[term]) if not minimal_dict[term]: minimal_dict[term] = None return minimal_dict diff --git a/swh/indexer/metadata_dictionary.py b/swh/indexer/metadata_dictionary.py index 3ed9fc5..4266001 100644 --- a/swh/indexer/metadata_dictionary.py +++ b/swh/indexer/metadata_dictionary.py @@ -1,210 +1,214 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of 
this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information + +import abc +import csv import json +import os.path +import logging +import swh.indexer -def convert(raw_content): - """ - convert raw_content recursively: +CROSSWALK_TABLE_PATH = os.path.join(os.path.dirname(swh.indexer.__file__), + 'data', 'codemeta', 'crosswalk.csv') - - from bytes to string - - from string to dict - Args: - raw_content (bytes / string / dict) +def read_crosstable(fd): + reader = csv.reader(fd) + try: + header = next(reader) + except StopIteration: + raise ValueError('empty file') - Returns: - dict: content (if string was json, otherwise returns string) + data_sources = set(header) - {'Parent Type', 'Property', + 'Type', 'Description'} + assert 'codemeta-V1' in data_sources - """ - if isinstance(raw_content, bytes): - return convert(raw_content.decode()) - if isinstance(raw_content, str): - try: - content = json.loads(raw_content) - if content: - return content - else: - return raw_content - except json.decoder.JSONDecodeError: - return raw_content - if isinstance(raw_content, dict): - return raw_content - - -class BaseMapping(): + codemeta_translation = {data_source: {} for data_source in data_sources} + + for line in reader: # For each canonical name + canonical_name = dict(zip(header, line))['Property'] + for (col, value) in zip(header, line): # For each cell in the row + if col in data_sources: + # If that's not the parentType/property/type/description + for local_name in value.split('/'): + # For each of the data source's properties that maps + # to this canonical name + if local_name.strip(): + codemeta_translation[col][local_name.strip()] = \ + canonical_name + + return codemeta_translation + + +with open(CROSSWALK_TABLE_PATH) as fd: + CROSSWALK_TABLE = read_crosstable(fd) + + +MAPPINGS = {} + + +def register_mapping(cls): + MAPPINGS[cls.__name__] = cls() + return cls + + +class 
BaseMapping(metaclass=abc.ABCMeta): """Base class for mappings to inherit from To implement a new mapping: - inherit this class - - add a local property self.mapping - override translate function """ + def __init__(self): + self.log = logging.getLogger('%s.%s' % ( + self.__class__.__module__, + self.__class__.__name__)) + + @abc.abstractmethod + def detect_metadata_files(self, files): + """ + Detects files potentially containing metadata + Args: + - file_entries (list): list of files + + Returns: + - empty list if nothing was found + - list of sha1 otherwise + """ + pass + + @abc.abstractmethod + def translate(self, file_content): + pass + + +class DictMapping(BaseMapping): + """Base class for mappings that take as input a file that is mostly + a key-value store (eg. a shallow JSON dict).""" + + @property + @abc.abstractmethod + def mapping(self): + """A translation dict to map dict keys into a canonical name.""" + pass - def translate(self, content_dict): + def translate_dict(self, content_dict): """ - Tranlsates content by parsing content to a json object - and translating with the npm mapping (for now hard_coded mapping) + Translates content by parsing content from a dict object + and translating with the appropriate mapping Args: - context_text (text): should be json + content_dict (dict) Returns: - dict: translated metadata in jsonb form needed for the indexer + dict: translated metadata in json-friendly form needed for + the indexer """ translated_metadata = {} default = 'other' translated_metadata['other'] = {} try: for k, v in content_dict.items(): try: term = self.mapping.get(k, default) if term not in translated_metadata: translated_metadata[term] = v continue if isinstance(translated_metadata[term], str): in_value = translated_metadata[term] translated_metadata[term] = [in_value, v] continue if isinstance(translated_metadata[term], list): translated_metadata[term].append(v) continue if isinstance(translated_metadata[term], dict): 
translated_metadata[term][k] = v continue except KeyError: self.log.exception( "Problem during item mapping") continue except Exception: + raise return None return translated_metadata -class NpmMapping(BaseMapping): - """ - dedicated class for NPM (package.json) mapping and translation - """ - mapping = { - 'repository': 'codeRepository', - 'os': 'operatingSystem', - 'cpu': 'processorRequirements', - 'engines': 'processorRequirements', - 'dependencies': 'softwareRequirements', - 'bundleDependencies': 'softwareRequirements', - 'peerDependencies': 'softwareRequirements', - 'author': 'author', - 'contributor': 'contributor', - 'keywords': 'keywords', - 'license': 'license', - 'version': 'version', - 'description': 'description', - 'name': 'name', - 'devDependencies': 'softwareSuggestions', - 'optionalDependencies': 'softwareSuggestions', - 'bugs': 'issueTracker', - 'homepage': 'url' - } - - def translate(self, raw_content): - content_dict = convert(raw_content) - return super().translate(content_dict) +class JsonMapping(DictMapping): + """Base class for all mappings that use a JSON file as input.""" + @property + @abc.abstractmethod + def filename(self): + """The .json file to extract metadata from.""" + pass -class MavenMapping(BaseMapping): - """ - dedicated class for Maven (pom.xml) mapping and translation - """ - mapping = { - 'license': 'license', - 'version': 'version', - 'description': 'description', - 'name': 'name', - 'prerequisites': 'softwareRequirements', - 'repositories': 'codeRepository', - 'groupId': 'identifier', - 'ciManagement': 'contIntegration', - 'issuesManagement': 'issueTracker', - } + def detect_metadata_files(self, file_entries): + for entry in file_entries: + if entry['name'] == self.filename: + return [entry['sha1']] + return [] def translate(self, raw_content): - content = convert(raw_content) - # parse content from xml to dict - return super().translate(content) - + """ + Translates content by parsing content from a bytestring containing + 
json data and translating with the appropriate mapping -class DoapMapping(BaseMapping): - mapping = { + Args: + raw_content: bytes - } + Returns: + dict: translated metadata in json-friendly form needed for + the indexer - def translate(self, raw_content): - content = convert(raw_content) - # parse content from xml to dict - return super().translate(content) + """ + try: + raw_content = raw_content.decode() + except UnicodeDecodeError: + self.log.warning('Error unidecoding %r', raw_content) + return + try: + content_dict = json.loads(raw_content) + except json.JSONDecodeError: + self.log.warning('Error unjsoning %r' % raw_content) + return + return self.translate_dict(content_dict) -def parse_xml(content): +@register_mapping +class NpmMapping(JsonMapping): """ - Parses content from xml to a python dict - Args: - - content (text): the string form of the raw_content ( in xml) - - Returns: - - parsed_xml (dict): a python dict of the content after parsing + dedicated class for NPM (package.json) mapping and translation """ - # check if xml - # use xml parser to dict - return content - + mapping = CROSSWALK_TABLE['NodeJS'] + filename = b'package.json' -mapping_tool_fn = { - "npm": NpmMapping(), - "maven": MavenMapping(), - "doap_xml": DoapMapping() -} - -def compute_metadata(context, raw_content): +@register_mapping +class CodemetaMapping(JsonMapping): """ - first landing method: a dispatcher that sends content - to the right function to carry out the real parsing of syntax - and translation of terms - - Args: - context (text): defines to which function/tool the content is sent - content (text): the string form of the raw_content - - Returns: - dict: translated metadata jsonb dictionary needed for the indexer to - store in storage - + dedicated class for CodeMeta (codemeta.json) mapping and translation """ - if raw_content is None or raw_content is b"": - return None - - # TODO: keep mapping not in code (maybe fetch crosswalk from storage?) 
- # if fetched from storage should be done once for batch of sha1s - dictionary = mapping_tool_fn[context] - translated_metadata = dictionary.translate(raw_content) - return translated_metadata + mapping = CROSSWALK_TABLE['codemeta-V1'] + filename = b'codemeta.json' def main(): raw_content = """{"name": "test_name", "unknown_term": "ut"}""" raw_content1 = b"""{"name": "test_name", "unknown_term": "ut", "prerequisites" :"packageXYZ"}""" - result = compute_metadata("npm", raw_content) - result1 = compute_metadata("maven", raw_content1) + result = MAPPINGS["NpmMapping"].translate(raw_content) + result1 = MAPPINGS["MavenMapping"].translate(raw_content1) print(result) print(result1) if __name__ == "__main__": main() diff --git a/swh/indexer/mimetype.py b/swh/indexer/mimetype.py index 57bcd3a..858e75a 100644 --- a/swh/indexer/mimetype.py +++ b/swh/indexer/mimetype.py @@ -1,158 +1,158 @@ # Copyright (C) 2016-2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import click import magic from swh.model import hashutil from swh.scheduler import utils from .indexer import ContentIndexer def compute_mimetype_encoding(raw_content): """Determine mimetype and encoding from the raw content. Args: raw_content (bytes): content's raw data Returns: A dict with mimetype and encoding key and corresponding values (as bytes). 
""" r = magic.detect_from_content(raw_content) return { 'mimetype': r.mime_type.encode('utf-8'), 'encoding': r.encoding.encode('utf-8'), } class ContentMimetypeIndexer(ContentIndexer): """Indexer in charge of: - filtering out content already indexed - reading content from objstorage per the content's id (sha1) - computing {mimetype, encoding} from that content - store result in storage """ ADDITIONAL_CONFIG = { - 'destination_queue': ('str', None), + 'destination_task': ('str', None), 'tools': ('dict', { 'name': 'file', 'version': '1:5.30-1+deb9u1', 'configuration': { "type": "library", "debian-package": "python3-magic" }, }), } CONFIG_BASE_FILENAME = 'indexer/mimetype' def prepare(self): super().prepare() - destination_queue = self.config.get('destination_queue') - if destination_queue: - self.task_destination = utils.get_task(destination_queue) + destination_task = self.config.get('destination_task') + if destination_task: + self.destination_task = utils.get_task(destination_task) else: - self.task_destination = None + self.destination_task = None self.tool = self.tools[0] def filter(self, ids): """Filter out known sha1s and return only missing ones. """ yield from self.idx_storage.content_mimetype_missing(( { 'id': sha1, 'indexer_configuration_id': self.tool['id'], } for sha1 in ids )) def index(self, id, data): """Index sha1s' content and store result. Args: id (bytes): content's identifier data (bytes): raw content in bytes Returns: A dict, representing a content_mimetype, with keys: - id (bytes): content's identifier (sha1) - mimetype (bytes): mimetype in bytes - encoding (bytes): encoding in bytes """ try: properties = compute_mimetype_encoding(data) properties.update({ 'id': id, 'indexer_configuration_id': self.tool['id'], }) except TypeError: self.log.error('Detecting mimetype error for id %s' % ( hashutil.hash_to_hex(id), )) return None return properties def persist_index_computations(self, results, policy_update): """Persist the results in storage. 
Args: results ([dict]): list of content_mimetype, dict with the following keys: - id (bytes): content's identifier (sha1) - mimetype (bytes): mimetype in bytes - encoding (bytes): encoding in bytes policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ self.idx_storage.content_mimetype_add( results, conflict_update=(policy_update == 'update-dups')) def _filter_text(self, results): """Filter sha1 whose raw content is text. """ for result in results: if b'binary' in result['encoding']: continue yield result['id'] def next_step(self, results): """When the computations is done, we'd like to send over only text contents to the text content orchestrator. Args: results ([dict]): List of content_mimetype results, dict with the following keys: - id (bytes): content's identifier (sha1) - mimetype (bytes): mimetype in bytes - encoding (bytes): encoding in bytes """ - if self.task_destination: - self.task_destination.delay(list(self._filter_text(results))) + if self.destination_task: + self.destination_task.delay(list(self._filter_text(results))) @click.command() @click.option('--path', help="Path to execute index on") def main(path): with open(path, 'rb') as f: raw_content = f.read() print(compute_mimetype_encoding(raw_content)) if __name__ == '__main__': main() diff --git a/swh/indexer/orchestrator.py b/swh/indexer/orchestrator.py index cbf4667..ea12525 100644 --- a/swh/indexer/orchestrator.py +++ b/swh/indexer/orchestrator.py @@ -1,124 +1,133 @@ # Copyright (C) 2016-2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import random from celery import group from swh.core.config import SWHConfig from swh.core.utils import grouper from swh.scheduler import utils -from . 
import TASK_NAMES, INDEXER_CLASSES def get_class(clazz): """Get a symbol class dynamically by its fully qualified name string representation. """ parts = clazz.split('.') module = '.'.join(parts[:-1]) m = __import__(module) for comp in parts[1:]: m = getattr(m, comp) return m class BaseOrchestratorIndexer(SWHConfig): """The indexer orchestrator is in charge of dispatching batches of contents (filtered or not based on presence) to indexers. That dispatch is indexer specific, so the configuration reflects it: - when the `check_presence` flag is true, filter out the contents already present for that indexer, otherwise send everything - broadcast those (filtered or not) contents to indexers in a `batch_size` fashion For example:: indexers: mimetype: batch_size: 10 check_presence: false language: batch_size: 2 check_presence: true means: - send all contents received as batch of size 10 to the 'mimetype' indexer - send only unknown contents as batch of size 2 to the 'language' indexer. """ CONFIG_BASE_FILENAME = 'indexer/orchestrator' + # Overridable in child classes. + from .
import TASK_NAMES, INDEXER_CLASSES + DEFAULT_CONFIG = { 'indexers': ('dict', { 'mimetype': { 'batch_size': 10, 'check_presence': True, }, }), } - def __init__(self): - super().__init__() - self.config = self.parse_config_file() - indexer_names = list(self.config['indexers'].keys()) - random.shuffle(indexer_names) + def prepare(self): + super().prepare() + self.prepare_tasks() + def prepare_tasks(self): + indexer_names = list(self.config['indexers']) + random.shuffle(indexer_names) indexers = {} tasks = {} for name in indexer_names: - if name not in TASK_NAMES: + if name not in self.TASK_NAMES: raise ValueError('%s must be one of %s' % ( - name, TASK_NAMES.keys())) + name, ', '.join(self.TASK_NAMES))) opts = self.config['indexers'][name] indexers[name] = ( - INDEXER_CLASSES[name], + self.INDEXER_CLASSES[name], opts['check_presence'], opts['batch_size']) - tasks[name] = utils.get_task(TASK_NAMES[name]) + tasks[name] = utils.get_task(self.TASK_NAMES[name]) self.indexers = indexers self.tasks = tasks def run(self, ids): + all_results = [] for name, (idx_class, filtering, batch_size) in self.indexers.items(): if filtering: policy_update = 'ignore-dups' indexer_class = get_class(idx_class) ids_filtered = list(indexer_class().filter(ids)) if not ids_filtered: continue else: policy_update = 'update-dups' ids_filtered = ids celery_tasks = [] for ids_to_send in grouper(ids_filtered, batch_size): celery_task = self.tasks[name].s( ids=list(ids_to_send), policy_update=policy_update) celery_tasks.append(celery_task) - group(celery_tasks).delay() + all_results.append(self._run_tasks(celery_tasks)) + + return all_results + + def _run_tasks(self, celery_tasks): + return group(celery_tasks).delay() class OrchestratorAllContentsIndexer(BaseOrchestratorIndexer): """Orchestrator which deals with batch of any types of contents. """ class OrchestratorTextContentsIndexer(BaseOrchestratorIndexer): """Orchestrator which deals with batch of text contents. 
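The `run()` method above cuts the (possibly filtered) ids into `batch_size` groups and turns each group into one celery task signature. That batching logic can be sketched without celery; `grouper` below is a local stand-in for `swh.core.utils.grouper`, and `make_task` stands in for the celery `.s()` constructor:

```python
from itertools import islice

def grouper(iterable, n):
    # Stand-in for swh.core.utils.grouper: yield lists of at most n items.
    it = iter(iterable)
    while True:
        batch = list(islice(it, n))
        if not batch:
            return
        yield batch

def dispatch(ids, batch_size, make_task):
    # Mirrors the inner loop of BaseOrchestratorIndexer.run(): one task
    # per batch; the real code collects celery signatures and groups them.
    return [make_task(batch) for batch in grouper(ids, batch_size)]

tasks = dispatch(list(range(25)), 10, lambda batch: batch)
# 25 ids with batch_size=10 -> batches of 10, 10 and 5
```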
""" CONFIG_BASE_FILENAME = 'indexer/orchestrator_text' diff --git a/swh/indexer/origin_head.py b/swh/indexer/origin_head.py new file mode 100644 index 0000000..e21d531 --- /dev/null +++ b/swh/indexer/origin_head.py @@ -0,0 +1,195 @@ +# Copyright (C) 2018 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public License version 3, or any later version +# See top-level LICENSE file for more information + +import re +import click +import logging + +from celery import chain + +from swh.indexer.indexer import OriginIndexer +from swh.indexer.tasks import OriginMetadata, RevisionMetadata + + +class OriginHeadIndexer(OriginIndexer): + """Origin-level indexer. + + This indexer is in charge of looking up the revision that acts as the + "head" of an origin. + + In git, this is usually the commit pointed to by the 'master' branch.""" + + ADDITIONAL_CONFIG = { + 'tools': ('dict', { + 'name': 'origin-metadata', + 'version': '0.0.1', + 'configuration': {}, + }), + } + + CONFIG_BASE_FILENAME = 'indexer/origin_head' + + revision_metadata_task = RevisionMetadata() + origin_intrinsic_metadata_task = OriginMetadata() + + def filter(self, ids): + yield from ids + + def persist_index_computations(self, results, policy_update): + """Do nothing. The indexer's results are not persistent, they + should only be piped to another indexer via the orchestrator.""" + pass + + def next_step(self, results): + """Once the head is found, call the RevisionMetadataIndexer + on these revisions, then call the OriginMetadataIndexer with + both the origin_id and the revision metadata, so it can copy the + revision metadata to the origin's metadata. + + Args: + results (Iterable[dict]): Iterable of return values from `index`. 
+ + """ + if self.revision_metadata_task is None and \ + self.origin_intrinsic_metadata_task is None: + return + assert self.revision_metadata_task is not None + assert self.origin_intrinsic_metadata_task is not None + return chain( + self.revision_metadata_task.s( + ids=[res['revision_id'] for res in results], + policy_update='update-dups'), + self.origin_intrinsic_metadata_task.s( + origin_head_pairs=results, + policy_update='update-dups'), + )() + + # Dispatch + + def index(self, origin): + origin_id = origin['id'] + latest_snapshot = self.storage.snapshot_get_latest(origin_id) + method = getattr(self, '_try_get_%s_head' % origin['type'], None) + if method is None: + method = self._try_get_head_generic + rev_id = method(latest_snapshot) + if rev_id is None: + return None + result = { + 'origin_id': origin_id, + 'revision_id': rev_id, + } + return result + + # VCSs + + def _try_get_vcs_head(self, snapshot): + try: + if isinstance(snapshot, dict): + branches = snapshot['branches'] + if branches[b'HEAD']['target_type'] == 'revision': + return branches[b'HEAD']['target'] + except KeyError: + return None + + _try_get_hg_head = _try_get_git_head = _try_get_vcs_head + + # Tarballs + + _archive_filename_re = re.compile( + rb'^' + rb'(?P.*)[-_]' + rb'(?P[0-9]+(\.[0-9])*)' + rb'(?P[-+][a-zA-Z0-9.~]+?)?' 
+ rb'(?P(\.[a-zA-Z0-9]+)+)' + rb'$') + + @classmethod + def _parse_version(cls, filename): + """Extracts the release version from an archive filename, + to get an ordering whose maximum is likely to be the last + version of the software + + >>> OriginHeadIndexer._parse_version(b'foo') + (-inf,) + >>> OriginHeadIndexer._parse_version(b'foo.tar.gz') + (-inf,) + >>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1.tar.gz') + (0, 0, 1, 0) + >>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1-beta2.tar.gz') + (0, 0, 1, -1, 'beta2') + >>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1+foobar.tar.gz') + (0, 0, 1, 1, 'foobar') + """ + res = cls._archive_filename_re.match(filename) + if res is None: + return (float('-infinity'),) + version = [int(n) for n in res.group('version').decode().split('.')] + if res.group('preversion') is None: + version.append(0) + else: + preversion = res.group('preversion').decode() + if preversion.startswith('-'): + version.append(-1) + version.append(preversion[1:]) + elif preversion.startswith('+'): + version.append(1) + version.append(preversion[1:]) + else: + assert False, res.group('preversion') + return tuple(version) + + def _try_get_ftp_head(self, snapshot): + archive_names = list(snapshot['branches']) + max_archive_name = max(archive_names, key=self._parse_version) + r = self._try_resolve_target(snapshot['branches'], max_archive_name) + return r + + # Generic + + def _try_get_head_generic(self, snapshot): + # Works on 'deposit', 'svn', and 'pypi'. 
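The doctests above pin down `_parse_version`'s ordering, and `_try_get_ftp_head` relies on it via `max()`. The same idea can be exercised standalone; the regex here is a slightly simplified stand-in for `_archive_filename_re`, not the exact original:

```python
import re

# Reduced stand-in for OriginHeadIndexer._archive_filename_re.
_archive_re = re.compile(
    rb'^.*[-_](?P<version>[0-9]+(\.[0-9]+)*)'
    rb'(?P<preversion>[-+][a-zA-Z0-9.~]+?)?'
    rb'(\.[a-zA-Z0-9]+)+$')

def parse_version(filename):
    # Map a filename to an orderable tuple: pre-releases sort below the
    # plain release (-1), post-releases above it (+1), and names that do
    # not look like release archives below everything.
    m = _archive_re.match(filename)
    if m is None:
        return (float('-inf'),)
    version = [int(n) for n in m.group('version').decode().split('.')]
    pre = m.group('preversion')
    if pre is None:
        version.append(0)
    else:
        version += [-1 if pre.startswith(b'-') else 1, pre[1:].decode()]
    return tuple(version)

# max() then picks the latest-looking archive, as in _try_get_ftp_head:
names = [b'hello-0.0.1-beta2.tar.gz', b'hello-0.0.1.tar.gz',
         b'hello-0.2.0.tar.gz']
latest = max(names, key=parse_version)
```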
+ try: + if isinstance(snapshot, dict): + branches = snapshot['branches'] + except KeyError: + return None + else: + return ( + self._try_resolve_target(branches, b'HEAD') or + self._try_resolve_target(branches, b'master') + ) + + def _try_resolve_target(self, branches, target_name): + try: + target = branches[target_name] + while target['target_type'] == 'alias': + target = branches[target['target']] + if target['target_type'] == 'revision': + return target['target'] + elif target['target_type'] == 'content': + return None # TODO + elif target['target_type'] == 'directory': + return None # TODO + elif target['target_type'] == 'release': + return None # TODO + else: + assert False + except KeyError: + return None + + +@click.command() +@click.option('--origins', '-i', + help='Origins to lookup, in the "type+url" format', + multiple=True) +def main(origins): + rev_metadata_indexer = OriginHeadIndexer() + rev_metadata_indexer.run(origins, 'update-dups', parse_ids=True) + + +if __name__ == '__main__': + logging.basicConfig(level=logging.INFO) + main() diff --git a/swh/indexer/rehash.py b/swh/indexer/rehash.py index 4530e2d..d3bfcbf 100644 --- a/swh/indexer/rehash.py +++ b/swh/indexer/rehash.py @@ -1,189 +1,189 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import logging import itertools from collections import defaultdict from swh.core import utils from swh.core.config import SWHConfig from swh.model import hashutil from swh.objstorage import get_objstorage from swh.objstorage.exc import ObjNotFoundError from swh.storage import get_storage from swh.scheduler.utils import get_task class RecomputeChecksums(SWHConfig): """Class in charge of (re)computing content's hashes. 
Hashes to compute are defined across 2 configuration options: compute_checksums ([str]) - list of hash algorithms that py:func:`swh.model.hashutil.hash_data` - function should be able to deal with. For variable-length checksums, a - desired checksum length should also be provided. Their format is + list of hash algorithms that + py:func:`swh.model.hashutil.MultiHash.from_data` function should + be able to deal with. For variable-length checksums, a desired + checksum length should also be provided. Their format is : e.g: blake2:512 recompute_checksums (bool) a boolean to notify that we also want to recompute potential existing hashes specified in compute_checksums. Default to False. """ DEFAULT_CONFIG = { # The storage to read from or update metadata to 'storage': ('dict', { 'cls': 'remote', 'args': { 'url': 'http://localhost:5002/' }, }), # The objstorage to read contents' data from 'objstorage': ('dict', { 'cls': 'pathslicing', 'args': { 'root': '/srv/softwareheritage/objects', 'slicing': '0:2/2:4/4:6', }, }), # the set of checksums that should be computed. 
# Examples: 'sha1_git', 'blake2b512', 'blake2s256' 'compute_checksums': ( 'list[str]', []), # whether checksums that already exist in the DB should be # recomputed/updated or left untouched 'recompute_checksums': ('bool', False), # Number of contents to retrieve blobs at the same time 'batch_size_retrieve_content': ('int', 10), # Number of contents to update at the same time 'batch_size_update': ('int', 100), # Rescheduling task on error (if None, nothing is done) 'rescheduling_task': ('str', None), } CONFIG_BASE_FILENAME = 'indexer/rehash' def __init__(self): self.config = self.parse_config_file() self.storage = get_storage(**self.config['storage']) self.objstorage = get_objstorage(**self.config['objstorage']) self.compute_checksums = self.config['compute_checksums'] self.recompute_checksums = self.config[ 'recompute_checksums'] self.batch_size_retrieve_content = self.config[ 'batch_size_retrieve_content'] self.batch_size_update = self.config[ 'batch_size_update'] self.log = logging.getLogger('swh.indexer.rehash') rescheduling_task = self.config['rescheduling_task'] if rescheduling_task: self.rescheduling_task = get_task(rescheduling_task) else: self.rescheduling_task = None if not self.compute_checksums: raise ValueError('Checksums list should not be empty.') def _read_content_ids(self, contents): """Read the content identifiers from the contents. """ for c in contents: h = c['sha1'] if isinstance(h, str): h = hashutil.hash_to_bytes(h) yield h def get_new_contents_metadata(self, all_contents): """Retrieve raw contents and compute new checksums on the contents. Unknown or corrupted contents are skipped. 
Args: all_contents ([dict]): List of contents as dictionary with the necessary primary keys checksum_algorithms ([str]): List of checksums to compute Yields: tuple of: content to update, list of checksums computed """ content_ids = self._read_content_ids(all_contents) for contents in utils.grouper(content_ids, self.batch_size_retrieve_content): contents_iter = itertools.tee(contents, 2) try: content_metadata = self.storage.content_get_metadata( [s for s in contents_iter[0]]) except Exception: self.log.exception( 'Problem when reading contents metadata.') if self.rescheduling_task: self.log.warn('Rescheduling batch.') cs = [{'sha1': sha1} for sha1 in contents_iter[1]] self.rescheduling_task.delay(cs) continue for content in content_metadata: if self.recompute_checksums: # Recompute checksums provided # in compute_checksums options checksums_to_compute = list(self.compute_checksums) - else: # Compute checkums provided in compute_checksums + else: # Compute checksums provided in compute_checksums # options not already defined for that content checksums_to_compute = [h for h in self.compute_checksums if not content.get(h)] if not checksums_to_compute: # Nothing to recompute continue try: raw_content = self.objstorage.get(content['sha1']) except ObjNotFoundError: self.log.warn('Content %s not found in objstorage!' % content['sha1']) continue - # Actually computing the checksums for that content - updated_content = hashutil.hash_data( - raw_content, algorithms=checksums_to_compute) - content.update(updated_content) + content_hashes = hashutil.MultiHash.from_data( + raw_content, hash_names=checksums_to_compute).digest() + content.update(content_hashes) yield content, checksums_to_compute def run(self, contents): """Given a list of content: - (re)compute a given set of checksums on contents available in our object storage - update those contents with the new metadata Args: contents (dict): contents as dictionary with necessary keys. 
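The hunk above swaps `hashutil.hash_data` for `swh.model.hashutil.MultiHash.from_data(...).digest()`, which returns a dict mapping algorithm name to raw digest. That shape can be sketched with the standard library alone; the algorithm names below are illustrative, the real list comes from the `compute_checksums` option:

```python
import hashlib

def multi_digest(data, algorithms):
    # One digest per requested algorithm, mirroring the dict returned
    # by MultiHash.from_data(data, hash_names=...).digest().
    return {algo: hashlib.new(algo, data).digest() for algo in algorithms}

digests = multi_digest(b'hello', ['sha1', 'sha256', 'blake2s'])
```

`content.update(content_hashes)` in the code above then merges such a dict into the content row before `content_update` persists it.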
key present in such dictionary should be the ones defined in the 'primary_key' option. """ for data in utils.grouper( self.get_new_contents_metadata(contents), self.batch_size_update): groups = defaultdict(list) for content, keys_to_update in data: keys = ','.join(keys_to_update) groups[keys].append(content) for keys_to_update, contents in groups.items(): keys = keys_to_update.split(',') try: self.storage.content_update(contents, keys=keys) except Exception: self.log.exception('Problem during update.') if self.rescheduling_task: self.log.warn('Rescheduling batch.') cs = [{'sha1': c['sha1']} for c in contents] self.rescheduling_task.delay(cs) continue diff --git a/sql/swh-init.sql b/swh/indexer/sql/10-swh-init.sql similarity index 100% rename from sql/swh-init.sql rename to swh/indexer/sql/10-swh-init.sql diff --git a/sql/swh-enums.sql b/swh/indexer/sql/20-swh-enums.sql similarity index 100% rename from sql/swh-enums.sql rename to swh/indexer/sql/20-swh-enums.sql diff --git a/sql/swh-schema.sql b/swh/indexer/sql/30-swh-schema.sql similarity index 86% rename from sql/swh-schema.sql rename to swh/indexer/sql/30-swh-schema.sql index 6151217..6f45f14 100644 --- a/sql/swh-schema.sql +++ b/swh/indexer/sql/30-swh-schema.sql @@ -1,138 +1,140 @@ --- --- Software Heritage Indexers Data Model --- -- drop schema if exists swh cascade; -- create schema swh; -- set search_path to swh; create table dbversion ( version int primary key, release timestamptz, description text ); insert into dbversion(version, release, description) - values(115, now(), 'Work In Progress'); + values(116, now(), 'Work In Progress'); -- Computing metadata on sha1's contents -- a SHA1 checksum (not necessarily originating from Git) create domain sha1 as bytea check (length(value) = 20); -- a Git object ID, i.e., a SHA1 checksum create domain sha1_git as bytea check (length(value) = 20); create table indexer_configuration ( id serial not null, tool_name text not null, tool_version text not null, 
tool_configuration jsonb ); comment on table indexer_configuration is 'Indexer''s configuration version'; comment on column indexer_configuration.id is 'Tool identifier'; comment on column indexer_configuration.tool_name is 'Tool name'; comment on column indexer_configuration.tool_version is 'Tool version'; comment on column indexer_configuration.tool_configuration is 'Tool configuration: command line, flags, etc...'; -- Properties (mimetype, encoding, etc...) create table content_mimetype ( id sha1 not null, mimetype bytea not null, encoding bytea not null, indexer_configuration_id bigint not null ); comment on table content_mimetype is 'Metadata associated to a raw content'; comment on column content_mimetype.mimetype is 'Raw content Mimetype'; comment on column content_mimetype.encoding is 'Raw content encoding'; comment on column content_mimetype.indexer_configuration_id is 'Tool used to compute the information'; -- Language metadata create table content_language ( id sha1 not null, lang languages not null, indexer_configuration_id bigint not null ); comment on table content_language is 'Language information on a raw content'; comment on column content_language.lang is 'Language information'; comment on column content_language.indexer_configuration_id is 'Tool used to compute the information'; -- ctags information per content create table content_ctags ( id sha1 not null, name text not null, kind text not null, line bigint not null, lang ctags_languages not null, indexer_configuration_id bigint not null ); comment on table content_ctags is 'Ctags information on a raw content'; comment on column content_ctags.id is 'Content identifier'; comment on column content_ctags.name is 'Symbol name'; comment on column content_ctags.kind is 'Symbol kind (function, class, variable, const...)'; comment on column content_ctags.line is 'Symbol line'; comment on column content_ctags.lang is 'Language information for that content'; comment on column
content_ctags.indexer_configuration_id is 'Tool used to compute the information'; create table fossology_license( id smallserial, name text not null ); comment on table fossology_license is 'Possible license recognized by license indexer'; comment on column fossology_license.id is 'License identifier'; comment on column fossology_license.name is 'License name'; create table content_fossology_license ( id sha1 not null, license_id smallserial not null, indexer_configuration_id bigint not null ); comment on table content_fossology_license is 'license associated to a raw content'; comment on column content_fossology_license.id is 'Raw content identifier'; comment on column content_fossology_license.license_id is 'One of the content''s license identifier'; comment on column content_fossology_license.indexer_configuration_id is 'Tool used to compute the information'; -- The table content_metadata provides a translation to files -- identified as potentially containing metadata with a translation tool (indexer_configuration_id) create table content_metadata( id sha1 not null, translated_metadata jsonb not null, indexer_configuration_id bigint not null ); comment on table content_metadata is 'metadata semantically translated from a content file'; comment on column content_metadata.id is 'sha1 of content file'; comment on column content_metadata.translated_metadata is 'result of translation with defined format'; comment on column content_metadata.indexer_configuration_id is 'tool used for translation'; -- The table revision_metadata provides a minimal set of intrinsic metadata -- detected with the detection tool (indexer_configuration_id) and aggregated -- from the content_metadata translation.
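These metadata tables are keyed by (id, indexer_configuration_id), and the `swh_*_add(conflict_update)` functions further down in this diff upsert into them: `on conflict ... do update` corresponds to the indexers' `policy_update='update-dups'`, `do nothing` to `'ignore-dups'`. A sketch of that policy over a plain dict:

```python
def add_rows(table, rows, conflict_update):
    # table maps (id, indexer_configuration_id) -> row, like the unique
    # key the SQL "on conflict" clauses rely on.
    for row in rows:
        key = (row['id'], row['indexer_configuration_id'])
        if conflict_update or key not in table:
            # update-dups overwrites, ignore-dups keeps the first write
            table[key] = row

table = {}
add_rows(table, [{'id': b'\x01', 'indexer_configuration_id': 1,
                  'mimetype': b'text/plain'}], conflict_update=False)
add_rows(table, [{'id': b'\x01', 'indexer_configuration_id': 1,
                  'mimetype': b'text/html'}], conflict_update=False)
# ignore-dups: the first mimetype is kept
```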
create table revision_metadata( id sha1_git not null, translated_metadata jsonb not null, indexer_configuration_id bigint not null ); comment on table revision_metadata is 'metadata semantically detected and translated in a revision'; comment on column revision_metadata.id is 'sha1_git of revision'; comment on column revision_metadata.translated_metadata is 'result of detection and translation with defined format'; comment on column revision_metadata.indexer_configuration_id is 'tool used for detection'; -create table origin_metadata_translation( - id bigserial not null, -- PK origin_metadata identifier - result jsonb, - tool_id bigint +create table origin_intrinsic_metadata( + origin_id bigserial not null, + metadata jsonb, + indexer_configuration_id bigint not null, + from_revision sha1_git not null ); -comment on table origin_metadata_translation is 'keeps translated for an origin_metadata entry'; -comment on column origin_metadata_translation.id is 'the entry id in origin_metadata'; -comment on column origin_metadata_translation.result is 'translated_metadata result after translation with tool'; -comment on column origin_metadata_translation.tool_id is 'tool used for translation'; +comment on table origin_intrinsic_metadata is 'keeps intrinsic metadata for an origin'; +comment on column origin_intrinsic_metadata.origin_id is 'the entry id in origin'; +comment on column origin_intrinsic_metadata.metadata is 'metadata extracted from a revision'; +comment on column origin_intrinsic_metadata.indexer_configuration_id is 'tool used to generate this metadata'; +comment on column origin_intrinsic_metadata.from_revision is 'sha1 of the revision this metadata was copied from.'; diff --git a/sql/swh-func.sql b/swh/indexer/sql/40-swh-func.sql similarity index 87% rename from sql/swh-func.sql rename to swh/indexer/sql/40-swh-func.sql index c4096f6..5e87671 100644 --- a/sql/swh-func.sql +++ b/swh/indexer/sql/40-swh-func.sql @@ -1,381 +1,429 @@ -- Postgresql index helper 
function create or replace function hash_sha1(text) returns text language sql strict immutable as $$ select encode(public.digest($1, 'sha1'), 'hex') $$; comment on function hash_sha1(text) is 'Compute sha1 hash as text'; -- create a temporary table called tmp_TBLNAME, mimicking existing table -- TBLNAME -- -- Args: -- tblname: name of the table to mimick create or replace function swh_mktemp(tblname regclass) returns void language plpgsql as $$ begin execute format(' create temporary table tmp_%1$I (like %1$I including defaults) on commit drop; alter table tmp_%1$I drop column if exists object_id; ', tblname); return; end $$; -- create a temporary table for content_mimetype tmp_content_mimetype, create or replace function swh_mktemp_content_mimetype() returns void language sql as $$ create temporary table tmp_content_mimetype ( like content_mimetype including defaults ) on commit drop; $$; comment on function swh_mktemp_content_mimetype() IS 'Helper table to add mimetype information'; -- add tmp_content_mimetype entries to content_mimetype, overwriting -- duplicates if conflict_update is true, skipping duplicates otherwise. -- -- If filtering duplicates is in order, the call to -- swh_content_mimetype_missing must take place before calling this -- function. -- -- -- operates in bulk: 0. swh_mktemp(content_mimetype), 1. COPY to tmp_content_mimetype, -- 2. 
call this function create or replace function swh_content_mimetype_add(conflict_update boolean) returns void language plpgsql as $$ begin if conflict_update then insert into content_mimetype (id, mimetype, encoding, indexer_configuration_id) select id, mimetype, encoding, indexer_configuration_id from tmp_content_mimetype tcm on conflict(id, indexer_configuration_id) do update set mimetype = excluded.mimetype, encoding = excluded.encoding; else insert into content_mimetype (id, mimetype, encoding, indexer_configuration_id) select id, mimetype, encoding, indexer_configuration_id from tmp_content_mimetype tcm on conflict(id, indexer_configuration_id) do nothing; end if; return; end $$; comment on function swh_content_mimetype_add(boolean) IS 'Add new content mimetypes'; -- add tmp_content_language entries to content_language, overwriting -- duplicates if conflict_update is true, skipping duplicates otherwise. -- -- If filtering duplicates is in order, the call to -- swh_content_language_missing must take place before calling this -- function. -- -- operates in bulk: 0. swh_mktemp(content_language), 1. COPY to -- tmp_content_language, 2. 
call this function create or replace function swh_content_language_add(conflict_update boolean) returns void language plpgsql as $$ begin if conflict_update then insert into content_language (id, lang, indexer_configuration_id) select id, lang, indexer_configuration_id from tmp_content_language tcl on conflict(id, indexer_configuration_id) do update set lang = excluded.lang; else insert into content_language (id, lang, indexer_configuration_id) select id, lang, indexer_configuration_id from tmp_content_language tcl on conflict(id, indexer_configuration_id) do nothing; end if; return; end $$; comment on function swh_content_language_add(boolean) IS 'Add new content languages'; -- create a temporary table for retrieving content_language create or replace function swh_mktemp_content_language() returns void language sql as $$ create temporary table tmp_content_language ( like content_language including defaults ) on commit drop; $$; comment on function swh_mktemp_content_language() is 'Helper table to add content language'; -- create a temporary table for content_ctags tmp_content_ctags, create or replace function swh_mktemp_content_ctags() returns void language sql as $$ create temporary table tmp_content_ctags ( like content_ctags including defaults ) on commit drop; $$; comment on function swh_mktemp_content_ctags() is 'Helper table to add content ctags'; -- add tmp_content_ctags entries to content_ctags, overwriting -- duplicates if conflict_update is true, skipping duplicates otherwise. -- -- operates in bulk: 0. swh_mktemp(content_ctags), 1. COPY to tmp_content_ctags, -- 2. 
call this function create or replace function swh_content_ctags_add(conflict_update boolean) returns void language plpgsql as $$ begin if conflict_update then delete from content_ctags where id in (select tmp.id from tmp_content_ctags tmp inner join indexer_configuration i on i.id=tmp.indexer_configuration_id); end if; insert into content_ctags (id, name, kind, line, lang, indexer_configuration_id) select id, name, kind, line, lang, indexer_configuration_id from tmp_content_ctags tct on conflict(id, hash_sha1(name), kind, line, lang, indexer_configuration_id) do nothing; return; end $$; comment on function swh_content_ctags_add(boolean) IS 'Add new ctags symbols per content'; create type content_ctags_signature as ( id sha1, name text, kind text, line bigint, lang ctags_languages, tool_id integer, tool_name text, tool_version text, tool_configuration jsonb ); -- Search within ctags content. -- create or replace function swh_content_ctags_search( expression text, l integer default 10, last_sha1 sha1 default '\x0000000000000000000000000000000000000000') returns setof content_ctags_signature language sql as $$ select c.id, name, kind, line, lang, i.id as tool_id, tool_name, tool_version, tool_configuration from content_ctags c inner join indexer_configuration i on i.id = c.indexer_configuration_id where hash_sha1(name) = hash_sha1(expression) and c.id > last_sha1 order by id limit l; $$; comment on function swh_content_ctags_search(text, integer, sha1) IS 'Equality search through ctags'' symbols'; -- create a temporary table for content_fossology_license tmp_content_fossology_license, create or replace function swh_mktemp_content_fossology_license() returns void language sql as $$ create temporary table tmp_content_fossology_license ( id sha1, license text, indexer_configuration_id integer ) on commit drop; $$; comment on function swh_mktemp_content_fossology_license() is 'Helper table to add content license'; -- add tmp_content_fossology_license entries to 
content_fossology_license, overwriting
-- duplicates if conflict_update is true, skipping duplicates otherwise.
--
-- operates in bulk: 0. swh_mktemp(content_fossology_license), 1. COPY to
-- tmp_content_fossology_license, 2. call this function
create or replace function swh_content_fossology_license_add(conflict_update boolean)
    returns void
    language plpgsql
as $$
begin
    -- insert unknown licenses first
    insert into fossology_license (name)
    select distinct license from tmp_content_fossology_license tmp
    where not exists (select 1 from fossology_license where name=tmp.license)
    on conflict(name) do nothing;

    if conflict_update then
        -- delete from content_fossology_license c
        --   using tmp_content_fossology_license tmp, indexer_configuration i
        --   where c.id = tmp.id and i.id=tmp.indexer_configuration_id
        delete from content_fossology_license
        where id in (select tmp.id
                     from tmp_content_fossology_license tmp
                     inner join indexer_configuration i
                         on i.id=tmp.indexer_configuration_id);
    end if;

    insert into content_fossology_license (id, license_id, indexer_configuration_id)
    select tcl.id,
           (select id from fossology_license where name = tcl.license) as license,
           indexer_configuration_id
    from tmp_content_fossology_license tcl
    on conflict(id, license_id, indexer_configuration_id)
    do nothing;
    return;
end
$$;

comment on function swh_content_fossology_license_add(boolean) IS 'Add new content licenses';

-- content_metadata functions

-- add tmp_content_metadata entries to content_metadata, overwriting
-- duplicates if conflict_update is true, skipping duplicates otherwise.
--
-- If filtering duplicates is in order, the call to
-- swh_content_metadata_missing must take place before calling this
-- function.
--
-- operates in bulk: 0. swh_mktemp(content_metadata), 1. COPY to
-- tmp_content_metadata, 2. call this function
create or replace function swh_content_metadata_add(conflict_update boolean)
    returns void
    language plpgsql
as $$
begin
    if conflict_update then
        insert into content_metadata (id, translated_metadata, indexer_configuration_id)
        select id, translated_metadata, indexer_configuration_id
        from tmp_content_metadata tcm
        on conflict(id, indexer_configuration_id)
        do update set translated_metadata = excluded.translated_metadata;
    else
        insert into content_metadata (id, translated_metadata, indexer_configuration_id)
        select id, translated_metadata, indexer_configuration_id
        from tmp_content_metadata tcm
        on conflict(id, indexer_configuration_id)
        do nothing;
    end if;
    return;
end
$$;

comment on function swh_content_metadata_add(boolean) IS 'Add new content metadata';

-- create a temporary table for retrieving content_metadata
create or replace function swh_mktemp_content_metadata()
    returns void
    language sql
as $$
    create temporary table tmp_content_metadata (
        like content_metadata including defaults
    ) on commit drop;
$$;

comment on function swh_mktemp_content_metadata() is 'Helper table to add content metadata';

-- end content_metadata functions

-- add tmp_revision_metadata entries to revision_metadata, overwriting
-- duplicates if conflict_update is true, skipping duplicates otherwise.
--
-- If filtering duplicates is in order, the call to
-- swh_revision_metadata_missing must take place before calling this
-- function.
--
-- operates in bulk: 0. swh_mktemp(revision_metadata), 1. COPY to
-- tmp_revision_metadata, 2. call this function
create or replace function swh_revision_metadata_add(conflict_update boolean)
    returns void
    language plpgsql
as $$
begin
    if conflict_update then
        insert into revision_metadata (id, translated_metadata, indexer_configuration_id)
        select id, translated_metadata, indexer_configuration_id
        from tmp_revision_metadata tcm
        on conflict(id, indexer_configuration_id)
        do update set translated_metadata = excluded.translated_metadata;
    else
        insert into revision_metadata (id, translated_metadata, indexer_configuration_id)
        select id, translated_metadata, indexer_configuration_id
        from tmp_revision_metadata tcm
        on conflict(id, indexer_configuration_id)
        do nothing;
    end if;
    return;
end
$$;

comment on function swh_revision_metadata_add(boolean) IS 'Add new revision metadata';

-- create a temporary table for retrieving revision_metadata
create or replace function swh_mktemp_revision_metadata()
    returns void
    language sql
as $$
    create temporary table tmp_revision_metadata (
        like revision_metadata including defaults
    ) on commit drop;
$$;

comment on function swh_mktemp_revision_metadata() is 'Helper table to add revision metadata';

+-- create a temporary table for retrieving origin_intrinsic_metadata
+create or replace function swh_mktemp_origin_intrinsic_metadata()
+    returns void
+    language sql
+as $$
+  create temporary table tmp_origin_intrinsic_metadata (
+    like origin_intrinsic_metadata including defaults
+  ) on commit drop;
+$$;
+
+comment on function swh_mktemp_origin_intrinsic_metadata() is 'Helper table to add origin intrinsic metadata';
+
create or replace function swh_mktemp_indexer_configuration()
    returns void
    language sql
as $$
    create temporary table tmp_indexer_configuration (
        like indexer_configuration including defaults
    ) on commit drop;
    alter table tmp_indexer_configuration drop column id;
$$;

-- add tmp_indexer_configuration entries to indexer_configuration,
-- skipping duplicates if any.
--
-- operates in bulk: 0. create temporary tmp_indexer_configuration, 1. COPY to
-- it, 2. call this function to insert and filter out duplicates
create or replace function swh_indexer_configuration_add()
    returns setof indexer_configuration
    language plpgsql
as $$
begin
    insert into indexer_configuration(tool_name, tool_version, tool_configuration)
    select tool_name, tool_version, tool_configuration
    from tmp_indexer_configuration tmp
    on conflict(tool_name, tool_version, tool_configuration) do nothing;

    return query
        select id, tool_name, tool_version, tool_configuration
        from tmp_indexer_configuration
        join indexer_configuration
            using(tool_name, tool_version, tool_configuration);
    return;
end
$$;

+-- add tmp_origin_intrinsic_metadata entries to origin_intrinsic_metadata,
+-- overwriting duplicates if conflict_update is true, skipping duplicates
+-- otherwise.
+--
+-- If filtering duplicates is in order, the call to
+-- swh_origin_intrinsic_metadata_missing must take place before calling this
+-- function.
+--
+-- operates in bulk: 0. swh_mktemp(origin_intrinsic_metadata), 1. COPY to
+-- tmp_origin_intrinsic_metadata, 2. call this function
+create or replace function swh_origin_intrinsic_metadata_add(
+        conflict_update boolean)
+    returns void
+    language plpgsql
+as $$
+begin
+    if conflict_update then
+      insert into origin_intrinsic_metadata (origin_id, metadata, indexer_configuration_id, from_revision)
+      select origin_id, metadata, indexer_configuration_id, from_revision
+      from tmp_origin_intrinsic_metadata
+      on conflict(origin_id, indexer_configuration_id)
+      do update set metadata = excluded.metadata;
+    else
+      insert into origin_intrinsic_metadata (origin_id, metadata, indexer_configuration_id, from_revision)
+      select origin_id, metadata, indexer_configuration_id, from_revision
+      from tmp_origin_intrinsic_metadata
+      on conflict(origin_id, indexer_configuration_id)
+      do nothing;
+    end if;
+    return;
+end
+$$;
+
+comment on function swh_origin_intrinsic_metadata_add(boolean) IS 'Add new origin intrinsic metadata';
diff --git a/sql/swh-data.sql b/swh/indexer/sql/50-swh-data.sql
similarity index 95%
rename from sql/swh-data.sql
rename to swh/indexer/sql/50-swh-data.sql
index e429343..a76059f 100644
--- a/sql/swh-data.sql
+++ b/swh/indexer/sql/50-swh-data.sql
@@ -1,26 +1,26 @@
insert into indexer_configuration(tool_name, tool_version, tool_configuration)
values ('nomos', '3.1.0rc2-31-ga2cbb8c', '{"command_line": "nomossa "}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
values ('file', '5.22', '{"command_line": "file --mime "}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
values ('universal-ctags', '~git7859817b', '{"command_line": "ctags --fields=+lnz --sort=no --links=no --output-format=json "}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
values ('pygments', '2.0.1+dfsg-1.1+deb8u1', '{"type": "library", "debian-package": "python3-pygments"}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
values ('pygments', '2.0.1+dfsg-1.1+deb8u1', '{"type": "library", "debian-package": "python3-pygments", "max_content_size": 10240}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
-values ('swh-metadata-translator', '0.0.1', '{"type": "local", "context": "npm"}');
+values ('swh-metadata-translator', '0.0.1', '{"type": "local", "context": "NpmMapping"}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
-values ('swh-metadata-detector', '0.0.1', '{"type": "local", "context": ["npm", "codemeta"]}');
+values ('swh-metadata-detector', '0.0.1', '{"type": "local", "context": ["NpmMapping", "CodemetaMapping"]}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
values ('swh-deposit', '0.0.1', '{"sword_version": "2"}');

insert into indexer_configuration(tool_name, tool_version, tool_configuration)
values ('file', '1:5.30-1+deb9u1', '{"type": "library", "debian-package": "python3-magic"}');
diff --git a/sql/swh-indexes.sql b/swh/indexer/sql/60-swh-indexes.sql
similarity index 80%
rename from sql/swh-indexes.sql
rename to swh/indexer/sql/60-swh-indexes.sql
index addb720..130f3bc 100644
--- a/sql/swh-indexes.sql
+++ b/swh/indexer/sql/60-swh-indexes.sql
@@ -1,57 +1,66 @@
-- fossology_license
create unique index fossology_license_pkey on fossology_license(id);
alter table fossology_license add primary key using index fossology_license_pkey;

create unique index on fossology_license(name);

-- indexer_configuration
create unique index concurrently indexer_configuration_pkey on indexer_configuration(id);
alter table indexer_configuration add primary key using index indexer_configuration_pkey;

create unique index on indexer_configuration(tool_name, tool_version, tool_configuration);

-- content_ctags
create index on content_ctags(id);
create index on content_ctags(hash_sha1(name));
create unique index on content_ctags(id, hash_sha1(name), kind, line, lang, indexer_configuration_id);

alter table content_ctags add constraint content_ctags_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid;
alter table content_ctags validate constraint content_ctags_indexer_configuration_id_fkey;

-- content_metadata
create unique index content_metadata_pkey on content_metadata(id, indexer_configuration_id);
alter table content_metadata add primary key using index content_metadata_pkey;

alter table content_metadata add constraint content_metadata_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid;
alter table content_metadata validate constraint content_metadata_indexer_configuration_id_fkey;

-- revision_metadata
create unique index revision_metadata_pkey on revision_metadata(id, indexer_configuration_id);
alter table revision_metadata add primary key using index revision_metadata_pkey;

alter table revision_metadata add constraint revision_metadata_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid;
alter table revision_metadata validate constraint revision_metadata_indexer_configuration_id_fkey;

-- content_mimetype
create unique index content_mimetype_pkey on content_mimetype(id, indexer_configuration_id);
alter table content_mimetype add primary key using index content_mimetype_pkey;

alter table content_mimetype add constraint content_mimetype_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid;
alter table content_mimetype validate constraint content_mimetype_indexer_configuration_id_fkey;

-- content_language
create unique index content_language_pkey on content_language(id, indexer_configuration_id);
alter table content_language add primary key using index content_language_pkey;

alter table content_language add constraint content_language_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid;
alter table content_language validate constraint content_language_indexer_configuration_id_fkey;

-- content_fossology_license
create unique index content_fossology_license_pkey on content_fossology_license(id, license_id, indexer_configuration_id);
alter table content_fossology_license add primary key using index content_fossology_license_pkey;

alter table content_fossology_license add constraint content_fossology_license_license_id_fkey foreign key (license_id) references fossology_license(id) not valid;
alter table content_fossology_license validate constraint content_fossology_license_license_id_fkey;

alter table content_fossology_license add constraint content_fossology_license_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid;
alter table content_fossology_license validate constraint content_fossology_license_indexer_configuration_id_fkey;
+
+-- origin_intrinsic_metadata
+create unique index origin_intrinsic_metadata_pkey on origin_intrinsic_metadata(origin_id, indexer_configuration_id);
+alter table origin_intrinsic_metadata add primary key using index origin_intrinsic_metadata_pkey;
+
+alter table origin_intrinsic_metadata add constraint origin_intrinsic_metadata_indexer_configuration_id_fkey foreign key (indexer_configuration_id) references indexer_configuration(id) not valid;
+alter table origin_intrinsic_metadata validate constraint origin_intrinsic_metadata_indexer_configuration_id_fkey;
+alter table origin_intrinsic_metadata add constraint origin_intrinsic_metadata_revision_metadata_fkey foreign key (from_revision, indexer_configuration_id) references revision_metadata(id, indexer_configuration_id) not valid;
+alter table origin_intrinsic_metadata validate constraint origin_intrinsic_metadata_revision_metadata_fkey;
diff --git a/swh/indexer/storage/__init__.py b/swh/indexer/storage/__init__.py
index 1bfaa5e..78a2791 100644
--- a/swh/indexer/storage/__init__.py
+++ 
b/swh/indexer/storage/__init__.py
@@ -1,541 +1,608 @@
# Copyright (C) 2015-2018  The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import json
import psycopg2

from collections import defaultdict

+from swh.core.api import remote_api_endpoint
from swh.storage.common import db_transaction_generator, db_transaction
from swh.storage.exc import StorageDBError

from .db import Db
from . import converters


INDEXER_CFG_KEY = 'indexer_storage'


def get_indexer_storage(cls, args):
    """Get an indexer storage object of class `storage_class` with
    arguments `storage_args`.

    Args:
        cls (str): storage's class, either 'local' or 'remote'
        args (dict): dictionary of arguments passed to the storage class

    Returns:
        an instance of swh.indexer's storage (either local or remote)

    Raises:
        ValueError if passed an unknown storage class.

    """
    if cls == 'remote':
        from .api.client import RemoteStorage as IndexerStorage
    elif cls == 'local':
        from . import IndexerStorage
    else:
        raise ValueError('Unknown indexer storage class `%s`' % cls)

    return IndexerStorage(**args)


-class IndexerStorage():
+class IndexerStorage:
    """SWH Indexer Storage

    """
    def __init__(self, db, min_pool_conns=1, max_pool_conns=10):
        """
        Args:
            db: either a libpq connection string, or a psycopg2 connection

        """
        try:
            if isinstance(db, psycopg2.extensions.connection):
                self._pool = None
                self._db = Db(db)
            else:
                self._pool = psycopg2.pool.ThreadedConnectionPool(
                    min_pool_conns, max_pool_conns, db
                )
                self._db = None
        except psycopg2.OperationalError as e:
            raise StorageDBError(e)

    def get_db(self):
        if self._db:
            return self._db
        return Db.from_pool(self._pool)

+    @remote_api_endpoint('check_config')
    def check_config(self, *, check_write):
        """Check that the storage is configured and ready to go."""
        # Check permissions on one of the tables
        with self.get_db().transaction() as cur:
            if check_write:
                check = 'INSERT'
            else:
                check = 'SELECT'

            cur.execute(
                "select has_table_privilege(current_user, 'content_mimetype', %s)",  # noqa
                (check,)
            )
            return cur.fetchone()[0]

        return True

+    @remote_api_endpoint('content_mimetype/missing')
    @db_transaction_generator()
    def content_mimetype_missing(self, mimetypes, db=None, cur=None):
        """List mimetypes missing from storage.

        Args:
            mimetypes (iterable): iterable of dict with keys:

                id (bytes): sha1 identifier
                indexer_configuration_id (int): tool used to compute
                    the results

        Yields:
            an iterable of missing id for the tuple (id,
            indexer_configuration_id)

        """
        for obj in db.content_mimetype_missing_from_list(mimetypes, cur):
            yield obj[0]

+    @remote_api_endpoint('content_mimetype/add')
    @db_transaction()
    def content_mimetype_add(self, mimetypes, conflict_update=False, db=None,
                             cur=None):
        """Add mimetypes not present in storage.

        Args:
            mimetypes (iterable): dictionaries with keys:

                id (bytes): sha1 identifier
                mimetype (bytes): raw content's mimetype
                encoding (bytes): raw content's encoding
                indexer_configuration_id (int): tool's id used to
                    compute the results

            conflict_update (bool): Flag to determine if we want to
                overwrite (true) or skip duplicates (false, the default)

        """
        db.mktemp_content_mimetype(cur)
        db.copy_to(mimetypes, 'tmp_content_mimetype',
                   ['id', 'mimetype', 'encoding', 'indexer_configuration_id'],
                   cur)
        db.content_mimetype_add_from_temp(conflict_update, cur)

+    @remote_api_endpoint('content_mimetype')
    @db_transaction_generator()
    def content_mimetype_get(self, ids, db=None, cur=None):
        """Retrieve full content mimetype per ids.

        Args:
            ids (iterable): sha1 identifiers

        Yields:
            mimetypes (iterable): dictionaries with keys:

                id (bytes): sha1 identifier
                mimetype (bytes): raw content's mimetype
                encoding (bytes): raw content's encoding
                tool (dict): tool used to compute the mimetype

        """
        for c in db.content_mimetype_get_from_list(ids, cur):
            yield converters.db_to_mimetype(
                dict(zip(db.content_mimetype_cols, c)))

+    @remote_api_endpoint('content_language/missing')
    @db_transaction_generator()
    def content_language_missing(self, languages, db=None, cur=None):
        """List languages missing from storage.

        Args:
            languages (iterable): dictionaries with keys:

                id (bytes): sha1 identifier
                indexer_configuration_id (int): tool used to compute
                    the results

        Yields:
            an iterable of missing id for the tuple (id,
            indexer_configuration_id)

        """
        for obj in db.content_language_missing_from_list(languages, cur):
            yield obj[0]

+    @remote_api_endpoint('content_language')
    @db_transaction_generator()
    def content_language_get(self, ids, db=None, cur=None):
        """Retrieve full content language per ids.

        Args:
            ids (iterable): sha1 identifiers

        Yields:
            languages (iterable): dictionaries with keys:

                id (bytes): sha1 identifier
                lang (bytes): raw content's language
                tool (dict): tool used to compute the language

        """
        for c in db.content_language_get_from_list(ids, cur):
            yield converters.db_to_language(
                dict(zip(db.content_language_cols, c)))

+    @remote_api_endpoint('content_language/add')
    @db_transaction()
    def content_language_add(self, languages, conflict_update=False, db=None,
                             cur=None):
        """Add languages not present in storage.

        Args:
            languages (iterable): dictionaries with keys:

                id (bytes): sha1 identifier
                lang (bytes): language detected
                indexer_configuration_id (int): tool used to compute
                    the results

            conflict_update (bool): Flag to determine if we want to
                overwrite (true) or skip duplicates (false, the default)

        """
        db.mktemp_content_language(cur)
        # empty language is mapped to 'unknown'
        db.copy_to(
            ({
                'id': l['id'],
                'lang': 'unknown' if not l['lang'] else l['lang'],
                'indexer_configuration_id': l['indexer_configuration_id'],
            } for l in languages),
            'tmp_content_language',
            ['id', 'lang', 'indexer_configuration_id'], cur)

        db.content_language_add_from_temp(conflict_update, cur)

+    @remote_api_endpoint('content/ctags/missing')
    @db_transaction_generator()
    def content_ctags_missing(self, ctags, db=None, cur=None):
        """List ctags missing from storage.

        Args:
            ctags (iterable): dicts with keys:

                id (bytes): sha1 identifier
                indexer_configuration_id (int): tool used to compute
                    the results

        Yields:
            an iterable of missing id for the tuple (id,
            indexer_configuration_id)

        """
        for obj in db.content_ctags_missing_from_list(ctags, cur):
            yield obj[0]

+    @remote_api_endpoint('content/ctags')
    @db_transaction_generator()
    def content_ctags_get(self, ids, db=None, cur=None):
        """Retrieve ctags per id.

        Args:
            ids (iterable): sha1 checksums

        Yields:
            Dictionaries with keys:

                id (bytes): content's identifier
                name (str): symbol's name
                kind (str): symbol's kind
                lang (str): language for that content
                tool (dict): tool used to compute the ctags' info

        """
        for c in db.content_ctags_get_from_list(ids, cur):
            yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, c)))

+    @remote_api_endpoint('content/ctags/add')
    @db_transaction()
    def content_ctags_add(self, ctags, conflict_update=False, db=None,
                          cur=None):
        """Add ctags not present in storage.

        Args:
            ctags (iterable): dictionaries with keys:

                id (bytes): sha1 identifier
                ctags (list): list of dictionaries with keys: name,
                    kind, line, lang

        """
        def _convert_ctags(__ctags):
            """Convert ctags dict to list of ctags.

            """
            for ctags in __ctags:
                yield from converters.ctags_to_db(ctags)

        db.mktemp_content_ctags(cur)
        db.copy_to(list(_convert_ctags(ctags)),
                   tblname='tmp_content_ctags',
                   columns=['id', 'name', 'kind', 'line', 'lang',
                            'indexer_configuration_id'],
                   cur=cur)

        db.content_ctags_add_from_temp(conflict_update, cur)

+    @remote_api_endpoint('content/ctags/search')
    @db_transaction_generator()
    def content_ctags_search(self, expression,
                             limit=10, last_sha1=None, db=None, cur=None):
        """Search through content's raw ctags symbols.

        Args:
            expression (str): Expression to search for
            limit (int): Number of rows to return (default to 10).
            last_sha1 (str): Offset from which retrieving data (default to '').

        Yields:
            rows of ctags including id, name, lang, kind, line, etc...

        """
        for obj in db.content_ctags_search(expression, last_sha1, limit,
                                           cur=cur):
            yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, obj)))

+    @remote_api_endpoint('content/fossology_license')
    @db_transaction_generator()
    def content_fossology_license_get(self, ids, db=None, cur=None):
        """Retrieve licenses per id.

        Args:
            ids (iterable): sha1 checksums

        Yields:
            list: dictionaries with the following keys:

                id (bytes)
                licenses ([str]): associated licenses for that content
                tool (dict): Tool used to compute the license

        """
        d = defaultdict(list)
        for c in db.content_fossology_license_get_from_list(ids, cur):
            license = dict(zip(db.content_fossology_license_cols, c))

            id_ = license['id']
            d[id_].append(converters.db_to_fossology_license(license))

        for id_, facts in d.items():
            yield {id_: facts}

+    @remote_api_endpoint('content/fossology_license/add')
    @db_transaction()
    def content_fossology_license_add(self, licenses, conflict_update=False,
                                      db=None, cur=None):
        """Add licenses not present in storage.

        Args:
            licenses (iterable): dictionaries with keys:

                - id: sha1
                - licenses ([bytes]): list of licenses associated to sha1
                - tool (str): nomossa

            conflict_update: Flag to determine if we want to overwrite (true)
                or skip duplicates (false, the default)

        Returns:
            list: content_license entries which failed due to unknown licenses

        """
        # Then, we add the correct ones
        db.mktemp_content_fossology_license(cur)
        db.copy_to(
            ({
                'id': sha1['id'],
                'indexer_configuration_id': sha1['indexer_configuration_id'],
                'license': license,
            } for sha1 in licenses
              for license in sha1['licenses']),
            tblname='tmp_content_fossology_license',
            columns=['id', 'license', 'indexer_configuration_id'],
            cur=cur)
        db.content_fossology_license_add_from_temp(conflict_update, cur)

+    @remote_api_endpoint('content_metadata/missing')
    @db_transaction_generator()
    def content_metadata_missing(self, metadata, db=None, cur=None):
        """List metadata missing from storage.

        Args:
            metadata (iterable): dictionaries with keys:

                id (bytes): sha1 identifier
                indexer_configuration_id (int): tool used to compute
                    the results

        Yields:
            an iterable of missing id for the tuple (id,
            indexer_configuration_id)

        """
        for obj in db.content_metadata_missing_from_list(metadata, cur):
            yield obj[0]

+    @remote_api_endpoint('content_metadata')
    @db_transaction_generator()
    def content_metadata_get(self, ids, db=None, cur=None):
        """Retrieve metadata per id.

        Args:
            ids (iterable): sha1 checksums

        Yields:
            list: dictionaries with the following keys:

                id (bytes)
                translated_metadata (str): associated metadata
                tool (dict): tool used to compute metadata

        """
        for c in db.content_metadata_get_from_list(ids, cur):
            yield converters.db_to_metadata(
                dict(zip(db.content_metadata_cols, c)))

+    @remote_api_endpoint('content_metadata/add')
    @db_transaction()
    def content_metadata_add(self, metadata, conflict_update=False, db=None,
                             cur=None):
        """Add metadata not present in storage.

        Args:
            metadata (iterable): dictionaries with keys:

                - id: sha1
-                - translated_metadata: bytes / jsonb ?
+                - translated_metadata: arbitrary dict

            conflict_update: Flag to determine if we want to overwrite (true)
                or skip duplicates (false, the default)

        """
        db.mktemp_content_metadata(cur)
-        # empty metadata is mapped to 'unknown'

        db.copy_to(metadata, 'tmp_content_metadata',
                   ['id', 'translated_metadata', 'indexer_configuration_id'],
                   cur)
        db.content_metadata_add_from_temp(conflict_update, cur)

+    @remote_api_endpoint('revision_metadata/missing')
    @db_transaction_generator()
    def revision_metadata_missing(self, metadata, db=None, cur=None):
        """List metadata missing from storage.

        Args:
            metadata (iterable): dictionaries with keys:

                id (bytes): sha1_git revision identifier
                indexer_configuration_id (int): tool used to compute
                    the results

        Returns:
            iterable: missing ids

        """
        for obj in db.revision_metadata_missing_from_list(metadata, cur):
            yield obj[0]

+    @remote_api_endpoint('revision_metadata')
    @db_transaction_generator()
    def revision_metadata_get(self, ids, db=None, cur=None):
        """Retrieve revision metadata per id.

        Args:
            ids (iterable): sha1 checksums

        Yields:
            list: dictionaries with the following keys:

                id (bytes)
                translated_metadata (str): associated metadata
                tool (dict): tool used to compute metadata

        """
        for c in db.revision_metadata_get_from_list(ids, cur):
            yield converters.db_to_metadata(
                dict(zip(db.revision_metadata_cols, c)))

+    @remote_api_endpoint('revision_metadata/add')
    @db_transaction()
    def revision_metadata_add(self, metadata, conflict_update=False, db=None,
                              cur=None):
        """Add metadata not present in storage.

        Args:
            metadata (iterable): dictionaries with keys:

                - id: sha1_git of revision
-                - translated_metadata: bytes / jsonb ?
+                - translated_metadata: arbitrary dict

            conflict_update: Flag to determine if we want to overwrite (true)
                or skip duplicates (false, the default)

        """
        db.mktemp_revision_metadata(cur)
-        # empty metadata is mapped to 'unknown'

        db.copy_to(metadata, 'tmp_revision_metadata',
                   ['id', 'translated_metadata', 'indexer_configuration_id'],
                   cur)
        db.revision_metadata_add_from_temp(conflict_update, cur)

+    @remote_api_endpoint('origin_intrinsic_metadata')
+    @db_transaction_generator()
+    def origin_intrinsic_metadata_get(self, ids, db=None, cur=None):
+        """Retrieve origin metadata per id.
+
+        Args:
+            ids (iterable): origin identifiers
+
+        Yields:
+            list: dictionaries with the following keys:
+
+                id (int)
+                translated_metadata (str): associated metadata
+                tool (dict): tool used to compute metadata
+
+        """
+        for c in db.origin_intrinsic_metadata_get_from_list(ids, cur):
+            yield converters.db_to_metadata(
+                dict(zip(db.origin_intrinsic_metadata_cols, c)))
+
+    @remote_api_endpoint('origin_intrinsic_metadata/add')
+    @db_transaction()
+    def origin_intrinsic_metadata_add(self, metadata,
+                                      conflict_update=False, db=None,
+                                      cur=None):
+        """Add origin metadata not present in storage.
+
+        Args:
+            metadata (iterable): dictionaries with keys:
+
+                - origin_id: origin identifier
+                - from_revision: sha1 id of the revision used to generate
+                  these metadata.
+                - metadata: arbitrary dict
+
+            conflict_update: Flag to determine if we want to overwrite (true)
+                or skip duplicates (false, the default)
+
+        """
+        db.mktemp_origin_intrinsic_metadata(cur)
+
+        db.copy_to(metadata, 'tmp_origin_intrinsic_metadata',
+                   ['origin_id', 'metadata', 'indexer_configuration_id',
+                    'from_revision'],
+                   cur)
+        db.origin_intrinsic_metadata_add_from_temp(conflict_update, cur)
+
+    @remote_api_endpoint('indexer_configuration/add')
    @db_transaction_generator()
    def indexer_configuration_add(self, tools, db=None, cur=None):
        """Add new tools to the storage.

        Args:
            tools ([dict]): List of dictionaries representing tools to insert
                in the db. Dictionary with the following keys:

                tool_name (str): tool's name
                tool_version (str): tool's version
                tool_configuration (dict): tool's configuration
                    (free form dict)

        Returns:
            List of dict inserted in the db (holding the id key as
            well). The order of the list is not guaranteed to match
            the order of the initial list.

        """
        db.mktemp_indexer_configuration(cur)
        db.copy_to(tools, 'tmp_indexer_configuration',
                   ['tool_name', 'tool_version', 'tool_configuration'],
                   cur)

        tools = db.indexer_configuration_add_from_temp(cur)
        for line in tools:
            yield dict(zip(db.indexer_configuration_cols, line))

+    @remote_api_endpoint('indexer_configuration/data')
    @db_transaction()
    def indexer_configuration_get(self, tool, db=None, cur=None):
        """Retrieve tool information.

        Args:
            tool (dict): Dictionary representing a tool with the following
                keys:

                tool_name (str): tool's name
                tool_version (str): tool's version
                tool_configuration (dict): tool's configuration
                    (free form dict)

        Returns:
            The identifier of the tool if it exists, None otherwise.

        """
        tool_conf = tool['tool_configuration']
        if isinstance(tool_conf, dict):
            tool_conf = json.dumps(tool_conf)
        idx = db.indexer_configuration_get(tool['tool_name'],
                                           tool['tool_version'],
                                           tool_conf)
        if not idx:
            return None
        return dict(zip(db.indexer_configuration_cols, idx))
diff --git a/swh/indexer/storage/api/client.py b/swh/indexer/storage/api/client.py
index 004d323..7dc616d 100644
--- a/swh/indexer/storage/api/client.py
+++ b/swh/indexer/storage/api/client.py
@@ -1,101 +1,20 @@
# Copyright (C) 2015-2018  The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

-
from swh.core.api import SWHRemoteAPI
from swh.storage.exc import StorageAPIError

+from ..
import IndexerStorage
+

class RemoteStorage(SWHRemoteAPI):
    """Proxy to a remote storage API"""
+
+    backend_class = IndexerStorage
+
    def __init__(self, url, timeout=None):
        super().__init__(
            api_exception=StorageAPIError, url=url, timeout=timeout)
-
-    def check_config(self, *, check_write):
-        return self.post('check_config', {'check_write': check_write})
-
-    def content_mimetype_add(self, mimetypes, conflict_update=False):
-        return self.post('content_mimetype/add', {
-            'mimetypes': mimetypes,
-            'conflict_update': conflict_update,
-        })
-
-    def content_mimetype_missing(self, mimetypes):
-        return self.post('content_mimetype/missing', {'mimetypes': mimetypes})
-
-    def content_mimetype_get(self, ids):
-        return self.post('content_mimetype', {'ids': ids})
-
-    def content_language_add(self, languages, conflict_update=False):
-        return self.post('content_language/add', {
-            'languages': languages,
-            'conflict_update': conflict_update,
-        })
-
-    def content_language_missing(self, languages):
-        return self.post('content_language/missing', {'languages': languages})
-
-    def content_language_get(self, ids):
-        return self.post('content_language', {'ids': ids})
-
-    def content_ctags_add(self, ctags, conflict_update=False):
-        return self.post('content/ctags/add', {
-            'ctags': ctags,
-            'conflict_update': conflict_update,
-        })
-
-    def content_ctags_missing(self, ctags):
-        return self.post('content/ctags/missing', {'ctags': ctags})
-
-    def content_ctags_get(self, ids):
-        return self.post('content/ctags', {'ids': ids})
-
-    def content_ctags_search(self, expression, limit=10, last_sha1=None):
-        return self.post('content/ctags/search', {
-            'expression': expression,
-            'limit': limit,
-            'last_sha1': last_sha1,
-        })
-
-    def content_fossology_license_add(self, licenses, conflict_update=False):
-        return self.post('content/fossology_license/add', {
-            'licenses': licenses,
-            'conflict_update': conflict_update,
-        })
-
-    def content_fossology_license_get(self, ids):
-        return self.post('content/fossology_license', {'ids': ids})
-
-    def content_metadata_add(self, metadata, conflict_update=False):
-        return self.post('content_metadata/add', {
-            'metadata': metadata,
-            'conflict_update': conflict_update,
-        })
-
-    def content_metadata_missing(self, metadata):
-        return self.post('content_metadata/missing', {'metadata': metadata})
-
-    def content_metadata_get(self, ids):
-        return self.post('content_metadata', {'ids': ids})
-
-    def revision_metadata_add(self, metadata, conflict_update=False):
-        return self.post('revision_metadata/add', {
-            'metadata': metadata,
-            'conflict_update': conflict_update,
-        })
-
-    def revision_metadata_missing(self, metadata):
-        return self.post('revision_metadata/missing', {'metadata': metadata})
-
-    def revision_metadata_get(self, ids):
-        return self.post('revision_metadata', {'ids': ids})
-
-    def indexer_configuration_add(self, tools):
-        return self.post('indexer_configuration/add', {'tools': tools})
-
-    def indexer_configuration_get(self, tool):
-        return self.post('indexer_configuration/data', {'tool': tool})
diff --git a/swh/indexer/storage/api/server.py b/swh/indexer/storage/api/server.py
index 4d64c72..912fccc 100644
--- a/swh/indexer/storage/api/server.py
+++ b/swh/indexer/storage/api/server.py
@@ -1,199 +1,75 @@
# Copyright (C) 2015-2018  The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import logging
import click

-from flask import request
-
from swh.core import config
-from swh.core.api import (SWHServerAPIApp, decode_request,
-                          error_handler,
+from swh.core.api import (SWHServerAPIApp, error_handler,
                           encode_data_server as encode_data)
from swh.indexer.storage import get_indexer_storage, INDEXER_CFG_KEY

+from .. import IndexerStorage
+

DEFAULT_CONFIG_PATH = 'storage/indexer'
DEFAULT_CONFIG = {
    INDEXER_CFG_KEY: ('dict', {
        'cls': 'local',
        'args': {
            'db': 'dbname=softwareheritage-indexer-dev',
        },
    })
}

-app = SWHServerAPIApp(__name__)
-storage = None
-
-
-@app.errorhandler(Exception)
-def my_error_handler(exception):
-    return error_handler(exception, encode_data)
-

def get_storage():
    global storage
    if not storage:
        storage = get_indexer_storage(**app.config[INDEXER_CFG_KEY])

    return storage


-@app.route('/')
-def index():
-    return 'SWH Indexer Storage API server'
-
-
-@app.route('/check_config', methods=['POST'])
-def check_config():
-    return encode_data(get_storage().check_config(**decode_request(request)))
-
-
-@app.route('/content_mimetype/add', methods=['POST'])
-def content_mimetype_add():
-    return encode_data(
-        get_storage().content_mimetype_add(**decode_request(request)))
-
-
-@app.route('/content_mimetype/missing', methods=['POST'])
-def content_mimetype_missing():
-    return encode_data(
-        get_storage().content_mimetype_missing(**decode_request(request)))
-
-
-@app.route('/content_mimetype', methods=['POST'])
-def content_mimetype_get():
-    return encode_data(
-        get_storage().content_mimetype_get(**decode_request(request)))
-
-
-@app.route('/content_language/add', methods=['POST'])
-def content_language_add():
-    return encode_data(
-        get_storage().content_language_add(**decode_request(request)))
-
-
-@app.route('/content_language/missing', methods=['POST'])
-def content_language_missing():
-    return encode_data(
-        get_storage().content_language_missing(**decode_request(request)))
-
-
-@app.route('/content_language', methods=['POST'])
-def content_language_get():
-    return encode_data(
-        get_storage().content_language_get(**decode_request(request)))
-
-
-@app.route('/content/ctags/add', methods=['POST'])
-def content_ctags_add():
-    return encode_data(
-        get_storage().content_ctags_add(**decode_request(request)))
-
-
-@app.route('/content/ctags/search', methods=['POST'])
-def content_ctags_search():
-    return encode_data(
-        get_storage().content_ctags_search(**decode_request(request)))
-
-
-@app.route('/content/ctags/missing', methods=['POST'])
-def content_ctags_missing():
-    return encode_data(
-        get_storage().content_ctags_missing(**decode_request(request)))
-
-
-@app.route('/content/ctags', methods=['POST'])
-def content_ctags_get():
-    return encode_data(
-        get_storage().content_ctags_get(**decode_request(request)))
-
-
-@app.route('/content/fossology_license/add', methods=['POST'])
-def content_fossology_license_add():
-    return encode_data(
-        get_storage().content_fossology_license_add(**decode_request(request)))
-
-
-@app.route('/content/fossology_license', methods=['POST'])
-def content_fossology_license_get():
-    return encode_data(
-        get_storage().content_fossology_license_get(**decode_request(request)))
-
-
-@app.route('/indexer_configuration/data', methods=['POST'])
-def indexer_configuration_get():
-    return encode_data(get_storage().indexer_configuration_get(
-        **decode_request(request)))
-
-
-@app.route('/indexer_configuration/add', methods=['POST'])
-def indexer_configuration_add():
-    return encode_data(get_storage().indexer_configuration_add(
-        **decode_request(request)))
-
-
-@app.route('/content_metadata/add', methods=['POST'])
-def content_metadata_add():
-    return encode_data(
-        get_storage().content_metadata_add(**decode_request(request)))
-
-
-@app.route('/content_metadata/missing', methods=['POST'])
-def content_metadata_missing():
-    return encode_data(
-        get_storage().content_metadata_missing(**decode_request(request)))
-
-
-@app.route('/content_metadata', methods=['POST'])
-def content_metadata_get():
-    return encode_data(
-        get_storage().content_metadata_get(**decode_request(request)))
-
-
-@app.route('/revision_metadata/add', methods=['POST'])
-def revision_metadata_add():
-    return encode_data(
-        get_storage().revision_metadata_add(**decode_request(request)))
+app = SWHServerAPIApp(__name__,
+                      backend_class=IndexerStorage,
+                      backend_factory=get_storage)
+storage = None


-@app.route('/revision_metadata/missing', methods=['POST'])
-def revision_metadata_missing():
-    return encode_data(
-        get_storage().revision_metadata_missing(**decode_request(request)))
+@app.errorhandler(Exception)
+def my_error_handler(exception):
+    return error_handler(exception, encode_data)


-@app.route('/revision_metadata', methods=['POST'])
-def revision_metadata_get():
-    return encode_data(
-        get_storage().revision_metadata_get(**decode_request(request)))
+@app.route('/')
+def index():
+    return 'SWH Indexer Storage API server'


def run_from_webserver(environ, start_response,
                       config_path=DEFAULT_CONFIG_PATH):
    """Run the WSGI app from the webserver, loading the configuration."""
    cfg = config.load_named_config(config_path, DEFAULT_CONFIG)
    app.config.update(cfg)
    handler = logging.StreamHandler()
    app.logger.addHandler(handler)
    return app(environ, start_response)


@click.command()
@click.option('--host', default='0.0.0.0', help="Host to run the server")
@click.option('--port', default=5007, type=click.INT,
              help="Binding port of the server")
@click.option('--debug/--nodebug', default=True,
              help="Indicates if the server should run in debug mode")
def launch(host, port, debug):
    cfg = config.load_named_config(DEFAULT_CONFIG_PATH, DEFAULT_CONFIG)
    app.config.update(cfg)
    app.run(host, port=int(port), debug=bool(debug))


if __name__ == '__main__':
    launch()
diff --git a/swh/indexer/storage/converters.py b/swh/indexer/storage/converters.py
index 3cf5da1..65859fc 100644
--- a/swh/indexer/storage/converters.py
+++ b/swh/indexer/storage/converters.py
@@ -1,139 +1,138 @@
# Copyright (C) 2015-2017  The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information


def ctags_to_db(ctags):
    """Convert a ctags entry into a ready ctags entry.
Args: ctags (dict): ctags entry with the following keys: - id (bytes): content's identifier - tool_id (int): tool id used to compute ctags - ctags ([dict]): List of dictionaries with the following keys: - name (str): symbol's name - kind (str): symbol's kind - line (int): symbol's line in the content - language (str): language Returns: list: list of ctags entries as dicts with the following keys: - id (bytes): content's identifier - name (str): symbol's name - kind (str): symbol's kind - language (str): language for that content - tool_id (int): tool id used to compute ctags """ id = ctags['id'] tool_id = ctags['indexer_configuration_id'] for ctag in ctags['ctags']: yield { 'id': id, 'name': ctag['name'], 'kind': ctag['kind'], 'line': ctag['line'], 'lang': ctag['lang'], 'indexer_configuration_id': tool_id, } def db_to_ctags(ctag): """Convert a ctags row from the database into a ready ctags entry. Args: ctag (dict): ctags row with the following keys: - id (bytes): content's identifier - name (str): symbol's name - kind (str): symbol's kind - line (int): symbol's line in the content - lang (str): language - tool_id (int), tool_name (str), tool_version (str), tool_configuration (dict): tool used to compute the ctags Returns: dict: ctags ready entry with the following keys: - id (bytes): content's identifier - name (str): symbol's name - kind (str): symbol's kind - language (str): language for that content - tool (dict): tool used to compute the ctags """ return { 'id': ctag['id'], 'name': ctag['name'], 'kind': ctag['kind'], 'line': ctag['line'], 'lang': ctag['lang'], 'tool': { 'id': ctag['tool_id'], 'name': ctag['tool_name'], 'version': ctag['tool_version'], 'configuration': ctag['tool_configuration'] } } def db_to_mimetype(mimetype): """Convert a mimetype row from the database into a ready mimetype output. 
""" return { 'id': mimetype['id'], 'encoding': mimetype['encoding'], 'mimetype': mimetype['mimetype'], 'tool': { 'id': mimetype['tool_id'], 'name': mimetype['tool_name'], 'version': mimetype['tool_version'], 'configuration': mimetype['tool_configuration'] } } def db_to_language(language): """Convert a language entry into a ready language output. """ return { 'id': language['id'], 'lang': language['lang'], 'tool': { 'id': language['tool_id'], 'name': language['tool_name'], 'version': language['tool_version'], 'configuration': language['tool_configuration'] } } def db_to_metadata(metadata): """Convert a metadata entry into a ready metadata output. """ - return { - 'id': metadata['id'], - 'translated_metadata': metadata['translated_metadata'], - 'tool': { - 'id': metadata['tool_id'], - 'name': metadata['tool_name'], - 'version': metadata['tool_version'], - 'configuration': metadata['tool_configuration'] - } + metadata['tool'] = { + 'id': metadata['tool_id'], + 'name': metadata['tool_name'], + 'version': metadata['tool_version'], + 'configuration': metadata['tool_configuration'] } + del metadata['tool_id'], metadata['tool_configuration'] + del metadata['tool_version'], metadata['tool_name'] + return metadata def db_to_fossology_license(license): return { 'licenses': license['licenses'], 'tool': { 'id': license['tool_id'], 'name': license['tool_name'], 'version': license['tool_version'], 'configuration': license['tool_configuration'], } } diff --git a/swh/indexer/storage/db.py b/swh/indexer/storage/db.py index 3c17d78..48b9b61 100644 --- a/swh/indexer/storage/db.py +++ b/swh/indexer/storage/db.py @@ -1,305 +1,334 @@ # Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from swh.model import hashutil from swh.storage.db import BaseDb, stored_procedure, cursor_to_bytes from 
swh.storage.db import line_to_bytes, execute_values_to_bytes class Db(BaseDb): """Proxy to the SWH Indexer DB, with wrappers around stored procedures """ content_mimetype_hash_keys = ['id', 'indexer_configuration_id'] def _missing_from_list(self, table, data, hash_keys, cur=None): """Read from table the data with hash_keys that are missing. Args: table (str): Table name (e.g content_mimetype, content_language, etc...) data (dict): Dict of data to read from hash_keys ([str]): List of keys to read in the data dict. Yields: The data which is missing from the db. """ cur = self._cursor(cur) keys = ', '.join(hash_keys) equality = ' AND '.join( ('t.%s = c.%s' % (key, key)) for key in hash_keys ) yield from execute_values_to_bytes( cur, """ select %s from (values %%s) as t(%s) where not exists ( select 1 from %s c where %s ) """ % (keys, keys, table, equality), (tuple(m[k] for k in hash_keys) for m in data) ) def content_mimetype_missing_from_list(self, mimetypes, cur=None): """List missing mimetypes. """ yield from self._missing_from_list( 'content_mimetype', mimetypes, self.content_mimetype_hash_keys, cur=cur) content_mimetype_cols = [ 'id', 'mimetype', 'encoding', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_mimetype') def mktemp_content_mimetype(self, cur=None): pass def content_mimetype_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_mimetype_add(%s)", (conflict_update, )) def _convert_key(self, key, main_table='c'): """Convert keys according to specific use in the module. Args: key (str): Key expression to change according to the alias used in the query main_table (str): Alias to use for the main table. Default to c for content_{something}. Expected: Tables content_{something} being aliased as 'c' (something in {language, mimetype, ...}), table indexer_configuration being aliased as 'i'. 
""" if key == 'id': return '%s.id' % main_table elif key == 'tool_id': return 'i.id as tool_id' elif key == 'licenses': return ''' array(select name from fossology_license where id = ANY( array_agg(%s.license_id))) as licenses''' % main_table return key - def _get_from_list(self, table, ids, cols, cur=None): + def _get_from_list(self, table, ids, cols, cur=None, id_col='id'): + """Fetches entries from the `table` such that their `id` field + (or whatever is given to `id_col`) is in `ids`. + Returns the columns `cols`. + The `cur`sor is used to connect to the database. + """ cur = self._cursor(cur) keys = map(self._convert_key, cols) - yield from execute_values_to_bytes( - cur, """ - select %s - from (values %%s) as t(id) - inner join %s c - on c.id=t.id + query = """ + select {keys} + from (values %s) as t(id) + inner join {table} c + on c.{id_col}=t.id inner join indexer_configuration i on c.indexer_configuration_id=i.id; - """ % (', '.join(keys), table), + """.format( + keys=', '.join(keys), + id_col=id_col, + table=table) + yield from execute_values_to_bytes( + cur, query, ((_id,) for _id in ids) ) def content_mimetype_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'content_mimetype', ids, self.content_mimetype_cols, cur=cur) content_language_hash_keys = ['id', 'indexer_configuration_id'] def content_language_missing_from_list(self, languages, cur=None): """List missing languages. 
""" yield from self._missing_from_list( 'content_language', languages, self.content_language_hash_keys, cur=cur) content_language_cols = [ 'id', 'lang', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_language') def mktemp_content_language(self, cur=None): pass def content_language_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_language_add(%s)", (conflict_update, )) def content_language_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'content_language', ids, self.content_language_cols, cur=cur) content_ctags_hash_keys = ['id', 'indexer_configuration_id'] def content_ctags_missing_from_list(self, ctags, cur=None): """List missing ctags. """ yield from self._missing_from_list( 'content_ctags', ctags, self.content_ctags_hash_keys, cur=cur) content_ctags_cols = [ 'id', 'name', 'kind', 'line', 'lang', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_ctags') def mktemp_content_ctags(self, cur=None): pass def content_ctags_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_ctags_add(%s)", (conflict_update, )) def content_ctags_get_from_list(self, ids, cur=None): cur = self._cursor(cur) keys = map(self._convert_key, self.content_ctags_cols) yield from execute_values_to_bytes( cur, """ select %s from (values %%s) as t(id) inner join content_ctags c on c.id=t.id inner join indexer_configuration i on c.indexer_configuration_id=i.id order by line """ % ', '.join(keys), ((_id,) for _id in ids) ) def content_ctags_search(self, expression, last_sha1, limit, cur=None): cur = self._cursor(cur) if not last_sha1: query = """SELECT %s FROM swh_content_ctags_search(%%s, %%s)""" % ( ','.join(self.content_ctags_cols)) cur.execute(query, (expression, limit)) else: if last_sha1 and isinstance(last_sha1, bytes): last_sha1 = '\\x%s' % hashutil.hash_to_hex(last_sha1) elif last_sha1: 
last_sha1 = '\\x%s' % last_sha1 query = """SELECT %s FROM swh_content_ctags_search(%%s, %%s, %%s)""" % ( ','.join(self.content_ctags_cols)) cur.execute(query, (expression, limit, last_sha1)) yield from cursor_to_bytes(cur) content_fossology_license_cols = [ 'id', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration', 'licenses'] @stored_procedure('swh_mktemp_content_fossology_license') def mktemp_content_fossology_license(self, cur=None): pass def content_fossology_license_add_from_temp(self, conflict_update, cur=None): """Add new licenses per content. """ self._cursor(cur).execute( "SELECT swh_content_fossology_license_add(%s)", (conflict_update, )) def content_fossology_license_get_from_list(self, ids, cur=None): """Retrieve licenses per id. """ cur = self._cursor(cur) keys = map(self._convert_key, self.content_fossology_license_cols) yield from execute_values_to_bytes( cur, """ select %s from (values %%s) as t(id) inner join content_fossology_license c on t.id=c.id inner join indexer_configuration i on i.id=c.indexer_configuration_id group by c.id, i.id, i.tool_name, i.tool_version, i.tool_configuration; """ % ', '.join(keys), ((_id,) for _id in ids) ) content_metadata_hash_keys = ['id', 'indexer_configuration_id'] def content_metadata_missing_from_list(self, metadata, cur=None): """List missing metadata. 
""" yield from self._missing_from_list( 'content_metadata', metadata, self.content_metadata_hash_keys, cur=cur) content_metadata_cols = [ 'id', 'translated_metadata', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_metadata') def mktemp_content_metadata(self, cur=None): pass def content_metadata_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_metadata_add(%s)", (conflict_update, )) def content_metadata_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'content_metadata', ids, self.content_metadata_cols, cur=cur) revision_metadata_hash_keys = ['id', 'indexer_configuration_id'] def revision_metadata_missing_from_list(self, metadata, cur=None): """List missing metadata. """ yield from self._missing_from_list( 'revision_metadata', metadata, self.revision_metadata_hash_keys, cur=cur) revision_metadata_cols = [ 'id', 'translated_metadata', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_revision_metadata') def mktemp_revision_metadata(self, cur=None): pass def revision_metadata_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_revision_metadata_add(%s)", (conflict_update, )) def revision_metadata_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'revision_metadata', ids, self.revision_metadata_cols, cur=cur) + origin_intrinsic_metadata_cols = [ + 'origin_id', 'metadata', 'from_revision', + 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] + + @stored_procedure('swh_mktemp_origin_intrinsic_metadata') + def mktemp_origin_intrinsic_metadata(self, cur=None): pass + + def origin_intrinsic_metadata_add_from_temp( + self, conflict_update, cur=None): + cur = self._cursor(cur) + cur.execute( + "SELECT swh_origin_intrinsic_metadata_add(%s)", + (conflict_update, )) + + def origin_intrinsic_metadata_get_from_list(self, orig_ids, cur=None): + yield from 
self._get_from_list( + 'origin_intrinsic_metadata', orig_ids, + self.origin_intrinsic_metadata_cols, cur=cur, + id_col='origin_id') + indexer_configuration_cols = ['id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_indexer_configuration') def mktemp_indexer_configuration(self, cur=None): pass def indexer_configuration_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute("SELECT %s from swh_indexer_configuration_add()" % ( ','.join(self.indexer_configuration_cols), )) yield from cursor_to_bytes(cur) def indexer_configuration_get(self, tool_name, tool_version, tool_configuration, cur=None): cur = self._cursor(cur) cur.execute('''select %s from indexer_configuration where tool_name=%%s and tool_version=%%s and tool_configuration=%%s''' % ( ','.join(self.indexer_configuration_cols)), (tool_name, tool_version, tool_configuration)) data = cur.fetchone() if not data: return None return line_to_bytes(data) diff --git a/swh/indexer/tasks.py b/swh/indexer/tasks.py index deddd8b..1be3bed 100644 --- a/swh/indexer/tasks.py +++ b/swh/indexer/tasks.py @@ -1,90 +1,105 @@ # Copyright (C) 2016-2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import logging -from swh.scheduler.task import Task +import celery from .orchestrator import OrchestratorAllContentsIndexer from .orchestrator import OrchestratorTextContentsIndexer from .mimetype import ContentMimetypeIndexer from .language import ContentLanguageIndexer from .ctags import CtagsIndexer from .fossology_license import ContentFossologyLicenseIndexer from .rehash import RecomputeChecksums +from .metadata import RevisionMetadataIndexer, OriginMetadataIndexer logging.basicConfig(level=logging.INFO) -class SWHOrchestratorAllContentsTask(Task): +class Task(celery.Task): + def run_task(self, *args, **kwargs): + 
indexer = self.Indexer().run(*args, **kwargs) + return indexer.results + + +class OrchestratorAllContents(Task): """Main task in charge of reading batch contents (of any type) and broadcasting them back to other tasks. """ task_queue = 'swh_indexer_orchestrator_content_all' - def run_task(self, *args, **kwargs): - OrchestratorAllContentsIndexer().run(*args, **kwargs) + Indexer = OrchestratorAllContentsIndexer -class SWHOrchestratorTextContentsTask(Task): +class OrchestratorTextContents(Task): """Main task in charge of reading batch contents (of type text) and broadcasting them back to other tasks. """ task_queue = 'swh_indexer_orchestrator_content_text' - def run_task(self, *args, **kwargs): - OrchestratorTextContentsIndexer().run(*args, **kwargs) + Indexer = OrchestratorTextContentsIndexer + + +class RevisionMetadata(Task): + task_queue = 'swh_indexer_revision_metadata' + + serializer = 'msgpack' + Indexer = RevisionMetadataIndexer -class SWHContentMimetypeTask(Task): + +class OriginMetadata(Task): + task_queue = 'swh_indexer_origin_intrinsic_metadata' + + Indexer = OriginMetadataIndexer + + +class ContentMimetype(Task): """Task which computes the mimetype, encoding from the sha1's content. """ task_queue = 'swh_indexer_content_mimetype' - def run_task(self, *args, **kwargs): - ContentMimetypeIndexer().run(*args, **kwargs) + Indexer = ContentMimetypeIndexer -class SWHContentLanguageTask(Task): +class ContentLanguage(Task): """Task which computes the language from the sha1's content. """ task_queue = 'swh_indexer_content_language' def run_task(self, *args, **kwargs): ContentLanguageIndexer().run(*args, **kwargs) -class SWHCtagsTask(Task): +class Ctags(Task): """Task which computes ctags from the sha1's content. 
""" task_queue = 'swh_indexer_content_ctags' - def run_task(self, *args, **kwargs): - CtagsIndexer().run(*args, **kwargs) + Indexer = CtagsIndexer -class SWHContentFossologyLicenseTask(Task): +class ContentFossologyLicense(Task): """Task which computes licenses from the sha1's content. """ task_queue = 'swh_indexer_content_fossology_license' - def run_task(self, *args, **kwargs): - ContentFossologyLicenseIndexer().run(*args, **kwargs) + Indexer = ContentFossologyLicenseIndexer -class SWHRecomputeChecksumsTask(Task): +class RecomputeChecksums(Task): """Task which recomputes hashes and possibly new ones. """ task_queue = 'swh_indexer_content_rehash' - def run_task(self, *args, **kwargs): - RecomputeChecksums().run(*args, **kwargs) + Indexer = RecomputeChecksums diff --git a/swh/indexer/tests/__init__.py b/swh/indexer/tests/__init__.py index e69de29..35c2fa8 100644 --- a/swh/indexer/tests/__init__.py +++ b/swh/indexer/tests/__init__.py @@ -0,0 +1,15 @@ +import swh.indexer +from os import path + +from celery.contrib.testing.worker import start_worker +import celery.contrib.testing.tasks # noqa + +from swh.scheduler.celery_backend.config import app + +__all__ = ['start_worker_thread'] + +SQL_DIR = path.join(path.dirname(swh.indexer.__file__), 'sql') + + +def start_worker_thread(): + return start_worker(app) diff --git a/swh/indexer/tests/storage/test_api_client.py b/swh/indexer/tests/storage/test_api_client.py index 90caa35..19c3880 100644 --- a/swh/indexer/tests/storage/test_api_client.py +++ b/swh/indexer/tests/storage/test_api_client.py @@ -1,38 +1,38 @@ # Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest from swh.core.tests.server_testing import ServerTestFixture from swh.indexer.storage import INDEXER_CFG_KEY from swh.indexer.storage.api.client import 
RemoteStorage from swh.indexer.storage.api.server import app from .test_storage import CommonTestStorage class TestRemoteStorage(CommonTestStorage, ServerTestFixture, unittest.TestCase): """Test the indexer's remote storage API. This class doesn't define any tests as we want identical functionality between local and remote storage. All the tests are therefore defined in `class`:swh.indexer.storage.test_storage.CommonTestStorage. """ def setUp(self): self.config = { INDEXER_CFG_KEY: { 'cls': 'local', 'args': { - 'db': 'dbname=%s' % self.TEST_STORAGE_DB_NAME, + 'db': 'dbname=%s' % self.TEST_DB_NAME, } } } self.app = app super().setUp() self.storage = RemoteStorage(self.url()) diff --git a/swh/indexer/tests/storage/test_converters.py b/swh/indexer/tests/storage/test_converters.py index a5d6bfa..90d466b 100644 --- a/swh/indexer/tests/storage/test_converters.py +++ b/swh/indexer/tests/storage/test_converters.py @@ -1,198 +1,188 @@ -# Copyright (C) 2015-2017 The Software Heritage developers +# Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest -from nose.tools import istest -from nose.plugins.attrib import attr - from swh.indexer.storage import converters -@attr('!db') class TestConverters(unittest.TestCase): def setUp(self): self.maxDiff = None - @istest - def ctags_to_db(self): + def test_ctags_to_db(self): input_ctag = { 'id': b'some-id', 'indexer_configuration_id': 100, 'ctags': [ { 'name': 'some-name', 'kind': 'some-kind', 'line': 10, 'lang': 'Yaml', }, { 'name': 'main', 'kind': 'function', 'line': 12, 'lang': 'Yaml', }, ] } expected_ctags = [ { 'id': b'some-id', 'name': 'some-name', 'kind': 'some-kind', 'line': 10, 'lang': 'Yaml', 'indexer_configuration_id': 100, }, { 'id': b'some-id', 'name': 'main', 'kind': 'function', 'line': 12, 'lang': 'Yaml', 
'indexer_configuration_id': 100, }] # when actual_ctags = list(converters.ctags_to_db(input_ctag)) # then - self.assertEquals(actual_ctags, expected_ctags) + self.assertEqual(actual_ctags, expected_ctags) - @istest - def db_to_ctags(self): + def test_db_to_ctags(self): input_ctags = { 'id': b'some-id', 'name': 'some-name', 'kind': 'some-kind', 'line': 10, 'lang': 'Yaml', 'tool_id': 200, 'tool_name': 'some-toolname', 'tool_version': 'some-toolversion', 'tool_configuration': {} } expected_ctags = { 'id': b'some-id', 'name': 'some-name', 'kind': 'some-kind', 'line': 10, 'lang': 'Yaml', 'tool': { 'id': 200, 'name': 'some-toolname', 'version': 'some-toolversion', 'configuration': {}, } } # when actual_ctags = converters.db_to_ctags(input_ctags) # then - self.assertEquals(actual_ctags, expected_ctags) + self.assertEqual(actual_ctags, expected_ctags) - @istest - def db_to_mimetype(self): + def test_db_to_mimetype(self): input_mimetype = { 'id': b'some-id', 'tool_id': 10, 'tool_name': 'some-toolname', 'tool_version': 'some-toolversion', 'tool_configuration': {}, 'encoding': b'ascii', 'mimetype': b'text/plain', } expected_mimetype = { 'id': b'some-id', 'encoding': b'ascii', 'mimetype': b'text/plain', 'tool': { 'id': 10, 'name': 'some-toolname', 'version': 'some-toolversion', 'configuration': {}, } } actual_mimetype = converters.db_to_mimetype(input_mimetype) - self.assertEquals(actual_mimetype, expected_mimetype) + self.assertEqual(actual_mimetype, expected_mimetype) - @istest - def db_to_language(self): + def test_db_to_language(self): input_language = { 'id': b'some-id', 'tool_id': 20, 'tool_name': 'some-toolname', 'tool_version': 'some-toolversion', 'tool_configuration': {}, 'lang': b'css', } expected_language = { 'id': b'some-id', 'lang': b'css', 'tool': { 'id': 20, 'name': 'some-toolname', 'version': 'some-toolversion', 'configuration': {}, } } actual_language = converters.db_to_language(input_language) - self.assertEquals(actual_language, expected_language) + 
self.assertEqual(actual_language, expected_language) - @istest - def db_to_fossology_license(self): + def test_db_to_fossology_license(self): input_license = { 'id': b'some-id', 'tool_id': 20, 'tool_name': 'nomossa', 'tool_version': '5.22', 'tool_configuration': {}, 'licenses': ['GPL2.0'], } expected_license = { 'licenses': ['GPL2.0'], 'tool': { 'id': 20, 'name': 'nomossa', 'version': '5.22', 'configuration': {}, } } actual_license = converters.db_to_fossology_license(input_license) - self.assertEquals(actual_license, expected_license) + self.assertEqual(actual_license, expected_license) - @istest - def db_to_metadata(self): + def test_db_to_metadata(self): input_metadata = { 'id': b'some-id', 'tool_id': 20, 'tool_name': 'some-toolname', 'tool_version': 'some-toolversion', 'tool_configuration': {}, 'translated_metadata': b'translated_metadata', } expected_metadata = { 'id': b'some-id', 'translated_metadata': b'translated_metadata', 'tool': { 'id': 20, 'name': 'some-toolname', 'version': 'some-toolversion', 'configuration': {}, } } actual_metadata = converters.db_to_metadata(input_metadata) - self.assertEquals(actual_metadata, expected_metadata) + self.assertEqual(actual_metadata, expected_metadata) diff --git a/swh/indexer/tests/storage/test_storage.py b/swh/indexer/tests/storage/test_storage.py index 7b97b61..ab343e2 100644 --- a/swh/indexer/tests/storage/test_storage.py +++ b/swh/indexer/tests/storage/test_storage.py @@ -1,1487 +1,1624 @@ # Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information -import pathlib +import os import unittest -from nose.tools import istest -from nose.plugins.attrib import attr from swh.model.hashutil import hash_to_bytes from swh.indexer.storage import get_indexer_storage -from swh.core.tests.db_testing import DbTestFixture +from 
swh.core.tests.db_testing import SingleDbTestFixture +from swh.indexer.tests import SQL_DIR +import pytest -PATH_TO_STORAGE_TEST_DATA = '../../../../../swh-storage-testdata' +@pytest.mark.db +class BaseTestStorage(SingleDbTestFixture): + """Base test class for most indexer tests. -class StorageTestFixture: - """Mix this in a test subject class to get Storage testing support. - - This fixture requires to come before DbTestFixture in the inheritance list - as it uses its methods to setup its own internal database. - - Usage example: - - class TestStorage(StorageTestFixture, DbTestFixture): - ... + It adds support for Storage testing to the SingleDbTestFixture class. + It will also build the database from the swh-indexer/sql/*.sql files. """ - TEST_STORAGE_DB_NAME = 'softwareheritage-test-indexer' - - @classmethod - def setUpClass(cls): - if not hasattr(cls, 'DB_TEST_FIXTURE_IMPORTED'): - raise RuntimeError("StorageTestFixture needs to be followed by " - "DbTestFixture in the inheritance list.") - test_dir = pathlib.Path(__file__).absolute().parent - test_data_dir = test_dir / PATH_TO_STORAGE_TEST_DATA - test_db_dump = (test_data_dir / 'dumps/swh-indexer.dump').absolute() - cls.add_db(cls.TEST_STORAGE_DB_NAME, str(test_db_dump), 'pg_dump') - super().setUpClass() + TEST_DB_NAME = 'softwareheritage-test-indexer' + TEST_DB_DUMP = os.path.join(SQL_DIR, '*.sql') def setUp(self): super().setUp() - self.storage_config = { 'cls': 'local', 'args': { - 'db': 'dbname=%s' % self.TEST_STORAGE_DB_NAME, + 'db': 'dbname=%s' % self.TEST_DB_NAME, }, } self.storage = get_indexer_storage(**self.storage_config) - def tearDown(self): - self.storage = None - super().tearDown() - - def reset_storage_tables(self): - excluded = {'indexer_configuration'} - self.reset_db_tables(self.TEST_STORAGE_DB_NAME, excluded=excluded) - - db = self.test_db[self.TEST_STORAGE_DB_NAME] - db.conn.commit() - - -@attr('db') -class BaseTestStorage(StorageTestFixture, DbTestFixture): - - def setUp(self): - 
super().setUp() - self.sha1_1 = hash_to_bytes('34973274ccef6ab4dfaaf86599792fa9c3fe4689') self.sha1_2 = hash_to_bytes('61c2b3a30496d329e21af70dd2d7e097046d07b7') self.revision_id_1 = hash_to_bytes( '7026b7c1a2af56521e951c01ed20f255fa054238') self.revision_id_2 = hash_to_bytes( '7026b7c1a2af56521e9587659012345678904321') + self.origin_id_1 = 54974445 - cur = self.test_db[self.TEST_STORAGE_DB_NAME].cursor + cur = self.test_db[self.TEST_DB_NAME].cursor tools = {} cur.execute(''' select tool_name, id, tool_version, tool_configuration from indexer_configuration order by id''') for row in cur.fetchall(): key = row[0] while key in tools: key = '_' + key tools[key] = { 'id': row[1], 'name': row[0], 'version': row[2], 'configuration': row[3] } self.tools = tools def tearDown(self): self.reset_storage_tables() + self.storage = None super().tearDown() + def reset_storage_tables(self): + excluded = {'indexer_configuration'} + self.reset_db_tables(self.TEST_DB_NAME, excluded=excluded) + + db = self.test_db[self.TEST_DB_NAME] + db.conn.commit() + -@attr('db') +@pytest.mark.db class CommonTestStorage(BaseTestStorage): """Base class for Indexer Storage testing. 
""" - @istest - def check_config(self): + def test_check_config(self): self.assertTrue(self.storage.check_config(check_write=True)) self.assertTrue(self.storage.check_config(check_write=False)) - @istest - def content_mimetype_missing(self): + def test_content_mimetype_missing(self): # given tool_id = self.tools['file']['id'] mimetypes = [ { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }] # when actual_missing = self.storage.content_mimetype_missing(mimetypes) # then self.assertEqual(list(actual_missing), [ self.sha1_1, self.sha1_2, ]) # given self.storage.content_mimetype_add([{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, }]) # when actual_missing = self.storage.content_mimetype_missing(mimetypes) # then self.assertEqual(list(actual_missing), [self.sha1_1]) - @istest - def content_mimetype_add__drop_duplicate(self): + def test_content_mimetype_add__drop_duplicate(self): # given tool_id = self.tools['file']['id'] mimetype_v1 = { 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, } # given self.storage.content_mimetype_add([mimetype_v1]) # when actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) # then expected_mimetypes_v1 = [{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'tool': self.tools['file'], }] self.assertEqual(actual_mimetypes, expected_mimetypes_v1) # given mimetype_v2 = mimetype_v1.copy() mimetype_v2.update({ 'mimetype': b'text/html', 'encoding': b'us-ascii', }) self.storage.content_mimetype_add([mimetype_v2]) actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) # mimetype did not change as the v2 was dropped. 
self.assertEqual(actual_mimetypes, expected_mimetypes_v1) - @istest - def content_mimetype_add__update_in_place_duplicate(self): + def test_content_mimetype_add__update_in_place_duplicate(self): # given tool_id = self.tools['file']['id'] mimetype_v1 = { 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, } # given self.storage.content_mimetype_add([mimetype_v1]) # when actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) expected_mimetypes_v1 = [{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'tool': self.tools['file'], }] # then self.assertEqual(actual_mimetypes, expected_mimetypes_v1) # given mimetype_v2 = mimetype_v1.copy() mimetype_v2.update({ 'mimetype': b'text/html', 'encoding': b'us-ascii', }) self.storage.content_mimetype_add([mimetype_v2], conflict_update=True) actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) expected_mimetypes_v2 = [{ 'id': self.sha1_2, 'mimetype': b'text/html', 'encoding': b'us-ascii', 'tool': { 'id': 2, 'name': 'file', 'version': '5.22', 'configuration': {'command_line': 'file --mime '} } }] # mimetype did change as the v2 was used to overwrite v1 self.assertEqual(actual_mimetypes, expected_mimetypes_v2) - @istest - def content_mimetype_get(self): + def test_content_mimetype_get(self): # given tool_id = self.tools['file']['id'] mimetypes = [self.sha1_2, self.sha1_1] mimetype1 = { 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, } # when self.storage.content_mimetype_add([mimetype1]) # then actual_mimetypes = list(self.storage.content_mimetype_get(mimetypes)) # then expected_mimetypes = [{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'tool': self.tools['file'] }] self.assertEqual(actual_mimetypes, expected_mimetypes) - @istest - def content_language_missing(self): + def test_content_language_missing(self): # given tool_id = 
self.tools['pygments']['id'] languages = [ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, } ] # when actual_missing = list(self.storage.content_language_missing(languages)) # then self.assertEqual(list(actual_missing), [ self.sha1_2, self.sha1_1, ]) # given self.storage.content_language_add([{ 'id': self.sha1_2, 'lang': 'haskell', 'indexer_configuration_id': tool_id, }]) # when actual_missing = list(self.storage.content_language_missing(languages)) # then self.assertEqual(actual_missing, [self.sha1_1]) - @istest - def content_language_get(self): + def test_content_language_get(self): # given tool_id = self.tools['pygments']['id'] language1 = { 'id': self.sha1_2, 'lang': 'common-lisp', 'indexer_configuration_id': tool_id, } # when self.storage.content_language_add([language1]) # then actual_languages = list(self.storage.content_language_get( [self.sha1_2, self.sha1_1])) # then expected_languages = [{ 'id': self.sha1_2, 'lang': 'common-lisp', 'tool': self.tools['pygments'] }] self.assertEqual(actual_languages, expected_languages) - @istest - def content_language_add__drop_duplicate(self): + def test_content_language_add__drop_duplicate(self): # given tool_id = self.tools['pygments']['id'] language_v1 = { 'id': self.sha1_2, 'lang': 'emacslisp', 'indexer_configuration_id': tool_id, } # given self.storage.content_language_add([language_v1]) # when actual_languages = list(self.storage.content_language_get( [self.sha1_2])) # then expected_languages_v1 = [{ 'id': self.sha1_2, 'lang': 'emacslisp', 'tool': self.tools['pygments'] }] self.assertEqual(actual_languages, expected_languages_v1) # given language_v2 = language_v1.copy() language_v2.update({ 'lang': 'common-lisp', }) self.storage.content_language_add([language_v2]) actual_languages = list(self.storage.content_language_get( [self.sha1_2])) # language did not change as the v2 was dropped. 
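The `*_missing` queries tested above all follow the same shape; a hedged sketch (pure-Python stand-in for the SQL query, with made-up sha1 strings): given (id, tool) pairs, return the ids that have no stored row yet for that tool.

```python
# Hypothetical sketch of the *_missing computation.
def missing(stored_keys, queries):
    """Return ids from `queries` with no row for their tool yet."""
    return [q['id'] for q in queries
            if (q['id'], q['indexer_configuration_id']) not in stored_keys]

stored_keys = set()
queries = [{'id': 'sha1_2', 'indexer_configuration_id': 20},
           {'id': 'sha1_1', 'indexer_configuration_id': 20}]
assert missing(stored_keys, queries) == ['sha1_2', 'sha1_1']
stored_keys.add(('sha1_2', 20))  # after e.g. content_language_add
assert missing(stored_keys, queries) == ['sha1_1']
```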
self.assertEqual(actual_languages, expected_languages_v1) - @istest - def content_language_add__update_in_place_duplicate(self): + def test_content_language_add__update_in_place_duplicate(self): # given tool_id = self.tools['pygments']['id'] language_v1 = { 'id': self.sha1_2, 'lang': 'common-lisp', 'indexer_configuration_id': tool_id, } # given self.storage.content_language_add([language_v1]) # when actual_languages = list(self.storage.content_language_get( [self.sha1_2])) # then expected_languages_v1 = [{ 'id': self.sha1_2, 'lang': 'common-lisp', 'tool': self.tools['pygments'] }] self.assertEqual(actual_languages, expected_languages_v1) # given language_v2 = language_v1.copy() language_v2.update({ 'lang': 'emacslisp', }) self.storage.content_language_add([language_v2], conflict_update=True) actual_languages = list(self.storage.content_language_get( [self.sha1_2])) expected_languages_v2 = [{ 'id': self.sha1_2, 'lang': 'emacslisp', 'tool': self.tools['pygments'] }] # language did change as the v2 was used to overwrite v1 self.assertEqual(actual_languages, expected_languages_v2) - @istest - def content_ctags_missing(self): + def test_content_ctags_missing(self): # given tool_id = self.tools['universal-ctags']['id'] ctags = [ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, } ] # when actual_missing = self.storage.content_ctags_missing(ctags) # then self.assertEqual(list(actual_missing), [ self.sha1_2, self.sha1_1 ]) # given self.storage.content_ctags_add([ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [{ 'name': 'done', 'kind': 'variable', 'line': 119, 'lang': 'OCaml', }] }, ]) # when actual_missing = self.storage.content_ctags_missing(ctags) # then self.assertEqual(list(actual_missing), [self.sha1_1]) - @istest - def content_ctags_get(self): + def test_content_ctags_get(self): # given tool_id =
self.tools['universal-ctags']['id'] ctags = [self.sha1_2, self.sha1_1] ctag1 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [ { 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Python', }, { 'name': 'main', 'kind': 'function', 'line': 119, 'lang': 'Python', }] } # when self.storage.content_ctags_add([ctag1]) # then actual_ctags = list(self.storage.content_ctags_get(ctags)) # then expected_ctags = [ { 'id': self.sha1_2, 'tool': self.tools['universal-ctags'], 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Python', }, { 'id': self.sha1_2, 'tool': self.tools['universal-ctags'], 'name': 'main', 'kind': 'function', 'line': 119, 'lang': 'Python', } ] self.assertEqual(actual_ctags, expected_ctags) - @istest - def content_ctags_search(self): + def test_content_ctags_search(self): # 1. given tool = self.tools['universal-ctags'] tool_id = tool['id'] ctag1 = { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, 'ctags': [ { 'name': 'hello', 'kind': 'function', 'line': 133, 'lang': 'Python', }, { 'name': 'counter', 'kind': 'variable', 'line': 119, 'lang': 'Python', }, ] } ctag2 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [ { 'name': 'hello', 'kind': 'variable', 'line': 100, 'lang': 'C', }, ] } self.storage.content_ctags_add([ctag1, ctag2]) # 1. when actual_ctags = list(self.storage.content_ctags_search('hello', limit=1)) # 1. then self.assertEqual(actual_ctags, [ { 'id': ctag1['id'], 'tool': tool, 'name': 'hello', 'kind': 'function', 'line': 133, 'lang': 'Python', } ]) # 2. when actual_ctags = list(self.storage.content_ctags_search( 'hello', limit=1, last_sha1=ctag1['id'])) # 2. then self.assertEqual(actual_ctags, [ { 'id': ctag2['id'], 'tool': tool, 'name': 'hello', 'kind': 'variable', 'line': 100, 'lang': 'C', } ]) # 3. when actual_ctags = list(self.storage.content_ctags_search('hello')) # 3. 
then self.assertEqual(actual_ctags, [ { 'id': ctag1['id'], 'tool': tool, 'name': 'hello', 'kind': 'function', 'line': 133, 'lang': 'Python', }, { 'id': ctag2['id'], 'tool': tool, 'name': 'hello', 'kind': 'variable', 'line': 100, 'lang': 'C', }, ]) # 4. when actual_ctags = list(self.storage.content_ctags_search('counter')) # then self.assertEqual(actual_ctags, [{ 'id': ctag1['id'], 'tool': tool, 'name': 'counter', 'kind': 'variable', 'line': 119, 'lang': 'Python', }]) - @istest - def content_ctags_search_no_result(self): + def test_content_ctags_search_no_result(self): actual_ctags = list(self.storage.content_ctags_search('counter')) - self.assertEquals(actual_ctags, []) + self.assertEqual(actual_ctags, []) - @istest - def content_ctags_add__add_new_ctags_added(self): + def test_content_ctags_add__add_new_ctags_added(self): # given tool = self.tools['universal-ctags'] tool_id = tool['id'] ctag_v1 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [{ 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', }] } # given self.storage.content_ctags_add([ctag_v1]) self.storage.content_ctags_add([ctag_v1]) # conflict does nothing # when actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) # then expected_ctags = [{ 'id': self.sha1_2, 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool, }] self.assertEqual(actual_ctags, expected_ctags) # given ctag_v2 = ctag_v1.copy() ctag_v2.update({ 'ctags': [ { 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', } ] }) self.storage.content_ctags_add([ctag_v2]) expected_ctags = [ { 'id': self.sha1_2, 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool, }, { 'id': self.sha1_2, 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', 'tool': tool, } ] actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) self.assertEqual(actual_ctags, expected_ctags) - @istest - def 
content_ctags_add__update_in_place(self): + def test_content_ctags_add__update_in_place(self): # given tool = self.tools['universal-ctags'] tool_id = tool['id'] ctag_v1 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [{ 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', }] } # given self.storage.content_ctags_add([ctag_v1]) # when actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) # then expected_ctags = [ { 'id': self.sha1_2, 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool } ] self.assertEqual(actual_ctags, expected_ctags) # given ctag_v2 = ctag_v1.copy() ctag_v2.update({ 'ctags': [ { 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', }, { 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', } ] }) self.storage.content_ctags_add([ctag_v2], conflict_update=True) actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) # ctag did change as the v2 was used to overwrite v1 expected_ctags = [ { 'id': self.sha1_2, 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool, }, { 'id': self.sha1_2, 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', 'tool': tool, } ] self.assertEqual(actual_ctags, expected_ctags) - @istest - def content_fossology_license_get(self): + def test_content_fossology_license_get(self): # given tool = self.tools['nomos'] tool_id = tool['id'] license1 = { 'id': self.sha1_1, 'licenses': ['GPL-2.0+'], 'indexer_configuration_id': tool_id, } # when self.storage.content_fossology_license_add([license1]) # then actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_2, self.sha1_1])) expected_license = { self.sha1_1: [{ 'licenses': ['GPL-2.0+'], 'tool': tool, }] } # then self.assertEqual(actual_licenses, [expected_license]) - @istest - def content_fossology_license_add__new_license_added(self): + def test_content_fossology_license_add__new_license_added(self): # given 
tool = self.tools['nomos'] tool_id = tool['id'] license_v1 = { 'id': self.sha1_1, 'licenses': ['Apache-2.0'], 'indexer_configuration_id': tool_id, } # given self.storage.content_fossology_license_add([license_v1]) # conflict does nothing self.storage.content_fossology_license_add([license_v1]) # when actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) # then expected_license = { self.sha1_1: [{ 'licenses': ['Apache-2.0'], 'tool': tool, }] } self.assertEqual(actual_licenses, [expected_license]) # given license_v2 = license_v1.copy() license_v2.update({ 'licenses': ['BSD-2-Clause'], }) self.storage.content_fossology_license_add([license_v2]) actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) expected_license = { self.sha1_1: [{ 'licenses': ['Apache-2.0', 'BSD-2-Clause'], 'tool': tool }] } # the licenses from v2 were merged with those from v1. self.assertEqual(actual_licenses, [expected_license]) - @istest - def content_fossology_license_add__update_in_place_duplicate(self): + def test_content_fossology_license_add__update_in_place_duplicate(self): # given tool = self.tools['nomos'] tool_id = tool['id'] license_v1 = { 'id': self.sha1_1, 'licenses': ['CECILL'], 'indexer_configuration_id': tool_id, } # given self.storage.content_fossology_license_add([license_v1]) # conflict does nothing self.storage.content_fossology_license_add([license_v1]) # when actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) # then expected_license = { self.sha1_1: [{ 'licenses': ['CECILL'], 'tool': tool, }] } self.assertEqual(actual_licenses, [expected_license]) # given license_v2 = license_v1.copy() license_v2.update({ 'licenses': ['CECILL-2.0'] }) self.storage.content_fossology_license_add([license_v2], conflict_update=True) actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) # license did change as the v2 was used to overwrite v1 expected_license = { self.sha1_1: [{
'licenses': ['CECILL-2.0'], 'tool': tool, }] } self.assertEqual(actual_licenses, [expected_license]) - @istest - def content_metadata_missing(self): + def test_content_metadata_missing(self): # given tool_id = self.tools['swh-metadata-translator']['id'] metadata = [ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, } ] # when actual_missing = list(self.storage.content_metadata_missing(metadata)) # then self.assertEqual(list(actual_missing), [ self.sha1_2, self.sha1_1, ]) # given self.storage.content_metadata_add([{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'description': 'Simple package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'indexer_configuration_id': tool_id }]) # when actual_missing = list(self.storage.content_metadata_missing(metadata)) # then self.assertEqual(actual_missing, [self.sha1_1]) - @istest - def content_metadata_get(self): + def test_content_metadata_get(self): # given tool_id = self.tools['swh-metadata-translator']['id'] metadata1 = { 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'description': 'Simple package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'indexer_configuration_id': tool_id, } # when self.storage.content_metadata_add([metadata1]) # then actual_metadata = list(self.storage.content_metadata_get( [self.sha1_2, self.sha1_1])) expected_metadata = [{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'description': 'Simple package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'tool': self.tools['swh-metadata-translator'] }] self.assertEqual(actual_metadata, expected_metadata) - @istest - def 
content_metadata_add_drop_duplicate(self): + def test_content_metadata_add_drop_duplicate(self): # given tool_id = self.tools['swh-metadata-translator']['id'] metadata_v1 = { 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_metadata', 'version': '0.0.1' }, 'indexer_configuration_id': tool_id, } # given self.storage.content_metadata_add([metadata_v1]) # when actual_metadata = list(self.storage.content_metadata_get( [self.sha1_2])) expected_metadata_v1 = [{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_metadata', 'version': '0.0.1' }, 'tool': self.tools['swh-metadata-translator'] }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'translated_metadata': { 'other': {}, 'name': 'test_drop_duplicated_metadata', 'version': '0.0.1' }, }) self.storage.content_metadata_add([metadata_v2]) # then actual_metadata = list(self.storage.content_metadata_get( [self.sha1_2])) # metadata did not change as the v2 was dropped. 
self.assertEqual(actual_metadata, expected_metadata_v1) - @istest - def content_metadata_add_update_in_place_duplicate(self): + def test_content_metadata_add_update_in_place_duplicate(self): # given tool_id = self.tools['swh-metadata-translator']['id'] metadata_v1 = { 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_metadata', 'version': '0.0.1' }, 'indexer_configuration_id': tool_id, } # given self.storage.content_metadata_add([metadata_v1]) # when actual_metadata = list(self.storage.content_metadata_get( [self.sha1_2])) # then expected_metadata_v1 = [{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_metadata', 'version': '0.0.1' }, 'tool': self.tools['swh-metadata-translator'] }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'translated_metadata': { 'other': {}, 'name': 'test_update_duplicated_metadata', 'version': '0.0.1' }, }) self.storage.content_metadata_add([metadata_v2], conflict_update=True) actual_metadata = list(self.storage.content_metadata_get( [self.sha1_2]))
expected_metadata_v2 = [{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_update_duplicated_metadata', 'version': '0.0.1' }, 'tool': self.tools['swh-metadata-translator'] }] # metadata did change as the v2 was used to overwrite v1 self.assertEqual(actual_metadata, expected_metadata_v2) - @istest - def revision_metadata_missing(self): + def test_revision_metadata_missing(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata = [ { 'id': self.revision_id_1, 'indexer_configuration_id': tool_id, }, { 'id': self.revision_id_2, 'indexer_configuration_id': tool_id, } ] # when actual_missing = list(self.storage.revision_metadata_missing( metadata)) # then self.assertEqual(list(actual_missing), [ self.revision_id_1, self.revision_id_2, ]) # given self.storage.revision_metadata_add([{ 'id': self.revision_id_1, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, - 'type': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id }]) # when actual_missing = list(self.storage.revision_metadata_missing( metadata)) # then self.assertEqual(actual_missing, [self.revision_id_2]) - @istest - def revision_metadata_get(self): + def test_revision_metadata_get(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata_rev = { 'id': self.revision_id_2, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, - 'type': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id } # when self.storage.revision_metadata_add([metadata_rev]) 
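The `limit`/`last_sha1` pagination exercised earlier by the `content_ctags_search` tests can be sketched as follows (a hypothetical stand-in for the SQL query, with made-up ids): results are ordered by content id, and each page resumes strictly after the last id of the previous one.

```python
# Hypothetical sketch of limit/last_sha1 pagination over ctags rows.
def search(rows, expression, limit=None, last_sha1=None):
    hits = sorted((r for r in rows if r['name'] == expression),
                  key=lambda r: r['id'])
    if last_sha1 is not None:
        # resume strictly after the previous page's last id
        hits = [r for r in hits if r['id'] > last_sha1]
    return hits if limit is None else hits[:limit]

rows = [{'id': 'aaa', 'name': 'hello', 'kind': 'function'},
        {'id': 'bbb', 'name': 'hello', 'kind': 'variable'}]
page1 = search(rows, 'hello', limit=1)
assert [r['id'] for r in page1] == ['aaa']
page2 = search(rows, 'hello', limit=1, last_sha1='aaa')
assert [r['id'] for r in page2] == ['bbb']
```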
# then actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_2, self.revision_id_1])) expected_metadata = [{ 'id': self.revision_id_2, 'translated_metadata': metadata_rev['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] self.assertEqual(actual_metadata, expected_metadata) - @istest - def revision_metadata_add_drop_duplicate(self): + def test_revision_metadata_add_drop_duplicate(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata_v1 = { 'id': self.revision_id_1, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, - 'type': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id, } # given self.storage.revision_metadata_add([metadata_v1]) # when actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_1])) expected_metadata_v1 = [{ 'id': self.revision_id_1, 'translated_metadata': metadata_v1['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'translated_metadata': { 'name': 'test_metadata', 'author': 'MG', }, }) self.storage.revision_metadata_add([metadata_v2]) # then actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_1])) # metadata did not change as the v2 was dropped. 
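The fossology-license semantics tested above differ from the other indexes: a plain add merges new licenses into the existing set for a (content, tool) pair, while `conflict_update=True` replaces the whole set. A hedged sketch (in-memory dict, not the real backend):

```python
# Hypothetical sketch: merge on plain add, replace on conflict_update.
def license_add(store, entries, conflict_update=False):
    for e in entries:
        key = (e['id'], e['indexer_configuration_id'])
        if conflict_update:
            store[key] = set(e['licenses'])          # replace
        else:
            store.setdefault(key, set()).update(e['licenses'])  # merge

store = {}
license_add(store, [{'id': 's1', 'indexer_configuration_id': 1,
                     'licenses': ['Apache-2.0']}])
license_add(store, [{'id': 's1', 'indexer_configuration_id': 1,
                     'licenses': ['BSD-2-Clause']}])
assert store[('s1', 1)] == {'Apache-2.0', 'BSD-2-Clause'}
license_add(store, [{'id': 's1', 'indexer_configuration_id': 1,
                     'licenses': ['CECILL-2.0']}], conflict_update=True)
assert store[('s1', 1)] == {'CECILL-2.0'}
```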
self.assertEqual(actual_metadata, expected_metadata_v1) - @istest - def revision_metadata_add_update_in_place_duplicate(self): + def test_revision_metadata_add_update_in_place_duplicate(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata_v1 = { 'id': self.revision_id_2, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, - 'type': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id, } # given self.storage.revision_metadata_add([metadata_v1]) # when actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_2])) # then expected_metadata_v1 = [{ 'id': self.revision_id_2, 'translated_metadata': metadata_v1['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'translated_metadata': { 'name': 'test_update_duplicated_metadata', 'author': 'MG' }, }) self.storage.revision_metadata_add([metadata_v2], conflict_update=True) actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_2])) - # language did not change as the v2 was dropped. 
expected_metadata_v2 = [{ 'id': self.revision_id_2, 'translated_metadata': metadata_v2['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] # metadata did change as the v2 was used to overwrite v1 self.assertEqual(actual_metadata, expected_metadata_v2) - @istest - def indexer_configuration_add(self): + def test_origin_intrinsic_metadata_get(self): + # given + tool_id = self.tools['swh-metadata-detector']['id'] + + metadata = { + 'developmentStatus': None, + 'version': None, + 'operatingSystem': None, + 'description': None, + 'keywords': None, + 'issueTracker': None, + 'name': None, + 'author': None, + 'relatedLink': None, + 'url': None, + 'license': None, + 'maintainer': None, + 'email': None, + 'softwareRequirements': None, + 'identifier': None, + } + metadata_rev = { + 'id': self.revision_id_2, + 'translated_metadata': metadata, + 'indexer_configuration_id': tool_id, + } + metadata_origin = { + 'origin_id': self.origin_id_1, + 'metadata': metadata, + 'indexer_configuration_id': tool_id, + 'from_revision': self.revision_id_2, + } + + # when + self.storage.revision_metadata_add([metadata_rev]) + self.storage.origin_intrinsic_metadata_add([metadata_origin]) + + # then + actual_metadata = list(self.storage.origin_intrinsic_metadata_get( + [self.origin_id_1, 42])) + + expected_metadata = [{ + 'origin_id': self.origin_id_1, + 'metadata': metadata, + 'tool': self.tools['swh-metadata-detector'], + 'from_revision': self.revision_id_2, + }] + + self.assertEqual(actual_metadata, expected_metadata) + + def test_origin_intrinsic_metadata_add_drop_duplicate(self): + # given + tool_id = self.tools['swh-metadata-detector']['id'] + + metadata_v1 = { + 'developmentStatus': None, + 'version': None, + 'operatingSystem': None, + 'description': None, + 'keywords': None, + 'issueTracker': None, + 'name': None, + 'author': None, + 'relatedLink': None, + 'url': None, + 'license': None, + 'maintainer': None, + 'email': None, + 'softwareRequirements': None, + 
'identifier': None + } + metadata_rev_v1 = { + 'id': self.revision_id_1, + 'translated_metadata': metadata_v1.copy(), + 'indexer_configuration_id': tool_id, + } + metadata_origin_v1 = { + 'origin_id': self.origin_id_1, + 'metadata': metadata_v1.copy(), + 'indexer_configuration_id': tool_id, + 'from_revision': self.revision_id_1, + } + + # given + self.storage.revision_metadata_add([metadata_rev_v1]) + self.storage.origin_intrinsic_metadata_add([metadata_origin_v1]) + + # when + actual_metadata = list(self.storage.origin_intrinsic_metadata_get( + [self.origin_id_1, 42])) + + expected_metadata_v1 = [{ + 'origin_id': self.origin_id_1, + 'metadata': metadata_v1, + 'tool': self.tools['swh-metadata-detector'], + 'from_revision': self.revision_id_1, + }] + + self.assertEqual(actual_metadata, expected_metadata_v1) + + # given + metadata_v2 = metadata_v1.copy() + metadata_v2.update({ + 'name': 'test_metadata', + 'author': 'MG', + }) + metadata_rev_v2 = metadata_rev_v1.copy() + metadata_origin_v2 = metadata_origin_v1.copy() + metadata_rev_v2['translated_metadata'] = metadata_v2 + metadata_origin_v2['metadata'] = metadata_v2 + + self.storage.revision_metadata_add([metadata_rev_v2]) + self.storage.origin_intrinsic_metadata_add([metadata_origin_v2]) + + # then + actual_metadata = list(self.storage.origin_intrinsic_metadata_get( + [self.origin_id_1])) + + # metadata did not change as the v2 was dropped.
+ self.assertEqual(actual_metadata, expected_metadata_v1) + + def test_origin_intrinsic_metadata_add_update_in_place_duplicate(self): + # given + tool_id = self.tools['swh-metadata-detector']['id'] + + metadata_v1 = { + 'developmentStatus': None, + 'version': None, + 'operatingSystem': None, + 'description': None, + 'keywords': None, + 'issueTracker': None, + 'name': None, + 'author': None, + 'relatedLink': None, + 'url': None, + 'license': None, + 'maintainer': None, + 'email': None, + 'softwareRequirements': None, + 'identifier': None + } + metadata_rev_v1 = { + 'id': self.revision_id_2, + 'translated_metadata': metadata_v1, + 'indexer_configuration_id': tool_id, + } + metadata_origin_v1 = { + 'origin_id': self.origin_id_1, + 'metadata': metadata_v1.copy(), + 'indexer_configuration_id': tool_id, + 'from_revision': self.revision_id_2, + } + + # given + self.storage.revision_metadata_add([metadata_rev_v1]) + self.storage.origin_intrinsic_metadata_add([metadata_origin_v1]) + + # when + actual_metadata = list(self.storage.origin_intrinsic_metadata_get( + [self.origin_id_1])) + + # then + expected_metadata_v1 = [{ + 'origin_id': self.origin_id_1, + 'metadata': metadata_v1, + 'tool': self.tools['swh-metadata-detector'], + 'from_revision': self.revision_id_2, + }] + self.assertEqual(actual_metadata, expected_metadata_v1) + + # given + metadata_v2 = metadata_v1.copy() + metadata_v2.update({ + 'name': 'test_update_duplicated_metadata', + 'author': 'MG', + }) + metadata_rev_v2 = metadata_rev_v1.copy() + metadata_origin_v2 = metadata_origin_v1.copy() + metadata_rev_v2['translated_metadata'] = metadata_v2 + metadata_origin_v2['metadata'] = metadata_v2 + + self.storage.revision_metadata_add([metadata_rev_v2], + conflict_update=True) + self.storage.origin_intrinsic_metadata_add([metadata_origin_v2], + conflict_update=True) + + actual_metadata = list(self.storage.origin_intrinsic_metadata_get( + [self.origin_id_1])) + + expected_metadata_v2 = [{ + 'origin_id': self.origin_id_1, 
+ 'metadata': metadata_v2, + 'tool': self.tools['swh-metadata-detector'], + 'from_revision': self.revision_id_2, + }] + + # metadata did change as the v2 was used to overwrite v1 + self.assertEqual(actual_metadata, expected_metadata_v2) + + def test_indexer_configuration_add(self): tool = { 'tool_name': 'some-unknown-tool', 'tool_version': 'some-version', 'tool_configuration': {"debian-package": "some-package"}, } actual_tool = self.storage.indexer_configuration_get(tool) self.assertIsNone(actual_tool) # does not exist # add it actual_tools = list(self.storage.indexer_configuration_add([tool])) - self.assertEquals(len(actual_tools), 1) + self.assertEqual(len(actual_tools), 1) actual_tool = actual_tools[0] self.assertIsNotNone(actual_tool) # now it exists new_id = actual_tool.pop('id') - self.assertEquals(actual_tool, tool) + self.assertEqual(actual_tool, tool) actual_tools2 = list(self.storage.indexer_configuration_add([tool])) actual_tool2 = actual_tools2[0] self.assertIsNotNone(actual_tool2) # now it exists new_id2 = actual_tool2.pop('id') self.assertEqual(new_id, new_id2) self.assertEqual(actual_tool, actual_tool2) - @istest - def indexer_configuration_add_multiple(self): + def test_indexer_configuration_add_multiple(self): tool = { 'tool_name': 'some-unknown-tool', 'tool_version': 'some-version', 'tool_configuration': {"debian-package": "some-package"}, } actual_tools = list(self.storage.indexer_configuration_add([tool])) self.assertEqual(len(actual_tools), 1) new_tools = [tool, { 'tool_name': 'yet-another-tool', 'tool_version': 'version', 'tool_configuration': {}, }] actual_tools = list(self.storage.indexer_configuration_add(new_tools)) self.assertEqual(len(actual_tools), 2) # order not guaranteed, so we iterate over results to check for tool in actual_tools: _id = tool.pop('id') self.assertIsNotNone(_id) self.assertIn(tool, new_tools) - @istest - def indexer_configuration_get_missing(self): + def test_indexer_configuration_get_missing(self): tool = { 
'tool_name': 'unknown-tool', 'tool_version': '3.1.0rc2-31-ga2cbb8c', 'tool_configuration': {"command_line": "nomossa "}, } actual_tool = self.storage.indexer_configuration_get(tool) self.assertIsNone(actual_tool) - @istest - def indexer_configuration_get(self): + def test_indexer_configuration_get(self): tool = { 'tool_name': 'nomos', 'tool_version': '3.1.0rc2-31-ga2cbb8c', 'tool_configuration': {"command_line": "nomossa "}, } actual_tool = self.storage.indexer_configuration_get(tool) expected_tool = tool.copy() expected_tool['id'] = 1 self.assertEqual(expected_tool, actual_tool) - @istest - def indexer_configuration_metadata_get_missing_context(self): + def test_indexer_configuration_metadata_get_missing_context(self): tool = { 'tool_name': 'swh-metadata-translator', 'tool_version': '0.0.1', 'tool_configuration': {"context": "unknown-context"}, } actual_tool = self.storage.indexer_configuration_get(tool) self.assertIsNone(actual_tool) - @istest - def indexer_configuration_metadata_get(self): + def test_indexer_configuration_metadata_get(self): tool = { 'tool_name': 'swh-metadata-translator', 'tool_version': '0.0.1', - 'tool_configuration': {"type": "local", "context": "npm"}, + 'tool_configuration': {"type": "local", "context": "NpmMapping"}, } actual_tool = self.storage.indexer_configuration_get(tool) expected_tool = tool.copy() expected_tool['id'] = actual_tool['id'] self.assertEqual(expected_tool, actual_tool) class IndexerTestStorage(CommonTestStorage, unittest.TestCase): """Running the tests locally. For the client api tests (remote storage), see `class`:swh.indexer.storage.test_api_client:TestRemoteStorage class. 
""" pass diff --git a/swh/indexer/tests/test_language.py b/swh/indexer/tests/test_language.py index 048f309..0c50636 100644 --- a/swh/indexer/tests/test_language.py +++ b/swh/indexer/tests/test_language.py @@ -1,113 +1,109 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest import logging -from nose.tools import istest from swh.indexer import language from swh.indexer.language import ContentLanguageIndexer from swh.indexer.tests.test_utils import MockObjStorage class _MockIndexerStorage(): """Mock storage to simplify reading indexers' outputs. """ def content_language_add(self, languages, conflict_update=None): self.state = languages self.conflict_update = conflict_update def indexer_configuration_add(self, tools): return [{ 'id': 20, }] class TestLanguageIndexer(ContentLanguageIndexer): """Specific language whose configuration is enough to satisfy the indexing tests. 
""" def prepare(self): self.config = { - 'destination_queue': None, + 'destination_task': None, 'rescheduling_task': None, 'tools': { 'name': 'pygments', 'version': '2.0.1+dfsg-1.1+deb8u1', 'configuration': { 'type': 'library', 'debian-package': 'python3-pygments', 'max_content_size': 10240, }, } } self.idx_storage = _MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.task_destination = None + self.destination_task = None self.rescheduling_task = self.config['rescheduling_task'] self.tool_config = self.config['tools']['configuration'] self.max_content_size = self.tool_config['max_content_size'] self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] class Language(unittest.TestCase): """ Tests pygments tool for language detection """ def setUp(self): self.maxDiff = None - @istest def test_compute_language_none(self): # given self.content = "" self.declared_language = { 'lang': None } # when result = language.compute_language(self.content) # then self.assertEqual(self.declared_language, result) - @istest def test_index_content_language_python(self): # given # testing python sha1s = ['02fb2c89e14f7fab46701478c83779c7beb7b069'] lang_indexer = TestLanguageIndexer() # when lang_indexer.run(sha1s, policy_update='ignore-dups') results = lang_indexer.idx_storage.state expected_results = [{ 'id': '02fb2c89e14f7fab46701478c83779c7beb7b069', 'indexer_configuration_id': 20, 'lang': 'python' }] # then self.assertEqual(expected_results, results) - @istest def test_index_content_language_c(self): # given # testing c sha1s = ['103bc087db1d26afc3a0283f38663d081e9b01e6'] lang_indexer = TestLanguageIndexer() # when lang_indexer.run(sha1s, policy_update='ignore-dups') results = lang_indexer.idx_storage.state expected_results = [{ 'id': '103bc087db1d26afc3a0283f38663d081e9b01e6', 'indexer_configuration_id': 20, 'lang': 'c' }] # then self.assertEqual('c', results[0]['lang']) 
self.assertEqual(expected_results, results) diff --git a/swh/indexer/tests/test_metadata.py b/swh/indexer/tests/test_metadata.py index 2953bfc..56fffc3 100644 --- a/swh/indexer/tests/test_metadata.py +++ b/swh/indexer/tests/test_metadata.py @@ -1,305 +1,384 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest import logging -from nose.tools import istest -from swh.indexer.metadata_dictionary import compute_metadata +from swh.indexer.metadata_dictionary import CROSSWALK_TABLE, MAPPINGS from swh.indexer.metadata_detector import detect_metadata +from swh.indexer.metadata_detector import extract_minimal_metadata_dict from swh.indexer.metadata import ContentMetadataIndexer from swh.indexer.metadata import RevisionMetadataIndexer from swh.indexer.tests.test_utils import MockObjStorage, MockStorage from swh.indexer.tests.test_utils import MockIndexerStorage class TestContentMetadataIndexer(ContentMetadataIndexer): """Specific Metadata whose configuration is enough to satisfy the indexing tests. """ def prepare(self): self.config.update({ 'rescheduling_task': None, }) self.idx_storage = MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.task_destination = None + self.destination_task = None self.rescheduling_task = self.config['rescheduling_task'] self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] self.results = [] class TestRevisionMetadataIndexer(RevisionMetadataIndexer): """Specific indexer whose configuration is enough to satisfy the indexing tests. 
""" + + ContentMetadataIndexer = TestContentMetadataIndexer + def prepare(self): self.config = { 'rescheduling_task': None, 'storage': { 'cls': 'remote', 'args': { 'url': 'http://localhost:9999', } }, 'tools': { 'name': 'swh-metadata-detector', - 'version': '0.0.1', + 'version': '0.0.2', 'configuration': { 'type': 'local', - 'context': 'npm' + 'context': 'NpmMapping' } } } self.storage = MockStorage() self.idx_storage = MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.task_destination = None + self.destination_task = None self.rescheduling_task = self.config['rescheduling_task'] self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] self.results = [] class Metadata(unittest.TestCase): """ Tests metadata_mock_tool tool for Metadata detection """ def setUp(self): """ shows the entire diff in the results """ self.maxDiff = None self.content_tool = { 'name': 'swh-metadata-translator', - 'version': '0.0.1', + 'version': '0.0.2', 'configuration': { 'type': 'local', - 'context': 'npm' + 'context': 'NpmMapping' } } + MockIndexerStorage.added_data = [] + + def test_crosstable(self): + self.assertEqual(CROSSWALK_TABLE['NodeJS'], { + 'repository': 'codeRepository', + 'os': 'operatingSystem', + 'cpu': 'processorRequirements', + 'engines': 'processorRequirements', + 'dependencies': 'softwareRequirements', + 'bundleDependencies': 'softwareRequirements', + 'bundledDependencies': 'softwareRequirements', + 'peerDependencies': 'softwareRequirements', + 'author': 'creator', + 'author.email': 'email', + 'author.name': 'name', + 'contributor': 'contributor', + 'keywords': 'keywords', + 'license': 'license', + 'version': 'version', + 'description': 'description', + 'name': 'name', + 'devDependencies': 'softwareSuggestions', + 'optionalDependencies': 'softwareSuggestions', + 'bugs': 'issueTracker', + 'homepage': 'url' + }) - @istest def test_compute_metadata_none(self): """ testing content empty content 
is empty should return None """ # given content = b"" - context = "npm" # None if no metadata was found or an error occurred declared_metadata = None # when - result = compute_metadata(context, content) + result = MAPPINGS["NpmMapping"].translate(content) # then self.assertEqual(declared_metadata, result) - @istest def test_compute_metadata_npm(self): """ testing only computation of metadata with hard_mapping_npm """ # given content = b""" { "name": "test_metadata", - "version": "0.0.1", + "version": "0.0.2", "description": "Simple package.json test for indexer", "repository": { "type": "git", "url": "https://github.com/moranegg/metadata_test" } } """ declared_metadata = { 'name': 'test_metadata', - 'version': '0.0.1', + 'version': '0.0.2', 'description': 'Simple package.json test for indexer', 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'other': {} } # when - result = compute_metadata("npm", content) + result = MAPPINGS["NpmMapping"].translate(content) # then self.assertEqual(declared_metadata, result) - @istest + def test_extract_minimal_metadata_dict(self): + """ + Test the creation of a coherent minimal metadata set + """ + # given + metadata_list = [{ + 'name': 'test_1', + 'version': '0.0.2', + 'description': 'Simple package.json test for indexer', + 'codeRepository': { + 'type': 'git', + 'url': 'https://github.com/moranegg/metadata_test' + }, + 'other': {} + }, { + 'name': 'test_0_1', + 'version': '0.0.2', + 'description': 'Simple package.json test for indexer', + 'codeRepository': { + 'type': 'git', + 'url': 'https://github.com/moranegg/metadata_test' + }, + 'other': {} + }, { + 'name': 'test_metadata', + 'version': '0.0.2', + 'author': 'moranegg', + 'other': {} + }] + + # when + results = extract_minimal_metadata_dict(metadata_list) + + # then + expected_results = { + "developmentStatus": None, + "version": ['0.0.2'], + "operatingSystem": None, + "description": ['Simple package.json test for indexer'], + 
"keywords": None, + "issueTracker": None, + "name": ['test_1', 'test_0_1', 'test_metadata'], + "author": ['moranegg'], + "relatedLink": None, + "url": None, + "license": None, + "maintainer": None, + "email": None, + "softwareRequirements": None, + "identifier": None, + "codeRepository": [{ + 'type': 'git', + 'url': 'https://github.com/moranegg/metadata_test' + }] + } + self.assertEqual(expected_results, results) + def test_index_content_metadata_npm(self): """ testing NPM with package.json - one sha1 uses a file that can't be translated to metadata and should return None in the translated metadata """ # given sha1s = ['26a9f72a7c87cc9205725cfd879f514ff4f3d8d5', 'd4c647f0fc257591cc9ba1722484229780d1c607', '02fb2c89e14f7fab46701478c83779c7beb7b069'] # this metadata indexer computes only metadata for package.json # in npm context with a hard mapping metadata_indexer = TestContentMetadataIndexer( tool=self.content_tool, config={}) # when metadata_indexer.run(sha1s, policy_update='ignore-dups') - results = metadata_indexer.idx_storage.state + results = metadata_indexer.idx_storage.added_data - expected_results = [{ + expected_results = [('content_metadata', False, [{ 'indexer_configuration_id': 30, 'translated_metadata': { 'other': {}, 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'description': 'Simple package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'id': '26a9f72a7c87cc9205725cfd879f514ff4f3d8d5' }, { 'indexer_configuration_id': 30, 'translated_metadata': { 'softwareRequirements': { 'JSONStream': '~1.3.1', 'abbrev': '~1.1.0', 'ansi-regex': '~2.1.1', 'ansicolors': '~0.3.2', 'ansistyles': '~0.1.3' }, 'issueTracker': { 'url': 'https://github.com/npm/npm/issues' }, - 'author': + 'creator': 'Isaac Z. 
Schlueter (http://blog.izs.me)', 'codeRepository': { 'type': 'git', 'url': 'https://github.com/npm/npm' }, 'description': 'a package manager for JavaScript', 'softwareSuggestions': { 'tacks': '~1.2.6', 'tap': '~10.3.2' }, 'license': 'Artistic-2.0', 'version': '5.0.3', 'other': { 'preferGlobal': True, 'config': { 'publishtest': False } }, 'name': 'npm', 'keywords': [ 'install', 'modules', 'package manager', 'package.json' ], 'url': 'https://docs.npmjs.com/' }, 'id': 'd4c647f0fc257591cc9ba1722484229780d1c607' }, { 'indexer_configuration_id': 30, 'translated_metadata': None, 'id': '02fb2c89e14f7fab46701478c83779c7beb7b069' - }] + }])] - # The assertion bellow returns False sometimes because of nested lists + # The assertion below returns False sometimes because of nested lists self.assertEqual(expected_results, results) - @istest def test_detect_metadata_package_json(self): # given df = [{ 'sha1_git': b'abc', 'name': b'index.js', 'target': b'abc', 'length': 897, 'status': 'visible', 'type': 'file', 'perms': 33188, 'dir_id': b'dir_a', 'sha1': b'bcd' }, { 'sha1_git': b'aab', 'name': b'package.json', 'target': b'aab', 'length': 712, 'status': 'visible', 'type': 'file', 'perms': 33188, 'dir_id': b'dir_a', 'sha1': b'cde' }] # when results = detect_metadata(df) expected_results = { - 'npm': [ + 'NpmMapping': [ b'cde' ] } # then self.assertEqual(expected_results, results) - @istest def test_revision_metadata_indexer(self): metadata_indexer = TestRevisionMetadataIndexer() sha1_gits = [ b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', ] metadata_indexer.run(sha1_gits, 'update-dups') - results = metadata_indexer.idx_storage.state + results = metadata_indexer.idx_storage.added_data - expected_results = [{ + expected_results = [('revision_metadata', True, [{ 'id': b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', 'translated_metadata': { 'identifier': None, 'maintainer': None, 'url': [ 'https://github.com/librariesio/yarn-parser#readme' ], 'codeRepository': [{ 'type': 'git', 'url': 
'git+https://github.com/librariesio/yarn-parser.git' }], 'author': ['Andrew Nesbitt'], 'license': ['AGPL-3.0'], 'version': ['1.0.0'], 'description': [ 'Tiny web service for parsing yarn.lock files' ], 'relatedLink': None, 'developmentStatus': None, 'operatingSystem': None, 'issueTracker': [{ 'url': 'https://github.com/librariesio/yarn-parser/issues' }], 'softwareRequirements': [{ 'express': '^4.14.0', 'yarn': '^0.21.0', 'body-parser': '^1.15.2' }], 'name': ['yarn-parser'], 'keywords': [['yarn', 'parse', 'lock', 'dependencies']], - 'type': None, 'email': None }, 'indexer_configuration_id': 7 - }] + }])] # then self.assertEqual(expected_results, results) diff --git a/swh/indexer/tests/test_mimetype.py b/swh/indexer/tests/test_mimetype.py index 63f6044..2082815 100644 --- a/swh/indexer/tests/test_mimetype.py +++ b/swh/indexer/tests/test_mimetype.py @@ -1,158 +1,153 @@ -# Copyright (C) 2017 The Software Heritage developers +# Copyright (C) 2017-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest import logging -from nose.tools import istest from swh.indexer.mimetype import ContentMimetypeIndexer from swh.indexer.tests.test_utils import MockObjStorage class _MockIndexerStorage(): """Mock storage to simplify reading indexers' outputs. """ def content_mimetype_add(self, mimetypes, conflict_update=None): self.state = mimetypes self.conflict_update = conflict_update def indexer_configuration_add(self, tools): return [{ 'id': 10, }] class TestMimetypeIndexer(ContentMimetypeIndexer): """Specific mimetype whose configuration is enough to satisfy the indexing tests. 
""" def prepare(self): self.config = { - 'destination_queue': None, + 'destination_task': None, 'rescheduling_task': None, 'tools': { 'name': 'file', 'version': '1:5.30-1+deb9u1', 'configuration': { "type": "library", "debian-package": "python3-magic" }, }, } self.idx_storage = _MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.task_destination = None + self.destination_task = None self.rescheduling_task = self.config['rescheduling_task'] - self.destination_queue = self.config['destination_queue'] + self.destination_task = self.config['destination_task'] self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] class TestMimetypeIndexerUnknownToolStorage(TestMimetypeIndexer): """Specific mimetype whose configuration is not enough to satisfy the indexing tests. """ def prepare(self): super().prepare() self.tools = None class TestMimetypeIndexerWithErrors(unittest.TestCase): - @istest - def wrong_unknown_configuration_tool(self): + def test_wrong_unknown_configuration_tool(self): """Indexer with unknown configuration tool should fail the check""" with self.assertRaisesRegex(ValueError, 'Tools None is unknown'): TestMimetypeIndexerUnknownToolStorage() class TestMimetypeIndexerTest(unittest.TestCase): def setUp(self): self.indexer = TestMimetypeIndexer() - @istest def test_index_no_update(self): # given sha1s = [ '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', '688a5ef812c53907562fe379d4b3851e69c7cb15', ] # when self.indexer.run(sha1s, policy_update='ignore-dups') # then expected_results = [{ 'id': '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', 'indexer_configuration_id': 10, 'mimetype': b'text/plain', 'encoding': b'us-ascii', }, { 'id': '688a5ef812c53907562fe379d4b3851e69c7cb15', 'indexer_configuration_id': 10, 'mimetype': b'text/plain', 'encoding': b'us-ascii', }] self.assertFalse(self.indexer.idx_storage.conflict_update) - self.assertEquals(expected_results, self.indexer.idx_storage.state) + 
self.assertEqual(expected_results, self.indexer.idx_storage.state) - @istest def test_index_update(self): # given sha1s = [ '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', '688a5ef812c53907562fe379d4b3851e69c7cb15', 'da39a3ee5e6b4b0d3255bfef95601890afd80709', # empty content ] # when self.indexer.run(sha1s, policy_update='update-dups') # then expected_results = [{ 'id': '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', 'indexer_configuration_id': 10, 'mimetype': b'text/plain', 'encoding': b'us-ascii', }, { 'id': '688a5ef812c53907562fe379d4b3851e69c7cb15', 'indexer_configuration_id': 10, 'mimetype': b'text/plain', 'encoding': b'us-ascii', }, { 'id': 'da39a3ee5e6b4b0d3255bfef95601890afd80709', 'indexer_configuration_id': 10, 'mimetype': b'application/x-empty', 'encoding': b'binary', }] self.assertTrue(self.indexer.idx_storage.conflict_update) - self.assertEquals(expected_results, self.indexer.idx_storage.state) + self.assertEqual(expected_results, self.indexer.idx_storage.state) - @istest def test_index_one_unknown_sha1(self): # given sha1s = ['688a5ef812c53907562fe379d4b3851e69c7cb15', '799a5ef812c53907562fe379d4b3851e69c7cb15', # unknown '800a5ef812c53907562fe379d4b3851e69c7cb15'] # unknown # when self.indexer.run(sha1s, policy_update='update-dups') # then expected_results = [{ 'id': '688a5ef812c53907562fe379d4b3851e69c7cb15', 'indexer_configuration_id': 10, 'mimetype': b'text/plain', 'encoding': b'us-ascii', }] self.assertTrue(self.indexer.idx_storage.conflict_update) - self.assertEquals(expected_results, self.indexer.idx_storage.state) + self.assertEqual(expected_results, self.indexer.idx_storage.state) diff --git a/swh/indexer/tests/test_orchestrator.py b/swh/indexer/tests/test_orchestrator.py new file mode 100644 index 0000000..c9804e3 --- /dev/null +++ b/swh/indexer/tests/test_orchestrator.py @@ -0,0 +1,199 @@ +# Copyright (C) 2018 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public 
License version 3, or any later version +# See top-level LICENSE file for more information + +import unittest + +import celery + +from swh.indexer.orchestrator import BaseOrchestratorIndexer +from swh.indexer.indexer import BaseIndexer +from swh.indexer.tests.test_utils import MockIndexerStorage, MockStorage +from swh.scheduler.tests.celery_testing import CeleryTestFixture +from swh.indexer.tests import start_worker_thread + + +class BaseTestIndexer(BaseIndexer): + ADDITIONAL_CONFIG = { + 'tools': ('dict', { + 'name': 'foo', + 'version': 'bar', + 'configuration': {} + }), + } + + def prepare(self): + self.idx_storage = MockIndexerStorage() + self.storage = MockStorage() + + def check(self): + pass + + def filter(self, ids): + self.filtered.append(ids) + return ids + + def run(self, ids, policy_update): + return self.index(ids) + + def index(self, ids): + self.indexed.append(ids) + return [id_ + '_indexed_by_' + self.__class__.__name__ + for id_ in ids] + + def persist_index_computations(self, result, policy_update): + self.persisted = result + + +class Indexer1(BaseTestIndexer): + filtered = [] + indexed = [] + + def filter(self, ids): + return super().filter([id_ for id_ in ids if '1' in id_]) + + +class Indexer2(BaseTestIndexer): + filtered = [] + indexed = [] + + def filter(self, ids): + return super().filter([id_ for id_ in ids if '2' in id_]) + + +class Indexer3(BaseTestIndexer): + filtered = [] + indexed = [] + + def filter(self, ids): + return super().filter([id_ for id_ in ids if '3' in id_]) + + +@celery.task +def indexer1_task(*args, **kwargs): + return Indexer1().run(*args, **kwargs) + + +@celery.task +def indexer2_task(*args, **kwargs): + return Indexer2().run(*args, **kwargs) + + +@celery.task +def indexer3_task(*args, **kwargs): + return Indexer3().run(*args, **kwargs) + + +class TestOrchestrator12(BaseOrchestratorIndexer): + TASK_NAMES = { + 'indexer1': 'swh.indexer.tests.test_orchestrator.indexer1_task', + 'indexer2': 
'swh.indexer.tests.test_orchestrator.indexer2_task', + 'indexer3': 'swh.indexer.tests.test_orchestrator.indexer3_task', + } + + INDEXER_CLASSES = { + 'indexer1': 'swh.indexer.tests.test_orchestrator.Indexer1', + 'indexer2': 'swh.indexer.tests.test_orchestrator.Indexer2', + 'indexer3': 'swh.indexer.tests.test_orchestrator.Indexer3', + } + + def __init__(self): + super().__init__() + self.running_tasks = [] + + def prepare(self): + self.config = { + 'indexers': { + 'indexer1': { + 'batch_size': 2, + 'check_presence': True, + }, + 'indexer2': { + 'batch_size': 2, + 'check_presence': True, + }, + } + } + self.prepare_tasks() + + +class MockedTestOrchestrator12(TestOrchestrator12): + def _run_tasks(self, celery_tasks): + self.running_tasks.extend(celery_tasks) + + +class OrchestratorTest(CeleryTestFixture, unittest.TestCase): + def test_orchestrator_filter(self): + with start_worker_thread(): + o = TestOrchestrator12() + o.prepare() + promises = o.run(['id12', 'id2']) + results = [] + for promise in reversed(promises): + results.append(promise.get(timeout=10)) + self.assertCountEqual( + results, + [[['id12_indexed_by_Indexer1']], + [['id12_indexed_by_Indexer2', + 'id2_indexed_by_Indexer2']]]) + self.assertEqual(Indexer2.indexed, [['id12', 'id2']]) + self.assertEqual(Indexer1.indexed, [['id12']]) + + +class MockedOrchestratorTest(unittest.TestCase): + maxDiff = None + + def test_mocked_orchestrator_filter(self): + o = MockedTestOrchestrator12() + o.prepare() + o.run(['id12', 'id2']) + self.assertCountEqual(o.running_tasks, [ + {'args': (), + 'chord_size': None, + 'immutable': False, + 'kwargs': {'ids': ['id12'], + 'policy_update': 'ignore-dups'}, + 'options': {}, + 'subtask_type': None, + 'task': 'swh.indexer.tests.test_orchestrator.indexer1_task'}, + {'args': (), + 'chord_size': None, + 'immutable': False, + 'kwargs': {'ids': ['id12', 'id2'], + 'policy_update': 'ignore-dups'}, + 'options': {}, + 'subtask_type': None, + 'task': 
'swh.indexer.tests.test_orchestrator.indexer2_task'}, + ]) + + def test_mocked_orchestrator_batch(self): + o = MockedTestOrchestrator12() + o.prepare() + o.run(['id12', 'id2a', 'id2b', 'id2c']) + self.assertCountEqual(o.running_tasks, [ + {'args': (), + 'chord_size': None, + 'immutable': False, + 'kwargs': {'ids': ['id12'], + 'policy_update': 'ignore-dups'}, + 'options': {}, + 'subtask_type': None, + 'task': 'swh.indexer.tests.test_orchestrator.indexer1_task'}, + {'args': (), + 'chord_size': None, + 'immutable': False, + 'kwargs': {'ids': ['id12', 'id2a'], + 'policy_update': 'ignore-dups'}, + 'options': {}, + 'subtask_type': None, + 'task': 'swh.indexer.tests.test_orchestrator.indexer2_task'}, + {'args': (), + 'chord_size': None, + 'immutable': False, + 'kwargs': {'ids': ['id2b', 'id2c'], + 'policy_update': 'ignore-dups'}, + 'options': {}, + 'subtask_type': None, + 'task': 'swh.indexer.tests.test_orchestrator.indexer2_task'}, + ]) diff --git a/swh/indexer/tests/test_origin_head.py b/swh/indexer/tests/test_origin_head.py new file mode 100644 index 0000000..63d9ffd --- /dev/null +++ b/swh/indexer/tests/test_origin_head.py @@ -0,0 +1,91 @@ +# Copyright (C) 2017 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public License version 3, or any later version +# See top-level LICENSE file for more information + +import unittest +import logging + +from swh.indexer.origin_head import OriginHeadIndexer +from swh.indexer.tests.test_utils import MockIndexerStorage, MockStorage + + +class TestOriginHeadIndexer(OriginHeadIndexer): + """Specific indexer whose configuration is enough to satisfy the + indexing tests. 
+ """ + + revision_metadata_task = None + origin_intrinsic_metadata_task = None + + def prepare(self): + self.config = { + 'tools': { + 'name': 'origin-metadata', + 'version': '0.0.1', + 'configuration': {}, + }, + } + self.storage = MockStorage() + self.idx_storage = MockIndexerStorage() + self.log = logging.getLogger('swh.indexer') + self.objstorage = None + self.tools = self.register_tools(self.config['tools']) + self.tool = self.tools[0] + self.results = None + + def persist_index_computations(self, results, policy_update): + self.results = results + + +class OriginHead(unittest.TestCase): + def test_git(self): + indexer = TestOriginHeadIndexer() + indexer.run( + ['git+https://github.com/SoftwareHeritage/swh-storage'], + 'update-dups', parse_ids=True) + self.assertEqual(indexer.results, [{ + 'revision_id': b'8K\x12\x00d\x03\xcc\xe4]bS\xe3\x8f{' + b'\xd7}\xac\xefrm', + 'origin_id': 52189575}]) + + def test_ftp(self): + indexer = TestOriginHeadIndexer() + indexer.run( + ['ftp+rsync://ftp.gnu.org/gnu/3dldf'], + 'update-dups', parse_ids=True) + self.assertEqual(indexer.results, [{ + 'revision_id': b'\x8e\xa9\x8e/\xea}\x9feF\xf4\x9f\xfd\xee' + b'\xcc\x1a\xb4`\x8c\x8by', + 'origin_id': 4423668}]) + + def test_deposit(self): + indexer = TestOriginHeadIndexer() + indexer.run( + ['deposit+https://forge.softwareheritage.org/source/' + 'jesuisgpl/'], + 'update-dups', parse_ids=True) + self.assertEqual(indexer.results, [{ + 'revision_id': b'\xe7n\xa4\x9c\x9f\xfb\xb7\xf76\x11\x08{' + b'\xa6\xe9\x99\xb1\x9e]q\xeb', + 'origin_id': 77775770}]) + + def test_pypi(self): + indexer = TestOriginHeadIndexer() + indexer.run( + ['pypi+https://pypi.org/project/limnoria/'], + 'update-dups', parse_ids=True) + self.assertEqual(indexer.results, [{ + 'revision_id': b'\x83\xb9\xb6\xc7\x05\xb1%\xd0\xfem\xd8k' + b'A\x10\x9d\xc5\xfa2\xf8t', + 'origin_id': 85072327}]) + + def test_svn(self): + indexer = TestOriginHeadIndexer() + indexer.run( + ['svn+http://0-512-md.googlecode.com/svn/'], + 
'update-dups', parse_ids=True) + self.assertEqual(indexer.results, [{ + 'revision_id': b'\xe4?r\xe1,\x88\xab\xec\xe7\x9a\x87\xb8' + b'\xc9\xad#.\x1bw=\x18', + 'origin_id': 49908349}]) diff --git a/swh/indexer/tests/test_origin_metadata.py b/swh/indexer/tests/test_origin_metadata.py new file mode 100644 index 0000000..84e218b --- /dev/null +++ b/swh/indexer/tests/test_origin_metadata.py @@ -0,0 +1,126 @@ +# Copyright (C) 2018 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public License version 3, or any later version +# See top-level LICENSE file for more information + +import logging +import unittest +from celery import task + +from swh.indexer.metadata import OriginMetadataIndexer +from swh.indexer.tests.test_utils import MockObjStorage, MockStorage +from swh.indexer.tests.test_utils import MockIndexerStorage +from swh.indexer.tests.test_origin_head import TestOriginHeadIndexer +from swh.indexer.tests.test_metadata import TestRevisionMetadataIndexer + +from swh.scheduler.tests.celery_testing import CeleryTestFixture +from swh.indexer.tests import start_worker_thread + + +class TestOriginMetadataIndexer(OriginMetadataIndexer): + def prepare(self): + self.config = { + 'storage': { + 'cls': 'remote', + 'args': { + 'url': 'http://localhost:9999', + } + }, + 'tools': { + 'name': 'origin-metadata', + 'version': '0.0.1', + 'configuration': {} + } + } + self.storage = MockStorage() + self.idx_storage = MockIndexerStorage() + self.log = logging.getLogger('swh.indexer') + self.objstorage = MockObjStorage() + self.destination_task = None + self.tools = self.register_tools(self.config['tools']) + self.tool = self.tools[0] + self.results = [] + + +@task +def revision_metadata_test_task(*args, **kwargs): + indexer = TestRevisionMetadataIndexer() + indexer.run(*args, **kwargs) + return indexer.results + + +@task +def origin_intrinsic_metadata_test_task(*args, **kwargs): + indexer = 
TestOriginMetadataIndexer() + indexer.run(*args, **kwargs) + return indexer.results + + +class TestOriginHeadIndexer(TestOriginHeadIndexer): + revision_metadata_task = revision_metadata_test_task + origin_intrinsic_metadata_task = origin_intrinsic_metadata_test_task + + +class TestOriginMetadata(CeleryTestFixture, unittest.TestCase): + def setUp(self): + super().setUp() + self.maxDiff = None + MockIndexerStorage.added_data = [] + + def test_pipeline(self): + indexer = TestOriginHeadIndexer() + with start_worker_thread(): + promise = indexer.run( + ["git+https://github.com/librariesio/yarn-parser"], + policy_update='update-dups', + parse_ids=True) + promise.get() + + metadata = { + 'identifier': None, + 'maintainer': None, + 'url': [ + 'https://github.com/librariesio/yarn-parser#readme' + ], + 'codeRepository': [{ + 'type': 'git', + 'url': 'git+https://github.com/librariesio/yarn-parser.git' + }], + 'author': ['Andrew Nesbitt'], + 'license': ['AGPL-3.0'], + 'version': ['1.0.0'], + 'description': [ + 'Tiny web service for parsing yarn.lock files' + ], + 'relatedLink': None, + 'developmentStatus': None, + 'operatingSystem': None, + 'issueTracker': [{ + 'url': 'https://github.com/librariesio/yarn-parser/issues' + }], + 'softwareRequirements': [{ + 'express': '^4.14.0', + 'yarn': '^0.21.0', + 'body-parser': '^1.15.2' + }], + 'name': ['yarn-parser'], + 'keywords': [['yarn', 'parse', 'lock', 'dependencies']], + 'email': None + } + rev_metadata = { + 'id': b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', + 'translated_metadata': metadata, + 'indexer_configuration_id': 7, + } + origin_metadata = { + 'origin_id': 54974445, + 'from_revision': b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', + 'metadata': metadata, + 'indexer_configuration_id': 7, + } + expected_results = [ + ('origin_intrinsic_metadata', True, [origin_metadata]), + ('revision_metadata', True, [rev_metadata])] + + results = list(indexer.idx_storage.added_data) + self.assertCountEqual(expected_results, results) diff 
--git a/swh/indexer/tests/test_utils.py b/swh/indexer/tests/test_utils.py index 41c9068..0c519b9 100644 --- a/swh/indexer/tests/test_utils.py +++ b/swh/indexer/tests/test_utils.py @@ -1,261 +1,409 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from swh.objstorage.exc import ObjNotFoundError +ORIGINS = [ + { + 'id': 52189575, + 'lister': None, + 'project': None, + 'type': 'git', + 'url': 'https://github.com/SoftwareHeritage/swh-storage'}, + { + 'id': 4423668, + 'lister': None, + 'project': None, + 'type': 'ftp', + 'url': 'rsync://ftp.gnu.org/gnu/3dldf'}, + { + 'id': 77775770, + 'lister': None, + 'project': None, + 'type': 'deposit', + 'url': 'https://forge.softwareheritage.org/source/jesuisgpl/'}, + { + 'id': 85072327, + 'lister': None, + 'project': None, + 'type': 'pypi', + 'url': 'https://pypi.org/project/limnoria/'}, + { + 'id': 49908349, + 'lister': None, + 'project': None, + 'type': 'svn', + 'url': 'http://0-512-md.googlecode.com/svn/'}, + { + 'id': 54974445, + 'lister': None, + 'project': None, + 'type': 'git', + 'url': 'https://github.com/librariesio/yarn-parser'}, + ] + +SNAPSHOTS = { + 52189575: { + 'branches': { + b'refs/heads/add-revision-origin-cache': { + 'target': b'L[\xce\x1c\x88\x8eF\t\xf1"\x19\x1e\xfb\xc0' + b's\xe7/\xe9l\x1e', + 'target_type': 'revision'}, + b'HEAD': { + 'target': b'8K\x12\x00d\x03\xcc\xe4]bS\xe3\x8f{\xd7}' + b'\xac\xefrm', + 'target_type': 'revision'}, + b'refs/tags/v0.0.103': { + 'target': b'\xb6"Im{\xfdLb\xb0\x94N\xea\x96m\x13x\x88+' + b'\x0f\xdd', + 'target_type': 'release'}, + }}, + 4423668: { + 'branches': { + b'3DLDF-1.1.4.tar.gz': { + 'target': b'dJ\xfb\x1c\x91\xf4\x82B%]6\xa2\x90|\xd3\xfc' + b'"G\x99\x11', + 'target_type': 'revision'}, + b'3DLDF-2.0.2.tar.gz': { + 'target': b'\xb6\x0e\xe7\x9e9\xac\xaa\x19\x9e=' + 
b'\xd1\xc5\x00\\\xc6\xfc\xe0\xa6\xb4V', + 'target_type': 'revision'}, + b'3DLDF-2.0.3-examples.tar.gz': { + 'target': b'!H\x19\xc0\xee\x82-\x12F1\xbd\x97' + b'\xfe\xadZ\x80\x80\xc1\x83\xff', + 'target_type': 'revision'}, + b'3DLDF-2.0.3.tar.gz': { + 'target': b'\x8e\xa9\x8e/\xea}\x9feF\xf4\x9f\xfd\xee' + b'\xcc\x1a\xb4`\x8c\x8by', + 'target_type': 'revision'}, + b'3DLDF-2.0.tar.gz': { + 'target': b'F6*\xff(?\x19a\xef\xb6\xc2\x1fv$S\xe3G' + b'\xd3\xd1m', + 'target_type': 'revision'} + }}, + 77775770: { + 'branches': { + b'master': { + 'target': b'\xe7n\xa4\x9c\x9f\xfb\xb7\xf76\x11\x08{' + b'\xa6\xe9\x99\xb1\x9e]q\xeb', + 'target_type': 'revision'} + }, + 'id': b"h\xc0\xd2a\x04\xd4~'\x8d\xd6\xbe\x07\xeda\xfa\xfbV" + b"\x1d\r "}, + 85072327: { + 'branches': { + b'HEAD': { + 'target': b'releases/2018.09.09', + 'target_type': 'alias'}, + b'releases/2018.09.01': { + 'target': b'<\xee1(\xe8\x8d_\xc1\xc9\xa6rT\xf1\x1d' + b'\xbb\xdfF\xfdw\xcf', + 'target_type': 'revision'}, + b'releases/2018.09.09': { + 'target': b'\x83\xb9\xb6\xc7\x05\xb1%\xd0\xfem\xd8k' + b'A\x10\x9d\xc5\xfa2\xf8t', + 'target_type': 'revision'}}, + 'id': b'{\xda\x8e\x84\x7fX\xff\x92\x80^\x93V\x18\xa3\xfay' + b'\x12\x9e\xd6\xb3'}, + 49908349: { + 'branches': { + b'master': { + 'target': b'\xe4?r\xe1,\x88\xab\xec\xe7\x9a\x87\xb8' + b'\xc9\xad#.\x1bw=\x18', + 'target_type': 'revision'}}, + 'id': b'\xa1\xa2\x8c\n\xb3\x87\xa8\xf9\xe0a\x8c\xb7' + b'\x05\xea\xb8\x1f\xc4H\xf4s'}, + 54974445: { + 'branches': { + b'HEAD': { + 'target': b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', + 'target_type': 'revision'}}} + } + class MockObjStorage: """Mock an swh-objstorage objstorage with predefined contents. 
""" data = {} def __init__(self): self.data = { '01c9379dfc33803963d07c1ccc748d3fe4c96bb5': b'this is some text', '688a5ef812c53907562fe379d4b3851e69c7cb15': b'another text', '8986af901dd2043044ce8f0d8fc039153641cf17': b'yet another text', '02fb2c89e14f7fab46701478c83779c7beb7b069': b""" import unittest import logging - from nose.tools import istest from swh.indexer.mimetype import ContentMimetypeIndexer from swh.indexer.tests.test_utils import MockObjStorage class MockStorage(): def content_mimetype_add(self, mimetypes): self.state = mimetypes self.conflict_update = conflict_update def indexer_configuration_add(self, tools): return [{ 'id': 10, }] """, '103bc087db1d26afc3a0283f38663d081e9b01e6': b""" #ifndef __AVL__ #define __AVL__ typedef struct _avl_tree avl_tree; typedef struct _data_t { int content; } data_t; """, '93666f74f1cf635c8c8ac118879da6ec5623c410': b""" (should 'pygments (recognize 'lisp 'easily)) """, '26a9f72a7c87cc9205725cfd879f514ff4f3d8d5': b""" { "name": "test_metadata", "version": "0.0.1", "description": "Simple package.json test for indexer", "repository": { "type": "git", "url": "https://github.com/moranegg/metadata_test" } } """, 'd4c647f0fc257591cc9ba1722484229780d1c607': b""" { "version": "5.0.3", "name": "npm", "description": "a package manager for JavaScript", "keywords": [ "install", "modules", "package manager", "package.json" ], "preferGlobal": true, "config": { "publishtest": false }, "homepage": "https://docs.npmjs.com/", "author": "Isaac Z. 
Schlueter (http://blog.izs.me)", "repository": { "type": "git", "url": "https://github.com/npm/npm" }, "bugs": { "url": "https://github.com/npm/npm/issues" }, "dependencies": { "JSONStream": "~1.3.1", "abbrev": "~1.1.0", "ansi-regex": "~2.1.1", "ansicolors": "~0.3.2", "ansistyles": "~0.1.3" }, "devDependencies": { "tacks": "~1.2.6", "tap": "~10.3.2" }, "license": "Artistic-2.0" } """, 'a7ab314d8a11d2c93e3dcf528ca294e7b431c449': b""" """, 'da39a3ee5e6b4b0d3255bfef95601890afd80709': b'', } def __iter__(self): yield from self.data.keys() def __contains__(self, sha1): return self.data.get(sha1) is not None def get(self, sha1): raw_content = self.data.get(sha1) if raw_content is None: raise ObjNotFoundError(sha1) return raw_content class MockIndexerStorage(): """Mock an swh-indexer storage. """ + added_data = [] + def indexer_configuration_add(self, tools): tool = tools[0] if tool['tool_name'] == 'swh-metadata-translator': return [{ 'id': 30, 'tool_name': 'swh-metadata-translator', 'tool_version': '0.0.1', 'tool_configuration': { 'type': 'local', - 'context': 'npm' + 'context': 'NpmMapping' }, }] elif tool['tool_name'] == 'swh-metadata-detector': return [{ 'id': 7, 'tool_name': 'swh-metadata-detector', 'tool_version': '0.0.1', 'tool_configuration': { 'type': 'local', - 'context': 'npm' + 'context': 'NpmMapping' }, }] + elif tool['tool_name'] == 'origin-metadata': + return [{ + 'id': 8, + 'tool_name': 'origin-metadata', + 'tool_version': '0.0.1', + 'tool_configuration': {}, + }] + else: + assert False, 'Unknown tool {tool_name}'.format(**tool) def content_metadata_missing(self, sha1s): yield from [] def content_metadata_add(self, metadata, conflict_update=None): - self.state = metadata - self.conflict_update = conflict_update + self.added_data.append( + ('content_metadata', conflict_update, metadata)) def revision_metadata_add(self, metadata, conflict_update=None): - self.state = metadata - self.conflict_update = conflict_update + self.added_data.append( + 
+            ('revision_metadata', conflict_update, metadata))
+
+    def origin_intrinsic_metadata_add(self, metadata, conflict_update=None):
+        self.added_data.append(
+            ('origin_intrinsic_metadata', conflict_update, metadata))
 
     def content_metadata_get(self, sha1s):
         return [{
             'tool': {
                 'configuration': {
                     'type': 'local',
-                    'context': 'npm'
+                    'context': 'NpmMapping'
                 },
                 'version': '0.0.1',
                 'id': 6,
                 'name': 'swh-metadata-translator'
             },
             'id': b'cde',
             'translated_metadata': {
                 'issueTracker': {
                     'url': 'https://github.com/librariesio/yarn-parser/issues'
                 },
                 'version': '1.0.0',
                 'name': 'yarn-parser',
                 'author': 'Andrew Nesbitt',
                 'url': 'https://github.com/librariesio/yarn-parser#readme',
                 'processorRequirements': {'node': '7.5'},
                 'other': {
                     'scripts': {
                         'start': 'node index.js'
                     },
                     'main': 'index.js'
                 },
                 'license': 'AGPL-3.0',
                 'keywords': ['yarn', 'parse', 'lock', 'dependencies'],
                 'codeRepository': {
                     'type': 'git',
                     'url': 'git+https://github.com/librariesio/yarn-parser.git'
                 },
                 'description': 'Tiny web service for parsing yarn.lock files',
                 'softwareRequirements': {
                     'yarn': '^0.21.0',
                     'express': '^4.14.0',
                     'body-parser': '^1.15.2'}
             }
         }]
 
 
 class MockStorage():
     """Mock a real swh-storage storage to simplify reading indexers'
     outputs.
 
     """
+    def origin_get(self, id_):
+        for origin in ORIGINS:
+            for (k, v) in id_.items():
+                if origin[k] != v:
+                    break
+            else:
+                # This block is run iff we didn't break, i.e. if all supplied
+                # parts of the id are set to the expected value.
+                return origin
+        assert False, id_
+
+    def snapshot_get_latest(self, origin_id):
+        if origin_id in SNAPSHOTS:
+            return SNAPSHOTS[origin_id]
+        else:
+            assert False, origin_id
+
     def revision_get(self, revisions):
         return [{
             'id': b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f',
             'committer': {
                 'id': 26,
                 'name': b'Andrew Nesbitt',
                 'fullname': b'Andrew Nesbitt <andrewnez@gmail.com>',
                 'email': b'andrewnez@gmail.com'
             },
             'synthetic': False,
             'date': {
                 'negative_utc': False,
                 'timestamp': {
                     'seconds': 1487596456,
                     'microseconds': 0
                 },
                 'offset': 0
             },
             'directory': b'10'
         }]
 
     def directory_ls(self, directory, recursive=False, cur=None):
         # with directory: b'\x9d',
         return [{
             'sha1_git': b'abc',
             'name': b'index.js',
             'target': b'abc',
             'length': 897,
             'status': 'visible',
             'type': 'file',
             'perms': 33188,
             'dir_id': b'10',
             'sha1': b'bcd'
         }, {
             'sha1_git': b'aab',
             'name': b'package.json',
             'target': b'aab',
             'length': 712,
             'status': 'visible',
             'type': 'file',
             'perms': 33188,
             'dir_id': b'10',
             'sha1': b'cde'
         }, {
             'dir_id': b'10',
             'target': b'11',
             'type': 'dir',
             'length': None,
             'name': b'.github',
             'sha1': None,
             'perms': 16384,
             'sha1_git': None,
             'status': None,
             'sha256': None
         }]
diff --git a/tox.ini b/tox.ini
new file mode 100644
index 0000000..70265ee
--- /dev/null
+++ b/tox.ini
@@ -0,0 +1,17 @@
+[tox]
+envlist=flake8,py3
+
+[testenv:py3]
+deps =
+  .[testing]
+  pytest-cov
+  pifpaf
+commands =
+  pifpaf run postgresql -- pytest --cov=swh --cov-branch {posargs}
+
+[testenv:flake8]
+skip_install = true
+deps =
+  flake8
+commands =
+  {envpython} -m flake8
diff --git a/version.txt b/version.txt
index 43e891d..bee75cf 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-v0.0.52-0-gda92de4
\ No newline at end of file
+v0.0.53-0-gad017c8
\ No newline at end of file
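
The `origin_get` mock in the diff above relies on Python's `for`/`else` idiom: the `else` branch of a loop runs only when the loop completes without hitting `break`, which makes it a compact way to test "did every field match?". A minimal standalone sketch of that lookup pattern (the `records` sample data and the `lookup` name are hypothetical, not part of the diff):

```python
# Standalone sketch of the for/else lookup idiom used by
# MockStorage.origin_get: return the first record whose fields
# all match the supplied partial id, else fail loudly.
records = [  # hypothetical sample data
    {'id': 1, 'type': 'git', 'url': 'https://example.org/a'},
    {'id': 2, 'type': 'ftp', 'url': 'rsync://example.org/b'},
]


def lookup(partial):
    for record in records:
        for k, v in partial.items():
            if record[k] != v:
                break  # mismatch: try the next record
        else:
            # Reached only when the inner loop did not break,
            # i.e. every supplied field matched this record.
            return record
    raise KeyError(partial)


print(lookup({'type': 'ftp'})['id'])  # → 2
```

Any subset of a record's fields works as a query, which is why the mock accepts ids given either by `id` or by `(type, url)`.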