diff --git a/.gitignore b/.gitignore
index 6e3db8c..1d3b249 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,13 +1,14 @@
 *.pyc
 *.sw?
 *~
 .coverage
 .eggs/
 __pycache__
 *.egg-info/
 build/
 dist/
 version.txt
 /sql/createdb-stamp
 /sql/filldb-stamp
 .tox/
+.hypothesis/
\ No newline at end of file
diff --git a/Makefile.local b/Makefile.local
new file mode 100644
index 0000000..c163514
--- /dev/null
+++ b/Makefile.local
@@ -0,0 +1 @@
+TESTFLAGS=--hypothesis-profile=fast
diff --git a/PKG-INFO b/PKG-INFO
index 4d86fdc..52f059e 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,114 +1,69 @@
 Metadata-Version: 2.1
 Name: swh.indexer
-Version: 0.0.55
+Version: 0.0.56
 Summary: Software Heritage Content Indexer
 Home-page: https://forge.softwareheritage.org/diffusion/78/
 Author: Software Heritage developers
 Author-email: swh-devel@inria.fr
 License: UNKNOWN
 Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
 Project-URL: Funding, https://www.softwareheritage.org/donate
 Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer
 Description: swh-indexer
         ============
        
         Tools to compute multiple indexes on SWH's raw contents:
         - content:
           - mimetype
           - ctags
           - language
           - fossology-license
           - metadata
         - revision:
           - metadata
        
-        ## Context
+        An indexer is in charge of:
+        - looking up objects
+        - extracting information from those objects
+        - store those information in the swh-indexer db
        
-        SWH has currently stored around 5B contents. The table `content`
-        holds their checksums.
-
-        Those contents are physically stored in an object storage (using
-        disks) and replicated in another. Those object storages are not
-        destined for reading yet.
-
-        We are in the process to copy those contents over to azure's blob
-        storages. As such, we will use that opportunity to trigger the
-        computations on these contents once those have been copied over.
-
-
-        ## Workers
-
-        There are two types of workers:
-        - orchestrators (orchestrator, orchestrator-text)
-        - indexer (mimetype, language, ctags, fossology-license)
-
-        ### Orchestrator
-
-
-        The orchestrator is in charge of dispatching a batch of sha1 hashes to
-        different indexers.
-
-        Orchestration procedure:
-        - receive batch of sha1s
-        - split those batches into groups (according to setup)
-        - broadcast those group to indexers
-
-        There are two types of orchestrators:
-
-        - orchestrator (swh_indexer_orchestrator_content_all): Receives and
-          broadcast sha1 ids (of contents) to indexers (currently only the
-          mimetype indexer)
-
-        - orchestrator-text (swh_indexer_orchestrator_content_text): Receives
-          batch of sha1 ids (of textual contents) and broadcast those to
-          indexers (currently language, ctags, and fossology-license
-          indexers).
-
-
-        ### Indexers
-
-
-        An indexer is in charge of the content retrieval and indexation of the
-        extracted information in the swh-indexer db.
-
-        There are two types of indexers:
+        There are multiple indexers working on different object types:
          - content indexer: works with content sha1 hashes
          - revision indexer: works with revision sha1 hashes
+         - origin indexer: works with origin identifiers
        
         Indexation procedure:
         - receive batch of ids
         - retrieve the associated data depending on object type
         - compute for that object some index
         - store the result to swh's storage
-        - (and possibly do some broadcast itself)
        
         Current content indexers:
        
-        - mimetype (queue swh_indexer_content_mimetype): compute the mimetype,
-          filter out the textual contents and broadcast the list to the
-          orchestrator-text
+        - mimetype (queue swh_indexer_content_mimetype): detect the encoding
+          and mimetype
        
-        - language (queue swh_indexer_content_language): detect the programming language
+        - language (queue swh_indexer_content_language): detect the
+          programming language
        
-        - ctags (queue swh_indexer_content_ctags): try and compute tags
-          information
+        - ctags (queue swh_indexer_content_ctags): compute tags information
        
-        - fossology-license (queue swh_indexer_fossology_license): try and
-          compute the license
+        - fossology-license (queue swh_indexer_fossology_license): compute the
+          license
        
-        - metadata : translate file into translated_metadata dict
+        - metadata: translate file into translated_metadata dict
        
         Current revision indexers:
        
         - metadata: detects files containing metadata and retrieves translated_metadata
           in content_metadata table in storage or run content indexer to translate
           files.
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
 Classifier: Operating System :: OS Independent
 Classifier: Development Status :: 5 - Production/Stable
 Description-Content-Type: text/markdown
 Provides-Extra: testing
diff --git a/README.md b/README.md
index 562f028..f4f2481 100644
--- a/README.md
+++ b/README.md
@@ -1,94 +1,49 @@
 swh-indexer
 ============
 
 Tools to compute multiple indexes on SWH's raw contents:
 - content:
   - mimetype
   - ctags
   - language
   - fossology-license
   - metadata
 - revision:
   - metadata
 
-## Context
+An indexer is in charge of:
+- looking up objects
+- extracting information from those objects
+- store those information in the swh-indexer db
 
-SWH has currently stored around 5B contents. The table `content`
-holds their checksums.
-
-Those contents are physically stored in an object storage (using
-disks) and replicated in another. Those object storages are not
-destined for reading yet.
-
-We are in the process to copy those contents over to azure's blob
-storages. As such, we will use that opportunity to trigger the
-computations on these contents once those have been copied over.
-
-
-## Workers
-
-There are two types of workers:
-- orchestrators (orchestrator, orchestrator-text)
-- indexer (mimetype, language, ctags, fossology-license)
-
-### Orchestrator
-
-
-The orchestrator is in charge of dispatching a batch of sha1 hashes to
-different indexers.
-
-Orchestration procedure:
-- receive batch of sha1s
-- split those batches into groups (according to setup)
-- broadcast those group to indexers
-
-There are two types of orchestrators:
-
-- orchestrator (swh_indexer_orchestrator_content_all): Receives and
-  broadcast sha1 ids (of contents) to indexers (currently only the
-  mimetype indexer)
-
-- orchestrator-text (swh_indexer_orchestrator_content_text): Receives
-  batch of sha1 ids (of textual contents) and broadcast those to
-  indexers (currently language, ctags, and fossology-license
-  indexers).
-
-
-### Indexers
-
-
-An indexer is in charge of the content retrieval and indexation of the
-extracted information in the swh-indexer db.
-
-There are two types of indexers:
+There are multiple indexers working on different object types:
  - content indexer: works with content sha1 hashes
  - revision indexer: works with revision sha1 hashes
+ - origin indexer: works with origin identifiers
 
 Indexation procedure:
 - receive batch of ids
 - retrieve the associated data depending on object type
 - compute for that object some index
 - store the result to swh's storage
-- (and possibly do some broadcast itself)
 
 Current content indexers:
 
-- mimetype (queue swh_indexer_content_mimetype): compute the mimetype,
-  filter out the textual contents and broadcast the list to the
-  orchestrator-text
+- mimetype (queue swh_indexer_content_mimetype): detect the encoding
+  and mimetype
 
-- language (queue swh_indexer_content_language): detect the programming language
+- language (queue swh_indexer_content_language): detect the
+  programming language
 
-- ctags (queue swh_indexer_content_ctags): try and compute tags
-  information
+- ctags (queue swh_indexer_content_ctags): compute tags information
 
-- fossology-license (queue swh_indexer_fossology_license): try and
-  compute the license
+- fossology-license (queue swh_indexer_fossology_license): compute the
+  license
 
-- metadata : translate file into translated_metadata dict
+- metadata: translate file into translated_metadata dict
 
 Current revision indexers:
 
 - metadata: detects files containing metadata and retrieves translated_metadata
   in content_metadata table in storage or run content indexer to translate
   files.
diff --git a/conftest.py b/conftest.py
new file mode 100644
index 0000000..eb6de3d
--- /dev/null
+++ b/conftest.py
@@ -0,0 +1,6 @@
+from hypothesis import settings
+
+# define tests profile. Full documentation is at:
+# https://hypothesis.readthedocs.io/en/latest/settings.html#settings-profiles
+settings.register_profile("fast", max_examples=5, deadline=5000)
+settings.register_profile("slow", max_examples=20, deadline=5000)
diff --git a/debian/control b/debian/control
index 793f9b6..c1d0a84 100644
--- a/debian/control
+++ b/debian/control
@@ -1,47 +1,50 @@
 Source: swh-indexer
 Maintainer: Software Heritage developers
 Section: python
 Priority: optional
 Build-Depends: debhelper (>= 9),
                dh-python (>= 2),
                python3-all,
                python3-chardet (>= 2.3.0~),
                python3-click,
+               python3-hypothesis (>= 3.11.0~),
                python3-pytest,
                python3-pygments,
                python3-magic,
+               python3-pyld,
                python3-setuptools,
                python3-swh.core (>= 0.0.44~),
                python3-swh.model (>= 0.0.15~),
                python3-swh.objstorage (>= 0.0.13~),
                python3-swh.scheduler (>= 0.0.35~),
-               python3-swh.storage (>= 0.0.102~),
-               python3-vcversioner
+               python3-swh.storage (>= 0.0.110~),
+               python3-vcversioner,
+               python3-xmltodict
 Standards-Version: 3.9.6
 Homepage: https://forge.softwareheritage.org/diffusion/78/
 
 Package: python3-swh.indexer.storage
 Architecture: all
 Depends: python3-swh.core (>= 0.0.44~),
          python3-swh.model (>= 0.0.15~),
          python3-swh.objstorage (>= 0.0.13~),
          python3-swh.scheduler (>= 0.0.35~),
-         python3-swh.storage (>= 0.0.102~),
+         python3-swh.storage (>= 0.0.110~),
          ${misc:Depends},
          ${python3:Depends}
 Description: Software Heritage Content Indexer Storage
 
 Package: python3-swh.indexer
 Architecture: all
 Depends: python3-swh.scheduler (>= 0.0.14~),
          python3-swh.core (>= 0.0.44~),
          python3-swh.model (>= 0.0.15~),
          python3-swh.objstorage (>= 0.0.13~),
          python3-swh.scheduler (>= 0.0.35~),
-         python3-swh.storage (>= 0.0.102~),
+         python3-swh.storage (>= 0.0.110~),
          python3-swh.indexer.storage (= ${binary:Version}),
          universal-ctags (>= 0.8~),
          fossology-nomossa (>= 3.1~),
          ${misc:Depends},
          ${python3:Depends}
 Description: Software Heritage Content Indexer
diff --git a/docs/.gitignore b/docs/.gitignore
index f6b5c55..58a761e 100644
--- a/docs/.gitignore
+++ b/docs/.gitignore
@@ -1,4 +1,3 @@
 _build/
 apidoc/
 *-stamp
-README.md
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..f4f2481
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,49 @@
+swh-indexer
+============
+
+Tools to compute multiple indexes on SWH's raw contents:
+- content:
+  - mimetype
+  - ctags
+  - language
+  - fossology-license
+  - metadata
+- revision:
+  - metadata
+
+An indexer is in charge of:
+- looking up objects
+- extracting information from those objects
+- store those information in the swh-indexer db
+
+There are multiple indexers working on different object types:
+ - content indexer: works with content sha1 hashes
+ - revision indexer: works with revision sha1 hashes
+ - origin indexer: works with origin identifiers
+
+Indexation procedure:
+- receive batch of ids
+- retrieve the associated data depending on object type
+- compute for that object some index
+- store the result to swh's storage
+
+Current content indexers:
+
+- mimetype (queue swh_indexer_content_mimetype): detect the encoding
+  and mimetype
+
+- language (queue swh_indexer_content_language): detect the
+  programming language
+
+- ctags (queue swh_indexer_content_ctags): compute tags information
+
+- fossology-license (queue swh_indexer_fossology_license): compute the
+  license
+
+- metadata: translate file into translated_metadata dict
+
+Current revision indexers:
+
+- metadata: detects files containing metadata and retrieves translated_metadata
+  in content_metadata table in storage or run content indexer to translate
+  files.
diff --git a/docs/index.rst b/docs/index.rst
index 9c7dd9b..9d678ec 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -1,23 +1,23 @@
 .. _swh-indexer:
 
 Software Heritage - Indexer
 ===========================
 
 Tools and workers used to mine the content of the archive and extract derived
 information from archive source code artifacts.
 
 
 .. toctree::
    :maxdepth: 1
    :caption: Contents:
 
-   README
+   README.md
    dev-info.rst
 
 
 Indices and tables
 ==================
 
 * :ref:`genindex`
 * :ref:`modindex`
 * :ref:`search`
diff --git a/requirements-swh.txt b/requirements-swh.txt
index 376de6d..5cd838b 100644
--- a/requirements-swh.txt
+++ b/requirements-swh.txt
@@ -1,5 +1,5 @@
 swh.core >= 0.0.44
 swh.model >= 0.0.15
 swh.objstorage >= 0.0.13
 swh.scheduler >= 0.0.35
-swh.storage >= 0.0.102
+swh.storage >= 0.0.110
diff --git a/requirements-test.txt b/requirements-test.txt
index e079f8a..d3fc701 100644
--- a/requirements-test.txt
+++ b/requirements-test.txt
@@ -1 +1,2 @@
 pytest
+hypothesis (>= 3.11.0)
diff --git a/requirements.txt b/requirements.txt
index dbade70..3a7428c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,5 +1,7 @@
 vcversioner
 pygments
 click
 chardet
 file_magic
+pyld
+xmltodict
diff --git a/swh.indexer.egg-info/PKG-INFO b/swh.indexer.egg-info/PKG-INFO
index 4d86fdc..52f059e 100644
--- a/swh.indexer.egg-info/PKG-INFO
+++ b/swh.indexer.egg-info/PKG-INFO
@@ -1,114 +1,69 @@
 Metadata-Version: 2.1
 Name: swh.indexer
-Version: 0.0.55
+Version: 0.0.56
 Summary: Software Heritage Content Indexer
 Home-page: https://forge.softwareheritage.org/diffusion/78/
 Author: Software Heritage developers
 Author-email: swh-devel@inria.fr
 License: UNKNOWN
 Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
 Project-URL: Funding, https://www.softwareheritage.org/donate
 Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer
 Description: swh-indexer
         ============
        
         Tools to compute multiple indexes on SWH's raw contents:
         - content:
           - mimetype
           - ctags
           - language
           - fossology-license
           - metadata
         - revision:
           - metadata
        
-        ## Context
+        An indexer is in charge of:
+        - looking up objects
+        - extracting information from those objects
+        - store those information in the swh-indexer db
        
-        SWH has currently stored around 5B contents. The table `content`
-        holds their checksums.
-
-        Those contents are physically stored in an object storage (using
-        disks) and replicated in another. Those object storages are not
-        destined for reading yet.
-
-        We are in the process to copy those contents over to azure's blob
-        storages. As such, we will use that opportunity to trigger the
-        computations on these contents once those have been copied over.
-
-
-        ## Workers
-
-        There are two types of workers:
-        - orchestrators (orchestrator, orchestrator-text)
-        - indexer (mimetype, language, ctags, fossology-license)
-
-        ### Orchestrator
-
-
-        The orchestrator is in charge of dispatching a batch of sha1 hashes to
-        different indexers.
-
-        Orchestration procedure:
-        - receive batch of sha1s
-        - split those batches into groups (according to setup)
-        - broadcast those group to indexers
-
-        There are two types of orchestrators:
-
-        - orchestrator (swh_indexer_orchestrator_content_all): Receives and
-          broadcast sha1 ids (of contents) to indexers (currently only the
-          mimetype indexer)
-
-        - orchestrator-text (swh_indexer_orchestrator_content_text): Receives
-          batch of sha1 ids (of textual contents) and broadcast those to
-          indexers (currently language, ctags, and fossology-license
-          indexers).
-
-
-        ### Indexers
-
-
-        An indexer is in charge of the content retrieval and indexation of the
-        extracted information in the swh-indexer db.
-
-        There are two types of indexers:
+        There are multiple indexers working on different object types:
          - content indexer: works with content sha1 hashes
          - revision indexer: works with revision sha1 hashes
+         - origin indexer: works with origin identifiers
        
         Indexation procedure:
         - receive batch of ids
         - retrieve the associated data depending on object type
         - compute for that object some index
         - store the result to swh's storage
-        - (and possibly do some broadcast itself)
        
         Current content indexers:
        
-        - mimetype (queue swh_indexer_content_mimetype): compute the mimetype,
-          filter out the textual contents and broadcast the list to the
-          orchestrator-text
+        - mimetype (queue swh_indexer_content_mimetype): detect the encoding
+          and mimetype
        
-        - language (queue swh_indexer_content_language): detect the programming language
+        - language (queue swh_indexer_content_language): detect the
+          programming language
        
-        - ctags (queue swh_indexer_content_ctags): try and compute tags
-          information
+        - ctags (queue swh_indexer_content_ctags): compute tags information
        
-        - fossology-license (queue swh_indexer_fossology_license): try and
-          compute the license
+        - fossology-license (queue swh_indexer_fossology_license): compute the
+          license
        
-        - metadata : translate file into translated_metadata dict
+        - metadata: translate file into translated_metadata dict
        
         Current revision indexers:
        
         - metadata: detects files containing metadata and retrieves translated_metadata
           in content_metadata table in storage or run content indexer to translate
           files.
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
 Classifier: Operating System :: OS Independent
 Classifier: Development Status :: 5 - Production/Stable
 Description-Content-Type: text/markdown
 Provides-Extra: testing
diff --git a/swh.indexer.egg-info/SOURCES.txt b/swh.indexer.egg-info/SOURCES.txt
index 9f7dfbf..2840f44 100644
--- a/swh.indexer.egg-info/SOURCES.txt
+++ b/swh.indexer.egg-info/SOURCES.txt
@@ -1,89 +1,94 @@
 .gitignore
 AUTHORS
 CONTRIBUTORS
 LICENSE
 MANIFEST.in
 Makefile
+Makefile.local
 README.md
 codemeta.json
+conftest.py
 pytest.ini
 requirements-swh.txt
 requirements-test.txt
 requirements.txt
 setup.py
 tox.ini
 version.txt
 debian/changelog
 debian/compat
 debian/control
 debian/copyright
 debian/rules
 debian/source/format
 docs/.gitignore
 docs/Makefile
+docs/README.md
 docs/conf.py
 docs/dev-info.rst
 docs/index.rst
 docs/_static/.placeholder
 docs/_templates/.placeholder
 sql/createdb-stamp
 sql/filldb-stamp
 sql/bin/db-upgrade
 sql/bin/dot_add_content
 sql/doc/json
 sql/doc/json/.gitignore
 sql/doc/json/Makefile
 sql/doc/json/indexer_configuration.tool_configuration.schema.json
 sql/doc/json/revision_metadata.translated_metadata.json
 sql/json/.gitignore
 sql/json/Makefile
 sql/json/indexer_configuration.tool_configuration.schema.json
 sql/json/revision_metadata.translated_metadata.json
 sql/upgrades/115.sql
 sql/upgrades/116.sql
 swh/__init__.py
 swh.indexer.egg-info/PKG-INFO
 swh.indexer.egg-info/SOURCES.txt
 swh.indexer.egg-info/dependency_links.txt
 swh.indexer.egg-info/requires.txt
 swh.indexer.egg-info/top_level.txt
 swh/indexer/__init__.py
+swh/indexer/codemeta.py
 swh/indexer/ctags.py
 swh/indexer/fossology_license.py
 swh/indexer/indexer.py
 swh/indexer/language.py
 swh/indexer/metadata.py
 swh/indexer/metadata_detector.py
 swh/indexer/metadata_dictionary.py
 swh/indexer/mimetype.py
-swh/indexer/orchestrator.py
 swh/indexer/origin_head.py
-swh/indexer/producer.py
 swh/indexer/rehash.py
 swh/indexer/tasks.py
+swh/indexer/data/codemeta/CITATION
 swh/indexer/data/codemeta/LICENSE
+swh/indexer/data/codemeta/codemeta.jsonld
 swh/indexer/data/codemeta/crosswalk.csv
 swh/indexer/sql/10-swh-init.sql
 swh/indexer/sql/20-swh-enums.sql
 swh/indexer/sql/30-swh-schema.sql
 swh/indexer/sql/40-swh-func.sql
 swh/indexer/sql/50-swh-data.sql
 swh/indexer/sql/60-swh-indexes.sql
 swh/indexer/storage/__init__.py
 swh/indexer/storage/converters.py
 swh/indexer/storage/db.py
 swh/indexer/storage/api/__init__.py
 swh/indexer/storage/api/client.py
 swh/indexer/storage/api/server.py
 swh/indexer/tests/__init__.py
+swh/indexer/tests/test_ctags.py
+swh/indexer/tests/test_fossology_license.py
 swh/indexer/tests/test_language.py
 swh/indexer/tests/test_metadata.py
 swh/indexer/tests/test_mimetype.py
-swh/indexer/tests/test_orchestrator.py
 swh/indexer/tests/test_origin_head.py
 swh/indexer/tests/test_origin_metadata.py
 swh/indexer/tests/test_utils.py
 swh/indexer/tests/storage/__init__.py
 swh/indexer/tests/storage/test_api_client.py
 swh/indexer/tests/storage/test_converters.py
 swh/indexer/tests/storage/test_storage.py
\ No newline at end of file
diff --git a/swh.indexer.egg-info/requires.txt b/swh.indexer.egg-info/requires.txt
index 6fd8fdd..f96ef7b 100644
--- a/swh.indexer.egg-info/requires.txt
+++ b/swh.indexer.egg-info/requires.txt
@@ -1,13 +1,16 @@
 chardet
 click
 file_magic
 pygments
+pyld
 swh.core>=0.0.44
 swh.model>=0.0.15
 swh.objstorage>=0.0.13
 swh.scheduler>=0.0.35
-swh.storage>=0.0.102
+swh.storage>=0.0.110
 vcversioner
+xmltodict
 
 [testing]
+hypothesis>=3.11.0
 pytest
diff --git a/swh/indexer/__init__.py b/swh/indexer/__init__.py
index 8020958..7ad5ba4 100644
--- a/swh/indexer/__init__.py
+++ b/swh/indexer/__init__.py
@@ -1,31 +1,4 @@
 # Copyright (C) 2016-2018 The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
-
-
-INDEXER_CLASSES = {
-    'mimetype': 'swh.indexer.mimetype.ContentMimetypeIndexer',
-    'language': 'swh.indexer.language.ContentLanguageIndexer',
-    'ctags': 'swh.indexer.ctags.CtagsIndexer',
-    'fossology_license':
-        'swh.indexer.fossology_license.ContentFossologyLicenseIndexer',
-}
-
-
-TASK_NAMES = {
-    'orchestrator_all': 'swh.indexer.tasks.OrchestratorAllContents',
-    'orchestrator_text': 'swh.indexer.tasks.OrchestratorTextContents',
-    'mimetype': 'swh.indexer.tasks.ContentMimetype',
-    'language': 'swh.indexer.tasks.ContentLanguage',
-    'ctags': 'swh.indexer.tasks.Ctags',
-    'fossology_license': 'swh.indexer.tasks.ContentFossologyLicense',
-    'rehash': 'swh.indexer.tasks.RecomputeChecksums',
-    'revision_metadata': 'swh.indexer.tasks.RevisionMetadata',
-    'origin_intrinsic_metadata': 'swh.indexer.tasks.OriginMetadata',
-}
-
-
-__all__ = [
-    'INDEXER_CLASSES', 'TASK_NAMES',
-]
diff --git a/swh/indexer/codemeta.py b/swh/indexer/codemeta.py
new file mode 100644
index 0000000..4548029
--- /dev/null
+++ b/swh/indexer/codemeta.py
@@ -0,0 +1,120 @@
+# Copyright (C) 2018 The Software Heritage developers
+# See the AUTHORS file at the top-level directory of this distribution
+# License: GNU General Public License version 3, or any later version
+# See top-level LICENSE file for more information
+
+import csv
+import json
+import os.path
+
+import swh.indexer
+from pyld import jsonld
+
+_DATA_DIR = os.path.join(os.path.dirname(swh.indexer.__file__), 'data')
+
+CROSSWALK_TABLE_PATH = os.path.join(_DATA_DIR, 'codemeta', 'crosswalk.csv')
+
+CODEMETA_CONTEXT_PATH = os.path.join(_DATA_DIR, 'codemeta', 'codemeta.jsonld')
+
+
+with open(CODEMETA_CONTEXT_PATH) as fd:
+    CODEMETA_CONTEXT = json.load(fd)
+
+CODEMETA_CONTEXT_URL = 'https://doi.org/10.5063/schema/codemeta-2.0'
+CODEMETA_URI = 'https://codemeta.github.io/terms/'
+SCHEMA_URI = 'http://schema.org/'
+
+
+PROPERTY_BLACKLIST = {
+    # CodeMeta properties that we cannot properly represent.
+    SCHEMA_URI + 'softwareRequirements',
+    CODEMETA_URI + 'softwareSuggestions',
+
+    # Duplicate of 'author'
+    SCHEMA_URI + 'creator',
+    }
+
+
+def make_absolute_uri(local_name):
+    definition = CODEMETA_CONTEXT['@context'][local_name]
+    if isinstance(definition, str):
+        return definition
+    elif isinstance(definition, dict):
+        prefixed_name = definition['@id']
+        (prefix, local_name) = prefixed_name.split(':')
+        if prefix == 'schema':
+            canonical_name = SCHEMA_URI + local_name
+        elif prefix == 'codemeta':
+            canonical_name = CODEMETA_URI + local_name
+        else:
+            assert False, prefix
+        return canonical_name
+    else:
+        assert False, definition
+
+
+def _read_crosstable(fd):
+    reader = csv.reader(fd)
+    try:
+        header = next(reader)
+    except StopIteration:
+        raise ValueError('empty file')
+
+    data_sources = set(header) - {'Parent Type', 'Property',
+                                  'Type', 'Description'}
+    assert 'codemeta-V1' in data_sources
+
+    codemeta_translation = {data_source: {} for data_source in data_sources}
+
+    for line in reader:  # For each canonical name
+        local_name = dict(zip(header, line))['Property']
+        if not local_name:
+            continue
+        canonical_name = make_absolute_uri(local_name)
+        if canonical_name in PROPERTY_BLACKLIST:
+            continue
+        for (col, value) in zip(header, line):  # For each cell in the row
+            if col in data_sources:
+                # If that's not the parentType/property/type/description
+                for local_name in value.split('/'):
+                    # For each of the data source's properties that maps
+                    # to this canonical name
+                    if local_name.strip():
+                        codemeta_translation[col][local_name.strip()] = \
+                            canonical_name
+
+    return codemeta_translation
+
+
+with open(CROSSWALK_TABLE_PATH) as fd:
+    CROSSWALK_TABLE = _read_crosstable(fd)
+
+
+def _document_loader(url):
+    """Document loader for pyld.
+
+    Reads the local codemeta.jsonld file instead of fetching it
+    from the Internet every single time."""
+    if url == CODEMETA_CONTEXT_URL:
+        return {
+            'contextUrl': None,
+            'documentUrl': url,
+            'document': CODEMETA_CONTEXT,
+        }
+    elif url == CODEMETA_URI:
+        raise Exception('{} is CodeMeta\'s URI, use {} as context url'.format(
+            CODEMETA_URI, CODEMETA_CONTEXT_URL))
+    else:
+        raise Exception(url)
+
+
+def compact(doc):
+    """Same as `pyld.jsonld.compact`, but in the context of CodeMeta."""
+    return jsonld.compact(doc, CODEMETA_CONTEXT_URL,
+                          options={'documentLoader': _document_loader})
+
+
+def expand(doc):
+    """Same as `pyld.jsonld.expand`, but in the context of CodeMeta."""
+    return jsonld.expand(doc,
+                         options={'documentLoader': _document_loader})
diff --git a/swh/indexer/ctags.py b/swh/indexer/ctags.py
index dde3740..b6f4bb7 100644
--- a/swh/indexer/ctags.py
+++ b/swh/indexer/ctags.py
@@ -1,161 +1,167 @@
 # Copyright (C) 2015-2017 The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import click
 import subprocess
 import json
 
 from swh.model import hashutil
 
 from .language import compute_language
 from .indexer import ContentIndexer, DiskIndexer
 
 
 # Options used to compute tags
 __FLAGS = [
     '--fields=+lnz',  # +l: language
                       # +n: line number of tag definition
                       # +z: include the symbol's kind (function, variable, ...)
     '--sort=no',  # sort output on tag name
     '--links=no',  # do not follow symlinks
     '--output-format=json',  # outputs in json
 ]
 
 
 def run_ctags(path, lang=None, ctags_command='ctags'):
     """Run ctags on file path with optional language.
 
     Args:
         path: path to the file
         lang: language for that path (optional)
 
     Returns:
         ctags' output
 
     """
     optional = []
     if lang:
         optional = ['--language-force=%s' % lang]
 
     cmd = [ctags_command] + __FLAGS + optional + [path]
     output = subprocess.check_output(cmd, universal_newlines=True)
 
     for symbol in output.split('\n'):
         if not symbol:
             continue
         js_symbol = json.loads(symbol)
         yield {
             'name': js_symbol['name'],
             'kind': js_symbol['kind'],
             'line': js_symbol['line'],
             'lang': js_symbol['language'],
         }
 
 
 class CtagsIndexer(ContentIndexer, DiskIndexer):
     CONFIG_BASE_FILENAME = 'indexer/ctags'
 
     ADDITIONAL_CONFIG = {
         'workdir': ('str', '/tmp/swh/indexer.ctags'),
         'tools': ('dict', {
             'name': 'universal-ctags',
             'version': '~git7859817b',
             'configuration': {
                 'command_line': '''ctags --fields=+lnz --sort=no --links=no '''
                                 '''--output-format=json '''
             },
         }),
         'languages': ('dict', {
            'ada': 'Ada',
            'adl': None,
            'agda': None,
            # ...
         })
     }
 
     def prepare(self):
         super().prepare()
         self.working_directory = self.config['workdir']
         self.language_map = self.config['languages']
         self.tool = self.tools[0]
 
     def filter(self, ids):
         """Filter out known sha1s and return only missing ones.
 
         """
         yield from self.idx_storage.content_ctags_missing((
             {
                 'id': sha1,
                 'indexer_configuration_id': self.tool['id'],
             } for sha1 in ids
         ))
 
+    def compute_ctags(self, path, lang):
+        """Compute ctags on file at path with language lang.
+
+        """
+        return run_ctags(path, lang=lang)
+
     def index(self, id, data):
         """Index sha1s' content and store result.
 
         Args:
             id (bytes): content's identifier
             data (bytes): raw content in bytes
 
         Returns:
             A dict, representing a content_mimetype, with keys:
             - id (bytes): content's identifier (sha1)
             - ctags ([dict]): ctags list of symbols
 
         """
         lang = compute_language(data, log=self.log)['lang']
 
         if not lang:
             return None
 
         ctags_lang = self.language_map.get(lang)
 
         if not ctags_lang:
             return None
 
         ctags = {
             'id': id,
         }
 
         filename = hashutil.hash_to_hex(id)
         content_path = self.write_to_temp(
             filename=filename,
             data=data)
 
         result = run_ctags(content_path, lang=ctags_lang)
         ctags.update({
             'ctags': list(result),
             'indexer_configuration_id': self.tool['id'],
         })
 
         self.cleanup(content_path)
 
         return ctags
 
     def persist_index_computations(self, results, policy_update):
         """Persist the results in storage.
 
         Args:
             results ([dict]): list of content_mimetype, dict with the
             following keys:
             - id (bytes): content's identifier (sha1)
             - ctags ([dict]): ctags list of symbols
             policy_update ([str]): either 'update-dups' or 'ignore-dups' to
             respectively update duplicates or ignore them
 
         """
         self.idx_storage.content_ctags_add(
             results, conflict_update=(policy_update == 'update-dups'))
 
 
 @click.command()
 @click.option('--path', help="Path to execute index on")
 def main(path):
     r = list(run_ctags(path))
     print(r)
 
 
 if __name__ == '__main__':
     main()
diff --git a/swh/indexer/data/codemeta/CITATION b/swh/indexer/data/codemeta/CITATION
new file mode 100644
index 0000000..9f1a546
--- /dev/null
+++ b/swh/indexer/data/codemeta/CITATION
@@ -0,0 +1,2 @@
+Matthew B. Jones, Carl Boettiger, Abby Cabunoc Mayes, Arfon Smith, Peter Slaughter, Kyle Niemeyer, Yolanda Gil, Martin Fenner, Krzysztof Nowak, Mark Hahnel, Luke Coy, Alice Allen, Mercè Crosas, Ashley Sands, Neil Chue Hong, Patricia Cruse, Daniel S. Katz, Carole Goble. 2017. CodeMeta: an exchange schema for software metadata. Version 2.0. KNB Data Repository. doi:10.5063/schema/codemeta-2.0
+swh:1:dir:39c509fd2002f9e531fb4b3a321ceb5e6994e54a;origin=https://github.com/codemeta/codemeta
diff --git a/swh/indexer/data/codemeta/codemeta.jsonld b/swh/indexer/data/codemeta/codemeta.jsonld
new file mode 100644
index 0000000..ecba88b
--- /dev/null
+++ b/swh/indexer/data/codemeta/codemeta.jsonld
@@ -0,0 +1,80 @@
+{
+    "@context": {
+        "type": "@type",
+        "id": "@id",
+        "schema":"http://schema.org/",
+        "codemeta": "https://codemeta.github.io/terms/",
+        "Organization": {"@id": "schema:Organization"},
+        "Person": {"@id": "schema:Person"},
+        "SoftwareSourceCode": {"@id": "schema:SoftwareSourceCode"},
+        "SoftwareApplication": {"@id": "schema:SoftwareApplication"},
+        "Text": {"@id": "schema:Text"},
+        "URL": {"@id": "schema:URL"},
+        "address": { "@id": "schema:address"},
+        "affiliation": { "@id": "schema:affiliation"},
+        "applicationCategory": { "@id": "schema:applicationCategory", "@type": "@id"},
+        "applicationSubCategory": { "@id": "schema:applicationSubCategory", "@type": "@id"},
+        "citation": { "@id": "schema:citation"},
+        "codeRepository": { "@id": "schema:codeRepository", "@type": "@id"},
+        "contributor": { "@id": "schema:contributor"},
+        "copyrightHolder": { "@id": "schema:copyrightHolder"},
+        "copyrightYear": { "@id": "schema:copyrightYear"},
+        "creator": { "@id": "schema:creator"},
+        "dateCreated": {"@id": "schema:dateCreated", "@type": "schema:Date" },
+        "dateModified": {"@id": "schema:dateModified", "@type": "schema:Date" },
+        "datePublished": {"@id": "schema:datePublished", "@type": "schema:Date" },
+        "description": { "@id": "schema:description"},
+        "downloadUrl": { "@id": "schema:downloadUrl", "@type": "@id"},
+        "email": { "@id": "schema:email"},
+        "editor": { "@id": "schema:editor"},
+        "encoding": { "@id": "schema:encoding"},
+        "familyName": { "@id": "schema:familyName"},
+        "fileFormat": { "@id": "schema:fileFormat", "@type": "@id"},
+        "fileSize": { "@id": "schema:fileSize"},
+        "funder": { "@id": "schema:funder"},
+        "givenName": { "@id": "schema:givenName"},
+        "hasPart": { "@id": "schema:hasPart" },
+        "identifier": { "@id": "schema:identifier", "@type": "@id"},
+        "installUrl": { "@id": "schema:installUrl", "@type": "@id"},
+        "isAccessibleForFree": { "@id": "schema:isAccessibleForFree"},
+        "isPartOf": { "@id": "schema:isPartOf"},
+        "keywords": { "@id": "schema:keywords"},
+        "license": { "@id": "schema:license", "@type": "@id"},
+        "memoryRequirements": { "@id": "schema:memoryRequirements", "@type": "@id"},
+        "name": { "@id": "schema:name"},
+        "operatingSystem": { "@id": "schema:operatingSystem"},
+        "permissions": { "@id": "schema:permissions"},
+        "position": { "@id": "schema:position"},
+        "processorRequirements": { "@id": "schema:processorRequirements"},
+        "producer": { "@id": "schema:producer"},
+        "programmingLanguage": { "@id": "schema:programmingLanguage"},
+        "provider": { "@id": "schema:provider"},
+        "publisher": { "@id": "schema:publisher"},
+        "relatedLink": { "@id": "schema:relatedLink", "@type": "@id"},
+        "releaseNotes": { "@id": "schema:releaseNotes", "@type": "@id"},
+        "runtimePlatform": { "@id": "schema:runtimePlatform"},
+        "sameAs": { "@id": "schema:sameAs", "@type": "@id"},
+        "softwareHelp": { "@id": "schema:softwareHelp"},
+        "softwareRequirements": { "@id": "schema:softwareRequirements", "@type": "@id"},
+        "softwareVersion": { "@id": "schema:softwareVersion"},
+        "sponsor": { "@id": "schema:sponsor"},
+        "storageRequirements": { "@id": "schema:storageRequirements", "@type": "@id"},
+        "supportingData": { "@id": "schema:supportingData"},
+        "targetProduct": { "@id": "schema:targetProduct"},
+        "url": { "@id": "schema:url", "@type": "@id"},
+        "version": { "@id": "schema:version"},
+
+        "author": { "@id": "schema:author", "@container": "@list" },
+
+        "softwareSuggestions": { "@id": "codemeta:softwareSuggestions", "@type": "@id"},
+        "contIntegration": { "@id": "codemeta:contIntegration", "@type": "@id"},
+        "buildInstructions": { "@id": "codemeta:buildInstructions", "@type": "@id"},
+        "developmentStatus": { "@id": "codemeta:developmentStatus", "@type": "@id"},
+        "embargoDate": { "@id":"codemeta:embargoDate", "@type": "schema:Date" },
+        "funding": { "@id": "codemeta:funding" },
+        "readme": { "@id":"codemeta:readme", "@type": "@id" },
+        "issueTracker": { "@id":"codemeta:issueTracker", "@type": "@id" },
+        "referencePublication": { "@id": "codemeta:referencePublication", "@type": "@id"},
+        "maintainer": { "@id": "codemeta:maintainer" }
+    }
+}
diff --git a/swh/indexer/fossology_license.py b/swh/indexer/fossology_license.py
index 3d46407..395019b 100644
--- a/swh/indexer/fossology_license.py
+++ b/swh/indexer/fossology_license.py
@@ -1,141 +1,185 @@
-# Copyright (C) 2016-2017 The Software Heritage developers
+# Copyright (C) 2016-2018 The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import click
 import subprocess
 
 from swh.model import hashutil
 
-from .indexer import ContentIndexer, DiskIndexer
+from .indexer import ContentIndexer, ContentRangeIndexer, DiskIndexer
 
 
-def compute_license(path, log=None):
-    """Determine license from file at path.
+class MixinFossologyLicenseIndexer:
+    """Mixin fossology license indexer.
 
-    Args:
-        path: filepath to determine the license
-
-    Returns:
-        A dict with the following keys:
-        - licenses ([str]): associated detected licenses to path
-        - path (bytes): content filepath
-        - tool (str): tool used to compute the output
-
-    """
-    try:
-        properties = subprocess.check_output(['nomossa', path],
-                                             universal_newlines=True)
-        if properties:
-            res = properties.rstrip().split(' contains license(s) ')
-            licenses = res[1].split(',')
-
-            return {
-                'licenses': licenses,
-                'path': path,
-            }
-    except subprocess.CalledProcessError:
-        if log:
-            from os import path as __path
-            log.exception('Problem during license detection for sha1 %s' %
-                          __path.basename(path))
-        return {
-            'licenses': [],
-            'path': path,
-        }
-
-
-class ContentFossologyLicenseIndexer(ContentIndexer, DiskIndexer):
-    """Indexer in charge of:
-    - filtering out content already indexed
-    - reading content from objstorage per the content's id (sha1)
-    - computing {license, encoding} from that content
-    - store result in storage
+    See :class:`ContentFossologyLicenseIndexer` and
+    :class:`FossologyLicenseRangeIndexer`
 
     """
     ADDITIONAL_CONFIG = {
         'workdir': ('str', '/tmp/swh/indexer.fossology.license'),
         'tools': ('dict', {
            'name': 'nomos',
            'version': '3.1.0rc2-31-ga2cbb8c',
            'configuration': {
                'command_line': 'nomossa ',
            },
        }),
     }
 
     CONFIG_BASE_FILENAME = 'indexer/fossology_license'
 
     def prepare(self):
         super().prepare()
         self.working_directory = self.config['workdir']
         self.tool = self.tools[0]
 
-    def filter(self, ids):
-        """Filter out known sha1s and return only missing ones.
+    def compute_license(self, path, log=None):
+        """Determine license from file at path.
+ + Args: + path: filepath to determine the license + + Returns: + A dict with the following keys: + - licenses ([str]): associated detected licenses to path + - path (bytes): content filepath + - tool (str): tool used to compute the output """ - yield from self.idx_storage.content_fossology_license_missing(( - { - 'id': sha1, - 'indexer_configuration_id': self.tool['id'], - } for sha1 in ids - )) + try: + properties = subprocess.check_output(['nomossa', path], + universal_newlines=True) + if properties: + res = properties.rstrip().split(' contains license(s) ') + licenses = res[1].split(',') + + return { + 'licenses': licenses, + 'path': path, + } + except subprocess.CalledProcessError: + if log: + from os import path as __path + log.exception('Problem during license detection for sha1 %s' % + __path.basename(path)) + return { + 'licenses': [], + 'path': path, + } def index(self, id, data): """Index sha1s' content and store result. Args: - sha1 (bytes): content's identifier + id (bytes): content's identifier data (bytes): raw content in bytes Returns: A dict, representing a content_license, with keys: - id (bytes): content's identifier (sha1) - license (bytes): license in bytes - path (bytes): path """ - filename = hashutil.hash_to_hex(id) + if isinstance(id, bytes): + id = hashutil.hash_to_hex(id) content_path = self.write_to_temp( - filename=filename, + filename=id, data=data) try: - properties = compute_license(path=content_path, log=self.log) + properties = self.compute_license(path=content_path, log=self.log) properties.update({ 'id': id, 'indexer_configuration_id': self.tool['id'], }) finally: self.cleanup(content_path) return properties def persist_index_computations(self, results, policy_update): """Persist the results in storage.
Args: results ([dict]): list of content_license, dict with the following keys: - id (bytes): content's identifier (sha1) - license (bytes): license in bytes - path (bytes): path policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ self.idx_storage.content_fossology_license_add( results, conflict_update=(policy_update == 'update-dups')) +class ContentFossologyLicenseIndexer( + MixinFossologyLicenseIndexer, DiskIndexer, ContentIndexer): + """Indexer in charge of: + - filtering out content already indexed + - reading content from objstorage per the content's id (sha1) + - computing {license, encoding} from that content + - store result in storage + + """ + def filter(self, ids): + """Filter out known sha1s and return only missing ones. + + """ + yield from self.idx_storage.content_fossology_license_missing(( + { + 'id': sha1, + 'indexer_configuration_id': self.tool['id'], + } for sha1 in ids + )) + + +class FossologyLicenseRangeIndexer( + MixinFossologyLicenseIndexer, DiskIndexer, ContentRangeIndexer): + """FossologyLicense Range Indexer working on range of content identifiers. + + It: + - filters out the non textual content + - (optionally) filters out content already indexed (cf :callable:`range`) + - reads content from objstorage per the content's id (sha1) + - computes {license, encoding} from that content + - stores result in storage + + """ + def indexed_contents_in_range(self, start, end): + """Retrieve indexed content id within range [start, end].
+ + Args: + **start** (bytes): Starting bound from range identifier + **end** (bytes): End range identifier + + Yields: + Content identifier (bytes) present in the range [start, end] + + """ + while start: + result = self.idx_storage.content_fossology_license_get_range( + start, end, self.tool['id']) + contents = result['ids'] + for _id in contents: + yield _id + start = result['next'] + if start is None: + break + + @click.command(help='Compute license for path using tool') @click.option('--tool', default='nomossa', help="Path to tool") @click.option('--path', required=1, help="Path to execute index on") def main(tool, path): - print(compute_license(tool, path)) + indexer = ContentFossologyLicenseIndexer() + print(indexer.compute_license(path)) if __name__ == '__main__': main() diff --git a/swh/indexer/indexer.py b/swh/indexer/indexer.py index 9015e4a..1136d8d 100644 --- a/swh/indexer/indexer.py +++ b/swh/indexer/indexer.py @@ -1,507 +1,587 @@ # Copyright (C) 2016-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import abc import os import logging import shutil import tempfile import datetime from copy import deepcopy from swh.scheduler import get_scheduler from swh.storage import get_storage from swh.core.config import SWHConfig from swh.objstorage import get_objstorage from swh.objstorage.exc import ObjNotFoundError -from swh.model import hashutil from swh.indexer.storage import get_indexer_storage, INDEXER_CFG_KEY +from swh.model import hashutil +from swh.core import utils class DiskIndexer: """Mixin intended to be used with other SomethingIndexer classes. Indexers inheriting from this class are a category of indexers which needs the disk for their computations. Note: This expects `self.working_directory` variable defined at runtime.
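The `DiskIndexer` mixin writes each content into a file under a fresh temporary directory, hands that path to the tool, then removes the whole directory. A minimal self-contained sketch of that lifecycle, using hypothetical standalone functions rather than the actual mixin methods:

```python
import os
import shutil
import tempfile

def write_to_temp(working_directory, filename, data):
    """Write data under a fresh temp dir inside working_directory; return the path."""
    os.makedirs(working_directory, exist_ok=True)
    temp_dir = tempfile.mkdtemp(dir=working_directory)
    content_path = os.path.join(temp_dir, filename)
    with open(content_path, 'wb') as f:
        f.write(data)
    return content_path

def cleanup(content_path):
    """Remove the per-content temporary directory and everything in it."""
    shutil.rmtree(os.path.dirname(content_path))

workdir = tempfile.mkdtemp()
path = write_to_temp(workdir, 'deadbeef', b'some raw content')
with open(path, 'rb') as f:
    assert f.read() == b'some raw content'
cleanup(path)
assert not os.path.exists(os.path.dirname(path))
```

Cleaning up in a `finally` block, as `index()` does above, guarantees the temporary directory is removed even when the license tool fails.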
""" def write_to_temp(self, filename, data): """Write the sha1's content in a temporary file. Args: sha1 (str): the sha1 name filename (str): one of sha1's many filenames data (bytes): the sha1's content to write in temporary file Returns: The path to the temporary file created. That file is filled in with the raw content's data. """ os.makedirs(self.working_directory, exist_ok=True) temp_dir = tempfile.mkdtemp(dir=self.working_directory) content_path = os.path.join(temp_dir, filename) with open(content_path, 'wb') as f: f.write(data) return content_path def cleanup(self, content_path): """Remove content_path from working directory. Args: content_path (str): the file to remove """ temp_dir = os.path.dirname(content_path) shutil.rmtree(temp_dir) class BaseIndexer(SWHConfig, metaclass=abc.ABCMeta): """Base class for indexers to inherit from. The main entry point is the :func:`run` function which is in charge of triggering the computations on the batch dict/ids received. Indexers can: - filter out ids whose data has already been indexed. - retrieve ids data from storage or objstorage - index this data depending on the object and store the result in storage. To implement a new object type indexer, inherit from the BaseIndexer and implement indexing: :func:`run`: object_ids are different depending on object. For example: sha1 for content, sha1_git for revision, directory, release, and id for origin To implement a new concrete indexer, inherit from the object level classes: :class:`ContentIndexer`, :class:`RevisionIndexer`, :class:`OriginIndexer`. Then you need to implement the following functions: :func:`filter`: - filter out data already indexed (in storage). This function is used by - the orchestrator and not directly by the indexer - (cf. swh.indexer.orchestrator.BaseOrchestratorIndexer). + filter out data already indexed (in storage). 
:func:`index_object`: compute index on id with data (retrieved from the storage or the objstorage by the id key) and return the resulting index computation. :func:`persist_index_computations`: persist the results of multiple index computations in the storage. The new indexer implementation can also override the following functions: :func:`prepare`: Configuration preparation for the indexer. When overriding, this must call the `super().prepare()` instruction. :func:`check`: Configuration check for the indexer. When overriding, this must call the `super().check()` instruction. :func:`register_tools`: This should return a dict of the tool(s) to use when indexing or filtering. """ CONFIG = 'indexer/base' DEFAULT_CONFIG = { INDEXER_CFG_KEY: ('dict', { 'cls': 'remote', 'args': { 'url': 'http://localhost:5007/' } }), 'storage': ('dict', { 'cls': 'remote', 'args': { 'url': 'http://localhost:5002/', } }), 'objstorage': ('dict', { - 'cls': 'multiplexer', + 'cls': 'remote', 'args': { - 'objstorages': [{ - 'cls': 'filtered', - 'args': { - 'storage_conf': { - 'cls': 'azure', - 'args': { - 'account_name': '0euwestswh', - 'api_secret_key': 'secret', - 'container_name': 'contents' - } - }, - 'filters_conf': [ - {'type': 'readonly'}, - {'type': 'prefix', 'prefix': '0'} - ] - } - }, { - 'cls': 'filtered', - 'args': { - 'storage_conf': { - 'cls': 'azure', - 'args': { - 'account_name': '1euwestswh', - 'api_secret_key': 'secret', - 'container_name': 'contents' - } - }, - 'filters_conf': [ - {'type': 'readonly'}, - {'type': 'prefix', 'prefix': '1'} - ] - } - }] - }, - }), + 'url': 'http://localhost:5003/', + } + }) } ADDITIONAL_CONFIG = {} def __init__(self): """Prepare and check that the indexer is ready to run. """ super().__init__() self.prepare() self.check() def prepare(self): """Prepare the indexer's needed runtime configuration. Without this step, the indexer cannot possibly run. 
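Tool registration normalizes the `tools` configuration (a single dict or a list of dicts) and prefixes each key with `tool_` before handing the result to the indexer storage. A hedged sketch of that normalization, as hypothetical free functions mirroring `_prepare_tool`/`register_tools`:

```python
def prepare_tool(tool):
    # Prefix every key so it matches the indexer storage API's naming.
    return {'tool_%s' % key: value for key, value in tool.items()}

def normalize_tools(tools):
    """Accept a dict or a list of dicts, as the 'tools' config may hold either."""
    if isinstance(tools, list):
        return [prepare_tool(t) for t in tools]
    if isinstance(tools, dict):
        return [prepare_tool(tools)]
    raise ValueError('Configuration tool(s) must be a dict or list!')

tools = normalize_tools({'name': 'nomos', 'version': '3.1.0rc2-31-ga2cbb8c'})
assert tools == [{'tool_name': 'nomos', 'tool_version': '3.1.0rc2-31-ga2cbb8c'}]
```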
""" self.config = self.parse_config_file( additional_configs=[self.ADDITIONAL_CONFIG]) if self.config['storage']: self.storage = get_storage(**self.config['storage']) objstorage = self.config['objstorage'] self.objstorage = get_objstorage(objstorage['cls'], objstorage['args']) idx_storage = self.config[INDEXER_CFG_KEY] self.idx_storage = get_indexer_storage(**idx_storage) _log = logging.getLogger('requests.packages.urllib3.connectionpool') _log.setLevel(logging.WARN) self.log = logging.getLogger('swh.indexer') self.tools = list(self.register_tools(self.config['tools'])) def check(self): """Check the indexer's configuration is ok before proceeding. If ok, does nothing. If not raise error. """ if not self.tools: raise ValueError('Tools %s is unknown, cannot continue' % self.tools) def _prepare_tool(self, tool): """Prepare the tool dict to be compliant with the storage api. """ return {'tool_%s' % key: value for key, value in tool.items()} def register_tools(self, tools): """Permit to register tools to the storage. Add a sensible default which can be overridden if not sufficient. (For now, all indexers use only one tool) Expects the self.config['tools'] property to be set with one or more tools. Args: tools (dict/[dict]): Either a dict or a list of dict. Returns: List of dict with additional id key. Raises: ValueError if not a list nor a dict. """ tools = self.config['tools'] if isinstance(tools, list): tools = map(self._prepare_tool, tools) elif isinstance(tools, dict): tools = [self._prepare_tool(tools)] else: raise ValueError('Configuration tool(s) must be a dict or list!') return self.idx_storage.indexer_configuration_add(tools) - @abc.abstractmethod - def filter(self, ids): - """Filter missing ids for that particular indexer. - - Args: - ids ([bytes]): list of ids - - Yields: - iterator of missing ids - - """ - pass - @abc.abstractmethod def index(self, id, data): """Index computation for the id and associated raw data. 
Args: id (bytes): identifier data (bytes): id's data from storage or objstorage depending on object type Returns: a dict that makes sense for the persist_index_computations function. """ pass @abc.abstractmethod def persist_index_computations(self, results, policy_update): """Persist the computation resulting from the index. Args: results ([result]): List of results. One result is the result of the index function. policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them Returns: None """ pass def next_step(self, results, task): """Do something else with computations results (e.g. send to another queue, ...). (This is not an abstractmethod since it is optional). Args: results ([result]): List of results (dict) as returned by index function. task (dict): a dict in the form expected by `scheduler.backend.SchedulerBackend.create_tasks` without `next_run`, plus a `result_name` key. Returns: None """ if task: if getattr(self, 'scheduler', None): scheduler = self.scheduler else: scheduler = get_scheduler(**self.config['scheduler']) task = deepcopy(task) result_name = task.pop('result_name') task['next_run'] = datetime.datetime.now() task['arguments']['kwargs'][result_name] = self.results scheduler.create_tasks([task]) @abc.abstractmethod def run(self, ids, policy_update, next_step=None, **kwargs): """Given a list of ids: - retrieves the data from the storage - executes the indexing computations - stores the results (according to policy_update) Args: ids ([bytes]): id's identifier list policy_update (str): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them next_step (dict): a dict in the form expected by `scheduler.backend.SchedulerBackend.create_tasks` without `next_run`, plus a `result_name` key. 
**kwargs: passed to the `index` method """ pass class ContentIndexer(BaseIndexer): - """An object type indexer, inherits from the :class:`BaseIndexer` and - implements Content indexing using the run method + """A content indexer working on a list of ids directly. - Note: the :class:`ContentIndexer` is not an instantiable - object. To use it in another context, one should inherit from this - class and override the methods mentioned in the - :class:`BaseIndexer` class. + To work on indexer range, use the :class:`ContentRangeIndexer` + instead. + + Note: :class:`ContentIndexer` is not an instantiable object. To + use it, one should inherit from this class and override the + methods mentioned in the :class:`BaseIndexer` class. """ + @abc.abstractmethod + def filter(self, ids): + """Filter missing ids for that particular indexer. + + Args: + ids ([bytes]): list of ids + + Yields: + iterator of missing ids + + """ + pass def run(self, ids, policy_update, next_step=None, **kwargs): """Given a list of ids: - retrieve the content from the storage - execute the indexing computations - store the results (according to policy_update) Args: ids ([bytes]): sha1's identifier list policy_update (str): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them next_step (dict): a dict in the form expected by `scheduler.backend.SchedulerBackend.create_tasks` without `next_run`, plus a `result_name` key. 
**kwargs: passed to the `index` method """ results = [] try: for sha1 in ids: try: raw_content = self.objstorage.get(sha1) except ObjNotFoundError: self.log.warning('Content %s not found in objstorage' % hashutil.hash_to_hex(sha1)) continue res = self.index(sha1, raw_content, **kwargs) if res: # If no results, skip it results.append(res) self.persist_index_computations(results, policy_update) self.results = results return self.next_step(results, task=next_step) except Exception: self.log.exception( 'Problem when reading contents metadata.') +class ContentRangeIndexer(BaseIndexer): + """A content range indexer. + + This expects as input a range of ids to index. + + To work on a list of ids, use the :class:`ContentIndexer` instead. + + Note: :class:`ContentRangeIndexer` is not an instantiable + object. To use it, one should inherit from this class and override + the methods mentioned in the :class:`BaseIndexer` class. + + """ + @abc.abstractmethod + def indexed_contents_in_range(self, start, end): + """Retrieve indexed contents within range [start, end]. + + Args: + **start** (bytes): Starting bound from range identifier + **end** (bytes): End range identifier + + Yields: + Content identifier (bytes) present in the range [start, end] + + """ + pass + + def _list_contents_to_index(self, start, end, indexed): + """Compute from storage the new contents to index in the range [start, + end]. The already indexed contents are skipped. + + Args: + **start** (bytes): Starting bound from range identifier + **end** (bytes): End range identifier + **indexed** (Set[bytes]): Set of content already indexed. + + Yields: + Identifier (bytes) of contents to index.
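`_list_contents_to_index` walks the storage one page at a time, following the `next` cursor until it is exhausted and skipping ids already indexed. A self-contained sketch of that pattern against a fake paginated storage (all names here are hypothetical stand-ins, not the real storage API):

```python
def list_contents_to_index(storage, start, end, indexed):
    """Yield ids in [start, end] from a paginated storage, skipping `indexed`."""
    while start:
        result = storage.content_get_range(start, end)
        for c in result['contents']:
            if c['sha1'] not in indexed:
                yield c['sha1']
        start = result['next']  # None once the range is exhausted

class FakeStorage:
    """Serves ids a page at a time, mimicking a paginated content_get_range."""
    def __init__(self, ids, page=2):
        self.ids, self.page = sorted(ids), page

    def content_get_range(self, start, end):
        window = [i for i in self.ids if start <= i <= end][:self.page]
        rest = [i for i in self.ids if window and window[-1] < i <= end]
        return {'contents': [{'sha1': i} for i in window],
                'next': rest[0] if rest else None}

storage = FakeStorage([b'\x01', b'\x02', b'\x03', b'\x04'])
todo = list(list_contents_to_index(storage, b'\x01', b'\x04', indexed={b'\x02'}))
assert todo == [b'\x01', b'\x03', b'\x04']
```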
+ + """ + while start: + result = self.storage.content_get_range(start, end) + contents = result['contents'] + for c in contents: + _id = c['sha1'] + if _id in indexed: + continue + yield _id + start = result['next'] + + def _index_contents(self, start, end, indexed, **kwargs): + """Index the contents from within range [start, end] + + Args: + **start** (bytes): Starting bound from range identifier + **end** (bytes): End range identifier + **indexed** (Set[bytes]): Set of content already indexed. + + Yields: + Data indexed (dict) to persist using the indexer storage + + """ + for sha1 in self._list_contents_to_index(start, end, indexed): + try: + raw_content = self.objstorage.get(sha1) + except ObjNotFoundError: + self.log.warning('Content %s not found in objstorage' % + hashutil.hash_to_hex(sha1)) + continue + res = self.index(sha1, raw_content, **kwargs) + if res: + yield res + + def run(self, start, end, skip_existing=True, **kwargs): + """Given a range of content ids, compute the indexing computations on + the contents within. Either the indexer is incremental + (filter out existing computed data) or not (compute + everything from scratch). + + Args: + **start** (Union[bytes, str]): Starting range identifier + **end** (Union[bytes, str]): Ending range identifier + **skip_existing** (bool): Skip existing indexed data + (default) or not + **kwargs: passed to the `index` method + + Returns: + a boolean. True if data was indexed, False otherwise. 
+ + """ + with_indexed_data = False + try: + if isinstance(start, str): + start = hashutil.hash_to_bytes(start) + if isinstance(end, str): + end = hashutil.hash_to_bytes(end) + + if skip_existing: + indexed = set(self.indexed_contents_in_range(start, end)) + else: + indexed = set() + + index_computations = self._index_contents(start, end, indexed) + for results in utils.grouper(index_computations, + n=self.config['write_batch_size']): + self.persist_index_computations( + results, policy_update='update-dups') + with_indexed_data = True + return with_indexed_data + except Exception: + self.log.exception( + 'Problem when computing metadata.') + + class OriginIndexer(BaseIndexer): """An object type indexer, inherits from the :class:`BaseIndexer` and implements Origin indexing using the run method Note: the :class:`OriginIndexer` is not an instantiable object. To use it in another context one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class. """ def run(self, ids, policy_update, parse_ids=False, next_step=None, **kwargs): """Given a list of origin ids: - retrieve origins from storage - execute the indexing computations - store the results (according to policy_update) Args: ids ([Union[int, Tuple[str, bytes]]]): list of origin ids or (type, url) tuples. policy_update (str): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them parse_ids (bool: If `True`, will try to convert `ids` from a human input to the valid type. next_step (dict): a dict in the form expected by `scheduler.backend.SchedulerBackend.create_tasks` without `next_run`, plus a `result_name` key. 
**kwargs: passed to the `index` method """ if parse_ids: ids = [ o.split('+', 1) if ':' in o else int(o) # type+url or id for o in ids] results = [] for id_ in ids: if isinstance(id_, (tuple, list)): if len(id_) != 2: raise TypeError('Expected a (type, url) tuple.') (type_, url) = id_ params = {'type': type_, 'url': url} elif isinstance(id_, int): params = {'id': id_} else: raise TypeError('Invalid value in "ids": %r' % id_) origin = self.storage.origin_get(params) if not origin: self.log.warning('Origin %s not found in storage' % (id_,)) continue try: res = self.index(origin, **kwargs) if res: # If no results, skip it results.append(res) except Exception: self.log.exception( 'Problem when processing origin %s' % id_) self.persist_index_computations(results, policy_update) self.results = results return self.next_step(results, task=next_step) class RevisionIndexer(BaseIndexer): """An object type indexer, inherits from the :class:`BaseIndexer` and implements Revision indexing using the run method Note: the :class:`RevisionIndexer` is not an instantiable object. To use it in another context one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class.
""" def run(self, ids, policy_update, next_step=None): """Given a list of sha1_gits: - retrieve revisions from storage - execute the indexing computations - store the results (according to policy_update) Args: ids ([bytes or str]): sha1_git's identifier list policy_update (str): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ results = [] ids = [id_.encode() if isinstance(id_, str) else id_ for id_ in ids] revs = self.storage.revision_get(ids) for rev in revs: if not rev: self.log.warning('Revisions %s not found in storage' % list(map(hashutil.hash_to_hex, ids))) continue try: res = self.index(rev) if res: # If no results, skip it results.append(res) except Exception: self.log.exception( 'Problem when processing revision') self.persist_index_computations(results, policy_update) self.results = results return self.next_step(results, task=next_step) diff --git a/swh/indexer/metadata.py b/swh/indexer/metadata.py index 933716b..72c2cca 100644 --- a/swh/indexer/metadata.py +++ b/swh/indexer/metadata.py @@ -1,334 +1,334 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import click import logging from swh.indexer.indexer import ContentIndexer, RevisionIndexer, OriginIndexer from swh.indexer.metadata_dictionary import MAPPINGS from swh.indexer.metadata_detector import detect_metadata from swh.indexer.metadata_detector import extract_minimal_metadata_dict from swh.indexer.storage import INDEXER_CFG_KEY from swh.model import hashutil class ContentMetadataIndexer(ContentIndexer): """Content-level indexer This indexer is in charge of: - filtering out content already indexed in content_metadata - reading content from objstorage with the content's id sha1 - computing translated_metadata by given context - using the metadata_dictionary as the 
'swh-metadata-translator' tool - store result in content_metadata table """ - CONFIG_BASE_FILENAME = 'indexer/metadata' + CONFIG_BASE_FILENAME = 'indexer/content_metadata' def __init__(self, tool, config): # twisted way to use the exact same config of RevisionMetadataIndexer # object that uses internally ContentMetadataIndexer self.config = config self.config['tools'] = tool super().__init__() def filter(self, ids): """Filter out known sha1s and return only missing ones. """ yield from self.idx_storage.content_metadata_missing(( { 'id': sha1, 'indexer_configuration_id': self.tool['id'], } for sha1 in ids )) def index(self, id, data): """Index sha1s' content and store result. Args: id (bytes): content's identifier data (bytes): raw content in bytes Returns: dict: dictionary representing a content_metadata. If the translation wasn't successful the translated_metadata keys will be returned as None """ result = { 'id': id, 'indexer_configuration_id': self.tool['id'], 'translated_metadata': None } try: mapping_name = self.tool['tool_configuration']['context'] result['translated_metadata'] = MAPPINGS[mapping_name] \ .translate(data) # a twisted way to keep result with indexer object for get_results self.results.append(result) except Exception: self.log.exception( "Problem during tool retrieval of metadata translation") return result def persist_index_computations(self, results, policy_update): """Persist the results in storage. 
Args: results ([dict]): list of content_metadata, dict with the following keys: - id (bytes): content's identifier (sha1) - translated_metadata (jsonb): detected metadata policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ self.idx_storage.content_metadata_add( results, conflict_update=(policy_update == 'update-dups')) def get_results(self): """can be called only if run method was called before Returns: list: list of content_metadata entries calculated by current indexer """ return self.results class RevisionMetadataIndexer(RevisionIndexer): """Revision-level indexer This indexer is in charge of: - filtering revisions already indexed in revision_metadata table with defined computation tool - retrieve all entry_files in root directory - use metadata_detector for file_names containing metadata - compute metadata translation if necessary and possible (depends on tool) - send sha1s to content indexing if possible - store the results for revision """ - CONFIG_BASE_FILENAME = 'indexer/metadata' + CONFIG_BASE_FILENAME = 'indexer/revision_metadata' ADDITIONAL_CONFIG = { 'tools': ('dict', { 'name': 'swh-metadata-detector', 'version': '0.0.2', 'configuration': { 'type': 'local', 'context': ['NpmMapping', 'CodemetaMapping'] }, }), } ContentMetadataIndexer = ContentMetadataIndexer def prepare(self): super().prepare() self.tool = self.tools[0] def filter(self, sha1_gits): """Filter out known sha1s and return only missing ones. """ yield from self.idx_storage.revision_metadata_missing(( { 'id': sha1_git, 'indexer_configuration_id': self.tool['id'], } for sha1_git in sha1_gits )) def index(self, rev): """Index rev by processing it and organizing result. 
use metadata_detector to iterate on filenames - if one filename detected -> sends file to content indexer - if multiple file detected -> translation needed at revision level Args: rev (bytes): revision artifact from storage Returns: dict: dictionary representing a revision_metadata, with keys: - id (str): rev's identifier (sha1_git) - indexer_configuration_id (bytes): tool used - - translated_metadata (bytes): dict of retrieved metadata + - translated_metadata: dict of retrieved metadata """ try: result = { 'id': rev['id'].decode(), 'indexer_configuration_id': self.tool['id'], 'translated_metadata': None } root_dir = rev['directory'] dir_ls = self.storage.directory_ls(root_dir, recursive=False) files = [entry for entry in dir_ls if entry['type'] == 'file'] detected_files = detect_metadata(files) result['translated_metadata'] = self.translate_revision_metadata( - detected_files) + detected_files) except Exception as e: self.log.exception( 'Problem when indexing rev: %r', e) return result def persist_index_computations(self, results, policy_update): """Persist the results in storage. 
Args: results ([dict]): list of revision_metadata, dict with the following keys: - id (bytes): revision's identifier (sha1_git) - translated_metadata: detected metadata policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ # TODO: add functions in storage to keep data in revision_metadata self.idx_storage.revision_metadata_add( results, conflict_update=(policy_update == 'update-dups')) def translate_revision_metadata(self, detected_files): """ Determine plan of action to translate metadata when containing one or multiple detected files: Args: detected_files (dict): dictionary mapping context names (e.g., "npm", "authors") to list of sha1 Returns: dict: dict with translated metadata according to the CodeMeta vocabulary """ translated_metadata = [] tool = { 'name': 'swh-metadata-translator', 'version': '0.0.2', 'configuration': { 'type': 'local', 'context': None }, } # TODO: iterate on each context, on each file # -> get raw_contents # -> translate each content config = { INDEXER_CFG_KEY: self.idx_storage, 'objstorage': self.objstorage } for context in detected_files.keys(): tool['configuration']['context'] = context c_metadata_indexer = self.ContentMetadataIndexer(tool, config) # sha1s that are in content_metadata table sha1s_in_storage = [] metadata_generator = self.idx_storage.content_metadata_get( detected_files[context]) for c in metadata_generator: # extracting translated_metadata sha1 = c['id'] sha1s_in_storage.append(sha1) local_metadata = c['translated_metadata'] # local metadata is aggregated if local_metadata: translated_metadata.append(local_metadata) sha1s_filtered = [item for item in detected_files[context] if item not in sha1s_in_storage] if sha1s_filtered: # schedule indexation of content try: c_metadata_indexer.run(sha1s_filtered, policy_update='ignore-dups') # on the fly possibility: results = c_metadata_indexer.get_results() for result in results:
local_metadata = result['translated_metadata'] translated_metadata.append(local_metadata) except Exception as e: self.log.warning('Exception while indexing content: %s', e) # transform translated_metadata into min set with swh-metadata-detector min_metadata = extract_minimal_metadata_dict(translated_metadata) return min_metadata class OriginMetadataIndexer(OriginIndexer): def filter(self, ids): return ids def run(self, revisions_metadata, policy_update, *, origin_head): """Expected to be called with the result of RevisionMetadataIndexer as first argument; i.e. not a list of ids as other indexers would. Args: * `revisions_metadata` (List[dict]): contains metadata from revisions, along with the respective revision ids. It is passed by RevisionMetadataIndexer via a Celery chain triggered by OriginIndexer.next_step. * `policy_update`: `'ignore-dups'` or `'update-dups'` * `origin_head` (dict): maps each str(origin_id) to the encoded id of its head revision, as produced by OriginHeadIndexer. """ origin_head_map = {int(origin_id): rev_id for (origin_id, rev_id) in origin_head.items()} # Fix up the argument order. revisions_metadata has to be the # first argument because of celery.chain; the next line calls # run() with the usual order, i.e. origin ids first. return super().run(ids=list(origin_head_map), policy_update=policy_update, revisions_metadata=revisions_metadata, origin_head_map=origin_head_map) def index(self, origin, *, revisions_metadata, origin_head_map): # Get the last revision of the origin.
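`OriginMetadataIndexer.index` looks up the origin's head revision in `origin_head_map`, then scans `revisions_metadata` for the matching entry. A simplified sketch of that matching (a hypothetical free function; the real method also records `indexer_configuration_id`):

```python
def metadata_for_origin(origin_id, origin_head_map, revisions_metadata):
    """Return the metadata entry computed for the origin's head revision."""
    revision_id = origin_head_map[origin_id]
    for revision_metadata in revisions_metadata:
        if revision_metadata['id'] == revision_id:
            return {'origin_id': origin_id,
                    'metadata': revision_metadata['translated_metadata'],
                    'from_revision': revision_id}
    raise KeyError(revision_id)

revs = [{'id': b'\x01', 'translated_metadata': {'name': 'codemeta'}}]
result = metadata_for_origin(42, {42: b'\x01'}, revs)
assert result['from_revision'] == b'\x01'
```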
revision_id = origin_head_map[origin['id']] # Get the metadata of that revision, and return it for revision_metadata in revisions_metadata: if revision_metadata['id'] == revision_id: return { 'origin_id': origin['id'], 'metadata': revision_metadata['translated_metadata'], 'from_revision': revision_id, 'indexer_configuration_id': revision_metadata['indexer_configuration_id'], } raise KeyError('%r not in %r' % (revision_id, [r['id'] for r in revisions_metadata])) def persist_index_computations(self, results, policy_update): self.idx_storage.origin_intrinsic_metadata_add( results, conflict_update=(policy_update == 'update-dups')) @click.command() @click.option('--revs', '-i', help='Default sha1_git to lookup', multiple=True) def main(revs): _git_sha1s = list(map(hashutil.hash_to_bytes, revs)) rev_metadata_indexer = RevisionMetadataIndexer() rev_metadata_indexer.run(_git_sha1s, 'update-dups') if __name__ == '__main__': logging.basicConfig(level=logging.INFO) main() diff --git a/swh/indexer/metadata_detector.py b/swh/indexer/metadata_detector.py index d26a7ef..629974a 100644 --- a/swh/indexer/metadata_detector.py +++ b/swh/indexer/metadata_detector.py @@ -1,65 +1,60 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information - +from swh.indexer.codemeta import compact, expand +from swh.indexer.codemeta import make_absolute_uri from swh.indexer.metadata_dictionary import MAPPINGS def detect_metadata(files): """ Detects files potentially containing metadata Args: - file_entries (list): list of files Returns: - empty list if nothing was found - dictionary {mapping_filenames[name]:f['sha1']} """ results = {} for (mapping_name, mapping) in MAPPINGS.items(): matches = mapping.detect_metadata_files(files) if matches: results[mapping_name] = matches return results +_MINIMAL_PROPERTY_SET = { + 
"developmentStatus", "version", "operatingSystem", "description", + "keywords", "issueTracker", "name", "author", "relatedLink", + "url", "license", "maintainer", "email", "identifier", + "codeRepository"} + +MINIMAL_METADATA_SET = {make_absolute_uri(prop) + for prop in _MINIMAL_PROPERTY_SET} + + def extract_minimal_metadata_dict(metadata_list): """ Every item in the metadata_list is a dict of translated_metadata in the CodeMeta vocabulary we wish to extract a minimal set of terms and keep all values corresponding to this term without duplication Args: - metadata_list (list): list of dicts of translated_metadata Returns: - minimal_dict (dict): one dict with selected values of metadata """ - minimal_dict = { - "developmentStatus": [], - "version": [], - "operatingSystem": [], - "description": [], - "keywords": [], - "issueTracker": [], - "name": [], - "author": [], - "relatedLink": [], - "url": [], - "license": [], - "maintainer": [], - "email": [], - "softwareRequirements": [], - "identifier": [], - "codeRepository": [] - } - for term in minimal_dict.keys(): - for metadata_item in metadata_list: - if term in metadata_item: - if not metadata_item[term] in minimal_dict[term]: - minimal_dict[term].append(metadata_item[term]) - if not minimal_dict[term]: - minimal_dict[term] = None - return minimal_dict + minimal_dict = {} + for document in metadata_list: + for metadata_item in expand(document): + for (term, value) in metadata_item.items(): + if term in MINIMAL_METADATA_SET: + if term not in minimal_dict: + minimal_dict[term] = [value] + elif value not in minimal_dict[term]: + minimal_dict[term].append(value) + return compact(minimal_dict) diff --git a/swh/indexer/metadata_dictionary.py b/swh/indexer/metadata_dictionary.py index 4266001..b8e01b9 100644 --- a/swh/indexer/metadata_dictionary.py +++ b/swh/indexer/metadata_dictionary.py @@ -1,214 +1,284 @@ # Copyright (C) 2017 The Software Heritage developers # See the AUTHORS file at the top-level directory of this 
distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information +import os +import re import abc -import csv import json -import os.path import logging +import xmltodict -import swh.indexer - -CROSSWALK_TABLE_PATH = os.path.join(os.path.dirname(swh.indexer.__file__), - 'data', 'codemeta', 'crosswalk.csv') - - -def read_crosstable(fd): - reader = csv.reader(fd) - try: - header = next(reader) - except StopIteration: - raise ValueError('empty file') - - data_sources = set(header) - {'Parent Type', 'Property', - 'Type', 'Description'} - assert 'codemeta-V1' in data_sources - - codemeta_translation = {data_source: {} for data_source in data_sources} - - for line in reader: # For each canonical name - canonical_name = dict(zip(header, line))['Property'] - for (col, value) in zip(header, line): # For each cell in the row - if col in data_sources: - # If that's not the parentType/property/type/description - for local_name in value.split('/'): - # For each of the data source's properties that maps - # to this canonical name - if local_name.strip(): - codemeta_translation[col][local_name.strip()] = \ - canonical_name - - return codemeta_translation - - -with open(CROSSWALK_TABLE_PATH) as fd: - CROSSWALK_TABLE = read_crosstable(fd) +from swh.indexer.codemeta import CROSSWALK_TABLE, SCHEMA_URI +from swh.indexer.codemeta import compact, expand MAPPINGS = {} def register_mapping(cls): MAPPINGS[cls.__name__] = cls() return cls class BaseMapping(metaclass=abc.ABCMeta): """Base class for mappings to inherit from To implement a new mapping: - inherit this class - override translate function """ def __init__(self): self.log = logging.getLogger('%s.%s' % ( self.__class__.__module__, self.__class__.__name__)) @abc.abstractmethod def detect_metadata_files(self, files): """ Detects files potentially containing metadata Args: - file_entries (list): list of files Returns: - empty list if nothing was found - list of sha1 
otherwise """ pass @abc.abstractmethod def translate(self, file_content): pass + def normalize_translation(self, metadata): + return compact(metadata) + + +class SingleFileMapping(BaseMapping): + """Base class for all mappings that use a single file as input.""" + + @property + @abc.abstractmethod + def filename(self): + """The .json file to extract metadata from.""" + pass + + def detect_metadata_files(self, file_entries): + for entry in file_entries: + if entry['name'] == self.filename: + return [entry['sha1']] + return [] + class DictMapping(BaseMapping): """Base class for mappings that take as input a file that is mostly a key-value store (eg. a shallow JSON dict).""" @property @abc.abstractmethod def mapping(self): """A translation dict to map dict keys into a canonical name.""" pass - def translate_dict(self, content_dict): + def translate_dict(self, content_dict, *, normalize=True): """ Translates content by parsing content from a dict object and translating with the appropriate mapping Args: content_dict (dict) Returns: dict: translated metadata in json-friendly form needed for the indexer """ - translated_metadata = {} - default = 'other' - translated_metadata['other'] = {} - try: - for k, v in content_dict.items(): - try: - term = self.mapping.get(k, default) - if term not in translated_metadata: - translated_metadata[term] = v - continue - if isinstance(translated_metadata[term], str): - in_value = translated_metadata[term] - translated_metadata[term] = [in_value, v] - continue - if isinstance(translated_metadata[term], list): - translated_metadata[term].append(v) - continue - if isinstance(translated_metadata[term], dict): - translated_metadata[term][k] = v - continue - except KeyError: - self.log.exception( - "Problem during item mapping") - continue - except Exception: - raise - return None - return translated_metadata - - -class JsonMapping(DictMapping): + translated_metadata = {'@type': SCHEMA_URI + 'SoftwareSourceCode'} + for k, v in 
content_dict.items(): + # First, check if there is a specific translation + # method for this key + translation_method = getattr(self, 'translate_' + k, None) + if translation_method: + translation_method(translated_metadata, v) + elif k in self.mapping: + # if there is no method, but the key is known from the + # crosswalk table + + # if there is a normalization method, use it on the value + normalization_method = getattr(self, 'normalize_' + k, None) + if normalization_method: + v = normalization_method(v) + + # set the translation metadata with the normalized value + translated_metadata[self.mapping[k]] = v + if normalize: + return self.normalize_translation(translated_metadata) + else: + return translated_metadata + + +class JsonMapping(DictMapping, SingleFileMapping): """Base class for all mappings that use a JSON file as input.""" - @property - @abc.abstractmethod - def filename(self): - """The .json file to extract metadata from.""" - pass - - def detect_metadata_files(self, file_entries): - for entry in file_entries: - if entry['name'] == self.filename: - return [entry['sha1']] - return [] - def translate(self, raw_content): """ Translates content by parsing content from a bytestring containing json data and translating with the appropriate mapping Args: raw_content: bytes Returns: dict: translated metadata in json-friendly form needed for the indexer """ try: raw_content = raw_content.decode() except UnicodeDecodeError: self.log.warning('Error unidecoding %r', raw_content) return try: content_dict = json.loads(raw_content) except json.JSONDecodeError: self.log.warning('Error unjsoning %r' % raw_content) return return self.translate_dict(content_dict) @register_mapping class NpmMapping(JsonMapping): """ dedicated class for NPM (package.json) mapping and translation """ mapping = CROSSWALK_TABLE['NodeJS'] filename = b'package.json' + _schema_shortcuts = { + 'github': 'https://github.com/', + 'gist': 'https://gist.github.com/', + 'bitbucket': 
'https://bitbucket.org/', + 'gitlab': 'https://gitlab.com/', + } + + def normalize_repository(self, d): + """https://docs.npmjs.com/files/package.json#repository""" + if isinstance(d, dict): + return '{type}+{url}'.format(**d) + elif isinstance(d, str): + if '://' in d: + return d + elif ':' in d: + (schema, rest) = d.split(':', 1) + if schema in self._schema_shortcuts: + return self._schema_shortcuts[schema] + rest + else: + return None + else: + return self._schema_shortcuts['github'] + d + + else: + return None + + def normalize_bugs(self, d): + return '{url}'.format(**d) + + _parse_author = re.compile(r'^ *' + r'(?P.*?)' + r'( +<(?P.*)>)?' + r'( +\((?P.*)\))?' + r' *$') + + def normalize_author(self, d): + 'https://docs.npmjs.com/files/package.json' \ + '#people-fields-author-contributors' + author = {'@type': SCHEMA_URI+'Person'} + if isinstance(d, dict): + name = d.get('name', None) + email = d.get('email', None) + url = d.get('url', None) + elif isinstance(d, str): + match = self._parse_author.match(d) + name = match.group('name') + email = match.group('email') + url = match.group('url') + else: + return None + if name: + author[SCHEMA_URI+'name'] = name + if email: + author[SCHEMA_URI+'email'] = email + if url: + author[SCHEMA_URI+'url'] = url + return author + @register_mapping -class CodemetaMapping(JsonMapping): +class CodemetaMapping(SingleFileMapping): """ dedicated class for CodeMeta (codemeta.json) mapping and translation """ - mapping = CROSSWALK_TABLE['codemeta-V1'] filename = b'codemeta.json' + def translate(self, content): + return self.normalize_translation(expand(json.loads(content.decode()))) + + +@register_mapping +class MavenMapping(DictMapping, SingleFileMapping): + """ + dedicated class for Maven (pom.xml) mapping and translation + """ + filename = b'pom.xml' + mapping = CROSSWALK_TABLE['Java (Maven)'] + + def translate(self, content): + d = xmltodict.parse(content)['project'] + metadata = self.translate_dict(d, normalize=False) + 
metadata[SCHEMA_URI+'codeRepository'] = self.parse_repositories(d) + return self.normalize_translation(metadata) + + _default_repository = {'url': 'https://repo.maven.apache.org/maven2/'} + + def parse_repositories(self, d): + """https://maven.apache.org/pom.html#Repositories""" + if 'repositories' not in d: + return [self.parse_repository(d, self._default_repository)] + else: + repositories = d['repositories'].get('repository', []) + if not isinstance(repositories, list): + repositories = [repositories] + results = [] + for repo in repositories: + res = self.parse_repository(d, repo) + if res: + results.append(res) + return results + + def parse_repository(self, d, repo): + if repo.get('layout', 'default') != 'default': + return # TODO ? + url = repo['url'] + if d['groupId']: + url = os.path.join(url, *d['groupId'].split('.')) + if d['artifactId']: + url = os.path.join(url, d['artifactId']) + return url + def main(): raw_content = b"""{"name": "test_name", "unknown_term": "ut"}""" raw_content1 = b"""{"name": "test_name", "unknown_term": "ut", "prerequisites": "packageXYZ"}""" result = MAPPINGS["NpmMapping"].translate(raw_content) result1 = MAPPINGS["NpmMapping"].translate(raw_content1) print(result) print(result1) if __name__ == "__main__": main() diff --git a/swh/indexer/mimetype.py b/swh/indexer/mimetype.py index 1b42419..17842ad 100644 --- a/swh/indexer/mimetype.py +++ b/swh/indexer/mimetype.py @@ -1,167 +1,179 @@ -# Copyright (C) 2016-2017 The Software Heritage developers +# Copyright (C) 2016-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import click import magic from swh.model import hashutil from swh.scheduler import get_scheduler -from swh.scheduler.utils import create_task_dict -from .indexer import ContentIndexer +from .indexer import ContentIndexer, ContentRangeIndexer def 
compute_mimetype_encoding(raw_content): """Determine mimetype and encoding from the raw content. Args: raw_content (bytes): content's raw data Returns: A dict with mimetype and encoding key and corresponding values (as bytes). """ r = magic.detect_from_content(raw_content) return { 'mimetype': r.mime_type.encode('utf-8'), 'encoding': r.encoding.encode('utf-8'), } -class ContentMimetypeIndexer(ContentIndexer): - """Indexer in charge of: +class MixinMimetypeIndexer: + """Mixin mimetype indexer. - - filtering out content already indexed - - reading content from objstorage per the content's id (sha1) - - computing {mimetype, encoding} from that content - - store result in storage + See :class:`ContentMimetypeIndexer` and :class:`MimetypeRangeIndexer` """ ADDITIONAL_CONFIG = { - 'scheduler': { + 'scheduler': ('dict', { 'cls': 'remote', 'args': { 'url': 'http://localhost:5008', }, - }, - 'destination_task': ('str', None), + }), 'tools': ('dict', { 'name': 'file', 'version': '1:5.30-1+deb9u1', 'configuration': { "type": "library", "debian-package": "python3-magic" }, }), + 'write_batch_size': ('int', 100), } CONFIG_BASE_FILENAME = 'indexer/mimetype' def prepare(self): super().prepare() - self.destination_task = self.config.get('destination_task') self.scheduler = get_scheduler(**self.config['scheduler']) self.tool = self.tools[0] - def filter(self, ids): - """Filter out known sha1s and return only missing ones. - - """ - yield from self.idx_storage.content_mimetype_missing(( - { - 'id': sha1, - 'indexer_configuration_id': self.tool['id'], - } for sha1 in ids - )) - def index(self, id, data): """Index sha1s' content and store result. 
Args: id (bytes): content's identifier data (bytes): raw content in bytes Returns: A dict, representing a content_mimetype, with keys: - id (bytes): content's identifier (sha1) - mimetype (bytes): mimetype in bytes - encoding (bytes): encoding in bytes """ try: properties = compute_mimetype_encoding(data) properties.update({ 'id': id, 'indexer_configuration_id': self.tool['id'], }) except TypeError: self.log.error('Detecting mimetype error for id %s' % ( hashutil.hash_to_hex(id), )) return None return properties def persist_index_computations(self, results, policy_update): """Persist the results in storage. Args: results ([dict]): list of content_mimetype, dict with the following keys: - id (bytes): content's identifier (sha1) - mimetype (bytes): mimetype in bytes - encoding (bytes): encoding in bytes policy_update ([str]): either 'update-dups' or 'ignore-dups' to respectively update duplicates or ignore them """ self.idx_storage.content_mimetype_add( results, conflict_update=(policy_update == 'update-dups')) - def _filter_text(self, results): - """Filter sha1 whose raw content is text. + +class ContentMimetypeIndexer(MixinMimetypeIndexer, ContentIndexer): + """Mimetype Indexer working on list of content identifiers. + + It: + - (optionally) filters out content already indexed (cf. :callable:`filter`) + - reads content from objstorage per the content's id (sha1) + - computes {mimetype, encoding} from that content + - stores result in storage + + FIXME: + - 1. Rename redundant ContentMimetypeIndexer to MimetypeIndexer + - 2. Do we keep it afterwards? ~> i think this can be used with the journal + + """ + def filter(self, ids): + """Filter out known sha1s and return only missing ones. 
""" - for result in results: - if b'binary' in result['encoding']: - continue - yield result['id'] + yield from self.idx_storage.content_mimetype_missing(( + { + 'id': sha1, + 'indexer_configuration_id': self.tool['id'], + } for sha1 in ids + )) - def next_step(self, results): - """When the computations is done, we'd like to send over only text - contents to the text content orchestrator. - Args: - results ([dict]): List of content_mimetype results, dict - with the following keys: +class MimetypeRangeIndexer(MixinMimetypeIndexer, ContentRangeIndexer): + """Mimetype Range Indexer working on range of content identifiers. - - id (bytes): content's identifier (sha1) - - mimetype (bytes): mimetype in bytes - - encoding (bytes): encoding in bytes + It: + - (optionally) filters out content already indexed (cf :callable:`range`) + - reads content from objstorage per the content's id (sha1) + - computes {mimetype, encoding} from that content + - stores result in storage + + """ + def indexed_contents_in_range(self, start, end): + """Retrieve indexed content id within range [start, end]. 
+ + Args + **start** (bytes): Starting bound from range identifier + **end** (bytes): End range identifier + + Yields: + Content identifier (bytes) present in the range [start, end] """ - if self.destination_task: - assert self.scheduler - self.scheduler.create_tasks([create_task_dict( - self.destination_task, - 'oneshot', - list(self._filter_text(results)) - )]) + while start: + result = self.idx_storage.content_mimetype_get_range( + start, end, self.tool['id']) + contents = result['ids'] + for _id in contents: + yield _id + start = result['next'] + if start is None: + break @click.command() @click.option('--path', help="Path to execute index on") def main(path): with open(path, 'rb') as f: raw_content = f.read() print(compute_mimetype_encoding(raw_content)) if __name__ == '__main__': main() diff --git a/swh/indexer/orchestrator.py b/swh/indexer/orchestrator.py deleted file mode 100644 index d63c696..0000000 --- a/swh/indexer/orchestrator.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (C) 2016-2018 The Software Heritage developers -# See the AUTHORS file at the top-level directory of this distribution -# License: GNU General Public License version 3, or any later version -# See top-level LICENSE file for more information - -import random - -from swh.core.config import SWHConfig -from swh.core.utils import grouper -from swh.scheduler import utils -from swh.scheduler import get_scheduler -from swh.scheduler.utils import create_task_dict - - -def get_class(clazz): - """Get a symbol class dynamically by its fully qualified name string - representation. - - """ - parts = clazz.split('.') - module = '.'.join(parts[:-1]) - m = __import__(module) - for comp in parts[1:]: - m = getattr(m, comp) - return m - - -class BaseOrchestratorIndexer(SWHConfig): - """The indexer orchestrator is in charge of dispatching batch of - contents (filtered or not based on presence) to indexers. 
- - That dispatch is indexer specific, so the configuration reflects it: - - - when `check_presence` flag is true, filter out the - contents already present for that indexer, otherwise send - everything - - - broadcast those (filtered or not) contents to indexers in a - `batch_size` fashioned - - For example:: - - indexers: - mimetype: - batch_size: 10 - check_presence: false - language: - batch_size: 2 - check_presence: true - - means: - - - send all contents received as batch of size 10 to the 'mimetype' indexer - - send only unknown contents as batch of size 2 to the 'language' indexer. - - """ - CONFIG_BASE_FILENAME = 'indexer/orchestrator' - - # Overridable in child classes. - from . import TASK_NAMES, INDEXER_CLASSES - - DEFAULT_CONFIG = { - 'scheduler': { - 'cls': 'remote', - 'args': { - 'url': 'http://localhost:5008', - }, - }, - 'indexers': ('dict', { - 'mimetype': { - 'batch_size': 10, - 'check_presence': True, - }, - }), - } - - def __init__(self): - super().__init__() - self.config = self.parse_config_file() - self.prepare_tasks() - self.prepare_scheduler() - - def prepare_scheduler(self): - self.scheduler = get_scheduler(**self.config['scheduler']) - - def prepare_tasks(self): - indexer_names = list(self.config['indexers']) - random.shuffle(indexer_names) - indexers = {} - tasks = {} - for name in indexer_names: - if name not in self.TASK_NAMES: - raise ValueError('%s must be one of %s' % ( - name, ', '.join(self.TASK_NAMES))) - - opts = self.config['indexers'][name] - indexers[name] = ( - self.INDEXER_CLASSES[name], - opts['check_presence'], - opts['batch_size']) - tasks[name] = utils.get_task(self.TASK_NAMES[name]) - - self.indexers = indexers - self.tasks = tasks - - def run(self, ids): - for task_name, task_attrs in self.indexers.items(): - (idx_class, filtering, batch_size) = task_attrs - if filtering: - policy_update = 'ignore-dups' - indexer_class = get_class(idx_class) - ids_filtered = list(indexer_class().filter(ids)) - if not ids_filtered: - 
continue - else: - policy_update = 'update-dups' - ids_filtered = ids - - tasks = [] - for ids_to_send in grouper(ids_filtered, batch_size): - tasks.append(create_task_dict( - task_name, - 'oneshot', - ids=list(ids_to_send), - policy_update=policy_update, - )) - self._create_tasks(tasks) - - def _create_tasks(self, tasks): - self.scheduler.create_tasks(tasks) - - -class OrchestratorAllContentsIndexer(BaseOrchestratorIndexer): - """Orchestrator which deals with batch of any types of contents. - - """ - - -class OrchestratorTextContentsIndexer(BaseOrchestratorIndexer): - """Orchestrator which deals with batch of text contents. - - """ - CONFIG_BASE_FILENAME = 'indexer/orchestrator_text' diff --git a/swh/indexer/origin_head.py b/swh/indexer/origin_head.py index 9de1aa0..54123ac 100644 --- a/swh/indexer/origin_head.py +++ b/swh/indexer/origin_head.py @@ -1,217 +1,217 @@ # Copyright (C) 2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import re import click import logging from swh.scheduler import get_scheduler from swh.scheduler.utils import create_task_dict from swh.indexer.indexer import OriginIndexer class OriginHeadIndexer(OriginIndexer): """Origin-level indexer. This indexer is in charge of looking up the revision that acts as the "head" of an origin. In git, this is usually the commit pointed to by the 'master' branch.""" ADDITIONAL_CONFIG = { 'tools': ('dict', { 'name': 'origin-metadata', 'version': '0.0.1', 'configuration': {}, }), } CONFIG_BASE_FILENAME = 'indexer/origin_head' revision_metadata_task = 'revision_metadata' origin_intrinsic_metadata_task = 'origin_metadata' def filter(self, ids): yield from ids def persist_index_computations(self, results, policy_update): """Do nothing. 
The indexer's results are not persistent, they - should only be piped to another indexer via the orchestrator.""" + should only be piped to another indexer.""" pass def next_step(self, results, task): """Once the head is found, call the RevisionMetadataIndexer on these revisions, then call the OriginMetadataIndexer with both the origin_id and the revision metadata, so it can copy the revision metadata to the origin's metadata. Args: results (Iterable[dict]): Iterable of return values from `index`. """ super().next_step(results, task) if self.revision_metadata_task is None and \ self.origin_intrinsic_metadata_task is None: return assert self.revision_metadata_task is not None assert self.origin_intrinsic_metadata_task is not None # Second task to run after this one: copy the revision's metadata # to the origin sub_task = create_task_dict( self.origin_intrinsic_metadata_task, 'oneshot', origin_head={ str(result['origin_id']): result['revision_id'].decode() for result in results}, policy_update='update-dups', ) del sub_task['next_run'] # Not json-serializable # First task to run after this one: index the metadata of the # revision task = create_task_dict( self.revision_metadata_task, 'oneshot', ids=[res['revision_id'].decode() for res in results], policy_update='update-dups', next_step={ **sub_task, 'result_name': 'revisions_metadata'}, ) if getattr(self, 'scheduler', None): scheduler = self.scheduler else: scheduler = get_scheduler(**self.config['scheduler']) scheduler.create_tasks([task]) # Dispatch def index(self, origin): origin_id = origin['id'] latest_snapshot = self.storage.snapshot_get_latest(origin_id) method = getattr(self, '_try_get_%s_head' % origin['type'], None) if method is None: method = self._try_get_head_generic rev_id = method(latest_snapshot) if rev_id is None: return None result = { 'origin_id': origin_id, 'revision_id': rev_id, } return result # VCSs def _try_get_vcs_head(self, snapshot): try: if isinstance(snapshot, dict): branches = 
snapshot['branches'] if branches[b'HEAD']['target_type'] == 'revision': return branches[b'HEAD']['target'] except KeyError: return None _try_get_hg_head = _try_get_git_head = _try_get_vcs_head # Tarballs _archive_filename_re = re.compile( rb'^' rb'(?P.*)[-_]' rb'(?P[0-9]+(\.[0-9])*)' rb'(?P[-+][a-zA-Z0-9.~]+?)?' rb'(?P(\.[a-zA-Z0-9]+)+)' rb'$') @classmethod def _parse_version(cls, filename): """Extracts the release version from an archive filename, to get an ordering whose maximum is likely to be the last version of the software >>> OriginHeadIndexer._parse_version(b'foo') (-inf,) >>> OriginHeadIndexer._parse_version(b'foo.tar.gz') (-inf,) >>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1.tar.gz') (0, 0, 1, 0) >>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1-beta2.tar.gz') (0, 0, 1, -1, 'beta2') >>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1+foobar.tar.gz') (0, 0, 1, 1, 'foobar') """ res = cls._archive_filename_re.match(filename) if res is None: return (float('-infinity'),) version = [int(n) for n in res.group('version').decode().split('.')] if res.group('preversion') is None: version.append(0) else: preversion = res.group('preversion').decode() if preversion.startswith('-'): version.append(-1) version.append(preversion[1:]) elif preversion.startswith('+'): version.append(1) version.append(preversion[1:]) else: assert False, res.group('preversion') return tuple(version) def _try_get_ftp_head(self, snapshot): archive_names = list(snapshot['branches']) max_archive_name = max(archive_names, key=self._parse_version) r = self._try_resolve_target(snapshot['branches'], max_archive_name) return r # Generic def _try_get_head_generic(self, snapshot): # Works on 'deposit', 'svn', and 'pypi'. 
try: if isinstance(snapshot, dict): branches = snapshot['branches'] except KeyError: return None else: return ( self._try_resolve_target(branches, b'HEAD') or self._try_resolve_target(branches, b'master') ) def _try_resolve_target(self, branches, target_name): try: target = branches[target_name] while target['target_type'] == 'alias': target = branches[target['target']] if target['target_type'] == 'revision': return target['target'] elif target['target_type'] == 'content': return None # TODO elif target['target_type'] == 'directory': return None # TODO elif target['target_type'] == 'release': return None # TODO else: assert False except KeyError: return None @click.command() @click.option('--origins', '-i', help='Origins to lookup, in the "type+url" format', multiple=True) def main(origins): rev_metadata_indexer = OriginHeadIndexer() rev_metadata_indexer.run(origins, 'update-dups', parse_ids=True) if __name__ == '__main__': logging.basicConfig(level=logging.INFO) main() diff --git a/swh/indexer/producer.py b/swh/indexer/producer.py deleted file mode 100755 index 87eff04..0000000 --- a/swh/indexer/producer.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (C) 2016-2017 The Software Heritage developers -# See the AUTHORS file at the top-level directory of this distribution -# License: GNU General Public License version 3, or any later version -# See top-level LICENSE file for more information - -import click -import random -import sys - -from swh.core import utils -from swh.model import hashutil -from swh.scheduler.utils import get_task - - -def read_from_stdin(): - for sha1 in sys.stdin: - yield hashutil.hash_to_bytes(sha1.strip()) - - -def gen_sha1(batch, dict_with_key=None): - """Generate batch of grouped sha1s from the objstorage. 
- - """ - def _gen(): - for ids in utils.grouper(read_from_stdin(), batch): - ids = list(ids) - random.shuffle(ids) - yield ids - - if dict_with_key: - for ids in _gen(): - yield [{dict_with_key: sha1} for sha1 in ids] - else: - yield from _gen() - - -def make_function_execute(task, sync): - """Compute a function which executes computations on sha1s - synchronously or asynchronously. - - """ - if sync: - return (lambda ids, task=task: task(ids)) - return (lambda ids, task=task: task.delay(ids)) - - -def run_with_limit(task, limit, batch, dict_with_key=None, sync=False): - execute_fn = make_function_execute(task, sync) - - count = 0 - for ids in gen_sha1(batch, dict_with_key): - count += len(ids) - execute_fn(ids) - print('%s sent - [%s, ...]' % (len(ids), ids[0])) - if count >= limit: - return - - -def run(task, batch, dict_with_key=None, sync=False): - execute_fn = make_function_execute(task, sync) - - for ids in gen_sha1(batch, dict_with_key): - execute_fn(ids) - print('%s sent - [%s, ...]' % (len(ids), ids[0])) - - -@click.command(help='Read sha1 from stdin and send them for indexing') -@click.option('--limit', default=None, help='Limit the number of data to read') -@click.option('--batch', default='10', help='Group data by batch') -@click.option('--task-name', default='orchestrator_all', help='Task\'s name') -@click.option('--sync/--nosync', default=False, - help='Make the producer actually execute the routine.') -@click.option('--dict-with-key', default=None) -def main(limit, batch, task_name, sync, dict_with_key): - """Read sha1 from stdin and send them for indexing. - - By default, send directly list of hashes. Using the - --dict-with-key, this will send dict list with one key mentioned - as parameter to the dict-with-key flag. - - """ - batch = int(batch) - - from . 
import tasks, TASK_NAMES # noqa - possible_tasks = TASK_NAMES.keys() - - if task_name not in possible_tasks: - print('The task_name can only be one of %s' % - ', '.join(possible_tasks)) - return - - task = get_task(TASK_NAMES[task_name]) - - if limit: - run_with_limit(task, int(limit), batch, - dict_with_key=dict_with_key, sync=sync) - else: - run(task, batch, - dict_with_key=dict_with_key, sync=sync) - - -if __name__ == '__main__': - main() diff --git a/swh/indexer/storage/__init__.py b/swh/indexer/storage/__init__.py index 78a2791..b241f86 100644 --- a/swh/indexer/storage/__init__.py +++ b/swh/indexer/storage/__init__.py @@ -1,608 +1,715 @@ # Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import json import psycopg2 from collections import defaultdict from swh.core.api import remote_api_endpoint from swh.storage.common import db_transaction_generator, db_transaction from swh.storage.exc import StorageDBError from .db import Db from . import converters INDEXER_CFG_KEY = 'indexer_storage' def get_indexer_storage(cls, args): """Get an indexer storage object of class `storage_class` with arguments `storage_args`. Args: - args (dict): dictionary with keys: - - cls (str): storage's class, either 'local' or 'remote' - - args (dict): dictionary with keys + cls (str): storage's class, either 'local' or 'remote' + args (dict): dictionary of arguments passed to the + storage class constructor Returns: an instance of swh.indexer's storage (either local or remote) Raises: ValueError if passed an unknown storage class. """ if cls == 'remote': from .api.client import RemoteStorage as IndexerStorage elif cls == 'local': from . 
import IndexerStorage else: raise ValueError('Unknown indexer storage class `%s`' % cls) return IndexerStorage(**args) class IndexerStorage: """SWH Indexer Storage """ def __init__(self, db, min_pool_conns=1, max_pool_conns=10): """ Args: db_conn: either a libpq connection string, or a psycopg2 connection """ try: if isinstance(db, psycopg2.extensions.connection): self._pool = None self._db = Db(db) else: self._pool = psycopg2.pool.ThreadedConnectionPool( min_pool_conns, max_pool_conns, db ) self._db = None except psycopg2.OperationalError as e: raise StorageDBError(e) def get_db(self): if self._db: return self._db return Db.from_pool(self._pool) @remote_api_endpoint('check_config') def check_config(self, *, check_write): """Check that the storage is configured and ready to go.""" # Check permissions on one of the tables with self.get_db().transaction() as cur: if check_write: check = 'INSERT' else: check = 'SELECT' cur.execute( "select has_table_privilege(current_user, 'content_mimetype', %s)", # noqa (check,) ) return cur.fetchone()[0] return True @remote_api_endpoint('content_mimetype/missing') @db_transaction_generator() def content_mimetype_missing(self, mimetypes, db=None, cur=None): - """List mimetypes missing from storage. + """Generate mimetypes missing from storage. Args: mimetypes (iterable): iterable of dict with keys: - id (bytes): sha1 identifier - indexer_configuration_id (int): tool used to compute - the results + - **id** (bytes): sha1 identifier + - **indexer_configuration_id** (int): tool used to compute the + results Yields: - an iterable of missing id for the tuple (id, - indexer_configuration_id) + tuple (id, indexer_configuration_id): missing id """ for obj in db.content_mimetype_missing_from_list(mimetypes, cur): yield obj[0] + def _content_get_range(self, content_type, start, end, + indexer_configuration_id, limit=1000, + db=None, cur=None): + """Retrieve ids of type content_type within range [start, end] bound + by limit. 
+ + Args: + **content_type** (str): content's type (mimetype, language, etc...) + **start** (bytes): Starting identifier range (expected smaller + than end) + **end** (bytes): Ending identifier range (expected larger + than start) + **indexer_configuration_id** (int): The tool used to index data + **limit** (int): Limit result (default to 1000) + + Raises: + ValueError if: + - limit is None + - an unknown content_type is provided + + Returns: + a dict with keys: + - **ids** [bytes]: iterable of content ids within the range. + - **next** (Optional[bytes]): The next range of sha1 starts at + this sha1 if any + + """ + if limit is None: + raise ValueError('Development error: limit should not be None') + if content_type not in db.content_indexer_names: + err = 'Development error: Wrong type. Should be one of [%s]' % ( + ','.join(db.content_indexer_names)) + raise ValueError(err) + + ids = [] + next_id = None + for counter, obj in enumerate(db.content_get_range( + content_type, start, end, indexer_configuration_id, + limit=limit+1, cur=cur)): + _id = obj[0] + if counter >= limit: + next_id = _id + break + + ids.append(_id) + + return { + 'ids': ids, + 'next': next_id + } + + @remote_api_endpoint('content_mimetype/range') + @db_transaction() + def content_mimetype_get_range(self, start, end, indexer_configuration_id, + limit=1000, db=None, cur=None): + """Retrieve mimetypes within range [start, end] bound by limit. + + Args: + **start** (bytes): Starting identifier range (expected smaller + than end) + **end** (bytes): Ending identifier range (expected larger + than start) + **indexer_configuration_id** (int): The tool used to index data + **limit** (int): Limit result (default to 1000) + + Raises: + ValueError if limit is None + + Returns: + a dict with keys: + - **ids** [bytes]: iterable of content ids within the range. 
+ - **next** (Optional[bytes]): The next range of sha1 starts at + this sha1 if any + + """ + return self._content_get_range('mimetype', start, end, + indexer_configuration_id, limit=limit, + db=db, cur=cur) + @remote_api_endpoint('content_mimetype/add') @db_transaction() def content_mimetype_add(self, mimetypes, conflict_update=False, db=None, cur=None): """Add mimetypes not present in storage. Args: mimetypes (iterable): dictionaries with keys: - id (bytes): sha1 identifier - mimetype (bytes): raw content's mimetype - encoding (bytes): raw content's encoding - indexer_configuration_id (int): tool's id used to - compute the results - conflict_update (bool): Flag to determine if we want to - overwrite (true) or skip duplicates - (false, the default) + - **id** (bytes): sha1 identifier + - **mimetype** (bytes): raw content's mimetype + - **encoding** (bytes): raw content's encoding + - **indexer_configuration_id** (int): tool's id used to + compute the results + conflict_update (bool): Flag to determine if we want to + overwrite (``True``) or skip duplicates (``False``, the + default) """ db.mktemp_content_mimetype(cur) db.copy_to(mimetypes, 'tmp_content_mimetype', ['id', 'mimetype', 'encoding', 'indexer_configuration_id'], cur) db.content_mimetype_add_from_temp(conflict_update, cur) @remote_api_endpoint('content_mimetype') @db_transaction_generator() def content_mimetype_get(self, ids, db=None, cur=None): """Retrieve full content mimetype per ids. 
Args: ids (iterable): sha1 identifier Yields: mimetypes (iterable): dictionaries with keys: - id (bytes): sha1 identifier - mimetype (bytes): raw content's mimetype - encoding (bytes): raw content's encoding - tool (dict): Tool used to compute the language + - **id** (bytes): sha1 identifier + - **mimetype** (bytes): raw content's mimetype + - **encoding** (bytes): raw content's encoding + - **tool** (dict): Tool used to compute the language """ for c in db.content_mimetype_get_from_list(ids, cur): yield converters.db_to_mimetype( dict(zip(db.content_mimetype_cols, c))) @remote_api_endpoint('content_language/missing') @db_transaction_generator() def content_language_missing(self, languages, db=None, cur=None): """List languages missing from storage. Args: languages (iterable): dictionaries with keys: - id (bytes): sha1 identifier - indexer_configuration_id (int): tool used to compute - the results + - **id** (bytes): sha1 identifier + - **indexer_configuration_id** (int): tool used to compute + the results Yields: an iterable of missing id for the tuple (id, indexer_configuration_id) """ for obj in db.content_language_missing_from_list(languages, cur): yield obj[0] @remote_api_endpoint('content_language') @db_transaction_generator() def content_language_get(self, ids, db=None, cur=None): """Retrieve full content language per ids. 
Args: ids (iterable): sha1 identifier Yields: languages (iterable): dictionaries with keys: - id (bytes): sha1 identifier - lang (bytes): raw content's language - tool (dict): Tool used to compute the language + - **id** (bytes): sha1 identifier + - **lang** (bytes): raw content's language + - **tool** (dict): Tool used to compute the language """ for c in db.content_language_get_from_list(ids, cur): yield converters.db_to_language( dict(zip(db.content_language_cols, c))) @remote_api_endpoint('content_language/add') @db_transaction() def content_language_add(self, languages, conflict_update=False, db=None, cur=None): """Add languages not present in storage. Args: languages (iterable): dictionaries with keys: - id (bytes): sha1 - lang (bytes): language detected + - **id** (bytes): sha1 + - **lang** (bytes): language detected conflict_update (bool): Flag to determine if we want to overwrite (true) or skip duplicates (false, the default) """ db.mktemp_content_language(cur) # empty language is mapped to 'unknown' db.copy_to( ({ 'id': l['id'], 'lang': 'unknown' if not l['lang'] else l['lang'], 'indexer_configuration_id': l['indexer_configuration_id'], } for l in languages), 'tmp_content_language', ['id', 'lang', 'indexer_configuration_id'], cur) db.content_language_add_from_temp(conflict_update, cur) @remote_api_endpoint('content/ctags/missing') @db_transaction_generator() def content_ctags_missing(self, ctags, db=None, cur=None): """List ctags missing from storage. 
Args: ctags (iterable): dicts with keys: - id (bytes): sha1 identifier - indexer_configuration_id (int): tool used to compute - the results + - **id** (bytes): sha1 identifier + - **indexer_configuration_id** (int): tool used to compute + the results Yields: an iterable of missing id for the tuple (id, indexer_configuration_id) """ for obj in db.content_ctags_missing_from_list(ctags, cur): yield obj[0] @remote_api_endpoint('content/ctags') @db_transaction_generator() def content_ctags_get(self, ids, db=None, cur=None): """Retrieve ctags per id. Args: ids (iterable): sha1 checksums Yields: Dictionaries with keys: - id (bytes): content's identifier - name (str): symbol's name - kind (str): symbol's kind - language (str): language for that content - tool (dict): tool used to compute the ctags' info + - **id** (bytes): content's identifier + - **name** (str): symbol's name + - **kind** (str): symbol's kind + - **language** (str): language for that content + - **tool** (dict): tool used to compute the ctags' info """ for c in db.content_ctags_get_from_list(ids, cur): yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, c))) @remote_api_endpoint('content/ctags/add') @db_transaction() def content_ctags_add(self, ctags, conflict_update=False, db=None, cur=None): """Add ctags not present in storage Args: ctags (iterable): dictionaries with keys: - id (bytes): sha1 - ctags ([list): List of dictionary with keys: name, kind, - line, language + - **id** (bytes): sha1 + - **ctags** (list): List of dictionaries with keys: name, kind, + line, language """ def _convert_ctags(__ctags): """Convert ctags dict to list of ctags. 
""" for ctags in __ctags: yield from converters.ctags_to_db(ctags) db.mktemp_content_ctags(cur) db.copy_to(list(_convert_ctags(ctags)), tblname='tmp_content_ctags', columns=['id', 'name', 'kind', 'line', 'lang', 'indexer_configuration_id'], cur=cur) db.content_ctags_add_from_temp(conflict_update, cur) @remote_api_endpoint('content/ctags/search') @db_transaction_generator() def content_ctags_search(self, expression, limit=10, last_sha1=None, db=None, cur=None): """Search through content's raw ctags symbols. Args: expression (str): Expression to search for limit (int): Number of rows to return (default to 10). last_sha1 (str): Offset from which retrieving data (default to ''). Yields: rows of ctags including id, name, lang, kind, line, etc... """ for obj in db.content_ctags_search(expression, last_sha1, limit, cur=cur): yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, obj))) @remote_api_endpoint('content/fossology_license') @db_transaction_generator() def content_fossology_license_get(self, ids, db=None, cur=None): """Retrieve licenses per id. Args: ids (iterable): sha1 checksums Yields: list: dictionaries with the following keys: - id (bytes) - licenses ([str]): associated licenses for that content - tool (dict): Tool used to compute the license + - **id** (bytes) + - **licenses** ([str]): associated licenses for that content + - **tool** (dict): Tool used to compute the license """ d = defaultdict(list) for c in db.content_fossology_license_get_from_list(ids, cur): license = dict(zip(db.content_fossology_license_cols, c)) id_ = license['id'] d[id_].append(converters.db_to_fossology_license(license)) for id_, facts in d.items(): yield {id_: facts} @remote_api_endpoint('content/fossology_license/add') @db_transaction() def content_fossology_license_add(self, licenses, conflict_update=False, db=None, cur=None): """Add licenses not present in storage. 
Args: licenses (iterable): dictionaries with keys: - - id: sha1 - - license ([bytes]): List of licenses associated to sha1 - - tool (str): nomossa + - **id**: sha1 + - **license** ([bytes]): List of licenses associated to sha1 + - **tool** (str): nomossa conflict_update: Flag to determine if we want to overwrite (true) or skip duplicates (false, the default) Returns: list: content_license entries which failed due to unknown licenses """ # Then, we add the correct ones db.mktemp_content_fossology_license(cur) db.copy_to( ({ 'id': sha1['id'], 'indexer_configuration_id': sha1['indexer_configuration_id'], 'license': license, } for sha1 in licenses for license in sha1['licenses']), tblname='tmp_content_fossology_license', columns=['id', 'license', 'indexer_configuration_id'], cur=cur) db.content_fossology_license_add_from_temp(conflict_update, cur) + @remote_api_endpoint('content/fossology_license/range') + @db_transaction() + def content_fossology_license_get_range( + self, start, end, indexer_configuration_id, + limit=1000, db=None, cur=None): + """Retrieve licenses within range [start, end] bound by limit. + + Args: + **start** (bytes): Starting identifier range (expected smaller + than end) + **end** (bytes): Ending identifier range (expected larger + than start) + **indexer_configuration_id** (int): The tool used to index data + **limit** (int): Limit result (default to 1000) + + Raises: + ValueError if limit is None + + Returns: + a dict with keys: + - **ids** [bytes]: iterable of content ids within the range. + - **next** (Optional[bytes]): The next range of sha1 starts at + this sha1 if any + + """ + return self._content_get_range('fossology_license', start, end, + indexer_configuration_id, limit=limit, + db=db, cur=cur) + @remote_api_endpoint('content_metadata/missing') @db_transaction_generator() def content_metadata_missing(self, metadata, db=None, cur=None): """List metadata missing from storage. 
Args: metadata (iterable): dictionaries with keys: - id (bytes): sha1 identifier - indexer_configuration_id (int): tool used to compute - the results + - **id** (bytes): sha1 identifier + - **indexer_configuration_id** (int): tool used to compute + the results Yields: an iterable of missing id for the tuple (id, indexer_configuration_id) """ for obj in db.content_metadata_missing_from_list(metadata, cur): yield obj[0] @remote_api_endpoint('content_metadata') @db_transaction_generator() def content_metadata_get(self, ids, db=None, cur=None): """Retrieve metadata per id. Args: ids (iterable): sha1 checksums Yields: list: dictionaries with the following keys: id (bytes) translated_metadata (str): associated metadata tool (dict): tool used to compute metadata """ for c in db.content_metadata_get_from_list(ids, cur): yield converters.db_to_metadata( dict(zip(db.content_metadata_cols, c))) @remote_api_endpoint('content_metadata/add') @db_transaction() def content_metadata_add(self, metadata, conflict_update=False, db=None, cur=None): """Add metadata not present in storage. Args: metadata (iterable): dictionaries with keys: - id: sha1 - translated_metadata: arbitrary dict + - **id**: sha1 + - **translated_metadata**: arbitrary dict conflict_update: Flag to determine if we want to overwrite (true) or skip duplicates (false, the default) """ db.mktemp_content_metadata(cur) db.copy_to(metadata, 'tmp_content_metadata', ['id', 'translated_metadata', 'indexer_configuration_id'], cur) db.content_metadata_add_from_temp(conflict_update, cur) @remote_api_endpoint('revision_metadata/missing') @db_transaction_generator() def revision_metadata_missing(self, metadata, db=None, cur=None): """List metadata missing from storage. 
Args: metadata (iterable): dictionaries with keys: - id (bytes): sha1_git revision identifier - indexer_configuration_id (int): tool used to compute - the results + - **id** (bytes): sha1_git revision identifier + - **indexer_configuration_id** (int): tool used to compute + the results Returns: iterable: missing ids """ for obj in db.revision_metadata_missing_from_list(metadata, cur): yield obj[0] @remote_api_endpoint('revision_metadata') @db_transaction_generator() def revision_metadata_get(self, ids, db=None, cur=None): """Retrieve revision metadata per id. Args: ids (iterable): sha1 checksums Yields: list: dictionaries with the following keys: - id (bytes) - translated_metadata (str): associated metadata - tool (dict): tool used to compute metadata + - **id** (bytes) + - **translated_metadata** (str): associated metadata + - **tool** (dict): tool used to compute metadata """ for c in db.revision_metadata_get_from_list(ids, cur): yield converters.db_to_metadata( dict(zip(db.revision_metadata_cols, c))) @remote_api_endpoint('revision_metadata/add') @db_transaction() def revision_metadata_add(self, metadata, conflict_update=False, db=None, cur=None): """Add metadata not present in storage. Args: metadata (iterable): dictionaries with keys: - - id: sha1_git of revision - - translated_metadata: arbitrary dict + - **id**: sha1_git of revision + - **translated_metadata**: arbitrary dict conflict_update: Flag to determine if we want to overwrite (true) or skip duplicates (false, the default) """ db.mktemp_revision_metadata(cur) db.copy_to(metadata, 'tmp_revision_metadata', ['id', 'translated_metadata', 'indexer_configuration_id'], cur) db.revision_metadata_add_from_temp(conflict_update, cur) @remote_api_endpoint('origin_intrinsic_metadata') @db_transaction_generator() def origin_intrinsic_metadata_get(self, ids, db=None, cur=None): """Retrieve origin metadata per id. 
Args: ids (iterable): origin identifiers Yields: list: dictionaries with the following keys: - id (int) - translated_metadata (str): associated metadata - tool (dict): tool used to compute metadata + - **id** (int) + - **translated_metadata** (str): associated metadata + - **tool** (dict): tool used to compute metadata """ for c in db.origin_intrinsic_metadata_get_from_list(ids, cur): yield converters.db_to_metadata( dict(zip(db.origin_intrinsic_metadata_cols, c))) @remote_api_endpoint('origin_intrinsic_metadata/add') @db_transaction() def origin_intrinsic_metadata_add(self, metadata, conflict_update=False, db=None, cur=None): """Add origin metadata not present in storage. Args: metadata (iterable): dictionaries with keys: - - origin_id: origin identifier - - from_revision: sha1 id of the revision used to generate - these metadata. - - metadata: arbitrary dict + - **origin_id**: origin identifier + - **from_revision**: sha1 id of the revision used to generate + these metadata. + - **metadata**: arbitrary dict conflict_update: Flag to determine if we want to overwrite (true) or skip duplicates (false, the default) """ db.mktemp_origin_intrinsic_metadata(cur) db.copy_to(metadata, 'tmp_origin_intrinsic_metadata', ['origin_id', 'metadata', 'indexer_configuration_id', 'from_revision'], cur) db.origin_intrinsic_metadata_add_from_temp(conflict_update, cur) @remote_api_endpoint('indexer_configuration/add') @db_transaction_generator() def indexer_configuration_add(self, tools, db=None, cur=None): """Add new tools to the storage. Args: tools ([dict]): List of dictionary representing tool to - insert in the db. Dictionary with the following keys:: + insert in the db. 
Dictionary with the following keys: - tool_name (str): tool's name - tool_version (str): tool's version - tool_configuration (dict): tool's configuration (free form - dict) + - **tool_name** (str): tool's name + - **tool_version** (str): tool's version + - **tool_configuration** (dict): tool's configuration + (free form dict) Returns: List of dict inserted in the db (holding the id key as well). The order of the list is not guaranteed to match the order of the initial list. """ db.mktemp_indexer_configuration(cur) db.copy_to(tools, 'tmp_indexer_configuration', ['tool_name', 'tool_version', 'tool_configuration'], cur) tools = db.indexer_configuration_add_from_temp(cur) for line in tools: yield dict(zip(db.indexer_configuration_cols, line)) @remote_api_endpoint('indexer_configuration/data') @db_transaction() def indexer_configuration_get(self, tool, db=None, cur=None): """Retrieve tool information. Args: tool (dict): Dictionary representing a tool with the - following keys:: + following keys: - tool_name (str): tool's name - tool_version (str): tool's version - tool_configuration (dict): tool's configuration (free form - dict) + - **tool_name** (str): tool's name + - **tool_version** (str): tool's version + - **tool_configuration** (dict): tool's configuration + (free form dict) Returns: The identifier of the tool if it exists, None otherwise. 
""" tool_conf = tool['tool_configuration'] if isinstance(tool_conf, dict): tool_conf = json.dumps(tool_conf) idx = db.indexer_configuration_get(tool['tool_name'], tool['tool_version'], tool_conf) if not idx: return None return dict(zip(db.indexer_configuration_cols, idx)) diff --git a/swh/indexer/storage/api/server.py b/swh/indexer/storage/api/server.py index 912fccc..14a358a 100644 --- a/swh/indexer/storage/api/server.py +++ b/swh/indexer/storage/api/server.py @@ -1,75 +1,75 @@ # Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import logging import click from swh.core import config from swh.core.api import (SWHServerAPIApp, error_handler, encode_data_server as encode_data) -from swh.indexer.storage import get_indexer_storage, INDEXER_CFG_KEY - -from .. import IndexerStorage +from swh.indexer.storage import ( + get_indexer_storage, INDEXER_CFG_KEY, IndexerStorage +) DEFAULT_CONFIG_PATH = 'storage/indexer' DEFAULT_CONFIG = { INDEXER_CFG_KEY: ('dict', { 'cls': 'local', 'args': { 'db': 'dbname=softwareheritage-indexer-dev', }, }) } def get_storage(): global storage if not storage: storage = get_indexer_storage(**app.config[INDEXER_CFG_KEY]) return storage app = SWHServerAPIApp(__name__, backend_class=IndexerStorage, backend_factory=get_storage) storage = None @app.errorhandler(Exception) def my_error_handler(exception): return error_handler(exception, encode_data) @app.route('/') def index(): return 'SWH Indexer Storage API server' def run_from_webserver(environ, start_response, config_path=DEFAULT_CONFIG_PATH): """Run the WSGI app from the webserver, loading the configuration.""" cfg = config.load_named_config(config_path, DEFAULT_CONFIG) app.config.update(cfg) handler = logging.StreamHandler() app.logger.addHandler(handler) return app(environ, start_response) 
@click.command() @click.option('--host', default='0.0.0.0', help="Host to run the server") @click.option('--port', default=5007, type=click.INT, help="Binding port of the server") @click.option('--debug/--nodebug', default=True, help="Indicates if the server should run in debug mode") def launch(host, port, debug): cfg = config.load_named_config(DEFAULT_CONFIG_PATH, DEFAULT_CONFIG) app.config.update(cfg) app.run(host, port=int(port), debug=bool(debug)) if __name__ == '__main__': launch() diff --git a/swh/indexer/storage/db.py b/swh/indexer/storage/db.py index 48b9b61..c04fbd7 100644 --- a/swh/indexer/storage/db.py +++ b/swh/indexer/storage/db.py @@ -1,334 +1,359 @@ # Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from swh.model import hashutil from swh.storage.db import BaseDb, stored_procedure, cursor_to_bytes from swh.storage.db import line_to_bytes, execute_values_to_bytes class Db(BaseDb): """Proxy to the SWH Indexer DB, with wrappers around stored procedures """ content_mimetype_hash_keys = ['id', 'indexer_configuration_id'] def _missing_from_list(self, table, data, hash_keys, cur=None): """Read from table the data with hash_keys that are missing. Args: table (str): Table name (e.g content_mimetype, content_language, etc...) data (dict): Dict of data to read from hash_keys ([str]): List of keys to read in the data dict. Yields: The data which is missing from the db. 
""" cur = self._cursor(cur) keys = ', '.join(hash_keys) equality = ' AND '.join( ('t.%s = c.%s' % (key, key)) for key in hash_keys ) yield from execute_values_to_bytes( cur, """ select %s from (values %%s) as t(%s) where not exists ( select 1 from %s c where %s ) """ % (keys, keys, table, equality), (tuple(m[k] for k in hash_keys) for m in data) ) def content_mimetype_missing_from_list(self, mimetypes, cur=None): """List missing mimetypes. """ yield from self._missing_from_list( 'content_mimetype', mimetypes, self.content_mimetype_hash_keys, cur=cur) content_mimetype_cols = [ 'id', 'mimetype', 'encoding', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_mimetype') def mktemp_content_mimetype(self, cur=None): pass def content_mimetype_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_mimetype_add(%s)", (conflict_update, )) def _convert_key(self, key, main_table='c'): """Convert keys according to specific use in the module. Args: key (str): Key expression to change according to the alias used in the query main_table (str): Alias to use for the main table. Default to c for content_{something}. Expected: Tables content_{something} being aliased as 'c' (something in {language, mimetype, ...}), table indexer_configuration being aliased as 'i'. """ if key == 'id': return '%s.id' % main_table elif key == 'tool_id': return 'i.id as tool_id' elif key == 'licenses': return ''' array(select name from fossology_license where id = ANY( array_agg(%s.license_id))) as licenses''' % main_table return key def _get_from_list(self, table, ids, cols, cur=None, id_col='id'): """Fetches entries from the `table` such that their `id` field (or whatever is given to `id_col`) is in `ids`. Returns the columns `cols`. The `cur`sor is used to connect to the database. 
""" cur = self._cursor(cur) keys = map(self._convert_key, cols) query = """ select {keys} from (values %s) as t(id) inner join {table} c on c.{id_col}=t.id inner join indexer_configuration i on c.indexer_configuration_id=i.id; """.format( keys=', '.join(keys), id_col=id_col, table=table) yield from execute_values_to_bytes( cur, query, ((_id,) for _id in ids) ) + content_indexer_names = { + 'mimetype': 'content_mimetype', + 'fossology_license': 'content_fossology_license', + } + + def content_get_range(self, content_type, start, end, + indexer_configuration_id, limit=1000, cur=None): + """Retrieve contents with content_type, within range [start, end] + bound by limit and associated to the given indexer + configuration id. + + """ + cur = self._cursor(cur) + table = self.content_indexer_names[content_type] + query = """select t.id + from %s t + inner join indexer_configuration ic + on t.indexer_configuration_id=ic.id + where ic.id=%%s and + %%s <= t.id and t.id <= %%s + order by t.indexer_configuration_id, t.id + limit %%s""" % table + cur.execute(query, (indexer_configuration_id, start, end, limit)) + yield from cursor_to_bytes(cur) + def content_mimetype_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'content_mimetype', ids, self.content_mimetype_cols, cur=cur) content_language_hash_keys = ['id', 'indexer_configuration_id'] def content_language_missing_from_list(self, languages, cur=None): """List missing languages. 
""" yield from self._missing_from_list( 'content_language', languages, self.content_language_hash_keys, cur=cur) content_language_cols = [ 'id', 'lang', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_language') def mktemp_content_language(self, cur=None): pass def content_language_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_language_add(%s)", (conflict_update, )) def content_language_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'content_language', ids, self.content_language_cols, cur=cur) content_ctags_hash_keys = ['id', 'indexer_configuration_id'] def content_ctags_missing_from_list(self, ctags, cur=None): """List missing ctags. """ yield from self._missing_from_list( 'content_ctags', ctags, self.content_ctags_hash_keys, cur=cur) content_ctags_cols = [ 'id', 'name', 'kind', 'line', 'lang', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_ctags') def mktemp_content_ctags(self, cur=None): pass def content_ctags_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_ctags_add(%s)", (conflict_update, )) def content_ctags_get_from_list(self, ids, cur=None): cur = self._cursor(cur) keys = map(self._convert_key, self.content_ctags_cols) yield from execute_values_to_bytes( cur, """ select %s from (values %%s) as t(id) inner join content_ctags c on c.id=t.id inner join indexer_configuration i on c.indexer_configuration_id=i.id order by line """ % ', '.join(keys), ((_id,) for _id in ids) ) def content_ctags_search(self, expression, last_sha1, limit, cur=None): cur = self._cursor(cur) if not last_sha1: query = """SELECT %s FROM swh_content_ctags_search(%%s, %%s)""" % ( ','.join(self.content_ctags_cols)) cur.execute(query, (expression, limit)) else: if last_sha1 and isinstance(last_sha1, bytes): last_sha1 = '\\x%s' % hashutil.hash_to_hex(last_sha1) elif last_sha1: 
last_sha1 = '\\x%s' % last_sha1 query = """SELECT %s FROM swh_content_ctags_search(%%s, %%s, %%s)""" % ( ','.join(self.content_ctags_cols)) cur.execute(query, (expression, limit, last_sha1)) yield from cursor_to_bytes(cur) content_fossology_license_cols = [ 'id', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration', 'licenses'] @stored_procedure('swh_mktemp_content_fossology_license') def mktemp_content_fossology_license(self, cur=None): pass def content_fossology_license_add_from_temp(self, conflict_update, cur=None): """Add new licenses per content. """ self._cursor(cur).execute( "SELECT swh_content_fossology_license_add(%s)", (conflict_update, )) def content_fossology_license_get_from_list(self, ids, cur=None): """Retrieve licenses per id. """ cur = self._cursor(cur) keys = map(self._convert_key, self.content_fossology_license_cols) yield from execute_values_to_bytes( cur, """ select %s from (values %%s) as t(id) inner join content_fossology_license c on t.id=c.id inner join indexer_configuration i on i.id=c.indexer_configuration_id group by c.id, i.id, i.tool_name, i.tool_version, i.tool_configuration; """ % ', '.join(keys), ((_id,) for _id in ids) ) content_metadata_hash_keys = ['id', 'indexer_configuration_id'] def content_metadata_missing_from_list(self, metadata, cur=None): """List missing metadata. 
""" yield from self._missing_from_list( 'content_metadata', metadata, self.content_metadata_hash_keys, cur=cur) content_metadata_cols = [ 'id', 'translated_metadata', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_content_metadata') def mktemp_content_metadata(self, cur=None): pass def content_metadata_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_content_metadata_add(%s)", (conflict_update, )) def content_metadata_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'content_metadata', ids, self.content_metadata_cols, cur=cur) revision_metadata_hash_keys = ['id', 'indexer_configuration_id'] def revision_metadata_missing_from_list(self, metadata, cur=None): """List missing metadata. """ yield from self._missing_from_list( 'revision_metadata', metadata, self.revision_metadata_hash_keys, cur=cur) revision_metadata_cols = [ 'id', 'translated_metadata', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_revision_metadata') def mktemp_revision_metadata(self, cur=None): pass def revision_metadata_add_from_temp(self, conflict_update, cur=None): self._cursor(cur).execute("SELECT swh_revision_metadata_add(%s)", (conflict_update, )) def revision_metadata_get_from_list(self, ids, cur=None): yield from self._get_from_list( 'revision_metadata', ids, self.revision_metadata_cols, cur=cur) origin_intrinsic_metadata_cols = [ 'origin_id', 'metadata', 'from_revision', 'tool_id', 'tool_name', 'tool_version', 'tool_configuration'] @stored_procedure('swh_mktemp_origin_intrinsic_metadata') def mktemp_origin_intrinsic_metadata(self, cur=None): pass def origin_intrinsic_metadata_add_from_temp( self, conflict_update, cur=None): cur = self._cursor(cur) cur.execute( "SELECT swh_origin_intrinsic_metadata_add(%s)", (conflict_update, )) def origin_intrinsic_metadata_get_from_list(self, orig_ids, cur=None): yield from self._get_from_list( 
            'origin_intrinsic_metadata', orig_ids,
            self.origin_intrinsic_metadata_cols, cur=cur,
            id_col='origin_id')

    indexer_configuration_cols = ['id', 'tool_name', 'tool_version',
                                  'tool_configuration']

    @stored_procedure('swh_mktemp_indexer_configuration')
    def mktemp_indexer_configuration(self, cur=None):
        pass

    def indexer_configuration_add_from_temp(self, cur=None):
        cur = self._cursor(cur)
        cur.execute("SELECT %s from swh_indexer_configuration_add()" % (
            ','.join(self.indexer_configuration_cols), ))
        yield from cursor_to_bytes(cur)

    def indexer_configuration_get(self, tool_name,
                                  tool_version, tool_configuration, cur=None):
        cur = self._cursor(cur)
        cur.execute('''select %s
                       from indexer_configuration
                       where tool_name=%%s and
                             tool_version=%%s and
                             tool_configuration=%%s''' % (
                                 ','.join(self.indexer_configuration_cols)),
                    (tool_name, tool_version, tool_configuration))
        data = cur.fetchone()
        if not data:
            return None
        return line_to_bytes(data)
diff --git a/swh/indexer/tasks.py b/swh/indexer/tasks.py
index 2c8d4a4..92bf3ca 100644
--- a/swh/indexer/tasks.py
+++ b/swh/indexer/tasks.py
@@ -1,107 +1,110 @@
# Copyright (C) 2016-2018 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import logging

from swh.scheduler.task import Task as SchedulerTask

-from .orchestrator import OrchestratorAllContentsIndexer
-from .orchestrator import OrchestratorTextContentsIndexer
-from .mimetype import ContentMimetypeIndexer
+from .mimetype import ContentMimetypeIndexer, MimetypeRangeIndexer
from .language import ContentLanguageIndexer
from .ctags import CtagsIndexer
from .fossology_license import ContentFossologyLicenseIndexer
from .rehash import RecomputeChecksums
from .metadata import RevisionMetadataIndexer, OriginMetadataIndexer
+from .origin_head import OriginHeadIndexer

logging.basicConfig(level=logging.INFO)


class Task(SchedulerTask):
+    """Task whose results are needed for other computations.
+
+    """
    def run_task(self, *args, **kwargs):
        indexer = self.Indexer().run(*args, **kwargs)
        if hasattr(indexer, 'results'):  # indexer tasks
            return indexer.results
        return indexer


-class OrchestratorAllContents(Task):
-    """Main task in charge of reading batch contents (of any type) and
-    broadcasting them back to other tasks.
+class StatusTask(SchedulerTask):
+    """Task which returns a status, either eventful or uneventful.

    """
-    task_queue = 'swh_indexer_orchestrator_content_all'
-
-    Indexer = OrchestratorAllContentsIndexer
-
-
-class OrchestratorTextContents(Task):
-    """Main task in charge of reading batch contents (of type text) and
-    broadcasting them back to other tasks.
-
-    """
-    task_queue = 'swh_indexer_orchestrator_content_text'
-
-    Indexer = OrchestratorTextContentsIndexer
+    def run_task(self, *args, **kwargs):
+        results = self.Indexer().run(*args, **kwargs)
+        return {'status': 'eventful' if results else 'uneventful'}


class RevisionMetadata(Task):
    task_queue = 'swh_indexer_revision_metadata'

    serializer = 'msgpack'

    Indexer = RevisionMetadataIndexer


class OriginMetadata(Task):
    task_queue = 'swh_indexer_origin_intrinsic_metadata'

    Indexer = OriginMetadataIndexer


-class ContentMimetype(Task):
-    """Task which computes the mimetype, encoding from the sha1's content.
+class OriginHead(Task):
+    task_queue = 'swh_indexer_origin_head'
+
+    Indexer = OriginHeadIndexer
+
+
+class ContentMimetype(StatusTask):
+    """Compute (mimetype, encoding) from the sha1's content.

    """
    task_queue = 'swh_indexer_content_mimetype'
-
    Indexer = ContentMimetypeIndexer


+class ContentRangeMimetype(StatusTask):
+    """Compute (mimetype, encoding) on a range of sha1s.
+
+    """
+    task_queue = 'swh_indexer_content_mimetype_range'
+    Indexer = MimetypeRangeIndexer
+
+
class ContentLanguage(Task):
    """Task which computes the language from the sha1's content.

    """
    task_queue = 'swh_indexer_content_language'

-    def run_task(self, *args, **kwargs):
-        ContentLanguageIndexer().run(*args, **kwargs)
+    Indexer = ContentLanguageIndexer


class Ctags(Task):
    """Task which computes ctags from the sha1's content.

    """
    task_queue = 'swh_indexer_content_ctags'

    Indexer = CtagsIndexer


class ContentFossologyLicense(Task):
    """Task which computes licenses from the sha1's content.

    """
    task_queue = 'swh_indexer_content_fossology_license'

    Indexer = ContentFossologyLicenseIndexer


class RecomputeChecksums(Task):
    """Task which recomputes hashes and possibly computes new ones.

    """
    task_queue = 'swh_indexer_content_rehash'

    Indexer = RecomputeChecksums
diff --git a/swh/indexer/tests/storage/__init__.py b/swh/indexer/tests/storage/__init__.py
index e69de29..a194d3d 100644
--- a/swh/indexer/tests/storage/__init__.py
+++ b/swh/indexer/tests/storage/__init__.py
@@ -0,0 +1,142 @@
+# Copyright (C) 2018 The Software Heritage developers
+# See the AUTHORS file at the top-level directory of this distribution
+# License: GNU General Public License version 3, or any later version
+# See top-level LICENSE file for more information
+
+from os import path
+import swh.storage
+
+from swh.model.hashutil import MultiHash
+from hypothesis.strategies import (composite, sets, one_of, uuids,
+                                   tuples, sampled_from)
+
+SQL_DIR = path.join(path.dirname(swh.indexer.__file__), 'sql')
+
+
+MIMETYPES = [
+    b'application/json',
+    b'application/octet-stream',
+    b'application/xml',
+    b'text/plain',
+]
+
+ENCODINGS = [
+    b'iso8859-1',
+    b'iso8859-15',
+    b'latin1',
+    b'utf-8',
+]
+
+
+def gen_mimetype():
+    """Generate one mimetype strategy.
+
+    """
+    return one_of(sampled_from(MIMETYPES))
+
+
+def gen_encoding():
+    """Generate one encoding strategy.
+
+    """
+    return one_of(sampled_from(ENCODINGS))
+
+
+def _init_content(uuid):
+    """Given a uuid, initialize a content
+
+    """
+    return {
+        'id': MultiHash.from_data(uuid.bytes, {'sha1'}).digest()['sha1'],
+        'indexer_configuration_id': 1,
+    }
+
+
+@composite
+def gen_content_mimetypes(draw, *, min_size=0, max_size=100):
+    """Generate valid and consistent content_mimetypes.
+
+    Context: Test purposes
+
+    Args:
+        **draw** (callable): Used by hypothesis to generate data
+        **min_size** (int): Minimal number of elements to generate
+                            (default: 0)
+        **max_size** (int): Maximal number of elements to generate
+                            (default: 100)
+
+    Returns:
+        List of content_mimetypes as expected by the
+        content_mimetype_add api endpoint.
+
+    """
+    _ids = draw(
+        sets(
+            tuples(
+                uuids(),
+                gen_mimetype(),
+                gen_encoding()
+            ),
+            min_size=min_size, max_size=max_size
+        )
+    )
+
+    content_mimetypes = []
+    for uuid, mimetype, encoding in _ids:
+        content_mimetypes.append({
+            **_init_content(uuid),
+            'mimetype': mimetype,
+            'encoding': encoding,
+        })
+    return content_mimetypes
+
+
+FOSSOLOGY_LICENSES = [
+    b'3DFX',
+    b'BSD',
+    b'GPL',
+    b'Apache2',
+    b'MIT',
+]
+
+
+def gen_license():
+    return one_of(sampled_from(FOSSOLOGY_LICENSES))
+
+
+@composite
+def gen_content_fossology_licenses(draw, *, min_size=0, max_size=100):
+    """Generate valid and consistent content_fossology_licenses.
+
+    Context: Test purposes
+
+    Args:
+        **draw** (callable): Used by hypothesis to generate data
+        **min_size** (int): Minimal number of elements to generate
+                            (default: 0)
+        **max_size** (int): Maximal number of elements to generate
+                            (default: 100)
+
+    Returns:
+        List of content_fossology_licenses as expected by the
+        content_fossology_license_add api endpoint.
+
+    """
+    _ids = draw(
+        sets(
+            tuples(
+                uuids(),
+                gen_license(),
+            ),
+            min_size=min_size, max_size=max_size
+        )
+    )
+
+    content_licenses = []
+    for uuid, license in _ids:
+        content_licenses.append({
+            **_init_content(uuid),
+            'licenses': [license],
+            'indexer_configuration_id': 1,
+        })
+    return content_licenses
diff --git a/swh/indexer/tests/storage/test_storage.py b/swh/indexer/tests/storage/test_storage.py
index ab343e2..c2e766b 100644
--- a/swh/indexer/tests/storage/test_storage.py
+++ b/swh/indexer/tests/storage/test_storage.py
@@ -1,1624 +1,1785 @@
# Copyright (C) 2015-2018 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import os
+import pytest
import unittest

+from hypothesis import given
+
from swh.model.hashutil import hash_to_bytes
from swh.indexer.storage import get_indexer_storage
from swh.core.tests.db_testing import SingleDbTestFixture
-from swh.indexer.tests import SQL_DIR
-
-import pytest
+from swh.indexer.tests.storage import (
+    SQL_DIR, gen_content_mimetypes, gen_content_fossology_licenses
+)


@pytest.mark.db
class BaseTestStorage(SingleDbTestFixture):
    """Base test class for most indexer tests.

    It adds support for Storage testing to the SingleDbTestFixture class.
    It will also build the database from the swh-indexer/sql/*.sql files.
    """
    TEST_DB_NAME = 'softwareheritage-test-indexer'
    TEST_DB_DUMP = os.path.join(SQL_DIR, '*.sql')

    def setUp(self):
        super().setUp()
        self.storage_config = {
            'cls': 'local',
            'args': {
                'db': 'dbname=%s' % self.TEST_DB_NAME,
            },
        }
        self.storage = get_indexer_storage(**self.storage_config)

        self.sha1_1 = hash_to_bytes('34973274ccef6ab4dfaaf86599792fa9c3fe4689')
        self.sha1_2 = hash_to_bytes('61c2b3a30496d329e21af70dd2d7e097046d07b7')
        self.revision_id_1 = hash_to_bytes(
            '7026b7c1a2af56521e951c01ed20f255fa054238')
        self.revision_id_2 = hash_to_bytes(
            '7026b7c1a2af56521e9587659012345678904321')
        self.origin_id_1 = 54974445

        cur = self.test_db[self.TEST_DB_NAME].cursor
        tools = {}
        cur.execute('''
            select tool_name, id, tool_version, tool_configuration
            from indexer_configuration
            order by id''')
        for row in cur.fetchall():
            key = row[0]
            while key in tools:
                key = '_' + key
            tools[key] = {
                'id': row[1],
                'name': row[0],
                'version': row[2],
                'configuration': row[3]
            }
        self.tools = tools

    def tearDown(self):
        self.reset_storage_tables()
        self.storage = None
        super().tearDown()

    def reset_storage_tables(self):
        excluded = {'indexer_configuration'}
        self.reset_db_tables(self.TEST_DB_NAME, excluded=excluded)

        db = self.test_db[self.TEST_DB_NAME]
        db.conn.commit()


@pytest.mark.db
class CommonTestStorage(BaseTestStorage):
    """Base class for Indexer Storage testing.
""" def test_check_config(self): self.assertTrue(self.storage.check_config(check_write=True)) self.assertTrue(self.storage.check_config(check_write=False)) def test_content_mimetype_missing(self): # given tool_id = self.tools['file']['id'] mimetypes = [ { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }] # when actual_missing = self.storage.content_mimetype_missing(mimetypes) # then self.assertEqual(list(actual_missing), [ self.sha1_1, self.sha1_2, ]) # given self.storage.content_mimetype_add([{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, }]) # when actual_missing = self.storage.content_mimetype_missing(mimetypes) # then self.assertEqual(list(actual_missing), [self.sha1_1]) def test_content_mimetype_add__drop_duplicate(self): # given tool_id = self.tools['file']['id'] mimetype_v1 = { 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, } # given self.storage.content_mimetype_add([mimetype_v1]) # when actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) # then expected_mimetypes_v1 = [{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'tool': self.tools['file'], }] self.assertEqual(actual_mimetypes, expected_mimetypes_v1) # given mimetype_v2 = mimetype_v1.copy() mimetype_v2.update({ 'mimetype': b'text/html', 'encoding': b'us-ascii', }) self.storage.content_mimetype_add([mimetype_v2]) actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) # mimetype did not change as the v2 was dropped. 
self.assertEqual(actual_mimetypes, expected_mimetypes_v1) def test_content_mimetype_add__update_in_place_duplicate(self): # given tool_id = self.tools['file']['id'] mimetype_v1 = { 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, } # given self.storage.content_mimetype_add([mimetype_v1]) # when actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) expected_mimetypes_v1 = [{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'tool': self.tools['file'], }] # then self.assertEqual(actual_mimetypes, expected_mimetypes_v1) # given mimetype_v2 = mimetype_v1.copy() mimetype_v2.update({ 'mimetype': b'text/html', 'encoding': b'us-ascii', }) self.storage.content_mimetype_add([mimetype_v2], conflict_update=True) actual_mimetypes = list(self.storage.content_mimetype_get( [self.sha1_2])) expected_mimetypes_v2 = [{ 'id': self.sha1_2, 'mimetype': b'text/html', 'encoding': b'us-ascii', 'tool': { 'id': 2, 'name': 'file', 'version': '5.22', 'configuration': {'command_line': 'file --mime '} } }] # mimetype did change as the v2 was used to overwrite v1 self.assertEqual(actual_mimetypes, expected_mimetypes_v2) def test_content_mimetype_get(self): # given tool_id = self.tools['file']['id'] mimetypes = [self.sha1_2, self.sha1_1] mimetype1 = { 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'indexer_configuration_id': tool_id, } # when self.storage.content_mimetype_add([mimetype1]) # then actual_mimetypes = list(self.storage.content_mimetype_get(mimetypes)) # then expected_mimetypes = [{ 'id': self.sha1_2, 'mimetype': b'text/plain', 'encoding': b'utf-8', 'tool': self.tools['file'] }] self.assertEqual(actual_mimetypes, expected_mimetypes) def test_content_language_missing(self): # given tool_id = self.tools['pygments']['id'] languages = [ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, } ] # when 
actual_missing = list(self.storage.content_language_missing(languages)) # then self.assertEqual(list(actual_missing), [ self.sha1_2, self.sha1_1, ]) # given self.storage.content_language_add([{ 'id': self.sha1_2, 'lang': 'haskell', 'indexer_configuration_id': tool_id, }]) # when actual_missing = list(self.storage.content_language_missing(languages)) # then self.assertEqual(actual_missing, [self.sha1_1]) def test_content_language_get(self): # given tool_id = self.tools['pygments']['id'] language1 = { 'id': self.sha1_2, 'lang': 'common-lisp', 'indexer_configuration_id': tool_id, } # when self.storage.content_language_add([language1]) # then actual_languages = list(self.storage.content_language_get( [self.sha1_2, self.sha1_1])) # then expected_languages = [{ 'id': self.sha1_2, 'lang': 'common-lisp', 'tool': self.tools['pygments'] }] self.assertEqual(actual_languages, expected_languages) def test_content_language_add__drop_duplicate(self): # given tool_id = self.tools['pygments']['id'] language_v1 = { 'id': self.sha1_2, 'lang': 'emacslisp', 'indexer_configuration_id': tool_id, } # given self.storage.content_language_add([language_v1]) # when actual_languages = list(self.storage.content_language_get( [self.sha1_2])) # then expected_languages_v1 = [{ 'id': self.sha1_2, 'lang': 'emacslisp', 'tool': self.tools['pygments'] }] self.assertEqual(actual_languages, expected_languages_v1) # given language_v2 = language_v1.copy() language_v2.update({ 'lang': 'common-lisp', }) self.storage.content_language_add([language_v2]) actual_languages = list(self.storage.content_language_get( [self.sha1_2])) # language did not change as the v2 was dropped. 
self.assertEqual(actual_languages, expected_languages_v1) def test_content_language_add__update_in_place_duplicate(self): # given tool_id = self.tools['pygments']['id'] language_v1 = { 'id': self.sha1_2, 'lang': 'common-lisp', 'indexer_configuration_id': tool_id, } # given self.storage.content_language_add([language_v1]) # when actual_languages = list(self.storage.content_language_get( [self.sha1_2])) # then expected_languages_v1 = [{ 'id': self.sha1_2, 'lang': 'common-lisp', 'tool': self.tools['pygments'] }] self.assertEqual(actual_languages, expected_languages_v1) # given language_v2 = language_v1.copy() language_v2.update({ 'lang': 'emacslisp', }) self.storage.content_language_add([language_v2], conflict_update=True) actual_languages = list(self.storage.content_language_get( [self.sha1_2])) # language did not change as the v2 was dropped. expected_languages_v2 = [{ 'id': self.sha1_2, 'lang': 'emacslisp', 'tool': self.tools['pygments'] }] # language did change as the v2 was used to overwrite v1 self.assertEqual(actual_languages, expected_languages_v2) def test_content_ctags_missing(self): # given tool_id = self.tools['universal-ctags']['id'] ctags = [ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, } ] # when actual_missing = self.storage.content_ctags_missing(ctags) # then self.assertEqual(list(actual_missing), [ self.sha1_2, self.sha1_1 ]) # given self.storage.content_ctags_add([ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [{ 'name': 'done', 'kind': 'variable', 'line': 119, 'lang': 'OCaml', }] }, ]) # when actual_missing = self.storage.content_ctags_missing(ctags) # then self.assertEqual(list(actual_missing), [self.sha1_1]) def test_content_ctags_get(self): # given tool_id = self.tools['universal-ctags']['id'] ctags = [self.sha1_2, self.sha1_1] ctag1 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [ { 'name': 'done', 'kind': 'variable', 'line': 
100, 'lang': 'Python', }, { 'name': 'main', 'kind': 'function', 'line': 119, 'lang': 'Python', }] } # when self.storage.content_ctags_add([ctag1]) # then actual_ctags = list(self.storage.content_ctags_get(ctags)) # then expected_ctags = [ { 'id': self.sha1_2, 'tool': self.tools['universal-ctags'], 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Python', }, { 'id': self.sha1_2, 'tool': self.tools['universal-ctags'], 'name': 'main', 'kind': 'function', 'line': 119, 'lang': 'Python', } ] self.assertEqual(actual_ctags, expected_ctags) def test_content_ctags_search(self): # 1. given tool = self.tools['universal-ctags'] tool_id = tool['id'] ctag1 = { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, 'ctags': [ { 'name': 'hello', 'kind': 'function', 'line': 133, 'lang': 'Python', }, { 'name': 'counter', 'kind': 'variable', 'line': 119, 'lang': 'Python', }, ] } ctag2 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [ { 'name': 'hello', 'kind': 'variable', 'line': 100, 'lang': 'C', }, ] } self.storage.content_ctags_add([ctag1, ctag2]) # 1. when actual_ctags = list(self.storage.content_ctags_search('hello', limit=1)) # 1. then self.assertEqual(actual_ctags, [ { 'id': ctag1['id'], 'tool': tool, 'name': 'hello', 'kind': 'function', 'line': 133, 'lang': 'Python', } ]) # 2. when actual_ctags = list(self.storage.content_ctags_search( 'hello', limit=1, last_sha1=ctag1['id'])) # 2. then self.assertEqual(actual_ctags, [ { 'id': ctag2['id'], 'tool': tool, 'name': 'hello', 'kind': 'variable', 'line': 100, 'lang': 'C', } ]) # 3. when actual_ctags = list(self.storage.content_ctags_search('hello')) # 3. then self.assertEqual(actual_ctags, [ { 'id': ctag1['id'], 'tool': tool, 'name': 'hello', 'kind': 'function', 'line': 133, 'lang': 'Python', }, { 'id': ctag2['id'], 'tool': tool, 'name': 'hello', 'kind': 'variable', 'line': 100, 'lang': 'C', }, ]) # 4. 
when actual_ctags = list(self.storage.content_ctags_search('counter')) # then self.assertEqual(actual_ctags, [{ 'id': ctag1['id'], 'tool': tool, 'name': 'counter', 'kind': 'variable', 'line': 119, 'lang': 'Python', }]) def test_content_ctags_search_no_result(self): actual_ctags = list(self.storage.content_ctags_search('counter')) self.assertEqual(actual_ctags, []) def test_content_ctags_add__add_new_ctags_added(self): # given tool = self.tools['universal-ctags'] tool_id = tool['id'] ctag_v1 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [{ 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', }] } # given self.storage.content_ctags_add([ctag_v1]) self.storage.content_ctags_add([ctag_v1]) # conflict does nothing # when actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) # then expected_ctags = [{ 'id': self.sha1_2, 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool, }] self.assertEqual(actual_ctags, expected_ctags) # given ctag_v2 = ctag_v1.copy() ctag_v2.update({ 'ctags': [ { 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', } ] }) self.storage.content_ctags_add([ctag_v2]) expected_ctags = [ { 'id': self.sha1_2, 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool, }, { 'id': self.sha1_2, 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', 'tool': tool, } ] actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) self.assertEqual(actual_ctags, expected_ctags) def test_content_ctags_add__update_in_place(self): # given tool = self.tools['universal-ctags'] tool_id = tool['id'] ctag_v1 = { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, 'ctags': [{ 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', }] } # given self.storage.content_ctags_add([ctag_v1]) # when actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) # then expected_ctags = [ { 'id': self.sha1_2, 'name': 'done', 'kind': 
'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool } ] self.assertEqual(actual_ctags, expected_ctags) # given ctag_v2 = ctag_v1.copy() ctag_v2.update({ 'ctags': [ { 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', }, { 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', } ] }) self.storage.content_ctags_add([ctag_v2], conflict_update=True) actual_ctags = list(self.storage.content_ctags_get( [self.sha1_2])) # ctag did change as the v2 was used to overwrite v1 expected_ctags = [ { 'id': self.sha1_2, 'name': 'done', 'kind': 'variable', 'line': 100, 'lang': 'Scheme', 'tool': tool, }, { 'id': self.sha1_2, 'name': 'defn', 'kind': 'function', 'line': 120, 'lang': 'Scheme', 'tool': tool, } ] self.assertEqual(actual_ctags, expected_ctags) def test_content_fossology_license_get(self): # given tool = self.tools['nomos'] tool_id = tool['id'] license1 = { 'id': self.sha1_1, 'licenses': ['GPL-2.0+'], 'indexer_configuration_id': tool_id, } # when self.storage.content_fossology_license_add([license1]) # then actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_2, self.sha1_1])) expected_license = { self.sha1_1: [{ 'licenses': ['GPL-2.0+'], 'tool': tool, }] } # then self.assertEqual(actual_licenses, [expected_license]) def test_content_fossology_license_add__new_license_added(self): # given tool = self.tools['nomos'] tool_id = tool['id'] license_v1 = { 'id': self.sha1_1, 'licenses': ['Apache-2.0'], 'indexer_configuration_id': tool_id, } # given self.storage.content_fossology_license_add([license_v1]) # conflict does nothing self.storage.content_fossology_license_add([license_v1]) # when actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) # then expected_license = { self.sha1_1: [{ 'licenses': ['Apache-2.0'], 'tool': tool, }] } self.assertEqual(actual_licenses, [expected_license]) # given license_v2 = license_v1.copy() license_v2.update({ 'licenses': ['BSD-2-Clause'], }) 
self.storage.content_fossology_license_add([license_v2]) actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) expected_license = { self.sha1_1: [{ 'licenses': ['Apache-2.0', 'BSD-2-Clause'], 'tool': tool }] } # license did not change as the v2 was dropped. self.assertEqual(actual_licenses, [expected_license]) def test_content_fossology_license_add__update_in_place_duplicate(self): # given tool = self.tools['nomos'] tool_id = tool['id'] license_v1 = { 'id': self.sha1_1, 'licenses': ['CECILL'], 'indexer_configuration_id': tool_id, } # given self.storage.content_fossology_license_add([license_v1]) # conflict does nothing self.storage.content_fossology_license_add([license_v1]) # when actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) # then expected_license = { self.sha1_1: [{ 'licenses': ['CECILL'], 'tool': tool, }] } self.assertEqual(actual_licenses, [expected_license]) # given license_v2 = license_v1.copy() license_v2.update({ 'licenses': ['CECILL-2.0'] }) self.storage.content_fossology_license_add([license_v2], conflict_update=True) actual_licenses = list(self.storage.content_fossology_license_get( [self.sha1_1])) # license did change as the v2 was used to overwrite v1 expected_license = { self.sha1_1: [{ 'licenses': ['CECILL-2.0'], 'tool': tool, }] } self.assertEqual(actual_licenses, [expected_license]) def test_content_metadata_missing(self): # given tool_id = self.tools['swh-metadata-translator']['id'] metadata = [ { 'id': self.sha1_2, 'indexer_configuration_id': tool_id, }, { 'id': self.sha1_1, 'indexer_configuration_id': tool_id, } ] # when actual_missing = list(self.storage.content_metadata_missing(metadata)) # then self.assertEqual(list(actual_missing), [ self.sha1_2, self.sha1_1, ]) # given self.storage.content_metadata_add([{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'description': 'Simple 
package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'indexer_configuration_id': tool_id }]) # when actual_missing = list(self.storage.content_metadata_missing(metadata)) # then self.assertEqual(actual_missing, [self.sha1_1]) def test_content_metadata_get(self): # given tool_id = self.tools['swh-metadata-translator']['id'] metadata1 = { 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'description': 'Simple package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'indexer_configuration_id': tool_id, } # when self.storage.content_metadata_add([metadata1]) # then actual_metadata = list(self.storage.content_metadata_get( [self.sha1_2, self.sha1_1])) expected_metadata = [{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'codeRepository': { 'type': 'git', 'url': 'https://github.com/moranegg/metadata_test' }, 'description': 'Simple package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'tool': self.tools['swh-metadata-translator'] }] self.assertEqual(actual_metadata, expected_metadata) def test_content_metadata_add_drop_duplicate(self): # given tool_id = self.tools['swh-metadata-translator']['id'] metadata_v1 = { 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_metadata', 'version': '0.0.1' }, 'indexer_configuration_id': tool_id, } # given self.storage.content_metadata_add([metadata_v1]) # when actual_metadata = list(self.storage.content_metadata_get( [self.sha1_2])) expected_metadata_v1 = [{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_metadata', 'version': '0.0.1' }, 'tool': self.tools['swh-metadata-translator'] }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'translated_metadata': { 'other': {}, 'name': 'test_drop_duplicated_metadata', 'version': '0.0.1' }, }) 
        self.storage.content_metadata_add([metadata_v2])

        # then
        actual_metadata = list(self.storage.content_metadata_get(
            [self.sha1_2]))

        # metadata did not change as the v2 was dropped.
        self.assertEqual(actual_metadata, expected_metadata_v1)

    def test_content_metadata_add_update_in_place_duplicate(self):
        # given
        tool_id = self.tools['swh-metadata-translator']['id']
        metadata_v1 = {
            'id': self.sha1_2,
            'translated_metadata': {
                'other': {},
                'name': 'test_metadata',
                'version': '0.0.1'
            },
            'indexer_configuration_id': tool_id,
        }

        # given
        self.storage.content_metadata_add([metadata_v1])

        # when
        actual_metadata = list(self.storage.content_metadata_get(
            [self.sha1_2]))

        # then
        expected_metadata_v1 = [{
            'id': self.sha1_2,
            'translated_metadata': {
                'other': {},
                'name': 'test_metadata',
                'version': '0.0.1'
            },
            'tool': self.tools['swh-metadata-translator']
        }]
        self.assertEqual(actual_metadata, expected_metadata_v1)

        # given
        metadata_v2 = metadata_v1.copy()
        metadata_v2.update({
            'translated_metadata': {
                'other': {},
                'name': 'test_update_duplicated_metadata',
                'version': '0.0.1'
            },
        })
        self.storage.content_metadata_add([metadata_v2], conflict_update=True)

        actual_metadata = list(self.storage.content_metadata_get(
            [self.sha1_2]))
expected_metadata_v2 = [{ 'id': self.sha1_2, 'translated_metadata': { 'other': {}, 'name': 'test_update_duplicated_metadata', 'version': '0.0.1' }, 'tool': self.tools['swh-metadata-translator'] }] # metadata did change as the v2 was used to overwrite v1 self.assertEqual(actual_metadata, expected_metadata_v2) def test_revision_metadata_missing(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata = [ { 'id': self.revision_id_1, 'indexer_configuration_id': tool_id, }, { 'id': self.revision_id_2, 'indexer_configuration_id': tool_id, } ] # when actual_missing = list(self.storage.revision_metadata_missing( metadata)) # then self.assertEqual(list(actual_missing), [ self.revision_id_1, self.revision_id_2, ]) # given self.storage.revision_metadata_add([{ 'id': self.revision_id_1, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id }]) # when actual_missing = list(self.storage.revision_metadata_missing( metadata)) # then self.assertEqual(actual_missing, [self.revision_id_2]) def test_revision_metadata_get(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata_rev = { 'id': self.revision_id_2, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id } # when self.storage.revision_metadata_add([metadata_rev]) # then actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_2, self.revision_id_1])) expected_metadata = 
[{ 'id': self.revision_id_2, 'translated_metadata': metadata_rev['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] self.assertEqual(actual_metadata, expected_metadata) def test_revision_metadata_add_drop_duplicate(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata_v1 = { 'id': self.revision_id_1, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id, } # given self.storage.revision_metadata_add([metadata_v1]) # when actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_1])) expected_metadata_v1 = [{ 'id': self.revision_id_1, 'translated_metadata': metadata_v1['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'translated_metadata': { 'name': 'test_metadata', 'author': 'MG', }, }) self.storage.revision_metadata_add([metadata_v2]) # then actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_1])) # metadata did not change as the v2 was dropped. 
self.assertEqual(actual_metadata, expected_metadata_v1) def test_revision_metadata_add_update_in_place_duplicate(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata_v1 = { 'id': self.revision_id_2, 'translated_metadata': { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None }, 'indexer_configuration_id': tool_id, } # given self.storage.revision_metadata_add([metadata_v1]) # when actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_2])) # then expected_metadata_v1 = [{ 'id': self.revision_id_2, 'translated_metadata': metadata_v1['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'translated_metadata': { 'name': 'test_update_duplicated_metadata', 'author': 'MG' }, }) self.storage.revision_metadata_add([metadata_v2], conflict_update=True) actual_metadata = list(self.storage.revision_metadata_get( [self.revision_id_2])) expected_metadata_v2 = [{ 'id': self.revision_id_2, 'translated_metadata': metadata_v2['translated_metadata'], 'tool': self.tools['swh-metadata-detector'] }] # metadata did change as the v2 was used to overwrite v1 self.assertEqual(actual_metadata, expected_metadata_v2) def test_origin_intrinsic_metadata_get(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata = { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None, } metadata_rev = { 'id': self.revision_id_2, 
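The drop-vs-overwrite behaviour exercised by the `add_drop_duplicate` and `add_update_in_place_duplicate` tests can be sketched against a plain dict; `metadata_add` below is a hypothetical stand-in for the storage's add methods, not the real implementation.

```python
def metadata_add(store, rows, conflict_update=False):
    """Insert rows keyed by (id, tool); on a duplicate key the new
    row is dropped unless conflict_update asks to overwrite."""
    for row in rows:
        key = (row['id'], row['indexer_configuration_id'])
        if key in store and not conflict_update:
            continue  # duplicate: keep v1, drop v2
        store[key] = row['translated_metadata']

store = {}
v1 = {'id': 'rev1', 'indexer_configuration_id': 7,
      'translated_metadata': {'name': None, 'author': None}}
v2 = dict(v1, translated_metadata={'name': 'test_metadata', 'author': 'MG'})

metadata_add(store, [v1])
metadata_add(store, [v2])                        # dropped silently
assert store[('rev1', 7)] == {'name': None, 'author': None}

metadata_add(store, [v2], conflict_update=True)  # v2 overwrites v1
assert store[('rev1', 7)] == {'name': 'test_metadata', 'author': 'MG'}
```

The silent drop on duplicates is why the tests must assert on the *old* expected values after the second add.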
'translated_metadata': metadata, 'indexer_configuration_id': tool_id, } metadata_origin = { 'origin_id': self.origin_id_1, 'metadata': metadata, 'indexer_configuration_id': tool_id, 'from_revision': self.revision_id_2, } # when self.storage.revision_metadata_add([metadata_rev]) self.storage.origin_intrinsic_metadata_add([metadata_origin]) # then actual_metadata = list(self.storage.origin_intrinsic_metadata_get( [self.origin_id_1, 42])) expected_metadata = [{ 'origin_id': self.origin_id_1, 'metadata': metadata, 'tool': self.tools['swh-metadata-detector'], 'from_revision': self.revision_id_2, }] self.assertEqual(actual_metadata, expected_metadata) def test_origin_intrinsic_metadata_add_drop_duplicate(self): # given tool_id = self.tools['swh-metadata-detector']['id'] metadata_v1 = { 'developmentStatus': None, 'version': None, 'operatingSystem': None, 'description': None, 'keywords': None, 'issueTracker': None, 'name': None, 'author': None, 'relatedLink': None, 'url': None, 'license': None, 'maintainer': None, 'email': None, 'softwareRequirements': None, 'identifier': None } metadata_rev_v1 = { 'id': self.revision_id_1, 'translated_metadata': metadata_v1.copy(), 'indexer_configuration_id': tool_id, } metadata_origin_v1 = { 'origin_id': self.origin_id_1, 'metadata': metadata_v1.copy(), 'indexer_configuration_id': tool_id, 'from_revision': self.revision_id_1, } # given self.storage.revision_metadata_add([metadata_rev_v1]) self.storage.origin_intrinsic_metadata_add([metadata_origin_v1]) # when actual_metadata = list(self.storage.origin_intrinsic_metadata_get( [self.origin_id_1, 42])) expected_metadata_v1 = [{ 'origin_id': self.origin_id_1, 'metadata': metadata_v1, 'tool': self.tools['swh-metadata-detector'], 'from_revision': self.revision_id_1, }] self.assertEqual(actual_metadata, expected_metadata_v1) # given metadata_v2 = metadata_v1.copy() metadata_v2.update({ 'name': 'test_metadata', 'author': 'MG', }) metadata_rev_v2 = metadata_rev_v1.copy() metadata_origin_v2 = 
            metadata_origin_v1.copy()
        metadata_rev_v2['translated_metadata'] = metadata_v2
        metadata_origin_v2['metadata'] = metadata_v2

        self.storage.revision_metadata_add([metadata_rev_v2])
        self.storage.origin_intrinsic_metadata_add([metadata_origin_v2])

        # then
        actual_metadata = list(self.storage.origin_intrinsic_metadata_get(
            [self.origin_id_1]))

        # metadata did not change as the v2 was dropped.
        self.assertEqual(actual_metadata, expected_metadata_v1)

    def test_origin_intrinsic_metadata_add_update_in_place_duplicate(self):
        # given
        tool_id = self.tools['swh-metadata-detector']['id']

        metadata_v1 = {
            'developmentStatus': None,
            'version': None,
            'operatingSystem': None,
            'description': None,
            'keywords': None,
            'issueTracker': None,
            'name': None,
            'author': None,
            'relatedLink': None,
            'url': None,
            'license': None,
            'maintainer': None,
            'email': None,
            'softwareRequirements': None,
            'identifier': None
        }
        metadata_rev_v1 = {
            'id': self.revision_id_2,
            'translated_metadata': metadata_v1,
            'indexer_configuration_id': tool_id,
        }
        metadata_origin_v1 = {
            'origin_id': self.origin_id_1,
            'metadata': metadata_v1.copy(),
            'indexer_configuration_id': tool_id,
            'from_revision': self.revision_id_2,
        }

        # given
        self.storage.revision_metadata_add([metadata_rev_v1])
        self.storage.origin_intrinsic_metadata_add([metadata_origin_v1])

        # when
        actual_metadata = list(self.storage.origin_intrinsic_metadata_get(
            [self.origin_id_1]))

        # then
        expected_metadata_v1 = [{
            'origin_id': self.origin_id_1,
            'metadata': metadata_v1,
            'tool': self.tools['swh-metadata-detector'],
            'from_revision': self.revision_id_2,
        }]
        self.assertEqual(actual_metadata, expected_metadata_v1)

        # given
        metadata_v2 = metadata_v1.copy()
        metadata_v2.update({
            'name': 'test_update_duplicated_metadata',
            'author': 'MG',
        })
        metadata_rev_v2 = metadata_rev_v1.copy()
        metadata_origin_v2 = metadata_origin_v1.copy()
        metadata_rev_v2['translated_metadata'] = metadata_v2
        metadata_origin_v2['metadata'] = metadata_v2

        self.storage.revision_metadata_add([metadata_rev_v2],
conflict_update=True) self.storage.origin_intrinsic_metadata_add([metadata_origin_v2], conflict_update=True) actual_metadata = list(self.storage.origin_intrinsic_metadata_get( [self.origin_id_1])) expected_metadata_v2 = [{ 'origin_id': self.origin_id_1, 'metadata': metadata_v2, 'tool': self.tools['swh-metadata-detector'], 'from_revision': self.revision_id_2, }] # metadata did change as the v2 was used to overwrite v1 self.assertEqual(actual_metadata, expected_metadata_v2) def test_indexer_configuration_add(self): tool = { 'tool_name': 'some-unknown-tool', 'tool_version': 'some-version', 'tool_configuration': {"debian-package": "some-package"}, } actual_tool = self.storage.indexer_configuration_get(tool) self.assertIsNone(actual_tool) # does not exist # add it actual_tools = list(self.storage.indexer_configuration_add([tool])) self.assertEqual(len(actual_tools), 1) actual_tool = actual_tools[0] self.assertIsNotNone(actual_tool) # now it exists new_id = actual_tool.pop('id') self.assertEqual(actual_tool, tool) actual_tools2 = list(self.storage.indexer_configuration_add([tool])) actual_tool2 = actual_tools2[0] self.assertIsNotNone(actual_tool2) # now it exists new_id2 = actual_tool2.pop('id') self.assertEqual(new_id, new_id2) self.assertEqual(actual_tool, actual_tool2) def test_indexer_configuration_add_multiple(self): tool = { 'tool_name': 'some-unknown-tool', 'tool_version': 'some-version', 'tool_configuration': {"debian-package": "some-package"}, } actual_tools = list(self.storage.indexer_configuration_add([tool])) self.assertEqual(len(actual_tools), 1) new_tools = [tool, { 'tool_name': 'yet-another-tool', 'tool_version': 'version', 'tool_configuration': {}, }] actual_tools = list(self.storage.indexer_configuration_add(new_tools)) self.assertEqual(len(actual_tools), 2) # order not guaranteed, so we iterate over results to check for tool in actual_tools: _id = tool.pop('id') self.assertIsNotNone(_id) self.assertIn(tool, new_tools) def 
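`test_indexer_configuration_add` checks that registering the same tool twice returns the same id. A minimal sketch of such idempotent registration — `indexer_configuration_add` here is a hypothetical helper, not the real storage code:

```python
def indexer_configuration_add(registry, tools):
    """Register tools; a tool already present keeps its original id,
    so adding the same tool twice is idempotent."""
    results = []
    for tool in tools:
        key = (tool['tool_name'], tool['tool_version'],
               tuple(sorted(tool['tool_configuration'].items())))
        if key not in registry:
            registry[key] = dict(tool, id=len(registry) + 1)
        results.append(registry[key])
    return results

registry = {}
tool = {'tool_name': 'some-unknown-tool',
        'tool_version': 'some-version',
        'tool_configuration': {'debian-package': 'some-package'}}

first, = indexer_configuration_add(registry, [tool])
second, = indexer_configuration_add(registry, [tool])
assert first['id'] == second['id'] == 1  # same row both times
```

Keying on (name, version, configuration) is what lets indexers call the registration unconditionally at startup.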
test_indexer_configuration_get_missing(self): tool = { 'tool_name': 'unknown-tool', 'tool_version': '3.1.0rc2-31-ga2cbb8c', 'tool_configuration': {"command_line": "nomossa "}, } actual_tool = self.storage.indexer_configuration_get(tool) self.assertIsNone(actual_tool) def test_indexer_configuration_get(self): tool = { 'tool_name': 'nomos', 'tool_version': '3.1.0rc2-31-ga2cbb8c', 'tool_configuration': {"command_line": "nomossa "}, } actual_tool = self.storage.indexer_configuration_get(tool) expected_tool = tool.copy() expected_tool['id'] = 1 self.assertEqual(expected_tool, actual_tool) def test_indexer_configuration_metadata_get_missing_context(self): tool = { 'tool_name': 'swh-metadata-translator', 'tool_version': '0.0.1', 'tool_configuration': {"context": "unknown-context"}, } actual_tool = self.storage.indexer_configuration_get(tool) self.assertIsNone(actual_tool) def test_indexer_configuration_metadata_get(self): tool = { 'tool_name': 'swh-metadata-translator', 'tool_version': '0.0.1', 'tool_configuration': {"type": "local", "context": "NpmMapping"}, } actual_tool = self.storage.indexer_configuration_get(tool) expected_tool = tool.copy() expected_tool['id'] = actual_tool['id'] self.assertEqual(expected_tool, actual_tool) +@pytest.mark.property_based +class PropBasedTestStorage(BaseTestStorage, unittest.TestCase): + """Properties-based tests + + """ + def test_generate_content_mimetype_get_range_limit_none(self): + """mimetype_get_range call with wrong limit input should fail""" + with self.assertRaises(ValueError) as e: + self.storage.content_mimetype_get_range( + start=None, end=None, indexer_configuration_id=None, + limit=None) + + self.assertEqual(e.exception.args, ( + 'Development error: limit should not be None',)) + + @given(gen_content_mimetypes(min_size=1, max_size=4)) + def test_generate_content_mimetype_get_range_no_limit(self, mimetypes): + """mimetype_get_range returns mimetypes within range provided""" + self.reset_storage_tables() + # add mimetypes 
to storage + self.storage.content_mimetype_add(mimetypes) + + # All ids from the db + content_ids = sorted([c['id'] for c in mimetypes]) + + start = content_ids[0] + end = content_ids[-1] + + # retrieve mimetypes + tool_id = mimetypes[0]['indexer_configuration_id'] + actual_result = self.storage.content_mimetype_get_range( + start, end, indexer_configuration_id=tool_id) + + actual_ids = actual_result['ids'] + actual_next = actual_result['next'] + + self.assertEqual(len(mimetypes), len(actual_ids)) + self.assertIsNone(actual_next) + self.assertEqual(content_ids, actual_ids) + + @given(gen_content_mimetypes(min_size=4, max_size=4)) + def test_generate_content_mimetype_get_range_limit(self, mimetypes): + """mimetype_get_range paginates results if limit exceeded""" + self.reset_storage_tables() + + # add mimetypes to storage + self.storage.content_mimetype_add(mimetypes) + + # input the list of sha1s we want from storage + content_ids = sorted([c['id'] for c in mimetypes]) + start = content_ids[0] + end = content_ids[-1] + + # retrieve mimetypes limited to 3 results + limited_results = len(mimetypes) - 1 + tool_id = mimetypes[0]['indexer_configuration_id'] + actual_result = self.storage.content_mimetype_get_range( + start, end, + indexer_configuration_id=tool_id, limit=limited_results) + + actual_ids = actual_result['ids'] + actual_next = actual_result['next'] + + self.assertEqual(limited_results, len(actual_ids)) + self.assertIsNotNone(actual_next) + self.assertEqual(actual_next, content_ids[-1]) + + expected_mimetypes = content_ids[:-1] + self.assertEqual(expected_mimetypes, actual_ids) + + # retrieve next part + actual_results2 = self.storage.content_mimetype_get_range( + start=end, end=end, indexer_configuration_id=tool_id) + actual_ids2 = actual_results2['ids'] + actual_next2 = actual_results2['next'] + + self.assertIsNone(actual_next2) + expected_mimetypes2 = [content_ids[-1]] + self.assertEqual(expected_mimetypes2, actual_ids2) + + def 
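The pagination contract these range tests rely on — at most `limit` ids returned, with `next` holding the first id of the following page (or `None` when the range is exhausted) — can be sketched as follows. This is a simplified stand-in for `content_mimetype_get_range`, with an assumed default limit, not the real query:

```python
def get_range(rows, start, end, limit=1000):
    """Ids within [start, end], at most `limit` of them; 'next' is the
    first id left out, i.e. the 'start' to use for the next page."""
    if limit is None:
        raise ValueError('Development error: limit should not be None')
    ids = sorted(i for i in rows if start <= i <= end)
    if len(ids) > limit:
        return {'ids': ids[:limit], 'next': ids[limit]}
    return {'ids': ids, 'next': None}

rows = ['a1', 'b2', 'c3', 'd4']

page1 = get_range(rows, 'a1', 'd4', limit=3)
assert page1 == {'ids': ['a1', 'b2', 'c3'], 'next': 'd4'}

# resume from the returned cursor
page2 = get_range(rows, page1['next'], 'd4', limit=3)
assert page2 == {'ids': ['d4'], 'next': None}

# a None limit is a programming error, as in the tests above
try:
    get_range(rows, 'a1', 'd4', limit=None)
except ValueError as exc:
    assert exc.args == ('Development error: limit should not be None',)
```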
test_generate_content_fossology_license_get_range_limit_none(self): + """license_get_range call with wrong limit input should fail""" + with self.assertRaises(ValueError) as e: + self.storage.content_fossology_license_get_range( + start=None, end=None, indexer_configuration_id=None, + limit=None) + + self.assertEqual(e.exception.args, ( + 'Development error: limit should not be None',)) + + @given(gen_content_fossology_licenses(min_size=1, max_size=4)) + def test_generate_content_fossology_license_get_range_no_limit( + self, fossology_licenses): + """license_get_range returns licenses within range provided""" + self.reset_storage_tables() + # add fossology_licenses to storage + self.storage.content_fossology_license_add(fossology_licenses) + + # All ids from the db + content_ids = sorted([c['id'] for c in fossology_licenses]) + + start = content_ids[0] + end = content_ids[-1] + + # retrieve fossology_licenses + tool_id = fossology_licenses[0]['indexer_configuration_id'] + actual_result = self.storage.content_fossology_license_get_range( + start, end, indexer_configuration_id=tool_id) + + actual_ids = actual_result['ids'] + actual_next = actual_result['next'] + + self.assertEqual(len(fossology_licenses), len(actual_ids)) + self.assertIsNone(actual_next) + self.assertEqual(content_ids, actual_ids) + + @given(gen_content_fossology_licenses(min_size=4, max_size=4)) + def test_generate_fossology_license_get_range_limit( + self, fossology_licenses): + """fossology_license_get_range paginates results if limit exceeded""" + self.reset_storage_tables() + + # add fossology_licenses to storage + self.storage.content_fossology_license_add(fossology_licenses) + + # input the list of sha1s we want from storage + content_ids = sorted([c['id'] for c in fossology_licenses]) + start = content_ids[0] + end = content_ids[-1] + + # retrieve fossology_licenses limited to 3 results + limited_results = len(fossology_licenses) - 1 + tool_id = 
fossology_licenses[0]['indexer_configuration_id'] + actual_result = self.storage.content_fossology_license_get_range( + start, end, + indexer_configuration_id=tool_id, limit=limited_results) + + actual_ids = actual_result['ids'] + actual_next = actual_result['next'] + + self.assertEqual(limited_results, len(actual_ids)) + self.assertIsNotNone(actual_next) + self.assertEqual(actual_next, content_ids[-1]) + + expected_fossology_licenses = content_ids[:-1] + self.assertEqual(expected_fossology_licenses, actual_ids) + + # retrieve next part + actual_results2 = self.storage.content_fossology_license_get_range( + start=end, end=end, indexer_configuration_id=tool_id) + actual_ids2 = actual_results2['ids'] + actual_next2 = actual_results2['next'] + + self.assertIsNone(actual_next2) + expected_fossology_licenses2 = [content_ids[-1]] + self.assertEqual(expected_fossology_licenses2, actual_ids2) + + class IndexerTestStorage(CommonTestStorage, unittest.TestCase): """Running the tests locally. For the client api tests (remote storage), see `class`:swh.indexer.storage.test_api_client:TestRemoteStorage class. """ pass diff --git a/swh/indexer/tests/test_ctags.py b/swh/indexer/tests/test_ctags.py new file mode 100644 index 0000000..ae45338 --- /dev/null +++ b/swh/indexer/tests/test_ctags.py @@ -0,0 +1,104 @@ +# Copyright (C) 2017-2018 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public License version 3, or any later version +# See top-level LICENSE file for more information + +import unittest +import logging +from swh.indexer.ctags import CtagsIndexer +from swh.indexer.tests.test_utils import ( + BasicMockIndexerStorage, MockObjStorage, CommonContentIndexerTest, + CommonIndexerWithErrorsTest, CommonIndexerNoTool, + SHA1_TO_CTAGS, NoDiskIndexer +) + + +class InjectCtagsIndexer: + """Override ctags computations. 
+ + """ + def compute_ctags(self, path, lang): + """Inject fake ctags given path (sha1 identifier). + + """ + return { + 'lang': lang, + **SHA1_TO_CTAGS.get(path) + } + + +class CtagsIndexerTest(NoDiskIndexer, InjectCtagsIndexer, CtagsIndexer): + """Specific language whose configuration is enough to satisfy the + indexing tests. + """ + def prepare(self): + self.config = { + 'tools': { + 'name': 'universal-ctags', + 'version': '~git7859817b', + 'configuration': { + 'command_line': '''ctags --fields=+lnz --sort=no ''' + ''' --links=no ''', + 'max_content_size': 1000, + }, + }, + 'languages': { + 'python': 'python', + 'haskell': 'haskell', + 'bar': 'bar', + } + } + self.idx_storage = BasicMockIndexerStorage() + self.log = logging.getLogger('swh.indexer') + self.objstorage = MockObjStorage() + self.tool_config = self.config['tools']['configuration'] + self.max_content_size = self.tool_config['max_content_size'] + self.tools = self.register_tools(self.config['tools']) + self.tool = self.tools[0] + self.language_map = self.config['languages'] + + +class TestCtagsIndexer(CommonContentIndexerTest, unittest.TestCase): + """Ctags indexer test scenarios: + + - Known sha1s in the input list have their data indexed + - Unknown sha1 in the input list are not indexed + + """ + def setUp(self): + self.indexer = CtagsIndexerTest() + + # Prepare test input + self.id0 = '01c9379dfc33803963d07c1ccc748d3fe4c96bb5' + self.id1 = 'd4c647f0fc257591cc9ba1722484229780d1c607' + self.id2 = '688a5ef812c53907562fe379d4b3851e69c7cb15' + + tool_id = self.indexer.tool['id'] + self.expected_results = { + self.id0: { + 'id': self.id0, + 'indexer_configuration_id': tool_id, + 'ctags': SHA1_TO_CTAGS[self.id0], + }, + self.id1: { + 'id': self.id1, + 'indexer_configuration_id': tool_id, + 'ctags': SHA1_TO_CTAGS[self.id1], + }, + self.id2: { + 'id': self.id2, + 'indexer_configuration_id': tool_id, + 'ctags': SHA1_TO_CTAGS[self.id2], + } + } + + +class CtagsIndexerUnknownToolTestStorage( + 
        CommonIndexerNoTool, CtagsIndexerTest):
+    """Ctags indexer with wrong configuration"""
+
+
+class TestCtagsIndexersErrors(
+        CommonIndexerWithErrorsTest, unittest.TestCase):
+    """Test the indexer raises the right errors when wrongly initialized"""
+    Indexer = CtagsIndexerUnknownToolTestStorage
diff --git a/swh/indexer/tests/test_fossology_license.py b/swh/indexer/tests/test_fossology_license.py
new file mode 100644
index 0000000..10ad15f
--- /dev/null
+++ b/swh/indexer/tests/test_fossology_license.py
@@ -0,0 +1,172 @@
+# Copyright (C) 2017-2018 The Software Heritage developers
+# See the AUTHORS file at the top-level directory of this distribution
+# License: GNU General Public License version 3, or any later version
+# See top-level LICENSE file for more information
+
+import unittest
+import logging
+
+from swh.indexer.fossology_license import (
+    ContentFossologyLicenseIndexer, FossologyLicenseRangeIndexer
+)
+
+from swh.indexer.tests.test_utils import (
+    MockObjStorage, BasicMockStorage, BasicMockIndexerStorage,
+    SHA1_TO_LICENSES, CommonContentIndexerTest, CommonContentIndexerRangeTest,
+    CommonIndexerWithErrorsTest, CommonIndexerNoTool, NoDiskIndexer
+)
+
+
+class InjectLicenseIndexer:
+    """Override license computations.
+
+    """
+    def compute_license(self, path, log=None):
+        """path is the content identifier
+
+        """
+        return {
+            'licenses': SHA1_TO_LICENSES.get(path)
+        }
+
+
+class FossologyLicenseTestIndexer(
+        NoDiskIndexer, InjectLicenseIndexer, ContentFossologyLicenseIndexer):
+    """Specific fossology license whose configuration is enough to satisfy
+       the indexing checks.
+ + """ + def prepare(self): + self.config = { + 'tools': { + 'name': 'nomos', + 'version': '3.1.0rc2-31-ga2cbb8c', + 'configuration': { + 'command_line': 'nomossa ', + }, + }, + } + self.idx_storage = BasicMockIndexerStorage() + self.log = logging.getLogger('swh.indexer') + self.objstorage = MockObjStorage() + self.tools = self.register_tools(self.config['tools']) + self.tool = self.tools[0] + + +class TestFossologyLicenseIndexer(CommonContentIndexerTest, unittest.TestCase): + """Language indexer test scenarios: + + - Known sha1s in the input list have their data indexed + - Unknown sha1 in the input list are not indexed + + """ + def setUp(self): + self.indexer = FossologyLicenseTestIndexer() + + self.id0 = '01c9379dfc33803963d07c1ccc748d3fe4c96bb5' + self.id1 = '688a5ef812c53907562fe379d4b3851e69c7cb15' + self.id2 = 'da39a3ee5e6b4b0d3255bfef95601890afd80709' # empty content + tool_id = self.indexer.tool['id'] + # then + self.expected_results = { + self.id0: { + 'id': self.id0, + 'indexer_configuration_id': tool_id, + 'licenses': SHA1_TO_LICENSES[self.id0], + }, + self.id1: { + 'id': self.id1, + 'indexer_configuration_id': tool_id, + 'licenses': SHA1_TO_LICENSES[self.id1], + }, + self.id2: { + 'id': self.id2, + 'indexer_configuration_id': tool_id, + 'licenses': SHA1_TO_LICENSES[self.id2], + } + } + + +class FossologyLicenseRangeIndexerTest( + NoDiskIndexer, InjectLicenseIndexer, FossologyLicenseRangeIndexer): + """Testing the range indexer on fossology license. 
+ + """ + def prepare(self): + self.config = { + 'tools': { + 'name': 'nomos', + 'version': '3.1.0rc2-31-ga2cbb8c', + 'configuration': { + 'command_line': 'nomossa ', + }, + }, + 'write_batch_size': 100, + } + self.idx_storage = BasicMockIndexerStorage() + self.log = logging.getLogger('swh.indexer') + # this hardcodes some contents, will use this to setup the storage + self.objstorage = MockObjStorage() + # sync objstorage and storage + contents = [{'sha1': c_id} for c_id in self.objstorage] + self.storage = BasicMockStorage(contents) + self.tools = self.register_tools(self.config['tools']) + self.tool = self.tools[0] + + +class TestFossologyLicenseRangeIndexer( + CommonContentIndexerRangeTest, unittest.TestCase): + """Range Fossology License Indexer tests. + + - new data within range are indexed + - no data outside a range are indexed + - with filtering existing indexed data prior to compute new index + - without filtering existing indexed data prior to compute new index + + """ + def setUp(self): + self.indexer = FossologyLicenseRangeIndexerTest() + # will play along with the objstorage's mocked contents for now + self.contents = sorted(self.indexer.objstorage) + # FIXME: leverage swh.objstorage.in_memory_storage's + # InMemoryObjStorage, swh.storage.tests's gen_contents, and + # hypothesis to generate data to actually run indexer on those + + self.id0 = '01c9379dfc33803963d07c1ccc748d3fe4c96bb5' + self.id1 = '02fb2c89e14f7fab46701478c83779c7beb7b069' + self.id2 = '103bc087db1d26afc3a0283f38663d081e9b01e6' + tool_id = self.indexer.tool['id'] + self.expected_results = { + self.id0: { + 'id': self.id0, + 'indexer_configuration_id': tool_id, + 'licenses': SHA1_TO_LICENSES[self.id0] + }, + self.id1: { + 'id': self.id1, + 'indexer_configuration_id': tool_id, + 'licenses': SHA1_TO_LICENSES[self.id1] + }, + self.id2: { + 'id': self.id2, + 'indexer_configuration_id': tool_id, + 'licenses': SHA1_TO_LICENSES[self.id2] + } + } + + +class 
FossologyLicenseIndexerUnknownToolTestStorage( + CommonIndexerNoTool, FossologyLicenseTestIndexer): + """Fossology license indexer with wrong configuration""" + + +class FossologyLicenseRangeIndexerUnknownToolTestStorage( + CommonIndexerNoTool, FossologyLicenseRangeIndexerTest): + """Fossology license range indexer with wrong configuration""" + + +class TestFossologyLicenseIndexersErrors( + CommonIndexerWithErrorsTest, unittest.TestCase): + """Test the indexer raise the right errors when wrongly initialized""" + Indexer = FossologyLicenseIndexerUnknownToolTestStorage + RangeIndexer = FossologyLicenseRangeIndexerUnknownToolTestStorage diff --git a/swh/indexer/tests/test_language.py b/swh/indexer/tests/test_language.py index 166cc46..ac182ad 100644 --- a/swh/indexer/tests/test_language.py +++ b/swh/indexer/tests/test_language.py @@ -1,107 +1,99 @@ # Copyright (C) 2017-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest import logging from swh.indexer import language from swh.indexer.language import ContentLanguageIndexer -from swh.indexer.tests.test_utils import MockObjStorage +from swh.indexer.tests.test_utils import ( + BasicMockIndexerStorage, MockObjStorage, CommonContentIndexerTest, + CommonIndexerWithErrorsTest, CommonIndexerNoTool +) -class _MockIndexerStorage(): - """Mock storage to simplify reading indexers' outputs. - """ - def content_language_add(self, languages, conflict_update=None): - self.state = languages - self.conflict_update = conflict_update - - def indexer_configuration_add(self, tools): - return [{ - 'id': 20, - }] - - -class TestLanguageIndexer(ContentLanguageIndexer): +class LanguageTestIndexer(ContentLanguageIndexer): """Specific language whose configuration is enough to satisfy the indexing tests. 
""" def prepare(self): self.config = { - 'destination_task': None, 'tools': { 'name': 'pygments', 'version': '2.0.1+dfsg-1.1+deb8u1', 'configuration': { 'type': 'library', 'debian-package': 'python3-pygments', 'max_content_size': 10240, }, } } - self.idx_storage = _MockIndexerStorage() + self.idx_storage = BasicMockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.destination_task = None self.tool_config = self.config['tools']['configuration'] self.max_content_size = self.tool_config['max_content_size'] self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] class Language(unittest.TestCase): - """ - Tests pygments tool for language detection - """ - def setUp(self): - self.maxDiff = None + """Tests pygments tool for language detection + """ def test_compute_language_none(self): # given self.content = "" self.declared_language = { 'lang': None } # when result = language.compute_language(self.content) # then self.assertEqual(self.declared_language, result) - def test_index_content_language_python(self): - # given - # testing python - sha1s = ['02fb2c89e14f7fab46701478c83779c7beb7b069'] - lang_indexer = TestLanguageIndexer() - # when - lang_indexer.run(sha1s, policy_update='ignore-dups') - results = lang_indexer.idx_storage.state - - expected_results = [{ - 'id': '02fb2c89e14f7fab46701478c83779c7beb7b069', - 'indexer_configuration_id': 20, - 'lang': 'python' - }] - # then - self.assertEqual(expected_results, results) +class TestLanguageIndexer(CommonContentIndexerTest, unittest.TestCase): + """Language indexer test scenarios: - def test_index_content_language_c(self): - # given - # testing c - sha1s = ['103bc087db1d26afc3a0283f38663d081e9b01e6'] - lang_indexer = TestLanguageIndexer() + - Known sha1s in the input list have their data indexed + - Unknown sha1 in the input list are not indexed - # when - lang_indexer.run(sha1s, policy_update='ignore-dups') - results = 
                                  lang_indexer.idx_storage.state
+    """
+    def setUp(self):
+        self.indexer = LanguageTestIndexer()
+
+        self.id0 = '02fb2c89e14f7fab46701478c83779c7beb7b069'
+        self.id1 = '103bc087db1d26afc3a0283f38663d081e9b01e6'
+        self.id2 = 'd4c647f0fc257591cc9ba1722484229780d1c607'
+        tool_id = self.indexer.tool['id']
+
+        self.expected_results = {
+            self.id0: {
+                'id': self.id0,
+                'indexer_configuration_id': tool_id,
+                'lang': 'python',
+            },
+            self.id1: {
+                'id': self.id1,
+                'indexer_configuration_id': tool_id,
+                'lang': 'c'
+            },
+            self.id2: {
+                'id': self.id2,
+                'indexer_configuration_id': tool_id,
+                'lang': 'text-only'
+            }
+        }
-        expected_results = [{
-            'id': '103bc087db1d26afc3a0283f38663d081e9b01e6',
-            'indexer_configuration_id': 20,
-            'lang': 'c'
-        }]
-        # then
-        self.assertEqual('c', results[0]['lang'])
-        self.assertEqual(expected_results, results)
+class LanguageIndexerUnknownToolTestStorage(
+        CommonIndexerNoTool, LanguageTestIndexer):
+    """Language indexer with wrong configuration"""
+
+
+class TestLanguageIndexersErrors(
+        CommonIndexerWithErrorsTest, unittest.TestCase):
+    """Test the indexer raises the right errors when wrongly initialized"""
+    Indexer = LanguageIndexerUnknownToolTestStorage
diff --git a/swh/indexer/tests/test_metadata.py b/swh/indexer/tests/test_metadata.py
index 6951af9..f36cd1d 100644
--- a/swh/indexer/tests/test_metadata.py
+++ b/swh/indexer/tests/test_metadata.py
@@ -1,378 +1,478 @@
 # Copyright (C) 2017-2018 The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 import unittest
 import logging
 from swh.indexer.metadata_dictionary import CROSSWALK_TABLE, MAPPINGS
 from swh.indexer.metadata_detector import detect_metadata
 from swh.indexer.metadata_detector import extract_minimal_metadata_dict
 from swh.indexer.metadata import ContentMetadataIndexer
 from swh.indexer.metadata import
RevisionMetadataIndexer from swh.indexer.tests.test_utils import MockObjStorage, MockStorage from swh.indexer.tests.test_utils import MockIndexerStorage -class TestContentMetadataIndexer(ContentMetadataIndexer): +class ContentMetadataTestIndexer(ContentMetadataIndexer): """Specific Metadata whose configuration is enough to satisfy the indexing tests. """ def prepare(self): self.idx_storage = MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.destination_task = None self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] self.results = [] -class TestRevisionMetadataIndexer(RevisionMetadataIndexer): +class RevisionMetadataTestIndexer(RevisionMetadataIndexer): """Specific indexer whose configuration is enough to satisfy the indexing tests. """ - ContentMetadataIndexer = TestContentMetadataIndexer + ContentMetadataIndexer = ContentMetadataTestIndexer def prepare(self): self.config = { 'storage': { 'cls': 'remote', 'args': { 'url': 'http://localhost:9999', } }, 'tools': { 'name': 'swh-metadata-detector', 'version': '0.0.2', 'configuration': { 'type': 'local', 'context': 'NpmMapping' } } } self.storage = MockStorage() self.idx_storage = MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.destination_task = None self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] self.results = [] class Metadata(unittest.TestCase): """ Tests metadata_mock_tool tool for Metadata detection """ def setUp(self): """ shows the entire diff in the results """ self.maxDiff = None self.content_tool = { 'name': 'swh-metadata-translator', 'version': '0.0.2', 'configuration': { 'type': 'local', 'context': 'NpmMapping' } } MockIndexerStorage.added_data = [] def test_crosstable(self): self.assertEqual(CROSSWALK_TABLE['NodeJS'], { - 'repository': 'codeRepository', - 'os': 'operatingSystem', - 'cpu': 'processorRequirements', - 'engines': 
'processorRequirements', - 'dependencies': 'softwareRequirements', - 'bundleDependencies': 'softwareRequirements', - 'bundledDependencies': 'softwareRequirements', - 'peerDependencies': 'softwareRequirements', - 'author': 'creator', - 'author.email': 'email', - 'author.name': 'name', - 'contributor': 'contributor', - 'keywords': 'keywords', - 'license': 'license', - 'version': 'version', - 'description': 'description', - 'name': 'name', - 'devDependencies': 'softwareSuggestions', - 'optionalDependencies': 'softwareSuggestions', - 'bugs': 'issueTracker', - 'homepage': 'url' + 'repository': 'http://schema.org/codeRepository', + 'os': 'http://schema.org/operatingSystem', + 'cpu': 'http://schema.org/processorRequirements', + 'engines': + 'http://schema.org/processorRequirements', + 'author': 'http://schema.org/author', + 'author.email': 'http://schema.org/email', + 'author.name': 'http://schema.org/name', + 'contributor': 'http://schema.org/contributor', + 'keywords': 'http://schema.org/keywords', + 'license': 'http://schema.org/license', + 'version': 'http://schema.org/version', + 'description': 'http://schema.org/description', + 'name': 'http://schema.org/name', + 'bugs': 'https://codemeta.github.io/terms/issueTracker', + 'homepage': 'http://schema.org/url' }) def test_compute_metadata_none(self): """ testing content empty content is empty should return None """ # given content = b"" # None if no metadata was found or an error occurred declared_metadata = None # when result = MAPPINGS["NpmMapping"].translate(content) # then self.assertEqual(declared_metadata, result) def test_compute_metadata_npm(self): """ testing only computation of metadata with hard_mapping_npm """ # given content = b""" { "name": "test_metadata", "version": "0.0.2", "description": "Simple package.json test for indexer", "repository": { "type": "git", "url": "https://github.com/moranegg/metadata_test" + }, + "author": { + "email": "moranegg@example.com", + "name": "Morane G" } } """ 
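The package.json-to-codemeta translation tested here can be approximated with a small self-contained sketch. The `CROSSWALK` dict and `translate` function below are simplified hypothetical stand-ins for `CROSSWALK_TABLE['NodeJS']` and `MAPPINGS['NpmMapping'].translate` (which map to full schema.org/codemeta IRIs), not the real implementation:

```python
import json

# toy subset of the NodeJS -> codemeta crosswalk
CROSSWALK = {
    'name': 'name',
    'version': 'version',
    'description': 'description',
    'bugs': 'issueTracker',
    'homepage': 'url',
}

def translate(content):
    """Map raw package.json bytes to a codemeta-flavoured dict,
    returning None when the content cannot be parsed."""
    try:
        package = json.loads(content.decode())
    except (ValueError, UnicodeDecodeError):
        return None
    out = {'@context': 'https://doi.org/10.5063/schema/codemeta-2.0',
           'type': 'SoftwareSourceCode'}
    for key, term in CROSSWALK.items():
        if key in package:
            out[term] = package[key]
    return out

assert translate(b'') is None  # empty/unparseable content yields no metadata
result = translate(b'{"name": "test_metadata", "version": "0.0.2"}')
assert result['name'] == 'test_metadata'
assert result['version'] == '0.0.2'
```

Returning `None` on unparseable input mirrors `test_compute_metadata_none` above.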
declared_metadata = { + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + 'type': 'SoftwareSourceCode', 'name': 'test_metadata', 'version': '0.0.2', 'description': 'Simple package.json test for indexer', - 'codeRepository': { - 'type': 'git', - 'url': 'https://github.com/moranegg/metadata_test' - }, - 'other': {} + 'schema:codeRepository': + 'git+https://github.com/moranegg/metadata_test', + 'schema:author': { + 'type': 'Person', + 'name': 'Morane G', + 'email': 'moranegg@example.com', + }, } # when result = MAPPINGS["NpmMapping"].translate(content) # then self.assertEqual(declared_metadata, result) def test_extract_minimal_metadata_dict(self): """ Test the creation of a coherent minimal metadata set """ # given metadata_list = [{ + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', 'name': 'test_1', 'version': '0.0.2', 'description': 'Simple package.json test for indexer', - 'codeRepository': { - 'type': 'git', - 'url': 'https://github.com/moranegg/metadata_test' - }, - 'other': {} + 'schema:codeRepository': + 'git+https://github.com/moranegg/metadata_test', }, { + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', 'name': 'test_0_1', 'version': '0.0.2', 'description': 'Simple package.json test for indexer', - 'codeRepository': { - 'type': 'git', - 'url': 'https://github.com/moranegg/metadata_test' - }, - 'other': {} + 'schema:codeRepository': + 'git+https://github.com/moranegg/metadata_test' }, { + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', 'name': 'test_metadata', 'version': '0.0.2', - 'author': 'moranegg', - 'other': {} + 'schema:author': 'moranegg', }] # when results = extract_minimal_metadata_dict(metadata_list) # then expected_results = { - "developmentStatus": None, - "version": ['0.0.2'], - "operatingSystem": None, - "description": ['Simple package.json test for indexer'], - "keywords": None, - "issueTracker": None, + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + "version": '0.0.2', + "description": 
'Simple package.json test for indexer', "name": ['test_1', 'test_0_1', 'test_metadata'], - "author": ['moranegg'], - "relatedLink": None, - "url": None, - "license": None, - "maintainer": None, - "email": None, - "softwareRequirements": None, - "identifier": None, - "codeRepository": [{ - 'type': 'git', - 'url': 'https://github.com/moranegg/metadata_test' - }] + "schema:author": 'moranegg', + "schema:codeRepository": + 'git+https://github.com/moranegg/metadata_test', } self.assertEqual(expected_results, results) def test_index_content_metadata_npm(self): """ testing NPM with package.json - one sha1 uses a file that can't be translated to metadata and should return None in the translated metadata """ # given sha1s = ['26a9f72a7c87cc9205725cfd879f514ff4f3d8d5', 'd4c647f0fc257591cc9ba1722484229780d1c607', '02fb2c89e14f7fab46701478c83779c7beb7b069'] # this metadata indexer computes only metadata for package.json # in npm context with a hard mapping - metadata_indexer = TestContentMetadataIndexer( + metadata_indexer = ContentMetadataTestIndexer( tool=self.content_tool, config={}) # when metadata_indexer.run(sha1s, policy_update='ignore-dups') results = metadata_indexer.idx_storage.added_data expected_results = [('content_metadata', False, [{ 'indexer_configuration_id': 30, 'translated_metadata': { - 'other': {}, - 'codeRepository': { - 'type': 'git', - 'url': 'https://github.com/moranegg/metadata_test' - }, + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + 'type': 'SoftwareSourceCode', + 'schema:codeRepository': + 'git+https://github.com/moranegg/metadata_test', 'description': 'Simple package.json test for indexer', 'name': 'test_metadata', 'version': '0.0.1' }, 'id': '26a9f72a7c87cc9205725cfd879f514ff4f3d8d5' }, { 'indexer_configuration_id': 30, 'translated_metadata': { - 'softwareRequirements': { - 'JSONStream': '~1.3.1', - 'abbrev': '~1.1.0', - 'ansi-regex': '~2.1.1', - 'ansicolors': '~0.3.2', - 'ansistyles': '~0.1.3' - }, - 'issueTracker': { - 'url': 
'https://github.com/npm/npm/issues' - }, - 'creator': - 'Isaac Z. Schlueter (http://blog.izs.me)', - 'codeRepository': { - 'type': 'git', - 'url': 'https://github.com/npm/npm' + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + 'type': 'SoftwareSourceCode', + 'codemeta:issueTracker': + 'https://github.com/npm/npm/issues', + 'schema:author': { + 'type': 'Person', + 'name': 'Isaac Z. Schlueter', + 'email': 'i@izs.me', + 'schema:url': 'http://blog.izs.me', }, + 'schema:codeRepository': + 'git+https://github.com/npm/npm', 'description': 'a package manager for JavaScript', - 'softwareSuggestions': { - 'tacks': '~1.2.6', - 'tap': '~10.3.2' - }, - 'license': 'Artistic-2.0', + 'schema:license': 'Artistic-2.0', 'version': '5.0.3', - 'other': { - 'preferGlobal': True, - 'config': { - 'publishtest': False - } - }, 'name': 'npm', 'keywords': [ 'install', 'modules', 'package manager', 'package.json' ], - 'url': 'https://docs.npmjs.com/' + 'schema:url': 'https://docs.npmjs.com/' }, 'id': 'd4c647f0fc257591cc9ba1722484229780d1c607' }, { 'indexer_configuration_id': 30, 'translated_metadata': None, 'id': '02fb2c89e14f7fab46701478c83779c7beb7b069' }])] # The assertion below returns False sometimes because of nested lists self.assertEqual(expected_results, results) def test_detect_metadata_package_json(self): # given df = [{ 'sha1_git': b'abc', 'name': b'index.js', 'target': b'abc', 'length': 897, 'status': 'visible', 'type': 'file', 'perms': 33188, 'dir_id': b'dir_a', 'sha1': b'bcd' }, { 'sha1_git': b'aab', 'name': b'package.json', 'target': b'aab', 'length': 712, 'status': 'visible', 'type': 'file', 'perms': 33188, 'dir_id': b'dir_a', 'sha1': b'cde' }] # when results = detect_metadata(df) expected_results = { 'NpmMapping': [ b'cde' ] } # then self.assertEqual(expected_results, results) + def test_compute_metadata_valid_codemeta(self): + raw_content = ( + b"""{ + "@context": "https://doi.org/10.5063/schema/codemeta-2.0", + "@type": "SoftwareSourceCode", + "identifier": 
"CodeMeta", + "description": "CodeMeta is a concept vocabulary that can be used to standardize the exchange of software metadata across repositories and organizations.", + "name": "CodeMeta: Minimal metadata schemas for science software and code, in JSON-LD", + "codeRepository": "https://github.com/codemeta/codemeta", + "issueTracker": "https://github.com/codemeta/codemeta/issues", + "license": "https://spdx.org/licenses/Apache-2.0", + "version": "2.0", + "author": [ + { + "@type": "Person", + "givenName": "Carl", + "familyName": "Boettiger", + "email": "cboettig@gmail.com", + "@id": "http://orcid.org/0000-0002-1642-628X" + }, + { + "@type": "Person", + "givenName": "Matthew B.", + "familyName": "Jones", + "email": "jones@nceas.ucsb.edu", + "@id": "http://orcid.org/0000-0003-0077-4738" + } + ], + "maintainer": { + "@type": "Person", + "givenName": "Carl", + "familyName": "Boettiger", + "email": "cboettig@gmail.com", + "@id": "http://orcid.org/0000-0002-1642-628X" + }, + "contIntegration": "https://travis-ci.org/codemeta/codemeta", + "developmentStatus": "active", + "downloadUrl": "https://github.com/codemeta/codemeta/archive/2.0.zip", + "funder": { + "@id": "https://doi.org/10.13039/100000001", + "@type": "Organization", + "name": "National Science Foundation" + }, + "funding":"1549758; Codemeta: A Rosetta Stone for Metadata in Scientific Software", + "keywords": [ + "metadata", + "software" + ], + "version":"2.0", + "dateCreated":"2017-06-05", + "datePublished":"2017-06-05", + "programmingLanguage": "JSON-LD" + }""") # noqa + expected_result = { + "@context": "https://doi.org/10.5063/schema/codemeta-2.0", + "type": "SoftwareSourceCode", + "identifier": "CodeMeta", + "description": + "CodeMeta is a concept vocabulary that can " + "be used to standardize the exchange of software metadata " + "across repositories and organizations.", + "name": + "CodeMeta: Minimal metadata schemas for science " + "software and code, in JSON-LD", + "codeRepository": 
"https://github.com/codemeta/codemeta", + "issueTracker": "https://github.com/codemeta/codemeta/issues", + "license": "https://spdx.org/licenses/Apache-2.0", + "version": "2.0", + "author": [ + { + "type": "Person", + "givenName": "Carl", + "familyName": "Boettiger", + "email": "cboettig@gmail.com", + "id": "http://orcid.org/0000-0002-1642-628X" + }, + { + "type": "Person", + "givenName": "Matthew B.", + "familyName": "Jones", + "email": "jones@nceas.ucsb.edu", + "id": "http://orcid.org/0000-0003-0077-4738" + } + ], + "maintainer": { + "type": "Person", + "givenName": "Carl", + "familyName": "Boettiger", + "email": "cboettig@gmail.com", + "id": "http://orcid.org/0000-0002-1642-628X" + }, + "contIntegration": "https://travis-ci.org/codemeta/codemeta", + "developmentStatus": "active", + "downloadUrl": + "https://github.com/codemeta/codemeta/archive/2.0.zip", + "funder": { + "id": "https://doi.org/10.13039/100000001", + "type": "Organization", + "name": "National Science Foundation" + }, + "funding": "1549758; Codemeta: A Rosetta Stone for Metadata " + "in Scientific Software", + "keywords": [ + "metadata", + "software" + ], + "version": "2.0", + "dateCreated": "2017-06-05", + "datePublished": "2017-06-05", + "programmingLanguage": "JSON-LD" + } + result = MAPPINGS["CodemetaMapping"].translate(raw_content) + self.assertEqual(result, expected_result) + + def test_compute_metadata_maven(self): + raw_content = b""" + + Maven Default Project + 4.0.0 + com.mycompany.app + my-app + 1.2.3 + + + central + Maven Repository Switchboard + default + http://repo1.maven.org/maven2 + + false + + + + """ + result = MAPPINGS["MavenMapping"].translate(raw_content) + self.assertEqual(result, { + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + 'type': 'SoftwareSourceCode', + 'name': 'Maven Default Project', + 'schema:identifier': 'com.mycompany.app', + 'version': '1.2.3', + 'schema:codeRepository': + 'http://repo1.maven.org/maven2/com/mycompany/app/my-app', + }) + def 
test_revision_metadata_indexer(self): - metadata_indexer = TestRevisionMetadataIndexer() + metadata_indexer = RevisionMetadataTestIndexer() sha1_gits = [ b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', ] metadata_indexer.run(sha1_gits, 'update-dups') results = metadata_indexer.idx_storage.added_data expected_results = [('revision_metadata', True, [{ 'id': '8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', 'translated_metadata': { - 'identifier': None, - 'maintainer': None, - 'url': [ - 'https://github.com/librariesio/yarn-parser#readme' - ], - 'codeRepository': [{ - 'type': 'git', - 'url': 'git+https://github.com/librariesio/yarn-parser.git' - }], - 'author': ['Andrew Nesbitt'], - 'license': ['AGPL-3.0'], - 'version': ['1.0.0'], - 'description': [ - 'Tiny web service for parsing yarn.lock files' - ], - 'relatedLink': None, - 'developmentStatus': None, - 'operatingSystem': None, - 'issueTracker': [{ - 'url': 'https://github.com/librariesio/yarn-parser/issues' - }], - 'softwareRequirements': [{ - 'express': '^4.14.0', - 'yarn': '^0.21.0', - 'body-parser': '^1.15.2' - }], - 'name': ['yarn-parser'], - 'keywords': [['yarn', 'parse', 'lock', 'dependencies']], - 'email': None + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + 'url': + 'https://github.com/librariesio/yarn-parser#readme', + 'schema:codeRepository': + 'git+https://github.com/librariesio/yarn-parser.git', + 'schema:author': 'Andrew Nesbitt', + 'license': 'AGPL-3.0', + 'version': '1.0.0', + 'description': + 'Tiny web service for parsing yarn.lock files', + 'codemeta:issueTracker': + 'https://github.com/librariesio/yarn-parser/issues', + 'name': 'yarn-parser', + 'keywords': ['yarn', 'parse', 'lock', 'dependencies'], }, 'indexer_configuration_id': 7 }])] # then self.assertEqual(expected_results, results) diff --git a/swh/indexer/tests/test_mimetype.py b/swh/indexer/tests/test_mimetype.py index 4632bcb..70d2e1d 100644 --- a/swh/indexer/tests/test_mimetype.py +++ b/swh/indexer/tests/test_mimetype.py @@ -1,150 
+1,163 @@ # Copyright (C) 2017-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest import logging -from swh.indexer.mimetype import ContentMimetypeIndexer +from swh.indexer.mimetype import ( + ContentMimetypeIndexer, MimetypeRangeIndexer +) -from swh.indexer.tests.test_utils import MockObjStorage +from swh.indexer.tests.test_utils import ( + MockObjStorage, BasicMockStorage, BasicMockIndexerStorage, + CommonContentIndexerTest, CommonContentIndexerRangeTest, + CommonIndexerWithErrorsTest, CommonIndexerNoTool +) -class _MockIndexerStorage(): - """Mock storage to simplify reading indexers' outputs. +class MimetypeTestIndexer(ContentMimetypeIndexer): + """Specific mimetype indexer instance whose configuration is enough to + satisfy the indexing tests. """ - def content_mimetype_add(self, mimetypes, conflict_update=None): - self.state = mimetypes - self.conflict_update = conflict_update + def prepare(self): + self.config = { + 'tools': { + 'name': 'file', + 'version': '1:5.30-1+deb9u1', + 'configuration': { + "type": "library", + "debian-package": "python3-magic" + }, + }, + } + self.idx_storage = BasicMockIndexerStorage() + self.log = logging.getLogger('swh.indexer') + self.objstorage = MockObjStorage() + self.tools = self.register_tools(self.config['tools']) + self.tool = self.tools[0] + - def indexer_configuration_add(self, tools): - return [{ - 'id': 10, - }] +class TestMimetypeIndexer(CommonContentIndexerTest, unittest.TestCase): + """Mimetype indexer test scenarios: + - Known sha1s in the input list have their data indexed + - Unknown sha1 in the input list are not indexed -class TestMimetypeIndexer(ContentMimetypeIndexer): + """ + def setUp(self): + self.indexer = MimetypeTestIndexer() + + self.id0 = '01c9379dfc33803963d07c1ccc748d3fe4c96bb5' + self.id1 = 
'688a5ef812c53907562fe379d4b3851e69c7cb15' + self.id2 = 'da39a3ee5e6b4b0d3255bfef95601890afd80709' + tool_id = self.indexer.tool['id'] + self.expected_results = { + self.id0: { + 'id': self.id0, + 'indexer_configuration_id': tool_id, + 'mimetype': b'text/plain', + 'encoding': b'us-ascii', + }, + self.id1: { + 'id': self.id1, + 'indexer_configuration_id': tool_id, + 'mimetype': b'text/plain', + 'encoding': b'us-ascii', + }, + self.id2: { + 'id': self.id2, + 'indexer_configuration_id': tool_id, + 'mimetype': b'application/x-empty', + 'encoding': b'binary', + } + } + + +class MimetypeRangeIndexerTest(MimetypeRangeIndexer): """Specific mimetype whose configuration is enough to satisfy the indexing tests. """ def prepare(self): self.config = { - 'destination_task': None, 'tools': { 'name': 'file', 'version': '1:5.30-1+deb9u1', 'configuration': { "type": "library", "debian-package": "python3-magic" }, }, + 'write_batch_size': 100, } - self.idx_storage = _MockIndexerStorage() + self.idx_storage = BasicMockIndexerStorage() self.log = logging.getLogger('swh.indexer') + # this hardcodes some contents, will use this to setup the storage self.objstorage = MockObjStorage() - self.destination_task = self.config['destination_task'] + # sync objstorage and storage + contents = [{'sha1': c_id} for c_id in self.objstorage] + self.storage = BasicMockStorage(contents) self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] -class TestMimetypeIndexerUnknownToolStorage(TestMimetypeIndexer): - """Specific mimetype whose configuration is not enough to satisfy the - indexing tests. +class TestMimetypeRangeIndexer( + CommonContentIndexerRangeTest, unittest.TestCase): + """Range Mimetype Indexer tests. 
+ + - new data within range are indexed + - no data outside a range are indexed + - with filtering existing indexed data prior to compute new index + - without filtering existing indexed data prior to compute new index """ - def prepare(self): - super().prepare() - self.tools = None + def setUp(self): + self.indexer = MimetypeRangeIndexerTest() + # will play along with the objstorage's mocked contents for now + self.contents = sorted(self.indexer.objstorage) + # FIXME: leverage swh.objstorage.in_memory_storage's + # InMemoryObjStorage, swh.storage.tests's gen_contents, and + # hypothesis to generate data to actually run indexer on those + + self.id0 = '01c9379dfc33803963d07c1ccc748d3fe4c96bb5' + self.id1 = '02fb2c89e14f7fab46701478c83779c7beb7b069' + self.id2 = '103bc087db1d26afc3a0283f38663d081e9b01e6' + tool_id = self.indexer.tool['id'] + + self.expected_results = { + self.id0: { + 'encoding': b'us-ascii', + 'id': self.id0, + 'indexer_configuration_id': tool_id, + 'mimetype': b'text/plain'}, + self.id1: { + 'encoding': b'us-ascii', + 'id': self.id1, + 'indexer_configuration_id': tool_id, + 'mimetype': b'text/x-python'}, + self.id2: { + 'encoding': b'us-ascii', + 'id': self.id2, + 'indexer_configuration_id': tool_id, + 'mimetype': b'text/plain'} + } -class TestMimetypeIndexerWithErrors(unittest.TestCase): - def test_wrong_unknown_configuration_tool(self): - """Indexer with unknown configuration tool should fail the check""" - with self.assertRaisesRegex(ValueError, 'Tools None is unknown'): - TestMimetypeIndexerUnknownToolStorage() +class MimetypeIndexerUnknownToolTestStorage( + CommonIndexerNoTool, MimetypeTestIndexer): + """Mimetype indexer with wrong configuration""" -class TestMimetypeIndexerTest(unittest.TestCase): - def setUp(self): - self.indexer = TestMimetypeIndexer() - - def test_index_no_update(self): - # given - sha1s = [ - '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', - '688a5ef812c53907562fe379d4b3851e69c7cb15', - ] - - # when - 
self.indexer.run(sha1s, policy_update='ignore-dups') - - # then - expected_results = [{ - 'id': '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', - 'indexer_configuration_id': 10, - 'mimetype': b'text/plain', - 'encoding': b'us-ascii', - }, { - 'id': '688a5ef812c53907562fe379d4b3851e69c7cb15', - 'indexer_configuration_id': 10, - 'mimetype': b'text/plain', - 'encoding': b'us-ascii', - }] - - self.assertFalse(self.indexer.idx_storage.conflict_update) - self.assertEqual(expected_results, self.indexer.idx_storage.state) - - def test_index_update(self): - # given - sha1s = [ - '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', - '688a5ef812c53907562fe379d4b3851e69c7cb15', - 'da39a3ee5e6b4b0d3255bfef95601890afd80709', # empty content - ] - - # when - self.indexer.run(sha1s, policy_update='update-dups') - - # then - expected_results = [{ - 'id': '01c9379dfc33803963d07c1ccc748d3fe4c96bb5', - 'indexer_configuration_id': 10, - 'mimetype': b'text/plain', - 'encoding': b'us-ascii', - }, { - 'id': '688a5ef812c53907562fe379d4b3851e69c7cb15', - 'indexer_configuration_id': 10, - 'mimetype': b'text/plain', - 'encoding': b'us-ascii', - }, { - 'id': 'da39a3ee5e6b4b0d3255bfef95601890afd80709', - 'indexer_configuration_id': 10, - 'mimetype': b'application/x-empty', - 'encoding': b'binary', - }] - - self.assertTrue(self.indexer.idx_storage.conflict_update) - self.assertEqual(expected_results, self.indexer.idx_storage.state) - - def test_index_one_unknown_sha1(self): - # given - sha1s = ['688a5ef812c53907562fe379d4b3851e69c7cb15', - '799a5ef812c53907562fe379d4b3851e69c7cb15', # unknown - '800a5ef812c53907562fe379d4b3851e69c7cb15'] # unknown - - # when - self.indexer.run(sha1s, policy_update='update-dups') - - # then - expected_results = [{ - 'id': '688a5ef812c53907562fe379d4b3851e69c7cb15', - 'indexer_configuration_id': 10, - 'mimetype': b'text/plain', - 'encoding': b'us-ascii', - }] - - self.assertTrue(self.indexer.idx_storage.conflict_update) - self.assertEqual(expected_results, 
self.indexer.idx_storage.state) +class MimetypeRangeIndexerUnknownToolTestStorage( + CommonIndexerNoTool, MimetypeRangeIndexerTest): + """Mimetype range indexer with wrong configuration""" + + +class TestMimetypeIndexersErrors( + CommonIndexerWithErrorsTest, unittest.TestCase): + """Test that the indexers raise the right errors when wrongly initialized""" + Indexer = MimetypeIndexerUnknownToolTestStorage + RangeIndexer = MimetypeRangeIndexerUnknownToolTestStorage diff --git a/swh/indexer/tests/test_orchestrator.py b/swh/indexer/tests/test_orchestrator.py deleted file mode 100644 index 0fa2da9..0000000 --- a/swh/indexer/tests/test_orchestrator.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (C) 2018 The Software Heritage developers -# See the AUTHORS file at the top-level directory of this distribution -# License: GNU General Public License version 3, or any later version -# See top-level LICENSE file for more information - -import unittest - -import celery - -from swh.indexer.orchestrator import BaseOrchestratorIndexer -from swh.indexer.indexer import BaseIndexer -from swh.indexer.tests.test_utils import MockIndexerStorage, MockStorage -from swh.scheduler.tests.scheduler_testing import SchedulerTestFixture - - -class BaseTestIndexer(BaseIndexer): - ADDITIONAL_CONFIG = { - 'tools': ('dict', { - 'name': 'foo', - 'version': 'bar', - 'configuration': {} - }), - } - - def prepare(self): - self.idx_storage = MockIndexerStorage() - self.storage = MockStorage() - - def check(self): - pass - - def filter(self, ids): - self.filtered.append(ids) - return ids - - def run(self, ids, policy_update): - return self.index(ids) - - def index(self, ids): - self.indexed.append(ids) - return [id_ + '_indexed_by_' + self.__class__.__name__ - for id_ in ids] - - def persist_index_computations(self, result, policy_update): - self.persisted = result - - -class Indexer1(BaseTestIndexer): - filtered = [] - indexed = [] - - def filter(self, ids): - return super().filter([id_ for id_ in ids 
if '1' in id_]) - - -class Indexer2(BaseTestIndexer): - filtered = [] - indexed = [] - - def filter(self, ids): - return super().filter([id_ for id_ in ids if '2' in id_]) - - -class Indexer3(BaseTestIndexer): - filtered = [] - indexed = [] - - def filter(self, ids): - return super().filter([id_ for id_ in ids if '3' in id_]) - - -@celery.task -def indexer1_task(*args, **kwargs): - return Indexer1().run(*args, **kwargs) - - -@celery.task -def indexer2_task(*args, **kwargs): - return Indexer2().run(*args, **kwargs) - - -@celery.task -def indexer3_task(self, *args, **kwargs): - return Indexer3().run(*args, **kwargs) - - -class TestOrchestrator12(BaseOrchestratorIndexer): - TASK_NAMES = { - 'indexer1': 'swh.indexer.tests.test_orchestrator.indexer1_task', - 'indexer2': 'swh.indexer.tests.test_orchestrator.indexer2_task', - 'indexer3': 'swh.indexer.tests.test_orchestrator.indexer3_task', - } - - INDEXER_CLASSES = { - 'indexer1': 'swh.indexer.tests.test_orchestrator.Indexer1', - 'indexer2': 'swh.indexer.tests.test_orchestrator.Indexer2', - 'indexer3': 'swh.indexer.tests.test_orchestrator.Indexer3', - } - - def __init__(self): - super().__init__() - self.running_tasks = [] - - def parse_config_file(self): - return { - 'scheduler': { - 'cls': 'remote', - 'args': { - 'url': 'http://localhost:9999', - }, - }, - 'indexers': { - 'indexer1': { - 'batch_size': 2, - 'check_presence': True, - }, - 'indexer2': { - 'batch_size': 2, - 'check_presence': True, - }, - } - } - - -class MockedTestOrchestrator12(TestOrchestrator12): - def __init__(self): - super().__init__() - self.created_tasks = [] - - def _create_tasks(self, celery_tasks): - self.created_tasks.extend(celery_tasks) - - def prepare_scheduler(self): - pass - - -class OrchestratorTest(SchedulerTestFixture, unittest.TestCase): - def setUp(self): - super().setUp() - self.add_scheduler_task_type( - 'indexer1', - 'swh.indexer.tests.test_orchestrator.indexer1_task') - self.add_scheduler_task_type( - 'indexer2', - 
'swh.indexer.tests.test_orchestrator.indexer2_task') - - def test_orchestrator_filter(self): - o = TestOrchestrator12() - o.scheduler = self.scheduler - o.run(['id12', 'id2']) - self.assertEqual(Indexer2.indexed, []) - self.assertEqual(Indexer1.indexed, []) - self.run_ready_tasks() - self.assertEqual(Indexer2.indexed, [['id12', 'id2']]) - self.assertEqual(Indexer1.indexed, [['id12']]) - - -class MockedOrchestratorTest(unittest.TestCase): - maxDiff = None - - def test_mocked_orchestrator_filter(self): - o = MockedTestOrchestrator12() - o.run(['id12', 'id2']) - for task in o.created_tasks: - del task['next_run'] # not worth the trouble testing it properly - self.assertCountEqual(o.created_tasks, [ - {'type': 'indexer1', - 'arguments': { - 'args': [], - 'kwargs': { - 'ids': ['id12'], - 'policy_update': 'ignore-dups'}}, - 'policy': 'oneshot'}, - {'type': 'indexer2', - 'arguments': { - 'args': [], - 'kwargs': { - 'ids': ['id12', 'id2'], - 'policy_update': 'ignore-dups'}}, - 'policy': 'oneshot'}, - ]) - - def test_mocked_orchestrator_batch(self): - o = MockedTestOrchestrator12() - o.run(['id12', 'id2a', 'id2b', 'id2c']) - for task in o.created_tasks: - del task['next_run'] # not worth the trouble testing it properly - self.assertCountEqual(o.created_tasks, [ - {'type': 'indexer1', - 'arguments': { - 'args': [], - 'kwargs': { - 'ids': ['id12'], - 'policy_update': 'ignore-dups'}}, - 'policy': 'oneshot'}, - {'type': 'indexer2', - 'arguments': { - 'args': [], - 'kwargs': { - 'ids': ['id12', 'id2a'], - 'policy_update': 'ignore-dups'}}, - 'policy': 'oneshot'}, - {'type': 'indexer2', - 'arguments': { - 'args': [], - 'kwargs': { - 'ids': ['id2b', 'id2c'], - 'policy_update': 'ignore-dups'}}, - 'policy': 'oneshot'}, - ]) diff --git a/swh/indexer/tests/test_origin_head.py b/swh/indexer/tests/test_origin_head.py index 63d9ffd..335ced7 100644 --- a/swh/indexer/tests/test_origin_head.py +++ b/swh/indexer/tests/test_origin_head.py @@ -1,91 +1,91 @@ # Copyright (C) 2017 The Software 
Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest import logging from swh.indexer.origin_head import OriginHeadIndexer from swh.indexer.tests.test_utils import MockIndexerStorage, MockStorage -class TestOriginHeadIndexer(OriginHeadIndexer): +class OriginHeadTestIndexer(OriginHeadIndexer): """Specific indexer whose configuration is enough to satisfy the indexing tests. """ revision_metadata_task = None origin_intrinsic_metadata_task = None def prepare(self): self.config = { 'tools': { 'name': 'origin-metadata', 'version': '0.0.1', 'configuration': {}, }, } self.storage = MockStorage() self.idx_storage = MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = None self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] self.results = None def persist_index_computations(self, results, policy_update): self.results = results class OriginHead(unittest.TestCase): def test_git(self): - indexer = TestOriginHeadIndexer() + indexer = OriginHeadTestIndexer() indexer.run( ['git+https://github.com/SoftwareHeritage/swh-storage'], 'update-dups', parse_ids=True) self.assertEqual(indexer.results, [{ 'revision_id': b'8K\x12\x00d\x03\xcc\xe4]bS\xe3\x8f{' b'\xd7}\xac\xefrm', 'origin_id': 52189575}]) def test_ftp(self): - indexer = TestOriginHeadIndexer() + indexer = OriginHeadTestIndexer() indexer.run( ['ftp+rsync://ftp.gnu.org/gnu/3dldf'], 'update-dups', parse_ids=True) self.assertEqual(indexer.results, [{ 'revision_id': b'\x8e\xa9\x8e/\xea}\x9feF\xf4\x9f\xfd\xee' b'\xcc\x1a\xb4`\x8c\x8by', 'origin_id': 4423668}]) def test_deposit(self): - indexer = TestOriginHeadIndexer() + indexer = OriginHeadTestIndexer() indexer.run( ['deposit+https://forge.softwareheritage.org/source/' 'jesuisgpl/'], 'update-dups', parse_ids=True) self.assertEqual(indexer.results, [{ 
'revision_id': b'\xe7n\xa4\x9c\x9f\xfb\xb7\xf76\x11\x08{' b'\xa6\xe9\x99\xb1\x9e]q\xeb', 'origin_id': 77775770}]) def test_pypi(self): - indexer = TestOriginHeadIndexer() + indexer = OriginHeadTestIndexer() indexer.run( ['pypi+https://pypi.org/project/limnoria/'], 'update-dups', parse_ids=True) self.assertEqual(indexer.results, [{ 'revision_id': b'\x83\xb9\xb6\xc7\x05\xb1%\xd0\xfem\xd8k' b'A\x10\x9d\xc5\xfa2\xf8t', 'origin_id': 85072327}]) def test_svn(self): - indexer = TestOriginHeadIndexer() + indexer = OriginHeadTestIndexer() indexer.run( ['svn+http://0-512-md.googlecode.com/svn/'], 'update-dups', parse_ids=True) self.assertEqual(indexer.results, [{ 'revision_id': b'\xe4?r\xe1,\x88\xab\xec\xe7\x9a\x87\xb8' b'\xc9\xad#.\x1bw=\x18', 'origin_id': 49908349}]) diff --git a/swh/indexer/tests/test_origin_metadata.py b/swh/indexer/tests/test_origin_metadata.py index 1b82b64..1ed3024 100644 --- a/swh/indexer/tests/test_origin_metadata.py +++ b/swh/indexer/tests/test_origin_metadata.py @@ -1,142 +1,126 @@ # Copyright (C) 2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import time import logging import unittest from celery import task from swh.indexer.metadata import OriginMetadataIndexer from swh.indexer.tests.test_utils import MockObjStorage, MockStorage from swh.indexer.tests.test_utils import MockIndexerStorage -from swh.indexer.tests.test_origin_head import TestOriginHeadIndexer -from swh.indexer.tests.test_metadata import TestRevisionMetadataIndexer +from swh.indexer.tests.test_origin_head import OriginHeadTestIndexer +from swh.indexer.tests.test_metadata import RevisionMetadataTestIndexer from swh.scheduler.tests.scheduler_testing import SchedulerTestFixture -class TestOriginMetadataIndexer(OriginMetadataIndexer): +class OriginMetadataTestIndexer(OriginMetadataIndexer): def 
prepare(self): self.config = { 'storage': { 'cls': 'remote', 'args': { 'url': 'http://localhost:9999', } }, 'tools': { 'name': 'origin-metadata', 'version': '0.0.1', 'configuration': {} } } self.storage = MockStorage() self.idx_storage = MockIndexerStorage() self.log = logging.getLogger('swh.indexer') self.objstorage = MockObjStorage() - self.destination_task = None self.tools = self.register_tools(self.config['tools']) self.tool = self.tools[0] self.results = [] @task def revision_metadata_test_task(*args, **kwargs): - indexer = TestRevisionMetadataIndexer() + indexer = RevisionMetadataTestIndexer() indexer.run(*args, **kwargs) return indexer.results @task def origin_intrinsic_metadata_test_task(*args, **kwargs): - indexer = TestOriginMetadataIndexer() + indexer = OriginMetadataTestIndexer() indexer.run(*args, **kwargs) return indexer.results -class TestOriginHeadIndexer(TestOriginHeadIndexer): +class OriginHeadTestIndexer(OriginHeadTestIndexer): revision_metadata_task = 'revision_metadata_test_task' origin_intrinsic_metadata_task = 'origin_intrinsic_metadata_test_task' class TestOriginMetadata(SchedulerTestFixture, unittest.TestCase): def setUp(self): super().setUp() self.maxDiff = None MockIndexerStorage.added_data = [] self.add_scheduler_task_type( 'revision_metadata_test_task', 'swh.indexer.tests.test_origin_metadata.' 'revision_metadata_test_task') self.add_scheduler_task_type( 'origin_intrinsic_metadata_test_task', 'swh.indexer.tests.test_origin_metadata.' 
'origin_intrinsic_metadata_test_task') - TestRevisionMetadataIndexer.scheduler = self.scheduler + RevisionMetadataTestIndexer.scheduler = self.scheduler def tearDown(self): - del TestRevisionMetadataIndexer.scheduler + del RevisionMetadataTestIndexer.scheduler super().tearDown() def test_pipeline(self): - indexer = TestOriginHeadIndexer() + indexer = OriginHeadTestIndexer() indexer.scheduler = self.scheduler indexer.run( ["git+https://github.com/librariesio/yarn-parser"], policy_update='update-dups', parse_ids=True) self.run_ready_tasks() # Run the first task time.sleep(0.1) # Give it time to complete and schedule the 2nd one self.run_ready_tasks() # Run the second task metadata = { - 'identifier': None, - 'maintainer': None, - 'url': [ - 'https://github.com/librariesio/yarn-parser#readme' - ], - 'codeRepository': [{ - 'type': 'git', - 'url': 'git+https://github.com/librariesio/yarn-parser.git' - }], - 'author': ['Andrew Nesbitt'], - 'license': ['AGPL-3.0'], - 'version': ['1.0.0'], - 'description': [ - 'Tiny web service for parsing yarn.lock files' - ], - 'relatedLink': None, - 'developmentStatus': None, - 'operatingSystem': None, - 'issueTracker': [{ - 'url': 'https://github.com/librariesio/yarn-parser/issues' - }], - 'softwareRequirements': [{ - 'express': '^4.14.0', - 'yarn': '^0.21.0', - 'body-parser': '^1.15.2' - }], - 'name': ['yarn-parser'], - 'keywords': [['yarn', 'parse', 'lock', 'dependencies']], - 'email': None + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + 'url': + 'https://github.com/librariesio/yarn-parser#readme', + 'schema:codeRepository': + 'git+https://github.com/librariesio/yarn-parser.git', + 'schema:author': 'Andrew Nesbitt', + 'license': 'AGPL-3.0', + 'version': '1.0.0', + 'description': + 'Tiny web service for parsing yarn.lock files', + 'codemeta:issueTracker': + 'https://github.com/librariesio/yarn-parser/issues', + 'name': 'yarn-parser', + 'keywords': ['yarn', 'parse', 'lock', 'dependencies'], } rev_metadata = { 'id': 
'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', 'translated_metadata': metadata, 'indexer_configuration_id': 7, } origin_metadata = { 'origin_id': 54974445, 'from_revision': '8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', 'metadata': metadata, 'indexer_configuration_id': 7, } expected_results = [ ('origin_intrinsic_metadata', True, [origin_metadata]), ('revision_metadata', True, [rev_metadata])] results = list(indexer.idx_storage.added_data) self.assertCountEqual(expected_results, results) diff --git a/swh/indexer/tests/test_utils.py b/swh/indexer/tests/test_utils.py index 826a909..858ce23 100644 --- a/swh/indexer/tests/test_utils.py +++ b/swh/indexer/tests/test_utils.py @@ -1,410 +1,715 @@ -# Copyright (C) 2017 The Software Heritage developers +# Copyright (C) 2017-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information - from swh.objstorage.exc import ObjNotFoundError +from swh.model import hashutil ORIGINS = [ { 'id': 52189575, 'lister': None, 'project': None, 'type': 'git', 'url': 'https://github.com/SoftwareHeritage/swh-storage'}, { 'id': 4423668, 'lister': None, 'project': None, 'type': 'ftp', 'url': 'rsync://ftp.gnu.org/gnu/3dldf'}, { 'id': 77775770, 'lister': None, 'project': None, 'type': 'deposit', 'url': 'https://forge.softwareheritage.org/source/jesuisgpl/'}, { 'id': 85072327, 'lister': None, 'project': None, 'type': 'pypi', 'url': 'https://pypi.org/project/limnoria/'}, { 'id': 49908349, 'lister': None, 'project': None, 'type': 'svn', 'url': 'http://0-512-md.googlecode.com/svn/'}, { 'id': 54974445, 'lister': None, 'project': None, 'type': 'git', 'url': 'https://github.com/librariesio/yarn-parser'}, ] SNAPSHOTS = { 52189575: { 'branches': { b'refs/heads/add-revision-origin-cache': { 'target': b'L[\xce\x1c\x88\x8eF\t\xf1"\x19\x1e\xfb\xc0' b's\xe7/\xe9l\x1e', 'target_type': 'revision'}, 
b'HEAD': { 'target': b'8K\x12\x00d\x03\xcc\xe4]bS\xe3\x8f{\xd7}' b'\xac\xefrm', 'target_type': 'revision'}, b'refs/tags/v0.0.103': { 'target': b'\xb6"Im{\xfdLb\xb0\x94N\xea\x96m\x13x\x88+' b'\x0f\xdd', 'target_type': 'release'}, }}, 4423668: { 'branches': { b'3DLDF-1.1.4.tar.gz': { 'target': b'dJ\xfb\x1c\x91\xf4\x82B%]6\xa2\x90|\xd3\xfc' b'"G\x99\x11', 'target_type': 'revision'}, b'3DLDF-2.0.2.tar.gz': { 'target': b'\xb6\x0e\xe7\x9e9\xac\xaa\x19\x9e=' b'\xd1\xc5\x00\\\xc6\xfc\xe0\xa6\xb4V', 'target_type': 'revision'}, b'3DLDF-2.0.3-examples.tar.gz': { 'target': b'!H\x19\xc0\xee\x82-\x12F1\xbd\x97' b'\xfe\xadZ\x80\x80\xc1\x83\xff', 'target_type': 'revision'}, b'3DLDF-2.0.3.tar.gz': { 'target': b'\x8e\xa9\x8e/\xea}\x9feF\xf4\x9f\xfd\xee' b'\xcc\x1a\xb4`\x8c\x8by', 'target_type': 'revision'}, b'3DLDF-2.0.tar.gz': { 'target': b'F6*\xff(?\x19a\xef\xb6\xc2\x1fv$S\xe3G' b'\xd3\xd1m', b'target_type': 'revision'} }}, 77775770: { 'branches': { b'master': { 'target': b'\xe7n\xa4\x9c\x9f\xfb\xb7\xf76\x11\x08{' b'\xa6\xe9\x99\xb1\x9e]q\xeb', 'target_type': 'revision'} }, 'id': b"h\xc0\xd2a\x04\xd4~'\x8d\xd6\xbe\x07\xeda\xfa\xfbV" b"\x1d\r "}, 85072327: { 'branches': { b'HEAD': { 'target': b'releases/2018.09.09', 'target_type': 'alias'}, b'releases/2018.09.01': { 'target': b'<\xee1(\xe8\x8d_\xc1\xc9\xa6rT\xf1\x1d' b'\xbb\xdfF\xfdw\xcf', 'target_type': 'revision'}, b'releases/2018.09.09': { 'target': b'\x83\xb9\xb6\xc7\x05\xb1%\xd0\xfem\xd8k' b'A\x10\x9d\xc5\xfa2\xf8t', 'target_type': 'revision'}}, 'id': b'{\xda\x8e\x84\x7fX\xff\x92\x80^\x93V\x18\xa3\xfay' b'\x12\x9e\xd6\xb3'}, 49908349: { 'branches': { b'master': { 'target': b'\xe4?r\xe1,\x88\xab\xec\xe7\x9a\x87\xb8' b'\xc9\xad#.\x1bw=\x18', 'target_type': 'revision'}}, 'id': b'\xa1\xa2\x8c\n\xb3\x87\xa8\xf9\xe0a\x8c\xb7' b'\x05\xea\xb8\x1f\xc4H\xf4s'}, 54974445: { 'branches': { b'HEAD': { 'target': b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', 'target_type': 'revision'}}} } +SHA1_TO_LICENSES = { + 
'01c9379dfc33803963d07c1ccc748d3fe4c96bb5': ['GPL'], + '02fb2c89e14f7fab46701478c83779c7beb7b069': ['Apache2.0'], + '103bc087db1d26afc3a0283f38663d081e9b01e6': ['MIT'], + '688a5ef812c53907562fe379d4b3851e69c7cb15': ['AGPL'], + 'da39a3ee5e6b4b0d3255bfef95601890afd80709': [], +} + + +SHA1_TO_CTAGS = { + '01c9379dfc33803963d07c1ccc748d3fe4c96bb5': [{ + 'name': 'foo', + 'kind': 'str', + 'line': 10, + 'lang': 'bar', + }], + 'd4c647f0fc257591cc9ba1722484229780d1c607': [{ + 'name': 'let', + 'kind': 'int', + 'line': 100, + 'lang': 'haskell', + }], + '688a5ef812c53907562fe379d4b3851e69c7cb15': [{ + 'name': 'symbol', + 'kind': 'float', + 'line': 99, + 'lang': 'python', + }], +} + + class MockObjStorage: """Mock an swh-objstorage objstorage with predefined contents. """ data = {} def __init__(self): self.data = { '01c9379dfc33803963d07c1ccc748d3fe4c96bb5': b'this is some text', '688a5ef812c53907562fe379d4b3851e69c7cb15': b'another text', '8986af901dd2043044ce8f0d8fc039153641cf17': b'yet another text', '02fb2c89e14f7fab46701478c83779c7beb7b069': b""" import unittest import logging from swh.indexer.mimetype import ContentMimetypeIndexer from swh.indexer.tests.test_utils import MockObjStorage class MockStorage(): def content_mimetype_add(self, mimetypes): self.state = mimetypes self.conflict_update = conflict_update def indexer_configuration_add(self, tools): return [{ 'id': 10, }] """, '103bc087db1d26afc3a0283f38663d081e9b01e6': b""" #ifndef __AVL__ #define __AVL__ typedef struct _avl_tree avl_tree; typedef struct _data_t { int content; } data_t; """, '93666f74f1cf635c8c8ac118879da6ec5623c410': b""" (should 'pygments (recognize 'lisp 'easily)) """, '26a9f72a7c87cc9205725cfd879f514ff4f3d8d5': b""" { "name": "test_metadata", "version": "0.0.1", "description": "Simple package.json test for indexer", "repository": { "type": "git", "url": "https://github.com/moranegg/metadata_test" } } """, 'd4c647f0fc257591cc9ba1722484229780d1c607': b""" { "version": "5.0.3", "name": "npm", 
"description": "a package manager for JavaScript", "keywords": [ "install", "modules", "package manager", "package.json" ], "preferGlobal": true, "config": { "publishtest": false }, "homepage": "https://docs.npmjs.com/", "author": "Isaac Z. Schlueter (http://blog.izs.me)", "repository": { "type": "git", "url": "https://github.com/npm/npm" }, "bugs": { "url": "https://github.com/npm/npm/issues" }, "dependencies": { "JSONStream": "~1.3.1", "abbrev": "~1.1.0", "ansi-regex": "~2.1.1", "ansicolors": "~0.3.2", "ansistyles": "~0.1.3" }, "devDependencies": { "tacks": "~1.2.6", "tap": "~10.3.2" }, "license": "Artistic-2.0" } """, 'a7ab314d8a11d2c93e3dcf528ca294e7b431c449': b""" """, 'da39a3ee5e6b4b0d3255bfef95601890afd80709': b'', } def __iter__(self): yield from self.data.keys() def __contains__(self, sha1): return self.data.get(sha1) is not None def get(self, sha1): raw_content = self.data.get(sha1) if raw_content is None: raise ObjNotFoundError(sha1) return raw_content class MockIndexerStorage(): """Mock an swh-indexer storage. 
""" added_data = [] def indexer_configuration_add(self, tools): tool = tools[0] if tool['tool_name'] == 'swh-metadata-translator': return [{ 'id': 30, 'tool_name': 'swh-metadata-translator', 'tool_version': '0.0.1', 'tool_configuration': { 'type': 'local', 'context': 'NpmMapping' }, }] elif tool['tool_name'] == 'swh-metadata-detector': return [{ 'id': 7, 'tool_name': 'swh-metadata-detector', 'tool_version': '0.0.1', 'tool_configuration': { 'type': 'local', 'context': 'NpmMapping' }, }] elif tool['tool_name'] == 'origin-metadata': return [{ 'id': 8, 'tool_name': 'origin-metadata', 'tool_version': '0.0.1', 'tool_configuration': {}, }] else: assert False, 'Unknown tool {tool_name}'.format(**tool) def content_metadata_missing(self, sha1s): yield from [] def content_metadata_add(self, metadata, conflict_update=None): self.added_data.append( ('content_metadata', conflict_update, metadata)) def revision_metadata_add(self, metadata, conflict_update=None): self.added_data.append( ('revision_metadata', conflict_update, metadata)) def origin_intrinsic_metadata_add(self, metadata, conflict_update=None): self.added_data.append( ('origin_intrinsic_metadata', conflict_update, metadata)) def content_metadata_get(self, sha1s): return [{ 'tool': { 'configuration': { 'type': 'local', 'context': 'NpmMapping' }, 'version': '0.0.1', 'id': 6, 'name': 'swh-metadata-translator' }, 'id': b'cde', 'translated_metadata': { - 'issueTracker': { - 'url': 'https://github.com/librariesio/yarn-parser/issues' - }, + '@context': 'https://doi.org/10.5063/schema/codemeta-2.0', + 'type': 'SoftwareSourceCode', + 'codemeta:issueTracker': + 'https://github.com/librariesio/yarn-parser/issues', 'version': '1.0.0', 'name': 'yarn-parser', - 'author': 'Andrew Nesbitt', - 'url': 'https://github.com/librariesio/yarn-parser#readme', + 'schema:author': 'Andrew Nesbitt', + 'url': + 'https://github.com/librariesio/yarn-parser#readme', 'processorRequirements': {'node': '7.5'}, - 'other': { - 'scripts': { - 'start': 
'node index.js' - }, - 'main': 'index.js' - }, 'license': 'AGPL-3.0', 'keywords': ['yarn', 'parse', 'lock', 'dependencies'], - 'codeRepository': { - 'type': 'git', - 'url': 'git+https://github.com/librariesio/yarn-parser.git' - }, - 'description': 'Tiny web service for parsing yarn.lock files', - 'softwareRequirements': { - 'yarn': '^0.21.0', - 'express': '^4.14.0', - 'body-parser': '^1.15.2'} + 'schema:codeRepository': + 'git+https://github.com/librariesio/yarn-parser.git', + 'description': + 'Tiny web service for parsing yarn.lock files', } }] class MockStorage(): """Mock a real swh-storage storage to simplify reading indexers' outputs. """ def origin_get(self, id_): for origin in ORIGINS: for (k, v) in id_.items(): if origin[k] != v: break else: - # This block is run if and only if we didn't break, - # ie. if all supplied parts of the id are set to the - # expected value. + # This block is run iff we didn't break, ie. if all supplied + # parts of the id are set to the expected value. 
return origin assert False, id_ def snapshot_get_latest(self, origin_id): if origin_id in SNAPSHOTS: return SNAPSHOTS[origin_id] else: assert False, origin_id def revision_get(self, revisions): return [{ 'id': b'8dbb6aeb036e7fd80664eb8bfd1507881af1ba9f', 'committer': { 'id': 26, 'name': b'Andrew Nesbitt', 'fullname': b'Andrew Nesbitt <andrewnez@gmail.com>', 'email': b'andrewnez@gmail.com' }, 'synthetic': False, 'date': { 'negative_utc': False, 'timestamp': { 'seconds': 1487596456, 'microseconds': 0 }, 'offset': 0 }, 'directory': b'10' }] def directory_ls(self, directory, recursive=False, cur=None): # with directory: b'\x9d', return [{ 'sha1_git': b'abc', 'name': b'index.js', 'target': b'abc', 'length': 897, 'status': 'visible', 'type': 'file', 'perms': 33188, 'dir_id': b'10', 'sha1': b'bcd' }, { 'sha1_git': b'aab', 'name': b'package.json', 'target': b'aab', 'length': 712, 'status': 'visible', 'type': 'file', 'perms': 33188, 'dir_id': b'10', 'sha1': b'cde' }, { 'dir_id': b'10', 'target': b'11', 'type': 'dir', 'length': None, 'name': b'.github', 'sha1': None, 'perms': 16384, 'sha1_git': None, 'status': None, 'sha256': None }] + + +class BasicMockStorage(): + """In-memory implementation to fake the content_get_range api. + + FIXME: To remove when the actual in-memory storage lands.
+ + """ + contents = [] + + def __init__(self, contents): + self.contents = contents + + def content_get_range(self, start, end, limit=1000): + # to make input test data consilient with actual runtime the + # other way of doing properly things would be to rewrite all + # tests (that's another task entirely so not right now) + if isinstance(start, bytes): + start = hashutil.hash_to_hex(start) + if isinstance(end, bytes): + end = hashutil.hash_to_hex(end) + results = [] + _next_id = None + counter = 0 + for c in self.contents: + _id = c['sha1'] + if start <= _id and _id <= end: + results.append(c) + if counter >= limit: + break + counter += 1 + + return { + 'contents': results, + 'next': _next_id + } + + +class BasicMockIndexerStorage(): + """Mock Indexer storage to simplify reading indexers' outputs. + + """ + state = [] + + def _internal_add(self, data, conflict_update=None): + """All content indexer have the same structure. So reuse `data` as the + same data. It's either mimetype, language, + fossology_license, etc... + + """ + self.state = data + self.conflict_update = conflict_update + + def content_mimetype_add(self, data, conflict_update=None): + self._internal_add(data, conflict_update=conflict_update) + + def content_fossology_license_add(self, data, conflict_update=None): + self._internal_add(data, conflict_update=conflict_update) + + def content_language_add(self, data, conflict_update=None): + self._internal_add(data, conflict_update=conflict_update) + + def content_ctags_add(self, data, conflict_update=None): + self._internal_add(data, conflict_update=conflict_update) + + def _internal_get_range(self, start, end, + indexer_configuration_id, limit=1000): + """Same logic as _internal_add, we retrieve indexed data given an + identifier. So the code here does not change even though + the underlying data does. 
+ + """ + # to make input test data consilient with actual runtime the + # other way of doing properly things would be to rewrite all + # tests (that's another task entirely so not right now) + if isinstance(start, bytes): + start = hashutil.hash_to_hex(start) + if isinstance(end, bytes): + end = hashutil.hash_to_hex(end) + results = [] + _next = None + counter = 0 + for m in self.state: + _id = m['id'] + _tool_id = m['indexer_configuration_id'] + if (start <= _id and _id <= end and + _tool_id == indexer_configuration_id): + results.append(_id) + if counter >= limit: + break + counter += 1 + + return { + 'ids': results, + 'next': _next + } + + def content_mimetype_get_range( + self, start, end, indexer_configuration_id, limit=1000): + return self._internal_get_range( + start, end, indexer_configuration_id, limit=limit) + + def content_fossology_license_get_range( + self, start, end, indexer_configuration_id, limit=1000): + return self._internal_get_range( + start, end, indexer_configuration_id, limit=limit) + + def indexer_configuration_add(self, tools): + return [{ + 'id': 10, + }] + + +class CommonIndexerNoTool: + """Mixin to wronly initialize content indexer""" + def prepare(self): + super().prepare() + self.tools = None + + +class CommonIndexerWithErrorsTest: + """Test indexer configuration checks. 
+ + """ + Indexer = None + RangeIndexer = None + + def test_wrong_unknown_configuration_tool(self): + """Indexer with unknown configuration tool fails check""" + with self.assertRaisesRegex(ValueError, 'Tools None is unknown'): + print('indexer: %s' % self.Indexer) + self.Indexer() + + def test_wrong_unknown_configuration_tool_range(self): + """Range Indexer with unknown configuration tool fails check""" + if self.RangeIndexer is not None: + with self.assertRaisesRegex(ValueError, 'Tools None is unknown'): + self.RangeIndexer() + + +class CommonContentIndexerTest: + def assert_results_ok(self, actual_results, expected_results=None): + if expected_results is None: + expected_results = self.expected_results + + for indexed_data in actual_results: + _id = indexed_data['id'] + self.assertEqual(indexed_data, expected_results[_id]) + _tool_id = indexed_data['indexer_configuration_id'] + self.assertEqual(_tool_id, self.indexer.tool['id']) + + def test_index(self): + """Known sha1 have their data indexed + + """ + sha1s = [self.id0, self.id1, self.id2] + + # when + self.indexer.run(sha1s, policy_update='update-dups') + + actual_results = self.indexer.idx_storage.state + self.assertTrue(self.indexer.idx_storage.conflict_update) + self.assert_results_ok(actual_results) + + # 2nd pass + self.indexer.run(sha1s, policy_update='ignore-dups') + + self.assertFalse(self.indexer.idx_storage.conflict_update) + self.assert_results_ok(actual_results) + + def test_index_one_unknown_sha1(self): + """Unknown sha1 are not indexed""" + sha1s = [self.id1, + '799a5ef812c53907562fe379d4b3851e69c7cb15', # unknown + '800a5ef812c53907562fe379d4b3851e69c7cb15'] # unknown + + # when + self.indexer.run(sha1s, policy_update='update-dups') + actual_results = self.indexer.idx_storage.state + + # then + expected_results = { + k: v for k, v in self.expected_results.items() if k in sha1s + } + + self.assert_results_ok(actual_results, expected_results) + + +class CommonContentIndexerRangeTest: + """Allows 
factorizing tests on range indexers. + + """ + def assert_results_ok(self, start, end, actual_results, + expected_results=None): + if expected_results is None: + expected_results = self.expected_results + + for indexed_data in actual_results: + _id = indexed_data['id'] + self.assertEqual(indexed_data, expected_results[_id]) + self.assertTrue(start <= _id and _id <= end) + _tool_id = indexed_data['indexer_configuration_id'] + self.assertEqual(_tool_id, self.indexer.tool['id']) + + def test__index_contents(self): + """Indexing contents without existing data results in indexed data + + """ + start, end = [self.contents[0], self.contents[2]] # output hex ids + # given + actual_results = list(self.indexer._index_contents( + start, end, indexed={})) + + self.assert_results_ok(start, end, actual_results) + + def test__index_contents_with_indexed_data(self): + """Indexing contents with existing data results in less indexed data + + """ + start, end = [self.contents[0], self.contents[2]] # output hex ids + data_indexed = [self.id0, self.id2] + + # given + actual_results = self.indexer._index_contents( + start, end, indexed=set(data_indexed)) + + # craft the expected results + expected_results = self.expected_results.copy() + for already_indexed_key in data_indexed: + expected_results.pop(already_indexed_key) + + self.assert_results_ok( + start, end, actual_results, expected_results) + + def test_generate_content_get(self): + """Optimal indexing should result in indexed data + + """ + start, end = [self.contents[0], self.contents[2]] # output hex ids + # given + actual_results = self.indexer.run(start, end) + + # then + self.assertTrue(actual_results) + + def test_generate_content_get_input_as_bytes(self): + """Optimal indexing should result in indexed data + + Inputs are in bytes here.
+ + """ + _start, _end = [self.contents[0], self.contents[2]] # output hex ids + start, end = map(hashutil.hash_to_bytes, (_start, _end)) + + # given + actual_results = self.indexer.run( # checks the bytes input this time + start, end, skip_existing=False) # no data so same result + + # then + self.assertTrue(actual_results) + + def test_generate_content_get_no_result(self): + """No result indexed returns False""" + start, end = ['0000000000000000000000000000000000000000', + '0000000000000000000000000000000000000001'] + # given + actual_results = self.indexer.run( + start, end, incremental=False) + + # then + self.assertFalse(actual_results) + + +class NoDiskIndexer: + """Mixin to override the DiskIndexer behavior avoiding side-effects in + tests. + + """ + + def write_to_temp(self, filename, data): # noop + return filename + + def cleanup(self, content_path): # noop + return None diff --git a/tox.ini b/tox.ini index 70265ee..a2d8b63 100644 --- a/tox.ini +++ b/tox.ini @@ -1,17 +1,33 @@ [tox] envlist=flake8,py3 [testenv:py3] deps = .[testing] pytest-cov pifpaf commands = - pifpaf run postgresql -- pytest --cov=swh --cov-branch {posargs} + pifpaf run postgresql -- pytest --hypothesis-profile=fast --cov=swh --cov-branch {posargs} + +[testenv:py3-slow] +deps = + .[testing] + pytest-cov + pifpaf +commands = + pifpaf run postgresql -- pytest --hypothesis-profile=slow --cov=swh --cov-branch {posargs} + +[testenv:py3-prop] +deps = + .[testing] + pytest-cov + pifpaf +commands = + pifpaf run postgresql -- pytest --hypothesis-profile=fast -m property_based --disable-warnings [testenv:flake8] skip_install = true deps = flake8 commands = {envpython} -m flake8 diff --git a/version.txt b/version.txt index 3179a09..c18756c 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -v0.0.55-0-g178870d \ No newline at end of file +v0.0.56-0-g373b432 \ No newline at end of file