diff --git a/PKG-INFO b/PKG-INFO index 2d6dd0b..9732f47 100644 --- a/PKG-INFO +++ b/PKG-INFO @@ -1,71 +1,71 @@ Metadata-Version: 2.1 Name: swh.indexer -Version: 0.5.0 +Version: 0.6.0 Summary: Software Heritage Content Indexer Home-page: https://forge.softwareheritage.org/diffusion/78/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-indexer/ Description: swh-indexer ============ Tools to compute multiple indexes on SWH's raw contents: - content: - mimetype - ctags - language - fossology-license - metadata - revision: - metadata An indexer is in charge of: - looking up objects - extracting information from those objects - store those information in the swh-indexer db There are multiple indexers working on different object types: - content indexer: works with content sha1 hashes - revision indexer: works with revision sha1 hashes - origin indexer: works with origin identifiers Indexation procedure: - receive batch of ids - retrieve the associated data depending on object type - compute for that object some index - store the result to swh's storage Current content indexers: - mimetype (queue swh_indexer_content_mimetype): detect the encoding and mimetype - language (queue swh_indexer_content_language): detect the programming language - ctags (queue swh_indexer_content_ctags): compute tags information - fossology-license (queue swh_indexer_fossology_license): compute the license - metadata: translate file into translated_metadata dict Current revision indexers: - metadata: detects files containing metadata and retrieves translated_metadata in content_metadata table in storage or run content indexer to translate files. Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing diff --git a/docs/dev-info.rst b/docs/dev-info.rst index 493b102..cca3d2a 100644 --- a/docs/dev-info.rst +++ b/docs/dev-info.rst @@ -1,206 +1,199 @@ Hacking on swh-indexer ====================== This tutorial will guide you through hacking on swh-indexer. If you do not have a local copy of the Software Heritage archive, go to the `getting started tutorial `_ Configuration files ------------------- You will need the following YAML configuration files to run the swh-indexer commands: - Orchestrator at ``~/.config/swh/indexer/orchestrator.yml`` .. code-block:: yaml indexers: mimetype: check_presence: false batch_size: 100 - Orchestrator-text at ``~/.config/swh/indexer/orchestrator-text.yml`` .. code-block:: yaml indexers: # language: # batch_size: 10 # check_presence: false fossology_license: batch_size: 10 check_presence: false # ctags: # batch_size: 2 # check_presence: false - Mimetype indexer at ``~/.config/swh/indexer/mimetype.yml`` ..
code-block:: yaml # storage to read sha1's metadata (path) # storage: # cls: local - # args: - # db: "service=swh-dev" - # objstorage: - # cls: pathslicing - # args: - # root: /home/storage/swh-storage/ - # slicing: 0:1/1:5 + # db: "service=swh-dev" + # objstorage: + # cls: pathslicing + # root: /home/storage/swh-storage/ + # slicing: 0:1/1:5 storage: cls: remote - args: - url: http://localhost:5002/ + url: http://localhost:5002/ indexer_storage: cls: remote args: url: http://localhost:5007/ # storage to read sha1's content # adapt this to your needs # locally: this needs to match your storage's setup objstorage: cls: pathslicing - args: - slicing: 0:1/1:5 - root: /home/storage/swh-storage/ + slicing: 0:1/1:5 + root: /home/storage/swh-storage/ destination_task: swh.indexer.tasks.SWHOrchestratorTextContentsTask rescheduling_task: swh.indexer.tasks.SWHContentMimetypeTask - Fossology indexer at ``~/.config/swh/indexer/fossology_license.yml`` .. code-block:: yaml # storage to read sha1's metadata (path) # storage: # cls: local - # args: - # db: "service=swh-dev" - # objstorage: - # cls: pathslicing - # args: - # root: /home/storage/swh-storage/ - # slicing: 0:1/1:5 + # db: "service=swh-dev" + # objstorage: + # cls: pathslicing + # root: /home/storage/swh-storage/ + # slicing: 0:1/1:5 storage: cls: remote url: http://localhost:5002/ indexer_storage: cls: remote args: url: http://localhost:5007/ # storage to read sha1's content # adapt this to your needs # locally: this needs to match your storage's setup objstorage: cls: pathslicing - args: - slicing: 0:1/1:5 - root: /home/storage/swh-storage/ + slicing: 0:1/1:5 + root: /home/storage/swh-storage/ workdir: /tmp/swh/worker.indexer/license/ tools: name: 'nomos' version: '3.1.0rc2-31-ga2cbb8c' configuration: command_line: 'nomossa ' - Worker at ``~/.config/swh/worker.yml`` .. code-block:: yaml task_broker: amqp://guest@localhost// task_modules: - swh.loader.svn.tasks - swh.loader.tar.tasks - swh.loader.git.tasks - swh.storage.archiver.tasks - swh.indexer.tasks - swh.indexer.orchestrator task_queues: - swh_loader_svn - swh_loader_tar - swh_reader_git_to_azure_archive - swh_storage_archive_worker_to_backend - swh_indexer_orchestrator_content_all - swh_indexer_orchestrator_content_text - swh_indexer_content_mimetype - swh_indexer_content_language - swh_indexer_content_ctags - swh_indexer_content_fossology_license - swh_loader_svn_mount_and_load - swh_loader_git_express - swh_loader_git_archive - swh_loader_svn_archive task_soft_time_limit: 0 Database -------- swh-indexer uses a database to store the indexed content. The default db is expected to be called swh-indexer-dev. Create or add ``swh-dev`` and ``swh-indexer-dev`` to the ``~/.pg_service.conf`` and ``~/.pgpass`` files, which are PostgreSQL's configuration files.
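For reference, here is a minimal sketch of what those entries could look like (host, port, user and password values below are placeholders, adapt them to your local setup):

.. code-block:: ini

    # ~/.pg_service.conf
    [swh-dev]
    host=localhost
    port=5432
    dbname=swh-dev

    [swh-indexer-dev]
    host=localhost
    port=5432
    dbname=swh-indexer-dev

.. code-block:: text

    # ~/.pgpass -- one "hostname:port:database:username:password" entry per line
    localhost:5432:swh-dev:swhuser:swhpassword
    localhost:5432:swh-indexer-dev:swhuser:swhpassword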
Add data to local DB -------------------- From within the ``swh-environment``, run the following command:: make rebuild-testdata and fetch some real data to work with, using:: python3 -m swh.loader.git.updater --origin-url Then you can list all content files using this script:: #!/usr/bin/env bash psql service=swh-dev -c "copy (select sha1 from content) to stdout" | sed -e 's/^\\x//g' Run the indexers ----------------- Use the list of contents to feed the indexers with the following command:: ./list-sha1.sh | python3 -m swh.indexer.producer --batch 100 --task-name orchestrator_all Activate the workers -------------------- To send messages to different queues using rabbitmq (which should already be installed through dependency installation), run the following command in a dedicated terminal:: python3 -m celery worker --app=swh.scheduler.celery_backend.config.app \ --pool=prefork \ --concurrency=1 \ -Ofair \ --loglevel=info \ --without-gossip \ --without-mingle \ --without-heartbeat 2>&1 With this command, the worker will consume messages from rabbitmq using the worker configuration file. Note: for the fossology_license indexer, you need the fossology-nomossa package, which is in our `public debian repository `_. diff --git a/requirements-swh.txt b/requirements-swh.txt index 8b20b53..40b5ff6 100644 --- a/requirements-swh.txt +++ b/requirements-swh.txt @@ -1,6 +1,6 @@ swh.core[db,http] >= 0.3 swh.model >= 0.0.15 -swh.objstorage >= 0.0.43 +swh.objstorage >= 0.2.2 swh.scheduler >= 0.5.2 swh.storage >= 0.12.0 swh.journal >= 0.1.0 diff --git a/requirements.txt b/requirements.txt index 77379af..bde0b88 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,6 +1,5 @@ -vcversioner click python-magic >= 0.4.13 pyld xmltodict typing-extensions diff --git a/swh.indexer.egg-info/PKG-INFO b/swh.indexer.egg-info/PKG-INFO index 2d6dd0b..9732f47 100644 --- a/swh.indexer.egg-info/PKG-INFO +++ b/swh.indexer.egg-info/PKG-INFO @@ -1,71 +1,71 @@ Metadata-Version: 2.1 Name: swh.indexer -Version: 0.5.0 +Version: 0.6.0 Summary: Software Heritage Content Indexer Home-page: https://forge.softwareheritage.org/diffusion/78/ Author: Software Heritage developers Author-email: swh-devel@inria.fr License: UNKNOWN Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest Project-URL: Funding, https://www.softwareheritage.org/donate Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-indexer/ Description: swh-indexer ============ Tools to compute multiple indexes on SWH's raw contents: - content: - mimetype - ctags - language - fossology-license - metadata - revision: - metadata An indexer is in charge of: - looking up objects - extracting information from those objects - store those information in the swh-indexer db There are multiple indexers working on different object types: - content indexer: works with content sha1 hashes - revision indexer: works with revision sha1 hashes - origin indexer: works with origin identifiers Indexation procedure: - receive batch of ids - retrieve the associated data depending on object type - compute for that object some index - store the result to swh's storage Current content indexers: - mimetype (queue swh_indexer_content_mimetype): detect the encoding and mimetype - language (queue swh_indexer_content_language): detect the programming language - ctags (queue swh_indexer_content_ctags): compute tags information - fossology-license (queue swh_indexer_fossology_license): compute the license
- metadata: translate file into translated_metadata dict Current revision indexers: - metadata: detects files containing metadata and retrieves translated_metadata in content_metadata table in storage or run content indexer to translate files. Platform: UNKNOWN Classifier: Programming Language :: Python :: 3 Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3) Classifier: Operating System :: OS Independent Classifier: Development Status :: 5 - Production/Stable Requires-Python: >=3.7 Description-Content-Type: text/markdown Provides-Extra: testing diff --git a/swh.indexer.egg-info/SOURCES.txt b/swh.indexer.egg-info/SOURCES.txt index 83d5e9f..da35213 100644 --- a/swh.indexer.egg-info/SOURCES.txt +++ b/swh.indexer.egg-info/SOURCES.txt @@ -1,139 +1,140 @@ .gitignore .pre-commit-config.yaml AUTHORS CODE_OF_CONDUCT.md CONTRIBUTORS LICENSE MANIFEST.in Makefile Makefile.local README.md codemeta.json conftest.py mypy.ini pyproject.toml pytest.ini requirements-swh.txt requirements-test.txt requirements.txt setup.cfg setup.py tox.ini docs/.gitignore docs/Makefile docs/Makefile.local docs/README.md docs/conf.py docs/dev-info.rst docs/index.rst docs/metadata-workflow.rst docs/_static/.placeholder docs/_templates/.placeholder docs/images/.gitignore docs/images/Makefile docs/images/tasks-metadata-indexers.uml sql/bin/db-upgrade sql/bin/dot_add_content sql/doc/json sql/doc/json/.gitignore sql/doc/json/Makefile sql/doc/json/indexer_configuration.tool_configuration.schema.json sql/doc/json/revision_metadata.translated_metadata.json sql/json/.gitignore sql/json/Makefile sql/json/indexer_configuration.tool_configuration.schema.json sql/json/revision_metadata.translated_metadata.json sql/upgrades/115.sql sql/upgrades/116.sql sql/upgrades/117.sql sql/upgrades/118.sql sql/upgrades/119.sql sql/upgrades/120.sql sql/upgrades/121.sql sql/upgrades/122.sql sql/upgrades/123.sql sql/upgrades/124.sql sql/upgrades/125.sql sql/upgrades/126.sql sql/upgrades/127.sql sql/upgrades/128.sql sql/upgrades/129.sql sql/upgrades/130.sql sql/upgrades/131.sql sql/upgrades/132.sql sql/upgrades/133.sql swh/__init__.py swh.indexer.egg-info/PKG-INFO swh.indexer.egg-info/SOURCES.txt swh.indexer.egg-info/dependency_links.txt swh.indexer.egg-info/entry_points.txt swh.indexer.egg-info/requires.txt swh.indexer.egg-info/top_level.txt swh/indexer/__init__.py swh/indexer/cli.py swh/indexer/codemeta.py swh/indexer/ctags.py swh/indexer/fossology_license.py swh/indexer/indexer.py swh/indexer/journal_client.py swh/indexer/metadata.py swh/indexer/metadata_detector.py swh/indexer/mimetype.py swh/indexer/origin_head.py swh/indexer/py.typed swh/indexer/rehash.py swh/indexer/tasks.py swh/indexer/data/codemeta/CITATION swh/indexer/data/codemeta/LICENSE swh/indexer/data/codemeta/codemeta.jsonld swh/indexer/data/codemeta/crosswalk.csv swh/indexer/metadata_dictionary/__init__.py swh/indexer/metadata_dictionary/base.py swh/indexer/metadata_dictionary/codemeta.py swh/indexer/metadata_dictionary/maven.py swh/indexer/metadata_dictionary/npm.py swh/indexer/metadata_dictionary/python.py swh/indexer/metadata_dictionary/ruby.py swh/indexer/sql/10-superuser-init.sql swh/indexer/sql/20-enums.sql swh/indexer/sql/30-schema.sql swh/indexer/sql/50-data.sql swh/indexer/sql/50-func.sql swh/indexer/sql/60-indexes.sql swh/indexer/storage/__init__.py swh/indexer/storage/converters.py swh/indexer/storage/db.py swh/indexer/storage/exc.py swh/indexer/storage/in_memory.py swh/indexer/storage/interface.py 
swh/indexer/storage/metrics.py swh/indexer/storage/model.py +swh/indexer/storage/writer.py swh/indexer/storage/api/__init__.py swh/indexer/storage/api/client.py swh/indexer/storage/api/serializers.py swh/indexer/storage/api/server.py swh/indexer/tests/__init__.py swh/indexer/tests/conftest.py swh/indexer/tests/tasks.py swh/indexer/tests/test_cli.py swh/indexer/tests/test_codemeta.py swh/indexer/tests/test_ctags.py swh/indexer/tests/test_fossology_license.py swh/indexer/tests/test_indexer.py swh/indexer/tests/test_journal_client.py swh/indexer/tests/test_metadata.py swh/indexer/tests/test_mimetype.py swh/indexer/tests/test_origin_head.py swh/indexer/tests/test_origin_metadata.py swh/indexer/tests/test_tasks.py swh/indexer/tests/utils.py swh/indexer/tests/storage/__init__.py swh/indexer/tests/storage/conftest.py swh/indexer/tests/storage/generate_data_test.py swh/indexer/tests/storage/test_api_client.py swh/indexer/tests/storage/test_converters.py swh/indexer/tests/storage/test_in_memory.py swh/indexer/tests/storage/test_init.py swh/indexer/tests/storage/test_metrics.py swh/indexer/tests/storage/test_server.py swh/indexer/tests/storage/test_storage.py \ No newline at end of file diff --git a/swh.indexer.egg-info/requires.txt b/swh.indexer.egg-info/requires.txt index ffb0ae2..25feb9c 100644 --- a/swh.indexer.egg-info/requires.txt +++ b/swh.indexer.egg-info/requires.txt @@ -1,20 +1,19 @@ -vcversioner click python-magic>=0.4.13 pyld xmltodict typing-extensions swh.core[db,http]>=0.3 swh.model>=0.0.15 -swh.objstorage>=0.0.43 +swh.objstorage>=0.2.2 swh.scheduler>=0.5.2 swh.storage>=0.12.0 swh.journal>=0.1.0 [testing] confluent-kafka pytest pytest-mock hypothesis>=3.11.0 swh.scheduler[testing]>=0.5.0 swh.storage[testing]>=0.10.0 diff --git a/swh/indexer/cli.py b/swh/indexer/cli.py index 0463ef4..bdf7530 100644 --- a/swh/indexer/cli.py +++ b/swh/indexer/cli.py @@ -1,301 +1,315 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from typing import Iterator # WARNING: do not import unnecessary things here to keep cli startup time under # control import click from swh.core.cli import CONTEXT_SETTINGS, AliasedGroup from swh.core.cli import swh as swh_cli_group @swh_cli_group.group( name="indexer", context_settings=CONTEXT_SETTINGS, cls=AliasedGroup ) @click.option( "--config-file", "-C", default=None, type=click.Path(exists=True, dir_okay=False,), help="Configuration file.", ) @click.pass_context def indexer_cli_group(ctx, config_file): """Software Heritage Indexer tools. The Indexer is used to mine the content of the archive and extract derived information from archive source code artifacts. 
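For example, with a configuration file in place, the group can be exercised like this (a sketch; the exact mapping list depends on the installed version)::

    $ swh indexer --config-file indexer.yml mapping list
    codemeta
    gemspec
    maven
    npm
    pkg-info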
""" from swh.core import config ctx.ensure_object(dict) conf = config.read(config_file) ctx.obj["config"] = conf def _get_api(getter, config, config_key, url): if url: - config[config_key] = {"cls": "remote", "args": {"url": url}} + config[config_key] = {"cls": "remote", "url": url} elif config_key not in config: raise click.ClickException("Missing configuration for {}".format(config_key)) return getter(**config[config_key]) @indexer_cli_group.group("mapping") def mapping(): """Manage Software Heritage Indexer mappings.""" pass @mapping.command("list") def mapping_list(): """Prints the list of known mappings.""" from swh.indexer import metadata_dictionary mapping_names = [mapping.name for mapping in metadata_dictionary.MAPPINGS.values()] mapping_names.sort() for mapping_name in mapping_names: click.echo(mapping_name) @mapping.command("list-terms") @click.option( "--exclude-mapping", multiple=True, help="Exclude the given mapping from the output" ) @click.option( "--concise", is_flag=True, default=False, help="Don't print the list of mappings supporting each term.", ) def mapping_list_terms(concise, exclude_mapping): """Prints the list of known CodeMeta terms, and which mappings support them.""" from swh.indexer import metadata_dictionary properties = metadata_dictionary.list_terms() for (property_name, supported_mappings) in sorted(properties.items()): supported_mappings = {m.name for m in supported_mappings} supported_mappings -= set(exclude_mapping) if supported_mappings: if concise: click.echo(property_name) else: click.echo("{}:".format(property_name)) click.echo("\t" + ", ".join(sorted(supported_mappings))) @mapping.command("translate") @click.argument("mapping-name") @click.argument("file", type=click.File("rb")) def mapping_translate(mapping_name, file): """Prints the list of known mappings.""" import json from swh.indexer import metadata_dictionary mapping_cls = [ cls for cls in metadata_dictionary.MAPPINGS.values() if cls.name == mapping_name ] if not mapping_cls: raise click.ClickException("Unknown mapping {}".format(mapping_name)) assert len(mapping_cls) == 1 mapping_cls = mapping_cls[0] mapping = mapping_cls() codemeta_doc = mapping.translate(file.read()) click.echo(json.dumps(codemeta_doc, indent=4)) @indexer_cli_group.group("schedule") @click.option("--scheduler-url", "-s", default=None, help="URL of the scheduler API") @click.option( "--indexer-storage-url", "-i", default=None, help="URL of the indexer storage API" ) @click.option( "--storage-url", "-g", default=None, help="URL of the (graph) storage API" ) @click.option( "--dry-run/--no-dry-run", is_flag=True, default=False, help="List only what would be scheduled.", ) @click.pass_context def schedule(ctx, scheduler_url, storage_url, indexer_storage_url, dry_run): """Manipulate Software Heritage Indexer tasks. 
Via SWH Scheduler's API.""" from swh.indexer.storage import get_indexer_storage from swh.scheduler import get_scheduler from swh.storage import get_storage ctx.obj["indexer_storage"] = _get_api( get_indexer_storage, ctx.obj["config"], "indexer_storage", indexer_storage_url ) ctx.obj["storage"] = _get_api( get_storage, ctx.obj["config"], "storage", storage_url ) ctx.obj["scheduler"] = _get_api( get_scheduler, ctx.obj["config"], "scheduler", scheduler_url ) if dry_run: ctx.obj["scheduler"] = None def list_origins_by_producer(idx_storage, mappings, tool_ids) -> Iterator[str]: next_page_token = "" limit = 10000 while next_page_token is not None: result = idx_storage.origin_intrinsic_metadata_search_by_producer( page_token=next_page_token, limit=limit, ids_only=True, mappings=mappings or None, tool_ids=tool_ids or None, ) next_page_token = result.next_page_token yield from result.results @schedule.command("reindex_origin_metadata") @click.option( "--batch-size", "-b", "origin_batch_size", default=10, show_default=True, type=int, help="Number of origins per task", ) @click.option( "--tool-id", "-t", "tool_ids", type=int, multiple=True, help="Restrict search of old metadata to this/these tool ids.", ) @click.option( "--mapping", "-m", "mappings", multiple=True, help="Mapping(s) that should be re-scheduled (eg. 'npm', 'gemspec', 'maven')", ) @click.option( "--task-type", default="index-origin-metadata", show_default=True, help="Name of the task type to schedule.", ) @click.pass_context def schedule_origin_metadata_reindex( ctx, origin_batch_size, tool_ids, mappings, task_type ): """Schedules indexing tasks for origins that were already indexed.""" from swh.scheduler.cli_utils import schedule_origin_batches idx_storage = ctx.obj["indexer_storage"] scheduler = ctx.obj["scheduler"] origins = list_origins_by_producer(idx_storage, mappings, tool_ids) kwargs = {"retries_left": 1} schedule_origin_batches(scheduler, task_type, origins, origin_batch_size, kwargs) @indexer_cli_group.command("journal-client") @click.option("--scheduler-url", "-s", default=None, help="URL of the scheduler API") @click.option( "--origin-metadata-task-type", default="index-origin-metadata", help="Name of the task running the origin metadata indexer.", ) @click.option( "--broker", "brokers", type=str, multiple=True, help="Kafka broker to connect to." ) @click.option( "--prefix", type=str, default=None, help="Prefix of Kafka topic names to read from." ) @click.option("--group-id", type=str, help="Consumer/group id for reading from Kafka.") @click.option( "--stop-after-objects", "-m", default=None, type=int, help="Maximum number of objects to replay. 
Default is to run forever.", ) @click.pass_context def journal_client( ctx, scheduler_url, origin_metadata_task_type, brokers, prefix, group_id, stop_after_objects, ): """Listens for new objects from the SWH Journal, and schedules tasks to run relevant indexers (currently, only origin-intrinsic-metadata) on these new objects.""" import functools from swh.indexer.journal_client import process_journal_objects from swh.journal.client import get_journal_client from swh.scheduler import get_scheduler - scheduler = _get_api(get_scheduler, ctx.obj["config"], "scheduler", scheduler_url) + cfg = ctx.obj["config"] + journal_cfg = cfg.get("journal", {}) + + scheduler = _get_api(get_scheduler, cfg, "scheduler", scheduler_url) + + brokers = brokers or journal_cfg.get("brokers") + if not brokers: + raise ValueError("The brokers configuration is mandatory.") + + prefix = prefix or journal_cfg.get("prefix") + group_id = group_id or journal_cfg.get("group_id") + origin_metadata_task_type = origin_metadata_task_type or journal_cfg.get( + "origin_metadata_task_type" + ) + stop_after_objects = stop_after_objects or journal_cfg.get("stop_after_objects") client = get_journal_client( cls="kafka", brokers=brokers, prefix=prefix, group_id=group_id, - object_types=["origin_visit"], + object_types=["origin_visit_status"], stop_after_objects=stop_after_objects, ) worker_fn = functools.partial( process_journal_objects, scheduler=scheduler, task_names={"origin_metadata": origin_metadata_task_type,}, ) try: client.process(worker_fn) except KeyboardInterrupt: ctx.exit(0) else: print("Done.") finally: client.close() @indexer_cli_group.command("rpc-serve") @click.argument("config-path", required=True) @click.option("--host", default="0.0.0.0", help="Host to run the server") @click.option("--port", default=5007, type=click.INT, help="Binding port of the server") @click.option( "--debug/--nodebug", default=True, help="Indicates if the server should run in debug mode", ) def rpc_server(config_path, host, port, debug): """Starts a Software Heritage Indexer RPC HTTP server.""" from swh.indexer.storage.api.server import app, load_and_check_config api_cfg = load_and_check_config(config_path, type="any") app.config.update(api_cfg) app.run(host, port=int(port), debug=bool(debug)) def main(): return indexer_cli_group(auto_envvar_prefix="SWH_INDEXER") if __name__ == "__main__": main() diff --git a/swh/indexer/indexer.py b/swh/indexer/indexer.py index 3c76c41..984ffbf 100644 --- a/swh/indexer/indexer.py +++ b/swh/indexer/indexer.py @@ -1,614 +1,613 @@ # Copyright (C) 2016-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import abc from contextlib import contextmanager import logging import os import shutil import tempfile from typing import Any, Dict, Generic, Iterator, List, Optional, Set, TypeVar, Union import warnings from swh.core import utils from swh.core.config import load_from_envvar, merge_configs from swh.indexer.storage import INDEXER_CFG_KEY, PagedResult, Sha1, get_indexer_storage from swh.indexer.storage.interface import IndexerStorageInterface from swh.model import hashutil from swh.model.model import Revision, Sha1Git from swh.objstorage.exc import ObjNotFoundError from swh.objstorage.factory import get_objstorage from swh.scheduler import CONFIG as SWH_CONFIG from swh.storage import get_storage from swh.storage.interface import 
StorageInterface @contextmanager def write_to_temp(filename: str, data: bytes, working_directory: str) -> Iterator[str]: """Write the sha1's content in a temporary file. Args: filename: one of sha1's many filenames data: the sha1's content to write in temporary file working_directory: the directory into which the file is written Returns: The path to the temporary file created. That file is filled in with the raw content's data. """ os.makedirs(working_directory, exist_ok=True) temp_dir = tempfile.mkdtemp(dir=working_directory) content_path = os.path.join(temp_dir, filename) with open(content_path, "wb") as f: f.write(data) yield content_path shutil.rmtree(temp_dir) DEFAULT_CONFIG = { INDEXER_CFG_KEY: {"cls": "memory"}, "storage": {"cls": "memory"}, "objstorage": {"cls": "memory"}, } TId = TypeVar("TId") """type of the ids of index()ed objects.""" TData = TypeVar("TData") """type of the objects passed to index().""" TResult = TypeVar("TResult") """return type of index()""" class BaseIndexer(Generic[TId, TData, TResult], metaclass=abc.ABCMeta): """Base class for indexers to inherit from. The main entry point is the :func:`run` function which is in charge of triggering the computations on the batch dict/ids received. Indexers can: - filter out ids whose data has already been indexed. - retrieve ids data from storage or objstorage - index this data depending on the object and store the result in storage. To implement a new object type indexer, inherit from the BaseIndexer and implement indexing: :meth:`~BaseIndexer.run`: object_ids are different depending on object. For example: sha1 for content, sha1_git for revision, directory, release, and id for origin To implement a new concrete indexer, inherit from the object level classes: :class:`ContentIndexer`, :class:`RevisionIndexer`, :class:`OriginIndexer`. Then you need to implement the following functions: :meth:`~BaseIndexer.filter`: filter out data already indexed (in storage). :meth:`~BaseIndexer.index_object`: compute index on id with data (retrieved from the storage or the objstorage by the id key) and return the resulting index computation. :meth:`~BaseIndexer.persist_index_computations`: persist the results of multiple index computations in the storage. The new indexer implementation can also override the following functions: :meth:`~BaseIndexer.prepare`: Configuration preparation for the indexer. When overriding, this must call the `super().prepare()` instruction. :meth:`~BaseIndexer.check`: Configuration check for the indexer. When overriding, this must call the `super().check()` instruction. :meth:`~BaseIndexer.register_tools`: This should return a dict of the tool(s) to use when indexing or filtering. """ results: List[TResult] USE_TOOLS = True catch_exceptions = True """Prevents exceptions in `index()` from raising too high. Set to False in tests to properly catch all exceptions.""" scheduler: Any storage: StorageInterface objstorage: Any idx_storage: IndexerStorageInterface def __init__(self, config=None, **kw) -> None: """Prepare and check that the indexer is ready to run. """ super().__init__() if config is not None: self.config = config elif SWH_CONFIG: self.config = SWH_CONFIG.copy() else: self.config = load_from_envvar() self.config = merge_configs(DEFAULT_CONFIG, self.config) self.prepare() self.check() self.log.debug("%s: config=%s", self, self.config) def prepare(self) -> None: """Prepare the indexer's needed runtime configuration. Without this step, the indexer cannot possibly run. 
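A minimal configuration sketch this method can consume (values are illustrative, mirroring DEFAULT_CONFIG; note that backend arguments now sit next to ``cls`` instead of under a nested ``args`` key)::

    storage:
      cls: memory
    objstorage:
      cls: memory
    indexer_storage:
      cls: memory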
""" config_storage = self.config.get("storage") if config_storage: self.storage = get_storage(**config_storage) - objstorage = self.config["objstorage"] - self.objstorage = get_objstorage(objstorage["cls"], objstorage["args"]) + self.objstorage = get_objstorage(**self.config["objstorage"]) idx_storage = self.config[INDEXER_CFG_KEY] self.idx_storage = get_indexer_storage(**idx_storage) _log = logging.getLogger("requests.packages.urllib3.connectionpool") _log.setLevel(logging.WARN) self.log = logging.getLogger("swh.indexer") if self.USE_TOOLS: self.tools = list(self.register_tools(self.config.get("tools", []))) self.results = [] @property def tool(self) -> Dict: return self.tools[0] def check(self) -> None: """Check the indexer's configuration is ok before proceeding. If ok, does nothing. If not raise error. """ if self.USE_TOOLS and not self.tools: raise ValueError("Tools %s is unknown, cannot continue" % self.tools) def _prepare_tool(self, tool: Dict[str, Any]) -> Dict[str, Any]: """Prepare the tool dict to be compliant with the storage api. """ return {"tool_%s" % key: value for key, value in tool.items()} def register_tools( self, tools: Union[Dict[str, Any], List[Dict[str, Any]]] ) -> List[Dict[str, Any]]: """Permit to register tools to the storage. Add a sensible default which can be overridden if not sufficient. (For now, all indexers use only one tool) Expects the self.config['tools'] property to be set with one or more tools. Args: tools: Either a dict or a list of dict. Returns: list: List of dicts with additional id key. Raises: ValueError: if not a list nor a dict. """ if isinstance(tools, list): tools = list(map(self._prepare_tool, tools)) elif isinstance(tools, dict): tools = [self._prepare_tool(tools)] else: raise ValueError("Configuration tool(s) must be a dict or list!") if tools: return self.idx_storage.indexer_configuration_add(tools) else: return [] def index(self, id: TId, data: Optional[TData], **kwargs) -> List[TResult]: """Index computation for the id and associated raw data. Args: id: identifier or Dict object data: id's data from storage or objstorage depending on object type Returns: dict: a dict that makes sense for the :meth:`.persist_index_computations` method. """ raise NotImplementedError() def filter(self, ids: List[TId]) -> Iterator[TId]: """Filter missing ids for that particular indexer. Args: ids: list of ids Yields: iterator of missing ids """ yield from ids @abc.abstractmethod def persist_index_computations(self, results: List[TResult]) -> Dict[str, int]: """Persist the computation resulting from the index. Args: results: List of results. One result is the result of the index function. Returns: a summary dict of what has been inserted in the storage """ return {} class ContentIndexer(BaseIndexer[Sha1, bytes, TResult], Generic[TResult]): """A content indexer working on a list of ids directly. To work on indexer partition, use the :class:`ContentPartitionIndexer` instead. Note: :class:`ContentIndexer` is not an instantiable object. To use it, one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class. 
""" def run(self, ids: List[Sha1], **kwargs) -> Dict: """Given a list of ids: - retrieve the content from the storage - execute the indexing computations - store the results Args: ids (Iterable[Union[bytes, str]]): sha1's identifier list **kwargs: passed to the `index` method Returns: A summary Dict of the task's status """ if "policy_update" in kwargs: warnings.warn( "'policy_update' argument is deprecated and ignored.", DeprecationWarning, ) del kwargs["policy_update"] sha1s = [ hashutil.hash_to_bytes(id_) if isinstance(id_, str) else id_ for id_ in ids ] results = [] summary: Dict = {"status": "uneventful"} try: for sha1 in sha1s: try: raw_content = self.objstorage.get(sha1) except ObjNotFoundError: self.log.warning( "Content %s not found in objstorage" % hashutil.hash_to_hex(sha1) ) continue res = self.index(sha1, raw_content, **kwargs) if res: # If no results, skip it results.extend(res) summary["status"] = "eventful" summary = self.persist_index_computations(results) self.results = results except Exception: if not self.catch_exceptions: raise self.log.exception("Problem when reading contents metadata.") summary["status"] = "failed" return summary class ContentPartitionIndexer(BaseIndexer[Sha1, bytes, TResult], Generic[TResult]): """A content partition indexer. This expects as input a partition_id and a nb_partitions. This will then index the contents within that partition. To work on a list of ids, use the :class:`ContentIndexer` instead. Note: :class:`ContentPartitionIndexer` is not an instantiable object. To use it, one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class. """ @abc.abstractmethod def indexed_contents_in_partition( self, partition_id: int, nb_partitions: int, page_token: Optional[str] = None ) -> PagedResult[Sha1]: """Retrieve indexed contents within range [start, end]. Args: partition_id: Index of the partition to fetch nb_partitions: Total number of partitions to split into page_token: opaque token used for pagination Returns: PagedResult of Sha1. If next_page_token is None, there is no more data to fetch """ pass def _list_contents_to_index( self, partition_id: int, nb_partitions: int, indexed: Set[Sha1] ) -> Iterator[Sha1]: """Compute from storage the new contents to index in the partition_id . The already indexed contents are skipped. Args: partition_id: Index of the partition to fetch data from nb_partitions: Total number of partition indexed: Set of content already indexed. Yields: Sha1 id (bytes) of contents to index """ if not isinstance(partition_id, int) or not isinstance(nb_partitions, int): raise TypeError( f"identifiers must be int, not {partition_id!r} and {nb_partitions!r}." ) next_page_token = None while True: result = self.storage.content_get_partition( partition_id, nb_partitions, page_token=next_page_token ) contents = result.results for c in contents: _id = hashutil.hash_to_bytes(c.sha1) if _id in indexed: continue yield _id next_page_token = result.next_page_token if next_page_token is None: break def _index_contents( self, partition_id: int, nb_partitions: int, indexed: Set[Sha1], **kwargs: Any ) -> Iterator[TResult]: """Index the contents within the partition_id. Args: start: Starting bound from range identifier end: End range identifier indexed: Set of content already indexed. 
Yields: indexing result as dict to persist in the indexer backend """ for sha1 in self._list_contents_to_index(partition_id, nb_partitions, indexed): try: raw_content = self.objstorage.get(sha1) except ObjNotFoundError: self.log.warning(f"Content {sha1.hex()} not found in objstorage") continue yield from self.index(sha1, raw_content, **kwargs) def _index_with_skipping_already_done( self, partition_id: int, nb_partitions: int ) -> Iterator[TResult]: """Index not already indexed contents within the partition partition_id Args: partition_id: Index of the partition to fetch nb_partitions: Total number of partitions to split into Yields: indexing result as dict to persist in the indexer backend """ next_page_token = None contents = set() while True: indexed_page = self.indexed_contents_in_partition( partition_id, nb_partitions, page_token=next_page_token ) for sha1 in indexed_page.results: contents.add(sha1) yield from self._index_contents(partition_id, nb_partitions, contents) next_page_token = indexed_page.next_page_token if next_page_token is None: break def run( self, partition_id: int, nb_partitions: int, skip_existing: bool = True, **kwargs, ) -> Dict: """Given a partition of content ids, index the contents within. Either the indexer is incremental (filter out existing computed data) or it computes everything from scratch. Args: partition_id: Index of the partition to fetch nb_partitions: Total number of partitions to split into skip_existing: Skip existing indexed data (default) or not **kwargs: passed to the `index` method Returns: dict with the indexing task status """ summary: Dict[str, Any] = {"status": "uneventful"} count = 0 try: if skip_existing: gen = self._index_with_skipping_already_done( partition_id, nb_partitions ) else: gen = self._index_contents(partition_id, nb_partitions, indexed=set([])) count_object_added_key: Optional[str] = None for contents in utils.grouper(gen, n=self.config["write_batch_size"]): res = self.persist_index_computations(list(contents)) if not count_object_added_key: count_object_added_key = list(res.keys())[0] count += res[count_object_added_key] if count > 0: summary["status"] = "eventful" except Exception: if not self.catch_exceptions: raise self.log.exception("Problem when computing metadata.") summary["status"] = "failed" if count > 0 and count_object_added_key: summary[count_object_added_key] = count return summary class OriginIndexer(BaseIndexer[str, None, TResult], Generic[TResult]): """An object type indexer, inherits from the :class:`BaseIndexer` and implements Origin indexing using the run method Note: the :class:`OriginIndexer` is not an instantiable object. To use it in another context one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class. """ def run(self, origin_urls: List[str], **kwargs) -> Dict: """Given a list of origin urls: - retrieve origins from storage - execute the indexing computations - store the results Args: origin_urls: list of origin urls. 
**kwargs: passed to the `index` method """ if "policy_update" in kwargs: warnings.warn( "'policy_update' argument is deprecated and ignored.", DeprecationWarning, ) del kwargs["policy_update"] summary: Dict[str, Any] = {"status": "uneventful"} try: results = self.index_list(origin_urls, **kwargs) except Exception: if not self.catch_exceptions: raise summary["status"] = "failed" return summary summary_persist = self.persist_index_computations(results) self.results = results if summary_persist: for value in summary_persist.values(): if value > 0: summary["status"] = "eventful" summary.update(summary_persist) return summary def index_list(self, origin_urls: List[str], **kwargs) -> List[TResult]: results = [] for origin_url in origin_urls: try: results.extend(self.index(origin_url, **kwargs)) except Exception: self.log.exception("Problem when processing origin %s", origin_url) raise return results class RevisionIndexer(BaseIndexer[Sha1Git, Revision, TResult], Generic[TResult]): """An object type indexer, inherits from the :class:`BaseIndexer` and implements Revision indexing using the run method Note: the :class:`RevisionIndexer` is not an instantiable object. To use it in another context one should inherit from this class and override the methods mentioned in the :class:`BaseIndexer` class. """ def run(self, ids: List[Sha1Git], **kwargs) -> Dict: """Given a list of sha1_gits: - retrieve revisions from storage - execute the indexing computations - store the results Args: ids: sha1_git's identifier list """ if "policy_update" in kwargs: warnings.warn( "'policy_update' argument is deprecated and ignored.", DeprecationWarning, ) del kwargs["policy_update"] summary: Dict[str, Any] = {"status": "uneventful"} results = [] revision_ids = [ hashutil.hash_to_bytes(id_) if isinstance(id_, str) else id_ for id_ in ids ] for (rev_id, rev) in zip(revision_ids, self.storage.revision_get(revision_ids)): if not rev: # TODO: call self.index() with rev=None? 
self.log.warning( "Revision %s not found in storage", hashutil.hash_to_hex(rev_id) ) continue try: results.extend(self.index(rev_id, rev)) except Exception: if not self.catch_exceptions: raise self.log.exception("Problem when processing revision") summary["status"] = "failed" return summary summary_persist = self.persist_index_computations(results) if summary_persist: for value in summary_persist.values(): if value > 0: summary["status"] = "eventful" summary.update(summary_persist) self.results = results return summary diff --git a/swh/indexer/journal_client.py b/swh/indexer/journal_client.py index 06842a8..32f4a42 100644 --- a/swh/indexer/journal_client.py +++ b/swh/indexer/journal_client.py @@ -1,44 +1,44 @@ # Copyright (C) 2018-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import logging from swh.core.utils import grouper from swh.scheduler.utils import create_task_dict MAX_ORIGINS_PER_TASK = 100 def process_journal_objects(messages, *, scheduler, task_names): """Worker function for `JournalClient.process(worker_fn)`, after currification of `scheduler` and `task_names`.""" - assert set(messages) == {"origin_visit"}, set(messages) - process_origin_visits(messages["origin_visit"], scheduler, task_names) + assert set(messages) == {"origin_visit_status"}, set(messages) + process_origin_visits(messages["origin_visit_status"], scheduler, task_names) def process_origin_visits(visits, scheduler, task_names): task_dicts = [] logging.debug("processing origin visits %r", visits) if task_names.get("origin_metadata"): visits = [visit for visit in visits if visit["status"] == "full"] visit_batches = grouper(visits, MAX_ORIGINS_PER_TASK) for visit_batch in visit_batches: visit_urls = [] for visit in visit_batch: if isinstance(visit["origin"], str): visit_urls.append(visit["origin"]) else: visit_urls.append(visit["origin"]["url"]) task_dicts.append( create_task_dict( task_names["origin_metadata"], "oneshot", visit_urls, retries_left=1, ) ) if task_dicts: scheduler.create_tasks(task_dicts) diff --git a/swh/indexer/storage/__init__.py b/swh/indexer/storage/__init__.py index 422ed19..8d902ce 100644 --- a/swh/indexer/storage/__init__.py +++ b/swh/indexer/storage/__init__.py @@ -1,697 +1,723 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from collections import Counter from importlib import import_module import json from typing import Dict, Iterable, List, Optional, Tuple, Union import warnings import psycopg2 import psycopg2.pool from swh.core.db.common import db_transaction from swh.indexer.storage.interface import IndexerStorageInterface from swh.model.hashutil import hash_to_bytes, hash_to_hex from swh.model.model import SHA1_SIZE from swh.storage.exc import StorageDBError from swh.storage.utils import get_partition_bounds_bytes from . 
import converters from .db import Db from .exc import DuplicateId, IndexerStorageArgumentException from .interface import PagedResult, Sha1 from .metrics import process_metrics, send_metric, timed from .model import ( ContentCtagsRow, ContentLanguageRow, ContentLicenseRow, ContentMetadataRow, ContentMimetypeRow, OriginIntrinsicMetadataRow, RevisionIntrinsicMetadataRow, ) +from .writer import JournalWriter INDEXER_CFG_KEY = "indexer_storage" MAPPING_NAMES = ["codemeta", "gemspec", "maven", "npm", "pkg-info"] SERVER_IMPLEMENTATIONS: Dict[str, str] = { "local": ".IndexerStorage", "remote": ".api.client.RemoteStorage", "memory": ".in_memory.IndexerStorage", } def get_indexer_storage(cls: str, **kwargs) -> IndexerStorageInterface: """Instantiate an indexer storage implementation of class `cls` with arguments `kwargs`. Args: cls: indexer storage class (local, remote or memory) kwargs: dictionary of arguments passed to the indexer storage class constructor Returns: an instance of swh.indexer.storage Raises: ValueError if passed an unknown storage class. """ if "args" in kwargs: warnings.warn( 'Explicit "args" key is deprecated, use keys directly instead.', DeprecationWarning, ) kwargs = kwargs["args"] class_path = SERVER_IMPLEMENTATIONS.get(cls) if class_path is None: raise ValueError( f"Unknown indexer storage class `{cls}`. " f"Supported: {', '.join(SERVER_IMPLEMENTATIONS)}" ) (module_path, class_name) = class_path.rsplit(".", 1) module = import_module(module_path if module_path else ".", package=__package__) BackendClass = getattr(module, class_name) check_config = kwargs.pop("check_config", {}) idx_storage = BackendClass(**kwargs) if check_config: if not idx_storage.check_config(**check_config): raise EnvironmentError("Indexer storage check config failed") return idx_storage def check_id_duplicates(data): """ If any two row models in `data` have the same unique key, raises a `ValueError`. Values associated to the key must be hashable. Args: data (List[dict]): List of dictionaries to be inserted >>> check_id_duplicates([ ... ContentLanguageRow(id=b'foo', indexer_configuration_id=42, lang="python"), ... ContentLanguageRow(id=b'foo', indexer_configuration_id=32, lang="python"), ... ]) >>> check_id_duplicates([ ... ContentLanguageRow(id=b'foo', indexer_configuration_id=42, lang="python"), ... ContentLanguageRow(id=b'foo', indexer_configuration_id=42, lang="python"), ... ]) Traceback (most recent call last): ... 
swh.indexer.storage.exc.DuplicateId: [{'id': b'foo', 'indexer_configuration_id': 42}] """ # noqa counter = Counter(tuple(sorted(item.unique_key().items())) for item in data) duplicates = [id_ for (id_, count) in counter.items() if count >= 2] if duplicates: raise DuplicateId(list(map(dict, duplicates))) class IndexerStorage: """SWH Indexer Storage """ - def __init__(self, db, min_pool_conns=1, max_pool_conns=10): + def __init__(self, db, min_pool_conns=1, max_pool_conns=10, journal_writer=None): """ Args: - db_conn: either a libpq connection string, or a psycopg2 connection + db: either a libpq connection string, or a psycopg2 connection + journal_writer: configuration passed to + `swh.journal.writer.get_journal_writer` """ + self.journal_writer = JournalWriter(self._tool_get_from_id, journal_writer) try: if isinstance(db, psycopg2.extensions.connection): self._pool = None self._db = Db(db) else: self._pool = psycopg2.pool.ThreadedConnectionPool( min_pool_conns, max_pool_conns, db ) self._db = None except psycopg2.OperationalError as e: raise StorageDBError(e) def get_db(self): if self._db: return self._db return Db.from_pool(self._pool) def put_db(self, db): if db is not self._db: db.put_conn() @timed @db_transaction() def check_config(self, *, check_write, db=None, cur=None): # Check permissions on one of the tables if check_write: check = "INSERT" else: check = "SELECT" cur.execute( "select has_table_privilege(current_user, 'content_mimetype', %s)", # noqa (check,), ) return cur.fetchone()[0] @timed @db_transaction() def content_mimetype_missing( self, mimetypes: Iterable[Dict], db=None, cur=None ) -> List[Tuple[Sha1, int]]: return [obj[0] for obj in db.content_mimetype_missing_from_list(mimetypes, cur)] @timed @db_transaction() def get_partition( self, indexer_type: str, indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000, with_textual_data=False, db=None, cur=None, ) -> PagedResult[Sha1]: """Retrieve ids of content with `indexer_type` within partition partition_id bound by limit. Args: **indexer_type**: Type of data content to index (mimetype, language, etc...) **indexer_configuration_id**: The tool used to index data **partition_id**: index of the partition to fetch **nb_partitions**: total number of partitions to split into **page_token**: opaque token used for pagination **limit**: Limit result (defaults to 1000) **with_textual_data** (bool): Deal with only textual content (True) or all content (all contents by default, False) Raises: IndexerStorageArgumentException for: - limit set to None - wrong indexer_type provided Returns: PagedResult of Sha1. If next_page_token is None, there is no more data to fetch """ if limit is None: raise IndexerStorageArgumentException("limit should not be None") if indexer_type not in db.content_indexer_names: err = f"Wrong type.
Should be one of [{','.join(db.content_indexer_names)}]" raise IndexerStorageArgumentException(err) start, end = get_partition_bounds_bytes(partition_id, nb_partitions, SHA1_SIZE) if page_token is not None: start = hash_to_bytes(page_token) if end is None: end = b"\xff" * SHA1_SIZE next_page_token: Optional[str] = None ids = [ row[0] for row in db.content_get_range( indexer_type, start, end, indexer_configuration_id, limit=limit + 1, with_textual_data=with_textual_data, cur=cur, ) ] if len(ids) >= limit: next_page_token = hash_to_hex(ids[-1]) ids = ids[:limit] assert len(ids) <= limit return PagedResult(results=ids, next_page_token=next_page_token) @timed @db_transaction() def content_mimetype_get_partition( self, indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000, db=None, cur=None, ) -> PagedResult[Sha1]: return self.get_partition( "mimetype", indexer_configuration_id, partition_id, nb_partitions, page_token=page_token, limit=limit, db=db, cur=cur, ) @timed @process_metrics @db_transaction() def content_mimetype_add( self, mimetypes: List[ContentMimetypeRow], db=None, cur=None, ) -> Dict[str, int]: check_id_duplicates(mimetypes) mimetypes.sort(key=lambda m: m.id) + self.journal_writer.write_additions("content_mimetype", mimetypes) db.mktemp_content_mimetype(cur) db.copy_to( [m.to_dict() for m in mimetypes], "tmp_content_mimetype", ["id", "mimetype", "encoding", "indexer_configuration_id"], cur, ) count = db.content_mimetype_add_from_temp(cur) return {"content_mimetype:add": count} @timed @db_transaction() def content_mimetype_get( self, ids: Iterable[Sha1], db=None, cur=None ) -> List[ContentMimetypeRow]: return [ ContentMimetypeRow.from_dict( converters.db_to_mimetype(dict(zip(db.content_mimetype_cols, c))) ) for c in db.content_mimetype_get_from_list(ids, cur) ] @timed @db_transaction() def content_language_missing( self, languages: Iterable[Dict], db=None, cur=None ) -> List[Tuple[Sha1, int]]: return [obj[0] for obj in db.content_language_missing_from_list(languages, cur)] @timed @db_transaction() def content_language_get( self, ids: Iterable[Sha1], db=None, cur=None ) -> List[ContentLanguageRow]: return [ ContentLanguageRow.from_dict( converters.db_to_language(dict(zip(db.content_language_cols, c))) ) for c in db.content_language_get_from_list(ids, cur) ] @timed @process_metrics @db_transaction() def content_language_add( self, languages: List[ContentLanguageRow], db=None, cur=None, ) -> Dict[str, int]: check_id_duplicates(languages) languages.sort(key=lambda m: m.id) + self.journal_writer.write_additions("content_language", languages) db.mktemp_content_language(cur) # empty language is mapped to 'unknown' db.copy_to( ( { "id": lang.id, "lang": lang.lang or "unknown", "indexer_configuration_id": lang.indexer_configuration_id, } for lang in languages ), "tmp_content_language", ["id", "lang", "indexer_configuration_id"], cur, ) count = db.content_language_add_from_temp(cur) return {"content_language:add": count} @timed @db_transaction() def content_ctags_missing( self, ctags: Iterable[Dict], db=None, cur=None ) -> List[Tuple[Sha1, int]]: return [obj[0] for obj in db.content_ctags_missing_from_list(ctags, cur)] @timed @db_transaction() def content_ctags_get( self, ids: Iterable[Sha1], db=None, cur=None ) -> List[ContentCtagsRow]: return [ ContentCtagsRow.from_dict( converters.db_to_ctags(dict(zip(db.content_ctags_cols, c))) ) for c in db.content_ctags_get_from_list(ids, cur) ] @timed @process_metrics 
@db_transaction() def content_ctags_add( self, ctags: List[ContentCtagsRow], db=None, cur=None, ) -> Dict[str, int]: check_id_duplicates(ctags) ctags.sort(key=lambda m: m.id) + self.journal_writer.write_additions("content_ctags", ctags) db.mktemp_content_ctags(cur) db.copy_to( [ctag.to_dict() for ctag in ctags], tblname="tmp_content_ctags", columns=["id", "name", "kind", "line", "lang", "indexer_configuration_id"], cur=cur, ) count = db.content_ctags_add_from_temp(cur) return {"content_ctags:add": count} @timed @db_transaction() def content_ctags_search( self, expression: str, limit: int = 10, last_sha1: Optional[Sha1] = None, db=None, cur=None, ) -> List[ContentCtagsRow]: return [ ContentCtagsRow.from_dict( converters.db_to_ctags(dict(zip(db.content_ctags_cols, obj))) ) for obj in db.content_ctags_search(expression, last_sha1, limit, cur=cur) ] @timed @db_transaction() def content_fossology_license_get( self, ids: Iterable[Sha1], db=None, cur=None ) -> List[ContentLicenseRow]: return [ ContentLicenseRow.from_dict( converters.db_to_fossology_license( dict(zip(db.content_fossology_license_cols, c)) ) ) for c in db.content_fossology_license_get_from_list(ids, cur) ] @timed @process_metrics @db_transaction() def content_fossology_license_add( self, licenses: List[ContentLicenseRow], db=None, cur=None, ) -> Dict[str, int]: check_id_duplicates(licenses) licenses.sort(key=lambda m: m.id) + self.journal_writer.write_additions("content_fossology_license", licenses) db.mktemp_content_fossology_license(cur) db.copy_to( [license.to_dict() for license in licenses], tblname="tmp_content_fossology_license", columns=["id", "license", "indexer_configuration_id"], cur=cur, ) count = db.content_fossology_license_add_from_temp(cur) return {"content_fossology_license:add": count} @timed @db_transaction() def content_fossology_license_get_partition( self, indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000, db=None, cur=None, ) -> PagedResult[Sha1]: return self.get_partition( "fossology_license", indexer_configuration_id, partition_id, nb_partitions, page_token=page_token, limit=limit, with_textual_data=True, db=db, cur=cur, ) @timed @db_transaction() def content_metadata_missing( self, metadata: Iterable[Dict], db=None, cur=None ) -> List[Tuple[Sha1, int]]: return [obj[0] for obj in db.content_metadata_missing_from_list(metadata, cur)] @timed @db_transaction() def content_metadata_get( self, ids: Iterable[Sha1], db=None, cur=None ) -> List[ContentMetadataRow]: return [ ContentMetadataRow.from_dict( converters.db_to_metadata(dict(zip(db.content_metadata_cols, c))) ) for c in db.content_metadata_get_from_list(ids, cur) ] @timed @process_metrics @db_transaction() def content_metadata_add( self, metadata: List[ContentMetadataRow], db=None, cur=None, ) -> Dict[str, int]: check_id_duplicates(metadata) metadata.sort(key=lambda m: m.id) + self.journal_writer.write_additions("content_metadata", metadata) db.mktemp_content_metadata(cur) db.copy_to( [m.to_dict() for m in metadata], "tmp_content_metadata", ["id", "metadata", "indexer_configuration_id"], cur, ) count = db.content_metadata_add_from_temp(cur) return { "content_metadata:add": count, } @timed @db_transaction() def revision_intrinsic_metadata_missing( self, metadata: Iterable[Dict], db=None, cur=None ) -> List[Tuple[Sha1, int]]: return [ obj[0] for obj in db.revision_intrinsic_metadata_missing_from_list(metadata, cur) ] @timed @db_transaction() def revision_intrinsic_metadata_get( self, ids: 
Iterable[Sha1], db=None, cur=None ) -> List[RevisionIntrinsicMetadataRow]: return [ RevisionIntrinsicMetadataRow.from_dict( converters.db_to_metadata( dict(zip(db.revision_intrinsic_metadata_cols, c)) ) ) for c in db.revision_intrinsic_metadata_get_from_list(ids, cur) ] @timed @process_metrics @db_transaction() def revision_intrinsic_metadata_add( self, metadata: List[RevisionIntrinsicMetadataRow], db=None, cur=None, ) -> Dict[str, int]: check_id_duplicates(metadata) metadata.sort(key=lambda m: m.id) + self.journal_writer.write_additions("revision_intrinsic_metadata", metadata) db.mktemp_revision_intrinsic_metadata(cur) db.copy_to( [m.to_dict() for m in metadata], "tmp_revision_intrinsic_metadata", ["id", "metadata", "mappings", "indexer_configuration_id"], cur, ) count = db.revision_intrinsic_metadata_add_from_temp(cur) return { "revision_intrinsic_metadata:add": count, } @timed @db_transaction() def origin_intrinsic_metadata_get( self, urls: Iterable[str], db=None, cur=None ) -> List[OriginIntrinsicMetadataRow]: return [ OriginIntrinsicMetadataRow.from_dict( converters.db_to_metadata( dict(zip(db.origin_intrinsic_metadata_cols, c)) ) ) for c in db.origin_intrinsic_metadata_get_from_list(urls, cur) ] @timed @process_metrics @db_transaction() def origin_intrinsic_metadata_add( self, metadata: List[OriginIntrinsicMetadataRow], db=None, cur=None, ) -> Dict[str, int]: check_id_duplicates(metadata) metadata.sort(key=lambda m: m.id) + self.journal_writer.write_additions("origin_intrinsic_metadata", metadata) db.mktemp_origin_intrinsic_metadata(cur) db.copy_to( [m.to_dict() for m in metadata], "tmp_origin_intrinsic_metadata", ["id", "metadata", "indexer_configuration_id", "from_revision", "mappings"], cur, ) count = db.origin_intrinsic_metadata_add_from_temp(cur) return { "origin_intrinsic_metadata:add": count, } @timed @db_transaction() def origin_intrinsic_metadata_search_fulltext( self, conjunction: List[str], limit: int = 100, db=None, cur=None ) -> List[OriginIntrinsicMetadataRow]: return [ OriginIntrinsicMetadataRow.from_dict( converters.db_to_metadata( dict(zip(db.origin_intrinsic_metadata_cols, c)) ) ) for c in db.origin_intrinsic_metadata_search_fulltext( conjunction, limit=limit, cur=cur ) ] @timed @db_transaction() def origin_intrinsic_metadata_search_by_producer( self, page_token: str = "", limit: int = 100, ids_only: bool = False, mappings: Optional[List[str]] = None, tool_ids: Optional[List[int]] = None, db=None, cur=None, ) -> PagedResult[Union[str, OriginIntrinsicMetadataRow]]: assert isinstance(page_token, str) # we go to limit+1 to check whether we should add next_page_token in # the response rows = db.origin_intrinsic_metadata_search_by_producer( page_token, limit + 1, ids_only, mappings, tool_ids, cur ) next_page_token = None if ids_only: results = [origin for (origin,) in rows] if len(results) > limit: results[limit:] = [] next_page_token = results[-1] else: results = [ OriginIntrinsicMetadataRow.from_dict( converters.db_to_metadata( dict(zip(db.origin_intrinsic_metadata_cols, row)) ) ) for row in rows ] if len(results) > limit: results[limit:] = [] next_page_token = results[-1].id return PagedResult(results=results, next_page_token=next_page_token,) @timed @db_transaction() def origin_intrinsic_metadata_stats(self, db=None, cur=None): mapping_names = [m for m in MAPPING_NAMES] select_parts = [] # Count rows for each mapping for mapping_name in mapping_names: select_parts.append( ( "sum(case when (mappings @> ARRAY['%s']) " " then 1 else 0 end)" ) % mapping_name ) # Total 
select_parts.append("sum(1)") # Rows whose metadata has at least one key that is not '@context' select_parts.append( "sum(case when ('{}'::jsonb @> (metadata - '@context')) " " then 0 else 1 end)" ) cur.execute( "select " + ", ".join(select_parts) + " from origin_intrinsic_metadata" ) results = dict(zip(mapping_names + ["total", "non_empty"], cur.fetchone())) return { "total": results.pop("total"), "non_empty": results.pop("non_empty"), "per_mapping": results, } @timed @db_transaction() def indexer_configuration_add(self, tools, db=None, cur=None): db.mktemp_indexer_configuration(cur) db.copy_to( tools, "tmp_indexer_configuration", ["tool_name", "tool_version", "tool_configuration"], cur, ) tools = db.indexer_configuration_add_from_temp(cur) results = [dict(zip(db.indexer_configuration_cols, line)) for line in tools] send_metric( "indexer_configuration:add", len(results), method_name="indexer_configuration_add", ) return results @timed @db_transaction() def indexer_configuration_get(self, tool, db=None, cur=None): tool_conf = tool["tool_configuration"] if isinstance(tool_conf, dict): tool_conf = json.dumps(tool_conf) idx = db.indexer_configuration_get( tool["tool_name"], tool["tool_version"], tool_conf ) if not idx: return None return dict(zip(db.indexer_configuration_cols, idx)) + + @db_transaction() + def _tool_get_from_id(self, id_, db, cur): + tool = dict( + zip( + db.indexer_configuration_cols, + db.indexer_configuration_get_from_id(id_, cur), + ) + ) + return { + "id": tool["id"], + "name": tool["tool_name"], + "version": tool["tool_version"], + "configuration": tool["tool_configuration"], + } diff --git a/swh/indexer/storage/db.py b/swh/indexer/storage/db.py index b2cb63b..ec2e084 100644 --- a/swh/indexer/storage/db.py +++ b/swh/indexer/storage/db.py @@ -1,538 +1,550 @@ # Copyright (C) 2015-2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from typing import Dict, Iterable, Iterator, List from swh.core.db import BaseDb from swh.core.db.db_utils import execute_values_generator, stored_procedure from swh.model import hashutil from .interface import Sha1 class Db(BaseDb): """Proxy to the SWH Indexer DB, with wrappers around stored procedures """ content_mimetype_hash_keys = ["id", "indexer_configuration_id"] def _missing_from_list( self, table: str, data: Iterable[Dict], hash_keys: List[str], cur=None ): """Read from table the data with hash_keys that are missing. Args: table: Table name (e.g content_mimetype, content_language, etc...) data: Dict of data to read from hash_keys: List of keys to read in the data dict. Yields: The data which is missing from the db. """ cur = self._cursor(cur) keys = ", ".join(hash_keys) equality = " AND ".join(("t.%s = c.%s" % (key, key)) for key in hash_keys) yield from execute_values_generator( cur, """ select %s from (values %%s) as t(%s) where not exists ( select 1 from %s c where %s ) """ % (keys, keys, table, equality), (tuple(m[k] for k in hash_keys) for m in data), ) def content_mimetype_missing_from_list( self, mimetypes: Iterable[Dict], cur=None ) -> Iterator[Sha1]: """List missing mimetypes. 
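``origin_intrinsic_metadata_stats`` above packs three figures into one aggregate query: a count per mapping, a grand total, and the number of rows whose metadata carries more than a bare ``@context`` key. The in-memory backend later in this diff computes the same thing in Python; a trimmed, runnable restatement (the sample rows and the two mapping names are illustrative):

.. code-block:: python

    MAPPING_NAMES = ["npm", "codemeta"]  # stand-in for the real constant

    rows = [
        {"mappings": ["npm"], "metadata": {"@context": "...", "name": "pkg"}},
        {"mappings": ["npm", "codemeta"], "metadata": {"@context": "..."}},
    ]

    stats = {"total": 0, "non_empty": 0,
             "per_mapping": {m: 0 for m in MAPPING_NAMES}}
    for row in rows:
        stats["total"] += 1
        if set(row["metadata"]) - {"@context"}:   # any key besides @context
            stats["non_empty"] += 1
        for mapping in row["mappings"]:
            stats["per_mapping"][mapping] += 1

    print(stats)
    # {'total': 2, 'non_empty': 1, 'per_mapping': {'npm': 2, 'codemeta': 1}}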
""" yield from self._missing_from_list( "content_mimetype", mimetypes, self.content_mimetype_hash_keys, cur=cur ) content_mimetype_cols = [ "id", "mimetype", "encoding", "tool_id", "tool_name", "tool_version", "tool_configuration", ] @stored_procedure("swh_mktemp_content_mimetype") def mktemp_content_mimetype(self, cur=None): pass def content_mimetype_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute("select * from swh_content_mimetype_add()") return cur.fetchone()[0] def _convert_key(self, key, main_table="c"): """Convert keys according to specific use in the module. Args: key (str): Key expression to change according to the alias used in the query main_table (str): Alias to use for the main table. Default to c for content_{something}. Expected: Tables content_{something} being aliased as 'c' (something in {language, mimetype, ...}), table indexer_configuration being aliased as 'i'. """ if key == "id": return "%s.id" % main_table elif key == "tool_id": return "i.id as tool_id" elif key == "license": return ( """ ( select name from fossology_license where id = %s.license_id ) as licenses""" % main_table ) return key def _get_from_list(self, table, ids, cols, cur=None, id_col="id"): """Fetches entries from the `table` such that their `id` field (or whatever is given to `id_col`) is in `ids`. Returns the columns `cols`. The `cur` parameter is used to connect to the database. """ cur = self._cursor(cur) keys = map(self._convert_key, cols) query = """ select {keys} from (values %s) as t(id) inner join {table} c on c.{id_col}=t.id inner join indexer_configuration i on c.indexer_configuration_id=i.id; """.format( keys=", ".join(keys), id_col=id_col, table=table ) yield from execute_values_generator(cur, query, ((_id,) for _id in ids)) content_indexer_names = { "mimetype": "content_mimetype", "fossology_license": "content_fossology_license", } def content_get_range( self, content_type, start, end, indexer_configuration_id, limit=1000, with_textual_data=False, cur=None, ): """Retrieve contents with content_type, within range [start, end] bound by limit and associated to the given indexer configuration id. When asking to work on textual content, that filters on the mimetype table with any mimetype that is not binary. """ cur = self._cursor(cur) table = self.content_indexer_names[content_type] if with_textual_data: extra = """inner join content_mimetype cm on (t.id=cm.id and cm.mimetype like 'text/%%' and %(start)s <= cm.id and cm.id <= %(end)s) """ else: extra = "" query = f"""select t.id from {table} t {extra} where t.indexer_configuration_id=%(tool_id)s and %(start)s <= t.id and t.id <= %(end)s order by t.indexer_configuration_id, t.id limit %(limit)s""" cur.execute( query, { "start": start, "end": end, "tool_id": indexer_configuration_id, "limit": limit, }, ) yield from cur def content_mimetype_get_from_list(self, ids, cur=None): yield from self._get_from_list( "content_mimetype", ids, self.content_mimetype_cols, cur=cur ) content_language_hash_keys = ["id", "indexer_configuration_id"] def content_language_missing_from_list(self, languages, cur=None): """List missing languages. 
""" yield from self._missing_from_list( "content_language", languages, self.content_language_hash_keys, cur=cur ) content_language_cols = [ "id", "lang", "tool_id", "tool_name", "tool_version", "tool_configuration", ] @stored_procedure("swh_mktemp_content_language") def mktemp_content_language(self, cur=None): pass def content_language_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute("select * from swh_content_language_add()") return cur.fetchone()[0] def content_language_get_from_list(self, ids, cur=None): yield from self._get_from_list( "content_language", ids, self.content_language_cols, cur=cur ) content_ctags_hash_keys = ["id", "indexer_configuration_id"] def content_ctags_missing_from_list(self, ctags, cur=None): """List missing ctags. """ yield from self._missing_from_list( "content_ctags", ctags, self.content_ctags_hash_keys, cur=cur ) content_ctags_cols = [ "id", "name", "kind", "line", "lang", "tool_id", "tool_name", "tool_version", "tool_configuration", ] @stored_procedure("swh_mktemp_content_ctags") def mktemp_content_ctags(self, cur=None): pass def content_ctags_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute("select * from swh_content_ctags_add()") return cur.fetchone()[0] def content_ctags_get_from_list(self, ids, cur=None): cur = self._cursor(cur) keys = map(self._convert_key, self.content_ctags_cols) yield from execute_values_generator( cur, """ select %s from (values %%s) as t(id) inner join content_ctags c on c.id=t.id inner join indexer_configuration i on c.indexer_configuration_id=i.id order by line """ % ", ".join(keys), ((_id,) for _id in ids), ) def content_ctags_search(self, expression, last_sha1, limit, cur=None): cur = self._cursor(cur) if not last_sha1: query = """SELECT %s FROM swh_content_ctags_search(%%s, %%s)""" % ( ",".join(self.content_ctags_cols) ) cur.execute(query, (expression, limit)) else: if last_sha1 and isinstance(last_sha1, bytes): last_sha1 = "\\x%s" % hashutil.hash_to_hex(last_sha1) elif last_sha1: last_sha1 = "\\x%s" % last_sha1 query = """SELECT %s FROM swh_content_ctags_search(%%s, %%s, %%s)""" % ( ",".join(self.content_ctags_cols) ) cur.execute(query, (expression, limit, last_sha1)) yield from cur content_fossology_license_cols = [ "id", "tool_id", "tool_name", "tool_version", "tool_configuration", "license", ] @stored_procedure("swh_mktemp_content_fossology_license") def mktemp_content_fossology_license(self, cur=None): pass def content_fossology_license_add_from_temp(self, cur=None): """Add new licenses per content. """ cur = self._cursor(cur) cur.execute("select * from swh_content_fossology_license_add()") return cur.fetchone()[0] def content_fossology_license_get_from_list(self, ids, cur=None): """Retrieve licenses per id. """ cur = self._cursor(cur) keys = map(self._convert_key, self.content_fossology_license_cols) yield from execute_values_generator( cur, """ select %s from (values %%s) as t(id) inner join content_fossology_license c on t.id=c.id inner join indexer_configuration i on i.id=c.indexer_configuration_id """ % ", ".join(keys), ((_id,) for _id in ids), ) content_metadata_hash_keys = ["id", "indexer_configuration_id"] def content_metadata_missing_from_list(self, metadata, cur=None): """List missing metadata. 
""" yield from self._missing_from_list( "content_metadata", metadata, self.content_metadata_hash_keys, cur=cur ) content_metadata_cols = [ "id", "metadata", "tool_id", "tool_name", "tool_version", "tool_configuration", ] @stored_procedure("swh_mktemp_content_metadata") def mktemp_content_metadata(self, cur=None): pass def content_metadata_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute("select * from swh_content_metadata_add()") return cur.fetchone()[0] def content_metadata_get_from_list(self, ids, cur=None): yield from self._get_from_list( "content_metadata", ids, self.content_metadata_cols, cur=cur ) revision_intrinsic_metadata_hash_keys = ["id", "indexer_configuration_id"] def revision_intrinsic_metadata_missing_from_list(self, metadata, cur=None): """List missing metadata. """ yield from self._missing_from_list( "revision_intrinsic_metadata", metadata, self.revision_intrinsic_metadata_hash_keys, cur=cur, ) revision_intrinsic_metadata_cols = [ "id", "metadata", "mappings", "tool_id", "tool_name", "tool_version", "tool_configuration", ] @stored_procedure("swh_mktemp_revision_intrinsic_metadata") def mktemp_revision_intrinsic_metadata(self, cur=None): pass def revision_intrinsic_metadata_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute("select * from swh_revision_intrinsic_metadata_add()") return cur.fetchone()[0] def revision_intrinsic_metadata_get_from_list(self, ids, cur=None): yield from self._get_from_list( "revision_intrinsic_metadata", ids, self.revision_intrinsic_metadata_cols, cur=cur, ) origin_intrinsic_metadata_cols = [ "id", "metadata", "from_revision", "mappings", "tool_id", "tool_name", "tool_version", "tool_configuration", ] origin_intrinsic_metadata_regconfig = "pg_catalog.simple" """The dictionary used to normalize 'metadata' and queries. 'pg_catalog.simple' provides no stopword, so it should be suitable for proper names and non-English content. 
When updating this value, make sure to add a new index on origin_intrinsic_metadata.metadata.""" @stored_procedure("swh_mktemp_origin_intrinsic_metadata") def mktemp_origin_intrinsic_metadata(self, cur=None): pass def origin_intrinsic_metadata_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute("select * from swh_origin_intrinsic_metadata_add()") return cur.fetchone()[0] def origin_intrinsic_metadata_get_from_list(self, ids, cur=None): yield from self._get_from_list( "origin_intrinsic_metadata", ids, self.origin_intrinsic_metadata_cols, cur=cur, id_col="id", ) def origin_intrinsic_metadata_search_fulltext(self, terms, *, limit, cur): regconfig = self.origin_intrinsic_metadata_regconfig tsquery_template = " && ".join( "plainto_tsquery('%s', %%s)" % regconfig for _ in terms ) tsquery_args = [(term,) for term in terms] keys = ( self._convert_key(col, "oim") for col in self.origin_intrinsic_metadata_cols ) query = ( "SELECT {keys} FROM origin_intrinsic_metadata AS oim " "INNER JOIN indexer_configuration AS i " "ON oim.indexer_configuration_id=i.id " "JOIN LATERAL (SELECT {tsquery_template}) AS s(tsq) ON true " "WHERE oim.metadata_tsvector @@ tsq " "ORDER BY ts_rank(oim.metadata_tsvector, tsq, 1) DESC " "LIMIT %s;" ).format(keys=", ".join(keys), tsquery_template=tsquery_template) cur.execute(query, tsquery_args + [limit]) yield from cur def origin_intrinsic_metadata_search_by_producer( self, last, limit, ids_only, mappings, tool_ids, cur ): if ids_only: keys = "oim.id" else: keys = ", ".join( ( self._convert_key(col, "oim") for col in self.origin_intrinsic_metadata_cols ) ) query_parts = [ "SELECT %s" % keys, "FROM origin_intrinsic_metadata AS oim", "INNER JOIN indexer_configuration AS i", "ON oim.indexer_configuration_id=i.id", ] args = [] where = [] if last: where.append("oim.id > %s") args.append(last) if mappings is not None: where.append("oim.mappings && %s") args.append(mappings) if tool_ids is not None: where.append("oim.indexer_configuration_id = ANY(%s)") args.append(tool_ids) if where: query_parts.append("WHERE") query_parts.append(" AND ".join(where)) if limit: query_parts.append("LIMIT %s") args.append(limit) cur.execute(" ".join(query_parts), args) yield from cur indexer_configuration_cols = [ "id", "tool_name", "tool_version", "tool_configuration", ] @stored_procedure("swh_mktemp_indexer_configuration") def mktemp_indexer_configuration(self, cur=None): pass def indexer_configuration_add_from_temp(self, cur=None): cur = self._cursor(cur) cur.execute( "SELECT %s from swh_indexer_configuration_add()" % (",".join(self.indexer_configuration_cols),) ) yield from cur def indexer_configuration_get( self, tool_name, tool_version, tool_configuration, cur=None ): cur = self._cursor(cur) cur.execute( """select %s from indexer_configuration where tool_name=%%s and tool_version=%%s and tool_configuration=%%s""" % (",".join(self.indexer_configuration_cols)), (tool_name, tool_version, tool_configuration), ) return cur.fetchone() + + def indexer_configuration_get_from_id(self, id_, cur=None): + cur = self._cursor(cur) + cur.execute( + """select %s + from indexer_configuration + where id=%%s""" + % (",".join(self.indexer_configuration_cols)), + (id_,), + ) + + return cur.fetchone() diff --git a/swh/indexer/storage/in_memory.py b/swh/indexer/storage/in_memory.py index 792535d..0071ebe 100644 --- a/swh/indexer/storage/in_memory.py +++ b/swh/indexer/storage/in_memory.py @@ -1,486 +1,501 @@ # Copyright (C) 2018-2020 The Software Heritage developers # See the AUTHORS file at the top-level 
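``indexer_configuration_get_from_id`` returns a raw column tuple; ``_tool_get_from_id`` earlier in this diff (and ``_transform_tool`` in the in-memory backend just below) then rename the ``tool_*`` columns into the tool dict exposed to clients. The renaming, runnable in isolation (the sample row reuses the ``file`` tool from the test configuration):

.. code-block:: python

    indexer_configuration_cols = ["id", "tool_name", "tool_version",
                                  "tool_configuration"]

    def transform_tool_row(row):
        # same renaming as _tool_get_from_id and _transform_tool
        tool = dict(zip(indexer_configuration_cols, row))
        return {
            "id": tool["id"],
            "name": tool["tool_name"],
            "version": tool["tool_version"],
            "configuration": tool["tool_configuration"],
        }

    print(transform_tool_row((1, "file", "1:5.30-1+deb9u1",
                              {"type": "library",
                               "debian-package": "python3-magic"})))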
directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from collections import Counter, defaultdict import itertools import json import math import operator import re from typing import ( Any, Dict, Generic, Iterable, List, Optional, Set, Tuple, Type, TypeVar, Union, ) from swh.core.collections import SortedList from swh.model.hashutil import hash_to_bytes, hash_to_hex from swh.model.model import SHA1_SIZE, Sha1Git from swh.storage.utils import get_partition_bounds_bytes from . import MAPPING_NAMES, check_id_duplicates from .exc import IndexerStorageArgumentException from .interface import PagedResult, Sha1 from .model import ( BaseRow, ContentCtagsRow, ContentLanguageRow, ContentLicenseRow, ContentMetadataRow, ContentMimetypeRow, OriginIntrinsicMetadataRow, RevisionIntrinsicMetadataRow, ) +from .writer import JournalWriter SHA1_DIGEST_SIZE = 160 +ToolId = int + def _transform_tool(tool): return { "id": tool["id"], "name": tool["tool_name"], "version": tool["tool_version"], "configuration": tool["tool_configuration"], } def check_id_types(data: List[Dict[str, Any]]): """Checks all elements of the list have an 'id' whose type is 'bytes'.""" if not all(isinstance(item.get("id"), bytes) for item in data): raise IndexerStorageArgumentException("identifiers must be bytes.") def _key_from_dict(d): return tuple(sorted(d.items())) -ToolId = int TValue = TypeVar("TValue", bound=BaseRow) class SubStorage(Generic[TValue]): """Implements common missing/get/add logic for each indexer type.""" _data: Dict[Sha1, Dict[Tuple, Dict[str, Any]]] _tools_per_id: Dict[Sha1, Set[ToolId]] - def __init__(self, row_class: Type[TValue], tools): + def __init__(self, row_class: Type[TValue], tools, journal_writer): self.row_class = row_class self._tools = tools self._sorted_ids = SortedList[bytes, Sha1]() self._data = defaultdict(dict) + self._journal_writer = journal_writer self._tools_per_id = defaultdict(set) def _key_from_dict(self, d) -> Tuple: """Like the global _key_from_dict, but filters out dict keys that don't belong in the unique key.""" return _key_from_dict({k: d[k] for k in self.row_class.UNIQUE_KEY_FIELDS}) def missing(self, keys: Iterable[Dict]) -> List[Sha1]: """List data missing from storage. Args: data (iterable): dictionaries with keys: - **id** (bytes): sha1 identifier - **indexer_configuration_id** (int): tool used to compute the results Yields: missing sha1s """ results = [] for key in keys: tool_id = key["indexer_configuration_id"] id_ = key["id"] if tool_id not in self._tools_per_id.get(id_, set()): results.append(id_) return results def get(self, ids: Iterable[Sha1]) -> List[TValue]: """Retrieve data per id. Args: ids (iterable): sha1 checksums Yields: dict: dictionaries with the following keys: - **id** (bytes) - **tool** (dict): tool used to compute metadata - arbitrary data (as provided to `add`) """ results = [] for id_ in ids: for entry in self._data[id_].values(): entry = entry.copy() tool_id = entry.pop("indexer_configuration_id") results.append( self.row_class( id=id_, tool=_transform_tool(self._tools[tool_id]), **entry, ) ) return results def get_all(self) -> List[TValue]: return self.get(self._sorted_ids) def get_partition( self, indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000, ) -> PagedResult[Sha1]: """Retrieve ids of content with `indexer_type` within partition partition_id bound by limit. 
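``SubStorage.missing`` above restates in Python the ``NOT EXISTS`` anti-join that ``_missing_from_list`` builds in SQL: a key is missing unless that tool has already produced a result for that id. Trimmed to its core as a standalone function:

.. code-block:: python

    def missing(tools_per_id, keys):
        # ids for which the given tool has not produced a result yet
        return [
            k["id"]
            for k in keys
            if k["indexer_configuration_id"] not in tools_per_id.get(k["id"], set())
        ]

    tools_per_id = {b"\x01": {7}}
    print(missing(tools_per_id, [
        {"id": b"\x01", "indexer_configuration_id": 7},
        {"id": b"\x02", "indexer_configuration_id": 7},
    ]))  # [b'\x02']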
Args: **indexer_type**: Type of data content to index (mimetype, language, etc...) **indexer_configuration_id**: The tool used to index data **partition_id**: index of the partition to fetch **nb_partitions**: total number of partitions to split into **page_token**: opaque token used for pagination **limit**: Limit result (default to 1000) **with_textual_data** (bool): Deal with only textual content (True) or all content (all contents by defaults, False) Raises: IndexerStorageArgumentException for; - limit to None - wrong indexer_type provided Returns: PagedResult of Sha1. If next_page_token is None, there is no more data to fetch """ if limit is None: raise IndexerStorageArgumentException("limit should not be None") (start, end) = get_partition_bounds_bytes( partition_id, nb_partitions, SHA1_SIZE ) if page_token: start = hash_to_bytes(page_token) if end is None: end = b"\xff" * SHA1_SIZE next_page_token: Optional[str] = None ids: List[Sha1] = [] sha1s = (sha1 for sha1 in self._sorted_ids.iter_from(start)) for counter, sha1 in enumerate(sha1s): if sha1 > end: break if counter >= limit: next_page_token = hash_to_hex(sha1) break ids.append(sha1) assert len(ids) <= limit return PagedResult(results=ids, next_page_token=next_page_token) def add(self, data: Iterable[TValue]) -> int: """Add data not present in storage. Args: data (iterable): dictionaries with keys: - **id**: sha1 - **indexer_configuration_id**: tool used to compute the results - arbitrary data """ data = list(data) check_id_duplicates(data) + object_type = self.row_class.object_type # type: ignore + self._journal_writer.write_additions(object_type, data) count = 0 for obj in data: item = obj.to_dict() id_ = item.pop("id") tool_id = item["indexer_configuration_id"] key = _key_from_dict(obj.unique_key()) self._data[id_][key] = item self._tools_per_id[id_].add(tool_id) count += 1 if id_ not in self._sorted_ids: self._sorted_ids.add(id_) return count class IndexerStorage: """In-memory SWH indexer storage.""" - def __init__(self): + def __init__(self, journal_writer=None): self._tools = {} - self._mimetypes = SubStorage(ContentMimetypeRow, self._tools) - self._languages = SubStorage(ContentLanguageRow, self._tools) - self._content_ctags = SubStorage(ContentCtagsRow, self._tools) - self._licenses = SubStorage(ContentLicenseRow, self._tools) - self._content_metadata = SubStorage(ContentMetadataRow, self._tools) + + def tool_getter(id_): + tool = self._tools[id_] + return { + "id": tool["id"], + "name": tool["tool_name"], + "version": tool["tool_version"], + "configuration": tool["tool_configuration"], + } + + self.journal_writer = JournalWriter(tool_getter, journal_writer) + args = (self._tools, self.journal_writer) + self._mimetypes = SubStorage(ContentMimetypeRow, *args) + self._languages = SubStorage(ContentLanguageRow, *args) + self._content_ctags = SubStorage(ContentCtagsRow, *args) + self._licenses = SubStorage(ContentLicenseRow, *args) + self._content_metadata = SubStorage(ContentMetadataRow, *args) self._revision_intrinsic_metadata = SubStorage( - RevisionIntrinsicMetadataRow, self._tools - ) - self._origin_intrinsic_metadata = SubStorage( - OriginIntrinsicMetadataRow, self._tools + RevisionIntrinsicMetadataRow, *args ) + self._origin_intrinsic_metadata = SubStorage(OriginIntrinsicMetadataRow, *args) def check_config(self, *, check_write): return True def content_mimetype_missing( self, mimetypes: Iterable[Dict] ) -> List[Tuple[Sha1, int]]: return self._mimetypes.missing(mimetypes) def content_mimetype_get_partition( self, 
indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000, ) -> PagedResult[Sha1]: return self._mimetypes.get_partition( indexer_configuration_id, partition_id, nb_partitions, page_token, limit ) def content_mimetype_add( self, mimetypes: List[ContentMimetypeRow] ) -> Dict[str, int]: added = self._mimetypes.add(mimetypes) return {"content_mimetype:add": added} def content_mimetype_get(self, ids: Iterable[Sha1]) -> List[ContentMimetypeRow]: return self._mimetypes.get(ids) def content_language_missing( self, languages: Iterable[Dict] ) -> List[Tuple[Sha1, int]]: return self._languages.missing(languages) def content_language_get(self, ids: Iterable[Sha1]) -> List[ContentLanguageRow]: return self._languages.get(ids) def content_language_add( self, languages: List[ContentLanguageRow] ) -> Dict[str, int]: added = self._languages.add(languages) return {"content_language:add": added} def content_ctags_missing(self, ctags: Iterable[Dict]) -> List[Tuple[Sha1, int]]: return self._content_ctags.missing(ctags) def content_ctags_get(self, ids: Iterable[Sha1]) -> List[ContentCtagsRow]: return self._content_ctags.get(ids) def content_ctags_add(self, ctags: List[ContentCtagsRow]) -> Dict[str, int]: added = self._content_ctags.add(ctags) return {"content_ctags:add": added} def content_ctags_search( self, expression: str, limit: int = 10, last_sha1: Optional[Sha1] = None ) -> List[ContentCtagsRow]: nb_matches = 0 items_per_id: Dict[Tuple[Sha1Git, ToolId], List[ContentCtagsRow]] = {} for item in sorted(self._content_ctags.get_all()): if item.id <= (last_sha1 or bytes(0 for _ in range(SHA1_DIGEST_SIZE))): continue items_per_id.setdefault( (item.id, item.indexer_configuration_id), [] ).append(item) results = [] for items in items_per_id.values(): for item in items: if item.name != expression: continue nb_matches += 1 if nb_matches > limit: break results.append(item) return results def content_fossology_license_get( self, ids: Iterable[Sha1] ) -> List[ContentLicenseRow]: return self._licenses.get(ids) def content_fossology_license_add( self, licenses: List[ContentLicenseRow] ) -> Dict[str, int]: added = self._licenses.add(licenses) return {"content_fossology_license:add": added} def content_fossology_license_get_partition( self, indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000, ) -> PagedResult[Sha1]: return self._licenses.get_partition( indexer_configuration_id, partition_id, nb_partitions, page_token, limit ) def content_metadata_missing( self, metadata: Iterable[Dict] ) -> List[Tuple[Sha1, int]]: return self._content_metadata.missing(metadata) def content_metadata_get(self, ids: Iterable[Sha1]) -> List[ContentMetadataRow]: return self._content_metadata.get(ids) def content_metadata_add( self, metadata: List[ContentMetadataRow] ) -> Dict[str, int]: added = self._content_metadata.add(metadata) return {"content_metadata:add": added} def revision_intrinsic_metadata_missing( self, metadata: Iterable[Dict] ) -> List[Tuple[Sha1, int]]: return self._revision_intrinsic_metadata.missing(metadata) def revision_intrinsic_metadata_get( self, ids: Iterable[Sha1] ) -> List[RevisionIntrinsicMetadataRow]: return self._revision_intrinsic_metadata.get(ids) def revision_intrinsic_metadata_add( self, metadata: List[RevisionIntrinsicMetadataRow] ) -> Dict[str, int]: added = self._revision_intrinsic_metadata.add(metadata) return {"revision_intrinsic_metadata:add": added} def 
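``content_ctags_search`` (in both backends above) resumes strictly after a given sha1, which lets a caller page through all matches. A hypothetical client-side driver built on that cursor; ``storage`` is any indexer storage instance, and the helper itself is not part of the API:

.. code-block:: python

    def iter_ctags(storage, expression, page_size=10):
        last_sha1 = None
        while True:
            page = list(storage.content_ctags_search(
                expression, limit=page_size, last_sha1=last_sha1))
            yield from page
            if len(page) < page_size:   # short page: nothing left to fetch
                return
            last_sha1 = page[-1].id     # resume strictly after this sha1

    # e.g.: for ctag in iter_ctags(storage, "hello"): ...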
origin_intrinsic_metadata_get( self, urls: Iterable[str] ) -> List[OriginIntrinsicMetadataRow]: return self._origin_intrinsic_metadata.get(urls) def origin_intrinsic_metadata_add( self, metadata: List[OriginIntrinsicMetadataRow] ) -> Dict[str, int]: added = self._origin_intrinsic_metadata.add(metadata) return {"origin_intrinsic_metadata:add": added} def origin_intrinsic_metadata_search_fulltext( self, conjunction: List[str], limit: int = 100 ) -> List[OriginIntrinsicMetadataRow]: # A very crude fulltext search implementation, but that's enough # to work on English metadata tokens_re = re.compile("[a-zA-Z0-9]+") search_tokens = list(itertools.chain(*map(tokens_re.findall, conjunction))) def rank(data): # Tokenize the metadata text = json.dumps(data.metadata) text_tokens = tokens_re.findall(text) text_token_occurences = Counter(text_tokens) # Count the number of occurrences of search tokens in the text score = 0 for search_token in search_tokens: if text_token_occurences[search_token] == 0: # Search token is not in the text. return 0 score += text_token_occurences[search_token] # Normalize according to the text's length return score / math.log(len(text_tokens)) results = [ (rank(data), data) for data in self._origin_intrinsic_metadata.get_all() ] results = [(rank_, data) for (rank_, data) in results if rank_ > 0] results.sort( key=operator.itemgetter(0), reverse=True # Don't try to order 'data' ) return [result for (rank_, result) in results[:limit]] def origin_intrinsic_metadata_search_by_producer( self, page_token: str = "", limit: int = 100, ids_only: bool = False, mappings: Optional[List[str]] = None, tool_ids: Optional[List[int]] = None, ) -> PagedResult[Union[str, OriginIntrinsicMetadataRow]]: assert isinstance(page_token, str) nb_results = 0 if mappings is not None: mapping_set = frozenset(mappings) if tool_ids is not None: tool_id_set = frozenset(tool_ids) rows = [] # we go to limit+1 to check whether we should add next_page_token in # the response for entry in self._origin_intrinsic_metadata.get_all(): if entry.id <= page_token: continue if nb_results >= (limit + 1): break if mappings and mapping_set.isdisjoint(entry.mappings): continue if tool_ids and entry.tool["id"] not in tool_id_set: continue rows.append(entry) nb_results += 1 if len(rows) > limit: rows = rows[:limit] next_page_token = rows[-1].id else: next_page_token = None if ids_only: rows = [row.id for row in rows] return PagedResult(results=rows, next_page_token=next_page_token,) def origin_intrinsic_metadata_stats(self): mapping_count = {m: 0 for m in MAPPING_NAMES} total = non_empty = 0 for data in self._origin_intrinsic_metadata.get_all(): total += 1 if set(data.metadata) - {"@context"}: non_empty += 1 for mapping in data.mappings: mapping_count[mapping] += 1 return {"per_mapping": mapping_count, "total": total, "non_empty": non_empty} def indexer_configuration_add(self, tools): inserted = [] for tool in tools: tool = tool.copy() id_ = self._tool_key(tool) tool["id"] = id_ self._tools[id_] = tool inserted.append(tool) return inserted def indexer_configuration_get(self, tool): return self._tools.get(self._tool_key(tool)) def _tool_key(self, tool): return hash( ( tool["tool_name"], tool["tool_version"], json.dumps(tool["tool_configuration"], sort_keys=True), ) ) diff --git a/swh/indexer/storage/model.py b/swh/indexer/storage/model.py index cc9c954..d14d107 100644 --- a/swh/indexer/storage/model.py +++ b/swh/indexer/storage/model.py @@ -1,122 +1,135 @@ # Copyright (C) 2020 The Software Heritage developers # See the 
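The in-memory ``origin_intrinsic_metadata_search_fulltext`` above ranks with a deliberately crude score: zero if any search token is missing, otherwise the summed occurrence counts normalized by the log of the document length. A worked example of that computation:

.. code-block:: python

    import json
    import math
    import re
    from collections import Counter

    tokens_re = re.compile("[a-zA-Z0-9]+")
    search_tokens = ["test", "metadata"]

    metadata = {"name": "test_metadata", "description": "metadata test"}
    text_tokens = tokens_re.findall(json.dumps(metadata))
    occurrences = Counter(text_tokens)

    if all(occurrences[t] for t in search_tokens):
        score = sum(occurrences[t] for t in search_tokens)  # 2 + 2 = 4
    else:
        score = 0  # a single missing token disqualifies the document

    print(score / math.log(len(text_tokens)))  # 4 / log(6) ~= 2.23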
AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information """Classes used internally by the in-memory idx-storage, and will be used for the interface of the idx-storage in the near future.""" from __future__ import annotations from typing import Any, Dict, List, Optional, Tuple, Type, TypeVar import attr +from typing_extensions import Final from swh.model.model import Sha1Git, dictify TSelf = TypeVar("TSelf") @attr.s class BaseRow: UNIQUE_KEY_FIELDS: Tuple = ("id", "indexer_configuration_id") id = attr.ib(type=Any) indexer_configuration_id = attr.ib(type=Optional[int], default=None, kw_only=True) tool = attr.ib(type=Optional[Dict], default=None, kw_only=True) def __attrs_post_init__(self): if self.indexer_configuration_id is None and self.tool is None: raise TypeError("Either indexer_configuration_id or tool must be not None.") if self.indexer_configuration_id is not None and self.tool is not None: raise TypeError( "indexer_configuration_id and tool are mutually exclusive; " "only one may be not None." ) def anonymize(self: TSelf) -> Optional[TSelf]: # Needed to implement swh.journal.writer.ValueProtocol return None def to_dict(self) -> Dict[str, Any]: """Wrapper of `attr.asdict` that can be overridden by subclasses that have special handling of some of the fields.""" d = dictify(attr.asdict(self, recurse=False)) if d["indexer_configuration_id"] is None: del d["indexer_configuration_id"] if d["tool"] is None: del d["tool"] return d @classmethod def from_dict(cls: Type[TSelf], d) -> TSelf: return cls(**d) # type: ignore def unique_key(self) -> Dict: if self.indexer_configuration_id is None: raise ValueError( "Can only call unique_key() on objects without " "indexer_configuration_id." 
) return {key: getattr(self, key) for key in self.UNIQUE_KEY_FIELDS} @attr.s class ContentMimetypeRow(BaseRow): + object_type: Final = "content_mimetype" + id = attr.ib(type=Sha1Git) mimetype = attr.ib(type=str) encoding = attr.ib(type=str) @attr.s class ContentLanguageRow(BaseRow): + object_type: Final = "content_language" + id = attr.ib(type=Sha1Git) lang = attr.ib(type=str) @attr.s class ContentCtagsRow(BaseRow): + object_type: Final = "content_ctags" UNIQUE_KEY_FIELDS = ( "id", "indexer_configuration_id", "name", "kind", "line", "lang", ) id = attr.ib(type=Sha1Git) name = attr.ib(type=str) kind = attr.ib(type=str) line = attr.ib(type=int) lang = attr.ib(type=str) @attr.s class ContentLicenseRow(BaseRow): + object_type: Final = "content_fossology_license" UNIQUE_KEY_FIELDS = ("id", "indexer_configuration_id", "license") id = attr.ib(type=Sha1Git) license = attr.ib(type=str) @attr.s class ContentMetadataRow(BaseRow): + object_type: Final = "content_metadata" + id = attr.ib(type=Sha1Git) metadata = attr.ib(type=Dict[str, Any]) @attr.s class RevisionIntrinsicMetadataRow(BaseRow): + object_type: Final = "revision_intrinsic_metadata" + id = attr.ib(type=Sha1Git) metadata = attr.ib(type=Dict[str, Any]) mappings = attr.ib(type=List[str]) @attr.s class OriginIntrinsicMetadataRow(BaseRow): + object_type: Final = "origin_intrinsic_metadata" + id = attr.ib(type=str) metadata = attr.ib(type=Dict[str, Any]) from_revision = attr.ib(type=Sha1Git) mappings = attr.ib(type=List[str]) diff --git a/swh/indexer/storage/writer.py b/swh/indexer/storage/writer.py new file mode 100644 index 0000000..adae76d --- /dev/null +++ b/swh/indexer/storage/writer.py @@ -0,0 +1,64 @@ +# Copyright (C) 2020 The Software Heritage developers +# See the AUTHORS file at the top-level directory of this distribution +# License: GNU General Public License version 3, or any later version +# See top-level LICENSE file for more information + +from typing import Any, Callable, Dict, Iterable + +import attr + +try: + from swh.journal.writer import get_journal_writer +except ImportError: + get_journal_writer = None # type: ignore + # mypy limitation, see https://github.com/python/mypy/issues/1153 + +from .model import BaseRow + + +class JournalWriter: + """Journal writer storage collaborator. It's in charge of adding objects to + the journal. + + """ + + def __init__(self, tool_getter: Callable[[int], Dict[str, Any]], journal_writer): + """ + Args: + tool_getter: a callable that takes a tool_id and return a dict representing + a tool object + journal_writer: configuration passed to + `swh.journal.writer.get_journal_writer` + """ + self._tool_getter = tool_getter + if journal_writer: + if get_journal_writer is None: + raise EnvironmentError( + "You need the swh.journal package to use the " + "journal_writer feature" + ) + self.journal = get_journal_writer(**journal_writer) + else: + self.journal = None + + def write_additions(self, obj_type, entries: Iterable[BaseRow]) -> None: + if not self.journal: + return + + # usually, all the additions in a batch are from the same indexer, + # so this cache allows doing a single query for all the entries. 
+ tool_cache = {} + + for entry in entries: + assert entry.object_type == obj_type # type: ignore + # get the tool used to generate this addition + tool_id = entry.indexer_configuration_id + assert tool_id + if tool_id not in tool_cache: + tool_cache[tool_id] = self._tool_getter(tool_id) + entry = attr.evolve( + entry, tool=tool_cache[tool_id], indexer_configuration_id=None + ) + + # write to kafka + self.journal.write_addition(obj_type, entry) diff --git a/swh/indexer/tests/conftest.py b/swh/indexer/tests/conftest.py index fc8acea..e16588b 100644 --- a/swh/indexer/tests/conftest.py +++ b/swh/indexer/tests/conftest.py @@ -1,105 +1,105 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from datetime import timedelta import os from typing import List, Tuple from unittest.mock import patch import pytest import yaml from swh.indexer.storage import get_indexer_storage from swh.objstorage.factory import get_objstorage from swh.storage import get_storage from .utils import fill_obj_storage, fill_storage TASK_NAMES: List[Tuple[str, str]] = [ # (scheduler-task-type, task-class-test-name) ("index-revision-metadata", "revision_intrinsic_metadata"), ("index-origin-metadata", "origin_intrinsic_metadata"), ] @pytest.fixture def indexer_scheduler(swh_scheduler): # Insert the expected task types within the scheduler for task_name, task_class_name in TASK_NAMES: swh_scheduler.create_task_type( { "type": task_name, "description": f"The {task_class_name} indexer testing task", "backend_name": f"swh.indexer.tests.tasks.{task_class_name}", "default_interval": timedelta(days=1), "min_interval": timedelta(hours=6), "max_interval": timedelta(days=12), "num_retries": 3, } ) return swh_scheduler @pytest.fixture def idx_storage(): """An instance of in-memory indexer storage that gets injected into all indexers classes. """ idx_storage = get_indexer_storage("memory") with patch("swh.indexer.storage.in_memory.IndexerStorage") as idx_storage_mock: idx_storage_mock.return_value = idx_storage yield idx_storage @pytest.fixture def storage(): """An instance of in-memory storage that gets injected into all indexers classes. """ storage = get_storage(cls="memory") fill_storage(storage) with patch("swh.storage.in_memory.InMemoryStorage") as storage_mock: storage_mock.return_value = storage yield storage @pytest.fixture def obj_storage(): """An instance of in-memory objstorage that gets injected into all indexers classes. 
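Putting the new ``JournalWriter`` above to work end to end, assuming ``swh.journal`` is installed; the in-memory journal backend keeps ``(object_type, object)`` pairs on an ``objects`` list, as the tests later in this diff rely on. Note how the journalled copy carries the expanded tool dict instead of the configuration id:

.. code-block:: python

    from swh.indexer.storage.model import ContentMimetypeRow
    from swh.indexer.storage.writer import JournalWriter

    TOOLS = {1: {"id": 1, "name": "file", "version": "1:5.30-1+deb9u1",
                 "configuration": {}}}
    writer = JournalWriter(TOOLS.__getitem__, {"cls": "memory"})

    row = ContentMimetypeRow(id=b"\x00" * 20, mimetype="text/plain",
                             encoding="utf-8", indexer_configuration_id=1)
    writer.write_additions("content_mimetype", [row])

    for obj_type, obj in writer.journal.objects:
        print(obj_type, obj.tool["name"], obj.indexer_configuration_id)
        # content_mimetype file None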
""" - objstorage = get_objstorage("memory", {}) + objstorage = get_objstorage("memory") fill_obj_storage(objstorage) with patch.dict( "swh.objstorage.factory._STORAGE_CLASSES", {"memory": lambda: objstorage} ): yield objstorage @pytest.fixture def swh_indexer_config(): return { "storage": {"cls": "memory"}, - "objstorage": {"cls": "memory", "args": {},}, - "indexer_storage": {"cls": "memory", "args": {},}, + "objstorage": {"cls": "memory"}, + "indexer_storage": {"cls": "memory"}, "tools": { "name": "file", "version": "1:5.30-1+deb9u1", "configuration": {"type": "library", "debian-package": "python3-magic"}, }, "compute_checksums": ["blake2b512"], # for rehash indexer } @pytest.fixture def swh_config(swh_indexer_config, monkeypatch, tmp_path): conffile = os.path.join(str(tmp_path), "indexer.yml") with open(conffile, "w") as f: f.write(yaml.dump(swh_indexer_config)) monkeypatch.setenv("SWH_CONFIG_FILENAME", conffile) return conffile diff --git a/swh/indexer/tests/storage/conftest.py b/swh/indexer/tests/storage/conftest.py index 80a013f..133404b 100644 --- a/swh/indexer/tests/storage/conftest.py +++ b/swh/indexer/tests/storage/conftest.py @@ -1,76 +1,80 @@ # Copyright (C) 2015-2019 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from os.path import join import pytest from swh.indexer.storage import get_indexer_storage from swh.indexer.storage.model import ContentLicenseRow, ContentMimetypeRow from swh.model.hashutil import hash_to_bytes from swh.storage.pytest_plugin import postgresql_fact from . import SQL_DIR from .generate_data_test import FOSSOLOGY_LICENSES, MIMETYPE_OBJECTS, TOOLS DUMP_FILES = join(SQL_DIR, "*.sql") class DataObj(dict): def __getattr__(self, key): return self.__getitem__(key) def __setattr__(self, key, value): return self.__setitem__(key, value) @pytest.fixture def swh_indexer_storage_with_data(swh_indexer_storage): data = DataObj() tools = { tool["tool_name"]: { "id": tool["id"], "name": tool["tool_name"], "version": tool["tool_version"], "configuration": tool["tool_configuration"], } for tool in swh_indexer_storage.indexer_configuration_add(TOOLS) } data.tools = tools data.sha1_1 = hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4689") data.sha1_2 = hash_to_bytes("61c2b3a30496d329e21af70dd2d7e097046d07b7") data.revision_id_1 = hash_to_bytes("7026b7c1a2af56521e951c01ed20f255fa054238") data.revision_id_2 = hash_to_bytes("7026b7c1a2af56521e9587659012345678904321") data.revision_id_3 = hash_to_bytes("7026b7c1a2af56521e9587659012345678904320") data.origin_url_1 = "file:///dev/0/zero" # 44434341 data.origin_url_2 = "file:///dev/1/one" # 44434342 data.origin_url_3 = "file:///dev/2/two" # 54974445 data.mimetypes = [ ContentMimetypeRow(indexer_configuration_id=tools["file"]["id"], **mimetype_obj) for mimetype_obj in MIMETYPE_OBJECTS ] swh_indexer_storage.content_mimetype_add(data.mimetypes) data.fossology_licenses = [ ContentLicenseRow( id=fossology_obj["id"], indexer_configuration_id=tools["nomos"]["id"], license=license, ) for fossology_obj in FOSSOLOGY_LICENSES for license in fossology_obj["licenses"] ] swh_indexer_storage._test_data = data return (swh_indexer_storage, data) swh_indexer_storage_postgresql = postgresql_fact( "postgresql_proc", dump_files=DUMP_FILES ) @pytest.fixture def swh_indexer_storage(swh_indexer_storage_postgresql): - return get_indexer_storage("local", 
db=swh_indexer_storage_postgresql.dsn) + return get_indexer_storage( + "local", + db=swh_indexer_storage_postgresql.dsn, + journal_writer={"cls": "memory",}, + ) diff --git a/swh/indexer/tests/storage/test_api_client.py b/swh/indexer/tests/storage/test_api_client.py index 2769309..993596d 100644 --- a/swh/indexer/tests/storage/test_api_client.py +++ b/swh/indexer/tests/storage/test_api_client.py @@ -1,35 +1,54 @@ # Copyright (C) 2015-2019 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import pytest from swh.indexer.storage import get_indexer_storage from swh.indexer.storage.api.client import RemoteStorage import swh.indexer.storage.api.server as server from .test_storage import * # noqa @pytest.fixture -def app(swh_indexer_storage_postgresql): - server.storage = get_indexer_storage("local", db=swh_indexer_storage_postgresql.dsn) - return server.app +def app_server(swh_indexer_storage_postgresql): + server.storage = get_indexer_storage( + "local", + db=swh_indexer_storage_postgresql.dsn, + journal_writer={"cls": "memory",}, + ) + yield server + + +@pytest.fixture +def app(app_server): + return app_server.app @pytest.fixture def swh_rpc_client_class(): # these are needed for the swh_indexer_storage_with_data fixture assert hasattr(RemoteStorage, "indexer_configuration_add") assert hasattr(RemoteStorage, "content_mimetype_add") return RemoteStorage @pytest.fixture -def swh_indexer_storage(swh_rpc_client, app): +def swh_indexer_storage(swh_rpc_client, app_server): # This version of the swh_storage fixture uses the swh_rpc_client fixture # to instantiate a RemoteStorage (see swh_rpc_client_class above) that # proxies, via the swh.core RPC mechanism, the local (in memory) storage # configured in the app fixture above. - return swh_rpc_client + # + # Also note that, for the sake of + # making it easier to write tests, the in-memory journal writer of the + # in-memory backend storage is attached to the RemoteStorage as its + # journal_writer attribute. 
+ storage = swh_rpc_client + + journal_writer = getattr(storage, "journal_writer", None) + storage.journal_writer = app_server.storage.journal_writer + yield storage + storage.journal_writer = journal_writer diff --git a/swh/indexer/tests/storage/test_in_memory.py b/swh/indexer/tests/storage/test_in_memory.py index 6457891..854d4fd 100644 --- a/swh/indexer/tests/storage/test_in_memory.py +++ b/swh/indexer/tests/storage/test_in_memory.py @@ -1,15 +1,15 @@ # Copyright (C) 2015-2019 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import pytest from swh.indexer.storage import get_indexer_storage from .test_storage import * # noqa @pytest.fixture def swh_indexer_storage(): - return get_indexer_storage("memory") + return get_indexer_storage("memory", journal_writer={"cls": "memory",}) diff --git a/swh/indexer/tests/storage/test_storage.py b/swh/indexer/tests/storage/test_storage.py index cd55a2b..c17c3e8 100644 --- a/swh/indexer/tests/storage/test_storage.py +++ b/swh/indexer/tests/storage/test_storage.py @@ -1,1705 +1,1722 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import math import threading from typing import Any, Dict, List, Tuple, Type import attr import pytest from swh.indexer.storage.exc import DuplicateId, IndexerStorageArgumentException from swh.indexer.storage.interface import IndexerStorageInterface, PagedResult from swh.indexer.storage.model import ( BaseRow, ContentCtagsRow, ContentLanguageRow, ContentLicenseRow, ContentMetadataRow, ContentMimetypeRow, OriginIntrinsicMetadataRow, RevisionIntrinsicMetadataRow, ) from swh.model.hashutil import hash_to_bytes def prepare_mimetypes_from_licenses( fossology_licenses: List[ContentLicenseRow], ) -> List[ContentMimetypeRow]: """Fossology license needs some consistent data in db to run. """ mimetypes = [] for c in fossology_licenses: mimetypes.append( ContentMimetypeRow( id=c.id, mimetype="text/plain", # for filtering on textual data to work encoding="utf-8", indexer_configuration_id=c.indexer_configuration_id, ) ) return mimetypes def endpoint_name(etype: str, ename: str) -> str: """Compute the storage's endpoint's name >>> endpoint_name('content_mimetype', 'add') 'content_mimetype_add' >>> endpoint_name('content_fosso_license', 'delete') 'content_fosso_license_delete' """ return f"{etype}_{ename}" def endpoint(storage, etype: str, ename: str): return getattr(storage, endpoint_name(etype, ename)) def expected_summary(count: int, etype: str, ename: str = "add") -> Dict[str, int]: """Compute the expected summary The key is determine according to etype and ename >>> expected_summary(10, 'content_mimetype', 'add') {'content_mimetype:add': 10} >>> expected_summary(9, 'origin_intrinsic_metadata', 'delete') {'origin_intrinsic_metadata:del': 9} """ pattern = ename[0:3] key = endpoint_name(etype, ename).replace(f"_{ename}", f":{pattern}") return {key: count} def test_check_config(swh_indexer_storage) -> None: assert swh_indexer_storage.check_config(check_write=True) assert swh_indexer_storage.check_config(check_write=False) class StorageETypeTester: """Base class for testing a series of common behaviour between a bunch of endpoint types supported by an IndexerStorage. 
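The in-memory fixture above is the smallest way to obtain a journal-enabled indexer storage; a sketch of what that gives a test, assuming ``swh.journal`` is installed:

.. code-block:: python

    from swh.indexer.storage import get_indexer_storage

    storage = get_indexer_storage("memory", journal_writer={"cls": "memory"})
    summary = storage.content_mimetype_add([])        # empty batch: a no-op
    assert summary == {"content_mimetype:add": 0}
    assert storage.journal_writer.journal.objects == []   # nothing journalled yet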
This is supposed to be inherited with the following class attributes: - endpoint_type - tool_name - example_data See below for example usage. """ endpoint_type: str tool_name: str example_data: List[Dict] row_class: Type[BaseRow] def test_missing( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data etype = self.endpoint_type tool_id = data.tools[self.tool_name]["id"] # given 2 (hopefully) unknown objects query = [ {"id": data.sha1_1, "indexer_configuration_id": tool_id,}, {"id": data.sha1_2, "indexer_configuration_id": tool_id,}, ] # we expect these are both returned by the xxx_missing endpoint actual_missing = endpoint(storage, etype, "missing")(query) assert list(actual_missing) == [ data.sha1_1, data.sha1_2, ] # now, when we add one of them summary = endpoint(storage, etype, "add")( [ self.row_class.from_dict( { "id": data.sha1_2, **self.example_data[0], "indexer_configuration_id": tool_id, } ) ] ) assert summary == expected_summary(1, etype) # we expect only the other one returned actual_missing = endpoint(storage, etype, "missing")(query) assert list(actual_missing) == [data.sha1_1] def test_add__update_in_place_duplicate( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data etype = self.endpoint_type tool = data.tools[self.tool_name] data_v1 = { "id": data.sha1_2, **self.example_data[0], "indexer_configuration_id": tool["id"], } # given summary = endpoint(storage, etype, "add")([self.row_class.from_dict(data_v1)]) assert summary == expected_summary(1, etype) # not added # when actual_data = list(endpoint(storage, etype, "get")([data.sha1_2])) expected_data_v1 = [ self.row_class.from_dict( {"id": data.sha1_2, **self.example_data[0], "tool": tool} ) ] # then assert actual_data == expected_data_v1 # given data_v2 = data_v1.copy() data_v2.update(self.example_data[1]) endpoint(storage, etype, "add")([self.row_class.from_dict(data_v2)]) assert summary == expected_summary(1, etype) # modified so counted actual_data = list(endpoint(storage, etype, "get")([data.sha1_2])) expected_data_v2 = [ self.row_class.from_dict( {"id": data.sha1_2, **self.example_data[1], "tool": tool,} ) ] # data did change as the v2 was used to overwrite v1 assert actual_data == expected_data_v2 def test_add_deadlock( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data etype = self.endpoint_type tool = data.tools[self.tool_name] hashes = [ hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4{:03d}".format(i)) for i in range(1000) ] data_v1 = [ self.row_class.from_dict( { "id": hash_, **self.example_data[0], "indexer_configuration_id": tool["id"], } ) for hash_ in hashes ] data_v2 = [ self.row_class.from_dict( { "id": hash_, **self.example_data[1], "indexer_configuration_id": tool["id"], } ) for hash_ in hashes ] # Remove one item from each, so that both queries have to succeed for # all items to be in the DB. 
data_v2a = data_v2[1:] data_v2b = list(reversed(data_v2[0:-1])) # given endpoint(storage, etype, "add")(data_v1) # when actual_data = sorted( endpoint(storage, etype, "get")(hashes), key=lambda x: x.id, ) expected_data_v1 = [ self.row_class.from_dict( {"id": hash_, **self.example_data[0], "tool": tool} ) for hash_ in hashes ] # then assert actual_data == expected_data_v1 # given def f1() -> None: endpoint(storage, etype, "add")(data_v2a) def f2() -> None: endpoint(storage, etype, "add")(data_v2b) t1 = threading.Thread(target=f1) t2 = threading.Thread(target=f2) t2.start() t1.start() t1.join() t2.join() actual_data = sorted( endpoint(storage, etype, "get")(hashes), key=lambda x: x.id, ) expected_data_v2 = [ self.row_class.from_dict( {"id": hash_, **self.example_data[1], "tool": tool} ) for hash_ in hashes ] assert len(actual_data) == len(expected_data_v1) == len(expected_data_v2) for (item, expected_item_v1, expected_item_v2) in zip( actual_data, expected_data_v1, expected_data_v2 ): assert item in (expected_item_v1, expected_item_v2) def test_add__duplicate_twice( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data etype = self.endpoint_type tool = data.tools[self.tool_name] data_rev1 = self.row_class.from_dict( { "id": data.revision_id_2, **self.example_data[0], "indexer_configuration_id": tool["id"], } ) data_rev2 = self.row_class.from_dict( { "id": data.revision_id_2, **self.example_data[1], "indexer_configuration_id": tool["id"], } ) # when summary = endpoint(storage, etype, "add")([data_rev1]) assert summary == expected_summary(1, etype) with pytest.raises(DuplicateId): endpoint(storage, etype, "add")([data_rev2, data_rev2]) # then actual_data = list( endpoint(storage, etype, "get")([data.revision_id_2, data.revision_id_1]) ) expected_data = [ self.row_class.from_dict( {"id": data.revision_id_2, **self.example_data[0], "tool": tool} ) ] assert actual_data == expected_data - def test_get( + def test_add( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data etype = self.endpoint_type tool = data.tools[self.tool_name] + # conftest fills it with mimetypes + storage.journal_writer.journal.objects = [] # type: ignore + query = [data.sha1_2, data.sha1_1] data1 = self.row_class.from_dict( { "id": data.sha1_2, **self.example_data[0], "indexer_configuration_id": tool["id"], } ) # when summary = endpoint(storage, etype, "add")([data1]) assert summary == expected_summary(1, etype) # then actual_data = list(endpoint(storage, etype, "get")(query)) # then expected_data = [ self.row_class.from_dict( {"id": data.sha1_2, **self.example_data[0], "tool": tool} ) ] assert actual_data == expected_data + journal_objects = storage.journal_writer.journal.objects # type: ignore + actual_journal_data = [ + obj for (obj_type, obj) in journal_objects if obj_type == self.endpoint_type + ] + assert list(sorted(actual_journal_data)) == list(sorted(expected_data)) + class TestIndexerStorageContentMimetypes(StorageETypeTester): """Test Indexer Storage content_mimetype related methods """ endpoint_type = "content_mimetype" tool_name = "file" example_data = [ {"mimetype": "text/plain", "encoding": "utf-8",}, {"mimetype": "text/html", "encoding": "us-ascii",}, ] row_class = ContentMimetypeRow def test_generate_content_mimetype_get_partition_failure( self, swh_indexer_storage: IndexerStorageInterface ) -> None: """get_partition call with wrong limit input should fail""" 
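The renamed ``test_add`` above now also checks the journal: every row accepted by an ``*_add`` endpoint must appear in the in-memory journal under the matching object type. The extraction it performs, factored into an illustrative helper:

.. code-block:: python

    def journalled_rows(storage, endpoint_type):
        # rows the in-memory journal recorded for one object type, sorted
        # so they compare cleanly against the expected rows
        return sorted(
            obj
            for (obj_type, obj) in storage.journal_writer.journal.objects
            if obj_type == endpoint_type
        )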
storage = swh_indexer_storage indexer_configuration_id = 42 with pytest.raises( IndexerStorageArgumentException, match="limit should not be None" ): storage.content_mimetype_get_partition( indexer_configuration_id, 0, 3, limit=None # type: ignore ) def test_generate_content_mimetype_get_partition_no_limit( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition should return result""" storage, data = swh_indexer_storage_with_data mimetypes = data.mimetypes expected_ids = set([c.id for c in mimetypes]) indexer_configuration_id = mimetypes[0].indexer_configuration_id assert len(mimetypes) == 16 nb_partitions = 16 actual_ids = [] for partition_id in range(nb_partitions): actual_result = storage.content_mimetype_get_partition( indexer_configuration_id, partition_id, nb_partitions ) assert actual_result.next_page_token is None actual_ids.extend(actual_result.results) assert len(actual_ids) == len(expected_ids) for actual_id in actual_ids: assert actual_id in expected_ids def test_generate_content_mimetype_get_partition_full( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition for a single partition should return available ids """ storage, data = swh_indexer_storage_with_data mimetypes = data.mimetypes expected_ids = set([c.id for c in mimetypes]) indexer_configuration_id = mimetypes[0].indexer_configuration_id actual_result = storage.content_mimetype_get_partition( indexer_configuration_id, 0, 1 ) assert actual_result.next_page_token is None actual_ids = actual_result.results assert len(actual_ids) == len(expected_ids) for actual_id in actual_ids: assert actual_id in expected_ids def test_generate_content_mimetype_get_partition_empty( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition when at least one of the partitions is empty""" storage, data = swh_indexer_storage_with_data mimetypes = data.mimetypes expected_ids = set([c.id for c in mimetypes]) indexer_configuration_id = mimetypes[0].indexer_configuration_id # nb_partitions = smallest power of 2 such that at least one of # the partitions is empty nb_mimetypes = len(mimetypes) nb_partitions = 1 << math.floor(math.log2(nb_mimetypes) + 1) seen_ids = [] for partition_id in range(nb_partitions): actual_result = storage.content_mimetype_get_partition( indexer_configuration_id, partition_id, nb_partitions, limit=nb_mimetypes + 1, ) for actual_id in actual_result.results: seen_ids.append(actual_id) # Limit is higher than the max number of results assert actual_result.next_page_token is None assert set(seen_ids) == expected_ids def test_generate_content_mimetype_get_partition_with_pagination( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition should return ids provided with pagination """ storage, data = swh_indexer_storage_with_data mimetypes = data.mimetypes expected_ids = set([c.id for c in mimetypes]) indexer_configuration_id = mimetypes[0].indexer_configuration_id nb_partitions = 4 actual_ids = [] for partition_id in range(nb_partitions): next_page_token = None while True: actual_result = storage.content_mimetype_get_partition( indexer_configuration_id, partition_id, nb_partitions, limit=2, page_token=next_page_token, ) actual_ids.extend(actual_result.results) next_page_token = actual_result.next_page_token if next_page_token is None: break assert len(set(actual_ids)) == len(set(expected_ids)) for actual_id in actual_ids: assert actual_id in expected_ids 
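The partition tests above all follow the same traversal; factored out as a hypothetical client-side helper (not part of the API), this is how a caller would enumerate every indexed sha1 for one tool across all partitions and pages:

.. code-block:: python

    def iter_all_ids(storage, tool_id, nb_partitions=4, page_size=2):
        for partition_id in range(nb_partitions):
            page_token = None
            while True:
                result = storage.content_mimetype_get_partition(
                    tool_id, partition_id, nb_partitions,
                    page_token=page_token, limit=page_size,
                )
                yield from result.results
                page_token = result.next_page_token
                if page_token is None:   # this partition is exhausted
                    break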
class TestIndexerStorageContentLanguage(StorageETypeTester): """Test Indexer Storage content_language related methods """ endpoint_type = "content_language" tool_name = "pygments" example_data = [ {"lang": "haskell",}, {"lang": "common-lisp",}, ] row_class = ContentLanguageRow class TestIndexerStorageContentCTags(StorageETypeTester): """Test Indexer Storage content_ctags related methods """ endpoint_type = "content_ctags" tool_name = "universal-ctags" example_data = [ {"name": "done", "kind": "variable", "line": 119, "lang": "OCaml",}, {"name": "done", "kind": "variable", "line": 100, "lang": "Python",}, {"name": "main", "kind": "function", "line": 119, "lang": "Python",}, ] row_class = ContentCtagsRow # the following tests are disabled because CTAGS behaves differently @pytest.mark.skip def test_add__update_in_place_duplicate(self): pass @pytest.mark.skip def test_add_deadlock(self): pass def test_content_ctags_search( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # 1. given tool = data.tools["universal-ctags"] tool_id = tool["id"] ctags1 = [ ContentCtagsRow( id=data.sha1_1, indexer_configuration_id=tool_id, **kwargs, # type: ignore ) for kwargs in [ {"name": "hello", "kind": "function", "line": 133, "lang": "Python",}, {"name": "counter", "kind": "variable", "line": 119, "lang": "Python",}, {"name": "hello", "kind": "variable", "line": 210, "lang": "Python",}, ] ] ctags1_with_tool = [ attr.evolve(ctag, indexer_configuration_id=None, tool=tool) for ctag in ctags1 ] ctags2 = [ ContentCtagsRow( id=data.sha1_2, indexer_configuration_id=tool_id, **kwargs, # type: ignore ) for kwargs in [ {"name": "hello", "kind": "variable", "line": 100, "lang": "C",}, {"name": "result", "kind": "variable", "line": 120, "lang": "C",}, ] ] ctags2_with_tool = [ attr.evolve(ctag, indexer_configuration_id=None, tool=tool) for ctag in ctags2 ] storage.content_ctags_add(ctags1 + ctags2) # 1. when actual_ctags = list(storage.content_ctags_search("hello", limit=1)) # 1. then assert actual_ctags == [ctags1_with_tool[0]] # 2. when actual_ctags = list( storage.content_ctags_search("hello", limit=1, last_sha1=data.sha1_1) ) # 2. then assert actual_ctags == [ctags2_with_tool[0]] # 3. when actual_ctags = list(storage.content_ctags_search("hello")) # 3. then assert actual_ctags == [ ctags1_with_tool[0], ctags1_with_tool[2], ctags2_with_tool[0], ] # 4. when actual_ctags = list(storage.content_ctags_search("counter")) # then assert actual_ctags == [ctags1_with_tool[1]] # 5. 
when actual_ctags = list(storage.content_ctags_search("result", limit=1)) # then assert actual_ctags == [ctags2_with_tool[1]] def test_content_ctags_search_no_result( self, swh_indexer_storage: IndexerStorageInterface ) -> None: storage = swh_indexer_storage actual_ctags = list(storage.content_ctags_search("counter")) assert not actual_ctags def test_content_ctags_add__add_new_ctags_added( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool = data.tools["universal-ctags"] tool_id = tool["id"] ctag1 = ContentCtagsRow( id=data.sha1_2, indexer_configuration_id=tool_id, name="done", kind="variable", line=100, lang="Scheme", ) ctag1_with_tool = attr.evolve(ctag1, indexer_configuration_id=None, tool=tool) # given storage.content_ctags_add([ctag1]) storage.content_ctags_add([ctag1]) # conflict does nothing # when actual_ctags = list(storage.content_ctags_get([data.sha1_2])) # then assert actual_ctags == [ctag1_with_tool] # given ctag2 = ContentCtagsRow( id=data.sha1_2, indexer_configuration_id=tool_id, name="defn", kind="function", line=120, lang="Scheme", ) ctag2_with_tool = attr.evolve(ctag2, indexer_configuration_id=None, tool=tool) storage.content_ctags_add([ctag2]) actual_ctags = list(storage.content_ctags_get([data.sha1_2])) assert actual_ctags == [ctag1_with_tool, ctag2_with_tool] def test_content_ctags_add__update_in_place( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool = data.tools["universal-ctags"] tool_id = tool["id"] ctag1 = ContentCtagsRow( id=data.sha1_2, indexer_configuration_id=tool_id, name="done", kind="variable", line=100, lang="Scheme", ) ctag1_with_tool = attr.evolve(ctag1, indexer_configuration_id=None, tool=tool) # given storage.content_ctags_add([ctag1]) # when actual_ctags = list(storage.content_ctags_get([data.sha1_2])) # then assert actual_ctags == [ctag1_with_tool] # given ctag2 = ContentCtagsRow( id=data.sha1_2, indexer_configuration_id=tool_id, name="defn", kind="function", line=120, lang="Scheme", ) ctag2_with_tool = attr.evolve(ctag2, indexer_configuration_id=None, tool=tool) storage.content_ctags_add([ctag1, ctag2]) actual_ctags = list(storage.content_ctags_get([data.sha1_2])) assert actual_ctags == [ctag1_with_tool, ctag2_with_tool] def test_add_empty( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: (storage, data) = swh_indexer_storage_with_data etype = self.endpoint_type summary = endpoint(storage, etype, "add")([]) assert summary == {"content_ctags:add": 0} actual_ctags = list(endpoint(storage, etype, "get")([data.sha1_2])) assert actual_ctags == [] def test_get_unknown( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: (storage, data) = swh_indexer_storage_with_data etype = self.endpoint_type actual_ctags = list(endpoint(storage, etype, "get")([data.sha1_2])) assert actual_ctags == [] class TestIndexerStorageContentMetadata(StorageETypeTester): """Test Indexer Storage content_metadata related methods """ tool_name = "swh-metadata-detector" endpoint_type = "content_metadata" example_data = [ { "metadata": { "other": {}, "codeRepository": { "type": "git", "url": "https://github.com/moranegg/metadata_test", }, "description": "Simple package.json test for indexer", "name": "test_metadata", "version": "0.0.1", }, }, {"metadata": {"other": {}, "name": "test_metadata", "version": "0.0.1"},}, ] row_class = 
ContentMetadataRow class TestIndexerStorageRevisionIntrinsicMetadata(StorageETypeTester): """Test Indexer Storage revision_intrinsic_metadata related methods """ tool_name = "swh-metadata-detector" endpoint_type = "revision_intrinsic_metadata" example_data = [ { "metadata": { "other": {}, "codeRepository": { "type": "git", "url": "https://github.com/moranegg/metadata_test", }, "description": "Simple package.json test for indexer", "name": "test_metadata", "version": "0.0.1", }, "mappings": ["mapping1"], }, { "metadata": {"other": {}, "name": "test_metadata", "version": "0.0.1"}, "mappings": ["mapping2"], }, ] row_class = RevisionIntrinsicMetadataRow class TestIndexerStorageContentFossologyLicense(StorageETypeTester): endpoint_type = "content_fossology_license" tool_name = "nomos" example_data = [ {"license": "Apache-2.0"}, {"license": "BSD-2-Clause"}, ] row_class = ContentLicenseRow # the following tests are disabled because licenses behave differently @pytest.mark.skip def test_add__update_in_place_duplicate(self): pass @pytest.mark.skip def test_add_deadlock(self): pass # content_fossology_license_missing does not exist @pytest.mark.skip def test_missing(self): pass def test_content_fossology_license_add__new_license_added( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool = data.tools["nomos"] tool_id = tool["id"] license1 = ContentLicenseRow( id=data.sha1_1, license="Apache-2.0", indexer_configuration_id=tool_id, ) # given storage.content_fossology_license_add([license1]) # conflict does nothing storage.content_fossology_license_add([license1]) # when actual_licenses = list(storage.content_fossology_license_get([data.sha1_1])) # then expected_licenses = [ ContentLicenseRow(id=data.sha1_1, license="Apache-2.0", tool=tool,) ] assert actual_licenses == expected_licenses # given license2 = ContentLicenseRow( id=data.sha1_1, license="BSD-2-Clause", indexer_configuration_id=tool_id, ) storage.content_fossology_license_add([license2]) actual_licenses = list(storage.content_fossology_license_get([data.sha1_1])) expected_licenses.append( ContentLicenseRow(id=data.sha1_1, license="BSD-2-Clause", tool=tool,) ) # first license was not removed when the second one was added assert sorted(actual_licenses) == sorted(expected_licenses) def test_generate_content_fossology_license_get_partition_failure( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition call with wrong limit input should fail""" storage, data = swh_indexer_storage_with_data indexer_configuration_id = 42 with pytest.raises( IndexerStorageArgumentException, match="limit should not be None" ): storage.content_fossology_license_get_partition( indexer_configuration_id, 0, 3, limit=None, # type: ignore ) def test_generate_content_fossology_license_get_partition_no_limit( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition should return results""" storage, data = swh_indexer_storage_with_data # craft some consistent mimetypes fossology_licenses = data.fossology_licenses mimetypes = prepare_mimetypes_from_licenses(fossology_licenses) indexer_configuration_id = fossology_licenses[0].indexer_configuration_id storage.content_mimetype_add(mimetypes) # add fossology_licenses to storage storage.content_fossology_license_add(fossology_licenses) # All ids from the db expected_ids = set([c.id for c in fossology_licenses]) assert len(fossology_licenses) == 10 assert
len(mimetypes) == 10 nb_partitions = 4 actual_ids = [] for partition_id in range(nb_partitions): actual_result = storage.content_fossology_license_get_partition( indexer_configuration_id, partition_id, nb_partitions ) assert actual_result.next_page_token is None actual_ids.extend(actual_result.results) assert len(set(actual_ids)) == len(expected_ids) for actual_id in actual_ids: assert actual_id in expected_ids def test_generate_content_fossology_license_get_partition_full( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition for a single partition should return available ids """ storage, data = swh_indexer_storage_with_data # craft some consistent mimetypes fossology_licenses = data.fossology_licenses mimetypes = prepare_mimetypes_from_licenses(fossology_licenses) indexer_configuration_id = fossology_licenses[0].indexer_configuration_id storage.content_mimetype_add(mimetypes) # add fossology_licenses to storage storage.content_fossology_license_add(fossology_licenses) # All ids from the db expected_ids = set([c.id for c in fossology_licenses]) actual_result = storage.content_fossology_license_get_partition( indexer_configuration_id, 0, 1 ) assert actual_result.next_page_token is None actual_ids = actual_result.results assert len(set(actual_ids)) == len(expected_ids) for actual_id in actual_ids: assert actual_id in expected_ids def test_generate_content_fossology_license_get_partition_empty( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition when at least one of the partitions is empty""" storage, data = swh_indexer_storage_with_data # craft some consistent mimetypes fossology_licenses = data.fossology_licenses mimetypes = prepare_mimetypes_from_licenses(fossology_licenses) indexer_configuration_id = fossology_licenses[0].indexer_configuration_id storage.content_mimetype_add(mimetypes) # add fossology_licenses to storage storage.content_fossology_license_add(fossology_licenses) # All ids from the db expected_ids = set([c.id for c in fossology_licenses]) # nb_partitions = smallest power of 2 such that at least one of # the partitions is empty nb_licenses = len(fossology_licenses) nb_partitions = 1 << math.floor(math.log2(nb_licenses) + 1) seen_ids = [] for partition_id in range(nb_partitions): actual_result = storage.content_fossology_license_get_partition( indexer_configuration_id, partition_id, nb_partitions, limit=nb_licenses + 1, ) for actual_id in actual_result.results: seen_ids.append(actual_id) # Limit is higher than the max number of results assert actual_result.next_page_token is None assert set(seen_ids) == expected_ids def test_generate_content_fossology_license_get_partition_with_pagination( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: """get_partition should return ids provided with pagination """ storage, data = swh_indexer_storage_with_data # craft some consistent mimetypes fossology_licenses = data.fossology_licenses mimetypes = prepare_mimetypes_from_licenses(fossology_licenses) indexer_configuration_id = fossology_licenses[0].indexer_configuration_id storage.content_mimetype_add(mimetypes) # add fossology_licenses to storage storage.content_fossology_license_add(fossology_licenses) # All ids from the db expected_ids = [c.id for c in fossology_licenses] nb_partitions = 4 actual_ids = [] for partition_id in range(nb_partitions): next_page_token = None while True: actual_result = storage.content_fossology_license_get_partition(
indexer_configuration_id, partition_id, nb_partitions, limit=2, page_token=next_page_token, ) actual_ids.extend(actual_result.results) next_page_token = actual_result.next_page_token if next_page_token is None: break assert len(set(actual_ids)) == len(set(expected_ids)) for actual_id in actual_ids: assert actual_id in expected_ids def test_add_empty( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: (storage, data) = swh_indexer_storage_with_data etype = self.endpoint_type summary = endpoint(storage, etype, "add")([]) assert summary == {"content_fossology_license:add": 0} actual_license = list(endpoint(storage, etype, "get")([data.sha1_2])) assert actual_license == [] def test_get_unknown( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: (storage, data) = swh_indexer_storage_with_data etype = self.endpoint_type actual_license = list(endpoint(storage, etype, "get")([data.sha1_2])) assert actual_license == [] class TestIndexerStorageOriginIntrinsicMetadata: - def test_origin_intrinsic_metadata_get( + def test_origin_intrinsic_metadata_add( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool_id = data.tools["swh-metadata-detector"]["id"] metadata = { "version": None, "name": None, } metadata_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_2, metadata=metadata, mappings=["mapping1"], indexer_configuration_id=tool_id, ) metadata_origin = OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata, indexer_configuration_id=tool_id, mappings=["mapping1"], from_revision=data.revision_id_2, ) # when storage.revision_intrinsic_metadata_add([metadata_rev]) storage.origin_intrinsic_metadata_add([metadata_origin]) # then actual_metadata = list( storage.origin_intrinsic_metadata_get([data.origin_url_1, "no://where"]) ) expected_metadata = [ OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata, tool=data.tools["swh-metadata-detector"], from_revision=data.revision_id_2, mappings=["mapping1"], ) ] assert actual_metadata == expected_metadata + journal_objects = storage.journal_writer.journal.objects # type: ignore + actual_journal_metadata = [ + obj + for (obj_type, obj) in journal_objects + if obj_type == "origin_intrinsic_metadata" + ] + assert list(sorted(actual_journal_metadata)) == list(sorted(expected_metadata)) + def test_origin_intrinsic_metadata_add_update_in_place_duplicate( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool_id = data.tools["swh-metadata-detector"]["id"] metadata_v1: Dict[str, Any] = { "version": None, "name": None, } metadata_rev_v1 = RevisionIntrinsicMetadataRow( id=data.revision_id_2, metadata=metadata_v1, mappings=[], indexer_configuration_id=tool_id, ) metadata_origin_v1 = OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata_v1.copy(), indexer_configuration_id=tool_id, mappings=[], from_revision=data.revision_id_2, ) # given storage.revision_intrinsic_metadata_add([metadata_rev_v1]) storage.origin_intrinsic_metadata_add([metadata_origin_v1]) # when actual_metadata = list( storage.origin_intrinsic_metadata_get([data.origin_url_1]) ) # then expected_metadata_v1 = [ OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata_v1, tool=data.tools["swh-metadata-detector"], from_revision=data.revision_id_2, mappings=[], ) ] assert actual_metadata == expected_metadata_v1 # given 
metadata_v2 = metadata_v1.copy() metadata_v2.update( {"name": "test_update_duplicated_metadata", "author": "MG",} ) metadata_rev_v2 = attr.evolve(metadata_rev_v1, metadata=metadata_v2) metadata_origin_v2 = OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata_v2.copy(), indexer_configuration_id=tool_id, mappings=["npm"], from_revision=data.revision_id_1, ) storage.revision_intrinsic_metadata_add([metadata_rev_v2]) storage.origin_intrinsic_metadata_add([metadata_origin_v2]) actual_metadata = list( storage.origin_intrinsic_metadata_get([data.origin_url_1]) ) expected_metadata_v2 = [ OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata_v2, tool=data.tools["swh-metadata-detector"], from_revision=data.revision_id_1, mappings=["npm"], ) ] # metadata did change as the v2 was used to overwrite v1 assert actual_metadata == expected_metadata_v2 def test_origin_intrinsic_metadata_add__deadlock( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool_id = data.tools["swh-metadata-detector"]["id"] origins = ["file:///tmp/origin{:02d}".format(i) for i in range(100)] example_data1: Dict[str, Any] = { "metadata": {"version": None, "name": None,}, "mappings": [], } example_data2: Dict[str, Any] = { "metadata": {"version": "v1.1.1", "name": "foo",}, "mappings": [], } metadata_rev_v1 = RevisionIntrinsicMetadataRow( id=data.revision_id_2, metadata={"version": None, "name": None,}, mappings=[], indexer_configuration_id=tool_id, ) data_v1 = [ OriginIntrinsicMetadataRow( id=origin, from_revision=data.revision_id_2, indexer_configuration_id=tool_id, **example_data1, ) for origin in origins ] data_v2 = [ OriginIntrinsicMetadataRow( id=origin, from_revision=data.revision_id_2, indexer_configuration_id=tool_id, **example_data2, ) for origin in origins ] # Remove one item from each, so that both queries have to succeed for # all items to be in the DB. 
data_v2a = data_v2[1:] data_v2b = list(reversed(data_v2[0:-1])) # given storage.revision_intrinsic_metadata_add([metadata_rev_v1]) storage.origin_intrinsic_metadata_add(data_v1) # when actual_data = list(storage.origin_intrinsic_metadata_get(origins)) expected_data_v1 = [ OriginIntrinsicMetadataRow( id=origin, from_revision=data.revision_id_2, tool=data.tools["swh-metadata-detector"], **example_data1, ) for origin in origins ] # then assert actual_data == expected_data_v1 # given def f1() -> None: storage.origin_intrinsic_metadata_add(data_v2a) def f2() -> None: storage.origin_intrinsic_metadata_add(data_v2b) t1 = threading.Thread(target=f1) t2 = threading.Thread(target=f2) t2.start() t1.start() t1.join() t2.join() actual_data = list(storage.origin_intrinsic_metadata_get(origins)) expected_data_v2 = [ OriginIntrinsicMetadataRow( id=origin, from_revision=data.revision_id_2, tool=data.tools["swh-metadata-detector"], **example_data2, ) for origin in origins ] actual_data.sort(key=lambda item: item.id) assert len(actual_data) == len(expected_data_v1) == len(expected_data_v2) for (item, expected_item_v1, expected_item_v2) in zip( actual_data, expected_data_v1, expected_data_v2 ): assert item in (expected_item_v1, expected_item_v2) def test_origin_intrinsic_metadata_add__duplicate_twice( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool_id = data.tools["swh-metadata-detector"]["id"] metadata = { "developmentStatus": None, "name": None, } metadata_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_2, metadata=metadata, mappings=["mapping1"], indexer_configuration_id=tool_id, ) metadata_origin = OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata, indexer_configuration_id=tool_id, mappings=["mapping1"], from_revision=data.revision_id_2, ) # when storage.revision_intrinsic_metadata_add([metadata_rev]) with pytest.raises(DuplicateId): storage.origin_intrinsic_metadata_add([metadata_origin, metadata_origin]) def test_origin_intrinsic_metadata_search_fulltext( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool_id = data.tools["swh-metadata-detector"]["id"] metadata1 = { "author": "John Doe", } metadata1_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_1, metadata=metadata1, mappings=[], indexer_configuration_id=tool_id, ) metadata1_origin = OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata1, mappings=[], indexer_configuration_id=tool_id, from_revision=data.revision_id_1, ) metadata2 = { "author": "Jane Doe", } metadata2_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_2, metadata=metadata2, mappings=[], indexer_configuration_id=tool_id, ) metadata2_origin = OriginIntrinsicMetadataRow( id=data.origin_url_2, metadata=metadata2, mappings=[], indexer_configuration_id=tool_id, from_revision=data.revision_id_2, ) # when storage.revision_intrinsic_metadata_add([metadata1_rev]) storage.origin_intrinsic_metadata_add([metadata1_origin]) storage.revision_intrinsic_metadata_add([metadata2_rev]) storage.origin_intrinsic_metadata_add([metadata2_origin]) # then search = storage.origin_intrinsic_metadata_search_fulltext assert set([res.id for res in search(["Doe"])]) == set( [data.origin_url_1, data.origin_url_2] ) assert [res.id for res in search(["John", "Doe"])] == [data.origin_url_1] assert [res.id for res in search(["John"])] == [data.origin_url_1] assert not 
list(search(["John", "Jane"])) def test_origin_intrinsic_metadata_search_fulltext_rank( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data # given tool_id = data.tools["swh-metadata-detector"]["id"] # The following authors have "Random Person" to add some more content # to the JSON data, to work around normalization quirks when there # are few words (rank/(1+ln(nb_words)) is very sensitive to nb_words # for small values of nb_words). metadata1 = {"author": ["Random Person", "John Doe", "Jane Doe",]} metadata1_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_1, metadata=metadata1, mappings=[], indexer_configuration_id=tool_id, ) metadata1_origin = OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata1, mappings=[], indexer_configuration_id=tool_id, from_revision=data.revision_id_1, ) metadata2 = {"author": ["Random Person", "Jane Doe",]} metadata2_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_2, metadata=metadata2, mappings=[], indexer_configuration_id=tool_id, ) metadata2_origin = OriginIntrinsicMetadataRow( id=data.origin_url_2, metadata=metadata2, mappings=[], indexer_configuration_id=tool_id, from_revision=data.revision_id_2, ) # when storage.revision_intrinsic_metadata_add([metadata1_rev]) storage.origin_intrinsic_metadata_add([metadata1_origin]) storage.revision_intrinsic_metadata_add([metadata2_rev]) storage.origin_intrinsic_metadata_add([metadata2_origin]) # then search = storage.origin_intrinsic_metadata_search_fulltext assert [res.id for res in search(["Doe"])] == [ data.origin_url_1, data.origin_url_2, ] assert [res.id for res in search(["Doe"], limit=1)] == [data.origin_url_1] assert [res.id for res in search(["John"])] == [data.origin_url_1] assert [res.id for res in search(["Jane"])] == [ data.origin_url_2, data.origin_url_1, ] assert [res.id for res in search(["John", "Jane"])] == [data.origin_url_1] def _fill_origin_intrinsic_metadata( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data tool1_id = data.tools["swh-metadata-detector"]["id"] tool2_id = data.tools["swh-metadata-detector2"]["id"] metadata1 = { "@context": "foo", "author": "John Doe", } metadata1_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_1, metadata=metadata1, mappings=["npm"], indexer_configuration_id=tool1_id, ) metadata1_origin = OriginIntrinsicMetadataRow( id=data.origin_url_1, metadata=metadata1, mappings=["npm"], indexer_configuration_id=tool1_id, from_revision=data.revision_id_1, ) metadata2 = { "@context": "foo", "author": "Jane Doe", } metadata2_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_2, metadata=metadata2, mappings=["npm", "gemspec"], indexer_configuration_id=tool2_id, ) metadata2_origin = OriginIntrinsicMetadataRow( id=data.origin_url_2, metadata=metadata2, mappings=["npm", "gemspec"], indexer_configuration_id=tool2_id, from_revision=data.revision_id_2, ) metadata3 = { "@context": "foo", } metadata3_rev = RevisionIntrinsicMetadataRow( id=data.revision_id_3, metadata=metadata3, mappings=["npm", "gemspec"], indexer_configuration_id=tool2_id, ) metadata3_origin = OriginIntrinsicMetadataRow( id=data.origin_url_3, metadata=metadata3, mappings=["pkg-info"], indexer_configuration_id=tool2_id, from_revision=data.revision_id_3, ) storage.revision_intrinsic_metadata_add([metadata1_rev]) storage.origin_intrinsic_metadata_add([metadata1_origin]) storage.revision_intrinsic_metadata_add([metadata2_rev]) 
storage.origin_intrinsic_metadata_add([metadata2_origin]) storage.revision_intrinsic_metadata_add([metadata3_rev]) storage.origin_intrinsic_metadata_add([metadata3_origin]) def test_origin_intrinsic_metadata_search_by_producer( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data self._fill_origin_intrinsic_metadata(swh_indexer_storage_with_data) tool1 = data.tools["swh-metadata-detector"] tool2 = data.tools["swh-metadata-detector2"] endpoint = storage.origin_intrinsic_metadata_search_by_producer # test pagination # no 'page_token' param, return all origins result = endpoint(ids_only=True) assert result == PagedResult( results=[data.origin_url_1, data.origin_url_2, data.origin_url_3,], next_page_token=None, ) # 'page_token' is < than origin_1, return everything result = endpoint(page_token=data.origin_url_1[:-1], ids_only=True) assert result == PagedResult( results=[data.origin_url_1, data.origin_url_2, data.origin_url_3,], next_page_token=None, ) # 'page_token' is origin_3, return nothing result = endpoint(page_token=data.origin_url_3, ids_only=True) assert result == PagedResult(results=[], next_page_token=None) # test limit argument result = endpoint(page_token=data.origin_url_1[:-1], limit=2, ids_only=True) assert result == PagedResult( results=[data.origin_url_1, data.origin_url_2], next_page_token=data.origin_url_2, ) result = endpoint(page_token=data.origin_url_1, limit=2, ids_only=True) assert result == PagedResult( results=[data.origin_url_2, data.origin_url_3], next_page_token=None, ) result = endpoint(page_token=data.origin_url_2, limit=2, ids_only=True) assert result == PagedResult(results=[data.origin_url_3], next_page_token=None,) # test mappings filtering result = endpoint(mappings=["npm"], ids_only=True) assert result == PagedResult( results=[data.origin_url_1, data.origin_url_2], next_page_token=None, ) result = endpoint(mappings=["npm", "gemspec"], ids_only=True) assert result == PagedResult( results=[data.origin_url_1, data.origin_url_2], next_page_token=None, ) result = endpoint(mappings=["gemspec"], ids_only=True) assert result == PagedResult(results=[data.origin_url_2], next_page_token=None,) result = endpoint(mappings=["pkg-info"], ids_only=True) assert result == PagedResult(results=[data.origin_url_3], next_page_token=None,) result = endpoint(mappings=["foobar"], ids_only=True) assert result == PagedResult(results=[], next_page_token=None,) # test pagination + mappings result = endpoint(mappings=["npm"], limit=1, ids_only=True) assert result == PagedResult( results=[data.origin_url_1], next_page_token=data.origin_url_1, ) # test tool filtering result = endpoint(tool_ids=[tool1["id"]], ids_only=True) assert result == PagedResult(results=[data.origin_url_1], next_page_token=None,) result = endpoint(tool_ids=[tool2["id"]], ids_only=True) assert sorted(result.results) == [data.origin_url_2, data.origin_url_3] assert result.next_page_token is None result = endpoint(tool_ids=[tool1["id"], tool2["id"]], ids_only=True) assert sorted(result.results) == [ data.origin_url_1, data.origin_url_2, data.origin_url_3, ] assert result.next_page_token is None # test ids_only=False assert endpoint(mappings=["gemspec"]) == PagedResult( results=[ OriginIntrinsicMetadataRow( id=data.origin_url_2, metadata={"@context": "foo", "author": "Jane Doe",}, mappings=["npm", "gemspec"], tool=tool2, from_revision=data.revision_id_2, ) ], next_page_token=None, ) def test_origin_intrinsic_metadata_stats( self, 
swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data self._fill_origin_intrinsic_metadata(swh_indexer_storage_with_data) result = storage.origin_intrinsic_metadata_stats() assert result == { "per_mapping": { "gemspec": 1, "npm": 2, "pkg-info": 1, "codemeta": 0, "maven": 0, }, "total": 3, "non_empty": 2, } class TestIndexerStorageIndexerConfiguration: def test_indexer_configuration_add( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data tool = { "tool_name": "some-unknown-tool", "tool_version": "some-version", "tool_configuration": {"debian-package": "some-package"}, } actual_tool = storage.indexer_configuration_get(tool) assert actual_tool is None # does not exist # add it actual_tools = list(storage.indexer_configuration_add([tool])) assert len(actual_tools) == 1 actual_tool = actual_tools[0] assert actual_tool is not None # now it exists new_id = actual_tool.pop("id") assert actual_tool == tool actual_tools2 = list(storage.indexer_configuration_add([tool])) actual_tool2 = actual_tools2[0] assert actual_tool2 is not None # now it exists new_id2 = actual_tool2.pop("id") assert new_id == new_id2 assert actual_tool == actual_tool2 def test_indexer_configuration_add_multiple( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data tool = { "tool_name": "some-unknown-tool", "tool_version": "some-version", "tool_configuration": {"debian-package": "some-package"}, } actual_tools = list(storage.indexer_configuration_add([tool])) assert len(actual_tools) == 1 new_tools = [ tool, { "tool_name": "yet-another-tool", "tool_version": "version", "tool_configuration": {}, }, ] actual_tools = list(storage.indexer_configuration_add(new_tools)) assert len(actual_tools) == 2 # order not guaranteed, so we iterate over results to check for tool in actual_tools: _id = tool.pop("id") assert _id is not None assert tool in new_tools def test_indexer_configuration_get_missing( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data tool = { "tool_name": "unknown-tool", "tool_version": "3.1.0rc2-31-ga2cbb8c", "tool_configuration": {"command_line": "nomossa "}, } actual_tool = storage.indexer_configuration_get(tool) assert actual_tool is None def test_indexer_configuration_get( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data tool = { "tool_name": "nomos", "tool_version": "3.1.0rc2-31-ga2cbb8c", "tool_configuration": {"command_line": "nomossa "}, } actual_tool = storage.indexer_configuration_get(tool) assert actual_tool expected_tool = tool.copy() del actual_tool["id"] assert expected_tool == actual_tool def test_indexer_configuration_metadata_get_missing_context( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data tool = { "tool_name": "swh-metadata-translator", "tool_version": "0.0.1", "tool_configuration": {"context": "unknown-context"}, } actual_tool = storage.indexer_configuration_get(tool) assert actual_tool is None def test_indexer_configuration_metadata_get( self, swh_indexer_storage_with_data: Tuple[IndexerStorageInterface, Any] ) -> None: storage, data = swh_indexer_storage_with_data tool = { "tool_name": "swh-metadata-translator", "tool_version": "0.0.1", 
"tool_configuration": {"type": "local", "context": "NpmMapping"}, } storage.indexer_configuration_add([tool]) actual_tool = storage.indexer_configuration_get(tool) assert actual_tool expected_tool = tool.copy() expected_tool["id"] = actual_tool["id"] assert expected_tool == actual_tool diff --git a/swh/indexer/tests/test_cli.py b/swh/indexer/tests/test_cli.py index 04d1ad2..3deca15 100644 --- a/swh/indexer/tests/test_cli.py +++ b/swh/indexer/tests/test_cli.py @@ -1,385 +1,397 @@ # Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from functools import reduce import re import tempfile from typing import Any, Dict, List from unittest.mock import patch from click.testing import CliRunner from confluent_kafka import Consumer, Producer +import pytest from swh.indexer.cli import indexer_cli_group from swh.indexer.storage.interface import IndexerStorageInterface from swh.indexer.storage.model import ( OriginIntrinsicMetadataRow, RevisionIntrinsicMetadataRow, ) from swh.journal.serializers import value_to_kafka from swh.model.hashutil import hash_to_bytes CLI_CONFIG = """ scheduler: cls: foo args: {} storage: cls: memory indexer_storage: cls: memory """ def fill_idx_storage(idx_storage: IndexerStorageInterface, nb_rows: int) -> List[int]: tools: List[Dict[str, Any]] = [ {"tool_name": "tool %d" % i, "tool_version": "0.0.1", "tool_configuration": {},} for i in range(2) ] tools = idx_storage.indexer_configuration_add(tools) origin_metadata = [ OriginIntrinsicMetadataRow( id="file://dev/%04d" % origin_id, from_revision=hash_to_bytes("abcd{:0>4}".format(origin_id)), indexer_configuration_id=tools[origin_id % 2]["id"], metadata={"name": "origin %d" % origin_id}, mappings=["mapping%d" % (origin_id % 10)], ) for origin_id in range(nb_rows) ] revision_metadata = [ RevisionIntrinsicMetadataRow( id=hash_to_bytes("abcd{:0>4}".format(origin_id)), indexer_configuration_id=tools[origin_id % 2]["id"], metadata={"name": "origin %d" % origin_id}, mappings=["mapping%d" % (origin_id % 10)], ) for origin_id in range(nb_rows) ] idx_storage.revision_intrinsic_metadata_add(revision_metadata) idx_storage.origin_intrinsic_metadata_add(origin_metadata) return [tool["id"] for tool in tools] def _origins_in_task_args(tasks): """Returns the set of origins contained in the arguments of the provided tasks (assumed to be of type index-origin-metadata).""" return reduce( set.union, (set(task["arguments"]["args"][0]) for task in tasks), set() ) def _assert_tasks_for_origins(tasks, origins): expected_kwargs = {} assert {task["type"] for task in tasks} == {"index-origin-metadata"} assert all(len(task["arguments"]["args"]) == 1 for task in tasks) for task in tasks: assert task["arguments"]["kwargs"] == expected_kwargs, task assert _origins_in_task_args(tasks) == set(["file://dev/%04d" % i for i in origins]) def invoke(scheduler, catch_exceptions, args): runner = CliRunner() with patch( "swh.scheduler.get_scheduler" ) as get_scheduler_mock, tempfile.NamedTemporaryFile( "a", suffix=".yml" ) as config_fd: config_fd.write(CLI_CONFIG) config_fd.seek(0) get_scheduler_mock.return_value = scheduler result = runner.invoke(indexer_cli_group, ["-C" + config_fd.name] + args) if not catch_exceptions and result.exception: print(result.output) raise result.exception return result def test_mapping_list(indexer_scheduler): result = invoke(indexer_scheduler, 
False, ["mapping", "list",]) expected_output = "\n".join( ["codemeta", "gemspec", "maven", "npm", "pkg-info", "",] ) assert result.exit_code == 0, result.output assert result.output == expected_output def test_mapping_list_terms(indexer_scheduler): result = invoke(indexer_scheduler, False, ["mapping", "list-terms",]) assert result.exit_code == 0, result.output assert re.search(r"http://schema.org/url:\n.*npm", result.output) assert re.search(r"http://schema.org/url:\n.*codemeta", result.output) assert re.search( r"https://codemeta.github.io/terms/developmentStatus:\n\tcodemeta", result.output, ) def test_mapping_list_terms_exclude(indexer_scheduler): result = invoke( indexer_scheduler, False, ["mapping", "list-terms", "--exclude-mapping", "codemeta"], ) assert result.exit_code == 0, result.output assert re.search(r"http://schema.org/url:\n.*npm", result.output) assert not re.search(r"http://schema.org/url:\n.*codemeta", result.output) assert not re.search( r"https://codemeta.github.io/terms/developmentStatus:\n\tcodemeta", result.output, ) @patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3) @patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3) def test_origin_metadata_reindex_empty_db(indexer_scheduler, idx_storage, storage): result = invoke(indexer_scheduler, False, ["schedule", "reindex_origin_metadata",]) expected_output = "Nothing to do (no origin metadata matched the criteria).\n" assert result.exit_code == 0, result.output assert result.output == expected_output tasks = indexer_scheduler.search_tasks() assert len(tasks) == 0 @patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3) @patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3) def test_origin_metadata_reindex_divisor(indexer_scheduler, idx_storage, storage): """Tests the re-indexing when origin_batch_size*task_batch_size is a divisor of nb_origins.""" fill_idx_storage(idx_storage, 90) result = invoke(indexer_scheduler, False, ["schedule", "reindex_origin_metadata",]) # Check the output expected_output = ( "Scheduled 3 tasks (30 origins).\n" "Scheduled 6 tasks (60 origins).\n" "Scheduled 9 tasks (90 origins).\n" "Done.\n" ) assert result.exit_code == 0, result.output assert result.output == expected_output # Check scheduled tasks tasks = indexer_scheduler.search_tasks() assert len(tasks) == 9 _assert_tasks_for_origins(tasks, range(90)) @patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3) @patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3) def test_origin_metadata_reindex_dry_run(indexer_scheduler, idx_storage, storage): """Tests the re-indexing when origin_batch_size*task_batch_size is a divisor of nb_origins.""" fill_idx_storage(idx_storage, 90) result = invoke( indexer_scheduler, False, ["schedule", "--dry-run", "reindex_origin_metadata",] ) # Check the output expected_output = ( "Scheduled 3 tasks (30 origins).\n" "Scheduled 6 tasks (60 origins).\n" "Scheduled 9 tasks (90 origins).\n" "Done.\n" ) assert result.exit_code == 0, result.output assert result.output == expected_output # Check scheduled tasks tasks = indexer_scheduler.search_tasks() assert len(tasks) == 0 @patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3) @patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3) def test_origin_metadata_reindex_nondivisor(indexer_scheduler, idx_storage, storage): """Tests the re-indexing when neither origin_batch_size or task_batch_size is a divisor of nb_origins.""" fill_idx_storage(idx_storage, 70) result = invoke( indexer_scheduler, False, ["schedule", "reindex_origin_metadata", "--batch-size", "20",], ) # Check the output 
expected_output = ( "Scheduled 3 tasks (60 origins).\n" "Scheduled 4 tasks (70 origins).\n" "Done.\n" ) assert result.exit_code == 0, result.output assert result.output == expected_output # Check scheduled tasks tasks = indexer_scheduler.search_tasks() assert len(tasks) == 4 _assert_tasks_for_origins(tasks, range(70)) @patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3) @patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3) def test_origin_metadata_reindex_filter_one_mapping( indexer_scheduler, idx_storage, storage ): """Tests the re-indexing when origin_batch_size*task_batch_size is a divisor of nb_origins.""" fill_idx_storage(idx_storage, 110) result = invoke( indexer_scheduler, False, ["schedule", "reindex_origin_metadata", "--mapping", "mapping1",], ) # Check the output expected_output = "Scheduled 2 tasks (11 origins).\nDone.\n" assert result.exit_code == 0, result.output assert result.output == expected_output # Check scheduled tasks tasks = indexer_scheduler.search_tasks() assert len(tasks) == 2 _assert_tasks_for_origins(tasks, [1, 11, 21, 31, 41, 51, 61, 71, 81, 91, 101]) @patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3) @patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3) def test_origin_metadata_reindex_filter_two_mappings( indexer_scheduler, idx_storage, storage ): """Tests the re-indexing when origin_batch_size*task_batch_size is a divisor of nb_origins.""" fill_idx_storage(idx_storage, 110) result = invoke( indexer_scheduler, False, [ "schedule", "reindex_origin_metadata", "--mapping", "mapping1", "--mapping", "mapping2", ], ) # Check the output expected_output = "Scheduled 3 tasks (22 origins).\nDone.\n" assert result.exit_code == 0, result.output assert result.output == expected_output # Check scheduled tasks tasks = indexer_scheduler.search_tasks() assert len(tasks) == 3 _assert_tasks_for_origins( tasks, [ 1, 11, 21, 31, 41, 51, 61, 71, 81, 91, 101, 2, 12, 22, 32, 42, 52, 62, 72, 82, 92, 102, ], ) @patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3) @patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3) def test_origin_metadata_reindex_filter_one_tool( indexer_scheduler, idx_storage, storage ): """Tests the re-indexing when origin_batch_size*task_batch_size is a divisor of nb_origins.""" tool_ids = fill_idx_storage(idx_storage, 110) result = invoke( indexer_scheduler, False, ["schedule", "reindex_origin_metadata", "--tool-id", str(tool_ids[0]),], ) # Check the output expected_output = ( "Scheduled 3 tasks (30 origins).\n" "Scheduled 6 tasks (55 origins).\n" "Done.\n" ) assert result.exit_code == 0, result.output assert result.output == expected_output # Check scheduled tasks tasks = indexer_scheduler.search_tasks() assert len(tasks) == 6 _assert_tasks_for_origins(tasks, [x * 2 for x in range(55)]) def test_journal_client( storage, indexer_scheduler, kafka_prefix: str, kafka_server, consumer: Consumer ): """Test the 'swh indexer journal-client' cli tool.""" producer = Producer( { "bootstrap.servers": kafka_server, "client.id": "test producer", "acks": "all", } ) STATUS = {"status": "full", "origin": {"url": "file://dev/0000",}} producer.produce( - topic=kafka_prefix + ".origin_visit", + topic=f"{kafka_prefix}.origin_visit_status", key=b"bogus", value=value_to_kafka(STATUS), ) result = invoke( indexer_scheduler, False, [ "journal-client", "--stop-after-objects", "1", "--broker", kafka_server, "--prefix", kafka_prefix, "--group-id", "test-consumer", ], ) # Check the output expected_output = "Done.\n" assert result.exit_code == 0, result.output assert result.output == 
expected_output # Check scheduled tasks tasks = indexer_scheduler.search_tasks() assert len(tasks) == 1 _assert_tasks_for_origins(tasks, [0]) + + +def test_journal_client_without_brokers( + storage, indexer_scheduler, kafka_prefix: str, kafka_server, consumer: Consumer +): + """Without brokers configuration, the cli fails.""" + + with pytest.raises(ValueError, match="brokers"): + invoke( + indexer_scheduler, False, ["journal-client",], + ) diff --git a/swh/indexer/tests/test_journal_client.py b/swh/indexer/tests/test_journal_client.py index 23f1a09..38e4386 100644 --- a/swh/indexer/tests/test_journal_client.py +++ b/swh/indexer/tests/test_journal_client.py @@ -1,153 +1,153 @@ -# Copyright (C) 2019 The Software Heritage developers +# Copyright (C) 2019-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import unittest from unittest.mock import Mock, patch from swh.indexer.journal_client import process_journal_objects class JournalClientTest(unittest.TestCase): - def testOneOriginVisit(self): + def test_one_origin_visit_status(self): mock_scheduler = Mock() messages = { - "origin_visit": [{"status": "full", "origin": "file:///dev/zero",},] + "origin_visit_status": [{"status": "full", "origin": "file:///dev/zero",},] } process_journal_objects( messages, scheduler=mock_scheduler, task_names={"origin_metadata": "task-name"}, ) self.assertTrue(mock_scheduler.create_tasks.called) call_args = mock_scheduler.create_tasks.call_args (args, kwargs) = call_args self.assertEqual(kwargs, {}) del args[0][0]["next_run"] self.assertEqual( args, ( [ { "arguments": {"kwargs": {}, "args": (["file:///dev/zero"],),}, "policy": "oneshot", "type": "task-name", "retries_left": 1, }, ], ), ) - def testOriginVisitLegacy(self): + def test_origin_visit_legacy(self): mock_scheduler = Mock() messages = { - "origin_visit": [ + "origin_visit_status": [ {"status": "full", "origin": {"url": "file:///dev/zero",}}, ] } process_journal_objects( messages, scheduler=mock_scheduler, task_names={"origin_metadata": "task-name"}, ) self.assertTrue(mock_scheduler.create_tasks.called) call_args = mock_scheduler.create_tasks.call_args (args, kwargs) = call_args self.assertEqual(kwargs, {}) del args[0][0]["next_run"] self.assertEqual( args, ( [ { "arguments": {"kwargs": {}, "args": (["file:///dev/zero"],),}, "policy": "oneshot", "type": "task-name", "retries_left": 1, }, ], ), ) - def testOneOriginVisitBatch(self): + def test_one_origin_visit_batch(self): mock_scheduler = Mock() messages = { - "origin_visit": [ + "origin_visit_status": [ {"status": "full", "origin": "file:///dev/zero",}, {"status": "full", "origin": "file:///tmp/foobar",}, ] } process_journal_objects( messages, scheduler=mock_scheduler, task_names={"origin_metadata": "task-name"}, ) self.assertTrue(mock_scheduler.create_tasks.called) call_args = mock_scheduler.create_tasks.call_args (args, kwargs) = call_args self.assertEqual(kwargs, {}) del args[0][0]["next_run"] self.assertEqual( args, ( [ { "arguments": { "kwargs": {}, "args": (["file:///dev/zero", "file:///tmp/foobar"],), }, "policy": "oneshot", "type": "task-name", "retries_left": 1, }, ], ), ) @patch("swh.indexer.journal_client.MAX_ORIGINS_PER_TASK", 2) - def testOriginVisitBatches(self): + def test_origin_visit_batches(self): mock_scheduler = Mock() messages = { - "origin_visit": [ + "origin_visit_status": [ {"status": "full", "origin": 
"file:///dev/zero",}, {"status": "full", "origin": "file:///tmp/foobar",}, {"status": "full", "origin": "file:///tmp/spamegg",}, ] } process_journal_objects( messages, scheduler=mock_scheduler, task_names={"origin_metadata": "task-name"}, ) self.assertTrue(mock_scheduler.create_tasks.called) call_args = mock_scheduler.create_tasks.call_args (args, kwargs) = call_args self.assertEqual(kwargs, {}) del args[0][0]["next_run"] del args[0][1]["next_run"] self.assertEqual( args, ( [ { "arguments": { "kwargs": {}, "args": (["file:///dev/zero", "file:///tmp/foobar"],), }, "policy": "oneshot", "type": "task-name", "retries_left": 1, }, { "arguments": { "kwargs": {}, "args": (["file:///tmp/spamegg"],), }, "policy": "oneshot", "type": "task-name", "retries_left": 1, }, ], ), ) diff --git a/swh/indexer/tests/utils.py b/swh/indexer/tests/utils.py index fdbc86f..c3e995a 100644 --- a/swh/indexer/tests/utils.py +++ b/swh/indexer/tests/utils.py @@ -1,719 +1,719 @@ # Copyright (C) 2017-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import abc import functools from typing import Any, Dict import unittest from hypothesis import strategies from swh.core.api.classes import stream_results from swh.indexer.storage import INDEXER_CFG_KEY from swh.model import hashutil from swh.model.hashutil import hash_to_bytes from swh.model.model import ( Content, Directory, DirectoryEntry, Origin, OriginVisit, OriginVisitStatus, Person, Revision, RevisionType, Snapshot, SnapshotBranch, TargetType, Timestamp, TimestampWithTimezone, ) from swh.storage.utils import now BASE_TEST_CONFIG: Dict[str, Dict[str, Any]] = { "storage": {"cls": "memory"}, - "objstorage": {"cls": "memory", "args": {},}, - INDEXER_CFG_KEY: {"cls": "memory", "args": {},}, + "objstorage": {"cls": "memory"}, + INDEXER_CFG_KEY: {"cls": "memory"}, } ORIGINS = [ Origin(url="https://github.com/SoftwareHeritage/swh-storage"), Origin(url="rsync://ftp.gnu.org/gnu/3dldf"), Origin(url="https://forge.softwareheritage.org/source/jesuisgpl/"), Origin(url="https://pypi.org/project/limnoria/"), Origin(url="http://0-512-md.googlecode.com/svn/"), Origin(url="https://github.com/librariesio/yarn-parser"), Origin(url="https://github.com/librariesio/yarn-parser.git"), ] ORIGIN_VISITS = [ {"type": "git", "origin": ORIGINS[0].url}, {"type": "ftp", "origin": ORIGINS[1].url}, {"type": "deposit", "origin": ORIGINS[2].url}, {"type": "pypi", "origin": ORIGINS[3].url}, {"type": "svn", "origin": ORIGINS[4].url}, {"type": "git", "origin": ORIGINS[5].url}, {"type": "git", "origin": ORIGINS[6].url}, ] DIRECTORY = Directory( id=hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"), entries=( DirectoryEntry( name=b"index.js", type="file", target=hash_to_bytes("01c9379dfc33803963d07c1ccc748d3fe4c96bb5"), perms=0o100644, ), DirectoryEntry( name=b"package.json", type="file", target=hash_to_bytes("26a9f72a7c87cc9205725cfd879f514ff4f3d8d5"), perms=0o100644, ), DirectoryEntry( name=b".github", type="dir", target=Directory(entries=()).id, perms=0o040000, ), ), ) DIRECTORY2 = Directory( id=b"\xf8zz\xa1\x12`<1$\xfav\xf9\x01\xfd5\x85F`\xf2\xb6", entries=( DirectoryEntry( name=b"package.json", type="file", target=hash_to_bytes("f5305243b3ce7ef8dc864ebc73794da304025beb"), perms=0o100644, ), ), ) REVISION = Revision( id=hash_to_bytes("c6201cb1b9b9df9a7542f9665c3b5dfab85e9775"), message=b"Improve search functionality", 
author=Person( name=b"Andrew Nesbitt", fullname=b"Andrew Nesbitt <andrewnez@gmail.com>", email=b"andrewnez@gmail.com", ), committer=Person( name=b"Andrew Nesbitt", fullname=b"Andrew Nesbitt <andrewnez@gmail.com>", email=b"andrewnez@gmail.com", ), committer_date=TimestampWithTimezone( timestamp=Timestamp(seconds=1380883849, microseconds=0,), offset=120, negative_utc=False, ), type=RevisionType.GIT, synthetic=False, date=TimestampWithTimezone( timestamp=Timestamp(seconds=1487596456, microseconds=0,), offset=0, negative_utc=False, ), directory=DIRECTORY2.id, parents=(), ) REVISIONS = [REVISION] SNAPSHOTS = [ Snapshot( id=hash_to_bytes("a50fde72265343b7d28cecf6db20d98a81d21965"), branches={ b"refs/heads/add-revision-origin-cache": SnapshotBranch( target=b'L[\xce\x1c\x88\x8eF\t\xf1"\x19\x1e\xfb\xc0s\xe7/\xe9l\x1e', target_type=TargetType.REVISION, ), b"refs/head/master": SnapshotBranch( target=b"8K\x12\x00d\x03\xcc\xe4]bS\xe3\x8f{\xd7}\xac\xefrm", target_type=TargetType.REVISION, ), b"HEAD": SnapshotBranch( target=b"refs/head/master", target_type=TargetType.ALIAS ), b"refs/tags/v0.0.103": SnapshotBranch( target=b'\xb6"Im{\xfdLb\xb0\x94N\xea\x96m\x13x\x88+\x0f\xdd', target_type=TargetType.RELEASE, ), }, ), Snapshot( id=hash_to_bytes("2c67f69a416bca4e1f3fcd848c588fab88ad0642"), branches={ b"3DLDF-1.1.4.tar.gz": SnapshotBranch( target=b'dJ\xfb\x1c\x91\xf4\x82B%]6\xa2\x90|\xd3\xfc"G\x99\x11', target_type=TargetType.REVISION, ), b"3DLDF-2.0.2.tar.gz": SnapshotBranch( target=b"\xb6\x0e\xe7\x9e9\xac\xaa\x19\x9e=\xd1\xc5\x00\\\xc6\xfc\xe0\xa6\xb4V", # noqa target_type=TargetType.REVISION, ), b"3DLDF-2.0.3-examples.tar.gz": SnapshotBranch( target=b"!H\x19\xc0\xee\x82-\x12F1\xbd\x97\xfe\xadZ\x80\x80\xc1\x83\xff", # noqa target_type=TargetType.REVISION, ), b"3DLDF-2.0.3.tar.gz": SnapshotBranch( target=b"\x8e\xa9\x8e/\xea}\x9feF\xf4\x9f\xfd\xee\xcc\x1a\xb4`\x8c\x8by", # noqa target_type=TargetType.REVISION, ), b"3DLDF-2.0.tar.gz": SnapshotBranch( target=b"F6*\xff(?\x19a\xef\xb6\xc2\x1fv$S\xe3G\xd3\xd1m", target_type=TargetType.REVISION, ), }, ), Snapshot( id=hash_to_bytes("68c0d26104d47e278dd6be07ed61fafb561d0d20"), branches={ b"master": SnapshotBranch( target=b"\xe7n\xa4\x9c\x9f\xfb\xb7\xf76\x11\x08{\xa6\xe9\x99\xb1\x9e]q\xeb", # noqa target_type=TargetType.REVISION, ) }, ), Snapshot( id=hash_to_bytes("f255245269e15fc99d284affd79f766668de0b67"), branches={ b"HEAD": SnapshotBranch( target=b"releases/2018.09.09", target_type=TargetType.ALIAS ), b"releases/2018.09.01": SnapshotBranch( target=b"<\xee1(\xe8\x8d_\xc1\xc9\xa6rT\xf1\x1d\xbb\xdfF\xfdw\xcf", target_type=TargetType.REVISION, ), b"releases/2018.09.09": SnapshotBranch( target=b"\x83\xb9\xb6\xc7\x05\xb1%\xd0\xfem\xd8kA\x10\x9d\xc5\xfa2\xf8t", # noqa target_type=TargetType.REVISION, ), }, ), Snapshot( id=hash_to_bytes("a1a28c0ab387a8f9e0618cb705eab81fc448f473"), branches={ b"master": SnapshotBranch( target=b"\xe4?r\xe1,\x88\xab\xec\xe7\x9a\x87\xb8\xc9\xad#.\x1bw=\x18", target_type=TargetType.REVISION, ) }, ), Snapshot( id=hash_to_bytes("bb4fd3a836930ce629d912864319637040ff3040"), branches={ b"HEAD": SnapshotBranch( target=REVISION.id, target_type=TargetType.REVISION, ) }, ), Snapshot( id=hash_to_bytes("bb4fd3a836930ce629d912864319637040ff3040"), branches={ b"HEAD": SnapshotBranch( target=REVISION.id, target_type=TargetType.REVISION, ) }, ), ] SHA1_TO_LICENSES = { "01c9379dfc33803963d07c1ccc748d3fe4c96bb5": ["GPL"], "02fb2c89e14f7fab46701478c83779c7beb7b069": ["Apache2.0"], "103bc087db1d26afc3a0283f38663d081e9b01e6": ["MIT"], "688a5ef812c53907562fe379d4b3851e69c7cb15": ["AGPL"],
"da39a3ee5e6b4b0d3255bfef95601890afd80709": [], } SHA1_TO_CTAGS = { "01c9379dfc33803963d07c1ccc748d3fe4c96bb5": [ {"name": "foo", "kind": "str", "line": 10, "lang": "bar",} ], "d4c647f0fc257591cc9ba1722484229780d1c607": [ {"name": "let", "kind": "int", "line": 100, "lang": "haskell",} ], "688a5ef812c53907562fe379d4b3851e69c7cb15": [ {"name": "symbol", "kind": "float", "line": 99, "lang": "python",} ], } OBJ_STORAGE_DATA = { "01c9379dfc33803963d07c1ccc748d3fe4c96bb5": b"this is some text", "688a5ef812c53907562fe379d4b3851e69c7cb15": b"another text", "8986af901dd2043044ce8f0d8fc039153641cf17": b"yet another text", "02fb2c89e14f7fab46701478c83779c7beb7b069": b""" import unittest import logging from swh.indexer.mimetype import MimetypeIndexer from swh.indexer.tests.test_utils import MockObjStorage class MockStorage(): def content_mimetype_add(self, mimetypes): self.state = mimetypes def indexer_configuration_add(self, tools): return [{ 'id': 10, }] """, "103bc087db1d26afc3a0283f38663d081e9b01e6": b""" #ifndef __AVL__ #define __AVL__ typedef struct _avl_tree avl_tree; typedef struct _data_t { int content; } data_t; """, "93666f74f1cf635c8c8ac118879da6ec5623c410": b""" (should 'pygments (recognize 'lisp 'easily)) """, "26a9f72a7c87cc9205725cfd879f514ff4f3d8d5": b""" { "name": "test_metadata", "version": "0.0.1", "description": "Simple package.json test for indexer", "repository": { "type": "git", "url": "https://github.com/moranegg/metadata_test" } } """, "d4c647f0fc257591cc9ba1722484229780d1c607": b""" { "version": "5.0.3", "name": "npm", "description": "a package manager for JavaScript", "keywords": [ "install", "modules", "package manager", "package.json" ], "preferGlobal": true, "config": { "publishtest": false }, "homepage": "https://docs.npmjs.com/", "author": "Isaac Z. 
Schlueter <i@izs.me> (http://blog.izs.me)", "repository": { "type": "git", "url": "https://github.com/npm/npm" }, "bugs": { "url": "https://github.com/npm/npm/issues" }, "dependencies": { "JSONStream": "~1.3.1", "abbrev": "~1.1.0", "ansi-regex": "~2.1.1", "ansicolors": "~0.3.2", "ansistyles": "~0.1.3" }, "devDependencies": { "tacks": "~1.2.6", "tap": "~10.3.2" }, "license": "Artistic-2.0" } """, "a7ab314d8a11d2c93e3dcf528ca294e7b431c449": b""" """, "da39a3ee5e6b4b0d3255bfef95601890afd80709": b"", # was 626364 / b'bcd' "e3e40fee6ff8a52f06c3b428bfe7c0ed2ef56e92": b"unimportant content for bcd", # was 636465 / b'cde' now yarn-parser package.json "f5305243b3ce7ef8dc864ebc73794da304025beb": b""" { "name": "yarn-parser", "version": "1.0.0", "description": "Tiny web service for parsing yarn.lock files", "main": "index.js", "scripts": { "start": "node index.js", "test": "mocha" }, "engines": { "node": "9.8.0" }, "repository": { "type": "git", "url": "git+https://github.com/librariesio/yarn-parser.git" }, "keywords": [ "yarn", "parse", "lock", "dependencies" ], "author": "Andrew Nesbitt", "license": "AGPL-3.0", "bugs": { "url": "https://github.com/librariesio/yarn-parser/issues" }, "homepage": "https://github.com/librariesio/yarn-parser#readme", "dependencies": { "@yarnpkg/lockfile": "^1.0.0", "body-parser": "^1.15.2", "express": "^4.14.0" }, "devDependencies": { "chai": "^4.1.2", "mocha": "^5.2.0", "request": "^2.87.0", "test": "^0.6.0" } } """, } YARN_PARSER_METADATA = { "@context": "https://doi.org/10.5063/schema/codemeta-2.0", "url": "https://github.com/librariesio/yarn-parser#readme", "codeRepository": "git+git+https://github.com/librariesio/yarn-parser.git", "author": [{"type": "Person", "name": "Andrew Nesbitt"}], "license": "https://spdx.org/licenses/AGPL-3.0", "version": "1.0.0", "description": "Tiny web service for parsing yarn.lock files", "issueTracker": "https://github.com/librariesio/yarn-parser/issues", "name": "yarn-parser", "keywords": ["yarn", "parse", "lock", "dependencies"], "type": "SoftwareSourceCode", } json_dict_keys = strategies.one_of( strategies.characters(), strategies.just("type"), strategies.just("url"), strategies.just("name"), strategies.just("email"), strategies.just("@id"), strategies.just("@context"), strategies.just("repository"), strategies.just("license"), strategies.just("repositories"), strategies.just("licenses"), ) """Hypothesis strategy that generates strings, with an emphasis on those that are often used as dictionary keys in metadata files.""" generic_json_document = strategies.recursive( strategies.none() | strategies.booleans() | strategies.floats() | strategies.characters(), lambda children: ( strategies.lists(children, min_size=1) | strategies.dictionaries(json_dict_keys, children, min_size=1) ), ) """Hypothesis strategy that generates possible values for values of JSON metadata files.""" def json_document_strategy(keys=None): """Generates a hypothesis strategy that generates metadata files for a JSON-based format that uses the given keys.""" if keys is None: keys = strategies.characters() else: keys = strategies.one_of(map(strategies.just, keys)) return strategies.dictionaries(keys, generic_json_document, min_size=1) def _tree_to_xml(root, xmlns, data): def encode(s): "Skips unpaired surrogates generated by json_document_strategy" return s.encode("utf8", "replace") def to_xml(data, indent=b" "): if data is None: return b"" elif isinstance(data, (bool, str, int, float)): return indent + encode(str(data)) elif isinstance(data, list): return b"\n".join(to_xml(v,


def _tree_to_xml(root, xmlns, data):
    def encode(s):
        "Skips unpaired surrogates generated by json_document_strategy"
        return s.encode("utf8", "replace")

    def to_xml(data, indent=b"  "):
        if data is None:
            return b""
        elif isinstance(data, (bool, str, int, float)):
            return indent + encode(str(data))
        elif isinstance(data, list):
            return b"\n".join(to_xml(v, indent=indent) for v in data)
        elif isinstance(data, dict):
            lines = []
            for (key, value) in data.items():
                lines.append(indent + encode("<{}>".format(key)))
                lines.append(to_xml(value, indent=indent + b"  "))
                lines.append(indent + encode("</{}>".format(key)))
            return b"\n".join(lines)
        else:
            raise TypeError(data)

    return b"\n".join(
        [
            '<{} xmlns="{}">'.format(root, xmlns).encode(),
            to_xml(data),
            "</{}>".format(root).encode(),
        ]
    )


class TreeToXmlTest(unittest.TestCase):
    def test_leaves(self):
        self.assertEqual(
            _tree_to_xml("root", "http://example.com", None),
            b'<root xmlns="http://example.com">\n\n</root>',
        )
        self.assertEqual(
            _tree_to_xml("root", "http://example.com", True),
            b'<root xmlns="http://example.com">\n  True\n</root>',
        )
        self.assertEqual(
            _tree_to_xml("root", "http://example.com", "abc"),
            b'<root xmlns="http://example.com">\n  abc\n</root>',
        )
        self.assertEqual(
            _tree_to_xml("root", "http://example.com", 42),
            b'<root xmlns="http://example.com">\n  42\n</root>',
        )
        self.assertEqual(
            _tree_to_xml("root", "http://example.com", 3.14),
            b'<root xmlns="http://example.com">\n  3.14\n</root>',
        )

    def test_dict(self):
        self.assertIn(
            _tree_to_xml("root", "http://example.com", {"foo": "bar", "baz": "qux"}),
            [
                b'<root xmlns="http://example.com">\n'
                b"  <foo>\n    bar\n  </foo>\n"
                b"  <baz>\n    qux\n  </baz>\n"
                b"</root>",
                b'<root xmlns="http://example.com">\n'
                b"  <baz>\n    qux\n  </baz>\n"
                b"  <foo>\n    bar\n  </foo>\n"
                b"</root>",
            ],
        )

    def test_list(self):
        self.assertEqual(
            _tree_to_xml(
                "root", "http://example.com", [{"foo": "bar"}, {"foo": "baz"}]
            ),
            b'<root xmlns="http://example.com">\n'
            b"  <foo>\n    bar\n  </foo>\n"
            b"  <foo>\n    baz\n  </foo>\n"
            b"</root>",
        )


def xml_document_strategy(keys, root, xmlns):
    """Generates an hypothesis strategy that generates metadata files
    for an XML format that uses the given keys."""
    return strategies.builds(
        functools.partial(_tree_to_xml, root, xmlns), json_document_strategy(keys)
    )


def filter_dict(d, keys):
    "return a copy of the dict with keys deleted"
    if not isinstance(keys, (list, tuple)):
        keys = (keys,)
    return dict((k, v) for (k, v) in d.items() if k not in keys)


def fill_obj_storage(obj_storage):
    """Add some content in an object storage."""
    for (obj_id, content) in OBJ_STORAGE_DATA.items():
        obj_storage.add(content, obj_id=hash_to_bytes(obj_id))


def fill_storage(storage):
    storage.origin_add(ORIGINS)
    storage.directory_add([DIRECTORY, DIRECTORY2])
    storage.revision_add(REVISIONS)
    storage.snapshot_add(SNAPSHOTS)

    for visit, snapshot in zip(ORIGIN_VISITS, SNAPSHOTS):
        assert snapshot.id is not None
        visit = storage.origin_visit_add(
            [OriginVisit(origin=visit["origin"], date=now(), type=visit["type"])]
        )[0]
        visit_status = OriginVisitStatus(
            origin=visit.origin,
            visit=visit.visit,
            date=now(),
            status="full",
            snapshot=snapshot.id,
        )
        storage.origin_visit_status_add([visit_status])

    contents = []
    for (obj_id, content) in OBJ_STORAGE_DATA.items():
        content_hashes = hashutil.MultiHash.from_data(content).digest()
        contents.append(
            Content(
                data=content,
                length=len(content),
                status="visible",
                sha1=hash_to_bytes(obj_id),
                sha1_git=hash_to_bytes(obj_id),
                sha256=content_hashes["sha256"],
                blake2s256=content_hashes["blake2s256"],
            )
        )
    storage.content_add(contents)
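

# Worked example (added for exposition; the root name, namespace and payload
# are arbitrary values): the wire format produced by the _tree_to_xml helper
# above for a one-key document.
def _example_tree_to_xml():
    result = _tree_to_xml("entry", "http://example.com", {"title": "hello"})
    assert result == (
        b'<entry xmlns="http://example.com">\n'
        b"  <title>\n    hello\n  </title>\n"
        b"</entry>"
    )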
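

# Usage sketch (assumptions flagged inline): wiring the fillers above to
# in-memory backends, as a test fixture typically would.
def _example_fill_backends():
    # get_storage is swh.storage's factory; "memory" is a built-in backend.
    from swh.storage import get_storage

    # Assumption: this import path matches recent swh.objstorage releases;
    # older versions exposed get_objstorage from swh.objstorage directly.
    from swh.objstorage.factory import get_objstorage

    storage = get_storage("memory")
    fill_storage(storage)

    objstorage = get_objstorage("memory")
    fill_obj_storage(objstorage)
    return storage, objstorage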


class CommonContentIndexerTest(metaclass=abc.ABCMeta):
    def get_indexer_results(self, ids):
        """Override this for indexers that don't have a mock storage."""
        return self.indexer.idx_storage.state

    def assert_results_ok(self, sha1s, expected_results=None):
        sha1s = [
            sha1 if isinstance(sha1, bytes) else hash_to_bytes(sha1) for sha1 in sha1s
        ]
        actual_results = list(self.get_indexer_results(sha1s))

        if expected_results is None:
            expected_results = self.expected_results

        self.assertEqual(expected_results, actual_results)

    def test_index(self):
        """Known sha1 have their data indexed"""
        sha1s = [self.id0, self.id1, self.id2]

        # when
        self.indexer.run(sha1s)

        self.assert_results_ok(sha1s)

        # 2nd pass
        self.indexer.run(sha1s)

        self.assert_results_ok(sha1s)

    def test_index_one_unknown_sha1(self):
        """Unknown sha1 are not indexed"""
        sha1s = [
            self.id1,
            "799a5ef812c53907562fe379d4b3851e69c7cb15",  # unknown
            "800a5ef812c53907562fe379d4b3851e69c7cb15",  # unknown
        ]

        # when
        self.indexer.run(sha1s)

        # then
        expected_results = [
            res
            for res in self.expected_results
            if hashutil.hash_to_hex(res.id) in sha1s
        ]

        self.assert_results_ok(sha1s, expected_results)


class CommonContentIndexerPartitionTest:
    """Factorizes the tests shared by the partition (range) indexers."""

    def setUp(self):
        self.contents = sorted(OBJ_STORAGE_DATA)

    def assert_results_ok(self, partition_id, nb_partitions, actual_results):
        expected_ids = [
            c.sha1
            for c in stream_results(
                self.indexer.storage.content_get_partition,
                partition_id=partition_id,
                nb_partitions=nb_partitions,
            )
        ]

        actual_results = list(actual_results)
        for indexed_data in actual_results:
            _id = indexed_data.id
            assert _id in expected_ids

            _tool_id = indexed_data.indexer_configuration_id
            assert _tool_id == self.indexer.tool["id"]

    def test__index_contents(self):
        """Indexing contents without existing data results in indexed data"""
        partition_id = 0
        nb_partitions = 4

        actual_results = list(
            self.indexer._index_contents(partition_id, nb_partitions, indexed={})
        )

        self.assert_results_ok(partition_id, nb_partitions, actual_results)

    def test__index_contents_with_indexed_data(self):
        """Indexing contents with existing data results in less indexed data"""
        partition_id = 3
        nb_partitions = 4

        # first pass
        actual_results = list(
            self.indexer._index_contents(partition_id, nb_partitions, indexed={}),
        )

        self.assert_results_ok(partition_id, nb_partitions, actual_results)

        indexed_ids = {res.id for res in actual_results}

        actual_results = list(
            self.indexer._index_contents(
                partition_id, nb_partitions, indexed=indexed_ids
            )
        )

        # already indexed, so nothing new
        assert actual_results == []

    def test_generate_content_get(self):
        """Optimal indexing should result in indexed data"""
        partition_id = 0
        nb_partitions = 1

        actual_results = self.indexer.run(
            partition_id, nb_partitions, skip_existing=False
        )

        assert actual_results["status"] == "eventful", actual_results

    def test_generate_content_get_no_result(self):
        """When nothing is indexed, the run reports an uneventful status"""
        actual_results = self.indexer.run(1, 2 ** 512, incremental=False)

        assert actual_results == {"status": "uneventful"}
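

# Sketch of how a concrete test case builds on CommonContentIndexerTest
# (placeholders only -- the indexer instantiation and expected rows depend on
# the indexer under test; the ids reuse hashes from OBJ_STORAGE_DATA above):
#
# class MyContentIndexerTest(CommonContentIndexerTest, unittest.TestCase):
#     def setUp(self):
#         self.indexer = ...  # a configured content indexer instance
#         fill_storage(self.indexer.storage)
#         fill_obj_storage(self.indexer.objstorage)
#         self.id0 = "da39a3ee5e6b4b0d3255bfef95601890afd80709"
#         self.id1 = "e3e40fee6ff8a52f06c3b428bfe7c0ed2ef56e92"
#         self.id2 = "f5305243b3ce7ef8dc864ebc73794da304025beb"
#         self.expected_results = [...]  # rows expected for those three ids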