diff --git a/PKG-INFO b/PKG-INFO
index 06fbd34..bbc9a86 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,71 +1,71 @@
Metadata-Version: 2.1
Name: swh.indexer
-Version: 0.1.1
+Version: 0.2.0
Summary: Software Heritage Content Indexer
Home-page: https://forge.softwareheritage.org/diffusion/78/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-indexer/
Description: swh-indexer
============
Tools to compute multiple indexes on SWH's raw contents:
- content:
- mimetype
- ctags
- language
- fossology-license
- metadata
- revision:
- metadata
An indexer is in charge of:
- looking up objects
- extracting information from those objects
            - storing that information in the swh-indexer db
There are multiple indexers working on different object types:
- content indexer: works with content sha1 hashes
- revision indexer: works with revision sha1 hashes
- origin indexer: works with origin identifiers
Indexation procedure:
        - receive a batch of ids
        - retrieve the associated data depending on object type
        - compute some index for each object
        - store the results in swh's storage
Current content indexers:
- mimetype (queue swh_indexer_content_mimetype): detect the encoding
and mimetype
- language (queue swh_indexer_content_language): detect the
programming language
- ctags (queue swh_indexer_content_ctags): compute tags information
- fossology-license (queue swh_indexer_fossology_license): compute the
license
        - metadata: translates a file into a translated_metadata dict
Current revision indexers:
        - metadata: detects files containing metadata and retrieves translated_metadata
          from the content_metadata table in storage, or runs the content indexer
          to translate files.
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
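The indexation procedure described above (receive a batch of ids, retrieve each object's data, compute an index, store the results) can be sketched as a plain loop. The helper names below are hypothetical; the real base classes live in swh.indexer.indexer:

```python
from typing import Callable, Dict, List, Optional


def index_batch(
    ids: List[bytes],
    lookup: Callable[[bytes], Optional[bytes]],
    compute: Callable[[bytes, bytes], Optional[Dict]],
    store: Callable[[List[Dict]], None],
) -> int:
    """Receive a batch of ids, retrieve each object's data, compute an
    index for it, and persist the results. Returns the number of results."""
    results = []
    for id_ in ids:
        data = lookup(id_)  # retrieve the associated data for the object
        if data is None:    # e.g. content missing from the objstorage
            continue
        res = compute(id_, data)  # compute some index for that object
        if res:
            results.append(res)
    store(results)  # store the results in swh's storage
    return len(results)
```

Objects whose data cannot be found are skipped rather than failing the whole batch, mirroring how the concrete indexers below handle `ObjNotFoundError`.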
diff --git a/conftest.py b/conftest.py
index de31662..d3cc5fd 100644
--- a/conftest.py
+++ b/conftest.py
@@ -1,19 +1,28 @@
# Copyright (C) 2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
+import pytest
+
from hypothesis import settings
# define tests profile. Full documentation is at:
# https://hypothesis.readthedocs.io/en/latest/settings.html#settings-profiles
settings.register_profile("fast", max_examples=5, deadline=5000)
settings.register_profile("slow", max_examples=20, deadline=5000)
# Ignore the following modules because wsgi module fails as no
# configuration file is found (--doctest-modules forces the module
# loading)
collect_ignore = ["swh/indexer/storage/api/wsgi.py"]
# we use the swh_scheduler fixture
pytest_plugins = ["swh.scheduler.pytest_plugin"]
+
+
+@pytest.fixture(scope="session")
+def swh_scheduler_celery_includes(swh_scheduler_celery_includes):
+ return swh_scheduler_celery_includes + [
+ "swh.indexer.tasks",
+ ]
diff --git a/requirements-swh.txt b/requirements-swh.txt
index 0363717..39e073c 100644
--- a/requirements-swh.txt
+++ b/requirements-swh.txt
@@ -1,6 +1,6 @@
-swh.core[db,http] >= 0.0.87
+swh.core[db,http] >= 0.2.2
swh.model >= 0.0.15
swh.objstorage >= 0.0.43
-swh.scheduler >= 0.0.47
-swh.storage >= 0.8.0
+swh.scheduler >= 0.5.2
+swh.storage >= 0.12.0
swh.journal >= 0.1.0
diff --git a/requirements-test.txt b/requirements-test.txt
index ac0c1f0..c1f90dc 100644
--- a/requirements-test.txt
+++ b/requirements-test.txt
@@ -1,5 +1,6 @@
confluent-kafka
pytest
+pytest-mock
hypothesis>=3.11.0
swh.scheduler[testing] >= 0.5.0
swh.storage[testing] >= 0.10.0
diff --git a/swh.indexer.egg-info/PKG-INFO b/swh.indexer.egg-info/PKG-INFO
index 06fbd34..bbc9a86 100644
--- a/swh.indexer.egg-info/PKG-INFO
+++ b/swh.indexer.egg-info/PKG-INFO
@@ -1,71 +1,71 @@
Metadata-Version: 2.1
Name: swh.indexer
-Version: 0.1.1
+Version: 0.2.0
Summary: Software Heritage Content Indexer
Home-page: https://forge.softwareheritage.org/diffusion/78/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-indexer/
Description: swh-indexer
============
Tools to compute multiple indexes on SWH's raw contents:
- content:
- mimetype
- ctags
- language
- fossology-license
- metadata
- revision:
- metadata
An indexer is in charge of:
- looking up objects
- extracting information from those objects
            - storing that information in the swh-indexer db
There are multiple indexers working on different object types:
- content indexer: works with content sha1 hashes
- revision indexer: works with revision sha1 hashes
- origin indexer: works with origin identifiers
Indexation procedure:
        - receive a batch of ids
        - retrieve the associated data depending on object type
        - compute some index for each object
        - store the results in swh's storage
Current content indexers:
- mimetype (queue swh_indexer_content_mimetype): detect the encoding
and mimetype
- language (queue swh_indexer_content_language): detect the
programming language
- ctags (queue swh_indexer_content_ctags): compute tags information
- fossology-license (queue swh_indexer_fossology_license): compute the
license
        - metadata: translates a file into a translated_metadata dict
Current revision indexers:
        - metadata: detects files containing metadata and retrieves translated_metadata
          from the content_metadata table in storage, or runs the content indexer
          to translate files.
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
diff --git a/swh.indexer.egg-info/SOURCES.txt b/swh.indexer.egg-info/SOURCES.txt
index 1dc3047..1440ceb 100644
--- a/swh.indexer.egg-info/SOURCES.txt
+++ b/swh.indexer.egg-info/SOURCES.txt
@@ -1,133 +1,134 @@
.gitignore
.pre-commit-config.yaml
AUTHORS
CODE_OF_CONDUCT.md
CONTRIBUTORS
LICENSE
MANIFEST.in
Makefile
Makefile.local
README.md
codemeta.json
conftest.py
mypy.ini
pyproject.toml
pytest.ini
requirements-swh.txt
requirements-test.txt
requirements.txt
setup.cfg
setup.py
tox.ini
docs/.gitignore
docs/Makefile
docs/Makefile.local
docs/README.md
docs/conf.py
docs/dev-info.rst
docs/index.rst
docs/metadata-workflow.rst
docs/_static/.placeholder
docs/_templates/.placeholder
docs/images/.gitignore
docs/images/Makefile
docs/images/tasks-metadata-indexers.uml
sql/bin/db-upgrade
sql/bin/dot_add_content
sql/doc/json
sql/doc/json/.gitignore
sql/doc/json/Makefile
sql/doc/json/indexer_configuration.tool_configuration.schema.json
sql/doc/json/revision_metadata.translated_metadata.json
sql/json/.gitignore
sql/json/Makefile
sql/json/indexer_configuration.tool_configuration.schema.json
sql/json/revision_metadata.translated_metadata.json
sql/upgrades/115.sql
sql/upgrades/116.sql
sql/upgrades/117.sql
sql/upgrades/118.sql
sql/upgrades/119.sql
sql/upgrades/120.sql
sql/upgrades/121.sql
sql/upgrades/122.sql
sql/upgrades/123.sql
sql/upgrades/124.sql
sql/upgrades/125.sql
sql/upgrades/126.sql
sql/upgrades/127.sql
sql/upgrades/128.sql
sql/upgrades/129.sql
sql/upgrades/130.sql
sql/upgrades/131.sql
sql/upgrades/132.sql
swh/__init__.py
swh.indexer.egg-info/PKG-INFO
swh.indexer.egg-info/SOURCES.txt
swh.indexer.egg-info/dependency_links.txt
swh.indexer.egg-info/entry_points.txt
swh.indexer.egg-info/requires.txt
swh.indexer.egg-info/top_level.txt
swh/indexer/__init__.py
swh/indexer/cli.py
swh/indexer/codemeta.py
swh/indexer/ctags.py
swh/indexer/fossology_license.py
swh/indexer/indexer.py
swh/indexer/journal_client.py
swh/indexer/metadata.py
swh/indexer/metadata_detector.py
swh/indexer/mimetype.py
swh/indexer/origin_head.py
swh/indexer/py.typed
swh/indexer/rehash.py
swh/indexer/tasks.py
swh/indexer/data/codemeta/CITATION
swh/indexer/data/codemeta/LICENSE
swh/indexer/data/codemeta/codemeta.jsonld
swh/indexer/data/codemeta/crosswalk.csv
swh/indexer/metadata_dictionary/__init__.py
swh/indexer/metadata_dictionary/base.py
swh/indexer/metadata_dictionary/codemeta.py
swh/indexer/metadata_dictionary/maven.py
swh/indexer/metadata_dictionary/npm.py
swh/indexer/metadata_dictionary/python.py
swh/indexer/metadata_dictionary/ruby.py
swh/indexer/sql/10-swh-init.sql
swh/indexer/sql/20-swh-enums.sql
swh/indexer/sql/30-swh-schema.sql
swh/indexer/sql/40-swh-func.sql
swh/indexer/sql/50-swh-data.sql
swh/indexer/sql/60-swh-indexes.sql
swh/indexer/storage/__init__.py
swh/indexer/storage/converters.py
swh/indexer/storage/db.py
swh/indexer/storage/exc.py
swh/indexer/storage/in_memory.py
swh/indexer/storage/interface.py
swh/indexer/storage/metrics.py
swh/indexer/storage/api/__init__.py
swh/indexer/storage/api/client.py
swh/indexer/storage/api/server.py
swh/indexer/tests/__init__.py
swh/indexer/tests/conftest.py
swh/indexer/tests/tasks.py
swh/indexer/tests/test_cli.py
swh/indexer/tests/test_codemeta.py
swh/indexer/tests/test_ctags.py
swh/indexer/tests/test_fossology_license.py
swh/indexer/tests/test_journal_client.py
swh/indexer/tests/test_metadata.py
swh/indexer/tests/test_mimetype.py
swh/indexer/tests/test_origin_head.py
swh/indexer/tests/test_origin_metadata.py
+swh/indexer/tests/test_tasks.py
swh/indexer/tests/utils.py
swh/indexer/tests/storage/__init__.py
swh/indexer/tests/storage/conftest.py
swh/indexer/tests/storage/generate_data_test.py
swh/indexer/tests/storage/test_api_client.py
swh/indexer/tests/storage/test_converters.py
swh/indexer/tests/storage/test_in_memory.py
swh/indexer/tests/storage/test_metrics.py
swh/indexer/tests/storage/test_server.py
swh/indexer/tests/storage/test_storage.py
\ No newline at end of file
diff --git a/swh.indexer.egg-info/requires.txt b/swh.indexer.egg-info/requires.txt
index 69ab181..4d17096 100644
--- a/swh.indexer.egg-info/requires.txt
+++ b/swh.indexer.egg-info/requires.txt
@@ -1,18 +1,19 @@
vcversioner
click
python-magic>=0.4.13
pyld
xmltodict
-swh.core[db,http]>=0.0.87
+swh.core[db,http]>=0.2.2
swh.model>=0.0.15
swh.objstorage>=0.0.43
-swh.scheduler>=0.0.47
-swh.storage>=0.8.0
+swh.scheduler>=0.5.2
+swh.storage>=0.12.0
swh.journal>=0.1.0
[testing]
confluent-kafka
pytest
+pytest-mock
hypothesis>=3.11.0
swh.scheduler[testing]>=0.5.0
swh.storage[testing]>=0.10.0
diff --git a/swh/indexer/fossology_license.py b/swh/indexer/fossology_license.py
index 1b4cffa..4fc15f1 100644
--- a/swh/indexer/fossology_license.py
+++ b/swh/indexer/fossology_license.py
@@ -1,185 +1,188 @@
# Copyright (C) 2016-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import logging
import subprocess
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, Union
from swh.model import hashutil
-from .indexer import ContentIndexer, ContentRangeIndexer, write_to_temp
+from .indexer import ContentIndexer, ContentPartitionIndexer, write_to_temp
+from swh.indexer.storage.interface import PagedResult, Sha1
logger = logging.getLogger(__name__)
def compute_license(path):
"""Determine license from file at path.
Args:
path: filepath to determine the license
Returns:
dict: A dict with the following keys:
- licenses ([str]): associated detected licenses to path
- path (bytes): content filepath
"""
try:
properties = subprocess.check_output(["nomossa", path], universal_newlines=True)
if properties:
res = properties.rstrip().split(" contains license(s) ")
licenses = res[1].split(",")
else:
licenses = []
return {
"licenses": licenses,
"path": path,
}
except subprocess.CalledProcessError:
from os import path as __path
logger.exception(
"Problem during license detection for sha1 %s" % __path.basename(path)
)
return {
"licenses": [],
"path": path,
}
class MixinFossologyLicenseIndexer:
"""Mixin fossology license indexer.
See :class:`FossologyLicenseIndexer` and
- :class:`FossologyLicenseRangeIndexer`
+ :class:`FossologyLicensePartitionIndexer`
"""
ADDITIONAL_CONFIG = {
"workdir": ("str", "/tmp/swh/indexer.fossology.license"),
"tools": (
"dict",
{
"name": "nomos",
"version": "3.1.0rc2-31-ga2cbb8c",
"configuration": {"command_line": "nomossa <filepath>",},
},
),
"write_batch_size": ("int", 1000),
}
CONFIG_BASE_FILENAME = "indexer/fossology_license" # type: Optional[str]
tool: Any
idx_storage: Any
def prepare(self):
super().prepare()
self.working_directory = self.config["workdir"]
def index(
- self, id: bytes, data: Optional[bytes] = None, **kwargs
+ self, id: Union[bytes, Dict], data: Optional[bytes] = None, **kwargs
) -> Dict[str, Any]:
"""Index sha1s' content and store result.
Args:
id (bytes): content's identifier
raw_content (bytes): associated raw content to content id
Returns:
dict: A dict, representing a content_license, with keys:
- id (bytes): content's identifier (sha1)
- license (bytes): license in bytes
- path (bytes): path
- indexer_configuration_id (int): tool used to compute the output
"""
assert isinstance(id, bytes)
assert data is not None
with write_to_temp(
filename=hashutil.hash_to_hex(id), # use the id as pathname
data=data,
working_directory=self.working_directory,
) as content_path:
properties = compute_license(path=content_path)
properties.update(
{"id": id, "indexer_configuration_id": self.tool["id"],}
)
return properties
def persist_index_computations(
self, results: List[Dict], policy_update: str
) -> Dict[str, int]:
"""Persist the results in storage.
Args:
results: list of content_license dict with the
following keys:
- id (bytes): content's identifier (sha1)
- license (bytes): license in bytes
- path (bytes): path
policy_update: either 'update-dups' or 'ignore-dups' to
respectively update duplicates or ignore them
"""
return self.idx_storage.content_fossology_license_add(
results, conflict_update=(policy_update == "update-dups")
)
class FossologyLicenseIndexer(MixinFossologyLicenseIndexer, ContentIndexer):
"""Indexer in charge of:
- filtering out content already indexed
- reading content from objstorage per the content's id (sha1)
- computing {license, encoding} from that content
- store result in storage
"""
def filter(self, ids):
"""Filter out known sha1s and return only missing ones.
"""
yield from self.idx_storage.content_fossology_license_missing(
({"id": sha1, "indexer_configuration_id": self.tool["id"],} for sha1 in ids)
)
-class FossologyLicenseRangeIndexer(MixinFossologyLicenseIndexer, ContentRangeIndexer):
- """FossologyLicense Range Indexer working on range of content identifiers.
+class FossologyLicensePartitionIndexer(
+ MixinFossologyLicenseIndexer, ContentPartitionIndexer
+):
+ """FossologyLicense Range Indexer working on range/partition of content identifiers.
- filters out the non textual content
- (optionally) filters out content already indexed (cf
- :meth:`.indexed_contents_in_range`)
+ :meth:`.indexed_contents_in_partition`)
- reads content from objstorage per the content's id (sha1)
- computes {license, encoding} from that content
- stores result in storage
"""
- def indexed_contents_in_range(self, start, end):
- """Retrieve indexed content id within range [start, end].
+ def indexed_contents_in_partition(
+ self, partition_id: int, nb_partitions: int, page_token: Optional[str] = None
+ ) -> PagedResult[Sha1]:
+ """Retrieve indexed content id within the partition id
Args:
- start (bytes): Starting bound from range identifier
- end (bytes): End range identifier
+ partition_id: Index of the partition to fetch
+ nb_partitions: Total number of partitions to split into
+ page_token: opaque token used for pagination
Returns:
- dict: a dict with keys:
-
- - **ids** [bytes]: iterable of content ids within the range.
- - **next** (Optional[bytes]): The next range of sha1 starts at
- this sha1 if any
+ PagedResult of Sha1. If next_page_token is None, there is no more data
+ to fetch
"""
- return self.idx_storage.content_fossology_license_get_range(
- start, end, self.tool["id"]
+ return self.idx_storage.content_fossology_license_get_partition(
+ self.tool["id"], partition_id, nb_partitions, page_token=page_token
)
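The `compute_license` helper shells out to nomossa and parses its one-line report. That parsing step can be isolated as a self-contained sketch; the sample output string is an assumption inferred from the `" contains license(s) "` split marker in the code above:

```python
from typing import List


def parse_nomossa_output(stdout: str) -> List[str]:
    """Split nomossa's one-line report into a list of license names."""
    stdout = stdout.rstrip()
    if not stdout:
        return []  # no output means no detected licenses
    # nomossa reports e.g. "File foo.c contains license(s) GPL-3.0,MIT"
    _, detected = stdout.split(" contains license(s) ")
    return detected.split(",")
```

Note that `compute_license` additionally catches `subprocess.CalledProcessError` and falls back to an empty license list, so a nomossa failure never aborts the batch.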
diff --git a/swh/indexer/indexer.py b/swh/indexer/indexer.py
index 7e332a7..168639e 100644
--- a/swh/indexer/indexer.py
+++ b/swh/indexer/indexer.py
@@ -1,621 +1,633 @@
# Copyright (C) 2016-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import abc
import os
import logging
import shutil
import tempfile
from contextlib import contextmanager
from typing import Any, Dict, Iterator, List, Optional, Set, Tuple, Union
from swh.scheduler import CONFIG as SWH_CONFIG
from swh.storage import get_storage
from swh.core.config import SWHConfig
from swh.objstorage import get_objstorage
from swh.objstorage.exc import ObjNotFoundError
-from swh.indexer.storage import get_indexer_storage, INDEXER_CFG_KEY
+from swh.indexer.storage import get_indexer_storage, INDEXER_CFG_KEY, PagedResult, Sha1
from swh.model import hashutil
from swh.core import utils
@contextmanager
def write_to_temp(filename: str, data: bytes, working_directory: str) -> Iterator[str]:
"""Write the sha1's content in a temporary file.
Args:
filename: one of sha1's many filenames
data: the sha1's content to write in temporary
file
working_directory: the directory into which the
file is written
Returns:
The path to the temporary file created. That file is
filled in with the raw content's data.
"""
os.makedirs(working_directory, exist_ok=True)
temp_dir = tempfile.mkdtemp(dir=working_directory)
content_path = os.path.join(temp_dir, filename)
with open(content_path, "wb") as f:
f.write(data)
yield content_path
shutil.rmtree(temp_dir)
class BaseIndexer(SWHConfig, metaclass=abc.ABCMeta):
"""Base class for indexers to inherit from.
The main entry point is the :func:`run` function which is in
charge of triggering the computations on the batch dict/ids
received.
Indexers can:
- filter out ids whose data has already been indexed.
- retrieve ids data from storage or objstorage
- index this data depending on the object and store the result in
storage.
To implement a new object type indexer, inherit from the
BaseIndexer and implement indexing:
:meth:`~BaseIndexer.run`:
object_ids are different depending on object. For example: sha1 for
content, sha1_git for revision, directory, release, and id for origin
To implement a new concrete indexer, inherit from the object level
classes: :class:`ContentIndexer`, :class:`RevisionIndexer`,
:class:`OriginIndexer`.
Then you need to implement the following functions:
:meth:`~BaseIndexer.filter`:
filter out data already indexed (in storage).
:meth:`~BaseIndexer.index_object`:
compute index on id with data (retrieved from the storage or the
objstorage by the id key) and return the resulting index computation.
:meth:`~BaseIndexer.persist_index_computations`:
persist the results of multiple index computations in the storage.
The new indexer implementation can also override the following functions:
:meth:`~BaseIndexer.prepare`:
Configuration preparation for the indexer. When overriding, this must
call the `super().prepare()` instruction.
:meth:`~BaseIndexer.check`:
Configuration check for the indexer. When overriding, this must call the
`super().check()` instruction.
:meth:`~BaseIndexer.register_tools`:
This should return a dict of the tool(s) to use when indexing or
filtering.
"""
results: List[Dict]
CONFIG = "indexer/base"
DEFAULT_CONFIG = {
INDEXER_CFG_KEY: (
"dict",
{"cls": "remote", "args": {"url": "http://localhost:5007/"}},
),
"storage": (
"dict",
{"cls": "remote", "args": {"url": "http://localhost:5002/",}},
),
"objstorage": (
"dict",
{"cls": "remote", "args": {"url": "http://localhost:5003/",}},
),
}
ADDITIONAL_CONFIG = {} # type: Dict[str, Tuple[str, Any]]
USE_TOOLS = True
catch_exceptions = True
"""Prevents exceptions in `index()` from raising too high. Set to False
in tests to properly catch all exceptions."""
scheduler: Any
def __init__(self, config=None, **kw) -> None:
"""Prepare and check that the indexer is ready to run.
"""
super().__init__()
if config is not None:
self.config = config
elif SWH_CONFIG:
self.config = SWH_CONFIG.copy()
else:
config_keys = (
"base_filename",
"config_filename",
"additional_configs",
"global_config",
)
config_args = {k: v for k, v in kw.items() if k in config_keys}
if self.ADDITIONAL_CONFIG:
config_args.setdefault("additional_configs", []).append(
self.ADDITIONAL_CONFIG
)
self.config = self.parse_config_file(**config_args)
self.prepare()
self.check()
self.log.debug("%s: config=%s", self, self.config)
def prepare(self) -> None:
"""Prepare the indexer's needed runtime configuration.
Without this step, the indexer cannot possibly run.
"""
config_storage = self.config.get("storage")
if config_storage:
self.storage = get_storage(**config_storage)
objstorage = self.config["objstorage"]
self.objstorage = get_objstorage(objstorage["cls"], objstorage["args"])
idx_storage = self.config[INDEXER_CFG_KEY]
self.idx_storage = get_indexer_storage(**idx_storage)
_log = logging.getLogger("requests.packages.urllib3.connectionpool")
_log.setLevel(logging.WARN)
self.log = logging.getLogger("swh.indexer")
if self.USE_TOOLS:
self.tools = list(self.register_tools(self.config.get("tools", [])))
self.results = []
@property
def tool(self) -> Dict:
return self.tools[0]
def check(self) -> None:
"""Check the indexer's configuration is ok before proceeding.
If ok, does nothing. If not raise error.
"""
if self.USE_TOOLS and not self.tools:
raise ValueError("Tools %s is unknown, cannot continue" % self.tools)
def _prepare_tool(self, tool: Dict[str, Any]) -> Dict[str, Any]:
"""Prepare the tool dict to be compliant with the storage api.
"""
return {"tool_%s" % key: value for key, value in tool.items()}
def register_tools(
self, tools: Union[Dict[str, Any], List[Dict[str, Any]]]
) -> List[Dict[str, Any]]:
"""Permit to register tools to the storage.
Add a sensible default which can be overridden if not
sufficient. (For now, all indexers use only one tool)
Expects the self.config['tools'] property to be set with
one or more tools.
Args:
tools: Either a dict or a list of dict.
Returns:
list: List of dicts with additional id key.
Raises:
ValueError: if not a list nor a dict.
"""
if isinstance(tools, list):
tools = list(map(self._prepare_tool, tools))
elif isinstance(tools, dict):
tools = [self._prepare_tool(tools)]
else:
raise ValueError("Configuration tool(s) must be a dict or list!")
if tools:
return self.idx_storage.indexer_configuration_add(tools)
else:
return []
def index(
- self, id: bytes, data: Optional[bytes] = None, **kwargs
+ self, id: Union[bytes, Dict], data: Optional[bytes] = None, **kwargs
) -> Dict[str, Any]:
"""Index computation for the id and associated raw data.
Args:
- id: identifier
+ id: identifier or Dict object
data: id's data from storage or objstorage depending on
object type
Returns:
dict: a dict that makes sense for the
:meth:`.persist_index_computations` method.
"""
raise NotImplementedError()
def filter(self, ids: List[bytes]) -> Iterator[bytes]:
"""Filter missing ids for that particular indexer.
Args:
ids: list of ids
Yields:
iterator of missing ids
"""
yield from ids
@abc.abstractmethod
def persist_index_computations(self, results, policy_update) -> Dict[str, int]:
"""Persist the computation resulting from the index.
Args:
results ([result]): List of results. One result is the
result of the index function.
policy_update ([str]): either 'update-dups' or 'ignore-dups' to
respectively update duplicates or ignore them
Returns:
a summary dict of what has been inserted in the storage
"""
return {}
class ContentIndexer(BaseIndexer):
"""A content indexer working on a list of ids directly.
- To work on indexer range, use the :class:`ContentRangeIndexer`
+ To work on indexer partition, use the :class:`ContentPartitionIndexer`
instead.
Note: :class:`ContentIndexer` is not an instantiable object. To
use it, one should inherit from this class and override the
methods mentioned in the :class:`BaseIndexer` class.
"""
def run(
self, ids: Union[List[bytes], bytes, str], policy_update: str, **kwargs
) -> Dict:
"""Given a list of ids:
- retrieve the content from the storage
- execute the indexing computations
- store the results (according to policy_update)
Args:
ids (Iterable[Union[bytes, str]]): sha1's identifier list
policy_update (str): either 'update-dups' or 'ignore-dups' to
respectively update duplicates or ignore
them
**kwargs: passed to the `index` method
Returns:
A summary Dict of the task's status
"""
status = "uneventful"
sha1s = [
hashutil.hash_to_bytes(id_) if isinstance(id_, str) else id_ for id_ in ids
]
results = []
summary: Dict = {}
try:
for sha1 in sha1s:
try:
raw_content = self.objstorage.get(sha1)
except ObjNotFoundError:
self.log.warning(
"Content %s not found in objstorage"
% hashutil.hash_to_hex(sha1)
)
continue
res = self.index(sha1, raw_content, **kwargs)
if res: # If no results, skip it
results.append(res)
status = "eventful"
summary = self.persist_index_computations(results, policy_update)
self.results = results
except Exception:
if not self.catch_exceptions:
raise
self.log.exception("Problem when reading contents metadata.")
status = "failed"
finally:
summary["status"] = status
return summary
-class ContentRangeIndexer(BaseIndexer):
- """A content range indexer.
+class ContentPartitionIndexer(BaseIndexer):
+ """A content partition indexer.
- This expects as input a range of ids to index.
+ This expects as input a partition_id and a nb_partitions. This will then index the
+ contents within that partition.
To work on a list of ids, use the :class:`ContentIndexer` instead.
- Note: :class:`ContentRangeIndexer` is not an instantiable
+ Note: :class:`ContentPartitionIndexer` is not an instantiable
object. To use it, one should inherit from this class and override
the methods mentioned in the :class:`BaseIndexer` class.
"""
@abc.abstractmethod
- def indexed_contents_in_range(self, start: bytes, end: bytes) -> Any:
+ def indexed_contents_in_partition(
+ self, partition_id: int, nb_partitions: int, page_token: Optional[str] = None
+ ) -> PagedResult[Sha1]:
"""Retrieve indexed contents within range [start, end].
Args:
- start: Starting bound from range identifier
- end: End range identifier
+ partition_id: Index of the partition to fetch
+ nb_partitions: Total number of partitions to split into
+ page_token: opaque token used for pagination
- Yields:
- bytes: Content identifier present in the range ``[start, end]``
+ Returns:
+ PagedResult of Sha1. If next_page_token is None, there is no more data
+ to fetch
"""
pass
def _list_contents_to_index(
- self, start: bytes, end: bytes, indexed: Set[bytes]
- ) -> Iterator[bytes]:
- """Compute from storage the new contents to index in the range [start,
- end]. The already indexed contents are skipped.
+ self, partition_id: int, nb_partitions: int, indexed: Set[Sha1]
+ ) -> Iterator[Sha1]:
+ """Compute from storage the new contents to index in the partition_id . The already
+ indexed contents are skipped.
Args:
- start: Starting bound from range identifier
- end: End range identifier
+ partition_id: Index of the partition to fetch data from
+ nb_partitions: Total number of partition
indexed: Set of content already indexed.
Yields:
- bytes: Identifier of contents to index.
+ Sha1 id (bytes) of contents to index
"""
- if not isinstance(start, bytes) or not isinstance(end, bytes):
- raise TypeError("identifiers must be bytes, not %r and %r." % (start, end))
- while start:
- result = self.storage.content_get_range(start, end)
- contents = result["contents"]
+ if not isinstance(partition_id, int) or not isinstance(nb_partitions, int):
+ raise TypeError(
+ f"identifiers must be int, not {partition_id!r} and {nb_partitions!r}."
+ )
+ next_page_token = None
+ while True:
+ result = self.storage.content_get_partition(
+ partition_id, nb_partitions, page_token=next_page_token
+ )
+ contents = result.results
for c in contents:
- _id = hashutil.hash_to_bytes(c["sha1"])
+ _id = hashutil.hash_to_bytes(c.sha1)
if _id in indexed:
continue
yield _id
- start = result["next"]
+ next_page_token = result.next_page_token
+ if next_page_token is None:
+ break
def _index_contents(
- self, start: bytes, end: bytes, indexed: Set[bytes], **kwargs: Any
+ self, partition_id: int, nb_partitions: int, indexed: Set[Sha1], **kwargs: Any
) -> Iterator[Dict]:
- """Index the contents from within range [start, end]
+ """Index the contents within the partition_id.
Args:
            partition_id: Index of the partition to fetch
            nb_partitions: Total number of partitions to split into
indexed: Set of content already indexed.
Yields:
- dict: Data indexed to persist using the indexer storage
+ indexing result as dict to persist in the indexer backend
"""
- for sha1 in self._list_contents_to_index(start, end, indexed):
+ for sha1 in self._list_contents_to_index(partition_id, nb_partitions, indexed):
try:
raw_content = self.objstorage.get(sha1)
except ObjNotFoundError:
- self.log.warning(
- "Content %s not found in objstorage" % hashutil.hash_to_hex(sha1)
- )
+ self.log.warning(f"Content {sha1.hex()} not found in objstorage")
continue
res = self.index(sha1, raw_content, **kwargs)
if res:
if not isinstance(res["id"], bytes):
raise TypeError(
"%r.index should return ids as bytes, not %r"
% (self.__class__.__name__, res["id"])
)
yield res
def _index_with_skipping_already_done(
- self, start: bytes, end: bytes
+ self, partition_id: int, nb_partitions: int
) -> Iterator[Dict]:
- """Index not already indexed contents in range [start, end].
+ """Index not already indexed contents within the partition partition_id
Args:
- start: Starting range identifier
- end: Ending range identifier
+ partition_id: Index of the partition to fetch
+ nb_partitions: Total number of partitions to split into
Yields:
- dict: Content identifier present in the range
- ``[start, end]`` which are not already indexed.
+ indexing result as dict to persist in the indexer backend
"""
- while start:
- indexed_page = self.indexed_contents_in_range(start, end)
- contents = indexed_page["ids"]
- _end = contents[-1] if contents else end
- yield from self._index_contents(start, _end, contents)
- start = indexed_page["next"]
+ next_page_token = None
+ contents = set()
+ while True:
+ indexed_page = self.indexed_contents_in_partition(
+ partition_id, nb_partitions, page_token=next_page_token
+ )
+ for sha1 in indexed_page.results:
+ contents.add(sha1)
+ yield from self._index_contents(partition_id, nb_partitions, contents)
+ next_page_token = indexed_page.next_page_token
+ if next_page_token is None:
+ break
def run(
self,
- start: Union[bytes, str],
- end: Union[bytes, str],
+ partition_id: int,
+ nb_partitions: int,
skip_existing: bool = True,
- **kwargs
+ **kwargs,
) -> Dict:
- """Given a range of content ids, compute the indexing computations on
- the contents within. Either the indexer is incremental
- (filter out existing computed data) or not (compute
- everything from scratch).
+ """Given a partition of content ids, index the contents within.
+
+ Either the indexer is incremental (filter out existing computed data) or it
+ computes everything from scratch.
Args:
- start: Starting range identifier
- end: Ending range identifier
+ partition_id: Index of the partition to fetch
+ nb_partitions: Total number of partitions to split into
skip_existing: Skip existing indexed data
- (default) or not
+ (default) or not
**kwargs: passed to the `index` method
Returns:
- A dict with the task's status
+ dict with the indexing task status
"""
status = "uneventful"
summary: Dict[str, Any] = {}
count = 0
try:
- range_start = (
- hashutil.hash_to_bytes(start) if isinstance(start, str) else start
- )
- range_end = hashutil.hash_to_bytes(end) if isinstance(end, str) else end
-
if skip_existing:
- gen = self._index_with_skipping_already_done(range_start, range_end)
+ gen = self._index_with_skipping_already_done(
+ partition_id, nb_partitions
+ )
else:
- gen = self._index_contents(range_start, range_end, indexed=set([]))
+ gen = self._index_contents(partition_id, nb_partitions, indexed=set([]))
count_object_added_key: Optional[str] = None
for contents in utils.grouper(gen, n=self.config["write_batch_size"]):
res = self.persist_index_computations(
contents, policy_update="update-dups"
)
if not count_object_added_key:
count_object_added_key = list(res.keys())[0]
count += res[count_object_added_key]
if count > 0:
status = "eventful"
except Exception:
if not self.catch_exceptions:
raise
self.log.exception("Problem when computing metadata.")
status = "failed"
finally:
summary["status"] = status
if count > 0 and count_object_added_key:
summary[count_object_added_key] = count
return summary
class OriginIndexer(BaseIndexer):
"""An object type indexer, inherits from the :class:`BaseIndexer` and
implements Origin indexing using the run method
Note: the :class:`OriginIndexer` is not an instantiable object.
To use it in another context one should inherit from this class
and override the methods mentioned in the :class:`BaseIndexer`
class.
"""
def run(
self, origin_urls: List[str], policy_update: str = "update-dups", **kwargs
) -> Dict:
"""Given a list of origin urls:
- retrieve origins from storage
- execute the indexing computations
- store the results (according to policy_update)
Args:
origin_urls: list of origin urls.
policy_update: either 'update-dups' or 'ignore-dups' to
respectively update duplicates (default) or ignore them
**kwargs: passed to the `index` method
"""
summary: Dict[str, Any] = {}
status = "uneventful"
results = self.index_list(origin_urls, **kwargs)
summary_persist = self.persist_index_computations(results, policy_update)
self.results = results
if summary_persist:
for value in summary_persist.values():
if value > 0:
status = "eventful"
summary.update(summary_persist)
summary["status"] = status
return summary
def index_list(self, origins: List[Any], **kwargs: Any) -> List[Dict]:
results = []
for origin in origins:
try:
res = self.index(origin, **kwargs)
if res: # If no results, skip it
results.append(res)
except Exception:
if not self.catch_exceptions:
raise
self.log.exception("Problem when processing origin %s", origin["id"])
return results
class RevisionIndexer(BaseIndexer):
"""An object type indexer, inherits from the :class:`BaseIndexer` and
implements Revision indexing using the run method
Note: the :class:`RevisionIndexer` is not an instantiable object.
To use it in another context one should inherit from this class
and override the methods mentioned in the :class:`BaseIndexer`
class.
"""
def run(self, ids: Union[str, bytes], policy_update: str) -> Dict:
"""Given a list of sha1_gits:
- retrieve revisions from storage
- execute the indexing computations
- store the results (according to policy_update)
Args:
ids: sha1_git's identifier list
policy_update: either 'update-dups' or 'ignore-dups' to
respectively update duplicates or ignore them
"""
summary: Dict[str, Any] = {}
status = "uneventful"
results = []
- revs = self.storage.revision_get(
- hashutil.hash_to_bytes(id_) if isinstance(id_, str) else id_ for id_ in ids
- )
- for rev in revs:
+ revision_ids = [
+ hashutil.hash_to_bytes(id_) if isinstance(id_, str) else id_ for id_ in ids
+ ]
+ for rev in self.storage.revision_get(revision_ids):
if not rev:
self.log.warning(
"Revisions %s not found in storage"
% list(map(hashutil.hash_to_hex, ids))
)
continue
try:
res = self.index(rev)
if res: # If no results, skip it
results.append(res)
except Exception:
if not self.catch_exceptions:
raise
self.log.exception("Problem when processing revision")
status = "failed"
summary_persist = self.persist_index_computations(results, policy_update)
if summary_persist:
for value in summary_persist.values():
if value > 0:
status = "eventful"
summary.update(summary_persist)
self.results = results
summary["status"] = status
return summary
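The hunk above replaces `[start, end]` sha1 ranges with a `(partition_id, nb_partitions)` pair. A minimal sketch of how a partition index could map to sha1 bounds, assuming an even split of the 160-bit identifier space (this mirrors what `swh.storage.utils.get_partition_bounds_bytes` provides; the helper name here is hypothetical):

```python
SHA1_SIZE = 20  # sha1 digests are 20 bytes

def partition_bounds(partition_id: int, nb_partitions: int, size: int = SHA1_SIZE):
    # Split the [0, 2**(8*size)) identifier space into nb_partitions
    # contiguous ranges; returns (start, end) as big-endian bytes,
    # with end=None for the last partition ("up to the maximum").
    space = 2 ** (8 * size)
    start = (space * partition_id) // nb_partitions
    end = (space * (partition_id + 1)) // nb_partitions
    start_bytes = start.to_bytes(size, "big")
    end_bytes = None if partition_id == nb_partitions - 1 else end.to_bytes(size, "big")
    return start_bytes, end_bytes
```

With 4 partitions, partition 0 covers `00…00` up to (but excluding) `40…00`, and the last partition has an open upper bound.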
diff --git a/swh/indexer/mimetype.py b/swh/indexer/mimetype.py
index 384e5bc..890dcd8 100644
--- a/swh/indexer/mimetype.py
+++ b/swh/indexer/mimetype.py
@@ -1,157 +1,161 @@
# Copyright (C) 2016-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
-from typing import Optional, Dict, Any, List
import magic
-from .indexer import ContentIndexer, ContentRangeIndexer
+from typing import Any, Optional, Dict, List, Union
+
+from swh.indexer.storage.interface import PagedResult, Sha1
+
+from .indexer import ContentIndexer, ContentPartitionIndexer
if not hasattr(magic.Magic, "from_buffer"):
raise ImportError(
'Expected "import magic" to import python-magic, but file_magic '
"was imported instead."
)
def compute_mimetype_encoding(raw_content: bytes) -> Dict[str, bytes]:
"""Determine mimetype and encoding from the raw content.
Args:
raw_content: content's raw data
Returns:
dict: mimetype and encoding key and corresponding values.
"""
m = magic.Magic(mime=True, mime_encoding=True)
res = m.from_buffer(raw_content)
try:
mimetype, encoding = res.split("; charset=")
except ValueError:
mimetype, encoding = res, ""
return {
"mimetype": mimetype,
"encoding": encoding,
}
class MixinMimetypeIndexer:
"""Mixin mimetype indexer.
- See :class:`MimetypeIndexer` and :class:`MimetypeRangeIndexer`
+ See :class:`MimetypeIndexer` and :class:`MimetypePartitionIndexer`
"""
tool: Any
idx_storage: Any
ADDITIONAL_CONFIG = {
"tools": (
"dict",
{
"name": "file",
"version": "1:5.30-1+deb9u1",
"configuration": {"type": "library", "debian-package": "python3-magic"},
},
),
"write_batch_size": ("int", 1000),
}
CONFIG_BASE_FILENAME = "indexer/mimetype" # type: Optional[str]
def index(
- self, id: bytes, data: Optional[bytes] = None, **kwargs
+ self, id: Union[bytes, Dict], data: Optional[bytes] = None, **kwargs
) -> Dict[str, Any]:
"""Index sha1s' content and store result.
Args:
id: content's identifier
data: raw content in bytes
Returns:
dict: content's mimetype; dict keys being
- id: content's identifier (sha1)
- mimetype: mimetype in bytes
- encoding: encoding in bytes
"""
assert data is not None
properties = compute_mimetype_encoding(data)
+ assert isinstance(id, bytes)
properties.update(
{"id": id, "indexer_configuration_id": self.tool["id"],}
)
return properties
def persist_index_computations(
self, results: List[Dict], policy_update: str
) -> Dict[str, int]:
"""Persist the results in storage.
Args:
results: list of content's mimetype dicts
(see :meth:`.index`)
policy_update: either 'update-dups' or 'ignore-dups' to
respectively update duplicates or ignore them
"""
return self.idx_storage.content_mimetype_add(
results, conflict_update=(policy_update == "update-dups")
)
class MimetypeIndexer(MixinMimetypeIndexer, ContentIndexer):
"""Mimetype Indexer working on list of content identifiers.
It:
- (optionally) filters out content already indexed (cf.
:meth:`.filter`)
- reads content from objstorage per the content's id (sha1)
- computes {mimetype, encoding} from that content
- stores result in storage
"""
def filter(self, ids):
"""Filter out known sha1s and return only missing ones.
"""
yield from self.idx_storage.content_mimetype_missing(
({"id": sha1, "indexer_configuration_id": self.tool["id"],} for sha1 in ids)
)
-class MimetypeRangeIndexer(MixinMimetypeIndexer, ContentRangeIndexer):
+class MimetypePartitionIndexer(MixinMimetypeIndexer, ContentPartitionIndexer):
"""Mimetype Range Indexer working on range of content identifiers.
It:
- (optionally) filters out content already indexed (cf
- :meth:`.indexed_contents_in_range`)
+ :meth:`.indexed_contents_in_partition`)
- reads content from objstorage per the content's id (sha1)
- computes {mimetype, encoding} from that content
- stores result in storage
"""
- def indexed_contents_in_range(
- self, start: bytes, end: bytes
- ) -> Dict[str, Optional[bytes]]:
- """Retrieve indexed content id within range [start, end].
+ def indexed_contents_in_partition(
+ self, partition_id: int, nb_partitions: int, page_token: Optional[str] = None,
+ ) -> PagedResult[Sha1]:
+ """Retrieve indexed content ids within partition_id.
Args:
- start: Starting bound from range identifier
- end: End range identifier
+ partition_id: Index of the partition to fetch
+ nb_partitions: Total number of partitions to split into
+ page_token: opaque token used for pagination
Returns:
- dict: a dict with keys:
-
- - ids: iterable of content ids within the range.
- - next: The next range of sha1 starts at
- this sha1 if any
+ PagedResult of Sha1. If next_page_token is None, there is no more data
+ to fetch
"""
- return self.idx_storage.content_mimetype_get_range(start, end, self.tool["id"])
+ return self.idx_storage.content_mimetype_get_partition(
+ self.tool["id"], partition_id, nb_partitions, page_token=page_token
+ )
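`compute_mimetype_encoding` above relies on libmagic returning a string like `"text/plain; charset=us-ascii"`. The split logic can be sketched without python-magic installed (pure stdlib; the output format is taken from the function above):

```python
from typing import Dict

def split_magic_result(res: str) -> Dict[str, str]:
    # libmagic with mime + mime_encoding yields "mimetype; charset=encoding";
    # fall back to an empty encoding when the charset part is missing.
    try:
        mimetype, encoding = res.split("; charset=")
    except ValueError:
        mimetype, encoding = res, ""
    return {"mimetype": mimetype, "encoding": encoding}
```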
diff --git a/swh/indexer/rehash.py b/swh/indexer/rehash.py
index 2593d67..06d907a 100644
--- a/swh/indexer/rehash.py
+++ b/swh/indexer/rehash.py
@@ -1,189 +1,192 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import logging
import itertools
from collections import defaultdict
from typing import Dict, Any, Tuple, List, Generator
from swh.core import utils
from swh.core.config import SWHConfig
from swh.model import hashutil
from swh.objstorage import get_objstorage
from swh.objstorage.exc import ObjNotFoundError
from swh.storage import get_storage
class RecomputeChecksums(SWHConfig):
"""Class in charge of (re)computing content's hashes.
Hashes to compute are defined across 2 configuration options:
compute_checksums ([str])
list of hash algorithms that
py:func:`swh.model.hashutil.MultiHash.from_data` function should
be able to deal with. For variable-length checksums, a desired
checksum length should also be provided. Their format is
<algorithm's name>:<variable-length> e.g: blake2:512
recompute_checksums (bool)
a boolean to notify that we also want to recompute potential existing
hashes specified in compute_checksums. Default to False.
"""
DEFAULT_CONFIG = {
# The storage to read from or update metadata to
"storage": (
"dict",
{"cls": "remote", "args": {"url": "http://localhost:5002/"},},
),
# The objstorage to read contents' data from
"objstorage": (
"dict",
{
"cls": "pathslicing",
"args": {
"root": "/srv/softwareheritage/objects",
"slicing": "0:2/2:4/4:6",
},
},
),
# the set of checksums that should be computed.
# Examples: 'sha1_git', 'blake2b512', 'blake2s256'
"compute_checksums": ("list[str]", []),
# whether checksums that already exist in the DB should be
# recomputed/updated or left untouched
"recompute_checksums": ("bool", False),
# Number of contents to retrieve blobs at the same time
"batch_size_retrieve_content": ("int", 10),
# Number of contents to update at the same time
"batch_size_update": ("int", 100),
}
CONFIG_BASE_FILENAME = "indexer/rehash"
def __init__(self) -> None:
self.config = self.parse_config_file()
self.storage = get_storage(**self.config["storage"])
self.objstorage = get_objstorage(**self.config["objstorage"])
self.compute_checksums = self.config["compute_checksums"]
self.recompute_checksums = self.config["recompute_checksums"]
self.batch_size_retrieve_content = self.config["batch_size_retrieve_content"]
self.batch_size_update = self.config["batch_size_update"]
self.log = logging.getLogger("swh.indexer.rehash")
if not self.compute_checksums:
raise ValueError("Checksums list should not be empty.")
def _read_content_ids(
self, contents: List[Dict[str, Any]]
) -> Generator[bytes, Any, None]:
"""Read the content identifiers from the contents.
"""
for c in contents:
h = c["sha1"]
if isinstance(h, str):
h = hashutil.hash_to_bytes(h)
yield h
def get_new_contents_metadata(
self, all_contents: List[Dict[str, Any]]
) -> Generator[Tuple[Dict[str, Any], List[Any]], Any, None]:
"""Retrieve raw contents and compute new checksums on the
contents. Unknown or corrupted contents are skipped.
Args:
all_contents: List of contents as dictionary with
the necessary primary keys
Yields:
tuple: tuple of (content to update, list of checksums computed)
"""
content_ids = self._read_content_ids(all_contents)
for contents in utils.grouper(content_ids, self.batch_size_retrieve_content):
contents_iter = itertools.tee(contents, 2)
try:
- content_metadata = self.storage.content_get_metadata(
+ content_metadata: Dict[
+ bytes, List[Dict]
+ ] = self.storage.content_get_metadata( # noqa
[s for s in contents_iter[0]]
)
except Exception:
self.log.exception("Problem when reading contents metadata.")
continue
- for content in content_metadata:
+ for sha1, content_dicts in content_metadata.items():
+ if not content_dicts:
+ continue
+ content: Dict = content_dicts[0]
# Recompute checksums provided in compute_checksums options
if self.recompute_checksums:
checksums_to_compute = list(self.compute_checksums)
else:
# Compute checksums provided in compute_checksums
# options not already defined for that content
checksums_to_compute = [
h for h in self.compute_checksums if not content.get(h)
]
if not checksums_to_compute: # Nothing to recompute
continue
try:
- raw_content = self.objstorage.get(content["sha1"])
+ raw_content = self.objstorage.get(sha1)
except ObjNotFoundError:
- self.log.warning(
- "Content %s not found in objstorage!" % content["sha1"]
- )
+ self.log.warning("Content %s not found in objstorage!", sha1)
continue
content_hashes = hashutil.MultiHash.from_data(
raw_content, hash_names=checksums_to_compute
).digest()
content.update(content_hashes)
yield content, checksums_to_compute
def run(self, contents: List[Dict[str, Any]]) -> Dict:
"""Given a list of content:
- (re)compute a given set of checksums on contents available in our
object storage
- update those contents with the new metadata
Args:
contents: contents as dictionary with necessary keys.
key present in such dictionary should be the ones defined in
the 'primary_key' option.
Returns:
A summary dict with key 'status', task' status and 'count' the
number of updated contents.
"""
status = "uneventful"
count = 0
for data in utils.grouper(
self.get_new_contents_metadata(contents), self.batch_size_update
):
groups: Dict[str, List[Any]] = defaultdict(list)
for content, keys_to_update in data:
- keys = ",".join(keys_to_update)
- groups[keys].append(content)
+ keys_str = ",".join(keys_to_update)
+ groups[keys_str].append(content)
for keys_to_update, contents in groups.items():
- keys = keys_to_update.split(",")
+ keys: List[str] = keys_to_update.split(",")
try:
self.storage.content_update(contents, keys=keys)
count += len(contents)
status = "eventful"
except Exception:
self.log.exception("Problem during update.")
continue
return {
"status": status,
"count": count,
}
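The `compute_checksums` option described in `RecomputeChecksums` uses `<algorithm>:<length>` specs for variable-length hashes (e.g. `blake2:512`). A stdlib-only sketch of that parsing, assuming the length is in bits; this is illustrative and not the `swh.model.hashutil.MultiHash` implementation:

```python
import hashlib

def compute_checksum(spec: str, data: bytes) -> bytes:
    # "<name>:<bits>" for variable-length algorithms, e.g. "blake2:512";
    # bare names ("sha256") go straight to hashlib. swh-specific names
    # like "sha1_git" need swh.model and are not handled here.
    if ":" in spec:
        name, bits = spec.split(":")
        if name != "blake2":
            raise ValueError(f"unsupported variable-length algorithm: {name}")
        return hashlib.blake2b(data, digest_size=int(bits) // 8).digest()
    return hashlib.new(spec, data).digest()
```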
diff --git a/swh/indexer/storage/__init__.py b/swh/indexer/storage/__init__.py
index 9cad65d..c023d5d 100644
--- a/swh/indexer/storage/__init__.py
+++ b/swh/indexer/storage/__init__.py
@@ -1,599 +1,649 @@
# Copyright (C) 2015-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import json
import psycopg2
import psycopg2.pool
from collections import defaultdict, Counter
-from typing import Dict, List
+from typing import Dict, List, Optional
+from swh.model.hashutil import hash_to_bytes, hash_to_hex
+from swh.model.model import SHA1_SIZE
from swh.storage.common import db_transaction_generator, db_transaction
from swh.storage.exc import StorageDBError
+from swh.storage.utils import get_partition_bounds_bytes
+
+from .interface import PagedResult, Sha1
from . import converters
from .db import Db
from .exc import IndexerStorageArgumentException, DuplicateId
from .metrics import process_metrics, send_metric, timed
INDEXER_CFG_KEY = "indexer_storage"
MAPPING_NAMES = ["codemeta", "gemspec", "maven", "npm", "pkg-info"]
def get_indexer_storage(cls, args):
"""Get an indexer storage object of class `storage_class` with
arguments `storage_args`.
Args:
cls (str): storage's class, either 'local' or 'remote'
args (dict): dictionary of arguments passed to the
storage class constructor
Returns:
an instance of swh.indexer's storage (either local or remote)
Raises:
ValueError if passed an unknown storage class.
"""
if cls == "remote":
from .api.client import RemoteStorage as IndexerStorage
elif cls == "local":
from . import IndexerStorage
elif cls == "memory":
from .in_memory import IndexerStorage
else:
raise ValueError("Unknown indexer storage class `%s`" % cls)
return IndexerStorage(**args)
def check_id_duplicates(data):
"""
If any two dictionaries in `data` have the same id, raises
a `ValueError`.
Values associated to the key must be hashable.
Args:
data (List[dict]): List of dictionaries to be inserted
>>> check_id_duplicates([
... {'id': 'foo', 'data': 'spam'},
... {'id': 'bar', 'data': 'egg'},
... ])
>>> check_id_duplicates([
... {'id': 'foo', 'data': 'spam'},
... {'id': 'foo', 'data': 'egg'},
... ])
Traceback (most recent call last):
...
swh.indexer.storage.exc.DuplicateId: ['foo']
"""
counter = Counter(item["id"] for item in data)
duplicates = [id_ for (id_, count) in counter.items() if count >= 2]
if duplicates:
raise DuplicateId(duplicates)
class IndexerStorage:
"""SWH Indexer Storage
"""
def __init__(self, db, min_pool_conns=1, max_pool_conns=10):
"""
Args:
db_conn: either a libpq connection string, or a psycopg2 connection
"""
try:
if isinstance(db, psycopg2.extensions.connection):
self._pool = None
self._db = Db(db)
else:
self._pool = psycopg2.pool.ThreadedConnectionPool(
min_pool_conns, max_pool_conns, db
)
self._db = None
except psycopg2.OperationalError as e:
raise StorageDBError(e)
def get_db(self):
if self._db:
return self._db
return Db.from_pool(self._pool)
def put_db(self, db):
if db is not self._db:
db.put_conn()
@timed
@db_transaction()
def check_config(self, *, check_write, db=None, cur=None):
# Check permissions on one of the tables
if check_write:
check = "INSERT"
else:
check = "SELECT"
cur.execute(
"select has_table_privilege(current_user, 'content_mimetype', %s)", # noqa
(check,),
)
return cur.fetchone()[0]
@timed
@db_transaction_generator()
def content_mimetype_missing(self, mimetypes, db=None, cur=None):
for obj in db.content_mimetype_missing_from_list(mimetypes, cur):
yield obj[0]
- def _content_get_range(
+ @timed
+ @db_transaction()
+ def get_partition(
self,
- content_type,
- start,
- end,
- indexer_configuration_id,
- limit=1000,
+ indexer_type: str,
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
with_textual_data=False,
db=None,
cur=None,
- ):
+ ) -> PagedResult[Sha1]:
+ """Retrieve ids of content with `indexer_type` within partition partition_id,
+ bounded by limit.
+
+ Args:
+ **indexer_type**: Type of data content to index (mimetype, language, etc.)
+ **indexer_configuration_id**: The tool used to index data
+ **partition_id**: index of the partition to fetch
+ **nb_partitions**: total number of partitions to split into
+ **page_token**: opaque token used for pagination
+ **limit**: Limit result (default to 1000)
+ **with_textual_data** (bool): Deal with only textual content (True) or all
+ content (all contents by default, False)
+
+ Raises:
+ IndexerStorageArgumentException for:
+ - limit set to None
+ - wrong indexer_type provided
+
+ Returns:
+ PagedResult of Sha1. If next_page_token is None, there is no more data to
+ fetch
+
+ """
if limit is None:
raise IndexerStorageArgumentException("limit should not be None")
- if content_type not in db.content_indexer_names:
- err = "Wrong type. Should be one of [%s]" % (
- ",".join(db.content_indexer_names)
- )
+ if indexer_type not in db.content_indexer_names:
+ err = f"Wrong type. Should be one of [{','.join(db.content_indexer_names)}]"
raise IndexerStorageArgumentException(err)
- ids = []
- next_id = None
- for counter, obj in enumerate(
- db.content_get_range(
- content_type,
+ start, end = get_partition_bounds_bytes(partition_id, nb_partitions, SHA1_SIZE)
+ if page_token is not None:
+ start = hash_to_bytes(page_token)
+ if end is None:
+ end = b"\xff" * SHA1_SIZE
+
+ next_page_token: Optional[str] = None
+ ids = [
+ row[0]
+ for row in db.content_get_range(
+ indexer_type,
start,
end,
indexer_configuration_id,
limit=limit + 1,
with_textual_data=with_textual_data,
cur=cur,
)
- ):
- _id = obj[0]
- if counter >= limit:
- next_id = _id
- break
+ ]
- ids.append(_id)
+ if len(ids) >= limit:
+ next_page_token = hash_to_hex(ids[-1])
+ ids = ids[:limit]
- return {"ids": ids, "next": next_id}
+ assert len(ids) <= limit
+ return PagedResult(results=ids, next_page_token=next_page_token)
@timed
@db_transaction()
- def content_mimetype_get_range(
- self, start, end, indexer_configuration_id, limit=1000, db=None, cur=None
- ):
- return self._content_get_range(
+ def content_mimetype_get_partition(
+ self,
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
+ db=None,
+ cur=None,
+ ) -> PagedResult[Sha1]:
+ return self.get_partition(
"mimetype",
- start,
- end,
indexer_configuration_id,
+ partition_id,
+ nb_partitions,
+ page_token=page_token,
limit=limit,
db=db,
cur=cur,
)
@timed
@process_metrics
@db_transaction()
def content_mimetype_add(
self, mimetypes: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
"""Add mimetypes to the storage (if conflict_update is True, this will
override existing data if any).
Returns:
A dict with the number of new elements added to the storage.
"""
check_id_duplicates(mimetypes)
mimetypes.sort(key=lambda m: m["id"])
db.mktemp_content_mimetype(cur)
db.copy_to(
mimetypes,
"tmp_content_mimetype",
["id", "mimetype", "encoding", "indexer_configuration_id"],
cur,
)
count = db.content_mimetype_add_from_temp(conflict_update, cur)
return {"content_mimetype:add": count}
@timed
@db_transaction_generator()
def content_mimetype_get(self, ids, db=None, cur=None):
for c in db.content_mimetype_get_from_list(ids, cur):
yield converters.db_to_mimetype(dict(zip(db.content_mimetype_cols, c)))
@timed
@db_transaction_generator()
def content_language_missing(self, languages, db=None, cur=None):
for obj in db.content_language_missing_from_list(languages, cur):
yield obj[0]
@timed
@db_transaction_generator()
def content_language_get(self, ids, db=None, cur=None):
for c in db.content_language_get_from_list(ids, cur):
yield converters.db_to_language(dict(zip(db.content_language_cols, c)))
@timed
@process_metrics
@db_transaction()
def content_language_add(
self, languages: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(languages)
languages.sort(key=lambda m: m["id"])
db.mktemp_content_language(cur)
# empty language is mapped to 'unknown'
db.copy_to(
(
{
"id": lang["id"],
"lang": "unknown" if not lang["lang"] else lang["lang"],
"indexer_configuration_id": lang["indexer_configuration_id"],
}
for lang in languages
),
"tmp_content_language",
["id", "lang", "indexer_configuration_id"],
cur,
)
count = db.content_language_add_from_temp(conflict_update, cur)
return {"content_language:add": count}
@timed
@db_transaction_generator()
def content_ctags_missing(self, ctags, db=None, cur=None):
for obj in db.content_ctags_missing_from_list(ctags, cur):
yield obj[0]
@timed
@db_transaction_generator()
def content_ctags_get(self, ids, db=None, cur=None):
for c in db.content_ctags_get_from_list(ids, cur):
yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, c)))
@timed
@process_metrics
@db_transaction()
def content_ctags_add(
self, ctags: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(ctags)
ctags.sort(key=lambda m: m["id"])
def _convert_ctags(__ctags):
"""Convert ctags dict to list of ctags.
"""
for ctags in __ctags:
yield from converters.ctags_to_db(ctags)
db.mktemp_content_ctags(cur)
db.copy_to(
list(_convert_ctags(ctags)),
tblname="tmp_content_ctags",
columns=["id", "name", "kind", "line", "lang", "indexer_configuration_id"],
cur=cur,
)
count = db.content_ctags_add_from_temp(conflict_update, cur)
return {"content_ctags:add": count}
@timed
@db_transaction_generator()
def content_ctags_search(
self, expression, limit=10, last_sha1=None, db=None, cur=None
):
for obj in db.content_ctags_search(expression, last_sha1, limit, cur=cur):
yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, obj)))
@timed
@db_transaction_generator()
def content_fossology_license_get(self, ids, db=None, cur=None):
d = defaultdict(list)
for c in db.content_fossology_license_get_from_list(ids, cur):
license = dict(zip(db.content_fossology_license_cols, c))
id_ = license["id"]
d[id_].append(converters.db_to_fossology_license(license))
for id_, facts in d.items():
yield {id_: facts}
@timed
@process_metrics
@db_transaction()
def content_fossology_license_add(
self, licenses: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(licenses)
licenses.sort(key=lambda m: m["id"])
db.mktemp_content_fossology_license(cur)
db.copy_to(
(
{
"id": sha1["id"],
"indexer_configuration_id": sha1["indexer_configuration_id"],
"license": license,
}
for sha1 in licenses
for license in sha1["licenses"]
),
tblname="tmp_content_fossology_license",
columns=["id", "license", "indexer_configuration_id"],
cur=cur,
)
count = db.content_fossology_license_add_from_temp(conflict_update, cur)
return {"content_fossology_license:add": count}
@timed
@db_transaction()
- def content_fossology_license_get_range(
- self, start, end, indexer_configuration_id, limit=1000, db=None, cur=None
- ):
- return self._content_get_range(
+ def content_fossology_license_get_partition(
+ self,
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
+ db=None,
+ cur=None,
+ ) -> PagedResult[Sha1]:
+ return self.get_partition(
"fossology_license",
- start,
- end,
indexer_configuration_id,
+ partition_id,
+ nb_partitions,
+ page_token=page_token,
limit=limit,
with_textual_data=True,
db=db,
cur=cur,
)
@timed
@db_transaction_generator()
def content_metadata_missing(self, metadata, db=None, cur=None):
for obj in db.content_metadata_missing_from_list(metadata, cur):
yield obj[0]
@timed
@db_transaction_generator()
def content_metadata_get(self, ids, db=None, cur=None):
for c in db.content_metadata_get_from_list(ids, cur):
yield converters.db_to_metadata(dict(zip(db.content_metadata_cols, c)))
@timed
@process_metrics
@db_transaction()
def content_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(metadata)
metadata.sort(key=lambda m: m["id"])
db.mktemp_content_metadata(cur)
db.copy_to(
metadata,
"tmp_content_metadata",
["id", "metadata", "indexer_configuration_id"],
cur,
)
count = db.content_metadata_add_from_temp(conflict_update, cur)
return {
"content_metadata:add": count,
}
@timed
@db_transaction_generator()
def revision_intrinsic_metadata_missing(self, metadata, db=None, cur=None):
for obj in db.revision_intrinsic_metadata_missing_from_list(metadata, cur):
yield obj[0]
@timed
@db_transaction_generator()
def revision_intrinsic_metadata_get(self, ids, db=None, cur=None):
for c in db.revision_intrinsic_metadata_get_from_list(ids, cur):
yield converters.db_to_metadata(
dict(zip(db.revision_intrinsic_metadata_cols, c))
)
@timed
@process_metrics
@db_transaction()
def revision_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(metadata)
metadata.sort(key=lambda m: m["id"])
db.mktemp_revision_intrinsic_metadata(cur)
db.copy_to(
metadata,
"tmp_revision_intrinsic_metadata",
["id", "metadata", "mappings", "indexer_configuration_id"],
cur,
)
count = db.revision_intrinsic_metadata_add_from_temp(conflict_update, cur)
return {
"revision_intrinsic_metadata:add": count,
}
@timed
@process_metrics
@db_transaction()
def revision_intrinsic_metadata_delete(
self, entries: List[Dict], db=None, cur=None
) -> Dict:
count = db.revision_intrinsic_metadata_delete(entries, cur)
return {"revision_intrinsic_metadata:del": count}
@timed
@db_transaction_generator()
def origin_intrinsic_metadata_get(self, ids, db=None, cur=None):
for c in db.origin_intrinsic_metadata_get_from_list(ids, cur):
yield converters.db_to_metadata(
dict(zip(db.origin_intrinsic_metadata_cols, c))
)
@timed
@process_metrics
@db_transaction()
def origin_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(metadata)
metadata.sort(key=lambda m: m["id"])
db.mktemp_origin_intrinsic_metadata(cur)
db.copy_to(
metadata,
"tmp_origin_intrinsic_metadata",
["id", "metadata", "indexer_configuration_id", "from_revision", "mappings"],
cur,
)
count = db.origin_intrinsic_metadata_add_from_temp(conflict_update, cur)
return {
"origin_intrinsic_metadata:add": count,
}
@timed
@process_metrics
@db_transaction()
def origin_intrinsic_metadata_delete(
self, entries: List[Dict], db=None, cur=None
) -> Dict:
count = db.origin_intrinsic_metadata_delete(entries, cur)
return {
"origin_intrinsic_metadata:del": count,
}
@timed
@db_transaction_generator()
def origin_intrinsic_metadata_search_fulltext(
self, conjunction, limit=100, db=None, cur=None
):
for c in db.origin_intrinsic_metadata_search_fulltext(
conjunction, limit=limit, cur=cur
):
yield converters.db_to_metadata(
dict(zip(db.origin_intrinsic_metadata_cols, c))
)
@timed
@db_transaction()
def origin_intrinsic_metadata_search_by_producer(
self,
page_token="",
limit=100,
ids_only=False,
mappings=None,
tool_ids=None,
db=None,
cur=None,
):
assert isinstance(page_token, str)
# we go to limit+1 to check whether we should add next_page_token in
# the response
res = db.origin_intrinsic_metadata_search_by_producer(
page_token, limit + 1, ids_only, mappings, tool_ids, cur
)
result = {}
if ids_only:
result["origins"] = [origin for (origin,) in res]
if len(result["origins"]) > limit:
result["origins"][limit:] = []
result["next_page_token"] = result["origins"][-1]
else:
result["origins"] = [
converters.db_to_metadata(
dict(zip(db.origin_intrinsic_metadata_cols, c))
)
for c in res
]
if len(result["origins"]) > limit:
result["origins"][limit:] = []
result["next_page_token"] = result["origins"][-1]["id"]
return result
@timed
@db_transaction()
def origin_intrinsic_metadata_stats(self, db=None, cur=None):
mapping_names = [m for m in MAPPING_NAMES]
select_parts = []
# Count rows for each mapping
for mapping_name in mapping_names:
select_parts.append(
(
"sum(case when (mappings @> ARRAY['%s']) "
" then 1 else 0 end)"
)
% mapping_name
)
# Total
select_parts.append("sum(1)")
# Rows whose metadata has at least one key that is not '@context'
select_parts.append(
"sum(case when ('{}'::jsonb @> (metadata - '@context')) "
" then 0 else 1 end)"
)
cur.execute(
"select " + ", ".join(select_parts) + " from origin_intrinsic_metadata"
)
results = dict(zip(mapping_names + ["total", "non_empty"], cur.fetchone()))
return {
"total": results.pop("total"),
"non_empty": results.pop("non_empty"),
"per_mapping": results,
}
@timed
@db_transaction_generator()
def indexer_configuration_add(self, tools, db=None, cur=None):
db.mktemp_indexer_configuration(cur)
db.copy_to(
tools,
"tmp_indexer_configuration",
["tool_name", "tool_version", "tool_configuration"],
cur,
)
tools = db.indexer_configuration_add_from_temp(cur)
count = 0
for line in tools:
yield dict(zip(db.indexer_configuration_cols, line))
count += 1
send_metric(
"indexer_configuration:add", count, method_name="indexer_configuration_add"
)
@timed
@db_transaction()
def indexer_configuration_get(self, tool, db=None, cur=None):
tool_conf = tool["tool_configuration"]
if isinstance(tool_conf, dict):
tool_conf = json.dumps(tool_conf)
idx = db.indexer_configuration_get(
tool["tool_name"], tool["tool_version"], tool_conf
)
if not idx:
return None
return dict(zip(db.indexer_configuration_cols, idx))
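The stats query above builds one `sum(case ...)` column per mapping, plus a total and a count of rows whose metadata holds more than just `@context`. A minimal Python sketch of the same aggregation over plain dicts (the `metadata_stats` helper is illustrative, not part of swh.indexer) makes the SQL easier to check:

```python
from collections import Counter


def metadata_stats(rows, mapping_names):
    """Mirror of the SQL aggregation: per-mapping row counts, total
    number of rows, and rows whose metadata has at least one key
    besides '@context'."""
    per_mapping = Counter({name: 0 for name in mapping_names})
    total = non_empty = 0
    for row in rows:
        total += 1
        if set(row["metadata"]) - {"@context"}:
            non_empty += 1
        for mapping in row["mappings"]:
            if mapping in per_mapping:
                per_mapping[mapping] += 1
    return {"total": total, "non_empty": non_empty, "per_mapping": dict(per_mapping)}
```

Note that a row counts toward several mappings at once, exactly like the `mappings @> ARRAY[...]` tests in the SQL.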
diff --git a/swh/indexer/storage/in_memory.py b/swh/indexer/storage/in_memory.py
index 1bcc46d..a70f212 100644
--- a/swh/indexer/storage/in_memory.py
+++ b/swh/indexer/storage/in_memory.py
@@ -1,455 +1,496 @@
# Copyright (C) 2018-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
-import bisect
-from collections import defaultdict, Counter
import itertools
import json
import operator
import math
import re
-from typing import Any, Dict, List
+
+from collections import defaultdict, Counter
+from typing import Any, Dict, List, Optional
+
+from swh.model.model import SHA1_SIZE
+from swh.model.hashutil import hash_to_hex, hash_to_bytes
+from swh.storage.utils import get_partition_bounds_bytes
+from swh.storage.in_memory import SortedList
from . import MAPPING_NAMES, check_id_duplicates
from .exc import IndexerStorageArgumentException
+from .interface import PagedResult, Sha1
+
SHA1_DIGEST_SIZE = 160
def _transform_tool(tool):
return {
"id": tool["id"],
"name": tool["tool_name"],
"version": tool["tool_version"],
"configuration": tool["tool_configuration"],
}
def check_id_types(data: List[Dict[str, Any]]):
"""Checks all elements of the list have an 'id' whose type is 'bytes'."""
if not all(isinstance(item.get("id"), bytes) for item in data):
raise IndexerStorageArgumentException("identifiers must be bytes.")
class SubStorage:
"""Implements common missing/get/add logic for each indexer type."""
def __init__(self, tools):
self._tools = tools
- self._sorted_ids = []
+ self._sorted_ids = SortedList[bytes, bytes]()
self._data = {} # map (id_, tool_id) -> metadata_dict
self._tools_per_id = defaultdict(set) # map id_ -> Set[tool_id]
def missing(self, ids):
"""List data missing from storage.
Args:
data (iterable): dictionaries with keys:
- **id** (bytes): sha1 identifier
- **indexer_configuration_id** (int): tool used to compute
the results
Yields:
missing sha1s
"""
for id_ in ids:
tool_id = id_["indexer_configuration_id"]
id_ = id_["id"]
if tool_id not in self._tools_per_id.get(id_, set()):
yield id_
def get(self, ids):
"""Retrieve data per id.
Args:
ids (iterable): sha1 checksums
Yields:
dict: dictionaries with the following keys:
- **id** (bytes)
- **tool** (dict): tool used to compute metadata
- arbitrary data (as provided to `add`)
"""
for id_ in ids:
for tool_id in self._tools_per_id.get(id_, set()):
key = (id_, tool_id)
yield {
"id": id_,
"tool": _transform_tool(self._tools[tool_id]),
**self._data[key],
}
def get_all(self):
yield from self.get(self._sorted_ids)
- def get_range(self, start, end, indexer_configuration_id, limit):
- """Retrieve data within range [start, end] bound by limit.
+ def get_partition(
+ self,
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
+ ) -> PagedResult[Sha1]:
+ """Retrieve content ids within partition partition_id, bound by limit.
Args:
- **start** (bytes): Starting identifier range (expected smaller
- than end)
- **end** (bytes): Ending identifier range (expected larger
- than start)
- **indexer_configuration_id** (int): The tool used to index data
- **limit** (int): Limit result
+ **indexer_configuration_id**: The tool used to index data
+ **partition_id**: index of the partition to fetch
+ **nb_partitions**: total number of partitions to split into
+ **page_token**: opaque token used for pagination
+ **limit**: Limit result (defaults to 1000)
Raises:
- IndexerStorageArgumentException for limit to None
+ IndexerStorageArgumentException if limit is None
Returns:
- a dict with keys:
- - **ids** [bytes]: iterable of content ids within the range.
- - **next** (Optional[bytes]): The next range of sha1 starts at
- this sha1 if any
+ PagedResult of Sha1. If next_page_token is None, there is no more data to
+ fetch
"""
if limit is None:
raise IndexerStorageArgumentException("limit should not be None")
- from_index = bisect.bisect_left(self._sorted_ids, start)
- to_index = bisect.bisect_right(self._sorted_ids, end, lo=from_index)
- if to_index - from_index >= limit:
- return {
- "ids": self._sorted_ids[from_index : from_index + limit],
- "next": self._sorted_ids[from_index + limit],
- }
- else:
- return {
- "ids": self._sorted_ids[from_index:to_index],
- "next": None,
- }
+ (start, end) = get_partition_bounds_bytes(
+ partition_id, nb_partitions, SHA1_SIZE
+ )
+
+ if page_token:
+ start = hash_to_bytes(page_token)
+ if end is None:
+ end = b"\xff" * SHA1_SIZE
+
+ next_page_token: Optional[str] = None
+ ids: List[Sha1] = []
+ sha1s = (sha1 for sha1 in self._sorted_ids.iter_from(start))
+ for counter, sha1 in enumerate(sha1s):
+ if sha1 > end:
+ break
+ if counter >= limit:
+ next_page_token = hash_to_hex(sha1)
+ break
+ ids.append(sha1)
+
+ assert len(ids) <= limit
+ return PagedResult(results=ids, next_page_token=next_page_token)
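`get_partition_bounds_bytes` is imported from swh.storage.utils; assuming it splits the 2^160 sha1 space into even ranges (an assumption made here for illustration, not a statement about the real helper), its behavior can be sketched as:

```python
SHA1_SIZE = 20  # sha1 digests are 20 bytes


def partition_bounds(partition_id: int, nb_partitions: int, size: int = SHA1_SIZE):
    """Hypothetical stand-in for get_partition_bounds_bytes: split the
    [0, 2**(8*size)) identifier space into nb_partitions even ranges.
    Returns (start, end) as big-endian bytes; end is None for the last
    partition, which the caller above replaces with an all-0xff sentinel."""
    space = 2 ** (8 * size)
    start = space * partition_id // nb_partitions
    end = space * (partition_id + 1) // nb_partitions
    start_b = start.to_bytes(size, "big")
    end_b = None if partition_id == nb_partitions - 1 else end.to_bytes(size, "big")
    return start_b, end_b
```

Because sha1s are compared as big-endian byte strings, iterating `_sorted_ids` from `start` and stopping past `end` visits exactly one partition's worth of ids.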
def add(self, data: List[Dict], conflict_update: bool) -> int:
"""Add data not present in storage.
Args:
data (iterable): dictionaries with keys:
- **id**: sha1
- **indexer_configuration_id**: tool used to compute the
results
- arbitrary data
conflict_update (bool): Flag to determine if we want to overwrite
(true) or skip duplicates (false)
"""
data = list(data)
check_id_duplicates(data)
count = 0
for item in data:
item = item.copy()
tool_id = item.pop("indexer_configuration_id")
id_ = item.pop("id")
data_item = item
if not conflict_update and tool_id in self._tools_per_id.get(id_, set()):
# Duplicate, should not be updated
continue
key = (id_, tool_id)
self._data[key] = data_item
self._tools_per_id[id_].add(tool_id)
count += 1
if id_ not in self._sorted_ids:
- bisect.insort(self._sorted_ids, id_)
+ self._sorted_ids.add(id_)
return count
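The duplicate handling in `add` reduces to a few lines: an (id, tool) pair that is already indexed is skipped unless `conflict_update` asks for an overwrite. A standalone sketch of that logic (names are illustrative, not the SubStorage API):

```python
def add_rows(store, tools_per_id, rows, conflict_update=False):
    """Sketch of SubStorage.add's duplicate policy: skip a row whose
    (id, tool) pair is already present unless conflict_update is set,
    and return the number of rows actually written."""
    count = 0
    for row in rows:
        row = dict(row)  # do not mutate the caller's dict
        tool_id = row.pop("indexer_configuration_id")
        id_ = row.pop("id")
        if not conflict_update and tool_id in tools_per_id.get(id_, set()):
            continue  # duplicate, keep the existing result
        store[(id_, tool_id)] = row
        tools_per_id.setdefault(id_, set()).add(tool_id)
        count += 1
    return count
```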
def add_merge(
self, new_data: List[Dict], conflict_update: bool, merged_key: str
) -> int:
added = 0
all_subitems: List
for new_item in new_data:
id_ = new_item["id"]
tool_id = new_item["indexer_configuration_id"]
if conflict_update:
all_subitems = []
else:
existing = list(self.get([id_]))
all_subitems = [
old_subitem
for existing_item in existing
if existing_item["tool"]["id"] == tool_id
for old_subitem in existing_item[merged_key]
]
for new_subitem in new_item[merged_key]:
if new_subitem not in all_subitems:
all_subitems.append(new_subitem)
added += self.add(
[
{
"id": id_,
"indexer_configuration_id": tool_id,
merged_key: all_subitems,
}
],
conflict_update=True,
)
if id_ not in self._sorted_ids:
- bisect.insort(self._sorted_ids, id_)
+ self._sorted_ids.add(id_)
return added
def delete(self, entries: List[Dict]) -> int:
"""Delete entries and return the number of entries deleted.
"""
deleted = 0
for entry in entries:
(id_, tool_id) = (entry["id"], entry["indexer_configuration_id"])
key = (id_, tool_id)
if tool_id in self._tools_per_id[id_]:
self._tools_per_id[id_].remove(tool_id)
if key in self._data:
deleted += 1
del self._data[key]
return deleted
class IndexerStorage:
"""In-memory SWH indexer storage."""
def __init__(self):
self._tools = {}
self._mimetypes = SubStorage(self._tools)
self._languages = SubStorage(self._tools)
self._content_ctags = SubStorage(self._tools)
self._licenses = SubStorage(self._tools)
self._content_metadata = SubStorage(self._tools)
self._revision_intrinsic_metadata = SubStorage(self._tools)
self._origin_intrinsic_metadata = SubStorage(self._tools)
def check_config(self, *, check_write):
return True
def content_mimetype_missing(self, mimetypes):
yield from self._mimetypes.missing(mimetypes)
- def content_mimetype_get_range(
- self, start, end, indexer_configuration_id, limit=1000
- ):
- return self._mimetypes.get_range(start, end, indexer_configuration_id, limit)
+ def content_mimetype_get_partition(
+ self,
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
+ ) -> PagedResult[Sha1]:
+ return self._mimetypes.get_partition(
+ indexer_configuration_id, partition_id, nb_partitions, page_token, limit
+ )
def content_mimetype_add(
self, mimetypes: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(mimetypes)
added = self._mimetypes.add(mimetypes, conflict_update)
return {"content_mimetype:add": added}
def content_mimetype_get(self, ids):
yield from self._mimetypes.get(ids)
def content_language_missing(self, languages):
yield from self._languages.missing(languages)
def content_language_get(self, ids):
yield from self._languages.get(ids)
def content_language_add(
self, languages: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(languages)
added = self._languages.add(languages, conflict_update)
return {"content_language:add": added}
def content_ctags_missing(self, ctags):
yield from self._content_ctags.missing(ctags)
def content_ctags_get(self, ids):
for item in self._content_ctags.get(ids):
for item_ctags_item in item["ctags"]:
yield {"id": item["id"], "tool": item["tool"], **item_ctags_item}
def content_ctags_add(
self, ctags: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(ctags)
added = self._content_ctags.add_merge(ctags, conflict_update, "ctags")
return {"content_ctags:add": added}
def content_ctags_search(self, expression, limit=10, last_sha1=None):
nb_matches = 0
for ((id_, tool_id), item) in sorted(self._content_ctags._data.items()):
if id_ <= (last_sha1 or bytes(0 for _ in range(SHA1_DIGEST_SIZE))):
continue
for ctags_item in item["ctags"]:
if ctags_item["name"] != expression:
continue
nb_matches += 1
yield {
"id": id_,
"tool": _transform_tool(self._tools[tool_id]),
**ctags_item,
}
if nb_matches >= limit:
return
def content_fossology_license_get(self, ids):
# Rewrites the output of SubStorage.get from the old format to
# the new one. SubStorage.get should be updated once all other
# *_get methods use the new format.
# See: https://forge.softwareheritage.org/T1433
res = {}
for d in self._licenses.get(ids):
res.setdefault(d.pop("id"), []).append(d)
for (id_, facts) in res.items():
yield {id_: facts}
def content_fossology_license_add(
self, licenses: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(licenses)
added = self._licenses.add_merge(licenses, conflict_update, "licenses")
return {"fossology_license_add:add": added}
- def content_fossology_license_get_range(
- self, start, end, indexer_configuration_id, limit=1000
- ):
- return self._licenses.get_range(start, end, indexer_configuration_id, limit)
+ def content_fossology_license_get_partition(
+ self,
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
+ ) -> PagedResult[Sha1]:
+ return self._licenses.get_partition(
+ indexer_configuration_id, partition_id, nb_partitions, page_token, limit
+ )
def content_metadata_missing(self, metadata):
yield from self._content_metadata.missing(metadata)
def content_metadata_get(self, ids):
yield from self._content_metadata.get(ids)
def content_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(metadata)
added = self._content_metadata.add(metadata, conflict_update)
return {"content_metadata:add": added}
def revision_intrinsic_metadata_missing(self, metadata):
yield from self._revision_intrinsic_metadata.missing(metadata)
def revision_intrinsic_metadata_get(self, ids):
yield from self._revision_intrinsic_metadata.get(ids)
def revision_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(metadata)
added = self._revision_intrinsic_metadata.add(metadata, conflict_update)
return {"revision_intrinsic_metadata:add": added}
def revision_intrinsic_metadata_delete(self, entries: List[Dict]) -> Dict:
deleted = self._revision_intrinsic_metadata.delete(entries)
return {"revision_intrinsic_metadata:del": deleted}
def origin_intrinsic_metadata_get(self, ids):
yield from self._origin_intrinsic_metadata.get(ids)
def origin_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
added = self._origin_intrinsic_metadata.add(metadata, conflict_update)
return {"origin_intrinsic_metadata:add": added}
def origin_intrinsic_metadata_delete(self, entries: List[Dict]) -> Dict:
deleted = self._origin_intrinsic_metadata.delete(entries)
return {"origin_intrinsic_metadata:del": deleted}
def origin_intrinsic_metadata_search_fulltext(self, conjunction, limit=100):
# A very crude fulltext search implementation, but that's enough
# to work on English metadata
tokens_re = re.compile("[a-zA-Z0-9]+")
search_tokens = list(itertools.chain(*map(tokens_re.findall, conjunction)))
def rank(data):
# Tokenize the metadata
text = json.dumps(data["metadata"])
text_tokens = tokens_re.findall(text)
text_token_occurences = Counter(text_tokens)
# Count the number of occurrences of search tokens in the text
score = 0
for search_token in search_tokens:
if text_token_occurences[search_token] == 0:
# Search token is not in the text.
return 0
score += text_token_occurences[search_token]
# Normalize according to the text's length
return score / math.log(len(text_tokens))
results = [
(rank(data), data) for data in self._origin_intrinsic_metadata.get_all()
]
results = [(rank_, data) for (rank_, data) in results if rank_ > 0]
results.sort(
key=operator.itemgetter(0), reverse=True # Don't try to order 'data'
)
for (rank_, result) in results[:limit]:
yield result
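The scoring logic above is self-contained enough to lift out and exercise on its own. This sketch reproduces it: a conjunctive match where any absent search token zeroes the score, and matches are normalized by the log of the document's token count:

```python
import json
import math
import re
from collections import Counter

TOKENS_RE = re.compile("[a-zA-Z0-9]+")


def rank(metadata, search_tokens):
    """Score a metadata dict against search tokens: 0 if any token is
    missing, otherwise total occurrences divided by log(text length)."""
    text_tokens = TOKENS_RE.findall(json.dumps(metadata))
    occurrences = Counter(text_tokens)
    score = 0
    for token in search_tokens:
        if occurrences[token] == 0:
            return 0  # conjunction fails: one term is absent
        score += occurrences[token]
    return score / math.log(len(text_tokens))
```

As the comment in the source says, this is crude (no stemming, ASCII-only tokens) but adequate for English metadata.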
def origin_intrinsic_metadata_search_by_producer(
self, page_token="", limit=100, ids_only=False, mappings=None, tool_ids=None
):
assert isinstance(page_token, str)
nb_results = 0
if mappings is not None:
mappings = frozenset(mappings)
if tool_ids is not None:
tool_ids = frozenset(tool_ids)
origins = []
# we go to limit+1 to check whether we should add next_page_token in
# the response
for entry in self._origin_intrinsic_metadata.get_all():
if entry["id"] <= page_token:
continue
if nb_results >= (limit + 1):
break
if mappings is not None and mappings.isdisjoint(entry["mappings"]):
continue
if tool_ids is not None and entry["tool"]["id"] not in tool_ids:
continue
origins.append(entry)
nb_results += 1
result = {}
if len(origins) > limit:
origins = origins[:limit]
result["next_page_token"] = origins[-1]["id"]
if ids_only:
origins = [origin["id"] for origin in origins]
result["origins"] = origins
return result
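The limit+1 pattern used here (and in the SQL backend above) can be sketched independently, assuming entries arrive sorted by id (the `paginate` helper is hypothetical): fetch one row more than requested, and if it exists, truncate and expose the last kept id as the opaque page token.

```python
def paginate(entries, page_token="", limit=100):
    """Sketch of limit+1 pagination: keep up to limit+1 entries past
    page_token; if more than limit were found, truncate to limit and
    set next_page_token to the last returned id."""
    selected = [e for e in entries if e["id"] > page_token][: limit + 1]
    result = {}
    if len(selected) > limit:
        selected = selected[:limit]
        result["next_page_token"] = selected[-1]["id"]
    result["origins"] = selected
    return result
```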
def origin_intrinsic_metadata_stats(self):
mapping_count = {m: 0 for m in MAPPING_NAMES}
total = non_empty = 0
for data in self._origin_intrinsic_metadata.get_all():
total += 1
if set(data["metadata"]) - {"@context"}:
non_empty += 1
for mapping in data["mappings"]:
mapping_count[mapping] += 1
return {"per_mapping": mapping_count, "total": total, "non_empty": non_empty}
def indexer_configuration_add(self, tools):
inserted = []
for tool in tools:
tool = tool.copy()
id_ = self._tool_key(tool)
tool["id"] = id_
self._tools[id_] = tool
inserted.append(tool)
return inserted
def indexer_configuration_get(self, tool):
return self._tools.get(self._tool_key(tool))
def _tool_key(self, tool):
return hash(
(
tool["tool_name"],
tool["tool_version"],
json.dumps(tool["tool_configuration"], sort_keys=True),
)
)
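`_tool_key` derives a deterministic in-memory key: `sort_keys=True` canonicalizes the configuration dict, so two tool rows that differ only in key order map to the same key on purpose. A standalone copy for illustration:

```python
import json


def tool_key(tool):
    """Stable key for a tool row, as in _tool_key: hash of name,
    version, and a canonical JSON dump of the configuration."""
    return hash(
        (
            tool["tool_name"],
            tool["tool_version"],
            json.dumps(tool["tool_configuration"], sort_keys=True),
        )
    )
```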
diff --git a/swh/indexer/storage/interface.py b/swh/indexer/storage/interface.py
index d059cf5..0fb6620 100644
--- a/swh/indexer/storage/interface.py
+++ b/swh/indexer/storage/interface.py
@@ -1,641 +1,617 @@
# Copyright (C) 2015-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
-from typing import Dict, List
+from typing import Dict, List, Optional, TypeVar
from swh.core.api import remote_api_endpoint
+from swh.core.api.classes import PagedResult as CorePagedResult
+
+
+TResult = TypeVar("TResult")
+PagedResult = CorePagedResult[TResult, str]
+
+
+Sha1 = bytes
class IndexerStorageInterface:
@remote_api_endpoint("check_config")
def check_config(self, *, check_write):
"""Check that the storage is configured and ready to go."""
...
@remote_api_endpoint("content_mimetype/missing")
def content_mimetype_missing(self, mimetypes):
"""Generate mimetypes missing from storage.
Args:
mimetypes (iterable): iterable of dict with keys:
- **id** (bytes): sha1 identifier
- **indexer_configuration_id** (int): tool used to compute the
results
Yields:
tuple (id, indexer_configuration_id): missing id
"""
...
- def _content_get_range(
+ @remote_api_endpoint("content_mimetype/range")
+ def content_mimetype_get_partition(
self,
- content_type,
- start,
- end,
- indexer_configuration_id,
- limit=1000,
- with_textual_data=False,
- ):
- """Retrieve ids of type content_type within range [start, end] bound
- by limit.
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
+ ) -> PagedResult[Sha1]:
+ """Retrieve mimetypes within partition partition_id bound by limit.
Args:
- **content_type** (str): content's type (mimetype, language, etc...)
- **start** (bytes): Starting identifier range (expected smaller
- than end)
- **end** (bytes): Ending identifier range (expected larger
- than start)
- **indexer_configuration_id** (int): The tool used to index data
- **limit** (int): Limit result (default to 1000)
- **with_textual_data** (bool): Deal with only textual
- content (True) or all
- content (all contents by
- defaults, False)
+ **indexer_configuration_id**: The tool used to index data
+ **partition_id**: index of the partition to fetch
+ **nb_partitions**: total number of partitions to split into
+ **page_token**: opaque token used for pagination
+ **limit**: Limit result (default to 1000)
Raises:
- ValueError for;
+ IndexerStorageArgumentException for:
- limit to None
- - wrong content_type provided
-
- Returns:
- a dict with keys:
- - **ids** [bytes]: iterable of content ids within the range.
- - **next** (Optional[bytes]): The next range of sha1 starts at
- this sha1 if any
-
- """
- ...
-
- @remote_api_endpoint("content_mimetype/range")
- def content_mimetype_get_range(
- self, start, end, indexer_configuration_id, limit=1000
- ):
- """Retrieve mimetypes within range [start, end] bound by limit.
-
- Args:
- **start** (bytes): Starting identifier range (expected smaller
- than end)
- **end** (bytes): Ending identifier range (expected larger
- than start)
- **indexer_configuration_id** (int): The tool used to index data
- **limit** (int): Limit result (default to 1000)
-
- Raises:
- ValueError for limit to None
Returns:
- a dict with keys:
- - **ids** [bytes]: iterable of content ids within the range.
- - **next** (Optional[bytes]): The next range of sha1 starts at
- this sha1 if any
+ PagedResult of Sha1. If next_page_token is None, there is no more data
+ to fetch
"""
...
@remote_api_endpoint("content_mimetype/add")
def content_mimetype_add(
self, mimetypes: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
"""Add mimetypes not present in storage.
Args:
mimetypes (iterable): dictionaries with keys:
- **id** (bytes): sha1 identifier
- **mimetype** (bytes): raw content's mimetype
- **encoding** (bytes): raw content's encoding
- **indexer_configuration_id** (int): tool's id used to
compute the results
- **conflict_update** (bool): Flag to determine if we want to
overwrite (``True``) or skip duplicates (``False``, the
default)
Returns:
Dict summary of number of rows added
"""
...
@remote_api_endpoint("content_mimetype")
def content_mimetype_get(self, ids):
"""Retrieve full content mimetype per ids.
Args:
ids (iterable): sha1 identifier
Yields:
mimetypes (iterable): dictionaries with keys:
- **id** (bytes): sha1 identifier
- **mimetype** (bytes): raw content's mimetype
- **encoding** (bytes): raw content's encoding
- **tool** (dict): Tool used to compute the language
"""
...
@remote_api_endpoint("content_language/missing")
def content_language_missing(self, languages):
"""List languages missing from storage.
Args:
languages (iterable): dictionaries with keys:
- **id** (bytes): sha1 identifier
- **indexer_configuration_id** (int): tool used to compute
the results
Yields:
an iterable of missing id for the tuple (id,
indexer_configuration_id)
"""
...
@remote_api_endpoint("content_language")
def content_language_get(self, ids):
"""Retrieve full content language per ids.
Args:
ids (iterable): sha1 identifier
Yields:
languages (iterable): dictionaries with keys:
- **id** (bytes): sha1 identifier
- **lang** (bytes): raw content's language
- **tool** (dict): Tool used to compute the language
"""
...
@remote_api_endpoint("content_language/add")
def content_language_add(
self, languages: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
"""Add languages not present in storage.
Args:
languages (iterable): dictionaries with keys:
- **id** (bytes): sha1
- **lang** (bytes): language detected
conflict_update (bool): Flag to determine if we want to
overwrite (true) or skip duplicates (false, the
default)
Returns:
Dict summary of number of rows added
"""
...
@remote_api_endpoint("content/ctags/missing")
def content_ctags_missing(self, ctags):
"""List ctags missing from storage.
Args:
ctags (iterable): dicts with keys:
- **id** (bytes): sha1 identifier
- **indexer_configuration_id** (int): tool used to compute
the results
Yields:
an iterable of missing id for the tuple (id,
indexer_configuration_id)
"""
...
@remote_api_endpoint("content/ctags")
def content_ctags_get(self, ids):
"""Retrieve ctags per id.
Args:
ids (iterable): sha1 checksums
Yields:
Dictionaries with keys:
- **id** (bytes): content's identifier
- **name** (str): symbol's name
- **kind** (str): symbol's kind
- **lang** (str): language for that content
- **tool** (dict): tool used to compute the ctags' info
"""
...
@remote_api_endpoint("content/ctags/add")
def content_ctags_add(
self, ctags: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
"""Add ctags not present in storage
Args:
ctags (iterable): dictionaries with keys:
- **id** (bytes): sha1
- **ctags** (list): List of dictionaries with keys: name, kind,
line, lang
Returns:
Dict summary of number of rows added
"""
...
@remote_api_endpoint("content/ctags/search")
def content_ctags_search(self, expression, limit=10, last_sha1=None):
"""Search through content's raw ctags symbols.
Args:
expression (str): Expression to search for
limit (int): Number of rows to return (default to 10).
last_sha1 (str): Offset from which to retrieve data (defaults to '').
Yields:
rows of ctags including id, name, lang, kind, line, etc...
"""
...
@remote_api_endpoint("content/fossology_license")
def content_fossology_license_get(self, ids):
"""Retrieve licenses per id.
Args:
ids (iterable): sha1 checksums
Yields:
dict: ``{id: facts}`` where ``facts`` is a dict with the
following keys:
- **licenses** ([str]): associated licenses for that content
- **tool** (dict): Tool used to compute the license
"""
...
@remote_api_endpoint("content/fossology_license/add")
def content_fossology_license_add(
self, licenses: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
"""Add licenses not present in storage.
Args:
licenses (iterable): dictionaries with keys:
- **id**: sha1
- **licenses** ([bytes]): List of licenses associated to sha1
- **tool** (str): nomossa
conflict_update: Flag to determine if we want to overwrite (true)
or skip duplicates (false, the default)
Returns:
Dict summary of number of rows added
"""
...
@remote_api_endpoint("content/fossology_license/range")
- def content_fossology_license_get_range(
- self, start, end, indexer_configuration_id, limit=1000
- ):
- """Retrieve licenses within range [start, end] bound by limit.
+ def content_fossology_license_get_partition(
+ self,
+ indexer_configuration_id: int,
+ partition_id: int,
+ nb_partitions: int,
+ page_token: Optional[str] = None,
+ limit: int = 1000,
+ ) -> PagedResult[Sha1]:
+ """Retrieve licenses within the partition partition_id bound by limit.
Args:
- **start** (bytes): Starting identifier range (expected smaller
- than end)
- **end** (bytes): Ending identifier range (expected larger
- than start)
- **indexer_configuration_id** (int): The tool used to index data
- **limit** (int): Limit result (default to 1000)
+ **indexer_configuration_id**: The tool used to index data
+ **partition_id**: index of the partition to fetch
+ **nb_partitions**: total number of partitions to split into
+ **page_token**: opaque token used for pagination
+ **limit**: Limit result (default to 1000)
Raises:
- ValueError for limit to None
+ IndexerStorageArgumentException if limit is None
- Returns:
- a dict with keys:
- - **ids** [bytes]: iterable of content ids within the range.
- - **next** (Optional[bytes]): The next range of sha1 starts at
- this sha1 if any
+ Returns: PagedResult of Sha1. If next_page_token is None, there is no more data
+ to fetch
"""
...
@remote_api_endpoint("content_metadata/missing")
def content_metadata_missing(self, metadata):
"""List metadata missing from storage.
Args:
metadata (iterable): dictionaries with keys:
- **id** (bytes): sha1 identifier
- **indexer_configuration_id** (int): tool used to compute
the results
Yields:
missing sha1s
"""
...
@remote_api_endpoint("content_metadata")
def content_metadata_get(self, ids):
"""Retrieve metadata per id.
Args:
ids (iterable): sha1 checksums
Yields:
dictionaries with the following keys:
id (bytes)
metadata (str): associated metadata
tool (dict): tool used to compute metadata
"""
...
@remote_api_endpoint("content_metadata/add")
def content_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
"""Add metadata not present in storage.
Args:
metadata (iterable): dictionaries with keys:
- **id**: sha1
- **metadata**: arbitrary dict
conflict_update: Flag to determine if we want to overwrite (true)
or skip duplicates (false, the default)
Returns:
Dict summary of number of rows added
"""
...
@remote_api_endpoint("revision_intrinsic_metadata/missing")
def revision_intrinsic_metadata_missing(self, metadata):
"""List metadata missing from storage.
Args:
metadata (iterable): dictionaries with keys:
- **id** (bytes): sha1_git revision identifier
- **indexer_configuration_id** (int): tool used to compute
the results
Yields:
missing ids
"""
...
@remote_api_endpoint("revision_intrinsic_metadata")
def revision_intrinsic_metadata_get(self, ids):
"""Retrieve revision metadata per id.
Args:
ids (iterable): sha1 checksums
Yields:
dictionaries with the following keys:
- **id** (bytes)
- **metadata** (str): associated metadata
- **tool** (dict): tool used to compute metadata
- **mappings** (List[str]): list of mappings used to translate
these metadata
"""
...
@remote_api_endpoint("revision_intrinsic_metadata/add")
def revision_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
"""Add metadata not present in storage.
Args:
metadata (iterable): dictionaries with keys:
- **id**: sha1_git of revision
- **metadata**: arbitrary dict
- **indexer_configuration_id**: tool used to compute metadata
- **mappings** (List[str]): list of mappings used to translate
these metadata
conflict_update: Flag to determine if we want to overwrite (true)
or skip duplicates (false, the default)
Returns:
Dict summary of number of rows added
"""
...
@remote_api_endpoint("revision_intrinsic_metadata/delete")
def revision_intrinsic_metadata_delete(self, entries: List[Dict]) -> Dict:
"""Remove revision metadata from the storage.
Args:
entries (dict): dictionaries with the following keys:
- **id** (bytes): revision identifier
- **indexer_configuration_id** (int): tool used to compute
metadata
Returns:
Summary of number of rows deleted
"""
...
@remote_api_endpoint("origin_intrinsic_metadata")
def origin_intrinsic_metadata_get(self, ids):
"""Retrieve origin metadata per id.
Args:
ids (iterable): origin identifiers
Yields:
list: dictionaries with the following keys:
- **id** (str): origin url
- **from_revision** (bytes): which revision this metadata
was extracted from
- **metadata** (str): associated metadata
- **tool** (dict): tool used to compute metadata
- **mappings** (List[str]): list of mappings used to translate
these metadata
"""
...
@remote_api_endpoint("origin_intrinsic_metadata/add")
def origin_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
"""Add origin metadata not present in storage.
Args:
metadata (iterable): dictionaries with keys:
- **id**: origin urls
- **from_revision**: sha1 id of the revision used to generate
these metadata.
- **metadata**: arbitrary dict
- **indexer_configuration_id**: tool used to compute metadata
- **mappings** (List[str]): list of mappings used to translate
these metadata
conflict_update: Flag to determine if we want to overwrite (true)
or skip duplicates (false, the default)
Returns:
Dict summary of number of rows added
"""
...
@remote_api_endpoint("origin_intrinsic_metadata/delete")
def origin_intrinsic_metadata_delete(self, entries: List[Dict]) -> Dict:
"""Remove origin metadata from the storage.
Args:
entries (dict): dictionaries with the following keys:
- **id** (str): origin urls
- **indexer_configuration_id** (int): tool used to compute
metadata
Returns:
Summary of number of rows deleted
"""
...
@remote_api_endpoint("origin_intrinsic_metadata/search/fulltext")
def origin_intrinsic_metadata_search_fulltext(self, conjunction, limit=100):
"""Returns the list of origins whose metadata contain all the terms.
Args:
conjunction (List[str]): List of terms to be searched for.
limit (int): The maximum number of results to return
Yields:
list: dictionaries with the following keys:
- **id** (str): origin urls
- **from_revision**: sha1 id of the revision used to generate
these metadata.
- **metadata** (str): associated metadata
- **tool** (dict): tool used to compute metadata
- **mappings** (List[str]): list of mappings used to translate
these metadata
"""
...
@remote_api_endpoint("origin_intrinsic_metadata/search/by_producer")
def origin_intrinsic_metadata_search_by_producer(
self, page_token="", limit=100, ids_only=False, mappings=None, tool_ids=None
):
"""Returns the list of origins whose metadata contain all the terms.
Args:
page_token (str): Opaque token used for pagination.
limit (int): The maximum number of results to return
ids_only (bool): Determines whether only origin urls are
returned or the content as well
mappings (List[str]): Returns origins whose intrinsic metadata
were generated using at least one of these mappings.
tool_ids (List[int]): Returns origins whose intrinsic metadata
were generated using at least one of these tools.
Returns:
dict: dict with the following keys:
- **next_page_token** (str, optional): opaque token to be used as
`page_token` for retrieving the next page. If absent, there are
no more pages to gather.
- **origins** (list): list of origin url (str) if `ids_only=True`
else dictionaries with the following keys:
- **id** (str): origin url
- **from_revision**: sha1 id of the revision used to generate
these metadata.
- **metadata** (str): associated metadata
- **tool** (dict): tool used to compute metadata
- **mappings** (List[str]): list of mappings used to translate
these metadata
"""
...
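The `next_page_token` contract described above can be consumed with a simple loop: keep calling the endpoint with the returned token until it is absent. The stub below stands in for the real endpoint and only illustrates the pagination protocol; its origin URLs and token encoding are invented:

```python
def fake_search_by_producer(page_token="", limit=100):
    # Stand-in for origin_intrinsic_metadata_search_by_producer with
    # ids_only=True: pages through a fixed list of origin urls,
    # `limit` at a time, using the list index as an opaque token.
    origins = [f"https://example.org/repo{i}" for i in range(7)]
    start = int(page_token) if page_token else 0
    page = origins[start:start + limit]
    result = {"origins": page}
    if start + limit < len(origins):
        result["next_page_token"] = str(start + limit)
    return result

# Gather all pages by following next_page_token until it disappears.
collected = []
page_token = ""
while True:
    result = fake_search_by_producer(page_token=page_token, limit=3)
    collected.extend(result["origins"])
    page_token = result.get("next_page_token")
    if page_token is None:
        break
```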
@remote_api_endpoint("origin_intrinsic_metadata/stats")
def origin_intrinsic_metadata_stats(self):
"""Returns counts of indexed metadata per origins, broken down
into metadata types.
Returns:
dict: dictionary with keys:
- total (int): total number of origins that were indexed
(possibly yielding an empty metadata dictionary)
- non_empty (int): number of origins from which a non-empty
metadata dictionary was extracted
- per_mapping (dict): a dictionary with mapping names as
keys and, as values, the number of origins whose indexing
used that mapping. Note that indexing a given origin may use
0, 1, or many mappings.
"""
...
@remote_api_endpoint("indexer_configuration/add")
def indexer_configuration_add(self, tools):
"""Add new tools to the storage.
Args:
tools ([dict]): List of dictionaries representing the tools to
insert in the db, with the following keys:
- **tool_name** (str): tool's name
- **tool_version** (str): tool's version
- **tool_configuration** (dict): tool's configuration
(free form dict)
Returns:
List of dict inserted in the db (holding the id key as
well). The order of the list is not guaranteed to match
the order of the initial list.
"""
...
@remote_api_endpoint("indexer_configuration/data")
def indexer_configuration_get(self, tool):
"""Retrieve tool information.
Args:
tool (dict): Dictionary representing a tool with the
following keys:
- **tool_name** (str): tool's name
- **tool_version** (str): tool's version
- **tool_configuration** (dict): tool's configuration
(free form dict)
Returns:
The same dictionary with an added `id` key if the tool exists, None otherwise.
"""
...
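As a sketch of the add/get contract above, the toy model below mimics the behaviour with a plain in-memory dict. It is not the real backend (which assigns ids in the database); the function names are prefixed `toy_` to make that explicit:

```python
# Toy in-memory model of indexer_configuration_add/get, for
# illustration only: a tool is identified by (name, version,
# configuration), and an id is assigned on first insertion.
_tools = {}

def _tool_key(tool):
    return (tool["tool_name"], tool["tool_version"],
            str(tool["tool_configuration"]))

def toy_indexer_configuration_add(tools):
    inserted = []
    for tool in tools:
        key = _tool_key(tool)
        if key not in _tools:
            _tools[key] = {**tool, "id": len(_tools) + 1}
        inserted.append(_tools[key])
    return inserted

def toy_indexer_configuration_get(tool):
    # Returns the stored dict (holding the id key), or None.
    return _tools.get(_tool_key(tool))

tool = {"tool_name": "file", "tool_version": "5.30",
        "tool_configuration": {"type": "library"}}
added = toy_indexer_configuration_add([tool])
fetched = toy_indexer_configuration_get(tool)
```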
diff --git a/swh/indexer/tasks.py b/swh/indexer/tasks.py
index 2ca6cdd..51c07ef 100644
--- a/swh/indexer/tasks.py
+++ b/swh/indexer/tasks.py
@@ -1,48 +1,48 @@
# Copyright (C) 2016-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
-from celery import current_app as app
+from celery import shared_task
-from .mimetype import MimetypeIndexer, MimetypeRangeIndexer
+from .mimetype import MimetypeIndexer, MimetypePartitionIndexer
from .ctags import CtagsIndexer
-from .fossology_license import FossologyLicenseIndexer, FossologyLicenseRangeIndexer
+from .fossology_license import FossologyLicenseIndexer, FossologyLicensePartitionIndexer
from .rehash import RecomputeChecksums
from .metadata import OriginMetadataIndexer
-@app.task(name=__name__ + ".OriginMetadata")
+@shared_task(name=__name__ + ".OriginMetadata")
def origin_metadata(*args, **kwargs):
return OriginMetadataIndexer().run(*args, **kwargs)
-@app.task(name=__name__ + ".Ctags")
+@shared_task(name=__name__ + ".Ctags")
def ctags(*args, **kwargs):
return CtagsIndexer().run(*args, **kwargs)
-@app.task(name=__name__ + ".ContentFossologyLicense")
+@shared_task(name=__name__ + ".ContentFossologyLicense")
def fossology_license(*args, **kwargs):
return FossologyLicenseIndexer().run(*args, **kwargs)
-@app.task(name=__name__ + ".RecomputeChecksums")
+@shared_task(name=__name__ + ".RecomputeChecksums")
def recompute_checksums(*args, **kwargs):
return RecomputeChecksums().run(*args, **kwargs)
-@app.task(name=__name__ + ".ContentMimetype")
+@shared_task(name=__name__ + ".ContentMimetype")
def mimetype(*args, **kwargs):
return MimetypeIndexer().run(*args, **kwargs)
-@app.task(name=__name__ + ".ContentRangeMimetype")
-def range_mimetype(*args, **kwargs):
- return MimetypeRangeIndexer().run(*args, **kwargs)
+@shared_task(name=__name__ + ".ContentMimetypePartition")
+def mimetype_partition(*args, **kwargs):
+ return MimetypePartitionIndexer().run(*args, **kwargs)
-@app.task(name=__name__ + ".ContentRangeFossologyLicense")
-def range_license(*args, **kwargs):
- return FossologyLicenseRangeIndexer().run(*args, **kwargs)
+@shared_task(name=__name__ + ".ContentFossologyLicensePartition")
+def license_partition(*args, **kwargs):
+ return FossologyLicensePartitionIndexer().run(*args, **kwargs)
diff --git a/swh/indexer/tests/conftest.py b/swh/indexer/tests/conftest.py
index 1ba1528..6438b89 100644
--- a/swh/indexer/tests/conftest.py
+++ b/swh/indexer/tests/conftest.py
@@ -1,74 +1,101 @@
# Copyright (C) 2019-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
+import os
+
from datetime import timedelta
from unittest.mock import patch
+import yaml
import pytest
from swh.objstorage import get_objstorage
from swh.storage import get_storage
from swh.indexer.storage import get_indexer_storage
from .utils import fill_storage, fill_obj_storage
TASK_NAMES = ["revision_intrinsic_metadata", "origin_intrinsic_metadata"]
@pytest.fixture
def indexer_scheduler(swh_scheduler):
for taskname in TASK_NAMES:
swh_scheduler.create_task_type(
{
"type": taskname,
"description": "The {} indexer testing task".format(taskname),
"backend_name": "swh.indexer.tests.tasks.{}".format(taskname),
"default_interval": timedelta(days=1),
"min_interval": timedelta(hours=6),
"max_interval": timedelta(days=12),
"num_retries": 3,
}
)
return swh_scheduler
@pytest.fixture
def idx_storage():
"""An instance of in-memory indexer storage that gets injected into all
indexer classes.
"""
idx_storage = get_indexer_storage("memory", {})
with patch("swh.indexer.storage.in_memory.IndexerStorage") as idx_storage_mock:
idx_storage_mock.return_value = idx_storage
yield idx_storage
@pytest.fixture
def storage():
"""An instance of in-memory storage that gets injected into all indexers
classes.
"""
storage = get_storage(cls="memory")
fill_storage(storage)
with patch("swh.storage.in_memory.InMemoryStorage") as storage_mock:
storage_mock.return_value = storage
yield storage
@pytest.fixture
def obj_storage():
"""An instance of in-memory objstorage that gets injected into all indexers
classes.
"""
objstorage = get_objstorage("memory", {})
fill_obj_storage(objstorage)
with patch.dict(
"swh.objstorage.factory._STORAGE_CLASSES", {"memory": lambda: objstorage}
):
yield objstorage
+
+
+@pytest.fixture
+def swh_indexer_config():
+ return {
+ "storage": {"cls": "memory"},
+ "objstorage": {"cls": "memory", "args": {},},
+ "indexer_storage": {"cls": "memory", "args": {},},
+ "tools": {
+ "name": "file",
+ "version": "1:5.30-1+deb9u1",
+ "configuration": {"type": "library", "debian-package": "python3-magic"},
+ },
+ "compute_checksums": ["blake2b512"], # for rehash indexer
+ }
+
+
+@pytest.fixture
+def swh_config(swh_indexer_config, monkeypatch, tmp_path):
+ conffile = os.path.join(str(tmp_path), "indexer.yml")
+ with open(conffile, "w") as f:
+ f.write(yaml.dump(swh_indexer_config))
+ monkeypatch.setenv("SWH_CONFIG_FILENAME", conffile)
+ return conffile
diff --git a/swh/indexer/tests/storage/test_storage.py b/swh/indexer/tests/storage/test_storage.py
index 9c5b892..2101caa 100644
--- a/swh/indexer/tests/storage/test_storage.py
+++ b/swh/indexer/tests/storage/test_storage.py
@@ -1,1831 +1,1899 @@
# Copyright (C) 2015-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import inspect
+import math
import threading
from typing import Dict
import pytest
from swh.model.hashutil import hash_to_bytes
from swh.indexer.storage.exc import (
IndexerStorageArgumentException,
DuplicateId,
)
from swh.indexer.storage.interface import IndexerStorageInterface
def prepare_mimetypes_from(fossology_licenses):
"""Fossology license needs some consistent data in db to run.
"""
mimetypes = []
for c in fossology_licenses:
mimetypes.append(
{
"id": c["id"],
- "mimetype": "text/plain",
+ "mimetype": "text/plain", # for filtering on textual data to work
"encoding": "utf-8",
"indexer_configuration_id": c["indexer_configuration_id"],
}
)
return mimetypes
def endpoint_name(etype: str, ename: str) -> str:
"""Compute the storage's endpoint's name
>>> endpoint_name('content_mimetype', 'add')
'content_mimetype_add'
>>> endpoint_name('content_fosso_license', 'delete')
'content_fosso_license_delete'
"""
return f"{etype}_{ename}"
def endpoint(storage, etype: str, ename: str):
return getattr(storage, endpoint_name(etype, ename))
def expected_summary(count: int, etype: str, ename: str = "add") -> Dict[str, int]:
"""Compute the expected summary
The key is determined from etype and ename
>>> expected_summary(10, 'content_mimetype', 'add')
{'content_mimetype:add': 10}
>>> expected_summary(9, 'origin_intrinsic_metadata', 'delete')
{'origin_intrinsic_metadata:del': 9}
"""
pattern = ename[0:3]
key = endpoint_name(etype, ename).replace(f"_{ename}", f":{pattern}")
return {key: count}
def test_check_config(swh_indexer_storage):
assert swh_indexer_storage.check_config(check_write=True)
assert swh_indexer_storage.check_config(check_write=False)
def test_types(swh_indexer_storage):
"""Checks all methods of StorageInterface are implemented by this
backend, and that they have the same signature."""
# Create an instance of the protocol (which cannot be instantiated
# directly, so this creates a subclass, then instantiates it)
interface = type("_", (IndexerStorageInterface,), {})()
assert "content_mimetype_add" in dir(interface)
missing_methods = []
for meth_name in dir(interface):
if meth_name.startswith("_"):
continue
interface_meth = getattr(interface, meth_name)
try:
concrete_meth = getattr(swh_indexer_storage, meth_name)
except AttributeError:
missing_methods.append(meth_name)
continue
expected_signature = inspect.signature(interface_meth)
actual_signature = inspect.signature(concrete_meth)
assert expected_signature == actual_signature, meth_name
assert missing_methods == []
class StorageETypeTester:
"""Base class for testing a series of common behaviour between a bunch of
endpoint types supported by an IndexerStorage.
This is supposed to be inherited with the following class attributes:
- endpoint_type
- tool_name
- example_data
See below for example usage.
"""
def test_missing(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool_id = data.tools[self.tool_name]["id"]
# given 2 (hopefully) unknown objects
query = [
{"id": data.sha1_1, "indexer_configuration_id": tool_id,},
{"id": data.sha1_2, "indexer_configuration_id": tool_id,},
]
# we expect these are both returned by the xxx_missing endpoint
actual_missing = endpoint(storage, etype, "missing")(query)
assert list(actual_missing) == [
data.sha1_1,
data.sha1_2,
]
# now, when we add one of them
summary = endpoint(storage, etype, "add")(
[
{
"id": data.sha1_2,
**self.example_data[0],
"indexer_configuration_id": tool_id,
}
]
)
assert summary == expected_summary(1, etype)
# we expect only the other one returned
actual_missing = endpoint(storage, etype, "missing")(query)
assert list(actual_missing) == [data.sha1_1]
def test_add__drop_duplicate(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool_id = data.tools[self.tool_name]["id"]
# add the first object
data_v1 = {
"id": data.sha1_2,
**self.example_data[0],
"indexer_configuration_id": tool_id,
}
summary = endpoint(storage, etype, "add")([data_v1])
assert summary == expected_summary(1, etype)
# should be able to retrieve it
actual_data = list(endpoint(storage, etype, "get")([data.sha1_2]))
expected_data_v1 = [
{
"id": data.sha1_2,
**self.example_data[0],
"tool": data.tools[self.tool_name],
}
]
assert actual_data == expected_data_v1
# now if we add a modified version of the same object (same id)
data_v2 = data_v1.copy()
data_v2.update(self.example_data[1])
summary2 = endpoint(storage, etype, "add")([data_v2])
assert summary2 == expected_summary(0, etype) # not added
# we expect to retrieve the original data, not the modified one
actual_data = list(endpoint(storage, etype, "get")([data.sha1_2]))
assert actual_data == expected_data_v1
def test_add__update_in_place_duplicate(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool = data.tools[self.tool_name]
data_v1 = {
"id": data.sha1_2,
**self.example_data[0],
"indexer_configuration_id": tool["id"],
}
# given
summary = endpoint(storage, etype, "add")([data_v1])
assert summary == expected_summary(1, etype)  # added
# when
actual_data = list(endpoint(storage, etype, "get")([data.sha1_2]))
expected_data_v1 = [{"id": data.sha1_2, **self.example_data[0], "tool": tool,}]
# then
assert actual_data == expected_data_v1
# given
data_v2 = data_v1.copy()
data_v2.update(self.example_data[1])
summary2 = endpoint(storage, etype, "add")([data_v2], conflict_update=True)
assert summary2 == expected_summary(1, etype)  # modified so counted
actual_data = list(endpoint(storage, etype, "get")([data.sha1_2]))
expected_data_v2 = [{"id": data.sha1_2, **self.example_data[1], "tool": tool,}]
# data did change as the v2 was used to overwrite v1
assert actual_data == expected_data_v2
def test_add__update_in_place_deadlock(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool = data.tools[self.tool_name]
hashes = [
hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4{:03d}".format(i))
for i in range(1000)
]
data_v1 = [
{
"id": hash_,
**self.example_data[0],
"indexer_configuration_id": tool["id"],
}
for hash_ in hashes
]
data_v2 = [
{
"id": hash_,
**self.example_data[1],
"indexer_configuration_id": tool["id"],
}
for hash_ in hashes
]
# Remove one item from each, so that both queries have to succeed for
# all items to be in the DB.
data_v2a = data_v2[1:]
data_v2b = list(reversed(data_v2[0:-1]))
# given
endpoint(storage, etype, "add")(data_v1)
# when
actual_data = list(endpoint(storage, etype, "get")(hashes))
expected_data_v1 = [
{"id": hash_, **self.example_data[0], "tool": tool,} for hash_ in hashes
]
# then
assert actual_data == expected_data_v1
# given
def f1():
endpoint(storage, etype, "add")(data_v2a, conflict_update=True)
def f2():
endpoint(storage, etype, "add")(data_v2b, conflict_update=True)
t1 = threading.Thread(target=f1)
t2 = threading.Thread(target=f2)
t2.start()
t1.start()
t1.join()
t2.join()
actual_data = sorted(
endpoint(storage, etype, "get")(hashes), key=lambda x: x["id"]
)
expected_data_v2 = [
{"id": hash_, **self.example_data[1], "tool": tool,} for hash_ in hashes
]
assert actual_data == expected_data_v2
def test_add__duplicate_twice(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool = data.tools[self.tool_name]
data_rev1 = {
"id": data.revision_id_2,
**self.example_data[0],
"indexer_configuration_id": tool["id"],
}
data_rev2 = {
"id": data.revision_id_2,
**self.example_data[1],
"indexer_configuration_id": tool["id"],
}
# when
summary = endpoint(storage, etype, "add")([data_rev1])
assert summary == expected_summary(1, etype)
with pytest.raises(DuplicateId):
endpoint(storage, etype, "add")(
[data_rev2, data_rev2], conflict_update=True
)
# then
actual_data = list(
endpoint(storage, etype, "get")([data.revision_id_2, data.revision_id_1])
)
expected_data = [
{"id": data.revision_id_2, **self.example_data[0], "tool": tool,}
]
assert actual_data == expected_data
def test_get(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool = data.tools[self.tool_name]
query = [data.sha1_2, data.sha1_1]
data1 = {
"id": data.sha1_2,
**self.example_data[0],
"indexer_configuration_id": tool["id"],
}
# when
summary = endpoint(storage, etype, "add")([data1])
assert summary == expected_summary(1, etype)
# then
actual_data = list(endpoint(storage, etype, "get")(query))
# then
expected_data = [{"id": data.sha1_2, **self.example_data[0], "tool": tool,}]
assert actual_data == expected_data
class TestIndexerStorageContentMimetypes(StorageETypeTester):
"""Test Indexer Storage content_mimetype related methods
"""
endpoint_type = "content_mimetype"
tool_name = "file"
example_data = [
{"mimetype": "text/plain", "encoding": "utf-8",},
{"mimetype": "text/html", "encoding": "us-ascii",},
]
- def test_generate_content_mimetype_get_range_limit_none(self, swh_indexer_storage):
- """mimetype_get_range call with wrong limit input should fail"""
+ def test_generate_content_mimetype_get_partition_failure(self, swh_indexer_storage):
+ """get_partition call with wrong limit input should fail"""
storage = swh_indexer_storage
- with pytest.raises(IndexerStorageArgumentException) as e:
- storage.content_mimetype_get_range(
- start=None, end=None, indexer_configuration_id=None, limit=None
+ indexer_configuration_id = None
+ with pytest.raises(
+ IndexerStorageArgumentException, match="limit should not be None"
+ ):
+ storage.content_mimetype_get_partition(
+ indexer_configuration_id, 0, 3, limit=None
)
- assert e.value.args == ("limit should not be None",)
-
- def test_generate_content_mimetype_get_range_no_limit(
+ def test_generate_content_mimetype_get_partition_no_limit(
self, swh_indexer_storage_with_data
):
- """mimetype_get_range returns mimetypes within range provided"""
+ """get_partition should return result"""
storage, data = swh_indexer_storage_with_data
mimetypes = data.mimetypes
- # All ids from the db
- content_ids = sorted([c["id"] for c in mimetypes])
-
- start = content_ids[0]
- end = content_ids[-1]
+ expected_ids = set([c["id"] for c in mimetypes])
+ indexer_configuration_id = mimetypes[0]["indexer_configuration_id"]
- # retrieve mimetypes
- tool_id = mimetypes[0]["indexer_configuration_id"]
- actual_result = storage.content_mimetype_get_range(
- start, end, indexer_configuration_id=tool_id
- )
+ assert len(mimetypes) == 16
+ nb_partitions = 16
- actual_ids = actual_result["ids"]
- actual_next = actual_result["next"]
+ actual_ids = []
+ for partition_id in range(nb_partitions):
+ actual_result = storage.content_mimetype_get_partition(
+ indexer_configuration_id, partition_id, nb_partitions
+ )
+ assert actual_result.next_page_token is None
+ actual_ids.extend(actual_result.results)
- assert len(mimetypes) == len(actual_ids)
- assert actual_next is None
- assert content_ids == actual_ids
+ assert len(actual_ids) == len(expected_ids)
+ for actual_id in actual_ids:
+ assert actual_id in expected_ids
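The test above relies on `get_partition` splitting the sha1 space into `nb_partitions` buckets such that every id lands in exactly one partition. The real implementation does this on ranges in the database; the snippet below is only a simplified illustration of that invariant, with fake 20-byte sha1s:

```python
# Simplified sketch of partitioning the sha1 space into contiguous
# ranges, illustrating the invariant the tests rely on: each id falls
# into exactly one of the nb_partitions buckets. Not the real backend.
SHA1_BITS = 160

def partition_bounds(partition_id, nb_partitions):
    # [start, end) bounds of one partition of the sha1 space; the last
    # partition absorbs the remainder of the integer division.
    size = (1 << SHA1_BITS) // nb_partitions
    start = partition_id * size
    end = (1 << SHA1_BITS) if partition_id == nb_partitions - 1 else start + size
    return start, end

def ids_in_partition(ids, partition_id, nb_partitions):
    start, end = partition_bounds(partition_id, nb_partitions)
    return [i for i in ids if start <= int.from_bytes(i, "big") < end]

ids = [bytes([n]) + b"\x00" * 19 for n in range(16)]  # 16 fake sha1s
seen = []
for p in range(4):
    seen.extend(ids_in_partition(ids, p, 4))
```

Iterating over all partitions visits every id exactly once, which is what the `expected_ids` comparisons in the tests check.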
- def test_generate_content_mimetype_get_range_limit(
+ def test_generate_content_mimetype_get_partition_full(
self, swh_indexer_storage_with_data
):
- """mimetype_get_range paginates results if limit exceeded"""
- storage, data = swh_indexer_storage_with_data
+ """get_partition for a single partition should return available ids
- indexer_configuration_id = data.tools["file"]["id"]
-
- # input the list of sha1s we want from storage
- content_ids = sorted([c["id"] for c in data.mimetypes])
- mimetypes = list(storage.content_mimetype_get(content_ids))
- assert len(mimetypes) == len(data.mimetypes)
+ """
+ storage, data = swh_indexer_storage_with_data
+ mimetypes = data.mimetypes
+ expected_ids = set([c["id"] for c in mimetypes])
+ indexer_configuration_id = mimetypes[0]["indexer_configuration_id"]
- start = content_ids[0]
- end = content_ids[-1]
- # retrieve mimetypes limited to 10 results
- actual_result = storage.content_mimetype_get_range(
- start, end, indexer_configuration_id=indexer_configuration_id, limit=10
+ actual_result = storage.content_mimetype_get_partition(
+ indexer_configuration_id, 0, 1
)
+ assert actual_result.next_page_token is None
+ actual_ids = actual_result.results
+ assert len(actual_ids) == len(expected_ids)
+ for actual_id in actual_ids:
+ assert actual_id in expected_ids
- assert actual_result
- assert set(actual_result.keys()) == {"ids", "next"}
- actual_ids = actual_result["ids"]
- actual_next = actual_result["next"]
+ def test_generate_content_mimetype_get_partition_empty(
+ self, swh_indexer_storage_with_data
+ ):
+ """get_partition when at least one of the partitions is empty"""
+ storage, data = swh_indexer_storage_with_data
+ mimetypes = data.mimetypes
+ expected_ids = set([c["id"] for c in mimetypes])
+ indexer_configuration_id = mimetypes[0]["indexer_configuration_id"]
+
+ # nb_partitions = smallest power of 2 such that at least one of
+ # the partitions is empty
+ nb_mimetypes = len(mimetypes)
+ nb_partitions = 1 << math.floor(math.log2(nb_mimetypes) + 1)
+
+ seen_ids = []
+
+ for partition_id in range(nb_partitions):
+ actual_result = storage.content_mimetype_get_partition(
+ indexer_configuration_id,
+ partition_id,
+ nb_partitions,
+ limit=nb_mimetypes + 1,
+ )
- assert len(actual_ids) == 10
- assert actual_next is not None
- assert actual_next == content_ids[10]
+ for actual_id in actual_result.results:
+ seen_ids.append(actual_id)
- expected_mimetypes = content_ids[:10]
- assert expected_mimetypes == actual_ids
+ # Limit is higher than the max number of results
+ assert actual_result.next_page_token is None
- # retrieve next part
- actual_result = storage.content_mimetype_get_range(
- start=end, end=end, indexer_configuration_id=indexer_configuration_id
- )
- assert set(actual_result.keys()) == {"ids", "next"}
- actual_ids = actual_result["ids"]
- actual_next = actual_result["next"]
+ assert set(seen_ids) == expected_ids
- assert actual_next is None
- expected_mimetypes = [content_ids[-1]]
- assert expected_mimetypes == actual_ids
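The `nb_partitions` expression in the empty-partition test picks the smallest power of two strictly greater than the number of rows, which guarantees at least one partition receives no id. A standalone restatement of that expression:

```python
import math

def next_power_of_two(n):
    # Smallest power of two strictly greater than n, matching the
    # expression used in test_generate_content_mimetype_get_partition_empty:
    #   1 << math.floor(math.log2(n) + 1)
    return 1 << math.floor(math.log2(n) + 1)

examples = {
    10: next_power_of_two(10),  # 16
    16: next_power_of_two(16),  # 32 (strictly greater, even for a power of two)
    17: next_power_of_two(17),  # 32
}
```

With more partitions than rows, at least one partition must be empty, so the test exercises the empty-result path of `get_partition`.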
+ def test_generate_content_mimetype_get_partition_with_pagination(
+ self, swh_indexer_storage_with_data
+ ):
+ """get_partition should return ids provided with pagination
+
+ """
+ storage, data = swh_indexer_storage_with_data
+ mimetypes = data.mimetypes
+ expected_ids = set([c["id"] for c in mimetypes])
+ indexer_configuration_id = mimetypes[0]["indexer_configuration_id"]
+
+ nb_partitions = 4
+
+ actual_ids = []
+ for partition_id in range(nb_partitions):
+ next_page_token = None
+ while True:
+ actual_result = storage.content_mimetype_get_partition(
+ indexer_configuration_id,
+ partition_id,
+ nb_partitions,
+ limit=2,
+ page_token=next_page_token,
+ )
+ actual_ids.extend(actual_result.results)
+ next_page_token = actual_result.next_page_token
+ if next_page_token is None:
+ break
+
+ assert len(set(actual_ids)) == len(set(expected_ids))
+ for actual_id in actual_ids:
+ assert actual_id in expected_ids
class TestIndexerStorageContentLanguage(StorageETypeTester):
"""Test Indexer Storage content_language related methods
"""
endpoint_type = "content_language"
tool_name = "pygments"
example_data = [
{"lang": "haskell",},
{"lang": "common-lisp",},
]
class TestIndexerStorageContentCTags(StorageETypeTester):
"""Test Indexer Storage content_ctags related methods
"""
endpoint_type = "content_ctags"
tool_name = "universal-ctags"
example_data = [
{
"ctags": [
{"name": "done", "kind": "variable", "line": 119, "lang": "OCaml",}
]
},
{
"ctags": [
{"name": "done", "kind": "variable", "line": 100, "lang": "Python",},
{"name": "main", "kind": "function", "line": 119, "lang": "Python",},
]
},
]
# the following tests are disabled because CTAGS behaves differently
@pytest.mark.skip
def test_add__drop_duplicate(self):
pass
@pytest.mark.skip
def test_add__update_in_place_duplicate(self):
pass
@pytest.mark.skip
def test_add__update_in_place_deadlock(self):
pass
@pytest.mark.skip
def test_add__duplicate_twice(self):
pass
@pytest.mark.skip
def test_get(self):
pass
def test_content_ctags_search(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
# 1. given
tool = data.tools["universal-ctags"]
tool_id = tool["id"]
ctag1 = {
"id": data.sha1_1,
"indexer_configuration_id": tool_id,
"ctags": [
{"name": "hello", "kind": "function", "line": 133, "lang": "Python",},
{"name": "counter", "kind": "variable", "line": 119, "lang": "Python",},
{"name": "hello", "kind": "variable", "line": 210, "lang": "Python",},
],
}
ctag2 = {
"id": data.sha1_2,
"indexer_configuration_id": tool_id,
"ctags": [
{"name": "hello", "kind": "variable", "line": 100, "lang": "C",},
{"name": "result", "kind": "variable", "line": 120, "lang": "C",},
],
}
storage.content_ctags_add([ctag1, ctag2])
# 1. when
actual_ctags = list(storage.content_ctags_search("hello", limit=1))
# 1. then
assert actual_ctags == [
{
"id": ctag1["id"],
"tool": tool,
"name": "hello",
"kind": "function",
"line": 133,
"lang": "Python",
}
]
# 2. when
actual_ctags = list(
storage.content_ctags_search("hello", limit=1, last_sha1=ctag1["id"])
)
# 2. then
assert actual_ctags == [
{
"id": ctag2["id"],
"tool": tool,
"name": "hello",
"kind": "variable",
"line": 100,
"lang": "C",
}
]
# 3. when
actual_ctags = list(storage.content_ctags_search("hello"))
# 3. then
assert actual_ctags == [
{
"id": ctag1["id"],
"tool": tool,
"name": "hello",
"kind": "function",
"line": 133,
"lang": "Python",
},
{
"id": ctag1["id"],
"tool": tool,
"name": "hello",
"kind": "variable",
"line": 210,
"lang": "Python",
},
{
"id": ctag2["id"],
"tool": tool,
"name": "hello",
"kind": "variable",
"line": 100,
"lang": "C",
},
]
# 4. when
actual_ctags = list(storage.content_ctags_search("counter"))
# then
assert actual_ctags == [
{
"id": ctag1["id"],
"tool": tool,
"name": "counter",
"kind": "variable",
"line": 119,
"lang": "Python",
}
]
# 5. when
actual_ctags = list(storage.content_ctags_search("result", limit=1))
# then
assert actual_ctags == [
{
"id": ctag2["id"],
"tool": tool,
"name": "result",
"kind": "variable",
"line": 120,
"lang": "C",
}
]
def test_content_ctags_search_no_result(self, swh_indexer_storage):
storage = swh_indexer_storage
actual_ctags = list(storage.content_ctags_search("counter"))
assert not actual_ctags
def test_content_ctags_add__add_new_ctags_added(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool = data.tools["universal-ctags"]
tool_id = tool["id"]
ctag_v1 = {
"id": data.sha1_2,
"indexer_configuration_id": tool_id,
"ctags": [
{"name": "done", "kind": "variable", "line": 100, "lang": "Scheme",}
],
}
# given
storage.content_ctags_add([ctag_v1])
storage.content_ctags_add([ctag_v1]) # conflict does nothing
# when
actual_ctags = list(storage.content_ctags_get([data.sha1_2]))
# then
expected_ctags = [
{
"id": data.sha1_2,
"name": "done",
"kind": "variable",
"line": 100,
"lang": "Scheme",
"tool": tool,
}
]
assert actual_ctags == expected_ctags
# given
ctag_v2 = ctag_v1.copy()
ctag_v2.update(
{
"ctags": [
{"name": "defn", "kind": "function", "line": 120, "lang": "Scheme",}
]
}
)
storage.content_ctags_add([ctag_v2])
expected_ctags = [
{
"id": data.sha1_2,
"name": "done",
"kind": "variable",
"line": 100,
"lang": "Scheme",
"tool": tool,
},
{
"id": data.sha1_2,
"name": "defn",
"kind": "function",
"line": 120,
"lang": "Scheme",
"tool": tool,
},
]
actual_ctags = list(storage.content_ctags_get([data.sha1_2]))
assert actual_ctags == expected_ctags
def test_content_ctags_add__update_in_place(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
# given
tool = data.tools["universal-ctags"]
tool_id = tool["id"]
ctag_v1 = {
"id": data.sha1_2,
"indexer_configuration_id": tool_id,
"ctags": [
{"name": "done", "kind": "variable", "line": 100, "lang": "Scheme",}
],
}
# given
storage.content_ctags_add([ctag_v1])
# when
actual_ctags = list(storage.content_ctags_get([data.sha1_2]))
# then
expected_ctags = [
{
"id": data.sha1_2,
"name": "done",
"kind": "variable",
"line": 100,
"lang": "Scheme",
"tool": tool,
}
]
assert actual_ctags == expected_ctags
# given
ctag_v2 = ctag_v1.copy()
ctag_v2.update(
{
"ctags": [
{
"name": "done",
"kind": "variable",
"line": 100,
"lang": "Scheme",
},
{
"name": "defn",
"kind": "function",
"line": 120,
"lang": "Scheme",
},
]
}
)
storage.content_ctags_add([ctag_v2], conflict_update=True)
actual_ctags = list(storage.content_ctags_get([data.sha1_2]))
# ctag did change as the v2 was used to overwrite v1
expected_ctags = [
{
"id": data.sha1_2,
"name": "done",
"kind": "variable",
"line": 100,
"lang": "Scheme",
"tool": tool,
},
{
"id": data.sha1_2,
"name": "defn",
"kind": "function",
"line": 120,
"lang": "Scheme",
"tool": tool,
},
]
assert actual_ctags == expected_ctags
class TestIndexerStorageContentMetadata(StorageETypeTester):
"""Test Indexer Storage content_metadata related methods
"""
tool_name = "swh-metadata-detector"
endpoint_type = "content_metadata"
example_data = [
{
"metadata": {
"other": {},
"codeRepository": {
"type": "git",
"url": "https://github.com/moranegg/metadata_test",
},
"description": "Simple package.json test for indexer",
"name": "test_metadata",
"version": "0.0.1",
},
},
{"metadata": {"other": {}, "name": "test_metadata", "version": "0.0.1"},},
]
class TestIndexerStorageRevisionIntrinsicMetadata(StorageETypeTester):
"""Test Indexer Storage revision_intrinsic_metadata related methods
"""
tool_name = "swh-metadata-detector"
endpoint_type = "revision_intrinsic_metadata"
example_data = [
{
"metadata": {
"other": {},
"codeRepository": {
"type": "git",
"url": "https://github.com/moranegg/metadata_test",
},
"description": "Simple package.json test for indexer",
"name": "test_metadata",
"version": "0.0.1",
},
"mappings": ["mapping1"],
},
{
"metadata": {"other": {}, "name": "test_metadata", "version": "0.0.1"},
"mappings": ["mapping2"],
},
]
def test_revision_intrinsic_metadata_delete(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool = data.tools[self.tool_name]
query = [data.sha1_2, data.sha1_1]
data1 = {
"id": data.sha1_2,
**self.example_data[0],
"indexer_configuration_id": tool["id"],
}
# when
summary = endpoint(storage, etype, "add")([data1])
assert summary == expected_summary(1, etype)
summary2 = endpoint(storage, etype, "delete")(
[{"id": data.sha1_2, "indexer_configuration_id": tool["id"],}]
)
assert summary2 == expected_summary(1, etype, "del")
# then
actual_data = list(endpoint(storage, etype, "get")(query))
# then
assert not actual_data
def test_revision_intrinsic_metadata_delete_nonexisting(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
etype = self.endpoint_type
tool = data.tools[self.tool_name]
endpoint(storage, etype, "delete")(
[{"id": data.sha1_2, "indexer_configuration_id": tool["id"],}]
)
class TestIndexerStorageContentFossologyLicence:
def test_content_fossology_license_add__new_license_added(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool = data.tools["nomos"]
tool_id = tool["id"]
license_v1 = {
"id": data.sha1_1,
"licenses": ["Apache-2.0"],
"indexer_configuration_id": tool_id,
}
# given
storage.content_fossology_license_add([license_v1])
# conflict does nothing
storage.content_fossology_license_add([license_v1])
# when
actual_licenses = list(storage.content_fossology_license_get([data.sha1_1]))
# then
expected_license = {data.sha1_1: [{"licenses": ["Apache-2.0"], "tool": tool,}]}
assert actual_licenses == [expected_license]
# given
license_v2 = license_v1.copy()
license_v2.update(
{"licenses": ["BSD-2-Clause"],}
)
storage.content_fossology_license_add([license_v2])
actual_licenses = list(storage.content_fossology_license_get([data.sha1_1]))
expected_license = {
data.sha1_1: [{"licenses": ["Apache-2.0", "BSD-2-Clause"], "tool": tool}]
}
# license did not change as the v2 was dropped.
assert actual_licenses == [expected_license]
- def test_generate_content_fossology_license_get_range_limit_none(
+ def test_generate_content_fossology_license_get_partition_failure(
self, swh_indexer_storage_with_data
):
+ """get_partition with a None limit should raise IndexerStorageArgumentException"""
storage, data = swh_indexer_storage_with_data
- """license_get_range call with wrong limit input should fail"""
- with pytest.raises(IndexerStorageArgumentException) as e:
- storage.content_fossology_license_get_range(
- start=None, end=None, indexer_configuration_id=None, limit=None
+ indexer_configuration_id = None
+ with pytest.raises(
+ IndexerStorageArgumentException, match="limit should not be None"
+ ):
+ storage.content_fossology_license_get_partition(
+ indexer_configuration_id, 0, 3, limit=None,
)
- assert e.value.args == ("limit should not be None",)
-
- def test_generate_content_fossology_license_get_range_no_limit(
+ def test_generate_content_fossology_license_get_partition_no_limit(
self, swh_indexer_storage_with_data
):
- """license_get_range returns licenses within range provided"""
+ """get_partition without a limit should return all results"""
storage, data = swh_indexer_storage_with_data
# craft some consistent mimetypes
fossology_licenses = data.fossology_licenses
mimetypes = prepare_mimetypes_from(fossology_licenses)
+ indexer_configuration_id = fossology_licenses[0]["indexer_configuration_id"]
storage.content_mimetype_add(mimetypes, conflict_update=True)
# add fossology_licenses to storage
storage.content_fossology_license_add(fossology_licenses)
# All ids from the db
- content_ids = sorted([c["id"] for c in fossology_licenses])
+ expected_ids = set([c["id"] for c in fossology_licenses])
- start = content_ids[0]
- end = content_ids[-1]
+ assert len(fossology_licenses) == 10
+ assert len(mimetypes) == 10
+ nb_partitions = 4
- # retrieve fossology_licenses
- tool_id = fossology_licenses[0]["indexer_configuration_id"]
- actual_result = storage.content_fossology_license_get_range(
- start, end, indexer_configuration_id=tool_id
- )
+ actual_ids = []
+ for partition_id in range(nb_partitions):
- actual_ids = actual_result["ids"]
- actual_next = actual_result["next"]
+ actual_result = storage.content_fossology_license_get_partition(
+ indexer_configuration_id, partition_id, nb_partitions
+ )
+ assert actual_result.next_page_token is None
+ actual_ids.extend(actual_result.results)
- assert len(fossology_licenses) == len(actual_ids)
- assert actual_next is None
- assert content_ids == actual_ids
+ assert len(set(actual_ids)) == len(expected_ids)
+ for actual_id in actual_ids:
+ assert actual_id in expected_ids
- def test_generate_content_fossology_license_get_range_no_limit_with_filter(
+ def test_generate_content_fossology_license_get_partition_full(
self, swh_indexer_storage_with_data
):
- """This filters non textual, then returns results within range"""
- storage, data = swh_indexer_storage_with_data
- fossology_licenses = data.fossology_licenses
- mimetypes = data.mimetypes
+ """get_partition for a single partition should return all available ids
+ """
+ storage, data = swh_indexer_storage_with_data
# craft some consistent mimetypes
- _mimetypes = prepare_mimetypes_from(fossology_licenses)
- # add binary mimetypes which will get filtered out in results
- for m in mimetypes:
- _mimetypes.append(
- {"mimetype": "binary", **m,}
- )
+ fossology_licenses = data.fossology_licenses
+ mimetypes = prepare_mimetypes_from(fossology_licenses)
+ indexer_configuration_id = fossology_licenses[0]["indexer_configuration_id"]
- storage.content_mimetype_add(_mimetypes, conflict_update=True)
+ storage.content_mimetype_add(mimetypes, conflict_update=True)
# add fossology_licenses to storage
storage.content_fossology_license_add(fossology_licenses)
# All ids from the db
- content_ids = sorted([c["id"] for c in fossology_licenses])
+ expected_ids = set([c["id"] for c in fossology_licenses])
- start = content_ids[0]
- end = content_ids[-1]
-
- # retrieve fossology_licenses
- tool_id = fossology_licenses[0]["indexer_configuration_id"]
- actual_result = storage.content_fossology_license_get_range(
- start, end, indexer_configuration_id=tool_id
+ actual_result = storage.content_fossology_license_get_partition(
+ indexer_configuration_id, 0, 1
)
+ assert actual_result.next_page_token is None
+ actual_ids = actual_result.results
+ assert len(set(actual_ids)) == len(expected_ids)
+ for actual_id in actual_ids:
+ assert actual_id in expected_ids
- actual_ids = actual_result["ids"]
- actual_next = actual_result["next"]
-
- assert len(fossology_licenses) == len(actual_ids)
- assert actual_next is None
- assert content_ids == actual_ids
-
- def test_generate_fossology_license_get_range_limit(
+ def test_generate_content_fossology_license_get_partition_empty(
self, swh_indexer_storage_with_data
):
- """fossology_license_get_range paginates results if limit exceeded"""
+ """get_partition when at least one of the partitions is empty"""
storage, data = swh_indexer_storage_with_data
- fossology_licenses = data.fossology_licenses
-
# craft some consistent mimetypes
+ fossology_licenses = data.fossology_licenses
mimetypes = prepare_mimetypes_from(fossology_licenses)
+ indexer_configuration_id = fossology_licenses[0]["indexer_configuration_id"]
- # add fossology_licenses to storage
storage.content_mimetype_add(mimetypes, conflict_update=True)
+ # add fossology_licenses to storage
storage.content_fossology_license_add(fossology_licenses)
- # input the list of sha1s we want from storage
- content_ids = sorted([c["id"] for c in fossology_licenses])
- start = content_ids[0]
- end = content_ids[-1]
+ # All ids from the db
+ expected_ids = set([c["id"] for c in fossology_licenses])
- # retrieve fossology_licenses limited to 3 results
- limited_results = len(fossology_licenses) - 1
- tool_id = fossology_licenses[0]["indexer_configuration_id"]
- actual_result = storage.content_fossology_license_get_range(
- start, end, indexer_configuration_id=tool_id, limit=limited_results
- )
+ # nb_partitions = smallest power of 2 such that at least one of
+ # the partitions is empty
+ nb_licenses = len(fossology_licenses)
+ nb_partitions = 1 << math.floor(math.log2(nb_licenses) + 1)
- actual_ids = actual_result["ids"]
- actual_next = actual_result["next"]
+ seen_ids = []
- assert limited_results == len(actual_ids)
- assert actual_next is not None
- assert actual_next == content_ids[-1]
+ for partition_id in range(nb_partitions):
+ actual_result = storage.content_fossology_license_get_partition(
+ indexer_configuration_id,
+ partition_id,
+ nb_partitions,
+ limit=nb_licenses + 1,
+ )
- expected_fossology_licenses = content_ids[:-1]
- assert expected_fossology_licenses == actual_ids
+ for actual_id in actual_result.results:
+ seen_ids.append(actual_id)
- # retrieve next part
- actual_results2 = storage.content_fossology_license_get_range(
- start=end, end=end, indexer_configuration_id=tool_id
- )
- actual_ids2 = actual_results2["ids"]
- actual_next2 = actual_results2["next"]
+ # Limit is higher than the max number of results
+ assert actual_result.next_page_token is None
- assert actual_next2 is None
- expected_fossology_licenses2 = [content_ids[-1]]
- assert expected_fossology_licenses2 == actual_ids2
+ assert set(seen_ids) == expected_ids
+
+ def test_generate_content_fossology_license_get_partition_with_pagination(
+ self, swh_indexer_storage_with_data
+ ):
+ """get_partition should return ids provided with pagination
+
+ """
+ storage, data = swh_indexer_storage_with_data
+ # craft some consistent mimetypes
+ fossology_licenses = data.fossology_licenses
+ mimetypes = prepare_mimetypes_from(fossology_licenses)
+ indexer_configuration_id = fossology_licenses[0]["indexer_configuration_id"]
+
+ storage.content_mimetype_add(mimetypes, conflict_update=True)
+ # add fossology_licenses to storage
+ storage.content_fossology_license_add(fossology_licenses)
+
+ # All ids from the db
+ expected_ids = [c["id"] for c in fossology_licenses]
+
+ nb_partitions = 4
+
+ actual_ids = []
+ for partition_id in range(nb_partitions):
+ next_page_token = None
+ while True:
+ actual_result = storage.content_fossology_license_get_partition(
+ indexer_configuration_id,
+ partition_id,
+ nb_partitions,
+ limit=2,
+ page_token=next_page_token,
+ )
+ actual_ids.extend(actual_result.results)
+ next_page_token = actual_result.next_page_token
+ if next_page_token is None:
+ break
+
+ assert len(set(actual_ids)) == len(set(expected_ids))
+ for actual_id in actual_ids:
+ assert actual_id in expected_ids
class TestIndexerStorageOriginIntrinsicMetadata:
def test_origin_intrinsic_metadata_get(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
metadata = {
"version": None,
"name": None,
}
metadata_rev = {
"id": data.revision_id_2,
"metadata": metadata,
"mappings": ["mapping1"],
"indexer_configuration_id": tool_id,
}
metadata_origin = {
"id": data.origin_url_1,
"metadata": metadata,
"indexer_configuration_id": tool_id,
"mappings": ["mapping1"],
"from_revision": data.revision_id_2,
}
# when
storage.revision_intrinsic_metadata_add([metadata_rev])
storage.origin_intrinsic_metadata_add([metadata_origin])
# then
actual_metadata = list(
storage.origin_intrinsic_metadata_get([data.origin_url_1, "no://where"])
)
expected_metadata = [
{
"id": data.origin_url_1,
"metadata": metadata,
"tool": data.tools["swh-metadata-detector"],
"from_revision": data.revision_id_2,
"mappings": ["mapping1"],
}
]
assert actual_metadata == expected_metadata
def test_origin_intrinsic_metadata_delete(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
metadata = {
"version": None,
"name": None,
}
metadata_rev = {
"id": data.revision_id_2,
"metadata": metadata,
"mappings": ["mapping1"],
"indexer_configuration_id": tool_id,
}
metadata_origin = {
"id": data.origin_url_1,
"metadata": metadata,
"indexer_configuration_id": tool_id,
"mappings": ["mapping1"],
"from_revision": data.revision_id_2,
}
metadata_origin2 = metadata_origin.copy()
metadata_origin2["id"] = data.origin_url_2
# when
storage.revision_intrinsic_metadata_add([metadata_rev])
storage.origin_intrinsic_metadata_add([metadata_origin, metadata_origin2])
storage.origin_intrinsic_metadata_delete(
[{"id": data.origin_url_1, "indexer_configuration_id": tool_id}]
)
# then
actual_metadata = list(
storage.origin_intrinsic_metadata_get(
[data.origin_url_1, data.origin_url_2, "no://where"]
)
)
for item in actual_metadata:
item["indexer_configuration_id"] = item.pop("tool")["id"]
assert actual_metadata == [metadata_origin2]
def test_origin_intrinsic_metadata_delete_nonexisting(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
tool_id = data.tools["swh-metadata-detector"]["id"]
storage.origin_intrinsic_metadata_delete(
[{"id": data.origin_url_1, "indexer_configuration_id": tool_id}]
)
def test_origin_intrinsic_metadata_add_drop_duplicate(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
metadata_v1 = {
"version": None,
"name": None,
}
metadata_rev_v1 = {
"id": data.revision_id_1,
"metadata": metadata_v1.copy(),
"mappings": [],
"indexer_configuration_id": tool_id,
}
metadata_origin_v1 = {
"id": data.origin_url_1,
"metadata": metadata_v1.copy(),
"indexer_configuration_id": tool_id,
"mappings": [],
"from_revision": data.revision_id_1,
}
# given
storage.revision_intrinsic_metadata_add([metadata_rev_v1])
storage.origin_intrinsic_metadata_add([metadata_origin_v1])
# when
actual_metadata = list(
storage.origin_intrinsic_metadata_get([data.origin_url_1, "no://where"])
)
expected_metadata_v1 = [
{
"id": data.origin_url_1,
"metadata": metadata_v1,
"tool": data.tools["swh-metadata-detector"],
"from_revision": data.revision_id_1,
"mappings": [],
}
]
assert actual_metadata == expected_metadata_v1
# given
metadata_v2 = metadata_v1.copy()
metadata_v2.update(
{"name": "test_metadata", "author": "MG",}
)
metadata_rev_v2 = metadata_rev_v1.copy()
metadata_origin_v2 = metadata_origin_v1.copy()
metadata_rev_v2["metadata"] = metadata_v2
metadata_origin_v2["metadata"] = metadata_v2
storage.revision_intrinsic_metadata_add([metadata_rev_v2])
storage.origin_intrinsic_metadata_add([metadata_origin_v2])
# then
actual_metadata = list(
storage.origin_intrinsic_metadata_get([data.origin_url_1])
)
# metadata did not change as the v2 was dropped.
assert actual_metadata == expected_metadata_v1
def test_origin_intrinsic_metadata_add_update_in_place_duplicate(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
metadata_v1 = {
"version": None,
"name": None,
}
metadata_rev_v1 = {
"id": data.revision_id_2,
"metadata": metadata_v1,
"mappings": [],
"indexer_configuration_id": tool_id,
}
metadata_origin_v1 = {
"id": data.origin_url_1,
"metadata": metadata_v1.copy(),
"indexer_configuration_id": tool_id,
"mappings": [],
"from_revision": data.revision_id_2,
}
# given
storage.revision_intrinsic_metadata_add([metadata_rev_v1])
storage.origin_intrinsic_metadata_add([metadata_origin_v1])
# when
actual_metadata = list(
storage.origin_intrinsic_metadata_get([data.origin_url_1])
)
# then
expected_metadata_v1 = [
{
"id": data.origin_url_1,
"metadata": metadata_v1,
"tool": data.tools["swh-metadata-detector"],
"from_revision": data.revision_id_2,
"mappings": [],
}
]
assert actual_metadata == expected_metadata_v1
# given
metadata_v2 = metadata_v1.copy()
metadata_v2.update(
{"name": "test_update_duplicated_metadata", "author": "MG",}
)
metadata_rev_v2 = metadata_rev_v1.copy()
metadata_origin_v2 = metadata_origin_v1.copy()
metadata_rev_v2["metadata"] = metadata_v2
metadata_origin_v2 = {
"id": data.origin_url_1,
"metadata": metadata_v2.copy(),
"indexer_configuration_id": tool_id,
"mappings": ["npm"],
"from_revision": data.revision_id_1,
}
storage.revision_intrinsic_metadata_add([metadata_rev_v2], conflict_update=True)
storage.origin_intrinsic_metadata_add(
[metadata_origin_v2], conflict_update=True
)
actual_metadata = list(
storage.origin_intrinsic_metadata_get([data.origin_url_1])
)
expected_metadata_v2 = [
{
"id": data.origin_url_1,
"metadata": metadata_v2,
"tool": data.tools["swh-metadata-detector"],
"from_revision": data.revision_id_1,
"mappings": ["npm"],
}
]
# metadata did change as the v2 was used to overwrite v1
assert actual_metadata == expected_metadata_v2
def test_origin_intrinsic_metadata_add__update_in_place_deadlock(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
ids = list(range(10))
example_data1 = {
"metadata": {"version": None, "name": None,},
"mappings": [],
}
example_data2 = {
"metadata": {"version": "v1.1.1", "name": "foo",},
"mappings": [],
}
metadata_rev_v1 = {
"id": data.revision_id_2,
"metadata": {"version": None, "name": None,},
"mappings": [],
"indexer_configuration_id": tool_id,
}
data_v1 = [
{
"id": "file:///tmp/origin%d" % id_,
"from_revision": data.revision_id_2,
**example_data1,
"indexer_configuration_id": tool_id,
}
for id_ in ids
]
data_v2 = [
{
"id": "file:///tmp/origin%d" % id_,
"from_revision": data.revision_id_2,
**example_data2,
"indexer_configuration_id": tool_id,
}
for id_ in ids
]
# Remove one item from each, so that both queries have to succeed for
# all items to be in the DB.
data_v2a = data_v2[1:]
data_v2b = list(reversed(data_v2[0:-1]))
# given
storage.revision_intrinsic_metadata_add([metadata_rev_v1])
storage.origin_intrinsic_metadata_add(data_v1)
# when
origins = ["file:///tmp/origin%d" % i for i in ids]
actual_data = list(storage.origin_intrinsic_metadata_get(origins))
expected_data_v1 = [
{
"id": "file:///tmp/origin%d" % id_,
"from_revision": data.revision_id_2,
**example_data1,
"tool": data.tools["swh-metadata-detector"],
}
for id_ in ids
]
# then
assert actual_data == expected_data_v1
# given
def f1():
storage.origin_intrinsic_metadata_add(data_v2a, conflict_update=True)
def f2():
storage.origin_intrinsic_metadata_add(data_v2b, conflict_update=True)
t1 = threading.Thread(target=f1)
t2 = threading.Thread(target=f2)
t2.start()
t1.start()
t1.join()
t2.join()
actual_data = list(storage.origin_intrinsic_metadata_get(origins))
expected_data_v2 = [
{
"id": "file:///tmp/origin%d" % id_,
"from_revision": data.revision_id_2,
**example_data2,
"tool": data.tools["swh-metadata-detector"],
}
for id_ in ids
]
assert len(actual_data) == len(expected_data_v2)
assert sorted(actual_data, key=lambda x: x["id"]) == expected_data_v2
def test_origin_intrinsic_metadata_add__duplicate_twice(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
metadata = {
"developmentStatus": None,
"name": None,
}
metadata_rev = {
"id": data.revision_id_2,
"metadata": metadata,
"mappings": ["mapping1"],
"indexer_configuration_id": tool_id,
}
metadata_origin = {
"id": data.origin_url_1,
"metadata": metadata,
"indexer_configuration_id": tool_id,
"mappings": ["mapping1"],
"from_revision": data.revision_id_2,
}
# when
storage.revision_intrinsic_metadata_add([metadata_rev])
with pytest.raises(DuplicateId):
storage.origin_intrinsic_metadata_add([metadata_origin, metadata_origin])
def test_origin_intrinsic_metadata_search_fulltext(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
metadata1 = {
"author": "John Doe",
}
metadata1_rev = {
"id": data.revision_id_1,
"metadata": metadata1,
"mappings": [],
"indexer_configuration_id": tool_id,
}
metadata1_origin = {
"id": data.origin_url_1,
"metadata": metadata1,
"mappings": [],
"indexer_configuration_id": tool_id,
"from_revision": data.revision_id_1,
}
metadata2 = {
"author": "Jane Doe",
}
metadata2_rev = {
"id": data.revision_id_2,
"metadata": metadata2,
"mappings": [],
"indexer_configuration_id": tool_id,
}
metadata2_origin = {
"id": data.origin_url_2,
"metadata": metadata2,
"mappings": [],
"indexer_configuration_id": tool_id,
"from_revision": data.revision_id_2,
}
# when
storage.revision_intrinsic_metadata_add([metadata1_rev])
storage.origin_intrinsic_metadata_add([metadata1_origin])
storage.revision_intrinsic_metadata_add([metadata2_rev])
storage.origin_intrinsic_metadata_add([metadata2_origin])
# then
search = storage.origin_intrinsic_metadata_search_fulltext
assert set([res["id"] for res in search(["Doe"])]) == set(
[data.origin_url_1, data.origin_url_2]
)
assert [res["id"] for res in search(["John", "Doe"])] == [data.origin_url_1]
assert [res["id"] for res in search(["John"])] == [data.origin_url_1]
assert not list(search(["John", "Jane"]))
def test_origin_intrinsic_metadata_search_fulltext_rank(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
# given
tool_id = data.tools["swh-metadata-detector"]["id"]
# The following authors have "Random Person" to add some more content
# to the JSON data, to work around normalization quirks when there
# are few words (rank/(1+ln(nb_words)) is very sensitive to nb_words
# for small values of nb_words).
metadata1 = {"author": ["Random Person", "John Doe", "Jane Doe",]}
metadata1_rev = {
"id": data.revision_id_1,
"metadata": metadata1,
"mappings": [],
"indexer_configuration_id": tool_id,
}
metadata1_origin = {
"id": data.origin_url_1,
"metadata": metadata1,
"mappings": [],
"indexer_configuration_id": tool_id,
"from_revision": data.revision_id_1,
}
metadata2 = {"author": ["Random Person", "Jane Doe",]}
metadata2_rev = {
"id": data.revision_id_2,
"metadata": metadata2,
"mappings": [],
"indexer_configuration_id": tool_id,
}
metadata2_origin = {
"id": data.origin_url_2,
"metadata": metadata2,
"mappings": [],
"indexer_configuration_id": tool_id,
"from_revision": data.revision_id_2,
}
# when
storage.revision_intrinsic_metadata_add([metadata1_rev])
storage.origin_intrinsic_metadata_add([metadata1_origin])
storage.revision_intrinsic_metadata_add([metadata2_rev])
storage.origin_intrinsic_metadata_add([metadata2_origin])
# then
search = storage.origin_intrinsic_metadata_search_fulltext
assert [res["id"] for res in search(["Doe"])] == [
data.origin_url_1,
data.origin_url_2,
]
assert [res["id"] for res in search(["Doe"], limit=1)] == [data.origin_url_1]
assert [res["id"] for res in search(["John"])] == [data.origin_url_1]
assert [res["id"] for res in search(["Jane"])] == [
data.origin_url_2,
data.origin_url_1,
]
assert [res["id"] for res in search(["John", "Jane"])] == [data.origin_url_1]
def _fill_origin_intrinsic_metadata(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
tool1_id = data.tools["swh-metadata-detector"]["id"]
tool2_id = data.tools["swh-metadata-detector2"]["id"]
metadata1 = {
"@context": "foo",
"author": "John Doe",
}
metadata1_rev = {
"id": data.revision_id_1,
"metadata": metadata1,
"mappings": ["npm"],
"indexer_configuration_id": tool1_id,
}
metadata1_origin = {
"id": data.origin_url_1,
"metadata": metadata1,
"mappings": ["npm"],
"indexer_configuration_id": tool1_id,
"from_revision": data.revision_id_1,
}
metadata2 = {
"@context": "foo",
"author": "Jane Doe",
}
metadata2_rev = {
"id": data.revision_id_2,
"metadata": metadata2,
"mappings": ["npm", "gemspec"],
"indexer_configuration_id": tool2_id,
}
metadata2_origin = {
"id": data.origin_url_2,
"metadata": metadata2,
"mappings": ["npm", "gemspec"],
"indexer_configuration_id": tool2_id,
"from_revision": data.revision_id_2,
}
metadata3 = {
"@context": "foo",
}
metadata3_rev = {
"id": data.revision_id_3,
"metadata": metadata3,
"mappings": ["npm", "gemspec"],
"indexer_configuration_id": tool2_id,
}
metadata3_origin = {
"id": data.origin_url_3,
"metadata": metadata3,
"mappings": ["pkg-info"],
"indexer_configuration_id": tool2_id,
"from_revision": data.revision_id_3,
}
storage.revision_intrinsic_metadata_add([metadata1_rev])
storage.origin_intrinsic_metadata_add([metadata1_origin])
storage.revision_intrinsic_metadata_add([metadata2_rev])
storage.origin_intrinsic_metadata_add([metadata2_origin])
storage.revision_intrinsic_metadata_add([metadata3_rev])
storage.origin_intrinsic_metadata_add([metadata3_origin])
def test_origin_intrinsic_metadata_search_by_producer(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
self._fill_origin_intrinsic_metadata(swh_indexer_storage_with_data)
tool1 = data.tools["swh-metadata-detector"]
tool2 = data.tools["swh-metadata-detector2"]
endpoint = storage.origin_intrinsic_metadata_search_by_producer
# test pagination
# no 'page_token' param, return all origins
result = endpoint(ids_only=True)
assert result["origins"] == [
data.origin_url_1,
data.origin_url_2,
data.origin_url_3,
]
assert "next_page_token" not in result
# 'page_token' is < than origin_1, return everything
result = endpoint(page_token=data.origin_url_1[:-1], ids_only=True)
assert result["origins"] == [
data.origin_url_1,
data.origin_url_2,
data.origin_url_3,
]
assert "next_page_token" not in result
# 'page_token' is origin_3, return nothing
result = endpoint(page_token=data.origin_url_3, ids_only=True)
assert not result["origins"]
assert "next_page_token" not in result
# test limit argument
result = endpoint(page_token=data.origin_url_1[:-1], limit=2, ids_only=True)
assert result["origins"] == [data.origin_url_1, data.origin_url_2]
assert result["next_page_token"] == result["origins"][-1]
result = endpoint(page_token=data.origin_url_1, limit=2, ids_only=True)
assert result["origins"] == [data.origin_url_2, data.origin_url_3]
assert "next_page_token" not in result
result = endpoint(page_token=data.origin_url_2, limit=2, ids_only=True)
assert result["origins"] == [data.origin_url_3]
assert "next_page_token" not in result
# test mappings filtering
result = endpoint(mappings=["npm"], ids_only=True)
assert result["origins"] == [data.origin_url_1, data.origin_url_2]
assert "next_page_token" not in result
result = endpoint(mappings=["npm", "gemspec"], ids_only=True)
assert result["origins"] == [data.origin_url_1, data.origin_url_2]
assert "next_page_token" not in result
result = endpoint(mappings=["gemspec"], ids_only=True)
assert result["origins"] == [data.origin_url_2]
assert "next_page_token" not in result
result = endpoint(mappings=["pkg-info"], ids_only=True)
assert result["origins"] == [data.origin_url_3]
assert "next_page_token" not in result
result = endpoint(mappings=["foobar"], ids_only=True)
assert not result["origins"]
assert "next_page_token" not in result
# test pagination + mappings
result = endpoint(mappings=["npm"], limit=1, ids_only=True)
assert result["origins"] == [data.origin_url_1]
assert result["next_page_token"] == result["origins"][-1]
# test tool filtering
result = endpoint(tool_ids=[tool1["id"]], ids_only=True)
assert result["origins"] == [data.origin_url_1]
assert "next_page_token" not in result
result = endpoint(tool_ids=[tool2["id"]], ids_only=True)
assert sorted(result["origins"]) == [data.origin_url_2, data.origin_url_3]
assert "next_page_token" not in result
result = endpoint(tool_ids=[tool1["id"], tool2["id"]], ids_only=True)
assert sorted(result["origins"]) == [
data.origin_url_1,
data.origin_url_2,
data.origin_url_3,
]
assert "next_page_token" not in result
# test ids_only=False
assert endpoint(mappings=["gemspec"])["origins"] == [
{
"id": data.origin_url_2,
"metadata": {"@context": "foo", "author": "Jane Doe",},
"mappings": ["npm", "gemspec"],
"tool": tool2,
"from_revision": data.revision_id_2,
}
]
def test_origin_intrinsic_metadata_stats(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
self._fill_origin_intrinsic_metadata(swh_indexer_storage_with_data)
result = storage.origin_intrinsic_metadata_stats()
assert result == {
"per_mapping": {
"gemspec": 1,
"npm": 2,
"pkg-info": 1,
"codemeta": 0,
"maven": 0,
},
"total": 3,
"non_empty": 2,
}
class TestIndexerStorageIndexerCondifuration:
def test_indexer_configuration_add(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
tool = {
"tool_name": "some-unknown-tool",
"tool_version": "some-version",
"tool_configuration": {"debian-package": "some-package"},
}
actual_tool = storage.indexer_configuration_get(tool)
assert actual_tool is None # does not exist
# add it
actual_tools = list(storage.indexer_configuration_add([tool]))
assert len(actual_tools) == 1
actual_tool = actual_tools[0]
assert actual_tool is not None # now it exists
new_id = actual_tool.pop("id")
assert actual_tool == tool
actual_tools2 = list(storage.indexer_configuration_add([tool]))
actual_tool2 = actual_tools2[0]
assert actual_tool2 is not None # now it exists
new_id2 = actual_tool2.pop("id")
assert new_id == new_id2
assert actual_tool == actual_tool2
def test_indexer_configuration_add_multiple(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
tool = {
"tool_name": "some-unknown-tool",
"tool_version": "some-version",
"tool_configuration": {"debian-package": "some-package"},
}
actual_tools = list(storage.indexer_configuration_add([tool]))
assert len(actual_tools) == 1
new_tools = [
tool,
{
"tool_name": "yet-another-tool",
"tool_version": "version",
"tool_configuration": {},
},
]
actual_tools = list(storage.indexer_configuration_add(new_tools))
assert len(actual_tools) == 2
# order not guaranteed, so we iterate over results to check
for tool in actual_tools:
_id = tool.pop("id")
assert _id is not None
assert tool in new_tools
def test_indexer_configuration_get_missing(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
tool = {
"tool_name": "unknown-tool",
"tool_version": "3.1.0rc2-31-ga2cbb8c",
"tool_configuration": {"command_line": "nomossa <filepath>"},
}
actual_tool = storage.indexer_configuration_get(tool)
assert actual_tool is None
def test_indexer_configuration_get(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
tool = {
"tool_name": "nomos",
"tool_version": "3.1.0rc2-31-ga2cbb8c",
"tool_configuration": {"command_line": "nomossa <filepath>"},
}
actual_tool = storage.indexer_configuration_get(tool)
assert actual_tool
expected_tool = tool.copy()
del actual_tool["id"]
assert expected_tool == actual_tool
def test_indexer_configuration_metadata_get_missing_context(
self, swh_indexer_storage_with_data
):
storage, data = swh_indexer_storage_with_data
tool = {
"tool_name": "swh-metadata-translator",
"tool_version": "0.0.1",
"tool_configuration": {"context": "unknown-context"},
}
actual_tool = storage.indexer_configuration_get(tool)
assert actual_tool is None
def test_indexer_configuration_metadata_get(self, swh_indexer_storage_with_data):
storage, data = swh_indexer_storage_with_data
tool = {
"tool_name": "swh-metadata-translator",
"tool_version": "0.0.1",
"tool_configuration": {"type": "local", "context": "NpmMapping"},
}
storage.indexer_configuration_add([tool])
actual_tool = storage.indexer_configuration_get(tool)
assert actual_tool
expected_tool = tool.copy()
expected_tool["id"] = actual_tool["id"]
assert expected_tool == actual_tool
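The new `get_partition` tests above all follow the same retrieval pattern: split the id space into `nb_partitions` buckets, then drain each bucket page by page until `next_page_token` is `None`. The following is a minimal, self-contained sketch of that pattern; `get_partition` and `PagedResult` here are stand-ins modeled on `content_fossology_license_get_partition` and the result shape the tests use, not the actual swh.indexer implementation (partition assignment by `id % nb_partitions` is an assumption for illustration).

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PagedResult:
    """Mimics the (results, next_page_token) shape asserted in the tests."""
    results: List[int]
    next_page_token: Optional[int]


def get_partition(
    ids: List[int],
    partition_id: int,
    nb_partitions: int,
    page_token: Optional[int] = None,
    limit: int = 2,
) -> PagedResult:
    """Return one page of the ids falling into the given partition.

    Hypothetical bucketing: an id belongs to partition ``id % nb_partitions``.
    """
    selected = sorted(i for i in ids if i % nb_partitions == partition_id)
    if page_token is not None:
        # resume after the last id returned by the previous page
        selected = [i for i in selected if i > page_token]
    page, rest = selected[:limit], selected[limit:]
    return PagedResult(page, page[-1] if rest else None)


def iter_all_ids(ids: List[int], nb_partitions: int) -> List[int]:
    """Drain every partition, following next_page_token until exhausted,
    exactly as the `_with_pagination` test does."""
    collected: List[int] = []
    for partition_id in range(nb_partitions):
        page_token = None
        while True:
            result = get_partition(
                ids, partition_id, nb_partitions, page_token=page_token
            )
            collected.extend(result.results)
            page_token = result.next_page_token
            if page_token is None:
                break
    return collected
```

The tests then check the union of all pages over all partitions covers every indexed id exactly once, which is the key invariant of partition-based iteration.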
diff --git a/swh/indexer/tests/test_fossology_license.py b/swh/indexer/tests/test_fossology_license.py
index 5ee22e3..6cd0d9d 100644
--- a/swh/indexer/tests/test_fossology_license.py
+++ b/swh/indexer/tests/test_fossology_license.py
@@ -1,170 +1,148 @@
# Copyright (C) 2017-2018 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import unittest
import pytest
from unittest.mock import patch
from typing import Any, Dict
from swh.indexer import fossology_license
from swh.indexer.fossology_license import (
FossologyLicenseIndexer,
- FossologyLicenseRangeIndexer,
+ FossologyLicensePartitionIndexer,
compute_license,
)
from swh.indexer.tests.utils import (
SHA1_TO_LICENSES,
CommonContentIndexerTest,
- CommonContentIndexerRangeTest,
+ CommonContentIndexerPartitionTest,
BASE_TEST_CONFIG,
fill_storage,
fill_obj_storage,
filter_dict,
)
class BasicTest(unittest.TestCase):
@patch("swh.indexer.fossology_license.subprocess")
def test_compute_license(self, mock_subprocess):
"""Computing licenses from a raw content should return results
"""
for path, intermediary_result, output in [
(b"some/path", None, []),
(b"some/path/2", [], []),
(b"other/path", " contains license(s) GPL,AGPL", ["GPL", "AGPL"]),
]:
mock_subprocess.check_output.return_value = intermediary_result
actual_result = compute_license(path)
self.assertEqual(actual_result, {"licenses": output, "path": path,})
def mock_compute_license(path):
"""path is the content identifier
"""
if isinstance(id, bytes):
path = path.decode("utf-8")
# path is something like /tmp/tmpXXX/<sha1> so we keep only the sha1 part
path = path.split("/")[-1]
return {"licenses": SHA1_TO_LICENSES.get(path)}
CONFIG = {
**BASE_TEST_CONFIG,
"workdir": "/tmp",
"tools": {
"name": "nomos",
"version": "3.1.0rc2-31-ga2cbb8c",
"configuration": {"command_line": "nomossa <filepath>",},
},
} # type: Dict[str, Any]
RANGE_CONFIG = dict(list(CONFIG.items()) + [("write_batch_size", 100)])
class TestFossologyLicenseIndexer(CommonContentIndexerTest, unittest.TestCase):
"""Language indexer test scenarios:
- Known sha1s in the input list have their data indexed
- Unknown sha1 in the input list are not indexed
"""
def get_indexer_results(self, ids):
yield from self.idx_storage.content_fossology_license_get(ids)
def setUp(self):
super().setUp()
# replace actual license computation with a mock
self.orig_compute_license = fossology_license.compute_license
fossology_license.compute_license = mock_compute_license
self.indexer = FossologyLicenseIndexer(CONFIG)
self.indexer.catch_exceptions = False
self.idx_storage = self.indexer.idx_storage
fill_storage(self.indexer.storage)
fill_obj_storage(self.indexer.objstorage)
self.id0 = "01c9379dfc33803963d07c1ccc748d3fe4c96bb5"
self.id1 = "688a5ef812c53907562fe379d4b3851e69c7cb15"
self.id2 = "da39a3ee5e6b4b0d3255bfef95601890afd80709" # empty content
tool = {k.replace("tool_", ""): v for (k, v) in self.indexer.tool.items()}
# then
self.expected_results = {
self.id0: {"tool": tool, "licenses": SHA1_TO_LICENSES[self.id0],},
self.id1: {"tool": tool, "licenses": SHA1_TO_LICENSES[self.id1],},
self.id2: {"tool": tool, "licenses": SHA1_TO_LICENSES[self.id2],},
}
def tearDown(self):
super().tearDown()
fossology_license.compute_license = self.orig_compute_license
-class TestFossologyLicenseRangeIndexer(
- CommonContentIndexerRangeTest, unittest.TestCase
+class TestFossologyLicensePartitionIndexer(
+ CommonContentIndexerPartitionTest, unittest.TestCase
):
"""Range Fossology License Indexer tests.
- new data within range are indexed
- no data outside a range are indexed
- with filtering existing indexed data prior to compute new index
- without filtering existing indexed data prior to compute new index
"""
def setUp(self):
super().setUp()
# replace actual license computation with a mock
self.orig_compute_license = fossology_license.compute_license
fossology_license.compute_license = mock_compute_license
- self.indexer = FossologyLicenseRangeIndexer(config=RANGE_CONFIG)
+ self.indexer = FossologyLicensePartitionIndexer(config=RANGE_CONFIG)
self.indexer.catch_exceptions = False
fill_storage(self.indexer.storage)
fill_obj_storage(self.indexer.objstorage)
- self.id0 = "01c9379dfc33803963d07c1ccc748d3fe4c96bb5"
- self.id1 = "02fb2c89e14f7fab46701478c83779c7beb7b069"
- self.id2 = "103bc087db1d26afc3a0283f38663d081e9b01e6"
- tool_id = self.indexer.tool["id"]
- self.expected_results = {
- self.id0: {
- "id": self.id0,
- "indexer_configuration_id": tool_id,
- "licenses": SHA1_TO_LICENSES[self.id0],
- },
- self.id1: {
- "id": self.id1,
- "indexer_configuration_id": tool_id,
- "licenses": SHA1_TO_LICENSES[self.id1],
- },
- self.id2: {
- "id": self.id2,
- "indexer_configuration_id": tool_id,
- "licenses": SHA1_TO_LICENSES[self.id2],
- },
- }
-
def tearDown(self):
super().tearDown()
fossology_license.compute_license = self.orig_compute_license
def test_fossology_w_no_tool():
with pytest.raises(ValueError):
FossologyLicenseIndexer(config=filter_dict(CONFIG, "tools"))
def test_fossology_range_w_no_tool():
with pytest.raises(ValueError):
- FossologyLicenseRangeIndexer(config=filter_dict(RANGE_CONFIG, "tools"))
+ FossologyLicensePartitionIndexer(config=filter_dict(RANGE_CONFIG, "tools"))
diff --git a/swh/indexer/tests/test_mimetype.py b/swh/indexer/tests/test_mimetype.py
index 9a9e3b1..483743f 100644
--- a/swh/indexer/tests/test_mimetype.py
+++ b/swh/indexer/tests/test_mimetype.py
@@ -1,150 +1,126 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import pytest
import unittest
from typing import Any, Dict
from swh.indexer.mimetype import (
MimetypeIndexer,
- MimetypeRangeIndexer,
+ MimetypePartitionIndexer,
compute_mimetype_encoding,
)
from swh.indexer.tests.utils import (
CommonContentIndexerTest,
- CommonContentIndexerRangeTest,
+ CommonContentIndexerPartitionTest,
BASE_TEST_CONFIG,
fill_storage,
fill_obj_storage,
filter_dict,
)
def test_compute_mimetype_encoding():
"""Compute mimetype encoding should return results"""
for _input, _mimetype, _encoding in [
("du français".encode(), "text/plain", "utf-8"),
(b"def __init__(self):", "text/x-python", "us-ascii"),
(b"\xff\xfe\x00\x00\x00\x00\xff\xfe\xff\xff", "application/octet-stream", ""),
]:
actual_result = compute_mimetype_encoding(_input)
assert actual_result == {"mimetype": _mimetype, "encoding": _encoding}
CONFIG = {
**BASE_TEST_CONFIG,
"tools": {
"name": "file",
"version": "1:5.30-1+deb9u1",
"configuration": {"type": "library", "debian-package": "python3-magic"},
},
} # type: Dict[str, Any]
class TestMimetypeIndexer(CommonContentIndexerTest, unittest.TestCase):
"""Mimetype indexer test scenarios:
- Known sha1s in the input list have their data indexed
- Unknown sha1s in the input list are not indexed
"""
legacy_get_format = True
def get_indexer_results(self, ids):
yield from self.idx_storage.content_mimetype_get(ids)
def setUp(self):
self.indexer = MimetypeIndexer(config=CONFIG)
self.indexer.catch_exceptions = False
self.idx_storage = self.indexer.idx_storage
fill_storage(self.indexer.storage)
fill_obj_storage(self.indexer.objstorage)
self.id0 = "01c9379dfc33803963d07c1ccc748d3fe4c96bb5"
self.id1 = "688a5ef812c53907562fe379d4b3851e69c7cb15"
self.id2 = "da39a3ee5e6b4b0d3255bfef95601890afd80709"
tool = {k.replace("tool_", ""): v for (k, v) in self.indexer.tool.items()}
self.expected_results = {
self.id0: {
"id": self.id0,
"tool": tool,
"mimetype": "text/plain",
"encoding": "us-ascii",
},
self.id1: {
"id": self.id1,
"tool": tool,
"mimetype": "text/plain",
"encoding": "us-ascii",
},
self.id2: {
"id": self.id2,
"tool": tool,
"mimetype": "application/x-empty",
"encoding": "binary",
},
}
RANGE_CONFIG = dict(list(CONFIG.items()) + [("write_batch_size", 100)])
-class TestMimetypeRangeIndexer(CommonContentIndexerRangeTest, unittest.TestCase):
+class TestMimetypePartitionIndexer(
+ CommonContentIndexerPartitionTest, unittest.TestCase
+):
"""Range Mimetype Indexer tests.
- new data within range are indexed
- no data outside a range are indexed
- with filtering existing indexed data prior to compute new index
- without filtering existing indexed data prior to compute new index
"""
def setUp(self):
super().setUp()
- self.indexer = MimetypeRangeIndexer(config=RANGE_CONFIG)
+ self.indexer = MimetypePartitionIndexer(config=RANGE_CONFIG)
self.indexer.catch_exceptions = False
fill_storage(self.indexer.storage)
fill_obj_storage(self.indexer.objstorage)
- self.id0 = "01c9379dfc33803963d07c1ccc748d3fe4c96bb5"
- self.id1 = "02fb2c89e14f7fab46701478c83779c7beb7b069"
- self.id2 = "103bc087db1d26afc3a0283f38663d081e9b01e6"
- tool_id = self.indexer.tool["id"]
-
- self.expected_results = {
- self.id0: {
- "encoding": "us-ascii",
- "id": self.id0,
- "indexer_configuration_id": tool_id,
- "mimetype": "text/plain",
- },
- self.id1: {
- "encoding": "us-ascii",
- "id": self.id1,
- "indexer_configuration_id": tool_id,
- "mimetype": "text/x-python",
- },
- self.id2: {
- "encoding": "us-ascii",
- "id": self.id2,
- "indexer_configuration_id": tool_id,
- "mimetype": "text/plain",
- },
- }
-
def test_mimetype_w_no_tool():
with pytest.raises(ValueError):
MimetypeIndexer(config=filter_dict(CONFIG, "tools"))
def test_mimetype_range_w_no_tool():
with pytest.raises(ValueError):
- MimetypeRangeIndexer(config=filter_dict(CONFIG, "tools"))
+ MimetypePartitionIndexer(config=filter_dict(CONFIG, "tools"))
diff --git a/swh/indexer/tests/test_tasks.py b/swh/indexer/tests/test_tasks.py
new file mode 100644
index 0000000..1058f10
--- /dev/null
+++ b/swh/indexer/tests/test_tasks.py
@@ -0,0 +1,123 @@
+# Copyright (C) 2020 The Software Heritage developers
+# See the AUTHORS file at the top-level directory of this distribution
+# License: GNU General Public License version 3, or any later version
+# See top-level LICENSE file for more information
+
+
+def test_task_origin_metadata(
+ mocker, swh_scheduler_celery_app, swh_scheduler_celery_worker, swh_config
+):
+
+ mock_indexer = mocker.patch("swh.indexer.tasks.OriginMetadataIndexer.run")
+ mock_indexer.return_value = {"status": "eventful"}
+
+ res = swh_scheduler_celery_app.send_task(
+ "swh.indexer.tasks.OriginMetadata", args=["origin-url"],
+ )
+ assert res
+ res.wait()
+ assert res.successful()
+
+ assert res.result == {"status": "eventful"}
+
+
+def test_task_ctags(
+ mocker, swh_scheduler_celery_app, swh_scheduler_celery_worker, swh_config
+):
+
+ mock_indexer = mocker.patch("swh.indexer.tasks.CtagsIndexer.run")
+ mock_indexer.return_value = {"status": "eventful"}
+
+ res = swh_scheduler_celery_app.send_task("swh.indexer.tasks.Ctags", args=["id0"],)
+ assert res
+ res.wait()
+ assert res.successful()
+
+ assert res.result == {"status": "eventful"}
+
+
+def test_task_fossology_license(
+ mocker, swh_scheduler_celery_app, swh_scheduler_celery_worker, swh_config
+):
+
+ mock_indexer = mocker.patch("swh.indexer.tasks.FossologyLicenseIndexer.run")
+ mock_indexer.return_value = {"status": "eventful"}
+
+ res = swh_scheduler_celery_app.send_task(
+ "swh.indexer.tasks.ContentFossologyLicense", args=["id0"],
+ )
+ assert res
+ res.wait()
+ assert res.successful()
+
+ assert res.result == {"status": "eventful"}
+
+
+def test_task_recompute_checksums(
+ mocker, swh_scheduler_celery_app, swh_scheduler_celery_worker, swh_config
+):
+
+ mock_indexer = mocker.patch("swh.indexer.tasks.RecomputeChecksums.run")
+ mock_indexer.return_value = {"status": "eventful"}
+
+ res = swh_scheduler_celery_app.send_task(
+ "swh.indexer.tasks.RecomputeChecksums", args=[[{"blake2b256": "id"}]],
+ )
+ assert res
+ res.wait()
+ assert res.successful()
+
+ assert res.result == {"status": "eventful"}
+
+
+def test_task_mimetype(
+ mocker, swh_scheduler_celery_app, swh_scheduler_celery_worker, swh_config
+):
+
+ mock_indexer = mocker.patch("swh.indexer.tasks.MimetypeIndexer.run")
+ mock_indexer.return_value = {"status": "eventful"}
+
+ res = swh_scheduler_celery_app.send_task(
+ "swh.indexer.tasks.ContentMimetype", args=["id0"],
+ )
+ assert res
+ res.wait()
+ assert res.successful()
+
+ assert res.result == {"status": "eventful"}
+
+
+def test_task_mimetype_partition(
+ mocker, swh_scheduler_celery_app, swh_scheduler_celery_worker, swh_config
+):
+
+ mock_indexer = mocker.patch("swh.indexer.tasks.MimetypePartitionIndexer.run")
+ mock_indexer.return_value = {"status": "eventful"}
+
+ res = swh_scheduler_celery_app.send_task(
+ "swh.indexer.tasks.ContentMimetypePartition", args=[0, 4],
+ )
+ assert res
+ res.wait()
+ assert res.successful()
+
+ assert res.result == {"status": "eventful"}
+
+
+def test_task_license_partition(
+ mocker, swh_scheduler_celery_app, swh_scheduler_celery_worker, swh_config
+):
+
+ mock_indexer = mocker.patch(
+ "swh.indexer.tasks.FossologyLicensePartitionIndexer.run"
+ )
+ mock_indexer.return_value = {"status": "eventful"}
+
+ res = swh_scheduler_celery_app.send_task(
+ "swh.indexer.tasks.ContentFossologyLicensePartition", args=[0, 4],
+ )
+ assert res
+ res.wait()
+ assert res.successful()
+
+ assert res.result == {"status": "eventful"}
diff --git a/swh/indexer/tests/utils.py b/swh/indexer/tests/utils.py
index b3f0612..04a34db 100644
--- a/swh/indexer/tests/utils.py
+++ b/swh/indexer/tests/utils.py
@@ -1,774 +1,770 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import abc
import functools
from typing import Dict, Any
import unittest
from hypothesis import strategies
+from swh.core.api.classes import stream_results
from swh.model import hashutil
-from swh.model.hashutil import hash_to_bytes, hash_to_hex
+from swh.model.hashutil import hash_to_bytes
from swh.model.model import (
Content,
Directory,
DirectoryEntry,
Origin,
OriginVisit,
OriginVisitStatus,
Person,
Revision,
RevisionType,
+ SHA1_SIZE,
Snapshot,
SnapshotBranch,
TargetType,
Timestamp,
TimestampWithTimezone,
)
-from swh.storage.utils import now
+from swh.storage.utils import now, get_partition_bounds_bytes
from swh.indexer.storage import INDEXER_CFG_KEY
BASE_TEST_CONFIG: Dict[str, Dict[str, Any]] = {
"storage": {"cls": "memory"},
"objstorage": {"cls": "memory", "args": {},},
INDEXER_CFG_KEY: {"cls": "memory", "args": {},},
}
ORIGINS = [
Origin(url="https://github.com/SoftwareHeritage/swh-storage"),
Origin(url="rsync://ftp.gnu.org/gnu/3dldf"),
Origin(url="https://forge.softwareheritage.org/source/jesuisgpl/"),
Origin(url="https://pypi.org/project/limnoria/"),
Origin(url="http://0-512-md.googlecode.com/svn/"),
Origin(url="https://github.com/librariesio/yarn-parser"),
Origin(url="https://github.com/librariesio/yarn-parser.git"),
]
ORIGIN_VISITS = [
{"type": "git", "origin": ORIGINS[0].url},
{"type": "ftp", "origin": ORIGINS[1].url},
{"type": "deposit", "origin": ORIGINS[2].url},
{"type": "pypi", "origin": ORIGINS[3].url},
{"type": "svn", "origin": ORIGINS[4].url},
{"type": "git", "origin": ORIGINS[5].url},
{"type": "git", "origin": ORIGINS[6].url},
]
DIRECTORY = Directory(
id=hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"),
entries=(
DirectoryEntry(
name=b"index.js",
type="file",
target=hash_to_bytes("01c9379dfc33803963d07c1ccc748d3fe4c96bb5"),
perms=0o100644,
),
DirectoryEntry(
name=b"package.json",
type="file",
target=hash_to_bytes("26a9f72a7c87cc9205725cfd879f514ff4f3d8d5"),
perms=0o100644,
),
DirectoryEntry(
name=b".github",
type="dir",
target=Directory(entries=()).id,
perms=0o040000,
),
),
)
DIRECTORY2 = Directory(
id=b"\xf8zz\xa1\x12`<1$\xfav\xf9\x01\xfd5\x85F`\xf2\xb6",
entries=(
DirectoryEntry(
name=b"package.json",
type="file",
target=hash_to_bytes("f5305243b3ce7ef8dc864ebc73794da304025beb"),
perms=0o100644,
),
),
)
REVISION = Revision(
id=hash_to_bytes("c6201cb1b9b9df9a7542f9665c3b5dfab85e9775"),
message=b"Improve search functionality",
author=Person(
name=b"Andrew Nesbitt",
fullname=b"Andrew Nesbitt <andrewnez@gmail.com>",
email=b"andrewnez@gmail.com",
),
committer=Person(
name=b"Andrew Nesbitt",
fullname=b"Andrew Nesbitt <andrewnez@gmail.com>",
email=b"andrewnez@gmail.com",
),
committer_date=TimestampWithTimezone(
timestamp=Timestamp(seconds=1380883849, microseconds=0,),
offset=120,
negative_utc=False,
),
type=RevisionType.GIT,
synthetic=False,
date=TimestampWithTimezone(
timestamp=Timestamp(seconds=1487596456, microseconds=0,),
offset=0,
negative_utc=False,
),
directory=DIRECTORY2.id,
parents=(),
)
REVISIONS = [REVISION]
SNAPSHOTS = [
Snapshot(
id=hash_to_bytes("a50fde72265343b7d28cecf6db20d98a81d21965"),
branches={
b"refs/heads/add-revision-origin-cache": SnapshotBranch(
target=b'L[\xce\x1c\x88\x8eF\t\xf1"\x19\x1e\xfb\xc0s\xe7/\xe9l\x1e',
target_type=TargetType.REVISION,
),
b"refs/head/master": SnapshotBranch(
target=b"8K\x12\x00d\x03\xcc\xe4]bS\xe3\x8f{\xd7}\xac\xefrm",
target_type=TargetType.REVISION,
),
b"HEAD": SnapshotBranch(
target=b"refs/head/master", target_type=TargetType.ALIAS
),
b"refs/tags/v0.0.103": SnapshotBranch(
target=b'\xb6"Im{\xfdLb\xb0\x94N\xea\x96m\x13x\x88+\x0f\xdd',
target_type=TargetType.RELEASE,
),
},
),
Snapshot(
id=hash_to_bytes("2c67f69a416bca4e1f3fcd848c588fab88ad0642"),
branches={
b"3DLDF-1.1.4.tar.gz": SnapshotBranch(
target=b'dJ\xfb\x1c\x91\xf4\x82B%]6\xa2\x90|\xd3\xfc"G\x99\x11',
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.2.tar.gz": SnapshotBranch(
target=b"\xb6\x0e\xe7\x9e9\xac\xaa\x19\x9e=\xd1\xc5\x00\\\xc6\xfc\xe0\xa6\xb4V", # noqa
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.3-examples.tar.gz": SnapshotBranch(
target=b"!H\x19\xc0\xee\x82-\x12F1\xbd\x97\xfe\xadZ\x80\x80\xc1\x83\xff", # noqa
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.3.tar.gz": SnapshotBranch(
target=b"\x8e\xa9\x8e/\xea}\x9feF\xf4\x9f\xfd\xee\xcc\x1a\xb4`\x8c\x8by", # noqa
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.tar.gz": SnapshotBranch(
target=b"F6*\xff(?\x19a\xef\xb6\xc2\x1fv$S\xe3G\xd3\xd1m",
target_type=TargetType.REVISION,
),
},
),
Snapshot(
id=hash_to_bytes("68c0d26104d47e278dd6be07ed61fafb561d0d20"),
branches={
b"master": SnapshotBranch(
target=b"\xe7n\xa4\x9c\x9f\xfb\xb7\xf76\x11\x08{\xa6\xe9\x99\xb1\x9e]q\xeb", # noqa
target_type=TargetType.REVISION,
)
},
),
Snapshot(
id=hash_to_bytes("f255245269e15fc99d284affd79f766668de0b67"),
branches={
b"HEAD": SnapshotBranch(
target=b"releases/2018.09.09", target_type=TargetType.ALIAS
),
b"releases/2018.09.01": SnapshotBranch(
target=b"<\xee1(\xe8\x8d_\xc1\xc9\xa6rT\xf1\x1d\xbb\xdfF\xfdw\xcf",
target_type=TargetType.REVISION,
),
b"releases/2018.09.09": SnapshotBranch(
target=b"\x83\xb9\xb6\xc7\x05\xb1%\xd0\xfem\xd8kA\x10\x9d\xc5\xfa2\xf8t", # noqa
target_type=TargetType.REVISION,
),
},
),
Snapshot(
id=hash_to_bytes("a1a28c0ab387a8f9e0618cb705eab81fc448f473"),
branches={
b"master": SnapshotBranch(
target=b"\xe4?r\xe1,\x88\xab\xec\xe7\x9a\x87\xb8\xc9\xad#.\x1bw=\x18",
target_type=TargetType.REVISION,
)
},
),
Snapshot(
id=hash_to_bytes("bb4fd3a836930ce629d912864319637040ff3040"),
branches={
b"HEAD": SnapshotBranch(
target=REVISION.id, target_type=TargetType.REVISION,
)
},
),
Snapshot(
id=hash_to_bytes("bb4fd3a836930ce629d912864319637040ff3040"),
branches={
b"HEAD": SnapshotBranch(
target=REVISION.id, target_type=TargetType.REVISION,
)
},
),
]
SHA1_TO_LICENSES = {
"01c9379dfc33803963d07c1ccc748d3fe4c96bb5": ["GPL"],
"02fb2c89e14f7fab46701478c83779c7beb7b069": ["Apache2.0"],
"103bc087db1d26afc3a0283f38663d081e9b01e6": ["MIT"],
"688a5ef812c53907562fe379d4b3851e69c7cb15": ["AGPL"],
"da39a3ee5e6b4b0d3255bfef95601890afd80709": [],
}
SHA1_TO_CTAGS = {
"01c9379dfc33803963d07c1ccc748d3fe4c96bb5": [
{"name": "foo", "kind": "str", "line": 10, "lang": "bar",}
],
"d4c647f0fc257591cc9ba1722484229780d1c607": [
{"name": "let", "kind": "int", "line": 100, "lang": "haskell",}
],
"688a5ef812c53907562fe379d4b3851e69c7cb15": [
{"name": "symbol", "kind": "float", "line": 99, "lang": "python",}
],
}
OBJ_STORAGE_DATA = {
"01c9379dfc33803963d07c1ccc748d3fe4c96bb5": b"this is some text",
"688a5ef812c53907562fe379d4b3851e69c7cb15": b"another text",
"8986af901dd2043044ce8f0d8fc039153641cf17": b"yet another text",
"02fb2c89e14f7fab46701478c83779c7beb7b069": b"""
import unittest
import logging
from swh.indexer.mimetype import MimetypeIndexer
from swh.indexer.tests.test_utils import MockObjStorage
class MockStorage():
def content_mimetype_add(self, mimetypes):
self.state = mimetypes
self.conflict_update = conflict_update
def indexer_configuration_add(self, tools):
return [{
'id': 10,
}]
""",
"103bc087db1d26afc3a0283f38663d081e9b01e6": b"""
#ifndef __AVL__
#define __AVL__
typedef struct _avl_tree avl_tree;
typedef struct _data_t {
int content;
} data_t;
""",
"93666f74f1cf635c8c8ac118879da6ec5623c410": b"""
(should 'pygments (recognize 'lisp 'easily))
""",
"26a9f72a7c87cc9205725cfd879f514ff4f3d8d5": b"""
{
"name": "test_metadata",
"version": "0.0.1",
"description": "Simple package.json test for indexer",
"repository": {
"type": "git",
"url": "https://github.com/moranegg/metadata_test"
}
}
""",
"d4c647f0fc257591cc9ba1722484229780d1c607": b"""
{
"version": "5.0.3",
"name": "npm",
"description": "a package manager for JavaScript",
"keywords": [
"install",
"modules",
"package manager",
"package.json"
],
"preferGlobal": true,
"config": {
"publishtest": false
},
"homepage": "https://docs.npmjs.com/",
"author": "Isaac Z. Schlueter <i@izs.me> (http://blog.izs.me)",
"repository": {
"type": "git",
"url": "https://github.com/npm/npm"
},
"bugs": {
"url": "https://github.com/npm/npm/issues"
},
"dependencies": {
"JSONStream": "~1.3.1",
"abbrev": "~1.1.0",
"ansi-regex": "~2.1.1",
"ansicolors": "~0.3.2",
"ansistyles": "~0.1.3"
},
"devDependencies": {
"tacks": "~1.2.6",
"tap": "~10.3.2"
},
"license": "Artistic-2.0"
}
""",
"a7ab314d8a11d2c93e3dcf528ca294e7b431c449": b"""
""",
"da39a3ee5e6b4b0d3255bfef95601890afd80709": b"",
# was 626364 / b'bcd'
"e3e40fee6ff8a52f06c3b428bfe7c0ed2ef56e92": b"unimportant content for bcd",
# was 636465 / b'cde' now yarn-parser package.json
"f5305243b3ce7ef8dc864ebc73794da304025beb": b"""
{
"name": "yarn-parser",
"version": "1.0.0",
"description": "Tiny web service for parsing yarn.lock files",
"main": "index.js",
"scripts": {
"start": "node index.js",
"test": "mocha"
},
"engines": {
"node": "9.8.0"
},
"repository": {
"type": "git",
"url": "git+https://github.com/librariesio/yarn-parser.git"
},
"keywords": [
"yarn",
"parse",
"lock",
"dependencies"
],
"author": "Andrew Nesbitt",
"license": "AGPL-3.0",
"bugs": {
"url": "https://github.com/librariesio/yarn-parser/issues"
},
"homepage": "https://github.com/librariesio/yarn-parser#readme",
"dependencies": {
"@yarnpkg/lockfile": "^1.0.0",
"body-parser": "^1.15.2",
"express": "^4.14.0"
},
"devDependencies": {
"chai": "^4.1.2",
"mocha": "^5.2.0",
"request": "^2.87.0",
"test": "^0.6.0"
}
}
""",
}
YARN_PARSER_METADATA = {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"url": "https://github.com/librariesio/yarn-parser#readme",
"codeRepository": "git+git+https://github.com/librariesio/yarn-parser.git",
"author": [{"type": "Person", "name": "Andrew Nesbitt"}],
"license": "https://spdx.org/licenses/AGPL-3.0",
"version": "1.0.0",
"description": "Tiny web service for parsing yarn.lock files",
"issueTracker": "https://github.com/librariesio/yarn-parser/issues",
"name": "yarn-parser",
"keywords": ["yarn", "parse", "lock", "dependencies"],
"type": "SoftwareSourceCode",
}
json_dict_keys = strategies.one_of(
strategies.characters(),
strategies.just("type"),
strategies.just("url"),
strategies.just("name"),
strategies.just("email"),
strategies.just("@id"),
strategies.just("@context"),
strategies.just("repository"),
strategies.just("license"),
strategies.just("repositories"),
strategies.just("licenses"),
)
"""Hypothesis strategy that generates strings, with an emphasis on those
that are often used as dictionary keys in metadata files."""
generic_json_document = strategies.recursive(
strategies.none()
| strategies.booleans()
| strategies.floats()
| strategies.characters(),
lambda children: (
strategies.lists(children, min_size=1)
| strategies.dictionaries(json_dict_keys, children, min_size=1)
),
)
"""Hypothesis strategy that generates possible values for values of JSON
metadata files."""
def json_document_strategy(keys=None):
"""Generates an hypothesis strategy that generates metadata files
for a JSON-based format that uses the given keys."""
if keys is None:
keys = strategies.characters()
else:
keys = strategies.one_of(map(strategies.just, keys))
return strategies.dictionaries(keys, generic_json_document, min_size=1)
def _tree_to_xml(root, xmlns, data):
def encode(s):
"Skips unpaired surrogates generated by json_document_strategy"
return s.encode("utf8", "replace")
def to_xml(data, indent=b" "):
if data is None:
return b""
elif isinstance(data, (bool, str, int, float)):
return indent + encode(str(data))
elif isinstance(data, list):
return b"\n".join(to_xml(v, indent=indent) for v in data)
elif isinstance(data, dict):
lines = []
for (key, value) in data.items():
lines.append(indent + encode("<{}>".format(key)))
lines.append(to_xml(value, indent=indent + b" "))
lines.append(indent + encode("</{}>".format(key)))
return b"\n".join(lines)
else:
raise TypeError(data)
return b"\n".join(
[
'<{} xmlns="{}">'.format(root, xmlns).encode(),
to_xml(data),
"</{}>".format(root).encode(),
]
)
class TreeToXmlTest(unittest.TestCase):
def test_leaves(self):
self.assertEqual(
_tree_to_xml("root", "http://example.com", None),
b'<root xmlns="http://example.com">\n\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", True),
b'<root xmlns="http://example.com">\n True\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", "abc"),
b'<root xmlns="http://example.com">\n abc\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", 42),
b'<root xmlns="http://example.com">\n 42\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", 3.14),
b'<root xmlns="http://example.com">\n 3.14\n</root>',
)
def test_dict(self):
self.assertIn(
_tree_to_xml("root", "http://example.com", {"foo": "bar", "baz": "qux"}),
[
b'<root xmlns="http://example.com">\n'
b" <foo>\n bar\n </foo>\n"
b" <baz>\n qux\n </baz>\n"
b"</root>",
b'<root xmlns="http://example.com">\n'
b" <baz>\n qux\n </baz>\n"
b" <foo>\n bar\n </foo>\n"
b"</root>",
],
)
def test_list(self):
self.assertEqual(
_tree_to_xml(
"root", "http://example.com", [{"foo": "bar"}, {"foo": "baz"},]
),
b'<root xmlns="http://example.com">\n'
b" <foo>\n bar\n </foo>\n"
b" <foo>\n baz\n </foo>\n"
b"</root>",
)
def xml_document_strategy(keys, root, xmlns):
"""Generates an hypothesis strategy that generates metadata files
for an XML format that uses the given keys."""
return strategies.builds(
functools.partial(_tree_to_xml, root, xmlns), json_document_strategy(keys)
)
def filter_dict(d, keys):
"return a copy of the dict with keys deleted"
if not isinstance(keys, (list, tuple)):
keys = (keys,)
return dict((k, v) for (k, v) in d.items() if k not in keys)
def fill_obj_storage(obj_storage):
"""Add some content in an object storage."""
for (obj_id, content) in OBJ_STORAGE_DATA.items():
obj_storage.add(content, obj_id=hash_to_bytes(obj_id))
def fill_storage(storage):
storage.origin_add(ORIGINS)
storage.directory_add([DIRECTORY, DIRECTORY2])
storage.revision_add(REVISIONS)
storage.snapshot_add(SNAPSHOTS)
for visit, snapshot in zip(ORIGIN_VISITS, SNAPSHOTS):
assert snapshot.id is not None
visit = storage.origin_visit_add(
[OriginVisit(origin=visit["origin"], date=now(), type=visit["type"])]
)[0]
visit_status = OriginVisitStatus(
origin=visit.origin,
visit=visit.visit,
date=now(),
status="full",
snapshot=snapshot.id,
)
storage.origin_visit_status_add([visit_status])
contents = []
for (obj_id, content) in OBJ_STORAGE_DATA.items():
content_hashes = hashutil.MultiHash.from_data(content).digest()
contents.append(
Content(
data=content,
length=len(content),
status="visible",
sha1=hash_to_bytes(obj_id),
sha1_git=hash_to_bytes(obj_id),
sha256=content_hashes["sha256"],
blake2s256=content_hashes["blake2s256"],
)
)
storage.content_add(contents)
class CommonContentIndexerTest(metaclass=abc.ABCMeta):
legacy_get_format = False
"""True if and only if the tested indexer uses the legacy format.
see: https://forge.softwareheritage.org/T1433
"""
def get_indexer_results(self, ids):
"""Override this for indexers that don't have a mock storage."""
return self.indexer.idx_storage.state
def assert_legacy_results_ok(self, sha1s, expected_results=None):
# XXX old format, remove this when all endpoints are
# updated to the new one
# see: https://forge.softwareheritage.org/T1433
sha1s = [
sha1 if isinstance(sha1, bytes) else hash_to_bytes(sha1) for sha1 in sha1s
]
actual_results = list(self.get_indexer_results(sha1s))
if expected_results is None:
expected_results = self.expected_results
self.assertEqual(
len(expected_results),
len(actual_results),
(expected_results, actual_results),
)
for indexed_data in actual_results:
_id = indexed_data["id"]
expected_data = expected_results[hashutil.hash_to_hex(_id)].copy()
expected_data["id"] = _id
self.assertEqual(indexed_data, expected_data)
def assert_results_ok(self, sha1s, expected_results=None):
if self.legacy_get_format:
self.assert_legacy_results_ok(sha1s, expected_results)
return
sha1s = [
sha1 if isinstance(sha1, bytes) else hash_to_bytes(sha1) for sha1 in sha1s
]
actual_results = list(self.get_indexer_results(sha1s))
if expected_results is None:
expected_results = self.expected_results
self.assertEqual(
len(expected_results),
len(actual_results),
(expected_results, actual_results),
)
for indexed_data in actual_results:
(_id, indexed_data) = list(indexed_data.items())[0]
expected_data = expected_results[hashutil.hash_to_hex(_id)].copy()
expected_data = [expected_data]
self.assertEqual(indexed_data, expected_data)
def test_index(self):
"""Known sha1 have their data indexed
"""
sha1s = [self.id0, self.id1, self.id2]
# when
self.indexer.run(sha1s, policy_update="update-dups")
self.assert_results_ok(sha1s)
# 2nd pass
self.indexer.run(sha1s, policy_update="ignore-dups")
self.assert_results_ok(sha1s)
def test_index_one_unknown_sha1(self):
"""Unknown sha1 are not indexed"""
sha1s = [
self.id1,
"799a5ef812c53907562fe379d4b3851e69c7cb15", # unknown
"800a5ef812c53907562fe379d4b3851e69c7cb15",
] # unknown
# when
self.indexer.run(sha1s, policy_update="update-dups")
# then
expected_results = {
k: v for k, v in self.expected_results.items() if k in sha1s
}
self.assert_results_ok(sha1s, expected_results)
-class CommonContentIndexerRangeTest:
+class CommonContentIndexerPartitionTest:
"""Allows to factorize tests on range indexer.
"""
def setUp(self):
self.contents = sorted(OBJ_STORAGE_DATA)
- def assert_results_ok(self, start, end, actual_results, expected_results=None):
- if expected_results is None:
- expected_results = self.expected_results
+ def assert_results_ok(self, partition_id, nb_partitions, actual_results):
+ expected_ids = [
+ c.sha1
+ for c in stream_results(
+ self.indexer.storage.content_get_partition,
+ partition_id=partition_id,
+ nb_partitions=nb_partitions,
+ )
+ ]
+
+ start, end = get_partition_bounds_bytes(partition_id, nb_partitions, SHA1_SIZE)
actual_results = list(actual_results)
for indexed_data in actual_results:
_id = indexed_data["id"]
assert isinstance(_id, bytes)
- indexed_data = indexed_data.copy()
- indexed_data["id"] = hash_to_hex(indexed_data["id"])
- self.assertEqual(indexed_data, expected_results[hash_to_hex(_id)])
- self.assertTrue(start <= _id <= end)
+ assert _id in expected_ids
+
+ assert start <= _id
+ if end:
+ assert _id <= end
+
_tool_id = indexed_data["indexer_configuration_id"]
- self.assertEqual(_tool_id, self.indexer.tool["id"])
+ assert _tool_id == self.indexer.tool["id"]
def test__index_contents(self):
"""Indexing contents without existing data results in indexed data
"""
- _start, _end = [self.contents[0], self.contents[2]] # output hex ids
- start, end = map(hashutil.hash_to_bytes, (_start, _end))
- # given
- actual_results = list(self.indexer._index_contents(start, end, indexed={}))
+ partition_id = 0
+ nb_partitions = 4
+
+ actual_results = list(
+ self.indexer._index_contents(partition_id, nb_partitions, indexed={})
+ )
- self.assert_results_ok(start, end, actual_results)
+ self.assert_results_ok(partition_id, nb_partitions, actual_results)
def test__index_contents_with_indexed_data(self):
"""Indexing contents with existing data results in less indexed data
"""
- _start, _end = [self.contents[0], self.contents[2]] # output hex ids
- start, end = map(hashutil.hash_to_bytes, (_start, _end))
- data_indexed = [self.id0, self.id2]
+ partition_id = 3
+ nb_partitions = 4
- # given
- actual_results = self.indexer._index_contents(
- start, end, indexed=set(map(hash_to_bytes, data_indexed))
+ # first pass
+ actual_results = list(
+ self.indexer._index_contents(partition_id, nb_partitions, indexed={})
)
- # craft the expected results
- expected_results = self.expected_results.copy()
- for already_indexed_key in data_indexed:
- expected_results.pop(already_indexed_key)
+ self.assert_results_ok(partition_id, nb_partitions, actual_results)
- self.assert_results_ok(start, end, actual_results, expected_results)
-
- def test_generate_content_get(self):
- """Optimal indexing should result in indexed data
+ indexed_ids = set(res["id"] for res in actual_results)
- """
- _start, _end = [self.contents[0], self.contents[2]] # output hex ids
- start, end = map(hashutil.hash_to_bytes, (_start, _end))
-
- # given
- actual_results = self.indexer.run(start, end)
+ actual_results = list(
+ self.indexer._index_contents(
+ partition_id, nb_partitions, indexed=indexed_ids
+ )
+ )
- # then
- self.assertEqual(actual_results, {"status": "uneventful"})
+ # already indexed, so nothing new
+ assert actual_results == []
- def test_generate_content_get_input_as_bytes(self):
+ def test_generate_content_get(self):
"""Optimal indexing should result in indexed data
- Input are in bytes here.
-
"""
- _start, _end = [self.contents[0], self.contents[2]] # output hex ids
- start, end = map(hashutil.hash_to_bytes, (_start, _end))
+ partition_id = 0
+ nb_partitions = 4
- # given
- actual_results = self.indexer.run(start, end, skip_existing=False)
- # no already indexed data so same result as prior test
+ actual_results = self.indexer.run(
+ partition_id, nb_partitions, skip_existing=False
+ )
- # then
- self.assertEqual(actual_results, {"status": "uneventful"})
+ assert actual_results == {"status": "uneventful"} # why?
def test_generate_content_get_no_result(self):
"""No result indexed returns False"""
- _start, _end = [
- "0000000000000000000000000000000000000000",
- "0000000000000000000000000000000000000001",
- ]
- start, end = map(hashutil.hash_to_bytes, (_start, _end))
- # given
- actual_results = self.indexer.run(start, end, incremental=False)
+ actual_results = self.indexer.run(0, 0, incremental=False)
- # then
- self.assertEqual(actual_results, {"status": "uneventful"})
+ assert actual_results == {"status": "uneventful"}