diff --git a/docs/metadata-workflow.rst b/docs/metadata-workflow.rst
index c07e86c..4d99106 100644
--- a/docs/metadata-workflow.rst
+++ b/docs/metadata-workflow.rst
@@ -1,274 +1,274 @@
Metadata workflow
=================
Intrinsic metadata
------------------
Indexing :term:`intrinsic metadata` requires extracting information from the
lowest levels of the :ref:`Merkle DAG <swh-merkle-dag>` (directories, files,
and content blobs) and associating it with the highest ones (origins).
In order to deduplicate the work between origins, we split it among
multiple indexers, which coordinate with each other and save their results
at each step in the indexer storage.
Indexer architecture
^^^^^^^^^^^^^^^^^^^^
.. thumbnail:: images/tasks-metadata-indexers.svg
Origin-Head Indexer
^^^^^^^^^^^^^^^^^^^
First, the Origin-Head indexer gets called externally, with an origin as
argument (or multiple origins, which are handled sequentially).
For now, its tasks are scheduled manually via recurring Scheduler tasks; but
in the near future, the :term:`journal` will be used to trigger them.
It first looks up the last :term:`snapshot` and determines what the main
branch of the origin is (the "Head branch") and what revision it points to
(the "Head").
Intrinsic metadata for that origin will be extracted from that revision.
It schedules a Directory Metadata Indexer task for the root directory of
that revision.
Directory and Content Metadata Indexers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These two indexers do the hard part of the work. The Directory Metadata
Indexer fetches the root directory associated with a revision, then extracts
the metadata from that directory.
To do so, it lists files in that directory, and looks for known names, such
as :file:`codemeta.json`, :file:`package.json`, or :file:`pom.xml`. If there are any, it
runs the Content Metadata Indexer on them, which in turn fetches their
contents and runs them through extraction dictionaries/mappings.
See below for details.
Their results are saved in a database (the indexer storage), associated with
the content and directory hashes.
Origin Metadata Indexer
^^^^^^^^^^^^^^^^^^^^^^^
The job of this indexer is very simple: it takes an origin identifier,
uses the Origin-Head and Directory indexers to get metadata from the head
directory of that origin, and copies the directory's metadata to a new
table that associates it with the origin.
This makes it possible to perform searches on metadata and to efficiently
find out which origins match a pattern; running such a search on the
``directory_metadata`` table would require a costly reverse lookup from
directories to origins.
Translation from ecosystem-specific metadata to CodeMeta
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Intrinsic metadata is extracted from files provided with a project's source
code, and translated using `CodeMeta`_'s `crosswalk table`_.
All input formats supported so far are straightforward dictionaries (e.g. JSON)
or can be accessed as such (e.g. XML); the first part of the translation is
to map their keys to terms in the CodeMeta vocabulary.
This is done by parsing the crosswalk table's `CSV file`_ and using it as a
map between these two vocabularies; and this does not require any
format-specific code in the indexers.
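As a rough illustration of this approach, the following self-contained sketch parses a crosswalk excerpt into a key-translation map. The CSV layout and column names below are heavily simplified assumptions for the example, not the actual layout of the CodeMeta crosswalk file:

```python
import csv
import io

# Toy excerpt of a crosswalk table (hypothetical layout: one row per
# CodeMeta property, one column per source vocabulary; the real CSV
# has many more columns and rows).
CROSSWALK_CSV = """\
Property,npm,Python PKG-INFO
name,name,Name
version,version,Version
description,description,Summary
"""

def load_crosswalk(csv_text, column):
    """Build a {source key -> CodeMeta term} map from one CSV column."""
    mapping = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        source_key = row[column].strip()
        if source_key:
            mapping[source_key] = row["Property"]
    return mapping

npm_map = load_crosswalk(CROSSWALK_CSV, "npm")
pypi_map = load_crosswalk(CROSSWALK_CSV, "Python PKG-INFO")
# pypi_map maps the PKG-INFO key "Summary" to the CodeMeta term "description"
```

Because the table drives the translation, adding support for a new ecosystem's key names requires no format-specific lookup code, only a new column.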
The second part is to normalize values. As language-specific metadata files
each have their way(s) of formatting these values, we need to turn them into
the data type required by CodeMeta.
This normalization makes up for most of the code of
:py:mod:`swh.indexer.metadata_dictionary`.
.. _CodeMeta: https://codemeta.github.io/
.. _crosswalk table: https://codemeta.github.io/crosswalk/
.. _CSV file: https://github.com/codemeta/codemeta/blob/master/crosswalk.csv
Extrinsic metadata
------------------
The :term:`extrinsic metadata` indexer works very differently from
the :term:`intrinsic metadata` indexers we saw above.
While the latter extract metadata from software artefacts (files and directories)
which are already a core part of the archive, the former extracts such data from
API calls pulled from forges and package managers, or pushed via the
:ref:`SWORD deposit <swh-deposit>`.
In order to preserve the original information verbatim, Software Heritage
itself stores the result of these calls, independently of indexers, in its own
archive as described in the :ref:`extrinsic-metadata-specification`.
In this section, we assume this information is already present in the archive,
but in the "raw extrinsic metadata" form, which needs to be translated to a common
vocabulary to be useful, as with intrinsic metadata.
The common vocabulary we chose is JSON-LD, with both CodeMeta and
`ForgeFed's vocabulary`_ (including `ActivityStream's vocabulary`_).
.. _ForgeFed's vocabulary: https://forgefed.org/vocabulary.html
.. _ActivityStream's vocabulary: https://www.w3.org/TR/activitystreams-vocabulary/
Instead of the four-step architecture above, the extrinsic-metadata indexer
is standalone: it reads "raw extrinsic metadata" from the :ref:`swh-journal`,
and produces new indexed entries in the database as they come.
The caveat is that, while intrinsic metadata are always unambiguously authoritative
(they are contained by their own origin repository, therefore they were added by
the origin's "owners"), extrinsic metadata can be authored by third-parties.
Support for third-party authorities is currently not implemented for this reason;
so extrinsic metadata is only indexed when provided by the same
forge/package-repository as the origin the metadata is about.
Metadata on non-origin objects (typically directories) is also ignored for
this reason, for now.
Assuming the metadata was provided by such an authority, it is then passed
to metadata mappings, which are identified by a MIME type (or custom format
name) they declare, rather than by filename.
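A hedged sketch of what such format-based dispatch could look like (all class and format names below are illustrative stand-ins, not the actual ``swh.indexer`` ones):

```python
from typing import Dict, Type

class ExtrinsicMapping:
    """Illustrative base class: each mapping declares which raw
    metadata formats it can translate."""
    @classmethod
    def extrinsic_metadata_formats(cls):
        raise NotImplementedError

class GitHubLikeMapping(ExtrinsicMapping):
    @classmethod
    def extrinsic_metadata_formats(cls):
        # A made-up format name, standing in for whatever format the
        # loader declared when it stored the raw metadata.
        return ("application/vnd.github.v3+json",)

def build_registry(mapping_classes) -> Dict[str, Type[ExtrinsicMapping]]:
    """Index mapping classes by the format names they declare."""
    registry: Dict[str, Type[ExtrinsicMapping]] = {}
    for cls in mapping_classes:
        for fmt in cls.extrinsic_metadata_formats():
            registry[fmt] = cls
    return registry

REGISTRY = build_registry([GitHubLikeMapping])

def mapping_for(format_name):
    """Select the mapping (if any) able to translate this format."""
    return REGISTRY.get(format_name)
```

The point of keying on declared formats rather than filenames is that raw extrinsic metadata records carry no filename at all, only the format tag recorded when they were archived.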
Implementation status
---------------------
Supported intrinsic metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following sources of intrinsic metadata are supported:
* CodeMeta's `codemeta.json`_,
* Maven's `pom.xml`_,
* NPM's `package.json`_,
* Python's `PKG-INFO`_,
* Ruby's `.gemspec`_
.. _codemeta.json: https://codemeta.github.io/terms/
.. _pom.xml: https://maven.apache.org/pom.html
.. _package.json: https://docs.npmjs.com/files/package.json
.. _PKG-INFO: https://www.python.org/dev/peps/pep-0314/
.. _.gemspec: https://guides.rubygems.org/specification-reference/
Supported extrinsic metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following sources of extrinsic metadata are supported:
* GitHub's `"repo" API <https://docs.github.com/en/rest/repos/repos#get-a-repository>`__
Supported JSON-LD properties
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following terms may be found in the output of the metadata translation
(other than the `codemeta` mapping, which is the identity function, and
therefore supports all properties):
-.. program-output:: python3 -m swh.indexer.cli mapping list-terms --exclude-mapping codemeta --exclude-mapping sword-codemeta
+.. program-output:: python3 -m swh.indexer.cli mapping list-terms --exclude-mapping codemeta --exclude-mapping json-sword-codemeta --exclude-mapping sword-codemeta
:nostderr:
Tutorials
---------
The rest of this page is made of two tutorials: one to index
:term:`intrinsic metadata` (i.e. from a file in a VCS or in a tarball),
and one to index :term:`extrinsic metadata` (i.e. obtained via external means,
such as GitHub's or GitLab's APIs).
Adding support for additional ecosystem-specific intrinsic metadata
-------------------------------------------------------------------
This section will guide you through adding code to the metadata indexer to
detect and translate new metadata formats.
Start by picking one of the `CodeMeta crosswalks`_.
Then create a new file in :file:`swh-indexer/swh/indexer/metadata_dictionary/`, that
will contain your code, and create a new class that inherits from helper
classes, with some documentation about your indexer:
.. code-block:: python
    from .base import DictMapping, SingleFileIntrinsicMapping
    from swh.indexer.codemeta import CROSSWALK_TABLE

    class MyMapping(DictMapping, SingleFileIntrinsicMapping):
        """Dedicated class for ..."""
        name = 'my-mapping'
        filename = b'the-filename'
        mapping = CROSSWALK_TABLE['Name of the CodeMeta crosswalk']
.. _CodeMeta crosswalks: https://github.com/codemeta/codemeta/tree/master/crosswalks
And reference it from :const:`swh.indexer.metadata_dictionary.INTRINSIC_MAPPINGS`.
Then, add a ``string_fields`` attribute listing all the keys whose
values are simple text values. For instance, to
`translate Python PKG-INFO`_, it's:
.. code-block:: python
    string_fields = ['name', 'version', 'description', 'summary',
                     'author', 'author-email']
These values will be automatically added to the above list of
supported terms.
.. _translate Python PKG-INFO: https://forge.softwareheritage.org/source/swh-indexer/browse/master/swh/indexer/metadata_dictionary/python.py
Last step to get your code working: add a ``translate`` method that takes
a single byte string as argument, turns it into a Python dictionary whose
keys are those of the input document, and passes it to
``_translate_dict``.
For instance, if the input document is in JSON, it can be as simple as:
.. code-block:: python
    def translate(self, raw_content):
        raw_content = raw_content.decode()  # bytes to str
        content_dict = json.loads(raw_content)  # str to dict
        return self._translate_dict(content_dict)  # convert to CodeMeta
``_translate_dict`` will do the heavy work: for each of ``string_fields``,
it reads the crosswalk table and the corresponding value in ``content_dict``,
and builds a CodeMeta dictionary with the corresponding names from the
crosswalk table.
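Conceptually, the behavior of ``_translate_dict`` for plain string fields can be sketched as follows (a simplified stand-in with assumed data shapes, not the actual implementation):

```python
def translate_dict(content_dict, crosswalk, string_fields):
    """Simplified stand-in for _translate_dict: map the source keys
    listed in string_fields to CodeMeta terms via the crosswalk."""
    result = {"@context": "https://doi.org/10.5063/schema/codemeta-2.0"}
    for key in string_fields:
        if key in content_dict and key in crosswalk:
            result[crosswalk[key]] = content_dict[key]
    return result

doc = translate_dict(
    {"Name": "my-package", "Version": "1.0.0", "X-Unknown": "ignored"},
    crosswalk={"Name": "name", "Version": "version"},
    string_fields=["Name", "Version"],
)
# doc now holds the CodeMeta terms "name" and "version";
# keys absent from the crosswalk are dropped.
```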
One last thing to run your code: add it to the list in
:file:`swh-indexer/swh/indexer/metadata_dictionary/__init__.py`, so the rest of the
code is aware of it.
Now, you can run it:
.. code-block:: shell
    python3 -m swh.indexer.metadata_dictionary MyMapping path/to/input/file
and it will (hopefully) return a CodeMeta object.
If it works, well done!
You can now improve your translation code further by adding methods that
do more advanced conversions. For example, if there is a field named
``license`` containing an SPDX identifier, you must convert it to a URI,
like this:
.. code-block:: python
    def normalize_license(self, s):
        if isinstance(s, str):
            return rdflib.URIRef("https://spdx.org/licenses/" + s)
This method will automatically get called by ``_translate_dict`` when it
finds a ``license`` field in ``content_dict``.
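This dispatch can be approximated like so (an illustrative sketch only: the real ``_translate_dict`` does more, and the real normalizer returns an ``rdflib.URIRef`` rather than a plain string, which this sketch uses to stay dependency-free):

```python
def translate_value(mapping, key, value):
    """Dispatch sketch: call mapping.normalize_<key>(value) when such
    a method exists, otherwise keep the value unchanged."""
    normalizer = getattr(mapping, "normalize_" + key, None)
    if normalizer is not None:
        return normalizer(value)
    return value

class DemoMapping:
    def normalize_license(self, s):
        # Plain string instead of rdflib.URIRef, to keep the sketch
        # self-contained.
        if isinstance(s, str):
            return "https://spdx.org/licenses/" + s

demo = DemoMapping()
licensed = translate_value(demo, "license", "Apache-2.0")
untouched = translate_value(demo, "version", "1.0")  # no normalizer
```

The ``getattr``-based lookup is what makes adding a new conversion as simple as defining a method with the right name.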
Adding support for additional ecosystem-specific extrinsic metadata
-------------------------------------------------------------------
[this section is a work in progress]
diff --git a/swh/indexer/metadata_dictionary/__init__.py b/swh/indexer/metadata_dictionary/__init__.py
index de56532..99c2504 100644
--- a/swh/indexer/metadata_dictionary/__init__.py
+++ b/swh/indexer/metadata_dictionary/__init__.py
@@ -1,58 +1,59 @@
# Copyright (C) 2017-2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import collections
from typing import Dict, Type
import click
from . import cff, codemeta, composer, dart, github, maven, npm, nuget, python, ruby
from .base import BaseExtrinsicMapping, BaseIntrinsicMapping, BaseMapping
INTRINSIC_MAPPINGS: Dict[str, Type[BaseIntrinsicMapping]] = {
"CffMapping": cff.CffMapping,
"CodemetaMapping": codemeta.CodemetaMapping,
"GemspecMapping": ruby.GemspecMapping,
"MavenMapping": maven.MavenMapping,
"NpmMapping": npm.NpmMapping,
"PubMapping": dart.PubspecMapping,
"PythonPkginfoMapping": python.PythonPkginfoMapping,
"ComposerMapping": composer.ComposerMapping,
"NuGetMapping": nuget.NuGetMapping,
}
EXTRINSIC_MAPPINGS: Dict[str, Type[BaseExtrinsicMapping]] = {
"GitHubMapping": github.GitHubMapping,
+ "JsonSwordCodemetaMapping": codemeta.JsonSwordCodemetaMapping,
"SwordCodemetaMapping": codemeta.SwordCodemetaMapping,
}
MAPPINGS: Dict[str, Type[BaseMapping]] = {**INTRINSIC_MAPPINGS, **EXTRINSIC_MAPPINGS}
def list_terms():
"""Returns a dictionary with all supported CodeMeta terms as keys,
and the mappings that support each of them as values."""
d = collections.defaultdict(set)
for mapping in MAPPINGS.values():
for term in mapping.supported_terms():
d[term].add(mapping)
return d
@click.command()
@click.argument("mapping_name")
@click.argument("file_name")
def main(mapping_name: str, file_name: str):
from pprint import pprint
with open(file_name, "rb") as fd:
file_content = fd.read()
res = MAPPINGS[mapping_name]().translate(file_content)
pprint(res)
if __name__ == "__main__":
main()
diff --git a/swh/indexer/metadata_dictionary/codemeta.py b/swh/indexer/metadata_dictionary/codemeta.py
index ccba012..4da5eb6 100644
--- a/swh/indexer/metadata_dictionary/codemeta.py
+++ b/swh/indexer/metadata_dictionary/codemeta.py
@@ -1,117 +1,149 @@
# Copyright (C) 2018-2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import collections
import json
import re
from typing import Any, Dict, List, Optional, Tuple
import xml.etree.ElementTree as ET
+import xmltodict
+
from swh.indexer.codemeta import CODEMETA_CONTEXT_URL, CODEMETA_TERMS, compact, expand
from .base import BaseExtrinsicMapping, SingleFileIntrinsicMapping
ATOM_URI = "http://www.w3.org/2005/Atom"
_TAG_RE = re.compile(r"\{(?P<namespace>.*?)\}(?P<localname>.*)")
_IGNORED_NAMESPACES = ("http://www.w3.org/2005/Atom",)
class CodemetaMapping(SingleFileIntrinsicMapping):
"""
dedicated class for CodeMeta (codemeta.json) mapping and translation
"""
name = "codemeta"
filename = b"codemeta.json"
string_fields = None
@classmethod
def supported_terms(cls) -> List[str]:
return [term for term in CODEMETA_TERMS if not term.startswith("@")]
def translate(self, content: bytes) -> Optional[Dict[str, Any]]:
try:
return self.normalize_translation(expand(json.loads(content.decode())))
except Exception:
return None
class SwordCodemetaMapping(BaseExtrinsicMapping):
"""
dedicated class for mapping and translation from JSON-LD statements
embedded in SWORD documents, optionally using Codemeta contexts,
as described in the :ref:`deposit-protocol`.
"""
name = "sword-codemeta"
@classmethod
def extrinsic_metadata_formats(cls) -> Tuple[str, ...]:
return (
"sword-v2-atom-codemeta",
"sword-v2-atom-codemeta-v2",
)
@classmethod
def supported_terms(cls) -> List[str]:
return [term for term in CODEMETA_TERMS if not term.startswith("@")]
def xml_to_jsonld(self, e: ET.Element) -> Dict[str, Any]:
doc: Dict[str, List[Dict[str, Any]]] = collections.defaultdict(list)
for child in e:
m = _TAG_RE.match(child.tag)
assert m, f"Tag with no namespace: {child}"
namespace = m.group("namespace")
localname = m.group("localname")
if namespace == ATOM_URI and localname in ("title", "name"):
# Convert Atom to Codemeta name; in case codemeta:name
# is not provided or different
doc["name"].append(self.xml_to_jsonld(child))
elif namespace == ATOM_URI and localname in ("author", "email"):
# ditto for these author properties (note that author email is also
# covered by the previous test)
doc[localname].append(self.xml_to_jsonld(child))
elif namespace in _IGNORED_NAMESPACES:
# SWORD-specific namespace that is not interesting to translate
pass
elif namespace.lower() == CODEMETA_CONTEXT_URL:
# It is a term defined by the context; write it as-is and JSON-LD
# expansion will convert it to a full URI based on
# "@context": CODEMETA_CONTEXT_URL
doc[localname].append(self.xml_to_jsonld(child))
else:
# Otherwise, we already know the URI
doc[f"{namespace}{localname}"].append(self.xml_to_jsonld(child))
# The above needed doc values to be lists to work; now we allow any type
# of value, as the "@value" key cannot have a list as its value.
doc_: Dict[str, Any] = doc
text = e.text.strip() if e.text else None
if text:
# TODO: check doc is empty, and raise mixed-content error otherwise?
doc_["@value"] = text
return doc_
def translate(self, content: bytes) -> Optional[Dict[str, Any]]:
# Parse XML
root = ET.fromstring(content)
# Transform to JSON-LD document
doc = self.xml_to_jsonld(root)
# Add @context so that JSON-LD expansion replaces the "codemeta:" prefix
# (which uses the context URL as namespace URI for historical
# reasons) with properties in the `http://schema.org/` and
# `https://codemeta.github.io/terms/` namespaces
doc["@context"] = CODEMETA_CONTEXT_URL
# Normalize as a Codemeta document
return self.normalize_translation(expand(doc))
def normalize_translation(self, metadata: Dict[str, Any]) -> Dict[str, Any]:
return compact(metadata, forgefed=False)
+
+
+class JsonSwordCodemetaMapping(SwordCodemetaMapping):
+ """
+ Variant of :class:`SwordCodemetaMapping` that reads the legacy
+ ``sword-v2-atom-codemeta-v2-in-json`` format and converts it back to
+ ``sword-v2-atom-codemeta-v2`` XML
+ """
+
+ name = "json-sword-codemeta"
+
+ @classmethod
+ def extrinsic_metadata_formats(cls) -> Tuple[str, ...]:
+ return ("sword-v2-atom-codemeta-v2-in-json",)
+
+ def translate(self, content: bytes) -> Optional[Dict[str, Any]]:
+ # ``content`` was generated by calling ``xmltodict.parse()`` on an XML document,
+ # so ``xmltodict.unparse()`` is guaranteed to return a document semantically
+ # equivalent to the original, which we can pass to SwordCodemetaMapping.
+ json_doc = json.loads(content)
+
+ if json_doc.get("@xmlns") != ATOM_URI:
+ # Technically, non-default XMLNS were allowed, but it does not seem like
+ # anyone used them, so they do not need to be implemented here.
+ raise NotImplementedError(f"Unexpected XMLNS set: {json_doc}")
+
+ # Root tag was stripped by swh-deposit
+ json_doc = {"entry": json_doc}
+
+ return super().translate(xmltodict.unparse(json_doc))
diff --git a/swh/indexer/tests/metadata_dictionary/test_codemeta.py b/swh/indexer/tests/metadata_dictionary/test_codemeta.py
index 0d4de0d..21865ee 100644
--- a/swh/indexer/tests/metadata_dictionary/test_codemeta.py
+++ b/swh/indexer/tests/metadata_dictionary/test_codemeta.py
@@ -1,351 +1,367 @@
# Copyright (C) 2017-2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import json
from hypothesis import HealthCheck, given, settings
from swh.indexer.codemeta import CODEMETA_TERMS
from swh.indexer.metadata_detector import detect_metadata
from swh.indexer.metadata_dictionary import MAPPINGS
from ..utils import json_document_strategy
def test_compute_metadata_valid_codemeta():
raw_content = b"""{
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"@type": "SoftwareSourceCode",
"identifier": "CodeMeta",
"description": "CodeMeta is a concept vocabulary that can be used to standardize the exchange of software metadata across repositories and organizations.",
"name": "CodeMeta: Minimal metadata schemas for science software and code, in JSON-LD",
"codeRepository": "https://github.com/codemeta/codemeta",
"issueTracker": "https://github.com/codemeta/codemeta/issues",
"license": "https://spdx.org/licenses/Apache-2.0",
"version": "2.0",
"author": [
{
"@type": "Person",
"givenName": "Carl",
"familyName": "Boettiger",
"email": "cboettig@gmail.com",
"@id": "http://orcid.org/0000-0002-1642-628X"
},
{
"@type": "Person",
"givenName": "Matthew B.",
"familyName": "Jones",
"email": "jones@nceas.ucsb.edu",
"@id": "http://orcid.org/0000-0003-0077-4738"
}
],
"maintainer": {
"@type": "Person",
"givenName": "Carl",
"familyName": "Boettiger",
"email": "cboettig@gmail.com",
"@id": "http://orcid.org/0000-0002-1642-628X"
},
"contIntegration": "https://travis-ci.org/codemeta/codemeta",
"developmentStatus": "active",
"downloadUrl": "https://github.com/codemeta/codemeta/archive/2.0.zip",
"funder": {
"@id": "https://doi.org/10.13039/100000001",
"@type": "Organization",
"name": "National Science Foundation"
},
"funding":"1549758; Codemeta: A Rosetta Stone for Metadata in Scientific Software",
"keywords": [
"metadata",
"software"
],
"version":"2.0",
"dateCreated":"2017-06-05",
"datePublished":"2017-06-05",
"programmingLanguage": "JSON-LD"
}""" # noqa
expected_result = {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"type": "SoftwareSourceCode",
"identifier": "CodeMeta",
"description": "CodeMeta is a concept vocabulary that can "
"be used to standardize the exchange of software metadata "
"across repositories and organizations.",
"name": "CodeMeta: Minimal metadata schemas for science "
"software and code, in JSON-LD",
"codeRepository": "https://github.com/codemeta/codemeta",
"issueTracker": "https://github.com/codemeta/codemeta/issues",
"license": "https://spdx.org/licenses/Apache-2.0",
"version": "2.0",
"author": [
{
"type": "Person",
"givenName": "Carl",
"familyName": "Boettiger",
"email": "cboettig@gmail.com",
"id": "http://orcid.org/0000-0002-1642-628X",
},
{
"type": "Person",
"givenName": "Matthew B.",
"familyName": "Jones",
"email": "jones@nceas.ucsb.edu",
"id": "http://orcid.org/0000-0003-0077-4738",
},
],
"maintainer": {
"type": "Person",
"givenName": "Carl",
"familyName": "Boettiger",
"email": "cboettig@gmail.com",
"id": "http://orcid.org/0000-0002-1642-628X",
},
"contIntegration": "https://travis-ci.org/codemeta/codemeta",
"developmentStatus": "active",
"downloadUrl": "https://github.com/codemeta/codemeta/archive/2.0.zip",
"funder": {
"id": "https://doi.org/10.13039/100000001",
"type": "Organization",
"name": "National Science Foundation",
},
"funding": "1549758; Codemeta: A Rosetta Stone for Metadata "
"in Scientific Software",
"keywords": ["metadata", "software"],
"version": "2.0",
"dateCreated": "2017-06-05",
"datePublished": "2017-06-05",
"programmingLanguage": "JSON-LD",
}
result = MAPPINGS["CodemetaMapping"]().translate(raw_content)
assert result == expected_result
def test_compute_metadata_codemeta_alternate_context():
raw_content = b"""{
"@context": "https://raw.githubusercontent.com/codemeta/codemeta/master/codemeta.jsonld",
"@type": "SoftwareSourceCode",
"identifier": "CodeMeta"
}""" # noqa
expected_result = {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"type": "SoftwareSourceCode",
"identifier": "CodeMeta",
}
result = MAPPINGS["CodemetaMapping"]().translate(raw_content)
assert result == expected_result
@settings(suppress_health_check=[HealthCheck.too_slow])
@given(json_document_strategy(keys=CODEMETA_TERMS))
def test_codemeta_adversarial(doc):
raw = json.dumps(doc).encode()
MAPPINGS["CodemetaMapping"]().translate(raw)
def test_detect_metadata_codemeta_json_uppercase():
df = [
{
"sha1_git": b"abc",
"name": b"index.html",
"target": b"abc",
"length": 897,
"status": "visible",
"type": "file",
"perms": 33188,
"dir_id": b"dir_a",
"sha1": b"bcd",
},
{
"sha1_git": b"aab",
"name": b"CODEMETA.json",
"target": b"aab",
"length": 712,
"status": "visible",
"type": "file",
"perms": 33188,
"dir_id": b"dir_a",
"sha1": b"bcd",
},
]
results = detect_metadata(df)
expected_results = {"CodemetaMapping": [b"bcd"]}
assert expected_results == results
def test_sword_default_xmlns():
content = """<?xml version="1.0"?>
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom"
xmlns="https://doi.org/10.5063/schema/codemeta-2.0">
<name>My Software</name>
<author>
<name>Author 1</name>
<email>foo@example.org</email>
</author>
<author>
<name>Author 2</name>
</author>
</atom:entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": "My Software",
"author": [
{"name": "Author 1", "email": "foo@example.org"},
{"name": "Author 2"},
],
}
def test_sword_basics():
content = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom"
xmlns:codemeta="https://doi.org/10.5063/schema/codemeta-2.0">
<codemeta:name>My Software</codemeta:name>
<codemeta:author>
<codemeta:name>Author 1</codemeta:name>
<codemeta:email>foo@example.org</codemeta:email>
</codemeta:author>
<codemeta:author>
<codemeta:name>Author 2</codemeta:name>
</codemeta:author>
<author>
<name>Author 3</name>
<email>bar@example.org</email>
</author>
</entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": "My Software",
"author": [
{"name": "Author 1", "email": "foo@example.org"},
{"name": "Author 2"},
{"name": "Author 3", "email": "bar@example.org"},
],
}
def test_sword_mixed():
content = """<?xml version="1.0"?>
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom"
xmlns="https://doi.org/10.5063/schema/codemeta-2.0"
xmlns:schema="http://schema.org/">
<name>My Software</name>
blah
<schema:version>1.2.3</schema:version>
blih
</atom:entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": "My Software",
"version": "1.2.3",
}
def test_sword_schemaorg_in_codemeta():
content = """<?xml version="1.0"?>
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom"
xmlns="https://doi.org/10.5063/schema/codemeta-2.0"
xmlns:schema="http://schema.org/">
<name>My Software</name>
<schema:version>1.2.3</schema:version>
</atom:entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": "My Software",
"version": "1.2.3",
}
def test_sword_schemaorg_in_codemeta_constrained():
"""Resulting property has the compact URI 'schema:url' instead of just
the term 'url', because term 'url' is defined by the Codemeta schema
has having type '@id'."""
content = """<?xml version="1.0"?>
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom"
xmlns="https://doi.org/10.5063/schema/codemeta-2.0"
xmlns:schema="http://schema.org/">
<name>My Software</name>
<schema:url>http://example.org/my-software</schema:url>
</atom:entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": "My Software",
"schema:url": "http://example.org/my-software",
}
def test_sword_schemaorg_not_in_codemeta():
content = """<?xml version="1.0"?>
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom"
xmlns="https://doi.org/10.5063/schema/codemeta-2.0"
xmlns:schema="http://schema.org/">
<name>My Software</name>
<schema:sameAs>http://example.org/my-software</schema:sameAs>
</atom:entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": "My Software",
"schema:sameAs": "http://example.org/my-software",
}
def test_sword_atom_name():
content = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom"
xmlns:codemeta="https://doi.org/10.5063/schema/codemeta-2.0">
<name>My Software</name>
</entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": "My Software",
}
def test_sword_multiple_names():
content = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom"
xmlns:codemeta="https://doi.org/10.5063/schema/codemeta-2.0">
<name>Atom Name 1</name>
<name>Atom Name 2</name>
<title>Atom Title 1</title>
<title>Atom Title 2</title>
<codemeta:name>Codemeta Name 1</codemeta:name>
<codemeta:name>Codemeta Name 2</codemeta:name>
</entry>
"""
result = MAPPINGS["SwordCodemetaMapping"]().translate(content)
assert result == {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"name": [
"Atom Name 1",
"Atom Name 2",
"Atom Title 1",
"Atom Title 2",
"Codemeta Name 1",
"Codemeta Name 2",
],
}
+
+
+def test_json_sword():
+ content = """{"id": "hal-01243573", "@xmlns": "http://www.w3.org/2005/Atom", "author": {"name": "Author 1", "email": "foo@example.org"}, "client": "hal", "codemeta:url": "http://example.org/", "codemeta:name": "The assignment problem", "@xmlns:codemeta": "https://doi.org/10.5063/SCHEMA/CODEMETA-2.0", "codemeta:author": {"codemeta:name": "Author 2"}, "codemeta:license": {"codemeta:name": "GNU General Public License v3.0 or later"}}""" # noqa
+ result = MAPPINGS["JsonSwordCodemetaMapping"]().translate(content)
+ assert result == {
+ "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
+ "author": [
+ {"name": "Author 1", "email": "foo@example.org"},
+ {"name": "Author 2"},
+ ],
+ "license": {"name": "GNU General Public License v3.0 or later"},
+ "name": "The assignment problem",
+ "schema:url": "http://example.org/",
+ "name": "The assignment problem",
+ }
diff --git a/swh/indexer/tests/test_cli.py b/swh/indexer/tests/test_cli.py
index 71ebff4..6bbab40 100644
--- a/swh/indexer/tests/test_cli.py
+++ b/swh/indexer/tests/test_cli.py
@@ -1,919 +1,922 @@
# Copyright (C) 2019-2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import datetime
from functools import reduce
import re
from typing import Any, Dict, List
from unittest.mock import patch
import attr
from click.testing import CliRunner
from confluent_kafka import Consumer
import pytest
from swh.indexer import fossology_license
from swh.indexer.cli import indexer_cli_group
from swh.indexer.storage.interface import IndexerStorageInterface
from swh.indexer.storage.model import (
ContentLicenseRow,
ContentMimetypeRow,
DirectoryIntrinsicMetadataRow,
OriginExtrinsicMetadataRow,
OriginIntrinsicMetadataRow,
)
from swh.journal.writer import get_journal_writer
from swh.model.hashutil import hash_to_bytes
from swh.model.model import Content, Origin, OriginVisitStatus
from .test_metadata import REMD
from .utils import (
DIRECTORY2,
RAW_CONTENT_IDS,
RAW_CONTENTS,
REVISION,
SHA1_TO_LICENSES,
mock_compute_license,
)
def fill_idx_storage(idx_storage: IndexerStorageInterface, nb_rows: int) -> List[int]:
tools: List[Dict[str, Any]] = [
{
"tool_name": "tool %d" % i,
"tool_version": "0.0.1",
"tool_configuration": {},
}
for i in range(2)
]
tools = idx_storage.indexer_configuration_add(tools)
origin_metadata = [
OriginIntrinsicMetadataRow(
id="file://dev/%04d" % origin_id,
from_directory=hash_to_bytes("abcd{:0>36}".format(origin_id)),
indexer_configuration_id=tools[origin_id % 2]["id"],
metadata={"name": "origin %d" % origin_id},
mappings=["mapping%d" % (origin_id % 10)],
)
for origin_id in range(nb_rows)
]
directory_metadata = [
DirectoryIntrinsicMetadataRow(
id=hash_to_bytes("abcd{:0>36}".format(origin_id)),
indexer_configuration_id=tools[origin_id % 2]["id"],
metadata={"name": "origin %d" % origin_id},
mappings=["mapping%d" % (origin_id % 10)],
)
for origin_id in range(nb_rows)
]
idx_storage.directory_intrinsic_metadata_add(directory_metadata)
idx_storage.origin_intrinsic_metadata_add(origin_metadata)
return [tool["id"] for tool in tools]
def _origins_in_task_args(tasks):
"""Returns the set of origins contained in the arguments of the
provided tasks (assumed to be of type index-origin-metadata)."""
return reduce(
set.union, (set(task["arguments"]["args"][0]) for task in tasks), set()
)
def _assert_tasks_for_origins(tasks, origins):
expected_kwargs = {}
assert {task["type"] for task in tasks} == {"index-origin-metadata"}
assert all(len(task["arguments"]["args"]) == 1 for task in tasks)
for task in tasks:
assert task["arguments"]["kwargs"] == expected_kwargs, task
assert _origins_in_task_args(tasks) == set(["file://dev/%04d" % i for i in origins])
@pytest.fixture
def cli_runner():
return CliRunner()
def test_cli_mapping_list(cli_runner, swh_config):
result = cli_runner.invoke(
indexer_cli_group,
["-C", swh_config, "mapping", "list"],
catch_exceptions=False,
)
expected_output = "\n".join(
[
"cff",
"codemeta",
"composer",
"gemspec",
"github",
"json-sword-codemeta",
"maven",
"npm",
"nuget",
"pkg-info",
"pubspec",
"sword-codemeta",
"",
] # must be sorted for test to pass
)
assert result.exit_code == 0, result.output
assert result.output == expected_output
def test_cli_mapping_list_terms(cli_runner, swh_config):
result = cli_runner.invoke(
indexer_cli_group,
["-C", swh_config, "mapping", "list-terms"],
catch_exceptions=False,
)
assert result.exit_code == 0, result.output
assert re.search(r"http://schema.org/url:\n.*npm", result.output)
assert re.search(r"http://schema.org/url:\n.*codemeta", result.output)
assert re.search(
r"https://codemeta.github.io/terms/developmentStatus:\n\tcodemeta",
result.output,
)
def test_cli_mapping_list_terms_exclude(cli_runner, swh_config):
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"mapping",
"list-terms",
"--exclude-mapping",
"codemeta",
"--exclude-mapping",
"json-sword-codemeta",
"--exclude-mapping",
"sword-codemeta",
],
catch_exceptions=False,
)
assert result.exit_code == 0, result.output
assert re.search(r"http://schema.org/url:\n.*npm", result.output)
assert not re.search(r"http://schema.org/url:\n.*codemeta", result.output)
assert not re.search(
r"https://codemeta.github.io/terms/developmentStatus:\n\tcodemeta",
result.output,
)
@patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3)
@patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3)
def test_cli_origin_metadata_reindex_empty_db(
cli_runner, swh_config, indexer_scheduler, idx_storage, storage
):
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"schedule",
"reindex_origin_metadata",
],
catch_exceptions=False,
)
expected_output = "Nothing to do (no origin metadata matched the criteria).\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
tasks = indexer_scheduler.search_tasks()
assert len(tasks) == 0
@patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3)
@patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3)
def test_cli_origin_metadata_reindex_divisor(
cli_runner, swh_config, indexer_scheduler, idx_storage, storage
):
"""Tests the re-indexing when origin_batch_size*task_batch_size is a
divisor of nb_origins."""
fill_idx_storage(idx_storage, 90)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"schedule",
"reindex_origin_metadata",
],
catch_exceptions=False,
)
# Check the output
expected_output = (
"Scheduled 3 tasks (30 origins).\n"
"Scheduled 6 tasks (60 origins).\n"
"Scheduled 9 tasks (90 origins).\n"
"Done.\n"
)
assert result.exit_code == 0, result.output
assert result.output == expected_output
# Check scheduled tasks
tasks = indexer_scheduler.search_tasks()
assert len(tasks) == 9
_assert_tasks_for_origins(tasks, range(90))
@patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3)
@patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3)
def test_cli_origin_metadata_reindex_dry_run(
cli_runner, swh_config, indexer_scheduler, idx_storage, storage
):
"""Tests that the --dry-run flag prints the scheduling plan without
actually creating any task."""
fill_idx_storage(idx_storage, 90)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"schedule",
"--dry-run",
"reindex_origin_metadata",
],
catch_exceptions=False,
)
# Check the output
expected_output = (
"Scheduled 3 tasks (30 origins).\n"
"Scheduled 6 tasks (60 origins).\n"
"Scheduled 9 tasks (90 origins).\n"
"Done.\n"
)
assert result.exit_code == 0, result.output
assert result.output == expected_output
# Check scheduled tasks
tasks = indexer_scheduler.search_tasks()
assert len(tasks) == 0
@patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3)
@patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3)
def test_cli_origin_metadata_reindex_nondivisor(
cli_runner, swh_config, indexer_scheduler, idx_storage, storage
):
"""Tests the re-indexing when neither origin_batch_size nor
task_batch_size is a divisor of nb_origins."""
fill_idx_storage(idx_storage, 70)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"schedule",
"reindex_origin_metadata",
"--batch-size",
"20",
],
catch_exceptions=False,
)
# Check the output
expected_output = (
"Scheduled 3 tasks (60 origins).\n"
"Scheduled 4 tasks (70 origins).\n"
"Done.\n"
)
assert result.exit_code == 0, result.output
assert result.output == expected_output
# Check scheduled tasks
tasks = indexer_scheduler.search_tasks()
assert len(tasks) == 4
_assert_tasks_for_origins(tasks, range(70))
@patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3)
@patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3)
def test_cli_origin_metadata_reindex_filter_one_mapping(
cli_runner, swh_config, indexer_scheduler, idx_storage, storage
):
"""Tests the re-indexing when filtering on a single mapping."""
fill_idx_storage(idx_storage, 110)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"schedule",
"reindex_origin_metadata",
"--mapping",
"mapping1",
],
catch_exceptions=False,
)
# Check the output
expected_output = "Scheduled 2 tasks (11 origins).\nDone.\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
# Check scheduled tasks
tasks = indexer_scheduler.search_tasks()
assert len(tasks) == 2
_assert_tasks_for_origins(tasks, [1, 11, 21, 31, 41, 51, 61, 71, 81, 91, 101])
@patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3)
@patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3)
def test_cli_origin_metadata_reindex_filter_two_mappings(
cli_runner, swh_config, indexer_scheduler, idx_storage, storage
):
"""Tests the re-indexing when filtering on two mappings."""
fill_idx_storage(idx_storage, 110)
result = cli_runner.invoke(
indexer_cli_group,
[
"--config-file",
swh_config,
"schedule",
"reindex_origin_metadata",
"--mapping",
"mapping1",
"--mapping",
"mapping2",
],
catch_exceptions=False,
)
# Check the output
expected_output = "Scheduled 3 tasks (22 origins).\nDone.\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
# Check scheduled tasks
tasks = indexer_scheduler.search_tasks()
assert len(tasks) == 3
_assert_tasks_for_origins(
tasks,
[
1,
11,
21,
31,
41,
51,
61,
71,
81,
91,
101,
2,
12,
22,
32,
42,
52,
62,
72,
82,
92,
102,
],
)
@patch("swh.scheduler.cli.utils.TASK_BATCH_SIZE", 3)
@patch("swh.scheduler.cli_utils.TASK_BATCH_SIZE", 3)
def test_cli_origin_metadata_reindex_filter_one_tool(
cli_runner, swh_config, indexer_scheduler, idx_storage, storage
):
"""Tests the re-indexing when filtering on a single tool."""
tool_ids = fill_idx_storage(idx_storage, 110)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"schedule",
"reindex_origin_metadata",
"--tool-id",
str(tool_ids[0]),
],
catch_exceptions=False,
)
# Check the output
expected_output = (
"Scheduled 3 tasks (30 origins).\n"
"Scheduled 6 tasks (55 origins).\n"
"Done.\n"
)
assert result.exit_code == 0, result.output
assert result.output == expected_output
# Check scheduled tasks
tasks = indexer_scheduler.search_tasks()
assert len(tasks) == 6
_assert_tasks_for_origins(tasks, [x * 2 for x in range(55)])
def now():
return datetime.datetime.now(tz=datetime.timezone.utc)
def test_cli_journal_client_schedule(
cli_runner,
swh_config,
indexer_scheduler,
kafka_prefix: str,
kafka_server,
consumer: Consumer,
):
"""Test the 'swh indexer journal-client' cli tool."""
journal_writer = get_journal_writer(
"kafka",
brokers=[kafka_server],
prefix=kafka_prefix,
client_id="test producer",
value_sanitizer=lambda object_type, value: value,
flush_timeout=3, # fail early if something is going wrong
)
visit_statuses = [
OriginVisitStatus(
origin="file:///dev/zero",
visit=1,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus(
origin="file:///dev/foobar",
visit=2,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus(
origin="file:///tmp/spamegg",
visit=3,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus(
origin="file:///dev/0002",
visit=6,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus( # will be filtered out due to its 'partial' status
origin="file:///dev/0000",
visit=4,
date=now(),
status="partial",
snapshot=None,
),
OriginVisitStatus( # will be filtered out due to its 'ongoing' status
origin="file:///dev/0001",
visit=5,
date=now(),
status="ongoing",
snapshot=None,
),
]
journal_writer.write_additions("origin_visit_status", visit_statuses)
visit_statuses_full = [vs for vs in visit_statuses if vs.status == "full"]
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"journal-client",
"--broker",
kafka_server,
"--prefix",
kafka_prefix,
"--group-id",
"test-consumer",
"--stop-after-objects",
len(visit_statuses),
"--origin-metadata-task-type",
"index-origin-metadata",
],
catch_exceptions=False,
)
# Check the output
expected_output = "Done.\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
# Check scheduled tasks
tasks = indexer_scheduler.search_tasks(task_type="index-origin-metadata")
# The origins may be split across multiple tasks, but there cannot be more
# tasks than origin-visit-statuses written to the journal
assert len(tasks) <= len(visit_statuses_full)
actual_origins = []
for task in tasks:
actual_task = dict(task)
assert actual_task["type"] == "index-origin-metadata"
scheduled_origins = actual_task["arguments"]["args"][0]
actual_origins.extend(scheduled_origins)
assert set(actual_origins) == {vs.origin for vs in visit_statuses_full}
def test_cli_journal_client_without_brokers(
cli_runner, swh_config, kafka_prefix: str, kafka_server, consumer: Consumer
):
"""Without brokers configuration, the cli fails."""
with pytest.raises(ValueError, match="brokers"):
cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"journal-client",
],
catch_exceptions=False,
)
@pytest.mark.parametrize("indexer_name", ["origin_intrinsic_metadata", "*"])
def test_cli_journal_client_index__origin_intrinsic_metadata(
cli_runner,
swh_config,
kafka_prefix: str,
kafka_server,
consumer: Consumer,
idx_storage,
storage,
mocker,
swh_indexer_config,
indexer_name: str,
):
"""Test the 'swh indexer journal-client' cli tool."""
journal_writer = get_journal_writer(
"kafka",
brokers=[kafka_server],
prefix=kafka_prefix,
client_id="test producer",
value_sanitizer=lambda object_type, value: value,
flush_timeout=3, # fail early if something is going wrong
)
visit_statuses = [
OriginVisitStatus(
origin="file:///dev/zero",
visit=1,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus(
origin="file:///dev/foobar",
visit=2,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus(
origin="file:///tmp/spamegg",
visit=3,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus(
origin="file:///dev/0002",
visit=6,
date=now(),
status="full",
snapshot=None,
),
OriginVisitStatus( # will be filtered out due to its 'partial' status
origin="file:///dev/0000",
visit=4,
date=now(),
status="partial",
snapshot=None,
),
OriginVisitStatus( # will be filtered out due to its 'ongoing' status
origin="file:///dev/0001",
visit=5,
date=now(),
status="ongoing",
snapshot=None,
),
]
journal_writer.write_additions("origin_visit_status", visit_statuses)
visit_statuses_full = [vs for vs in visit_statuses if vs.status == "full"]
storage.revision_add([REVISION])
mocker.patch(
"swh.indexer.metadata.get_head_swhid",
return_value=REVISION.swhid(),
)
mocker.patch(
"swh.indexer.metadata.DirectoryMetadataIndexer.index",
return_value=[
DirectoryIntrinsicMetadataRow(
id=DIRECTORY2.id,
indexer_configuration_id=1,
mappings=["cff"],
metadata={"foo": "bar"},
)
],
)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"journal-client",
indexer_name,
"--broker",
kafka_server,
"--prefix",
kafka_prefix,
"--group-id",
"test-consumer",
"--stop-after-objects",
len(visit_statuses),
],
catch_exceptions=False,
)
# Check the output
expected_output = "Done.\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
results = idx_storage.origin_intrinsic_metadata_get(
[status.origin for status in visit_statuses]
)
expected_results = [
OriginIntrinsicMetadataRow(
id=status.origin,
from_directory=DIRECTORY2.id,
tool={"id": 1, **swh_indexer_config["tools"]},
mappings=["cff"],
metadata={"foo": "bar"},
)
for status in sorted(visit_statuses_full, key=lambda r: r.origin)
]
assert sorted(results, key=lambda r: r.id) == expected_results
@pytest.mark.parametrize("indexer_name", ["extrinsic_metadata", "*"])
def test_cli_journal_client_index__origin_extrinsic_metadata(
cli_runner,
swh_config,
kafka_prefix: str,
kafka_server,
consumer: Consumer,
idx_storage,
storage,
mocker,
swh_indexer_config,
indexer_name: str,
):
"""Test the 'swh indexer journal-client' cli tool."""
journal_writer = get_journal_writer(
"kafka",
brokers=[kafka_server],
prefix=kafka_prefix,
client_id="test producer",
value_sanitizer=lambda object_type, value: value,
flush_timeout=3, # fail early if something is going wrong
)
origin = Origin("http://example.org/repo.git")
storage.origin_add([origin])
raw_extrinsic_metadata = attr.evolve(REMD, target=origin.swhid())
raw_extrinsic_metadata = attr.evolve(
raw_extrinsic_metadata, id=raw_extrinsic_metadata.compute_hash()
)
journal_writer.write_additions("raw_extrinsic_metadata", [raw_extrinsic_metadata])
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"journal-client",
indexer_name,
"--broker",
kafka_server,
"--prefix",
kafka_prefix,
"--group-id",
"test-consumer",
"--stop-after-objects",
1,
],
catch_exceptions=False,
)
# Check the output
expected_output = "Done.\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
results = idx_storage.origin_extrinsic_metadata_get([origin.url])
expected_results = [
OriginExtrinsicMetadataRow(
id=origin.url,
from_remd_id=raw_extrinsic_metadata.id,
tool={"id": 1, **swh_indexer_config["tools"]},
mappings=["github"],
metadata={
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"type": "https://forgefed.org/ns#Repository",
"name": "test software",
},
)
]
assert sorted(results, key=lambda r: r.id) == expected_results
def test_cli_journal_client_index__content_mimetype(
cli_runner,
swh_config,
kafka_prefix: str,
kafka_server,
consumer: Consumer,
idx_storage,
obj_storage,
storage,
mocker,
swh_indexer_config,
):
"""Test the 'swh indexer journal-client' cli tool."""
journal_writer = get_journal_writer(
"kafka",
brokers=[kafka_server],
prefix=kafka_prefix,
client_id="test producer",
value_sanitizer=lambda object_type, value: value,
flush_timeout=3, # fail early if something is going wrong
)
contents = []
expected_results = []
content_ids = []
for content_id, (raw_content, mimetypes, encoding) in RAW_CONTENTS.items():
content = Content.from_data(raw_content)
assert content_id == content.sha1
contents.append(content)
content_ids.append(content_id)
# Older libmagic versions (e.g. buster: 1:5.35-4+deb10u2, bullseye: 1:5.39-3)
# return different results. This allows the test to cope with such differences
# when running in different environments (e.g. CI tox, CI Debian, dev
# machines, ...)
all_mimetypes = mimetypes if isinstance(mimetypes, tuple) else [mimetypes]
expected_results.extend(
[
ContentMimetypeRow(
id=content.sha1,
tool={"id": 1, **swh_indexer_config["tools"]},
mimetype=mimetype,
encoding=encoding,
)
for mimetype in all_mimetypes
]
)
assert len(contents) == len(RAW_CONTENTS)
journal_writer.write_additions("content", contents)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"journal-client",
"content_mimetype",
"--broker",
kafka_server,
"--prefix",
kafka_prefix,
"--group-id",
"test-consumer",
"--stop-after-objects",
len(contents),
],
catch_exceptions=False,
)
# Check the output
expected_output = "Done.\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
results = idx_storage.content_mimetype_get(content_ids)
assert len(results) == len(contents)
for result in results:
assert result in expected_results
def test_cli_journal_client_index__fossology_license(
cli_runner,
swh_config,
kafka_prefix: str,
kafka_server,
consumer: Consumer,
idx_storage,
obj_storage,
storage,
mocker,
swh_indexer_config,
):
"""Test the 'swh indexer journal-client' cli tool."""
# Patch the license computation with a mock
fossology_license.compute_license = mock_compute_license
journal_writer = get_journal_writer(
"kafka",
brokers=[kafka_server],
prefix=kafka_prefix,
client_id="test producer",
value_sanitizer=lambda object_type, value: value,
flush_timeout=3, # fail early if something is going wrong
)
tool = {"id": 1, **swh_indexer_config["tools"]}
id0, id1, id2 = RAW_CONTENT_IDS
contents = []
content_ids = []
expected_results = []
for content_id, (raw_content, _, _) in RAW_CONTENTS.items():
content = Content.from_data(raw_content)
assert content_id == content.sha1
contents.append(content)
content_ids.append(content_id)
expected_results.extend(
[
ContentLicenseRow(id=content_id, tool=tool, license=license)
for license in SHA1_TO_LICENSES[content_id]
]
)
assert len(contents) == len(RAW_CONTENTS)
journal_writer.write_additions("content", contents)
result = cli_runner.invoke(
indexer_cli_group,
[
"-C",
swh_config,
"journal-client",
"content_fossology_license",
"--broker",
kafka_server,
"--prefix",
kafka_prefix,
"--group-id",
"test-consumer",
"--stop-after-objects",
len(contents),
],
catch_exceptions=False,
)
# Check the output
expected_output = "Done.\n"
assert result.exit_code == 0, result.output
assert result.output == expected_output
results = idx_storage.content_fossology_license_get(content_ids)
assert len(results) == len(expected_results)
for result in results:
assert result in expected_results