diff --git a/docs/metadata-workflow.rst b/docs/metadata-workflow.rst
index 4d99106..96bf24f 100644
--- a/docs/metadata-workflow.rst
+++ b/docs/metadata-workflow.rst
@@ -1,274 +1,278 @@
Metadata workflow
=================

Intrinsic metadata
------------------

Indexing :term:`intrinsic metadata` requires extracting information from the
lowest levels of the :ref:`Merkle DAG ` (directories, files, and content
blobs) and associating it with the highest ones (origins). In order to
deduplicate the work between origins, we split this work between multiple
indexers, which coordinate with each other and save their results at each
step in the indexer storage.

Indexer architecture
^^^^^^^^^^^^^^^^^^^^

.. thumbnail:: images/tasks-metadata-indexers.svg

Origin-Head Indexer
^^^^^^^^^^^^^^^^^^^

First, the Origin-Head indexer gets called externally, with an origin as
argument (or multiple origins, which are handled sequentially). For now, its
tasks are scheduled manually via recurring Scheduler tasks, but in the near
future, the :term:`journal` will be used to do that.

It first looks up the last :term:`snapshot` and determines what the main
branch of the origin is (the "Head branch") and what revision it points to
(the "Head"). Intrinsic metadata for that origin will be extracted from that
revision.

It schedules a Directory Metadata Indexer task for the root directory of
that revision.

Directory and Content Metadata Indexers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These two indexers do the hard part of the work. The Directory Metadata
Indexer fetches the root directory associated with a revision, then extracts
the metadata from that directory. To do so, it lists files in the directory
and looks for known names, such as :file:`codemeta.json`,
:file:`package.json`, or :file:`pom.xml`. If there are any, it runs the
Content Metadata Indexer on them, which in turn fetches their contents and
runs them through extraction dictionaries/mappings. See below for details.

Their results are saved in a database (the indexer storage), associated with
the content and directory hashes.

Origin Metadata Indexer
^^^^^^^^^^^^^^^^^^^^^^^

The job of this indexer is very simple: it takes an origin identifier, uses
the Origin-Head and Directory indexers to get metadata from the head
directory of that origin, and copies the metadata of that directory to a new
table, to associate it with the origin.

The reason for this is to be able to perform searches on metadata and
efficiently find out which origins match a pattern. Running such a search on
the ``directory_metadata`` table would require a reverse lookup from
directories to origins, which is costly.

Translation from ecosystem-specific metadata to CodeMeta
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Intrinsic metadata is extracted from files provided with a project's source
-code, and translated using `CodeMeta`_'s `crosswalk table`_.
+code, and translated using `CodeMeta`_'s `crosswalk table`_, which is vendored
+in :file:`swh/indexer/data/codemeta/crosswalk.csv`.
+Ecosystems not yet included in CodeMeta's crosswalk have their own
+:file:`swh/indexer/data/*.csv` file, with one row for each CodeMeta property,
+even when not supported by the ecosystem.

All input formats supported so far are straightforward dictionaries (e.g.
JSON) or can be accessed as such (e.g. XML); and the first part of the
translation is to map their keys to terms in the CodeMeta vocabulary.
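To make this concrete, here is a standalone sketch of that key-mapping step.
The CSV excerpt and the ``load_crosswalk`` helper are simplified stand-ins for
illustration (the vendored table has more columns and rows, and the indexer's
actual parser differs), but the resulting shape,
``{ecosystem: {ecosystem_term: codemeta_term}}``, matches what the indexer
builds:

.. code-block:: python

    import csv
    import io

    # A toy excerpt standing in for the vendored crosswalk table.
    CSV_TEXT = """\
    Property,npm,Python PKG-INFO
    name,name,Name
    description,description,Summary
    license,license,License
    """

    def load_crosswalk(fd):
        """Builds {ecosystem: {ecosystem_term: codemeta_term}} from the CSV."""
        crosswalk = {}
        for row in csv.DictReader(fd):
            codemeta_term = row.pop("Property")
            for ecosystem, term in row.items():
                if term:  # an empty cell means the ecosystem lacks this property
                    crosswalk.setdefault(ecosystem, {})[term] = codemeta_term
        return crosswalk

    NPM_MAP = load_crosswalk(io.StringIO(CSV_TEXT))["npm"]

    # First translation step: rename the keys of a package.json-style dict.
    package_json = {"name": "example", "license": "MIT"}
    print({NPM_MAP[k]: v for k, v in package_json.items() if k in NPM_MAP})
    # {'name': 'example', 'license': 'MIT'}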
In the indexers, this key mapping is done by parsing the crosswalk table's
`CSV file`_ and using it as a map between these two vocabularies; no
format-specific code is required.

The second part is to normalize values. As language-specific metadata files
each have their own way(s) of formatting these values, we need to turn them
into the data type required by CodeMeta. This normalization makes up most of
the code of :py:mod:`swh.indexer.metadata_dictionary`.

.. _CodeMeta: https://codemeta.github.io/
.. _crosswalk table: https://codemeta.github.io/crosswalk/
.. _CSV file: https://github.com/codemeta/codemeta/blob/master/crosswalk.csv

Extrinsic metadata
------------------

The :term:`extrinsic metadata` indexer works very differently from the
:term:`intrinsic metadata` indexers we saw above. While the latter extract
metadata from software artefacts (files and directories) which are already a
core part of the archive, the former extracts such data from API calls pulled
from forges and package managers, or pushed via the :ref:`SWORD deposit `.

In order to preserve the original information verbatim, Software Heritage
itself stores the results of these calls, independently of the indexers, in
their own archive, as described in the
:ref:`extrinsic-metadata-specification`. In this section, we assume this
information is already present in the archive, but in the "raw extrinsic
metadata" form, which needs to be translated to a common vocabulary to be
useful, as with intrinsic metadata.

The common vocabulary we chose is JSON-LD, with both CodeMeta and `ForgeFed's
vocabulary`_ (including `ActivityStream's vocabulary`_).

.. _ForgeFed's vocabulary: https://forgefed.org/vocabulary.html
.. _ActivityStream's vocabulary: https://www.w3.org/TR/activitystreams-vocabulary/

Instead of the four-step architecture above, the extrinsic-metadata indexer
is standalone: it reads "raw extrinsic metadata" from the :ref:`swh-journal`,
and produces new indexed entries in the database as they arrive.

The caveat is that, while intrinsic metadata are always unambiguously
authoritative (they are contained in their own origin repository, so they
were added by the origin's "owners"), extrinsic metadata can be authored by
third parties. Support for third-party authorities is currently not
implemented for this reason, so extrinsic metadata is only indexed when
provided by the same forge/package repository as the origin the metadata is
about. Metadata on non-origin objects (typically directories) is also ignored
for this reason, for now.

Assuming the metadata was provided by such an authority, it is then passed to
the metadata mappings, which are identified by a MIME type (or custom format
name) they declare, rather than by filename.
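For instance, a mapping for a new forge would declare the formats it consumes
roughly as follows (a minimal sketch modeled on the Gitea mapping further
down in this changeset; the class name and format name are hypothetical):

.. code-block:: python

    from typing import Tuple

    from swh.indexer.metadata_dictionary.base import BaseExtrinsicMapping, JsonMapping

    class MyForgeMapping(BaseExtrinsicMapping, JsonMapping):
        """Maps my-forge's project JSON to CodeMeta."""

        name = "my-forge"
        # mapping, string_fields, etc. go here, just as for intrinsic mappings

        @classmethod
        def extrinsic_metadata_formats(cls) -> Tuple[str, ...]:
            # The format name(s) recorded on raw extrinsic metadata entries
            # that this mapping can translate.
            return ("my-forge-project-json",)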
Implementation status
---------------------

Supported intrinsic metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following sources of intrinsic metadata are supported:

* CodeMeta's `codemeta.json`_,
* Maven's `pom.xml`_,
* NPM's `package.json`_,
* Python's `PKG-INFO`_,
* Ruby's `.gemspec`_

.. _codemeta.json: https://codemeta.github.io/terms/
.. _pom.xml: https://maven.apache.org/pom.html
.. _package.json: https://docs.npmjs.com/files/package.json
.. _PKG-INFO: https://www.python.org/dev/peps/pep-0314/
.. _.gemspec: https://guides.rubygems.org/specification-reference/

Supported extrinsic metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following sources of extrinsic metadata are supported:

* GitHub's `"repo" API `__

Supported JSON-LD properties
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following terms may be found in the output of the metadata translation
(other than the ``codemeta`` mapping, which is the identity function and
therefore supports all properties):

.. program-output:: python3 -m swh.indexer.cli mapping list-terms --exclude-mapping codemeta --exclude-mapping json-sword-codemeta --exclude-mapping sword-codemeta
    :nostderr:

Tutorials
---------

The rest of this page is made of two tutorials: one to index
:term:`intrinsic metadata` (i.e. from a file in a VCS or in a tarball), and
one to index :term:`extrinsic metadata` (i.e. obtained via external means,
such as GitHub's or GitLab's APIs).

Adding support for additional ecosystem-specific intrinsic metadata
-------------------------------------------------------------------

This section will guide you through adding code to the metadata indexer to
detect and translate new metadata formats.

First, you should start by picking one of the `CodeMeta crosswalks`_. Then
create a new file in :file:`swh-indexer/swh/indexer/metadata_dictionary/`
that will contain your code, and create a new class that inherits from helper
classes, with some documentation about your indexer:

.. code-block:: python

    from .base import DictMapping, SingleFileIntrinsicMapping
    from swh.indexer.codemeta import CROSSWALK_TABLE

    class MyMapping(DictMapping, SingleFileIntrinsicMapping):
        """Dedicated class for ..."""
        name = 'my-mapping'
        filename = b'the-filename'
        mapping = CROSSWALK_TABLE['Name of the CodeMeta crosswalk']

.. _CodeMeta crosswalks: https://github.com/codemeta/codemeta/tree/master/crosswalks

And reference it from
:const:`swh.indexer.metadata_dictionary.INTRINSIC_MAPPINGS`.

Then, add a ``string_fields`` attribute, which lists all the keys whose
values are simple text values. For instance, to `translate Python
PKG-INFO`_, it's:

.. code-block:: python

    string_fields = ['name', 'version', 'description', 'summary',
                     'author', 'author-email']

These values will be automatically added to the above list of supported
terms.

.. _translate Python PKG-INFO: https://forge.softwareheritage.org/source/swh-indexer/browse/master/swh/indexer/metadata_dictionary/python.py

The last step to get your code working is to add a ``translate`` method that
takes a single byte string as argument, turns it into a Python dictionary
whose keys are those of the input document, and passes it to
``_translate_dict``.

For instance, if the input document is in JSON, it can be as simple as:

.. code-block:: python

    def translate(self, raw_content):
        # this assumes `json` is imported at the top of the module
        raw_content = raw_content.decode()  # bytes to str
        content_dict = json.loads(raw_content)  # str to dict
        return self._translate_dict(content_dict)  # convert to CodeMeta

``_translate_dict`` does the heavy work: for each of the ``string_fields``,
it looks up the term in the crosswalk table, reads the corresponding value
from ``content_dict``, and builds a CodeMeta dictionary with the
corresponding names from the crosswalk table.

One last thing before you can run your code: add it to the list in
:file:`swh-indexer/swh/indexer/metadata_dictionary/__init__.py`, so the rest
of the code is aware of it.

Now, you can run it:

.. code-block:: shell

    python3 -m swh.indexer.metadata_dictionary MyMapping path/to/input/file

and it will (hopefully) return a CodeMeta object. If it works, well done!
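You can also exercise the mapping directly from Python; a quick session might
look like this (the input content is made up, and the import path assumes you
named your module :file:`my_mapping.py`):

.. code-block:: python

    from swh.indexer.metadata_dictionary.my_mapping import MyMapping

    raw_content = b'{"name": "example-project", "description": "An example."}'
    result = MyMapping().translate(raw_content)
    print(result)  # a CodeMeta document, e.g. containing the project name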
From there, you can improve your translation code further, by adding methods
that perform more advanced conversions. For example, if there is a field
named ``license`` containing an SPDX identifier, you can convert it to a URI,
like this:

.. code-block:: python

    def normalize_license(self, s):
        if isinstance(s, str):
            return rdflib.URIRef("https://spdx.org/licenses/" + s)

This method will automatically get called by ``_translate_dict`` when it
finds a ``license`` field in ``content_dict``.

Adding support for additional ecosystem-specific extrinsic metadata
--------------------------------------------------------------------

[this section is a work in progress]

diff --git a/swh/indexer/codemeta.py b/swh/indexer/codemeta.py
index f1d00b1..d7ddb72 100644
--- a/swh/indexer/codemeta.py
+++ b/swh/indexer/codemeta.py
@@ -1,189 +1,199 @@
# Copyright (C) 2018-2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import collections
import csv
import itertools
import json
import os.path
import re

-from typing import Any, List
+from typing import Any, Dict, List, Set, TextIO, Tuple

from pyld import jsonld
import rdflib

import swh.indexer
from swh.indexer.namespaces import ACTIVITYSTREAMS, CODEMETA, FORGEFED, SCHEMA

_DATA_DIR = os.path.join(os.path.dirname(swh.indexer.__file__), "data")

CROSSWALK_TABLE_PATH = os.path.join(_DATA_DIR, "codemeta", "crosswalk.csv")

CODEMETA_CONTEXT_PATH = os.path.join(_DATA_DIR, "codemeta", "codemeta.jsonld")

with open(CODEMETA_CONTEXT_PATH) as fd:
    CODEMETA_CONTEXT = json.load(fd)

_EMPTY_PROCESSED_CONTEXT: Any = {"mappings": {}}
_PROCESSED_CODEMETA_CONTEXT = jsonld.JsonLdProcessor().process_context(
    _EMPTY_PROCESSED_CONTEXT, CODEMETA_CONTEXT, None
)

CODEMETA_CONTEXT_URL = "https://doi.org/10.5063/schema/codemeta-2.0"
CODEMETA_ALTERNATE_CONTEXT_URLS = {
    ("https://raw.githubusercontent.com/codemeta/codemeta/master/codemeta.jsonld")
}

PROPERTY_BLACKLIST = {
    # CodeMeta properties that we cannot properly represent.
    SCHEMA.softwareRequirements,
    CODEMETA.softwareSuggestions,
    # Duplicate of 'author'
    SCHEMA.creator,
}

_codemeta_field_separator = re.compile(r"\s*[,/]\s*")


def make_absolute_uri(local_name):
    """Parses codemeta.jsonld, and returns the @id of terms it defines.

    >>> make_absolute_uri("name")
    'http://schema.org/name'
    >>> make_absolute_uri("downloadUrl")
    'http://schema.org/downloadUrl'
    >>> make_absolute_uri("referencePublication")
    'https://codemeta.github.io/terms/referencePublication'
    """
    uri = jsonld.JsonLdProcessor.get_context_value(
        _PROCESSED_CODEMETA_CONTEXT, local_name, "@id"
    )
    assert uri.startswith(("@", CODEMETA, SCHEMA)), (local_name, uri)
    return uri


-def _read_crosstable(fd):
+def read_crosstable(fd: TextIO) -> Tuple[Set[str], Dict[str, Dict[str, rdflib.URIRef]]]:
+    """
+    Given a file-like object containing a `CodeMeta crosswalk table`_ (either
+    the main cross-table with all columns, or an auxiliary table with just
+    the CodeMeta column and one ecosystem-specific column), returns the set
+    of all CodeMeta terms, and a dictionary
+    ``{ecosystem: {ecosystem_term: codemeta_term}}``.
+
+    .. _CodeMeta crosswalk table: https://codemeta.github.io/crosswalk/
    def _translate_author(self, graph: Graph, author) -> Optional[BNode]:
        if not isinstance(author, dict):
            return None
        node = BNode()
        graph.add((node, RDF.type, SCHEMA.Person))
        if isinstance(author.get("name"), str):
            graph.add((node, SCHEMA.name, Literal(author["name"])))
        if isinstance(author.get("email"), str):
            graph.add((node, SCHEMA.email, Literal(author["email"])))
        return node

    def translate_authors(self, graph: Graph, root: URIRef, authors) -> None:
        add_map(graph, root, SCHEMA.author, self._translate_author, authors)

diff --git a/swh/indexer/metadata_dictionary/dart.py b/swh/indexer/metadata_dictionary/dart.py
index ec6dfb2..2fa4417 100644
--- a/swh/indexer/metadata_dictionary/dart.py
+++ b/swh/indexer/metadata_dictionary/dart.py
@@ -1,75 +1,75 @@
# Copyright (C) 2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import os.path
import re

from rdflib import RDF, BNode, Graph, Literal, URIRef

-from swh.indexer.codemeta import _DATA_DIR, _read_crosstable
+from swh.indexer.codemeta import _DATA_DIR, read_crosstable
from swh.indexer.namespaces import SCHEMA

from .base import YamlMapping
from .utils import add_map

SPDX = URIRef("https://spdx.org/licenses/")

PUB_TABLE_PATH = os.path.join(_DATA_DIR, "pubspec.csv")

with open(PUB_TABLE_PATH) as fd:
-    (CODEMETA_TERMS, PUB_TABLE) = _read_crosstable(fd)
+    (CODEMETA_TERMS, PUB_TABLE) = read_crosstable(fd)


def name_to_person(name):
    return {
        "@type": SCHEMA.Person,
        SCHEMA.name: name,
    }


class PubspecMapping(YamlMapping):
    name = "pubspec"
    filename = b"pubspec.yaml"
    mapping = PUB_TABLE["Pubspec"]
    string_fields = [
        "repository",
        "keywords",
        "description",
        "name",
        "issue_tracker",
        "platforms",
        "license",  # license will only be used with the SPDX Identifier
    ]
    uri_fields = ["homepage"]

    def normalize_license(self, s):
        if isinstance(s, str):
            return SPDX + s

    def _translate_author(self, graph, s):
        name_email_re = re.compile("(?P<name>.*?)( <(?P<email>.*)>)")
        if isinstance(s, str):
            author = BNode()
            graph.add((author, RDF.type, SCHEMA.Person))
            match = name_email_re.search(s)
            if match:
                name = match.group("name")
                email = match.group("email")
                graph.add((author, SCHEMA.email, Literal(email)))
            else:
                name = s
            graph.add((author, SCHEMA.name, Literal(name)))
            return author

    def translate_author(self, graph: Graph, root, s) -> None:
        add_map(graph, root, SCHEMA.author, self._translate_author, [s])

    def translate_authors(self, graph: Graph, root, authors) -> None:
        if isinstance(authors, list):
            add_map(graph, root, SCHEMA.author, self._translate_author, authors)

diff --git a/swh/indexer/metadata_dictionary/gitea.py b/swh/indexer/metadata_dictionary/gitea.py
index 748b556..4f6e648 100644
--- a/swh/indexer/metadata_dictionary/gitea.py
+++ b/swh/indexer/metadata_dictionary/gitea.py
@@ -1,124 +1,124 @@
# Copyright (C) 2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import os
from typing import Any, Tuple

from rdflib import RDF, BNode, Graph, Literal, URIRef

-from swh.indexer.codemeta import _DATA_DIR, _read_crosstable
+from swh.indexer.codemeta import _DATA_DIR, read_crosstable
from swh.indexer.namespaces import ACTIVITYSTREAMS, FORGEFED, SCHEMA

from .base import BaseExtrinsicMapping, JsonMapping, produce_terms
from .utils import prettyprint_graph  # noqa

SPDX = URIRef("https://spdx.org/licenses/")
GITEA_TABLE_PATH = os.path.join(_DATA_DIR, "Gitea.csv")

with open(GITEA_TABLE_PATH) as fd:
-    (CODEMETA_TERMS, GITEA_TABLE) = _read_crosstable(fd)
+    (CODEMETA_TERMS, GITEA_TABLE) = read_crosstable(fd)


class GiteaMapping(BaseExtrinsicMapping, JsonMapping):
    name = "gitea"
    mapping = GITEA_TABLE["Gitea"]
    uri_fields = [
        "website",
        "clone_url",
    ]
    date_fields = [
        "created_at",
        "updated_at",
    ]
    string_fields = [
        "name",
        "full_name",
        "languages",
        "description",
    ]

    @classmethod
    def extrinsic_metadata_formats(cls) -> Tuple[str, ...]:
        return ("gitea-project-json", "gogs-project-json")

    def extra_translation(self, graph, root, content_dict):
        graph.remove((root, RDF.type, SCHEMA.SoftwareSourceCode))
        graph.add((root, RDF.type, FORGEFED.Repository))

    def get_root_uri(self, content_dict: dict) -> URIRef:
        if isinstance(content_dict.get("html_url"), str):
            return URIRef(content_dict["html_url"])
        else:
            raise ValueError(
                f"Gitea/Gogs metadata has invalid/missing html_url: {content_dict}"
            )

    @produce_terms(FORGEFED.forks, ACTIVITYSTREAMS.totalItems)
    def translate_forks_count(self, graph: Graph, root: BNode, v: Any) -> None:
        """
        >>> graph = Graph()
        >>> root = URIRef("http://example.org/test-software")
        >>> GiteaMapping().translate_forks_count(graph, root, 42)
        >>> prettyprint_graph(graph, root)
        {
            "@id": ...,
            "https://forgefed.org/ns#forks": {
                "@type": "https://www.w3.org/ns/activitystreams#OrderedCollection",
                "https://www.w3.org/ns/activitystreams#totalItems": 42
            }
        }
        """
        if isinstance(v, int):
            collection = BNode()
            graph.add((root, FORGEFED.forks, collection))
            graph.add((collection, RDF.type, ACTIVITYSTREAMS.OrderedCollection))
            graph.add((collection, ACTIVITYSTREAMS.totalItems, Literal(v)))

    @produce_terms(ACTIVITYSTREAMS.likes, ACTIVITYSTREAMS.totalItems)
    def translate_stars_count(self, graph: Graph, root: BNode, v: Any) -> None:
        """
        >>> graph = Graph()
        >>> root = URIRef("http://example.org/test-software")
        >>> GiteaMapping().translate_stars_count(graph, root, 42)
        >>> prettyprint_graph(graph, root)
        {
            "@id": ...,
            "https://www.w3.org/ns/activitystreams#likes": {
                "@type": "https://www.w3.org/ns/activitystreams#Collection",
                "https://www.w3.org/ns/activitystreams#totalItems": 42
            }
        }
        """
        if isinstance(v, int):
            collection = BNode()
            graph.add((root, ACTIVITYSTREAMS.likes, collection))
            graph.add((collection, RDF.type, ACTIVITYSTREAMS.Collection))
            graph.add((collection, ACTIVITYSTREAMS.totalItems, Literal(v)))

    @produce_terms(ACTIVITYSTREAMS.followers, ACTIVITYSTREAMS.totalItems)
    def translate_watchers_count(self, graph: Graph, root: BNode, v: Any) -> None:
        """
        >>> graph = Graph()
        >>> root = URIRef("http://example.org/test-software")
        >>> GiteaMapping().translate_watchers_count(graph, root, 42)
        >>> prettyprint_graph(graph, root)
        {
            "@id": ...,
            "https://www.w3.org/ns/activitystreams#followers": {
                "@type": "https://www.w3.org/ns/activitystreams#Collection",
                "https://www.w3.org/ns/activitystreams#totalItems": 42
            }
        }
        """
        if isinstance(v, int):
            collection = BNode()
            graph.add((root, ACTIVITYSTREAMS.followers, collection))
            graph.add((collection, RDF.type, ACTIVITYSTREAMS.Collection))
            graph.add((collection, ACTIVITYSTREAMS.totalItems, Literal(v)))

diff --git a/swh/indexer/metadata_dictionary/nuget.py b/swh/indexer/metadata_dictionary/nuget.py
index 62f7ea9..087ec0e 100644
--- a/swh/indexer/metadata_dictionary/nuget.py
+++ b/swh/indexer/metadata_dictionary/nuget.py
@@ -1,95 +1,95 @@
# Copyright (C) 2022 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import os.path
import re
from typing import Any, Dict, List

from rdflib import RDF, BNode, Graph, Literal, URIRef

-from swh.indexer.codemeta import _DATA_DIR, _read_crosstable
+from swh.indexer.codemeta import _DATA_DIR, read_crosstable
from swh.indexer.namespaces import SCHEMA
from swh.indexer.storage.interface import Sha1

from .base import BaseIntrinsicMapping, DirectoryLsEntry, XmlMapping
from .utils import add_list

NUGET_TABLE_PATH = os.path.join(_DATA_DIR, "nuget.csv")

with open(NUGET_TABLE_PATH) as fd:
-    (CODEMETA_TERMS, NUGET_TABLE) = _read_crosstable(fd)
+    (CODEMETA_TERMS, NUGET_TABLE) = read_crosstable(fd)

SPDX = URIRef("https://spdx.org/licenses/")


class NuGetMapping(XmlMapping, BaseIntrinsicMapping):
    """
    dedicated class for NuGet (.nuspec) mapping and translation
    """

    name = "nuget"
    mapping = NUGET_TABLE["NuGet"]
    mapping["copyright"] = URIRef("http://schema.org/copyrightNotice")
    mapping["language"] = URIRef("http://schema.org/inLanguage")
    string_fields = [
        "description",
        "version",
        "name",
        "tags",
        "license",
        "summary",
        "copyright",
        "language",
    ]
    uri_fields = ["projectUrl", "licenseUrl"]

    @classmethod
    def detect_metadata_files(cls, file_entries: List[DirectoryLsEntry]) -> List[Sha1]:
        for entry in file_entries:
            if entry["name"].endswith(b".nuspec"):
                return [entry["sha1"]]
        return []

    def _translate_dict(self, d: Dict[str, Any]) -> Dict[str, Any]:
        return super()._translate_dict(d.get("package", {}).get("metadata", {}))

    def translate_repository(self, graph, root, v):
        if isinstance(v, dict) and isinstance(v["@url"], str):
            codemeta_key = URIRef(self.mapping["repository.url"])
            graph.add((root, codemeta_key, URIRef(v["@url"])))

    def normalize_license(self, v):
        if isinstance(v, dict) and v["@type"] == "expression":
            license_string = v["#text"]
            if not bool(
                re.search(r" with |\(|\)| and ", license_string, re.IGNORECASE)
            ):
                return [
                    SPDX + license_type.strip()
                    for license_type in re.split(
                        r" or ", license_string, flags=re.IGNORECASE
                    )
                ]
            else:
                return None

    def translate_authors(self, graph: Graph, root, s):
        if isinstance(s, str):
            authors = []
            for author_name in s.split(","):
                author_name = author_name.strip()
                author = BNode()
                graph.add((author, RDF.type, SCHEMA.Person))
                graph.add((author, SCHEMA.name, Literal(author_name)))
                authors.append(author)
            add_list(graph, root, SCHEMA.author, authors)

    def translate_releaseNotes(self, graph: Graph, root, s):
        if isinstance(s, str):
            graph.add((root, SCHEMA.releaseNotes, Literal(s)))

    def normalize_tags(self, s):
        if isinstance(s, str):
            return [Literal(tag) for tag in s.split(" ")]