
diff --git a/PKG-INFO b/PKG-INFO
index bbc9a86..4646877 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,71 +1,71 @@
Metadata-Version: 2.1
Name: swh.indexer
-Version: 0.2.0
+Version: 0.2.1
Summary: Software Heritage Content Indexer
Home-page: https://forge.softwareheritage.org/diffusion/78/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-indexer/
Description: swh-indexer
============
Tools to compute multiple indexes on SWH's raw contents:
- content:
- mimetype
- ctags
- language
- fossology-license
- metadata
- revision:
- metadata
An indexer is in charge of:
- looking up objects
- extracting information from those objects
- store those information in the swh-indexer db
There are multiple indexers working on different object types:
- content indexer: works with content sha1 hashes
- revision indexer: works with revision sha1 hashes
- origin indexer: works with origin identifiers
Indexation procedure:
- receive batch of ids
- retrieve the associated data depending on object type
- compute for that object some index
- store the result to swh's storage
Current content indexers:
- mimetype (queue swh_indexer_content_mimetype): detect the encoding
and mimetype
- language (queue swh_indexer_content_language): detect the
programming language
- ctags (queue swh_indexer_content_ctags): compute tags information
- fossology-license (queue swh_indexer_fossology_license): compute the
license
- metadata: translate file into translated_metadata dict
Current revision indexers:
- metadata: detects files containing metadata and retrieves translated_metadata
in content_metadata table in storage or run content indexer to translate
files.
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
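
Note: the package description above walks through the indexation procedure in four steps (receive a batch of ids, retrieve the associated data, compute an index, store the result). As a rough, non-authoritative illustration of that loop -- not the actual swh.indexer API -- a content indexer reduces to something like the sketch below, where objstorage and idx_storage are hypothetical stand-ins for the real backends:

    from typing import Any, Dict, Iterable, List

    def index_contents(sha1s: Iterable[bytes], objstorage, idx_storage) -> List[Dict[str, Any]]:
        """Hypothetical content-indexer loop mirroring the four steps above."""
        results = []
        for sha1 in sha1s:                                    # receive batch of ids
            raw = objstorage.get(sha1)                        # retrieve the associated data
            results.append({"id": sha1, "length": len(raw)})  # compute some index for that object
        idx_storage.store(results)                            # store the result to swh's storage
        return results
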
diff --git a/requirements-swh.txt b/requirements-swh.txt
index 39e073c..0c8b2f9 100644
--- a/requirements-swh.txt
+++ b/requirements-swh.txt
@@ -1,6 +1,6 @@
-swh.core[db,http] >= 0.2.2
+swh.core[db,http] >= 0.2.3
swh.model >= 0.0.15
swh.objstorage >= 0.0.43
swh.scheduler >= 0.5.2
swh.storage >= 0.12.0
swh.journal >= 0.1.0
diff --git a/swh.indexer.egg-info/PKG-INFO b/swh.indexer.egg-info/PKG-INFO
index bbc9a86..4646877 100644
--- a/swh.indexer.egg-info/PKG-INFO
+++ b/swh.indexer.egg-info/PKG-INFO
@@ -1,71 +1,71 @@
Metadata-Version: 2.1
Name: swh.indexer
-Version: 0.2.0
+Version: 0.2.1
Summary: Software Heritage Content Indexer
Home-page: https://forge.softwareheritage.org/diffusion/78/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-indexer
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-indexer/
Description: swh-indexer
============
Tools to compute multiple indexes on SWH's raw contents:
- content:
- mimetype
- ctags
- language
- fossology-license
- metadata
- revision:
- metadata
An indexer is in charge of:
- looking up objects
- extracting information from those objects
- store those information in the swh-indexer db
There are multiple indexers working on different object types:
- content indexer: works with content sha1 hashes
- revision indexer: works with revision sha1 hashes
- origin indexer: works with origin identifiers
Indexation procedure:
- receive batch of ids
- retrieve the associated data depending on object type
- compute for that object some index
- store the result to swh's storage
Current content indexers:
- mimetype (queue swh_indexer_content_mimetype): detect the encoding
and mimetype
- language (queue swh_indexer_content_language): detect the
programming language
- ctags (queue swh_indexer_content_ctags): compute tags information
- fossology-license (queue swh_indexer_fossology_license): compute the
license
- metadata: translate file into translated_metadata dict
Current revision indexers:
- metadata: detects files containing metadata and retrieves translated_metadata
in content_metadata table in storage or run content indexer to translate
files.
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: testing
diff --git a/swh.indexer.egg-info/requires.txt b/swh.indexer.egg-info/requires.txt
index 4d17096..2a308de 100644
--- a/swh.indexer.egg-info/requires.txt
+++ b/swh.indexer.egg-info/requires.txt
@@ -1,19 +1,19 @@
vcversioner
click
python-magic>=0.4.13
pyld
xmltodict
-swh.core[db,http]>=0.2.2
+swh.core[db,http]>=0.2.3
swh.model>=0.0.15
swh.objstorage>=0.0.43
swh.scheduler>=0.5.2
swh.storage>=0.12.0
swh.journal>=0.1.0
[testing]
confluent-kafka
pytest
pytest-mock
hypothesis>=3.11.0
swh.scheduler[testing]>=0.5.0
swh.storage[testing]>=0.10.0
diff --git a/swh/indexer/origin_head.py b/swh/indexer/origin_head.py
index dde9be3..1363a9e 100644
--- a/swh/indexer/origin_head.py
+++ b/swh/indexer/origin_head.py
@@ -1,159 +1,158 @@
# Copyright (C) 2018-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
from typing import List, Tuple, Any, Dict, Union
import re
import click
import logging
from swh.indexer.indexer import OriginIndexer
+from swh.model.model import SnapshotBranch, TargetType
from swh.storage.algos.origin import origin_get_latest_visit_status
+from swh.storage.algos.snapshot import snapshot_get_all_branches
class OriginHeadIndexer(OriginIndexer):
"""Origin-level indexer.
This indexer is in charge of looking up the revision that acts as the
"head" of an origin.
In git, this is usually the commit pointed to by the 'master' branch."""
USE_TOOLS = False
def persist_index_computations(
self, results: Any, policy_update: str
) -> Dict[str, int]:
"""Do nothing. The indexer's results are not persistent, they
should only be piped to another indexer."""
return {}
# Dispatch
def index(self, origin_url):
visit_and_status = origin_get_latest_visit_status(
self.storage, origin_url, allowed_statuses=["full"], require_snapshot=True
)
if not visit_and_status:
return None
visit, visit_status = visit_and_status
- latest_snapshot = self.storage.snapshot_get(visit_status.snapshot)
- if latest_snapshot is None:
+ snapshot = snapshot_get_all_branches(self.storage, visit_status.snapshot)
+ if snapshot is None:
return None
method = getattr(
self, "_try_get_%s_head" % visit.type, self._try_get_head_generic
)
- rev_id = method(latest_snapshot)
+ rev_id = method(snapshot.branches)
if rev_id is not None:
return {
"origin_url": origin_url,
"revision_id": rev_id,
}
# could not find a head revision
return None
# Tarballs
_archive_filename_re = re.compile(
rb"^"
rb"(?P<pkgname>.*)[-_]"
rb"(?P<version>[0-9]+(\.[0-9])*)"
rb"(?P<preversion>[-+][a-zA-Z0-9.~]+?)?"
rb"(?P<extension>(\.[a-zA-Z0-9]+)+)"
rb"$"
)
@classmethod
- def _parse_version(cls: Any, filename: str) -> Tuple[Union[float, int], ...]:
+ def _parse_version(cls: Any, filename: bytes) -> Tuple[Union[float, int], ...]:
"""Extracts the release version from an archive filename,
to get an ordering whose maximum is likely to be the last
version of the software
>>> OriginHeadIndexer._parse_version(b'foo')
(-inf,)
>>> OriginHeadIndexer._parse_version(b'foo.tar.gz')
(-inf,)
>>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1.tar.gz')
(0, 0, 1, 0)
>>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1-beta2.tar.gz')
(0, 0, 1, -1, 'beta2')
>>> OriginHeadIndexer._parse_version(b'gnu-hello-0.0.1+foobar.tar.gz')
(0, 0, 1, 1, 'foobar')
"""
res = cls._archive_filename_re.match(filename)
if res is None:
return (float("-infinity"),)
version = [int(n) for n in res.group("version").decode().split(".")]
if res.group("preversion") is None:
version.append(0)
else:
preversion = res.group("preversion").decode()
if preversion.startswith("-"):
version.append(-1)
version.append(preversion[1:])
elif preversion.startswith("+"):
version.append(1)
version.append(preversion[1:])
else:
assert False, res.group("preversion")
return tuple(version)
- def _try_get_ftp_head(self, snapshot: Dict[str, Any]) -> Any:
- archive_names = list(snapshot["branches"])
+ def _try_get_ftp_head(self, branches: Dict[bytes, SnapshotBranch]) -> Any:
+ archive_names = list(branches)
max_archive_name = max(archive_names, key=self._parse_version)
- r = self._try_resolve_target(snapshot["branches"], max_archive_name)
+ r = self._try_resolve_target(branches, max_archive_name)
return r
# Generic
- def _try_get_head_generic(self, snapshot: Dict[str, Any]) -> Any:
+ def _try_get_head_generic(self, branches: Dict[bytes, SnapshotBranch]) -> Any:
# Works on 'deposit', 'pypi', and VCSs.
- try:
- branches = snapshot["branches"]
- except KeyError:
- return None
- else:
- return self._try_resolve_target(
- branches, b"HEAD"
- ) or self._try_resolve_target(branches, b"master")
+ return self._try_resolve_target(branches, b"HEAD") or self._try_resolve_target(
+ branches, b"master"
+ )
- def _try_resolve_target(self, branches: Dict, target_name: bytes) -> Any:
+ def _try_resolve_target(
+ self, branches: Dict[bytes, SnapshotBranch], branch_name: bytes
+ ) -> Any:
try:
- target = branches[target_name]
- if target is None:
+ branch = branches[branch_name]
+ if branch is None:
return None
- while target["target_type"] == "alias":
- target = branches[target["target"]]
- if target is None:
+ while branch.target_type == TargetType.ALIAS:
+ branch = branches[branch.target]
+ if branch is None:
return None
- if target["target_type"] == "revision":
- return target["target"]
- elif target["target_type"] == "content":
+ if branch.target_type == TargetType.REVISION:
+ return branch.target
+ elif branch.target_type == TargetType.CONTENT:
return None # TODO
- elif target["target_type"] == "directory":
+ elif branch.target_type == TargetType.DIRECTORY:
return None # TODO
- elif target["target_type"] == "release":
+ elif branch.target_type == TargetType.RELEASE:
return None # TODO
else:
- assert False
+ assert False, branch
except KeyError:
return None
@click.command()
@click.option(
"--origins", "-i", help='Origins to lookup, in the "type+url" format', multiple=True
)
def main(origins: List[str]) -> None:
rev_metadata_indexer = OriginHeadIndexer()
rev_metadata_indexer.run(origins)
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
main()
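
Note: the origin_head.py rewrite above drops the raw snapshot dicts in favour of swh.model.model objects: branches are now a Dict[bytes, SnapshotBranch] and alias resolution compares TargetType members instead of strings. A minimal sketch of the new resolution logic, with branch values invented for illustration (error handling for missing branches is omitted; the real _try_resolve_target wraps this in a try/except KeyError):

    from swh.model.model import SnapshotBranch, TargetType

    branches = {
        b"HEAD": SnapshotBranch(target=b"refs/heads/master", target_type=TargetType.ALIAS),
        b"refs/heads/master": SnapshotBranch(
            target=bytes.fromhex("c6201cb1b9b9df9a7542f9665c3b5dfab85e9775"),
            target_type=TargetType.REVISION,
        ),
    }

    branch = branches[b"HEAD"]
    while branch.target_type == TargetType.ALIAS:   # follow the alias chain, as _try_resolve_target does
        branch = branches[branch.target]
    if branch.target_type == TargetType.REVISION:
        head_revision_id = branch.target            # the revision id OriginHeadIndexer reports
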
diff --git a/swh/indexer/rehash.py b/swh/indexer/rehash.py
index 06d907a..5350bda 100644
--- a/swh/indexer/rehash.py
+++ b/swh/indexer/rehash.py
@@ -1,192 +1,192 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import logging
import itertools
from collections import defaultdict
-from typing import Dict, Any, Tuple, List, Generator
+from typing import Any, Dict, Generator, List, Optional, Tuple
from swh.core import utils
from swh.core.config import SWHConfig
from swh.model import hashutil
+from swh.model.model import Content
from swh.objstorage import get_objstorage
from swh.objstorage.exc import ObjNotFoundError
from swh.storage import get_storage
class RecomputeChecksums(SWHConfig):
"""Class in charge of (re)computing content's hashes.
Hashes to compute are defined across 2 configuration options:
compute_checksums ([str])
list of hash algorithms that
py:func:`swh.model.hashutil.MultiHash.from_data` function should
be able to deal with. For variable-length checksums, a desired
checksum length should also be provided. Their format is
<algorithm's name>:<variable-length> e.g: blake2:512
recompute_checksums (bool)
a boolean to notify that we also want to recompute potential existing
hashes specified in compute_checksums. Default to False.
"""
DEFAULT_CONFIG = {
# The storage to read from or update metadata to
"storage": (
"dict",
{"cls": "remote", "args": {"url": "http://localhost:5002/"},},
),
# The objstorage to read contents' data from
"objstorage": (
"dict",
{
"cls": "pathslicing",
"args": {
"root": "/srv/softwareheritage/objects",
"slicing": "0:2/2:4/4:6",
},
},
),
# the set of checksums that should be computed.
# Examples: 'sha1_git', 'blake2b512', 'blake2s256'
"compute_checksums": ("list[str]", []),
# whether checksums that already exist in the DB should be
# recomputed/updated or left untouched
"recompute_checksums": ("bool", False),
# Number of contents to retrieve blobs at the same time
"batch_size_retrieve_content": ("int", 10),
# Number of contents to update at the same time
"batch_size_update": ("int", 100),
}
CONFIG_BASE_FILENAME = "indexer/rehash"
def __init__(self) -> None:
self.config = self.parse_config_file()
self.storage = get_storage(**self.config["storage"])
self.objstorage = get_objstorage(**self.config["objstorage"])
self.compute_checksums = self.config["compute_checksums"]
self.recompute_checksums = self.config["recompute_checksums"]
self.batch_size_retrieve_content = self.config["batch_size_retrieve_content"]
self.batch_size_update = self.config["batch_size_update"]
self.log = logging.getLogger("swh.indexer.rehash")
if not self.compute_checksums:
raise ValueError("Checksums list should not be empty.")
def _read_content_ids(
self, contents: List[Dict[str, Any]]
) -> Generator[bytes, Any, None]:
"""Read the content identifiers from the contents.
"""
for c in contents:
h = c["sha1"]
if isinstance(h, str):
h = hashutil.hash_to_bytes(h)
yield h
def get_new_contents_metadata(
self, all_contents: List[Dict[str, Any]]
) -> Generator[Tuple[Dict[str, Any], List[Any]], Any, None]:
"""Retrieve raw contents and compute new checksums on the
contents. Unknown or corrupted contents are skipped.
Args:
all_contents: List of contents as dictionary with
the necessary primary keys
Yields:
tuple: tuple of (content to update, list of checksums computed)
"""
content_ids = self._read_content_ids(all_contents)
for contents in utils.grouper(content_ids, self.batch_size_retrieve_content):
contents_iter = itertools.tee(contents, 2)
try:
- content_metadata: Dict[
- bytes, List[Dict]
- ] = self.storage.content_get_metadata( # noqa
- [s for s in contents_iter[0]]
+ sha1s = [s for s in contents_iter[0]]
+ content_metadata: List[Optional[Content]] = self.storage.content_get(
+ sha1s
)
except Exception:
self.log.exception("Problem when reading contents metadata.")
continue
- for sha1, content_dicts in content_metadata.items():
- if not content_dicts:
+ for sha1, content_model in zip(sha1s, content_metadata):
+ if not content_model:
continue
- content: Dict = content_dicts[0]
+ content: Dict = content_model.to_dict()
# Recompute checksums provided in compute_checksums options
if self.recompute_checksums:
checksums_to_compute = list(self.compute_checksums)
else:
# Compute checksums provided in compute_checksums
# options not already defined for that content
checksums_to_compute = [
h for h in self.compute_checksums if not content.get(h)
]
if not checksums_to_compute: # Nothing to recompute
continue
try:
raw_content = self.objstorage.get(sha1)
except ObjNotFoundError:
self.log.warning("Content %s not found in objstorage!", sha1)
continue
content_hashes = hashutil.MultiHash.from_data(
raw_content, hash_names=checksums_to_compute
).digest()
content.update(content_hashes)
yield content, checksums_to_compute
def run(self, contents: List[Dict[str, Any]]) -> Dict:
"""Given a list of content:
- (re)compute a given set of checksums on contents available in our
object storage
- update those contents with the new metadata
Args:
contents: contents as dictionary with necessary keys.
key present in such dictionary should be the ones defined in
the 'primary_key' option.
Returns:
A summary dict with key 'status', task' status and 'count' the
number of updated contents.
"""
status = "uneventful"
count = 0
for data in utils.grouper(
self.get_new_contents_metadata(contents), self.batch_size_update
):
groups: Dict[str, List[Any]] = defaultdict(list)
for content, keys_to_update in data:
keys_str = ",".join(keys_to_update)
groups[keys_str].append(content)
for keys_to_update, contents in groups.items():
keys: List[str] = keys_to_update.split(",")
try:
self.storage.content_update(contents, keys=keys)
count += len(contents)
status = "eventful"
except Exception:
self.log.exception("Problem during update.")
continue
return {
"status": status,
"count": count,
}
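
Note: in rehash.py, the old content_get_metadata call (which returned a dict keyed by sha1) is replaced by storage.content_get(), which returns one Optional[Content] per requested id, in request order; results are therefore paired back to their sha1s with zip(). A minimal sketch of that pattern, assuming storage is any already-configured swh.storage instance:

    from typing import Any, Dict, List, Optional

    from swh.model.model import Content

    def known_content_dicts(storage: Any, sha1s: List[bytes]) -> List[Dict]:
        """Pair content_get() results with the requested sha1s, skipping unknown ones."""
        contents: List[Optional[Content]] = storage.content_get(sha1s)
        rows = []
        for sha1, content in zip(sha1s, contents):
            if content is None:   # unknown content: skipped, as get_new_contents_metadata does
                continue
            rows.append(content.to_dict())
        return rows
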
diff --git a/swh/indexer/storage/__init__.py b/swh/indexer/storage/__init__.py
index c023d5d..7096229 100644
--- a/swh/indexer/storage/__init__.py
+++ b/swh/indexer/storage/__init__.py
@@ -1,649 +1,649 @@
# Copyright (C) 2015-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import json
import psycopg2
import psycopg2.pool
from collections import defaultdict, Counter
from typing import Dict, List, Optional
+from swh.core.db.common import db_transaction_generator, db_transaction
from swh.model.hashutil import hash_to_bytes, hash_to_hex
from swh.model.model import SHA1_SIZE
-from swh.storage.common import db_transaction_generator, db_transaction
from swh.storage.exc import StorageDBError
from swh.storage.utils import get_partition_bounds_bytes
from .interface import PagedResult, Sha1
from . import converters
from .db import Db
from .exc import IndexerStorageArgumentException, DuplicateId
from .metrics import process_metrics, send_metric, timed
INDEXER_CFG_KEY = "indexer_storage"
MAPPING_NAMES = ["codemeta", "gemspec", "maven", "npm", "pkg-info"]
def get_indexer_storage(cls, args):
"""Get an indexer storage object of class `storage_class` with
arguments `storage_args`.
Args:
cls (str): storage's class, either 'local' or 'remote'
args (dict): dictionary of arguments passed to the
storage class constructor
Returns:
an instance of swh.indexer's storage (either local or remote)
Raises:
ValueError if passed an unknown storage class.
"""
if cls == "remote":
from .api.client import RemoteStorage as IndexerStorage
elif cls == "local":
from . import IndexerStorage
elif cls == "memory":
from .in_memory import IndexerStorage
else:
raise ValueError("Unknown indexer storage class `%s`" % cls)
return IndexerStorage(**args)
def check_id_duplicates(data):
"""
If any two dictionaries in `data` have the same id, raises
a `ValueError`.
Values associated to the key must be hashable.
Args:
data (List[dict]): List of dictionaries to be inserted
>>> check_id_duplicates([
... {'id': 'foo', 'data': 'spam'},
... {'id': 'bar', 'data': 'egg'},
... ])
>>> check_id_duplicates([
... {'id': 'foo', 'data': 'spam'},
... {'id': 'foo', 'data': 'egg'},
... ])
Traceback (most recent call last):
...
swh.indexer.storage.exc.DuplicateId: ['foo']
"""
counter = Counter(item["id"] for item in data)
duplicates = [id_ for (id_, count) in counter.items() if count >= 2]
if duplicates:
raise DuplicateId(duplicates)
class IndexerStorage:
"""SWH Indexer Storage
"""
def __init__(self, db, min_pool_conns=1, max_pool_conns=10):
"""
Args:
db_conn: either a libpq connection string, or a psycopg2 connection
"""
try:
if isinstance(db, psycopg2.extensions.connection):
self._pool = None
self._db = Db(db)
else:
self._pool = psycopg2.pool.ThreadedConnectionPool(
min_pool_conns, max_pool_conns, db
)
self._db = None
except psycopg2.OperationalError as e:
raise StorageDBError(e)
def get_db(self):
if self._db:
return self._db
return Db.from_pool(self._pool)
def put_db(self, db):
if db is not self._db:
db.put_conn()
@timed
@db_transaction()
def check_config(self, *, check_write, db=None, cur=None):
# Check permissions on one of the tables
if check_write:
check = "INSERT"
else:
check = "SELECT"
cur.execute(
"select has_table_privilege(current_user, 'content_mimetype', %s)", # noqa
(check,),
)
return cur.fetchone()[0]
@timed
@db_transaction_generator()
def content_mimetype_missing(self, mimetypes, db=None, cur=None):
for obj in db.content_mimetype_missing_from_list(mimetypes, cur):
yield obj[0]
@timed
@db_transaction()
def get_partition(
self,
indexer_type: str,
indexer_configuration_id: int,
partition_id: int,
nb_partitions: int,
page_token: Optional[str] = None,
limit: int = 1000,
with_textual_data=False,
db=None,
cur=None,
) -> PagedResult[Sha1]:
"""Retrieve ids of content with `indexer_type` within within partition partition_id
bound by limit.
Args:
**indexer_type**: Type of data content to index (mimetype, language, etc...)
**indexer_configuration_id**: The tool used to index data
**partition_id**: index of the partition to fetch
**nb_partitions**: total number of partitions to split into
**page_token**: opaque token used for pagination
**limit**: Limit result (default to 1000)
**with_textual_data** (bool): Deal with only textual content (True) or all
content (all contents by defaults, False)
Raises:
IndexerStorageArgumentException for;
- limit to None
- wrong indexer_type provided
Returns:
PagedResult of Sha1. If next_page_token is None, there is no more data to
fetch
"""
if limit is None:
raise IndexerStorageArgumentException("limit should not be None")
if indexer_type not in db.content_indexer_names:
err = f"Wrong type. Should be one of [{','.join(db.content_indexer_names)}]"
raise IndexerStorageArgumentException(err)
start, end = get_partition_bounds_bytes(partition_id, nb_partitions, SHA1_SIZE)
if page_token is not None:
start = hash_to_bytes(page_token)
if end is None:
end = b"\xff" * SHA1_SIZE
next_page_token: Optional[str] = None
ids = [
row[0]
for row in db.content_get_range(
indexer_type,
start,
end,
indexer_configuration_id,
limit=limit + 1,
with_textual_data=with_textual_data,
cur=cur,
)
]
if len(ids) >= limit:
next_page_token = hash_to_hex(ids[-1])
ids = ids[:limit]
assert len(ids) <= limit
return PagedResult(results=ids, next_page_token=next_page_token)
@timed
@db_transaction()
def content_mimetype_get_partition(
self,
indexer_configuration_id: int,
partition_id: int,
nb_partitions: int,
page_token: Optional[str] = None,
limit: int = 1000,
db=None,
cur=None,
) -> PagedResult[Sha1]:
return self.get_partition(
"mimetype",
indexer_configuration_id,
partition_id,
nb_partitions,
page_token=page_token,
limit=limit,
db=db,
cur=cur,
)
@timed
@process_metrics
@db_transaction()
def content_mimetype_add(
self, mimetypes: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
"""Add mimetypes to the storage (if conflict_update is True, this will
override existing data if any).
Returns:
A dict with the number of new elements added to the storage.
"""
check_id_duplicates(mimetypes)
mimetypes.sort(key=lambda m: m["id"])
db.mktemp_content_mimetype(cur)
db.copy_to(
mimetypes,
"tmp_content_mimetype",
["id", "mimetype", "encoding", "indexer_configuration_id"],
cur,
)
count = db.content_mimetype_add_from_temp(conflict_update, cur)
return {"content_mimetype:add": count}
@timed
@db_transaction_generator()
def content_mimetype_get(self, ids, db=None, cur=None):
for c in db.content_mimetype_get_from_list(ids, cur):
yield converters.db_to_mimetype(dict(zip(db.content_mimetype_cols, c)))
@timed
@db_transaction_generator()
def content_language_missing(self, languages, db=None, cur=None):
for obj in db.content_language_missing_from_list(languages, cur):
yield obj[0]
@timed
@db_transaction_generator()
def content_language_get(self, ids, db=None, cur=None):
for c in db.content_language_get_from_list(ids, cur):
yield converters.db_to_language(dict(zip(db.content_language_cols, c)))
@timed
@process_metrics
@db_transaction()
def content_language_add(
self, languages: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(languages)
languages.sort(key=lambda m: m["id"])
db.mktemp_content_language(cur)
# empty language is mapped to 'unknown'
db.copy_to(
(
{
"id": lang["id"],
"lang": "unknown" if not lang["lang"] else lang["lang"],
"indexer_configuration_id": lang["indexer_configuration_id"],
}
for lang in languages
),
"tmp_content_language",
["id", "lang", "indexer_configuration_id"],
cur,
)
count = db.content_language_add_from_temp(conflict_update, cur)
return {"content_language:add": count}
@timed
@db_transaction_generator()
def content_ctags_missing(self, ctags, db=None, cur=None):
for obj in db.content_ctags_missing_from_list(ctags, cur):
yield obj[0]
@timed
@db_transaction_generator()
def content_ctags_get(self, ids, db=None, cur=None):
for c in db.content_ctags_get_from_list(ids, cur):
yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, c)))
@timed
@process_metrics
@db_transaction()
def content_ctags_add(
self, ctags: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(ctags)
ctags.sort(key=lambda m: m["id"])
def _convert_ctags(__ctags):
"""Convert ctags dict to list of ctags.
"""
for ctags in __ctags:
yield from converters.ctags_to_db(ctags)
db.mktemp_content_ctags(cur)
db.copy_to(
list(_convert_ctags(ctags)),
tblname="tmp_content_ctags",
columns=["id", "name", "kind", "line", "lang", "indexer_configuration_id"],
cur=cur,
)
count = db.content_ctags_add_from_temp(conflict_update, cur)
return {"content_ctags:add": count}
@timed
@db_transaction_generator()
def content_ctags_search(
self, expression, limit=10, last_sha1=None, db=None, cur=None
):
for obj in db.content_ctags_search(expression, last_sha1, limit, cur=cur):
yield converters.db_to_ctags(dict(zip(db.content_ctags_cols, obj)))
@timed
@db_transaction_generator()
def content_fossology_license_get(self, ids, db=None, cur=None):
d = defaultdict(list)
for c in db.content_fossology_license_get_from_list(ids, cur):
license = dict(zip(db.content_fossology_license_cols, c))
id_ = license["id"]
d[id_].append(converters.db_to_fossology_license(license))
for id_, facts in d.items():
yield {id_: facts}
@timed
@process_metrics
@db_transaction()
def content_fossology_license_add(
self, licenses: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(licenses)
licenses.sort(key=lambda m: m["id"])
db.mktemp_content_fossology_license(cur)
db.copy_to(
(
{
"id": sha1["id"],
"indexer_configuration_id": sha1["indexer_configuration_id"],
"license": license,
}
for sha1 in licenses
for license in sha1["licenses"]
),
tblname="tmp_content_fossology_license",
columns=["id", "license", "indexer_configuration_id"],
cur=cur,
)
count = db.content_fossology_license_add_from_temp(conflict_update, cur)
return {"content_fossology_license:add": count}
@timed
@db_transaction()
def content_fossology_license_get_partition(
self,
indexer_configuration_id: int,
partition_id: int,
nb_partitions: int,
page_token: Optional[str] = None,
limit: int = 1000,
db=None,
cur=None,
) -> PagedResult[Sha1]:
return self.get_partition(
"fossology_license",
indexer_configuration_id,
partition_id,
nb_partitions,
page_token=page_token,
limit=limit,
with_textual_data=True,
db=db,
cur=cur,
)
@timed
@db_transaction_generator()
def content_metadata_missing(self, metadata, db=None, cur=None):
for obj in db.content_metadata_missing_from_list(metadata, cur):
yield obj[0]
@timed
@db_transaction_generator()
def content_metadata_get(self, ids, db=None, cur=None):
for c in db.content_metadata_get_from_list(ids, cur):
yield converters.db_to_metadata(dict(zip(db.content_metadata_cols, c)))
@timed
@process_metrics
@db_transaction()
def content_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(metadata)
metadata.sort(key=lambda m: m["id"])
db.mktemp_content_metadata(cur)
db.copy_to(
metadata,
"tmp_content_metadata",
["id", "metadata", "indexer_configuration_id"],
cur,
)
count = db.content_metadata_add_from_temp(conflict_update, cur)
return {
"content_metadata:add": count,
}
@timed
@db_transaction_generator()
def revision_intrinsic_metadata_missing(self, metadata, db=None, cur=None):
for obj in db.revision_intrinsic_metadata_missing_from_list(metadata, cur):
yield obj[0]
@timed
@db_transaction_generator()
def revision_intrinsic_metadata_get(self, ids, db=None, cur=None):
for c in db.revision_intrinsic_metadata_get_from_list(ids, cur):
yield converters.db_to_metadata(
dict(zip(db.revision_intrinsic_metadata_cols, c))
)
@timed
@process_metrics
@db_transaction()
def revision_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(metadata)
metadata.sort(key=lambda m: m["id"])
db.mktemp_revision_intrinsic_metadata(cur)
db.copy_to(
metadata,
"tmp_revision_intrinsic_metadata",
["id", "metadata", "mappings", "indexer_configuration_id"],
cur,
)
count = db.revision_intrinsic_metadata_add_from_temp(conflict_update, cur)
return {
"revision_intrinsic_metadata:add": count,
}
@timed
@process_metrics
@db_transaction()
def revision_intrinsic_metadata_delete(
self, entries: List[Dict], db=None, cur=None
) -> Dict:
count = db.revision_intrinsic_metadata_delete(entries, cur)
return {"revision_intrinsic_metadata:del": count}
@timed
@db_transaction_generator()
def origin_intrinsic_metadata_get(self, ids, db=None, cur=None):
for c in db.origin_intrinsic_metadata_get_from_list(ids, cur):
yield converters.db_to_metadata(
dict(zip(db.origin_intrinsic_metadata_cols, c))
)
@timed
@process_metrics
@db_transaction()
def origin_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False, db=None, cur=None
) -> Dict[str, int]:
check_id_duplicates(metadata)
metadata.sort(key=lambda m: m["id"])
db.mktemp_origin_intrinsic_metadata(cur)
db.copy_to(
metadata,
"tmp_origin_intrinsic_metadata",
["id", "metadata", "indexer_configuration_id", "from_revision", "mappings"],
cur,
)
count = db.origin_intrinsic_metadata_add_from_temp(conflict_update, cur)
return {
"origin_intrinsic_metadata:add": count,
}
@timed
@process_metrics
@db_transaction()
def origin_intrinsic_metadata_delete(
self, entries: List[Dict], db=None, cur=None
) -> Dict:
count = db.origin_intrinsic_metadata_delete(entries, cur)
return {
"origin_intrinsic_metadata:del": count,
}
@timed
@db_transaction_generator()
def origin_intrinsic_metadata_search_fulltext(
self, conjunction, limit=100, db=None, cur=None
):
for c in db.origin_intrinsic_metadata_search_fulltext(
conjunction, limit=limit, cur=cur
):
yield converters.db_to_metadata(
dict(zip(db.origin_intrinsic_metadata_cols, c))
)
@timed
@db_transaction()
def origin_intrinsic_metadata_search_by_producer(
self,
page_token="",
limit=100,
ids_only=False,
mappings=None,
tool_ids=None,
db=None,
cur=None,
):
assert isinstance(page_token, str)
# we go to limit+1 to check whether we should add next_page_token in
# the response
res = db.origin_intrinsic_metadata_search_by_producer(
page_token, limit + 1, ids_only, mappings, tool_ids, cur
)
result = {}
if ids_only:
result["origins"] = [origin for (origin,) in res]
if len(result["origins"]) > limit:
result["origins"][limit:] = []
result["next_page_token"] = result["origins"][-1]
else:
result["origins"] = [
converters.db_to_metadata(
dict(zip(db.origin_intrinsic_metadata_cols, c))
)
for c in res
]
if len(result["origins"]) > limit:
result["origins"][limit:] = []
result["next_page_token"] = result["origins"][-1]["id"]
return result
@timed
@db_transaction()
def origin_intrinsic_metadata_stats(self, db=None, cur=None):
mapping_names = [m for m in MAPPING_NAMES]
select_parts = []
# Count rows for each mapping
for mapping_name in mapping_names:
select_parts.append(
(
"sum(case when (mappings @> ARRAY['%s']) "
" then 1 else 0 end)"
)
% mapping_name
)
# Total
select_parts.append("sum(1)")
# Rows whose metadata has at least one key that is not '@context'
select_parts.append(
"sum(case when ('{}'::jsonb @> (metadata - '@context')) "
" then 0 else 1 end)"
)
cur.execute(
"select " + ", ".join(select_parts) + " from origin_intrinsic_metadata"
)
results = dict(zip(mapping_names + ["total", "non_empty"], cur.fetchone()))
return {
"total": results.pop("total"),
"non_empty": results.pop("non_empty"),
"per_mapping": results,
}
@timed
@db_transaction_generator()
def indexer_configuration_add(self, tools, db=None, cur=None):
db.mktemp_indexer_configuration(cur)
db.copy_to(
tools,
"tmp_indexer_configuration",
["tool_name", "tool_version", "tool_configuration"],
cur,
)
tools = db.indexer_configuration_add_from_temp(cur)
count = 0
for line in tools:
yield dict(zip(db.indexer_configuration_cols, line))
count += 1
send_metric(
"indexer_configuration:add", count, method_name="indexer_configuration_add"
)
@timed
@db_transaction()
def indexer_configuration_get(self, tool, db=None, cur=None):
tool_conf = tool["tool_configuration"]
if isinstance(tool_conf, dict):
tool_conf = json.dumps(tool_conf)
idx = db.indexer_configuration_get(
tool["tool_name"], tool["tool_version"], tool_conf
)
if not idx:
return None
return dict(zip(db.indexer_configuration_cols, idx))
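
Note: get_partition() above returns a PagedResult and signals the end of a partition with next_page_token=None, which the tests drive via swh.core.api.classes.stream_results. A hedged usage sketch for paging through one mimetype partition (idx_storage and tool_id are assumed, not taken from the diff):

    def iter_partition(idx_storage, tool_id: int, partition_id: int, nb_partitions: int):
        """Yield every sha1 of one partition, following next_page_token until exhausted."""
        page_token = None
        while True:
            page = idx_storage.content_mimetype_get_partition(
                tool_id, partition_id, nb_partitions, page_token=page_token, limit=1000
            )
            yield from page.results
            if page.next_page_token is None:   # no more data to fetch
                return
            page_token = page.next_page_token
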
diff --git a/swh/indexer/storage/in_memory.py b/swh/indexer/storage/in_memory.py
index a70f212..84576ad 100644
--- a/swh/indexer/storage/in_memory.py
+++ b/swh/indexer/storage/in_memory.py
@@ -1,496 +1,496 @@
# Copyright (C) 2018-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import itertools
import json
import operator
import math
import re
from collections import defaultdict, Counter
from typing import Any, Dict, List, Optional
+from swh.core.collections import SortedList
from swh.model.model import SHA1_SIZE
from swh.model.hashutil import hash_to_hex, hash_to_bytes
from swh.storage.utils import get_partition_bounds_bytes
-from swh.storage.in_memory import SortedList
from . import MAPPING_NAMES, check_id_duplicates
from .exc import IndexerStorageArgumentException
from .interface import PagedResult, Sha1
SHA1_DIGEST_SIZE = 160
def _transform_tool(tool):
return {
"id": tool["id"],
"name": tool["tool_name"],
"version": tool["tool_version"],
"configuration": tool["tool_configuration"],
}
def check_id_types(data: List[Dict[str, Any]]):
"""Checks all elements of the list have an 'id' whose type is 'bytes'."""
if not all(isinstance(item.get("id"), bytes) for item in data):
raise IndexerStorageArgumentException("identifiers must be bytes.")
class SubStorage:
"""Implements common missing/get/add logic for each indexer type."""
def __init__(self, tools):
self._tools = tools
self._sorted_ids = SortedList[bytes, bytes]()
self._data = {} # map (id_, tool_id) -> metadata_dict
self._tools_per_id = defaultdict(set) # map id_ -> Set[tool_id]
def missing(self, ids):
"""List data missing from storage.
Args:
data (iterable): dictionaries with keys:
- **id** (bytes): sha1 identifier
- **indexer_configuration_id** (int): tool used to compute
the results
Yields:
missing sha1s
"""
for id_ in ids:
tool_id = id_["indexer_configuration_id"]
id_ = id_["id"]
if tool_id not in self._tools_per_id.get(id_, set()):
yield id_
def get(self, ids):
"""Retrieve data per id.
Args:
ids (iterable): sha1 checksums
Yields:
dict: dictionaries with the following keys:
- **id** (bytes)
- **tool** (dict): tool used to compute metadata
- arbitrary data (as provided to `add`)
"""
for id_ in ids:
for tool_id in self._tools_per_id.get(id_, set()):
key = (id_, tool_id)
yield {
"id": id_,
"tool": _transform_tool(self._tools[tool_id]),
**self._data[key],
}
def get_all(self):
yield from self.get(self._sorted_ids)
def get_partition(
self,
indexer_configuration_id: int,
partition_id: int,
nb_partitions: int,
page_token: Optional[str] = None,
limit: int = 1000,
) -> PagedResult[Sha1]:
"""Retrieve ids of content with `indexer_type` within partition partition_id
bound by limit.
Args:
**indexer_type**: Type of data content to index (mimetype, language, etc...)
**indexer_configuration_id**: The tool used to index data
**partition_id**: index of the partition to fetch
**nb_partitions**: total number of partitions to split into
**page_token**: opaque token used for pagination
**limit**: Limit result (default to 1000)
**with_textual_data** (bool): Deal with only textual content (True) or all
content (all contents by defaults, False)
Raises:
IndexerStorageArgumentException for;
- limit to None
- wrong indexer_type provided
Returns:
PagedResult of Sha1. If next_page_token is None, there is no more data to
fetch
"""
if limit is None:
raise IndexerStorageArgumentException("limit should not be None")
(start, end) = get_partition_bounds_bytes(
partition_id, nb_partitions, SHA1_SIZE
)
if page_token:
start = hash_to_bytes(page_token)
if end is None:
end = b"\xff" * SHA1_SIZE
next_page_token: Optional[str] = None
ids: List[Sha1] = []
sha1s = (sha1 for sha1 in self._sorted_ids.iter_from(start))
for counter, sha1 in enumerate(sha1s):
if sha1 > end:
break
if counter >= limit:
next_page_token = hash_to_hex(sha1)
break
ids.append(sha1)
assert len(ids) <= limit
return PagedResult(results=ids, next_page_token=next_page_token)
def add(self, data: List[Dict], conflict_update: bool) -> int:
"""Add data not present in storage.
Args:
data (iterable): dictionaries with keys:
- **id**: sha1
- **indexer_configuration_id**: tool used to compute the
results
- arbitrary data
conflict_update (bool): Flag to determine if we want to overwrite
(true) or skip duplicates (false)
"""
data = list(data)
check_id_duplicates(data)
count = 0
for item in data:
item = item.copy()
tool_id = item.pop("indexer_configuration_id")
id_ = item.pop("id")
data_item = item
if not conflict_update and tool_id in self._tools_per_id.get(id_, set()):
# Duplicate, should not be updated
continue
key = (id_, tool_id)
self._data[key] = data_item
self._tools_per_id[id_].add(tool_id)
count += 1
if id_ not in self._sorted_ids:
self._sorted_ids.add(id_)
return count
def add_merge(
self, new_data: List[Dict], conflict_update: bool, merged_key: str
) -> int:
added = 0
all_subitems: List
for new_item in new_data:
id_ = new_item["id"]
tool_id = new_item["indexer_configuration_id"]
if conflict_update:
all_subitems = []
else:
existing = list(self.get([id_]))
all_subitems = [
old_subitem
for existing_item in existing
if existing_item["tool"]["id"] == tool_id
for old_subitem in existing_item[merged_key]
]
for new_subitem in new_item[merged_key]:
if new_subitem not in all_subitems:
all_subitems.append(new_subitem)
added += self.add(
[
{
"id": id_,
"indexer_configuration_id": tool_id,
merged_key: all_subitems,
}
],
conflict_update=True,
)
if id_ not in self._sorted_ids:
self._sorted_ids.add(id_)
return added
def delete(self, entries: List[Dict]) -> int:
"""Delete entries and return the number of entries deleted.
"""
deleted = 0
for entry in entries:
(id_, tool_id) = (entry["id"], entry["indexer_configuration_id"])
key = (id_, tool_id)
if tool_id in self._tools_per_id[id_]:
self._tools_per_id[id_].remove(tool_id)
if key in self._data:
deleted += 1
del self._data[key]
return deleted
class IndexerStorage:
"""In-memory SWH indexer storage."""
def __init__(self):
self._tools = {}
self._mimetypes = SubStorage(self._tools)
self._languages = SubStorage(self._tools)
self._content_ctags = SubStorage(self._tools)
self._licenses = SubStorage(self._tools)
self._content_metadata = SubStorage(self._tools)
self._revision_intrinsic_metadata = SubStorage(self._tools)
self._origin_intrinsic_metadata = SubStorage(self._tools)
def check_config(self, *, check_write):
return True
def content_mimetype_missing(self, mimetypes):
yield from self._mimetypes.missing(mimetypes)
def content_mimetype_get_partition(
self,
indexer_configuration_id: int,
partition_id: int,
nb_partitions: int,
page_token: Optional[str] = None,
limit: int = 1000,
) -> PagedResult[Sha1]:
return self._mimetypes.get_partition(
indexer_configuration_id, partition_id, nb_partitions, page_token, limit
)
def content_mimetype_add(
self, mimetypes: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(mimetypes)
added = self._mimetypes.add(mimetypes, conflict_update)
return {"content_mimetype:add": added}
def content_mimetype_get(self, ids):
yield from self._mimetypes.get(ids)
def content_language_missing(self, languages):
yield from self._languages.missing(languages)
def content_language_get(self, ids):
yield from self._languages.get(ids)
def content_language_add(
self, languages: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(languages)
added = self._languages.add(languages, conflict_update)
return {"content_language:add": added}
def content_ctags_missing(self, ctags):
yield from self._content_ctags.missing(ctags)
def content_ctags_get(self, ids):
for item in self._content_ctags.get(ids):
for item_ctags_item in item["ctags"]:
yield {"id": item["id"], "tool": item["tool"], **item_ctags_item}
def content_ctags_add(
self, ctags: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(ctags)
added = self._content_ctags.add_merge(ctags, conflict_update, "ctags")
return {"content_ctags:add": added}
def content_ctags_search(self, expression, limit=10, last_sha1=None):
nb_matches = 0
for ((id_, tool_id), item) in sorted(self._content_ctags._data.items()):
if id_ <= (last_sha1 or bytes(0 for _ in range(SHA1_DIGEST_SIZE))):
continue
for ctags_item in item["ctags"]:
if ctags_item["name"] != expression:
continue
nb_matches += 1
yield {
"id": id_,
"tool": _transform_tool(self._tools[tool_id]),
**ctags_item,
}
if nb_matches >= limit:
return
def content_fossology_license_get(self, ids):
# Rewrites the output of SubStorage.get from the old format to
# the new one. SubStorage.get should be updated once all other
# *_get methods use the new format.
# See: https://forge.softwareheritage.org/T1433
res = {}
for d in self._licenses.get(ids):
res.setdefault(d.pop("id"), []).append(d)
for (id_, facts) in res.items():
yield {id_: facts}
def content_fossology_license_add(
self, licenses: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(licenses)
added = self._licenses.add_merge(licenses, conflict_update, "licenses")
return {"fossology_license_add:add": added}
def content_fossology_license_get_partition(
self,
indexer_configuration_id: int,
partition_id: int,
nb_partitions: int,
page_token: Optional[str] = None,
limit: int = 1000,
) -> PagedResult[Sha1]:
return self._licenses.get_partition(
indexer_configuration_id, partition_id, nb_partitions, page_token, limit
)
def content_metadata_missing(self, metadata):
yield from self._content_metadata.missing(metadata)
def content_metadata_get(self, ids):
yield from self._content_metadata.get(ids)
def content_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(metadata)
added = self._content_metadata.add(metadata, conflict_update)
return {"content_metadata:add": added}
def revision_intrinsic_metadata_missing(self, metadata):
yield from self._revision_intrinsic_metadata.missing(metadata)
def revision_intrinsic_metadata_get(self, ids):
yield from self._revision_intrinsic_metadata.get(ids)
def revision_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
check_id_types(metadata)
added = self._revision_intrinsic_metadata.add(metadata, conflict_update)
return {"revision_intrinsic_metadata:add": added}
def revision_intrinsic_metadata_delete(self, entries: List[Dict]) -> Dict:
deleted = self._revision_intrinsic_metadata.delete(entries)
return {"revision_intrinsic_metadata:del": deleted}
def origin_intrinsic_metadata_get(self, ids):
yield from self._origin_intrinsic_metadata.get(ids)
def origin_intrinsic_metadata_add(
self, metadata: List[Dict], conflict_update: bool = False
) -> Dict[str, int]:
added = self._origin_intrinsic_metadata.add(metadata, conflict_update)
return {"origin_intrinsic_metadata:add": added}
def origin_intrinsic_metadata_delete(self, entries: List[Dict]) -> Dict:
deleted = self._origin_intrinsic_metadata.delete(entries)
return {"origin_intrinsic_metadata:del": deleted}
def origin_intrinsic_metadata_search_fulltext(self, conjunction, limit=100):
# A very crude fulltext search implementation, but that's enough
# to work on English metadata
tokens_re = re.compile("[a-zA-Z0-9]+")
search_tokens = list(itertools.chain(*map(tokens_re.findall, conjunction)))
def rank(data):
# Tokenize the metadata
text = json.dumps(data["metadata"])
text_tokens = tokens_re.findall(text)
text_token_occurences = Counter(text_tokens)
# Count the number of occurrences of search tokens in the text
score = 0
for search_token in search_tokens:
if text_token_occurences[search_token] == 0:
# Search token is not in the text.
return 0
score += text_token_occurences[search_token]
# Normalize according to the text's length
return score / math.log(len(text_tokens))
results = [
(rank(data), data) for data in self._origin_intrinsic_metadata.get_all()
]
results = [(rank_, data) for (rank_, data) in results if rank_ > 0]
results.sort(
key=operator.itemgetter(0), reverse=True # Don't try to order 'data'
)
for (rank_, result) in results[:limit]:
yield result
def origin_intrinsic_metadata_search_by_producer(
self, page_token="", limit=100, ids_only=False, mappings=None, tool_ids=None
):
assert isinstance(page_token, str)
nb_results = 0
if mappings is not None:
mappings = frozenset(mappings)
if tool_ids is not None:
tool_ids = frozenset(tool_ids)
origins = []
# we go to limit+1 to check whether we should add next_page_token in
# the response
for entry in self._origin_intrinsic_metadata.get_all():
if entry["id"] <= page_token:
continue
if nb_results >= (limit + 1):
break
if mappings is not None and mappings.isdisjoint(entry["mappings"]):
continue
if tool_ids is not None and entry["tool"]["id"] not in tool_ids:
continue
origins.append(entry)
nb_results += 1
result = {}
if len(origins) > limit:
origins = origins[:limit]
result["next_page_token"] = origins[-1]["id"]
if ids_only:
origins = [origin["id"] for origin in origins]
result["origins"] = origins
return result
def origin_intrinsic_metadata_stats(self):
mapping_count = {m: 0 for m in MAPPING_NAMES}
total = non_empty = 0
for data in self._origin_intrinsic_metadata.get_all():
total += 1
if set(data["metadata"]) - {"@context"}:
non_empty += 1
for mapping in data["mappings"]:
mapping_count[mapping] += 1
return {"per_mapping": mapping_count, "total": total, "non_empty": non_empty}
def indexer_configuration_add(self, tools):
inserted = []
for tool in tools:
tool = tool.copy()
id_ = self._tool_key(tool)
tool["id"] = id_
self._tools[id_] = tool
inserted.append(tool)
return inserted
def indexer_configuration_get(self, tool):
return self._tools.get(self._tool_key(tool))
def _tool_key(self, tool):
return hash(
(
tool["tool_name"],
tool["tool_version"],
json.dumps(tool["tool_configuration"], sort_keys=True),
)
)
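
Note: the in-memory backend above is self-contained, so the whole add/get round trip fits in a few lines. A sketch with made-up field values (the sha1 is borrowed from the test fixtures in the next file):

    from swh.indexer.storage.in_memory import IndexerStorage

    storage = IndexerStorage()
    tool = storage.indexer_configuration_add(
        [{"tool_name": "file", "tool_version": "5.38", "tool_configuration": {}}]
    )[0]

    sha1 = bytes.fromhex("01c9379dfc33803963d07c1ccc748d3fe4c96bb5")
    storage.content_mimetype_add(
        [{
            "id": sha1,                      # ids must be bytes (check_id_types)
            "mimetype": "text/plain",
            "encoding": "us-ascii",
            "indexer_configuration_id": tool["id"],
        }]
    )   # -> {'content_mimetype:add': 1}

    rows = list(storage.content_mimetype_get([sha1]))
    # each row carries the stored fields plus the expanded 'tool' description
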
diff --git a/swh/indexer/tests/utils.py b/swh/indexer/tests/utils.py
index 04a34db..880d9bd 100644
--- a/swh/indexer/tests/utils.py
+++ b/swh/indexer/tests/utils.py
@@ -1,770 +1,763 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import abc
import functools
from typing import Dict, Any
import unittest
from hypothesis import strategies
from swh.core.api.classes import stream_results
from swh.model import hashutil
from swh.model.hashutil import hash_to_bytes
from swh.model.model import (
Content,
Directory,
DirectoryEntry,
Origin,
OriginVisit,
OriginVisitStatus,
Person,
Revision,
RevisionType,
- SHA1_SIZE,
Snapshot,
SnapshotBranch,
TargetType,
Timestamp,
TimestampWithTimezone,
)
-from swh.storage.utils import now, get_partition_bounds_bytes
+from swh.storage.utils import now
from swh.indexer.storage import INDEXER_CFG_KEY
BASE_TEST_CONFIG: Dict[str, Dict[str, Any]] = {
"storage": {"cls": "memory"},
"objstorage": {"cls": "memory", "args": {},},
INDEXER_CFG_KEY: {"cls": "memory", "args": {},},
}
ORIGINS = [
Origin(url="https://github.com/SoftwareHeritage/swh-storage"),
Origin(url="rsync://ftp.gnu.org/gnu/3dldf"),
Origin(url="https://forge.softwareheritage.org/source/jesuisgpl/"),
Origin(url="https://pypi.org/project/limnoria/"),
Origin(url="http://0-512-md.googlecode.com/svn/"),
Origin(url="https://github.com/librariesio/yarn-parser"),
Origin(url="https://github.com/librariesio/yarn-parser.git"),
]
ORIGIN_VISITS = [
{"type": "git", "origin": ORIGINS[0].url},
{"type": "ftp", "origin": ORIGINS[1].url},
{"type": "deposit", "origin": ORIGINS[2].url},
{"type": "pypi", "origin": ORIGINS[3].url},
{"type": "svn", "origin": ORIGINS[4].url},
{"type": "git", "origin": ORIGINS[5].url},
{"type": "git", "origin": ORIGINS[6].url},
]
DIRECTORY = Directory(
id=hash_to_bytes("34f335a750111ca0a8b64d8034faec9eedc396be"),
entries=(
DirectoryEntry(
name=b"index.js",
type="file",
target=hash_to_bytes("01c9379dfc33803963d07c1ccc748d3fe4c96bb5"),
perms=0o100644,
),
DirectoryEntry(
name=b"package.json",
type="file",
target=hash_to_bytes("26a9f72a7c87cc9205725cfd879f514ff4f3d8d5"),
perms=0o100644,
),
DirectoryEntry(
name=b".github",
type="dir",
target=Directory(entries=()).id,
perms=0o040000,
),
),
)
DIRECTORY2 = Directory(
id=b"\xf8zz\xa1\x12`<1$\xfav\xf9\x01\xfd5\x85F`\xf2\xb6",
entries=(
DirectoryEntry(
name=b"package.json",
type="file",
target=hash_to_bytes("f5305243b3ce7ef8dc864ebc73794da304025beb"),
perms=0o100644,
),
),
)
REVISION = Revision(
id=hash_to_bytes("c6201cb1b9b9df9a7542f9665c3b5dfab85e9775"),
message=b"Improve search functionality",
author=Person(
name=b"Andrew Nesbitt",
fullname=b"Andrew Nesbitt <andrewnez@gmail.com>",
email=b"andrewnez@gmail.com",
),
committer=Person(
name=b"Andrew Nesbitt",
fullname=b"Andrew Nesbitt <andrewnez@gmail.com>",
email=b"andrewnez@gmail.com",
),
committer_date=TimestampWithTimezone(
timestamp=Timestamp(seconds=1380883849, microseconds=0,),
offset=120,
negative_utc=False,
),
type=RevisionType.GIT,
synthetic=False,
date=TimestampWithTimezone(
timestamp=Timestamp(seconds=1487596456, microseconds=0,),
offset=0,
negative_utc=False,
),
directory=DIRECTORY2.id,
parents=(),
)
REVISIONS = [REVISION]
SNAPSHOTS = [
Snapshot(
id=hash_to_bytes("a50fde72265343b7d28cecf6db20d98a81d21965"),
branches={
b"refs/heads/add-revision-origin-cache": SnapshotBranch(
target=b'L[\xce\x1c\x88\x8eF\t\xf1"\x19\x1e\xfb\xc0s\xe7/\xe9l\x1e',
target_type=TargetType.REVISION,
),
b"refs/head/master": SnapshotBranch(
target=b"8K\x12\x00d\x03\xcc\xe4]bS\xe3\x8f{\xd7}\xac\xefrm",
target_type=TargetType.REVISION,
),
b"HEAD": SnapshotBranch(
target=b"refs/head/master", target_type=TargetType.ALIAS
),
b"refs/tags/v0.0.103": SnapshotBranch(
target=b'\xb6"Im{\xfdLb\xb0\x94N\xea\x96m\x13x\x88+\x0f\xdd',
target_type=TargetType.RELEASE,
),
},
),
Snapshot(
id=hash_to_bytes("2c67f69a416bca4e1f3fcd848c588fab88ad0642"),
branches={
b"3DLDF-1.1.4.tar.gz": SnapshotBranch(
target=b'dJ\xfb\x1c\x91\xf4\x82B%]6\xa2\x90|\xd3\xfc"G\x99\x11',
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.2.tar.gz": SnapshotBranch(
target=b"\xb6\x0e\xe7\x9e9\xac\xaa\x19\x9e=\xd1\xc5\x00\\\xc6\xfc\xe0\xa6\xb4V", # noqa
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.3-examples.tar.gz": SnapshotBranch(
target=b"!H\x19\xc0\xee\x82-\x12F1\xbd\x97\xfe\xadZ\x80\x80\xc1\x83\xff", # noqa
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.3.tar.gz": SnapshotBranch(
target=b"\x8e\xa9\x8e/\xea}\x9feF\xf4\x9f\xfd\xee\xcc\x1a\xb4`\x8c\x8by", # noqa
target_type=TargetType.REVISION,
),
b"3DLDF-2.0.tar.gz": SnapshotBranch(
target=b"F6*\xff(?\x19a\xef\xb6\xc2\x1fv$S\xe3G\xd3\xd1m",
target_type=TargetType.REVISION,
),
},
),
Snapshot(
id=hash_to_bytes("68c0d26104d47e278dd6be07ed61fafb561d0d20"),
branches={
b"master": SnapshotBranch(
target=b"\xe7n\xa4\x9c\x9f\xfb\xb7\xf76\x11\x08{\xa6\xe9\x99\xb1\x9e]q\xeb", # noqa
target_type=TargetType.REVISION,
)
},
),
Snapshot(
id=hash_to_bytes("f255245269e15fc99d284affd79f766668de0b67"),
branches={
b"HEAD": SnapshotBranch(
target=b"releases/2018.09.09", target_type=TargetType.ALIAS
),
b"releases/2018.09.01": SnapshotBranch(
target=b"<\xee1(\xe8\x8d_\xc1\xc9\xa6rT\xf1\x1d\xbb\xdfF\xfdw\xcf",
target_type=TargetType.REVISION,
),
b"releases/2018.09.09": SnapshotBranch(
target=b"\x83\xb9\xb6\xc7\x05\xb1%\xd0\xfem\xd8kA\x10\x9d\xc5\xfa2\xf8t", # noqa
target_type=TargetType.REVISION,
),
},
),
Snapshot(
id=hash_to_bytes("a1a28c0ab387a8f9e0618cb705eab81fc448f473"),
branches={
b"master": SnapshotBranch(
target=b"\xe4?r\xe1,\x88\xab\xec\xe7\x9a\x87\xb8\xc9\xad#.\x1bw=\x18",
target_type=TargetType.REVISION,
)
},
),
Snapshot(
id=hash_to_bytes("bb4fd3a836930ce629d912864319637040ff3040"),
branches={
b"HEAD": SnapshotBranch(
target=REVISION.id, target_type=TargetType.REVISION,
)
},
),
Snapshot(
id=hash_to_bytes("bb4fd3a836930ce629d912864319637040ff3040"),
branches={
b"HEAD": SnapshotBranch(
target=REVISION.id, target_type=TargetType.REVISION,
)
},
),
]
SHA1_TO_LICENSES = {
"01c9379dfc33803963d07c1ccc748d3fe4c96bb5": ["GPL"],
"02fb2c89e14f7fab46701478c83779c7beb7b069": ["Apache2.0"],
"103bc087db1d26afc3a0283f38663d081e9b01e6": ["MIT"],
"688a5ef812c53907562fe379d4b3851e69c7cb15": ["AGPL"],
"da39a3ee5e6b4b0d3255bfef95601890afd80709": [],
}
SHA1_TO_CTAGS = {
"01c9379dfc33803963d07c1ccc748d3fe4c96bb5": [
{"name": "foo", "kind": "str", "line": 10, "lang": "bar",}
],
"d4c647f0fc257591cc9ba1722484229780d1c607": [
{"name": "let", "kind": "int", "line": 100, "lang": "haskell",}
],
"688a5ef812c53907562fe379d4b3851e69c7cb15": [
{"name": "symbol", "kind": "float", "line": 99, "lang": "python",}
],
}
OBJ_STORAGE_DATA = {
"01c9379dfc33803963d07c1ccc748d3fe4c96bb5": b"this is some text",
"688a5ef812c53907562fe379d4b3851e69c7cb15": b"another text",
"8986af901dd2043044ce8f0d8fc039153641cf17": b"yet another text",
"02fb2c89e14f7fab46701478c83779c7beb7b069": b"""
import unittest
import logging
from swh.indexer.mimetype import MimetypeIndexer
from swh.indexer.tests.test_utils import MockObjStorage
class MockStorage():
def content_mimetype_add(self, mimetypes):
self.state = mimetypes
self.conflict_update = conflict_update
def indexer_configuration_add(self, tools):
return [{
'id': 10,
}]
""",
"103bc087db1d26afc3a0283f38663d081e9b01e6": b"""
#ifndef __AVL__
#define __AVL__
typedef struct _avl_tree avl_tree;
typedef struct _data_t {
int content;
} data_t;
""",
"93666f74f1cf635c8c8ac118879da6ec5623c410": b"""
(should 'pygments (recognize 'lisp 'easily))
""",
"26a9f72a7c87cc9205725cfd879f514ff4f3d8d5": b"""
{
"name": "test_metadata",
"version": "0.0.1",
"description": "Simple package.json test for indexer",
"repository": {
"type": "git",
"url": "https://github.com/moranegg/metadata_test"
}
}
""",
"d4c647f0fc257591cc9ba1722484229780d1c607": b"""
{
"version": "5.0.3",
"name": "npm",
"description": "a package manager for JavaScript",
"keywords": [
"install",
"modules",
"package manager",
"package.json"
],
"preferGlobal": true,
"config": {
"publishtest": false
},
"homepage": "https://docs.npmjs.com/",
"author": "Isaac Z. Schlueter <i@izs.me> (http://blog.izs.me)",
"repository": {
"type": "git",
"url": "https://github.com/npm/npm"
},
"bugs": {
"url": "https://github.com/npm/npm/issues"
},
"dependencies": {
"JSONStream": "~1.3.1",
"abbrev": "~1.1.0",
"ansi-regex": "~2.1.1",
"ansicolors": "~0.3.2",
"ansistyles": "~0.1.3"
},
"devDependencies": {
"tacks": "~1.2.6",
"tap": "~10.3.2"
},
"license": "Artistic-2.0"
}
""",
"a7ab314d8a11d2c93e3dcf528ca294e7b431c449": b"""
""",
"da39a3ee5e6b4b0d3255bfef95601890afd80709": b"",
# was 626364 / b'bcd'
"e3e40fee6ff8a52f06c3b428bfe7c0ed2ef56e92": b"unimportant content for bcd",
# was 636465 / b'cde' now yarn-parser package.json
"f5305243b3ce7ef8dc864ebc73794da304025beb": b"""
{
"name": "yarn-parser",
"version": "1.0.0",
"description": "Tiny web service for parsing yarn.lock files",
"main": "index.js",
"scripts": {
"start": "node index.js",
"test": "mocha"
},
"engines": {
"node": "9.8.0"
},
"repository": {
"type": "git",
"url": "git+https://github.com/librariesio/yarn-parser.git"
},
"keywords": [
"yarn",
"parse",
"lock",
"dependencies"
],
"author": "Andrew Nesbitt",
"license": "AGPL-3.0",
"bugs": {
"url": "https://github.com/librariesio/yarn-parser/issues"
},
"homepage": "https://github.com/librariesio/yarn-parser#readme",
"dependencies": {
"@yarnpkg/lockfile": "^1.0.0",
"body-parser": "^1.15.2",
"express": "^4.14.0"
},
"devDependencies": {
"chai": "^4.1.2",
"mocha": "^5.2.0",
"request": "^2.87.0",
"test": "^0.6.0"
}
}
""",
}
YARN_PARSER_METADATA = {
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"url": "https://github.com/librariesio/yarn-parser#readme",
"codeRepository": "git+git+https://github.com/librariesio/yarn-parser.git",
"author": [{"type": "Person", "name": "Andrew Nesbitt"}],
"license": "https://spdx.org/licenses/AGPL-3.0",
"version": "1.0.0",
"description": "Tiny web service for parsing yarn.lock files",
"issueTracker": "https://github.com/librariesio/yarn-parser/issues",
"name": "yarn-parser",
"keywords": ["yarn", "parse", "lock", "dependencies"],
"type": "SoftwareSourceCode",
}
json_dict_keys = strategies.one_of(
strategies.characters(),
strategies.just("type"),
strategies.just("url"),
strategies.just("name"),
strategies.just("email"),
strategies.just("@id"),
strategies.just("@context"),
strategies.just("repository"),
strategies.just("license"),
strategies.just("repositories"),
strategies.just("licenses"),
)
"""Hypothesis strategy that generates strings, with an emphasis on those
that are often used as dictionary keys in metadata files."""
generic_json_document = strategies.recursive(
strategies.none()
| strategies.booleans()
| strategies.floats()
| strategies.characters(),
lambda children: (
strategies.lists(children, min_size=1)
| strategies.dictionaries(json_dict_keys, children, min_size=1)
),
)
"""Hypothesis strategy that generates possible values for values of JSON
metadata files."""
def json_document_strategy(keys=None):
"""Generates an hypothesis strategy that generates metadata files
for a JSON-based format that uses the given keys."""
if keys is None:
keys = strategies.characters()
else:
keys = strategies.one_of(map(strategies.just, keys))
return strategies.dictionaries(keys, generic_json_document, min_size=1)
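# Illustrative sketch (not part of this diff): json_document_strategy() is
# designed to be plugged into hypothesis' @given decorator so that a mapping
# under test receives many randomly generated metadata documents. The test
# name and the key list below are hypothetical.
from hypothesis import given

@given(json_document_strategy(keys=["name", "version", "license"]))
def test_json_strategy_generates_dicts(doc):
    # every generated document is a non-empty dict keyed by the given keys
    assert isinstance(doc, dict) and doc
    assert set(doc) <= {"name", "version", "license"}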
def _tree_to_xml(root, xmlns, data):
def encode(s):
"Skips unpaired surrogates generated by json_document_strategy"
return s.encode("utf8", "replace")
def to_xml(data, indent=b" "):
if data is None:
return b""
elif isinstance(data, (bool, str, int, float)):
return indent + encode(str(data))
elif isinstance(data, list):
return b"\n".join(to_xml(v, indent=indent) for v in data)
elif isinstance(data, dict):
lines = []
for (key, value) in data.items():
lines.append(indent + encode("<{}>".format(key)))
lines.append(to_xml(value, indent=indent + b" "))
lines.append(indent + encode("</{}>".format(key)))
return b"\n".join(lines)
else:
raise TypeError(data)
return b"\n".join(
[
'<{} xmlns="{}">'.format(root, xmlns).encode(),
to_xml(data),
"</{}>".format(root).encode(),
]
)
class TreeToXmlTest(unittest.TestCase):
def test_leaves(self):
self.assertEqual(
_tree_to_xml("root", "http://example.com", None),
b'<root xmlns="http://example.com">\n\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", True),
b'<root xmlns="http://example.com">\n True\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", "abc"),
b'<root xmlns="http://example.com">\n abc\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", 42),
b'<root xmlns="http://example.com">\n 42\n</root>',
)
self.assertEqual(
_tree_to_xml("root", "http://example.com", 3.14),
b'<root xmlns="http://example.com">\n 3.14\n</root>',
)
def test_dict(self):
self.assertIn(
_tree_to_xml("root", "http://example.com", {"foo": "bar", "baz": "qux"}),
[
b'<root xmlns="http://example.com">\n'
b" <foo>\n bar\n </foo>\n"
b" <baz>\n qux\n </baz>\n"
b"</root>",
b'<root xmlns="http://example.com">\n'
b" <baz>\n qux\n </baz>\n"
b" <foo>\n bar\n </foo>\n"
b"</root>",
],
)
def test_list(self):
self.assertEqual(
_tree_to_xml(
"root", "http://example.com", [{"foo": "bar"}, {"foo": "baz"},]
),
b'<root xmlns="http://example.com">\n'
b" <foo>\n bar\n </foo>\n"
b" <foo>\n baz\n </foo>\n"
b"</root>",
)
def xml_document_strategy(keys, root, xmlns):
"""Generates an hypothesis strategy that generates metadata files
for an XML format that uses the given keys."""
return strategies.builds(
functools.partial(_tree_to_xml, root, xmlns), json_document_strategy(keys)
)
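# Illustrative sketch (not part of this diff): xml_document_strategy() wraps
# the same key-based generation but serializes each document with
# _tree_to_xml(). The root element, namespace and keys below are examples only.
from hypothesis import given

@given(xml_document_strategy(
    keys=["name", "version"], root="entry", xmlns="http://www.w3.org/2005/Atom"
))
def test_xml_strategy_generates_bytes(doc):
    # documents are rendered as bytes wrapped in the requested root element
    assert doc.startswith(b'<entry xmlns="http://www.w3.org/2005/Atom">')
    assert doc.endswith(b"</entry>")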
def filter_dict(d, keys):
"return a copy of the dict with keys deleted"
if not isinstance(keys, (list, tuple)):
keys = (keys,)
return dict((k, v) for (k, v) in d.items() if k not in keys)
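# Example (illustrative, not part of this diff):
#   filter_dict({"name": "npm", "license": "Artistic-2.0"}, "license")
#   returns {"name": "npm"}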
def fill_obj_storage(obj_storage):
"""Add some content in an object storage."""
for (obj_id, content) in OBJ_STORAGE_DATA.items():
obj_storage.add(content, obj_id=hash_to_bytes(obj_id))
def fill_storage(storage):
storage.origin_add(ORIGINS)
storage.directory_add([DIRECTORY, DIRECTORY2])
storage.revision_add(REVISIONS)
storage.snapshot_add(SNAPSHOTS)
for visit, snapshot in zip(ORIGIN_VISITS, SNAPSHOTS):
assert snapshot.id is not None
visit = storage.origin_visit_add(
[OriginVisit(origin=visit["origin"], date=now(), type=visit["type"])]
)[0]
visit_status = OriginVisitStatus(
origin=visit.origin,
visit=visit.visit,
date=now(),
status="full",
snapshot=snapshot.id,
)
storage.origin_visit_status_add([visit_status])
contents = []
for (obj_id, content) in OBJ_STORAGE_DATA.items():
content_hashes = hashutil.MultiHash.from_data(content).digest()
contents.append(
Content(
data=content,
length=len(content),
status="visible",
sha1=hash_to_bytes(obj_id),
sha1_git=hash_to_bytes(obj_id),
sha256=content_hashes["sha256"],
blake2s256=content_hashes["blake2s256"],
)
)
storage.content_add(contents)
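# Illustrative sketch (not part of this diff): these two helpers are meant to
# be called from test fixtures. fill_obj_storage() only needs an object
# exposing add(content, obj_id=...), while fill_storage() expects a
# swh.storage instance; get_storage() is the real swh.storage factory, and
# the "memory" backend is assumed to be the one used by the test suite.
from swh.storage import get_storage

def example_populated_storage():
    storage = get_storage("memory")
    fill_storage(storage)
    return storage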
class CommonContentIndexerTest(metaclass=abc.ABCMeta):
legacy_get_format = False
"""True if and only if the tested indexer uses the legacy format.
see: https://forge.softwareheritage.org/T1433
"""
def get_indexer_results(self, ids):
"""Override this for indexers that don't have a mock storage."""
return self.indexer.idx_storage.state
def assert_legacy_results_ok(self, sha1s, expected_results=None):
# XXX old format, remove this when all endpoints are
# updated to the new one
# see: https://forge.softwareheritage.org/T1433
sha1s = [
sha1 if isinstance(sha1, bytes) else hash_to_bytes(sha1) for sha1 in sha1s
]
actual_results = list(self.get_indexer_results(sha1s))
if expected_results is None:
expected_results = self.expected_results
self.assertEqual(
len(expected_results),
len(actual_results),
(expected_results, actual_results),
)
for indexed_data in actual_results:
_id = indexed_data["id"]
expected_data = expected_results[hashutil.hash_to_hex(_id)].copy()
expected_data["id"] = _id
self.assertEqual(indexed_data, expected_data)
def assert_results_ok(self, sha1s, expected_results=None):
if self.legacy_get_format:
self.assert_legacy_results_ok(sha1s, expected_results)
return
sha1s = [
sha1 if isinstance(sha1, bytes) else hash_to_bytes(sha1) for sha1 in sha1s
]
actual_results = list(self.get_indexer_results(sha1s))
if expected_results is None:
expected_results = self.expected_results
self.assertEqual(
len(expected_results),
len(actual_results),
(expected_results, actual_results),
)
for indexed_data in actual_results:
(_id, indexed_data) = list(indexed_data.items())[0]
expected_data = expected_results[hashutil.hash_to_hex(_id)].copy()
expected_data = [expected_data]
self.assertEqual(indexed_data, expected_data)
def test_index(self):
"""Known sha1 have their data indexed
"""
sha1s = [self.id0, self.id1, self.id2]
# when
self.indexer.run(sha1s, policy_update="update-dups")
self.assert_results_ok(sha1s)
# 2nd pass
self.indexer.run(sha1s, policy_update="ignore-dups")
self.assert_results_ok(sha1s)
def test_index_one_unknown_sha1(self):
"""Unknown sha1 are not indexed"""
sha1s = [
self.id1,
"799a5ef812c53907562fe379d4b3851e69c7cb15", # unknown
"800a5ef812c53907562fe379d4b3851e69c7cb15",
] # unknown
# when
self.indexer.run(sha1s, policy_update="update-dups")
# then
expected_results = {
k: v for k, v in self.expected_results.items() if k in sha1s
}
self.assert_results_ok(sha1s, expected_results)
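# Illustrative sketch (not part of this diff): concrete test classes mix
# CommonContentIndexerTest with unittest.TestCase and provide the attributes
# the assertions above rely on. The indexer class below is a placeholder.
#
# class TestSomeContentIndexer(CommonContentIndexerTest, unittest.TestCase):
#     def setUp(self):
#         self.indexer = SomeContentIndexer(config=...)  # hypothetical
#         self.id0 = "01c9379dfc33803963d07c1ccc748d3fe4c96bb5"
#         self.id1 = "688a5ef812c53907562fe379d4b3851e69c7cb15"
#         self.id2 = "da39a3ee5e6b4b0d3255bfef95601890afd80709"
#         self.expected_results = {...}  # expected data, keyed by hex sha1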
class CommonContentIndexerPartitionTest:
"""Allows to factorize tests on range indexer.
"""
def setUp(self):
self.contents = sorted(OBJ_STORAGE_DATA)
def assert_results_ok(self, partition_id, nb_partitions, actual_results):
expected_ids = [
c.sha1
for c in stream_results(
self.indexer.storage.content_get_partition,
partition_id=partition_id,
nb_partitions=nb_partitions,
)
]
- start, end = get_partition_bounds_bytes(partition_id, nb_partitions, SHA1_SIZE)
-
actual_results = list(actual_results)
for indexed_data in actual_results:
_id = indexed_data["id"]
assert isinstance(_id, bytes)
assert _id in expected_ids
- assert start <= _id
- if end:
- assert _id <= end
-
_tool_id = indexed_data["indexer_configuration_id"]
assert _tool_id == self.indexer.tool["id"]
def test__index_contents(self):
"""Indexing contents without existing data results in indexed data
"""
partition_id = 0
nb_partitions = 4
actual_results = list(
self.indexer._index_contents(partition_id, nb_partitions, indexed={})
)
self.assert_results_ok(partition_id, nb_partitions, actual_results)
def test__index_contents_with_indexed_data(self):
"""Indexing contents with existing data results in less indexed data
"""
partition_id = 3
nb_partitions = 4
# first pass
actual_results = list(
self.indexer._index_contents(partition_id, nb_partitions, indexed={})
)
self.assert_results_ok(partition_id, nb_partitions, actual_results)
indexed_ids = set(res["id"] for res in actual_results)
actual_results = list(
self.indexer._index_contents(
partition_id, nb_partitions, indexed=indexed_ids
)
)
# already indexed, so nothing new
assert actual_results == []
def test_generate_content_get(self):
"""Optimal indexing should result in indexed data
"""
partition_id = 0
nb_partitions = 4
actual_results = self.indexer.run(
partition_id, nb_partitions, skip_existing=False
)
assert actual_results == {"status": "uneventful"} # why?
def test_generate_content_get_no_result(self):
"""No result indexed returns False"""
actual_results = self.indexer.run(0, 0, incremental=False)
assert actual_results == {"status": "uneventful"}
