diff --git a/docs/extrinsic-metadata-specification.rst b/docs/extrinsic-metadata-specification.rst index 5956040b..00b5f27b 100644 --- a/docs/extrinsic-metadata-specification.rst +++ b/docs/extrinsic-metadata-specification.rst @@ -1,182 +1,183 @@ +:orphan: + .. _extrinsic-metadata-specification: Extrinsic metadata specification ================================ :term:`Extrinsic metadata` is information about software that is not part of the source code itself but still closely related to the software. Typical sources for extrinsic metadata are: the hosting place of a repository, which can offer metadata via its web view or API; external registries like collaborative curation initiatives; and out-of-band information available at source code archival time. Since they are not part of the source code, a dedicated mechanism to fetch and store them is needed. This specification assumes the reader is familiar with Software Heritage's :ref:`architecture` and :ref:`data-model`. Metadata sources ---------------- Authorities ^^^^^^^^^^^ Metadata authorities are entities that provide metadata about an :term:`origin`. Metadata authorities include: code hosting places, :term:`deposit` submitters, and registries (eg. Wikidata). An authority is uniquely defined by these properties: -* its type, representing the kind of authority, which is one of these values: + * its type, representing the kind of authority, which is one of these values: * `deposit`, for metadata pushed to Software Heritage at the same time as a software artifact * `forge`, for metadata pulled from the same source as the one hosting the software artifacts (which includes package managers) * `registry`, for metadata pulled from a third-party - -* its URL, which unambiguously identifies an instance of the authority type. + * its URL, which unambiguously identifies an instance of the authority type. Examples: =============== ================================= type url =============== ================================= deposit https://hal.archives-ouvertes.fr/ deposit https://hal.inria.fr/ deposit https://software.intel.com/ forge https://gitlab.com/ forge https://gitlab.inria.fr/ forge https://0xacab.org/ forge https://github.com/ registry https://www.wikidata.org/ registry https://swmath.org/ registry https://ascl.net/ =============== ================================= Metadata fetchers ^^^^^^^^^^^^^^^^^ Metadata fetchers are software components used to fetch metadata from a metadata authority, and ingest them into the Software Heritage archive. A metadata fetcher is uniquely defined by these properties: * its type * its version Examples: * :term:`loaders `, which may either discover metadata as a side-effect of loading source code, or be dedicated to fetching metadata. * :term:`listers `, which may discover metadata as a side-effect of discovering origins. * :term:`deposit` submitters, which push metadata to SWH from a third-party; usually at the same time as a :term:`software artifact` * crawlers, which fetch metadata from an authority in a way that is none of the above (eg. by querying a specific API of the origin's forge). Storage API ~~~~~~~~~~~ Authorities and metadata fetchers ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The :term:`storage` API offers these endpoints to manipulate metadata authorities and metadata fetchers: * ``metadata_authority_add(type, url, metadata)`` which adds a new metadata authority to the storage. 
* ``metadata_authority_get(type, url)`` which looks up a known authority (there is at most one) and if it is known, returns a dictionary with keys ``type``, ``url``, and ``metadata``. * ``metadata_fetcher_add(name, version, metadata)`` which adds a new metadata fetcher to the storage. * ``metadata_fetcher_get(name, version)`` which looks up a known fetcher (there is at most one) and if it is known, returns a dictionary with keys ``name``, ``version``, and ``metadata``. These `metadata` fields contain JSON-encodable dictionaries with information about the authority/fetcher, in a format specific to each authority/fetcher. With authority, the `metadata` field is reserved for information describing and qualifying the authority. With fetchers, the `metadata` field is reserved for configuration metadata and other technical usage. Origin metadata storage ----------------------- Extrinsic metadata are stored in SWH's :term:`storage database`. The storage API offers three endpoints to manipulate origin metadata: * Adding metadata:: origin_metadata_add(origin_url, discovery_date, authority, fetcher, format, metadata) which adds a new `metadata` byte string obtained from a given authority and associated to the origin. `authority` must be a dict containing keys `type` and `url`, and `fetcher` a dict containing keys `name` and `version`. The authority and fetcher must be known to the storage before using this endpoint. `format` is a text field indicating the format of the content of the `metadata` byte string. * Getting latest metadata:: origin_metadata_get_latest(origin_url, authority) where `authority` must be a dict containing keys `type` and `url`, which returns a dictionary corresponding to the latest metadata entry added from this origin, in the format:: { 'authority': {'type': ..., 'url': ...}, 'fetcher': {'name': ..., 'version': ...}, 'discovery_date': ..., 'format': '...', 'metadata': b'...' } * Getting all metadata:: origin_metadata_get(origin_url, authority, after, limit) which returns a list of dictionaries, one for each metadata item deposited, corresponding to the given origin and obtained from the specified authority. `authority` must be a dict containing keys `type` and `url`. Each of these dictionaries is in the following format:: { 'authority': {'type': ..., 'url': ...}, 'fetcher': {'name': ..., 'version': ...}, 'discovery_date': ..., 'format': '...', 'metadata': b'...' } The parameters ``after`` and ``limit`` are used for pagination based on the order defined by the ``discovery_date``. ``metadata`` is a bytes array (eventually encoded using Base64). Its format is specific to each authority; and is treated as an opaque value by the storage. Unifying these various formats into a common language is outside the scope of this specification. diff --git a/swh/storage/algos/diff.py b/swh/storage/algos/diff.py index 6eafbb85..70e8d45e 100644 --- a/swh/storage/algos/diff.py +++ b/swh/storage/algos/diff.py @@ -1,403 +1,404 @@ # Copyright (C) 2018 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information # Utility module to efficiently compute the list of changed files # between two directory trees. # The implementation is inspired from the work of Alberto Cortés # for the go-git project. 
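Taken together, the storage endpoints described in the specification above can be exercised as in the following hedged sketch; the ``storage`` handle, the fetcher name ``swh-deposit`` and the origin URL are illustrative placeholders, not values mandated by this specification::

    import datetime

    authority = {'type': 'deposit', 'url': 'https://hal.archives-ouvertes.fr/'}
    fetcher = {'name': 'swh-deposit', 'version': '0.0.1'}

    # register the authority and the fetcher once
    storage.metadata_authority_add(authority['type'], authority['url'], {})
    storage.metadata_fetcher_add(fetcher['name'], fetcher['version'], {})

    # attach a metadata record to an origin
    storage.origin_metadata_add(
        'https://example.org/user/project',
        datetime.datetime.now(tz=datetime.timezone.utc),
        authority,
        fetcher,
        'application/json',
        b'{"name": "project"}',
    )

    # retrieve the most recent record from that authority
    latest = storage.origin_metadata_get_latest(
        'https://example.org/user/project', authority)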
For more details, you can refer to: # - this blog post: https://blog.sourced.tech/post/difftree/ # - the reference implementation in go: # https://github.com/src-d/go-git/tree/master/utils/merkletrie import collections from swh.model.hashutil import hash_to_bytes from swh.model.identifiers import directory_identifier from .dir_iterators import ( DirectoryIterator, DoubleDirectoryIterator, Remaining ) # get the hash identifier for an empty directory _empty_dir_hash = hash_to_bytes(directory_identifier({'entries': []})) def _get_rev(storage, rev_id): """ Return revision data from swh storage. """ return list(storage.revision_get([rev_id]))[0] class _RevisionChangesList(object): """ Helper class to track the changes between two revision directories. """ def __init__(self, storage, track_renaming): """ Args: storage: instance of swh storage track_renaming (bool): whether to track or not files renaming """ self.storage = storage self.track_renaming = track_renaming self.result = [] # dicts used to track file renaming based on hash value # we use a list instead of a single entry to handle the corner # case when a repository contains multiple instance of # the same file in different directories and a commit # renames all of them self.inserted_hash_idx = collections.defaultdict(list) self.deleted_hash_idx = collections.defaultdict(list) def add_insert(self, it_to): """ Add a file insertion in the to directory. Args: it_to (swh.storage.algos.dir_iterators.DirectoryIterator): iterator on the to directory """ to_hash = it_to.current_hash() # if the current file hash has been previously marked as deleted, # the file has been renamed if self.track_renaming and self.deleted_hash_idx[to_hash]: # pop the delete change index in the same order it was inserted change = self.result[self.deleted_hash_idx[to_hash].pop(0)] # change the delete change as a rename one change['type'] = 'rename' change['to'] = it_to.current() change['to_path'] = it_to.current_path() else: # add the insert change in the list self.result.append({'type': 'insert', 'from': None, 'from_path': None, 'to': it_to.current(), 'to_path': it_to.current_path()}) # if rename tracking is activated, add the change index in # the inserted_hash_idx dict if self.track_renaming: self.inserted_hash_idx[to_hash].append(len(self.result) - 1) def add_delete(self, it_from): """ Add a file deletion in the from directory. Args: it_from (swh.storage.algos.dir_iterators.DirectoryIterator): iterator on the from directory """ from_hash = it_from.current_hash() # if the current file has been previously marked as inserted, # the file has been renamed if self.track_renaming and self.inserted_hash_idx[from_hash]: # pop the insert change index in the same order it was inserted change = self.result[self.inserted_hash_idx[from_hash].pop(0)] # change the insert change as a rename one change['type'] = 'rename' change['from'] = it_from.current() change['from_path'] = it_from.current_path() else: # add the delete change in the list self.result.append({'type': 'delete', 'from': it_from.current(), 'from_path': it_from.current_path(), 'to': None, 'to_path': None}) # if rename tracking is activated, add the change index in # the deleted_hash_idx dict if self.track_renaming: self.deleted_hash_idx[from_hash].append(len(self.result) - 1) def add_modify(self, it_from, it_to): """ Add a file modification in the to directory. 
Args: it_from (swh.storage.algos.dir_iterators.DirectoryIterator): iterator on the from directory it_to (swh.storage.algos.dir_iterators.DirectoryIterator): iterator on the to directory """ self.result.append({'type': 'modify', 'from': it_from.current(), 'from_path': it_from.current_path(), 'to': it_to.current(), 'to_path': it_to.current_path()}) def add_recursive(self, it, insert): """ Recursively add changes from a directory. Args: it (swh.storage.algos.dir_iterators.DirectoryIterator): iterator on a directory insert (bool): the type of changes to add (insertion or deletion) """ # current iterated element is a regular file, # simply add adequate change in the list if not it.current_is_dir(): if insert: self.add_insert(it) else: self.add_delete(it) return # current iterated element is a directory, dir_id = it.current_hash() # handle empty dir insertion/deletion as the swh model allow # to have such object compared to git if dir_id == _empty_dir_hash: if insert: self.add_insert(it) else: self.add_delete(it) # iterate on files reachable from it and add # adequate changes in the list else: sub_it = DirectoryIterator(self.storage, dir_id, it.current_path() + b'/') sub_it_current = sub_it.step() while sub_it_current: if not sub_it.current_is_dir(): if insert: self.add_insert(sub_it) else: self.add_delete(sub_it) sub_it_current = sub_it.step() def add_recursive_insert(self, it_to): """ Recursively add files insertion from a to directory. Args: it_to (swh.storage.algos.dir_iterators.DirectoryIterator): iterator on a to directory """ self.add_recursive(it_to, True) def add_recursive_delete(self, it_from): """ Recursively add files deletion from a from directory. Args: it_from (swh.storage.algos.dir_iterators.DirectoryIterator): iterator on a from directory """ self.add_recursive(it_from, False) def _diff_elts_same_name(changes, it): """" Compare two directory entries with the same name and add adequate changes if any. Args: changes (_RevisionChangesList): the list of changes between two revisions it (swh.storage.algos.dir_iterators.DoubleDirectoryIterator): the iterator traversing two revision directories at the same time """ # compare the two current directory elements of the iterator status = it.compare() # elements have same hash and same permissions: # no changes to add and call next on the two iterators if status['same_hash'] and status['same_perms']: it.next_both() # elements are regular files and have been modified: # insert the modification change in the list and # call next on the two iterators elif status['both_are_files']: changes.add_modify(it.it_from, it.it_to) it.next_both() # one element is a regular file, the other a directory: # recursively add delete/insert changes and call next # on the two iterators elif status['file_and_dir']: changes.add_recursive_delete(it.it_from) changes.add_recursive_insert(it.it_to) it.next_both() # both elements are directories: elif status['both_are_dirs']: # from directory is empty: # recursively add insert changes in the to directory # and call next on the two iterators if status['from_is_empty_dir']: changes.add_recursive_insert(it.it_to) it.next_both() # to directory is empty: # recursively add delete changes in the from directory # and call next on the two iterators elif status['to_is_empty_dir']: changes.add_recursive_delete(it.it_from) it.next_both() # both directories are not empty: # call step on the two iterators to descend further in # the directory trees. 
elif not status['from_is_empty_dir'] and not status['to_is_empty_dir']: it.step_both() def _compare_paths(path1, path2): """ Compare paths in lexicographic depth-first order. For instance, it returns: + - "a" < "b" - "b/c/d" < "b" - "c/foo.txt" < "c.txt" """ path1_parts = path1.split(b'/') path2_parts = path2.split(b'/') i = 0 while True: if len(path1_parts) == len(path2_parts) and i == len(path1_parts): return 0 elif len(path2_parts) == i: return 1 elif len(path1_parts) == i: return -1 else: if path2_parts[i] > path1_parts[i]: return -1 elif path2_parts[i] < path1_parts[i]: return 1 i = i + 1 def _diff_elts(changes, it): """ Compare two directory entries. Args: changes (_RevisionChangesList): the list of changes between two revisions it (swh.storage.algos.dir_iterators.DoubleDirectoryIterator): the iterator traversing two revision directories at the same time """ # compare current to and from path in depth-first lexicographic order c = _compare_paths(it.it_from.current_path(), it.it_to.current_path()) # current from path is lower than the current to path: # the from path has been deleted if c < 0: changes.add_recursive_delete(it.it_from) it.next_from() # current from path is greater than the current to path: # the to path has been inserted elif c > 0: changes.add_recursive_insert(it.it_to) it.next_to() # paths are the same and need more processing else: _diff_elts_same_name(changes, it) def diff_directories(storage, from_dir, to_dir, track_renaming=False): """ Compute the differential between two directories, i.e. the list of file changes (insertion / deletion / modification / renaming) between them. Args: storage (swh.storage.storage.Storage): instance of a swh storage (either local or remote, for optimal performance the use of a local storage is recommended) from_dir (bytes): the swh identifier of the directory to compare from to_dir (bytes): the swh identifier of the directory to compare to track_renaming (bool): whether or not to track files renaming Returns: list: A list of dict representing the changes between the two revisions. Each dict contains the following entries: - *type*: a string describing the type of change ('insert' / 'delete' / 'modify' / 'rename') - *from*: a dict containing the directory entry metadata in the from revision (None in case of an insertion) - *from_path*: bytes string corresponding to the absolute path of the from revision entry (None in case of an insertion) - *to*: a dict containing the directory entry metadata in the to revision (None in case of a deletion) - *to_path*: bytes string corresponding to the absolute path of the to revision entry (None in case of a deletion) The returned list is sorted in lexicographic depth-first order according to the value of the *to_path* field. """ changes = _RevisionChangesList(storage, track_renaming) it = DoubleDirectoryIterator(storage, from_dir, to_dir) while True: r = it.remaining() if r == Remaining.NoMoreFiles: break elif r == Remaining.OnlyFromFilesRemain: changes.add_recursive_delete(it.it_from) it.next_from() elif r == Remaining.OnlyToFilesRemain: changes.add_recursive_insert(it.it_to) it.next_to() else: _diff_elts(changes, it) return changes.result def diff_revisions(storage, from_rev, to_rev, track_renaming=False): """ Compute the differential between two revisions, i.e. the list of file changes between the two associated directories. 
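As a hedged usage sketch of the underlying :func:`diff_directories` helper (the storage instance and the directory identifiers below are placeholders; against a real archive they would be actual ``sha1_git`` values)::

    from swh.storage.in_memory import Storage
    from swh.storage.algos.diff import diff_directories

    storage = Storage()          # any swh storage instance works
    from_dir = b'\x00' * 20      # placeholder "from" directory id
    to_dir = b'\x01' * 20        # placeholder "to" directory id

    for change in diff_directories(storage, from_dir, to_dir,
                                   track_renaming=True):
        print(change['type'], change['from_path'], change['to_path'])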
Args: storage (swh.storage.storage.Storage): instance of a swh storage (either local or remote, for optimal performance the use of a local storage is recommended) from_rev (bytes): the identifier of the revision to compare from to_rev (bytes): the identifier of the revision to compare to track_renaming (bool): whether or not to track files renaming Returns: list: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories`). """ from_dir = None if from_rev: from_dir = _get_rev(storage, from_rev)['directory'] to_dir = _get_rev(storage, to_rev)['directory'] return diff_directories(storage, from_dir, to_dir, track_renaming) def diff_revision(storage, revision, track_renaming=False): """ Computes the differential between a revision and its first parent. If the revision has no parents, the directory to compare from is considered as empty. In other words, it computes the file changes introduced in a specific revision. Args: storage (swh.storage.storage.Storage): instance of a swh storage (either local or remote, for optimal performance the use of a local storage is recommended) revision (bytes): the identifier of the revision from which to compute the introduced changes. track_renaming (bool): whether or not to track files renaming Returns: list: A list of dict describing the introduced file changes (see :func:`swh.storage.algos.diff.diff_directories`). """ rev_data = _get_rev(storage, revision) parent = None if rev_data['parents']: parent = rev_data['parents'][0] return diff_revisions(storage, parent, revision, track_renaming) diff --git a/swh/storage/buffer.py b/swh/storage/buffer.py index 4f52f3c4..8f86455b 100644 --- a/swh/storage/buffer.py +++ b/swh/storage/buffer.py @@ -1,108 +1,109 @@ # Copyright (C) 2019 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from collections import deque from functools import partial from typing import Optional, Iterable, Dict from swh.core.utils import grouper from swh.storage import get_storage class BufferingProxyStorage: """Storage implementation in charge of accumulating objects prior to discussing with the "main" storage. Sample configuration use case for buffering storage: .. code-block:: yaml storage: cls: buffer args: storage: cls: remote args: http://storage.internal.staging.swh.network:5002/ min_batch_size: content: 10000 content_bytes: 100000000 directory: 5000 revision: 1000 release: 10000 """ def __init__(self, storage, min_batch_size=None): self.storage = get_storage(**storage) if min_batch_size is None: min_batch_size = {} self.min_batch_size = { 'content': min_batch_size.get('content', 10000), 'content_bytes': min_batch_size.get('content_bytes', 100*1024*1024), 'directory': min_batch_size.get('directory', 25000), 'revision': min_batch_size.get('revision', 100000), 'release': min_batch_size.get('release', 100000), } self.object_types = ['content', 'directory', 'revision', 'release'] self._objects = {k: deque() for k in self.object_types} def __getattr__(self, key): if key.endswith('_add'): object_type = key.split('_')[0] if object_type in self.object_types: return partial( self.object_add, object_type=object_type ) return getattr(self.storage, key) def content_add(self, content: Iterable[Dict]) -> Dict: """Enqueue contents to write to the storage. Following policies apply: - - First, check if the queue's threshold is hit. 
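A hedged usage sketch of this buffering proxy, constructed directly over the in-memory backend (assuming it is registered under the ``memory`` class name; the batch sizes and the sample content are illustrative)::

    from swh.model.hashutil import MultiHash
    from swh.storage.buffer import BufferingProxyStorage

    storage = BufferingProxyStorage(
        storage={'cls': 'memory', 'args': {}},
        min_batch_size={'content': 2, 'content_bytes': 1024},
    )

    data = b'hello world\n'
    content = {
        **MultiHash.from_data(data).digest(),  # sha1, sha1_git, sha256, blake2s256
        'data': data,
        'length': len(data),
        'status': 'visible',
    }

    storage.content_add([content])  # queued only: the threshold of 2 is not reached
    summary = storage.flush()       # pushes the queued content to the backend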
If it is flush content - to the storage. - - If not, check if the total size of enqueued contents's threshold is - hit. If it is flush content to the storage. + - First, check if the queue's threshold is hit. + If it is flush content to the storage. + + - If not, check if the total size of enqueued contents's + threshold is hit. If it is flush content to the storage. """ s = self.object_add(content, object_type='content') if not s: q = self._objects['content'] total_size = sum(c['length'] for c in q) if total_size >= self.min_batch_size['content_bytes']: return self.flush(['content']) return s def flush(self, object_types: Optional[Iterable[str]] = None) -> Dict: if object_types is None: object_types = self.object_types summary = {} # type: Dict[str, Dict] for object_type in object_types: q = self._objects[object_type] for objs in grouper(q, n=self.min_batch_size[object_type]): add_fn = getattr(self.storage, '%s_add' % object_type) s = add_fn(objs) summary = {k: v + summary.get(k, 0) for k, v in s.items()} q.clear() return summary def object_add(self, objects: Iterable[Dict], *, object_type: str) -> Dict: """Enqueue objects to write to the storage. This checks if the queue's threshold is hit. If it is actually write those to the storage. """ q = self._objects[object_type] threshold = self.min_batch_size[object_type] q.extend(objects) if len(q) >= threshold: return self.flush() return {} diff --git a/swh/storage/in_memory.py b/swh/storage/in_memory.py index 04bf5295..24956088 100644 --- a/swh/storage/in_memory.py +++ b/swh/storage/in_memory.py @@ -1,1847 +1,1848 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import re import bisect import dateutil import collections import copy import datetime import itertools import random from collections import defaultdict from datetime import timedelta from typing import Any, Dict, List, Optional import attr from swh.model.model import ( Content, Directory, Revision, Release, Snapshot, OriginVisit, Origin, SHA1_SIZE) from swh.model.hashutil import DEFAULT_ALGORITHMS, hash_to_bytes, hash_to_hex from swh.objstorage import get_objstorage from swh.objstorage.exc import ObjNotFoundError from .storage import get_journal_writer from .converters import origin_url_to_sha1 from .utils import get_partition_bounds_bytes # Max block size of contents to return BULK_BLOCK_CONTENT_LEN_MAX = 10000 def now(): return datetime.datetime.now(tz=datetime.timezone.utc) class Storage: def __init__(self, journal_writer=None): self._contents = {} self._content_indexes = defaultdict(lambda: defaultdict(set)) self._skipped_contents = {} self._skipped_content_indexes = defaultdict(lambda: defaultdict(set)) self.reset() if journal_writer: self.journal_writer = get_journal_writer(**journal_writer) else: self.journal_writer = None def reset(self): self._directories = {} self._revisions = {} self._releases = {} self._snapshots = {} self._origins = {} self._origins_by_id = [] self._origins_by_sha1 = {} self._origin_visits = {} self._persons = [] self._origin_metadata = defaultdict(list) self._tools = {} self._metadata_providers = {} self._objects = defaultdict(list) # ideally we would want a skip list for both fast inserts and searches self._sorted_sha1s = [] self.objstorage = get_objstorage('memory', {}) def check_config(self, *, check_write): """Check that the storage is configured 
and ready to go.""" return True def _content_add(self, contents, with_data): content_with_data = [] content_without_data = [] for content in contents: if content.status is None: content.status = 'visible' if content.length is None: content.length = -1 if content.status != 'absent': if self._content_key(content) not in self._contents: content_with_data.append(content) else: if self._content_key(content) not in self._skipped_contents: content_without_data.append(content) if self.journal_writer: for content in content_with_data: content = attr.evolve(content, data=None) self.journal_writer.write_addition('content', content) for content in content_without_data: self.journal_writer.write_addition('content', content) count_content_added, count_content_bytes_added = \ self._content_add_present(content_with_data, with_data) count_skipped_content_added = self._content_add_absent( content_without_data ) summary = { 'content:add': count_content_added, 'skipped_content:add': count_skipped_content_added, } if with_data: summary['content:add:bytes'] = count_content_bytes_added return summary def _content_add_present(self, contents, with_data): count_content_added = 0 count_content_bytes_added = 0 for content in contents: key = self._content_key(content) if key in self._contents: continue for algorithm in DEFAULT_ALGORITHMS: hash_ = content.get_hash(algorithm) if hash_ in self._content_indexes[algorithm]\ and (algorithm not in {'blake2s256', 'sha256'}): from . import HashCollision raise HashCollision(algorithm, hash_, key) for algorithm in DEFAULT_ALGORITHMS: hash_ = content.get_hash(algorithm) self._content_indexes[algorithm][hash_].add(key) self._objects[content.sha1_git].append( ('content', content.sha1)) self._contents[key] = content bisect.insort(self._sorted_sha1s, content.sha1) count_content_added += 1 if with_data: content_data = self._contents[key].data self._contents[key] = attr.evolve( self._contents[key], data=None) count_content_bytes_added += len(content_data) self.objstorage.add(content_data, content.sha1) return (count_content_added, count_content_bytes_added) def _content_add_absent(self, contents): count = 0 skipped_content_missing = self.skipped_content_missing(contents) for content in skipped_content_missing: key = self._content_key(content) for algo in DEFAULT_ALGORITHMS: self._skipped_content_indexes[algo][content.get_hash(algo)] \ .add(key) self._skipped_contents[key] = content count += 1 return count def _content_to_model(self, contents): """Takes a list of content dicts, optionally with an extra 'origin' key, and yields tuples (model.Content, origin).""" for content in contents: content = content.copy() content.pop('origin', None) yield Content.from_dict(content) def content_add(self, content): """Add content blobs to the storage Args: content (iterable): iterable of dictionaries representing individual pieces of content to add. 
Each dictionary has the following keys: - data (bytes): the actual content - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.DEFAULT_ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent - reason (str): if status = absent, the reason why - origin (int): if status = absent, the origin we saw the content in Raises: HashCollision in case of collision Returns: Summary dict with the following key and associated values: content:add: New contents added content_bytes:add: Sum of the contents' length data skipped_content:add: New skipped contents (no data) added """ now = datetime.datetime.now(tz=datetime.timezone.utc) content = [attr.evolve(c, ctime=now) for c in self._content_to_model(content)] return self._content_add(content, with_data=True) def content_add_metadata(self, content): """Add content metadata to the storage (like `content_add`, but without inserting to the objstorage). Args: content (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.DEFAULT_ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent - reason (str): if status = absent, the reason why - origin (int): if status = absent, the origin we saw the content in - ctime (datetime): time of insertion in the archive Raises: HashCollision in case of collision Returns: Summary dict with the following key and associated values: content:add: New contents added skipped_content:add: New skipped contents (no data) added """ content = list(self._content_to_model(content)) return self._content_add(content, with_data=False) def content_get(self, content): """Retrieve in bulk contents and their data. This function may yield more blobs than provided sha1 identifiers, in case they collide. Args: content: iterables of sha1 Yields: Dict[str, bytes]: Generates streams of contents as dict with their raw data: - sha1 (bytes): content id - data (bytes): content's raw data Raises: ValueError in case of too much contents are required. cf. BULK_BLOCK_CONTENT_LEN_MAX """ # FIXME: Make this method support slicing the `data`. if len(content) > BULK_BLOCK_CONTENT_LEN_MAX: raise ValueError( "Sending at most %s contents." % BULK_BLOCK_CONTENT_LEN_MAX) for obj_id in content: try: data = self.objstorage.get(obj_id) except ObjNotFoundError: yield None continue yield {'sha1': obj_id, 'data': data} def content_get_range(self, start, end, limit=1000, db=None, cur=None): """Retrieve contents within range [start, end] bound by limit. Note that this function may return more than one blob per hash. The limit is enforced with multiplicity (ie. two blobs with the same hash will count twice toward the limit). Args: **start** (bytes): Starting identifier range (expected smaller than end) **end** (bytes): Ending identifier range (expected larger than start) **limit** (int): Limit result (default to 1000) Returns: a dict with keys: - contents [dict]: iterable of contents in between the range. 
- next (bytes): There remains content in the range starting from this next sha1 """ if limit is None: raise ValueError('Development error: limit should not be None') from_index = bisect.bisect_left(self._sorted_sha1s, start) sha1s = itertools.islice(self._sorted_sha1s, from_index, None) sha1s = ((sha1, content_key) for sha1 in sha1s for content_key in self._content_indexes['sha1'][sha1]) matched = [] next_content = None for sha1, key in sha1s: if sha1 > end: break if len(matched) >= limit: next_content = sha1 break matched.append(self._contents[key].to_dict()) return { 'contents': matched, 'next': next_content, } def content_get_partition( self, partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None): """Splits contents into nb_partitions, and returns one of these based on partition_id (which must be in [0, nb_partitions-1]) There is no guarantee on how the partitioning is done, or the result order. Args: partition_id (int): index of the partition to fetch nb_partitions (int): total number of partitions to split into limit (int): Limit result (default to 1000) page_token (Optional[str]): opaque token used for pagination. Returns: a dict with keys: - contents (List[dict]): iterable of contents in the partition. - **next_page_token** (Optional[str]): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. """ if limit is None: raise ValueError('Development error: limit should not be None') (start, end) = get_partition_bounds_bytes( partition_id, nb_partitions, SHA1_SIZE) if page_token: start = hash_to_bytes(page_token) if end is None: end = b'\xff'*SHA1_SIZE result = self.content_get_range(start, end, limit) result2 = { 'contents': result['contents'], 'next_page_token': None, } if result['next']: result2['next_page_token'] = hash_to_hex(result['next']) return result2 def content_get_metadata( self, contents: List[bytes]) -> Dict[bytes, List[Dict]]: """Retrieve content metadata in bulk Args: content: iterable of content identifiers (sha1) Returns: a dict with keys the content's sha1 and the associated value either the existing content's metadata or None if the content does not exist. """ result: Dict = {sha1: [] for sha1 in contents} for sha1 in contents: if sha1 in self._content_indexes['sha1']: objs = self._content_indexes['sha1'][sha1] # only 1 element as content_add_metadata would have raised a # hash collision otherwise for key in objs: d = self._contents[key].to_dict() del d['ctime'] result[sha1].append(d) return result def content_find(self, content): if not set(content).intersection(DEFAULT_ALGORITHMS): raise ValueError('content keys must contain at least one of: ' '%s' % ', '.join(sorted(DEFAULT_ALGORITHMS))) found = [] for algo in DEFAULT_ALGORITHMS: hash = content.get(algo) if hash and hash in self._content_indexes[algo]: found.append(self._content_indexes[algo][hash]) if not found: return [] keys = list(set.intersection(*found)) return [self._contents[key].to_dict() for key in keys] def content_missing(self, content, key_hash='sha1'): """List content missing from storage Args: contents ([dict]): iterable of dictionaries whose keys are either 'length' or an item of :data:`swh.model.hashutil.ALGORITHMS`; mapped to the corresponding checksum (or length). 
key_hash (str): name of the column to use as hash id result (default: 'sha1') Returns: iterable ([bytes]): missing content ids (as per the key_hash column) """ for cont in content: for (algo, hash_) in cont.items(): if algo not in DEFAULT_ALGORITHMS: continue if hash_ not in self._content_indexes.get(algo, []): yield cont[key_hash] break else: for result in self.content_find(cont): if result['status'] == 'missing': yield cont[key_hash] def content_missing_per_sha1(self, contents): """List content missing from storage based only on sha1. Args: contents: Iterable of sha1 to check for absence. Returns: iterable: missing ids Raises: TODO: an exception when we get a hash collision. """ for content in contents: if content not in self._content_indexes['sha1']: yield content def skipped_content_missing(self, contents): """List all skipped_content missing from storage Args: contents: Iterable of sha1 to check for skipped content entry Returns: iterable: dict of skipped content entry """ for content in contents: for (key, algorithm) in self._content_key_algorithm(content): if algorithm == 'blake2s256': continue if key not in self._skipped_content_indexes[algorithm]: # index must contain hashes of algos except blake2s256 # else the content is considered skipped yield content break def content_get_random(self): """Finds a random content id. Returns: a sha1_git """ return random.choice(list(self._content_indexes['sha1_git'])) def directory_add(self, directories): """Add directories to the storage Args: directories (iterable): iterable of dictionaries representing the individual directories to add. Each dict has the following keys: - id (sha1_git): the id of the directory to add - entries (list): list of dicts for each entry in the directory. Each dict has the following keys: - name (bytes) - type (one of 'file', 'dir', 'rev'): type of the directory entry (file, directory, revision) - target (sha1_git): id of the object pointed at by the directory entry - perms (int): entry permissions Returns: Summary dict of keys with associated count as values: directory:add: Number of directories actually added """ directories = list(directories) if self.journal_writer: self.journal_writer.write_additions( 'directory', (dir_ for dir_ in directories if dir_['id'] not in self._directories)) directories = [Directory.from_dict(d) for d in directories] count = 0 for directory in directories: if directory.id not in self._directories: count += 1 self._directories[directory.id] = directory self._objects[directory.id].append( ('directory', directory.id)) return {'directory:add': count} def directory_missing(self, directories): """List directories missing from storage Args: directories (iterable): an iterable of directory ids Yields: missing directory ids """ for id in directories: if id not in self._directories: yield id def _join_dentry_to_content(self, dentry): keys = ( 'status', 'sha1', 'sha1_git', 'sha256', 'length', ) ret = dict.fromkeys(keys) ret.update(dentry) if ret['type'] == 'file': # TODO: Make it able to handle more than one content content = self.content_find({'sha1_git': ret['target']}) if content: content = content[0] for key in keys: ret[key] = content[key] return ret def _directory_ls(self, directory_id, recursive, prefix=b''): if directory_id in self._directories: for entry in self._directories[directory_id].entries: ret = self._join_dentry_to_content(entry.to_dict()) ret['name'] = prefix + ret['name'] ret['dir_id'] = directory_id yield ret if recursive and ret['type'] == 'dir': yield from 
self._directory_ls( ret['target'], True, prefix + ret['name'] + b'/') def directory_ls(self, directory, recursive=False): """Get entries for one directory. Args: - directory: the directory to list entries from. - recursive: if flag on, this list recursively from this directory. Returns: List of entries for such directory. If `recursive=True`, names in the path of a dir/file not at the root are concatenated with a slash (`/`). """ yield from self._directory_ls(directory, recursive) def directory_entry_get_by_path(self, directory, paths): """Get the directory entry (either file or dir) from directory with path. Args: - directory: sha1 of the top level directory - paths: path to lookup from the top level directory. From left (top) to right (bottom). Returns: The corresponding directory entry if found, None otherwise. """ return self._directory_entry_get_by_path(directory, paths, b'') def directory_get_random(self): """Finds a random directory id. Returns: a sha1_git if any """ if not self._directories: return None return random.choice(list(self._directories)) def _directory_entry_get_by_path(self, directory, paths, prefix): if not paths: return contents = list(self.directory_ls(directory)) if not contents: return def _get_entry(entries, name): for entry in entries: if entry['name'] == name: entry = entry.copy() entry['name'] = prefix + entry['name'] return entry first_item = _get_entry(contents, paths[0]) if len(paths) == 1: return first_item if not first_item or first_item['type'] != 'dir': return return self._directory_entry_get_by_path( first_item['target'], paths[1:], prefix + paths[0] + b'/') def revision_add(self, revisions): """Add revisions to the storage Args: revisions (Iterable[dict]): iterable of dictionaries representing the individual revisions to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the revision to add - **date** (:class:`dict`): date the revision was written - **committer_date** (:class:`dict`): date the revision got added to the origin - **type** (one of 'git', 'tar'): type of the revision added - **directory** (:class:`sha1_git`): the directory the revision points at - **message** (:class:`bytes`): the message associated with the revision - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **committer** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **metadata** (:class:`jsonb`): extra information as dictionary - **synthetic** (:class:`bool`): revision's nature (tarball, directory creates synthetic revision`) - **parents** (:class:`list[sha1_git]`): the parents of this revision date dictionaries have the form defined in :mod:`swh.model`. 
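For illustration, a hedged sketch of one such revision dict (``storage`` is an instance of this class and all identifiers are placeholder values)::

    author = {'name': b'Jane Doe', 'email': b'jane@example.com',
              'fullname': b'Jane Doe <jane@example.com>'}
    date = {'timestamp': {'seconds': 1577836800, 'microseconds': 0},
            'offset': 0, 'negative_utc': False}
    revision = {
        'id': b'\x01' * 20,         # placeholder sha1_git
        'directory': b'\x02' * 20,  # placeholder root directory id
        'date': date,
        'committer_date': date,
        'author': author,
        'committer': author,
        'type': 'git',
        'message': b'Initial commit',
        'metadata': None,
        'synthetic': False,
        'parents': [],
    }
    summary = storage.revision_add([revision])  # {'revision:add': 1}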
Returns: Summary dict of keys with associated count as values revision_added: New objects actually stored in db """ revisions = list(revisions) if self.journal_writer: self.journal_writer.write_additions( 'revision', (rev for rev in revisions if rev['id'] not in self._revisions)) revisions = [Revision.from_dict(rev) for rev in revisions] count = 0 for revision in revisions: if revision.id not in self._revisions: revision = attr.evolve( revision, committer=self._person_add(revision.committer), author=self._person_add(revision.author)) self._revisions[revision.id] = revision self._objects[revision.id].append( ('revision', revision.id)) count += 1 return {'revision:add': count} def revision_missing(self, revisions): """List revisions missing from storage Args: revisions (iterable): revision ids Yields: missing revision ids """ for id in revisions: if id not in self._revisions: yield id def revision_get(self, revisions): for id in revisions: if id in self._revisions: yield self._revisions.get(id).to_dict() else: yield None def _get_parent_revs(self, rev_id, seen, limit): if limit and len(seen) >= limit: return if rev_id in seen or rev_id not in self._revisions: return seen.add(rev_id) yield self._revisions[rev_id].to_dict() for parent in self._revisions[rev_id].parents: yield from self._get_parent_revs(parent, seen, limit) def revision_log(self, revisions, limit=None): """Fetch revision entry from the given root revisions. Args: revisions: array of root revision to lookup limit: limitation on the output result. Default to None. Yields: List of revision log from such revisions root. """ seen = set() for rev_id in revisions: yield from self._get_parent_revs(rev_id, seen, limit) def revision_shortlog(self, revisions, limit=None): """Fetch the shortlog for the given revisions Args: revisions: list of root revisions to lookup limit: depth limitation for the output Yields: a list of (id, parents) tuples. """ yield from ((rev['id'], rev['parents']) for rev in self.revision_log(revisions, limit)) def revision_get_random(self): """Finds a random revision id. Returns: a sha1_git """ return random.choice(list(self._revisions)) def release_add(self, releases): """Add releases to the storage Args: releases (Iterable[dict]): iterable of dictionaries representing the individual releases to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the release to add - **revision** (:class:`sha1_git`): id of the revision the release points to - **date** (:class:`dict`): the date the release was made - **name** (:class:`bytes`): the name of the release - **comment** (:class:`bytes`): the comment associated with the release - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email the date dictionary has the form defined in :mod:`swh.model`. 
Returns: Summary dict of keys with associated count as values release:add: New objects contents actually stored in db """ releases = list(releases) if self.journal_writer: self.journal_writer.write_additions( 'release', (rel for rel in releases if rel['id'] not in self._releases)) releases = [Release.from_dict(rel) for rel in releases] count = 0 for rel in releases: if rel.id not in self._releases: if rel.author: self._person_add(rel.author) self._objects[rel.id].append( ('release', rel.id)) self._releases[rel.id] = rel count += 1 return {'release:add': count} def release_missing(self, releases): """List releases missing from storage Args: releases: an iterable of release ids Returns: a list of missing release ids """ yield from (rel for rel in releases if rel not in self._releases) def release_get(self, releases): """Given a list of sha1, return the releases's information Args: releases: list of sha1s Yields: dicts with the same keys as those given to `release_add` (or ``None`` if a release does not exist) """ for rel_id in releases: if rel_id in self._releases: yield self._releases[rel_id].to_dict() else: yield None def release_get_random(self): """Finds a random release id. Returns: a sha1_git """ return random.choice(list(self._releases)) def snapshot_add(self, snapshots): """Add a snapshot to the storage Args: snapshot ([dict]): the snapshots to add, containing the following keys: - **id** (:class:`bytes`): id of the snapshot - **branches** (:class:`dict`): branches the snapshot contains, mapping the branch name (:class:`bytes`) to the branch target, itself a :class:`dict` (or ``None`` if the branch points to an unknown object) - **target_type** (:class:`str`): one of ``content``, ``directory``, ``revision``, ``release``, ``snapshot``, ``alias`` - **target** (:class:`bytes`): identifier of the target (currently a ``sha1_git`` for all object kinds, or the name of the target branch for aliases) Raises: ValueError: if the origin's or visit's identifier does not exist. Returns: Summary dict of keys with associated count as values snapshot_added: Count of object actually stored in db """ count = 0 snapshots = (Snapshot.from_dict(d) for d in snapshots) snapshots = (snap for snap in snapshots if snap.id not in self._snapshots) for snapshot in snapshots: if self.journal_writer: self.journal_writer.write_addition('snapshot', snapshot) sorted_branch_names = sorted(snapshot.branches) self._snapshots[snapshot.id] = (snapshot, sorted_branch_names) self._objects[snapshot.id].append(('snapshot', snapshot.id)) count += 1 return {'snapshot:add': count} def snapshot_get(self, snapshot_id): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: a dict with three keys: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. 
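A hedged sketch of adding and reading back a small snapshot (``storage`` is an instance of this class; all identifiers are placeholders)::

    snapshot = {
        'id': b'\x04' * 20,                  # placeholder snapshot id
        'branches': {
            b'refs/heads/master': {
                'target': b'\x01' * 20,      # placeholder revision id
                'target_type': 'revision',
            },
            b'HEAD': {
                'target': b'refs/heads/master',
                'target_type': 'alias',
            },
        },
    }
    storage.snapshot_add([snapshot])         # {'snapshot:add': 1}

    partial = storage.snapshot_get(snapshot['id'])
    # partial['branches'] holds at most 1000 branches;
    # partial['next_branch'] is None when nothing was left out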
""" return self.snapshot_get_branches(snapshot_id) def snapshot_get_by_origin_visit(self, origin, visit): """Get the content, possibly partial, of a snapshot for the given origin visit The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: origin (int): the origin's identifier visit (int): the visit's identifier Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ origin_url = self._get_origin_url(origin) if not origin_url: return if origin_url not in self._origins or \ visit > len(self._origin_visits[origin_url]): return None snapshot_id = self._origin_visits[origin_url][visit-1].snapshot if snapshot_id: return self.snapshot_get(snapshot_id) else: return None def snapshot_get_latest(self, origin, allowed_statuses=None): """Get the content, possibly partial, of the latest snapshot for the given origin, optionally only from visits that have one of the given allowed_statuses The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the methods :meth:`origin_visit_get_latest` and :meth:`snapshot_get_branches` should be used instead. Args: origin (str): the origin's URL allowed_statuses (list of str): list of visit statuses considered to find the latest snapshot for the origin. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. Returns: dict: a dict with three keys: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ origin_url = self._get_origin_url(origin) if not origin_url: return visit = self.origin_visit_get_latest( origin_url, allowed_statuses=allowed_statuses, require_snapshot=True) if visit and visit['snapshot']: snapshot = self.snapshot_get(visit['snapshot']) if not snapshot: raise ValueError( 'last origin visit references an unknown snapshot') return snapshot def snapshot_count_branches(self, snapshot_id, db=None, cur=None): """Count the number of branches in the snapshot with the given id Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: A dict whose keys are the target types of branches and values their corresponding amount """ (snapshot, _) = self._snapshots[snapshot_id] return collections.Counter(branch.target_type.value if branch else None for branch in snapshot.branches.values()) def snapshot_get_branches(self, snapshot_id, branches_from=b'', branches_count=1000, target_types=None): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. 
Args: snapshot_id (bytes): identifier of the snapshot branches_from (bytes): optional parameter used to skip branches whose name is lesser than it before returning them branches_count (int): optional parameter used to restrain the amount of returned branches target_types (list): optional parameter used to filter the target types of branch to return (possible values that can be contained in that list are `'content', 'directory', 'revision', 'release', 'snapshot', 'alias'`) Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than `branches_count` branches after `branches_from` included. """ res = self._snapshots.get(snapshot_id) if res is None: return None (snapshot, sorted_branch_names) = res from_index = bisect.bisect_left( sorted_branch_names, branches_from) if target_types: next_branch = None branches = {} for branch_name in sorted_branch_names[from_index:]: branch = snapshot.branches[branch_name] if branch and branch.target_type.value in target_types: if len(branches) < branches_count: branches[branch_name] = branch else: next_branch = branch_name break else: # As there is no 'target_types', we can do that much faster to_index = from_index + branches_count returned_branch_names = sorted_branch_names[from_index:to_index] branches = {branch_name: snapshot.branches[branch_name] for branch_name in returned_branch_names} if to_index >= len(sorted_branch_names): next_branch = None else: next_branch = sorted_branch_names[to_index] branches = {name: branch.to_dict() if branch else None for (name, branch) in branches.items()} return { 'id': snapshot_id, 'branches': branches, 'next_branch': next_branch, } def snapshot_get_random(self): """Finds a random snapshot id. Returns: a sha1_git """ return random.choice(list(self._snapshots)) def object_find_by_sha1_git(self, ids, db=None, cur=None): """Return the objects found with the given ids. Args: ids: a generator of sha1_gits Returns: dict: a mapping from id to the list of objects found. Each object found is itself a dict with keys: - sha1_git: the input id - type: the type of object found """ ret = {} for id_ in ids: objs = self._objects.get(id_, []) ret[id_] = [{ 'sha1_git': id_, 'type': obj[0], } for obj in objs] return ret def _convert_origin(self, t): if t is None: return None return t.to_dict() def origin_get(self, origins): """Return origins, either all identified by their ids or all identified by urls. Args: origin: a list of dictionaries representing the individual origins to find. These dicts have either the key url (and optionally type): - url (bytes): the url the origin points to or the id: - id (int): the origin's identifier Returns: dict: the origin dictionary with the keys: - id: origin's id - url: origin's url Raises: ValueError: if the keys does not match (url and type) nor id. 
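A hedged sketch of registering an origin and looking it up again (``storage`` is an instance of this class; the URL is illustrative)::

    storage.origin_add([{'url': 'https://example.org/user/project'}])

    origins = storage.origin_get([{'url': 'https://example.org/user/project'}])
    # -> [{'url': 'https://example.org/user/project'}]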
""" if isinstance(origins, dict): # Old API return_single = True origins = [origins] else: return_single = False # Sanity check to be error-compatible with the pgsql backend if any('id' in origin for origin in origins) \ and not all('id' in origin for origin in origins): raise ValueError( 'Either all origins or none at all should have an "id".') if any('url' in origin for origin in origins) \ and not all('url' in origin for origin in origins): raise ValueError( 'Either all origins or none at all should have ' 'an "url" key.') results = [] for origin in origins: result = None if 'url' in origin: if origin['url'] in self._origins: result = self._origins[origin['url']] else: raise ValueError( 'Origin must have an url.') results.append(self._convert_origin(result)) if return_single: assert len(results) == 1 return results[0] else: return results def origin_get_by_sha1(self, sha1s): """Return origins, identified by the sha1 of their URLs. Args: sha1s (list[bytes]): a list of sha1s Yields: dicts containing origin information as returned by :meth:`swh.storage.in_memory.Storage.origin_get`, or None if an origin matching the sha1 is not found. """ return [ self._convert_origin(self._origins_by_sha1.get(sha1)) for sha1 in sha1s ] def origin_get_range(self, origin_from=1, origin_count=100): """Retrieve ``origin_count`` origins whose ids are greater or equal than ``origin_from``. Origins are sorted by id before retrieving them. Args: origin_from (int): the minimum id of origins to retrieve origin_count (int): the maximum number of origins to retrieve Yields: dicts containing origin information as returned by :meth:`swh.storage.in_memory.Storage.origin_get`, plus an 'id' key. """ origin_from = max(origin_from, 1) if origin_from <= len(self._origins_by_id): max_idx = origin_from + origin_count - 1 if max_idx > len(self._origins_by_id): max_idx = len(self._origins_by_id) for idx in range(origin_from-1, max_idx): origin = self._convert_origin( self._origins[self._origins_by_id[idx]]) yield {'id': idx+1, **origin} def origin_list(self, page_token: Optional[str] = None, limit: int = 100 ) -> dict: """Returns the list of origins Args: page_token: opaque token used for pagination. limit: the maximum number of results to return Returns: dict: dict with the following keys: - **next_page_token** (str, optional): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. - **origins** (List[dict]): list of origins, as returned by `origin_get`. """ origin_urls = sorted(self._origins) if page_token: from_ = bisect.bisect_left(origin_urls, page_token) else: from_ = 0 result = { 'origins': [{'url': origin_url} for origin_url in origin_urls[from_:from_+limit]] } if from_+limit < len(origin_urls): result['next_page_token'] = origin_urls[from_+limit] return result def origin_search(self, url_pattern, offset=0, limit=50, regexp=False, with_visit=False, db=None, cur=None): """Search for origins whose urls contain a provided string pattern or match a provided regular expression. The search is performed in a case insensitive way. 
Args: url_pattern (str): the string pattern to search for in origin urls offset (int): number of found origins to skip before returning results limit (int): the maximum number of found origins to return regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Returns: An iterable of dict containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`. """ origins = map(self._convert_origin, self._origins.values()) if regexp: pat = re.compile(url_pattern) origins = [orig for orig in origins if pat.search(orig['url'])] else: origins = [orig for orig in origins if url_pattern in orig['url']] if with_visit: origins = [ orig for orig in origins if len(self._origin_visits[orig['url']]) > 0 and set(ov.snapshot for ov in self._origin_visits[orig['url']] if ov.snapshot) & set(self._snapshots)] return origins[offset:offset+limit] def origin_count(self, url_pattern, regexp=False, with_visit=False, db=None, cur=None): """Count origins whose urls contain a provided string pattern or match a provided regular expression. The pattern search in origin urls is performed in a case insensitive way. Args: url_pattern (str): the string pattern to search for in origin urls regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Returns: int: The number of origins matching the search criterion. """ return len(self.origin_search(url_pattern, regexp=regexp, with_visit=with_visit, limit=len(self._origins))) def origin_add(self, origins): """Add origins to the storage Args: origins: list of dictionaries representing the individual origins, with the following keys: - url (bytes): the url the origin points to Returns: list: given origins as dict updated with their id """ origins = copy.deepcopy(list(origins)) for origin in origins: self.origin_add_one(origin) return origins def origin_add_one(self, origin): """Add origin to the storage Args: origin: dictionary representing the individual origin to add. This dict has the following keys: - url (bytes): the url the origin points to Returns: the id of the added origin, or of the identical one that already exists. """ origin = Origin.from_dict(origin) if origin.url not in self._origins: if self.journal_writer: self.journal_writer.write_addition('origin', origin) # generate an origin_id because it is needed by origin_get_range. # TODO: remove this when we remove origin_get_range origin_id = len(self._origins) + 1 self._origins_by_id.append(origin.url) assert len(self._origins_by_id) == origin_id self._origins[origin.url] = origin self._origins_by_sha1[origin_url_to_sha1(origin.url)] = origin self._origin_visits[origin.url] = [] self._objects[origin.url].append(('origin', origin.url)) return origin.url def origin_visit_add(self, origin, date, type): """Add an origin_visit for the origin at date with status 'ongoing'. Args: origin (str): visited origin's identifier or URL date (Union[str,datetime]): timestamp of such visit type (str): the type of loader used for the visit (hg, git, ...) 
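A hedged sketch of the visit lifecycle (assuming the origin was added beforehand; the snapshot identifier is a placeholder)::

    import datetime

    visit = storage.origin_visit_add(
        'https://example.org/user/project',
        date=datetime.datetime.now(tz=datetime.timezone.utc),
        type='git')

    # ... load the origin, store its objects, build a snapshot ...

    storage.origin_visit_update(
        'https://example.org/user/project', visit['visit'],
        status='full', snapshot=b'\x04' * 20)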
Returns: dict: dictionary with keys origin and visit where: - origin: origin's identifier - visit: the visit's identifier for the new visit occurrence """ origin_url = origin if origin_url is None: raise ValueError('Unknown origin.') if isinstance(date, str): # FIXME: Converge on iso8601 at some point date = dateutil.parser.parse(date) elif not isinstance(date, datetime.datetime): raise TypeError('date must be a datetime or a string.') visit_ret = None if origin_url in self._origins: origin = self._origins[origin_url] # visit ids are in the range [1, +inf[ visit_id = len(self._origin_visits[origin_url]) + 1 status = 'ongoing' visit = OriginVisit( origin=origin.url, date=date, type=type, status=status, snapshot=None, metadata=None, visit=visit_id, ) self._origin_visits[origin_url].append(visit) visit_ret = { 'origin': origin.url, 'visit': visit_id, } self._objects[(origin_url, visit_id)].append( ('origin_visit', None)) if self.journal_writer: self.journal_writer.write_addition('origin_visit', visit) return visit_ret def origin_visit_update(self, origin, visit_id, status=None, metadata=None, snapshot=None): """Update an origin_visit's status. Args: origin (str): visited origin's URL visit_id (int): visit's identifier status: visit's new status metadata: data associated to the visit snapshot (sha1_git): identifier of the snapshot to add to the visit Returns: None """ if not isinstance(origin, str): raise TypeError('origin must be a string, not %r' % (origin,)) origin_url = self._get_origin_url(origin) if origin_url is None: raise ValueError('Unknown origin.') try: visit = self._origin_visits[origin_url][visit_id-1] except IndexError: raise ValueError('Unknown visit_id for this origin') \ from None updates = {} if status: updates['status'] = status if metadata: updates['metadata'] = metadata if snapshot: updates['snapshot'] = snapshot visit = attr.evolve(visit, **updates) if self.journal_writer: self.journal_writer.write_update('origin_visit', visit) self._origin_visits[origin_url][visit_id-1] = visit def origin_visit_upsert(self, visits): """Add a origin_visits with a specific id and with all its data. If there is already an origin_visit with the same `(origin_url, visit_id)`, updates it instead of inserting a new one. 
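# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# Typical visit lifecycle: register the origin, open a visit (created with
# status 'ongoing'), then mark it 'full' once loading has finished. Uses the
# in-memory backend, assumed constructible without arguments.
import datetime
from swh.storage.in_memory import Storage

storage = Storage()
url = storage.origin_add_one({'url': 'https://example.org/repo.git'})
visit = storage.origin_visit_add(
    url, datetime.datetime.now(tz=datetime.timezone.utc), type='git')
storage.origin_visit_update(url, visit['visit'], status='full')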
Args: visits: iterable of dicts with keys: - origin: origin url - visit: origin visit id - type: type of loader used for the visit - date: timestamp of such visit - status: Visit's new status - metadata: Data associated to the visit - snapshot (sha1_git): identifier of the snapshot to add to + - **origin**: origin url + - **visit**: origin visit id + - **type**: type of loader used for the visit + - **date**: timestamp of such visit + - **status**: Visit's new status + - **metadata**: Data associated to the visit + - **snapshot**: identifier of the snapshot to add to the visit """ for visit in visits: if not isinstance(visit['origin'], str): raise TypeError("visit['origin'] must be a string, not %r" % (visit['origin'],)) visits = [OriginVisit.from_dict(d) for d in visits] if self.journal_writer: for visit in visits: self.journal_writer.write_addition('origin_visit', visit) for visit in visits: visit_id = visit.visit origin_url = visit.origin visit = attr.evolve(visit, origin=origin_url) self._objects[(origin_url, visit_id)].append( ('origin_visit', None)) while len(self._origin_visits[origin_url]) <= visit_id: self._origin_visits[origin_url].append(None) self._origin_visits[origin_url][visit_id-1] = visit def _convert_visit(self, visit): if visit is None: return visit = visit.to_dict() return visit def origin_visit_get(self, origin, last_visit=None, limit=None): """Retrieve all the origin's visit's information. Args: origin (int): the origin's identifier last_visit (int): visit's id from which listing the next ones, default to None limit (int): maximum number of results to return, default to None Yields: List of visits. """ origin_url = self._get_origin_url(origin) if origin_url in self._origin_visits: visits = self._origin_visits[origin_url] if last_visit is not None: visits = visits[last_visit:] if limit is not None: visits = visits[:limit] for visit in visits: if not visit: continue visit_id = visit.visit yield self._convert_visit( self._origin_visits[origin_url][visit_id-1]) def origin_visit_find_by_date(self, origin, visit_date): """Retrieves the origin visit whose date is closest to the provided timestamp. In case of a tie, the visit with largest id is selected. Args: origin (str): The occurrence's origin (URL). target (datetime): target timestamp Returns: A visit. """ origin_url = self._get_origin_url(origin) if origin_url in self._origin_visits: visits = self._origin_visits[origin_url] visit = min( visits, key=lambda v: (abs(v.date - visit_date), -v.visit)) return self._convert_visit(visit) def origin_visit_get_by(self, origin, visit): """Retrieve origin visit's information. Args: origin (int): the origin's identifier Returns: The information on that particular (origin, visit) or None if it does not exist """ origin_url = self._get_origin_url(origin) if origin_url in self._origin_visits and \ visit <= len(self._origin_visits[origin_url]): return self._convert_visit( self._origin_visits[origin_url][visit-1]) def origin_visit_get_latest( self, origin, allowed_statuses=None, require_snapshot=False): """Get the latest origin visit for the given origin, optionally looking only for those with one of the given allowed_statuses or for those with a known snapshot. Args: origin (str): the origin's URL allowed_statuses (list of str): list of visit statuses considered to find the latest visit. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. require_snapshot (bool): If True, only a visit with a snapshot will be returned. 
+ Returns: dict: a dict with the following keys: - origin: the URL of the origin - visit: origin visit id - type: type of loader used for the visit - date: timestamp of such visit - status: Visit's new status - metadata: Data associated to the visit - snapshot (Optional[sha1_git]): identifier of the snapshot + - **origin**: the URL of the origin + - **visit**: origin visit id + - **type**: type of loader used for the visit + - **date**: timestamp of such visit + - **status**: Visit's new status + - **metadata**: Data associated to the visit + - **snapshot** (Optional[sha1_git]): identifier of the snapshot associated to the visit """ origin = self._origins.get(origin) if not origin: return visits = self._origin_visits[origin.url] if allowed_statuses is not None: visits = [visit for visit in visits if visit.status in allowed_statuses] if require_snapshot: visits = [visit for visit in visits if visit.snapshot] visit = max( visits, key=lambda v: (v.date, v.visit), default=None) return self._convert_visit(visit) def _select_random_origin_visit_by_type(self, type: str) -> str: """Select randomly an origin visit """ while True: url = random.choice(list(self._origin_visits.keys())) random_origin_visits = self._origin_visits[url] if random_origin_visits[0].type == type: return url def origin_visit_get_random(self, type: str) -> Optional[Dict[str, Any]]: """Randomly select one successful origin visit with made in the last 3 months. Returns: dict representing an origin visit, in the same format as `origin_visit_get`. """ url = self._select_random_origin_visit_by_type(type) random_origin_visits = copy.deepcopy(self._origin_visits[url]) random_origin_visits.reverse() back_in_the_day = now() - timedelta(weeks=12) # 3 months back # This should be enough for tests for visit in random_origin_visits: if visit.date > back_in_the_day and visit.status == 'full': return visit.to_dict() else: return None def stat_counters(self): """compute statistics about the number of tuples in various tables Returns: dict: a dictionary mapping textual labels (e.g., content) to integer values (e.g., the number of tuples in table content) """ keys = ( 'content', 'directory', 'origin', 'origin_visit', 'person', 'release', 'revision', 'skipped_content', 'snapshot' ) stats = {key: 0 for key in keys} stats.update(collections.Counter( obj_type for (obj_type, obj_id) in itertools.chain(*self._objects.values()))) return stats def refresh_stat_counters(self): """Recomputes the statistics for `stat_counters`.""" pass def origin_metadata_add(self, origin_url, ts, provider, tool, metadata, db=None, cur=None): """ Add an origin_metadata for the origin at ts with provenance and metadata. 
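# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# Retrieving the most recent visit in a given status, as documented above for
# origin_visit_get_latest(). ``storage`` and ``url`` are assumed to come from
# the visit-lifecycle sketch earlier.
latest = storage.origin_visit_get_latest(url, allowed_statuses=['full'])
if latest is not None:
    print(latest['visit'], latest['date'], latest['snapshot'])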
Args: origin_url (str): the origin url for which the metadata is added ts (datetime): timestamp of the found metadata provider: id of the provider of metadata (ex:'hal') tool: id of the tool used to extract metadata metadata (jsonb): the metadata retrieved at the time and location """ if not isinstance(origin_url, str): raise TypeError('origin_id must be str, not %r' % (origin_url,)) if isinstance(ts, str): ts = dateutil.parser.parse(ts) origin_metadata = { 'origin_url': origin_url, 'discovery_date': ts, 'tool_id': tool, 'metadata': metadata, 'provider_id': provider, } self._origin_metadata[origin_url].append(origin_metadata) return None def origin_metadata_get_by(self, origin_url, provider_type=None, db=None, cur=None): """Retrieve list of all origin_metadata entries for the origin_url Args: origin_url (str): the origin's url provider_type (str): (optional) type of provider Returns: list of dicts: the origin_metadata dictionary with the keys: - origin_url (int): origin's URL - discovery_date (datetime): timestamp of discovery - tool_id (int): metadata's extracting tool - metadata (jsonb) - provider_id (int): metadata's provider - provider_name (str) - provider_type (str) - provider_url (str) """ if not isinstance(origin_url, str): raise TypeError('origin_url must be str, not %r' % (origin_url,)) metadata = [] for item in self._origin_metadata[origin_url]: item = copy.deepcopy(item) provider = self.metadata_provider_get(item['provider_id']) for attr_name in ('name', 'type', 'url'): item['provider_' + attr_name] = \ provider['provider_' + attr_name] metadata.append(item) return metadata def tool_add(self, tools): """Add new tools to the storage. Args: tools (iterable of :class:`dict`): Tool information to add to storage. Each tool is a :class:`dict` with the following keys: - name (:class:`str`): name of the tool - version (:class:`str`): version of the tool - configuration (:class:`dict`): configuration of the tool, must be json-encodable Returns: :class:`dict`: All the tools inserted in storage (including the internal ``id``). The order of the list is not guaranteed to match the order of the initial list. """ inserted = [] for tool in tools: key = self._tool_key(tool) assert 'id' not in tool record = copy.deepcopy(tool) record['id'] = key # TODO: remove this if key not in self._tools: self._tools[key] = record inserted.append(copy.deepcopy(self._tools[key])) return inserted def tool_get(self, tool): """Retrieve tool information. Args: tool (dict): Tool information we want to retrieve from storage. The dicts have the same keys as those used in :func:`tool_add`. Returns: dict: The full tool information if it exists (``id`` included), None otherwise. """ return self._tools.get(self._tool_key(tool)) def metadata_provider_add(self, provider_name, provider_type, provider_url, metadata): """Add a metadata provider. Args: provider_name (str): Its name provider_type (str): Its type provider_url (str): Its URL metadata: JSON-encodable object Returns: an identifier of the provider """ provider = { 'provider_name': provider_name, 'provider_type': provider_type, 'provider_url': provider_url, 'metadata': metadata, } key = self._metadata_provider_key(provider) provider['id'] = key self._metadata_providers[key] = provider return key def metadata_provider_get(self, provider_id, db=None, cur=None): """Get a metadata provider Args: provider_id: Its identifier, as given by `metadata_provider_add`. Returns: dict: same as `metadata_provider_add`; or None if it does not exist. 
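# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# Attaching extrinsic metadata to an origin: register a provider and a tool,
# add the metadata, then read it back. ``storage`` and ``url`` are assumed to
# come from the visit-lifecycle sketch earlier; the provider and tool values
# are examples only.
provider_id = storage.metadata_provider_add(
    provider_name='hal', provider_type='deposit-client',
    provider_url='https://hal.archives-ouvertes.fr/', metadata={})
tool = storage.tool_add([{'name': 'swh-deposit', 'version': '0.0.1',
                          'configuration': {}}])[0]
storage.origin_metadata_add(
    url, '2020-01-01T00:00:00+00:00', provider_id, tool['id'],
    {'title': 'example project'})
assert storage.origin_metadata_get_by(url)[0]['provider_name'] == 'hal'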
""" return self._metadata_providers.get(provider_id) def metadata_provider_get_by(self, provider, db=None, cur=None): """Get a metadata provider Args: provider_name: Its name provider_url: Its URL Returns: dict: same as `metadata_provider_add`; or None if it does not exist. """ key = self._metadata_provider_key(provider) return self._metadata_providers.get(key) def _get_origin_url(self, origin): if isinstance(origin, str): return origin else: raise TypeError('origin must be a string.') def _person_add(self, person): """Add a person in storage. Note: Private method, do not use outside of this class. Args: person: dictionary with keys fullname, name and email. """ key = ('person', person.fullname) if key not in self._objects: person_id = len(self._persons) + 1 self._persons.append(person) self._objects[key].append(('person', person_id)) else: person_id = self._objects[key][0][1] person = self._persons[person_id-1] return person @staticmethod def _content_key(content): """A stable key for a content""" return tuple(getattr(content, key) for key in sorted(DEFAULT_ALGORITHMS)) @staticmethod def _content_key_algorithm(content): """ A stable key and the algorithm for a content""" if isinstance(content, Content): content = content.to_dict() return tuple((content.get(key), key) for key in sorted(DEFAULT_ALGORITHMS)) @staticmethod def _tool_key(tool): return '%r %r %r' % (tool['name'], tool['version'], tuple(sorted(tool['configuration'].items()))) @staticmethod def _metadata_provider_key(provider): return '%r %r' % (provider['provider_name'], provider['provider_url']) diff --git a/swh/storage/storage.py b/swh/storage/storage.py index 7472ceb3..c94184dc 100644 --- a/swh/storage/storage.py +++ b/swh/storage/storage.py @@ -1,2139 +1,2140 @@ # Copyright (C) 2015-2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import copy import datetime import itertools import json from collections import defaultdict from concurrent.futures import ThreadPoolExecutor from contextlib import contextmanager from typing import Any, Dict, List, Optional import dateutil.parser import psycopg2 import psycopg2.pool from swh.core.api import remote_api_endpoint from swh.model.model import SHA1_SIZE from swh.model.hashutil import ALGORITHMS, hash_to_bytes, hash_to_hex from swh.objstorage import get_objstorage from swh.objstorage.exc import ObjNotFoundError try: from swh.journal.writer import get_journal_writer except ImportError: get_journal_writer = None # type: ignore # mypy limitation, see https://github.com/python/mypy/issues/1153 from . 
import converters from .common import db_transaction_generator, db_transaction from .db import Db from .exc import StorageDBError from .algos import diff from .metrics import timed, send_metric, process_metrics from .utils import get_partition_bounds_bytes # Max block size of contents to return BULK_BLOCK_CONTENT_LEN_MAX = 10000 EMPTY_SNAPSHOT_ID = hash_to_bytes('1a8893e6a86f444e8be8e7bda6cb34fb1735a00e') """Identifier for the empty snapshot""" class Storage(): """SWH storage proxy, encompassing DB and object storage """ def __init__(self, db, objstorage, min_pool_conns=1, max_pool_conns=10, journal_writer=None): """ Args: db_conn: either a libpq connection string, or a psycopg2 connection obj_root: path to the root of the object storage """ try: if isinstance(db, psycopg2.extensions.connection): self._pool = None self._db = Db(db) else: self._pool = psycopg2.pool.ThreadedConnectionPool( min_pool_conns, max_pool_conns, db ) self._db = None except psycopg2.OperationalError as e: raise StorageDBError(e) self.objstorage = get_objstorage(**objstorage) if journal_writer: if get_journal_writer is None: raise EnvironmentError( 'You need the swh.journal package to use the ' 'journal_writer feature') self.journal_writer = get_journal_writer(**journal_writer) else: self.journal_writer = None def get_db(self): if self._db: return self._db else: return Db.from_pool(self._pool) def put_db(self, db): if db is not self._db: db.put_conn() @contextmanager def db(self): db = None try: db = self.get_db() yield db finally: if db: self.put_db(db) @remote_api_endpoint('check_config') @timed @db_transaction() def check_config(self, *, check_write, db=None, cur=None): """Check that the storage is configured and ready to go.""" if not self.objstorage.check_config(check_write=check_write): return False # Check permissions on one of the tables if check_write: check = 'INSERT' else: check = 'SELECT' cur.execute( "select has_table_privilege(current_user, 'content', %s)", (check,) ) return cur.fetchone()[0] def _content_unique_key(self, hash, db): """Given a hash (tuple or dict), return a unique key from the aggregation of keys. 
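# --- Illustrative construction sketch (editor's addition, not part of the patch) ---
# Building the PostgreSQL-backed storage defined below. ``db`` may be a libpq
# connection string or an existing psycopg2 connection; ``objstorage`` is a
# configuration dict handed to swh.objstorage.get_objstorage(). The DSN and
# the 'memory' objstorage configuration shown here are assumptions for
# illustration only.
from swh.storage.storage import Storage

storage = Storage(
    db='dbname=softwareheritage-dev',
    objstorage={'cls': 'memory', 'args': {}},
)
storage.check_config(check_write=True)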
""" keys = db.content_hash_keys if isinstance(hash, tuple): return hash return tuple([hash[k] for k in keys]) @staticmethod def _normalize_content(d): d = d.copy() if 'status' not in d: d['status'] = 'visible' if 'length' not in d: d['length'] = -1 return d @staticmethod def _validate_content(d): """Sanity checks on status / reason / length, that postgresql doesn't enforce.""" if d['status'] not in ('visible', 'absent', 'hidden'): raise ValueError('Invalid content status: {}'.format(d['status'])) if d['status'] != 'absent' and d.get('reason') is not None: raise ValueError( 'Must not provide a reason if content is not absent.') if d['length'] < -1: raise ValueError('Content length must be positive or -1.') def _filter_new_content(self, content, db=None, cur=None): """Sort contents into buckets 'with data' and 'without data', and filter out those already in the database.""" content_by_status = defaultdict(list) for d in content: content_by_status[d['status']].append(d) content_with_data = content_by_status['visible'] \ + content_by_status['hidden'] content_without_data = content_by_status['absent'] missing_content = set(self.content_missing(content_with_data, db=db, cur=cur)) missing_skipped = set(self._content_unique_key(hashes, db) for hashes in self.skipped_content_missing( content_without_data, db=db, cur=cur)) content_with_data = [ cont for cont in content_with_data if cont['sha1'] in missing_content] content_without_data = [ cont for cont in content_without_data if self._content_unique_key(cont, db) in missing_skipped] summary = { 'content:add': len(missing_content), 'skipped_content:add': len(missing_skipped), } return (content_with_data, content_without_data, summary) def _content_add_metadata(self, db, cur, content_with_data, content_without_data): """Add content to the postgresql database but not the object storage. """ if content_with_data: # create temporary table for metadata injection db.mktemp('content', cur) db.copy_to(content_with_data, 'tmp_content', db.content_add_keys, cur) # move metadata in place try: db.content_add_from_temp(cur) except psycopg2.IntegrityError as e: from . import HashCollision if e.diag.sqlstate == '23505' and \ e.diag.table_name == 'content': constraint_to_hash_name = { 'content_pkey': 'sha1', 'content_sha1_git_idx': 'sha1_git', 'content_sha256_idx': 'sha256', } colliding_hash_name = constraint_to_hash_name \ .get(e.diag.constraint_name) raise HashCollision(colliding_hash_name) from None else: raise if content_without_data: content_without_data = \ [cont.copy() for cont in content_without_data] origin_ids = db.origin_id_get_by_url( [cont.get('origin') for cont in content_without_data], cur=cur) for (cont, origin_id) in zip(content_without_data, origin_ids): if 'origin' in cont: cont['origin'] = origin_id db.mktemp('skipped_content', cur) db.copy_to(content_without_data, 'tmp_skipped_content', db.skipped_content_keys, cur) # move metadata in place db.skipped_content_add_from_temp(cur) @remote_api_endpoint('content/add') @timed @process_metrics @db_transaction() def content_add(self, content, db=None, cur=None): """Add content blobs to the storage Note: in case of DB errors, objects might have already been added to the object storage and will not be removed. Since addition to the object storage is idempotent, that should not be a problem. Args: contents (iterable): iterable of dictionaries representing individual pieces of content to add. 
Each dictionary has the following keys: - data (bytes): the actual content - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent - reason (str): if status = absent, the reason why - origin (int): if status = absent, the origin we saw the content in Raises: In case of errors, nothing is stored in the db (in the objstorage, it could though). The following exceptions can occur: - HashCollision in case of collision - Any other exceptions raise by the db Returns: Summary dict with the following key and associated values: content:add: New contents added content:add:bytes: Sum of the contents' length data skipped_content:add: New skipped contents (no data) added """ content = [dict(c.items()) for c in content] # semi-shallow copy now = datetime.datetime.now(tz=datetime.timezone.utc) for item in content: item['ctime'] = now content = [self._normalize_content(c) for c in content] for c in content: self._validate_content(c) (content_with_data, content_without_data, summary) = \ self._filter_new_content(content, db, cur) if self.journal_writer: for item in content_with_data: if 'data' in item: item = item.copy() del item['data'] self.journal_writer.write_addition('content', item) for item in content_without_data: self.journal_writer.write_addition('content', item) def add_to_objstorage(): """Add to objstorage the new missing_content Returns: Sum of all the content's data length pushed to the objstorage. Content present twice is only sent once. """ content_bytes_added = 0 data = {} for cont in content_with_data: if cont['sha1'] not in data: data[cont['sha1']] = cont['data'] content_bytes_added += max(0, cont['length']) # FIXME: Since we do the filtering anyway now, we might as # well make the objstorage's add_batch call return what we # want here (real bytes added)... that'd simplify this... self.objstorage.add_batch(data) return content_bytes_added with ThreadPoolExecutor(max_workers=1) as executor: added_to_objstorage = executor.submit(add_to_objstorage) self._content_add_metadata( db, cur, content_with_data, content_without_data) # Wait for objstorage addition before returning from the # transaction, bubbling up any exception content_bytes_added = added_to_objstorage.result() summary['content:add:bytes'] = content_bytes_added return summary @remote_api_endpoint('content/update') @timed @db_transaction() def content_update(self, content, keys=[], db=None, cur=None): """Update content blobs to the storage. Does nothing for unknown contents or skipped ones. Args: content (iterable): iterable of dictionaries representing individual pieces of content to update. Each dictionary has the following keys: - data (bytes): the actual content - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent keys (list): List of keys (str) whose values needs an update, e.g., new hash column """ # TODO: Add a check on input keys. How to properly implement # this? We don't know yet the new columns. 
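# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# Adding one small content blob through content_add(). The required checksum
# keys are computed here with swh.model.hashutil.MultiHash (assumed
# available); ``storage`` is assumed to be the instance built in the
# construction sketch above.
from swh.model.hashutil import MultiHash

data = b'hello world\n'
content = {
    **MultiHash.from_data(data).digest(),  # sha1, sha1_git, sha256, blake2s256
    'data': data,
    'length': len(data),
    'status': 'visible',
}
summary = storage.content_add([content])
# summary carries counters such as 'content:add' and 'content:add:bytes'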
if self.journal_writer: raise NotImplementedError( 'content_update is not yet support with a journal_writer.') db.mktemp('content', cur) select_keys = list(set(db.content_get_metadata_keys).union(set(keys))) db.copy_to(content, 'tmp_content', select_keys, cur) db.content_update_from_temp(keys_to_update=keys, cur=cur) @remote_api_endpoint('content/add_metadata') @timed @process_metrics @db_transaction() def content_add_metadata(self, content, db=None, cur=None): """Add content metadata to the storage (like `content_add`, but without inserting to the objstorage). Args: content (iterable): iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys: - length (int): content length (default: -1) - one key for each checksum algorithm in :data:`swh.model.hashutil.ALGORITHMS`, mapped to the corresponding checksum - status (str): one of visible, hidden, absent - reason (str): if status = absent, the reason why - origin (int): if status = absent, the origin we saw the content in - ctime (datetime): time of insertion in the archive Returns: Summary dict with the following key and associated values: content:add: New contents added skipped_content:add: New skipped contents (no data) added """ content = [self._normalize_content(c) for c in content] for c in content: self._validate_content(c) (content_with_data, content_without_data, summary) = \ self._filter_new_content(content, db, cur) if self.journal_writer: for item in itertools.chain(content_with_data, content_without_data): assert 'data' not in content self.journal_writer.write_addition('content', item) self._content_add_metadata( db, cur, content_with_data, content_without_data) return summary @remote_api_endpoint('content/data') @timed def content_get(self, content): """Retrieve in bulk contents and their data. This generator yields exactly as many items than provided sha1 identifiers, but callers should not assume this will always be true. It may also yield `None` values in case an object was not found. Args: content: iterables of sha1 Yields: Dict[str, bytes]: Generates streams of contents as dict with their raw data: - sha1 (bytes): content id - data (bytes): content's raw data Raises: ValueError in case of too much contents are required. cf. BULK_BLOCK_CONTENT_LEN_MAX """ # FIXME: Make this method support slicing the `data`. if len(content) > BULK_BLOCK_CONTENT_LEN_MAX: raise ValueError( "Send at maximum %s contents." % BULK_BLOCK_CONTENT_LEN_MAX) for obj_id in content: try: data = self.objstorage.get(obj_id) except ObjNotFoundError: yield None continue yield {'sha1': obj_id, 'data': data} @remote_api_endpoint('content/range') @timed @db_transaction() def content_get_range(self, start, end, limit=1000, db=None, cur=None): """Retrieve contents within range [start, end] bound by limit. Note that this function may return more than one blob per hash. The limit is enforced with multiplicity (ie. two blobs with the same hash will count twice toward the limit). Args: **start** (bytes): Starting identifier range (expected smaller than end) **end** (bytes): Ending identifier range (expected larger than start) **limit** (int): Limit result (default to 1000) Returns: a dict with keys: - contents [dict]: iterable of contents in between the range. 
- next (bytes): There remains content in the range starting from this next sha1 """ if limit is None: raise ValueError('Development error: limit should not be None') contents = [] next_content = None for counter, content_row in enumerate( db.content_get_range(start, end, limit+1, cur)): content = dict(zip(db.content_get_metadata_keys, content_row)) if counter >= limit: # take the last commit for the next page starting from this next_content = content['sha1'] break contents.append(content) return { 'contents': contents, 'next': next_content, } @remote_api_endpoint('content/partition') @timed @db_transaction() def content_get_partition( self, partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None, db=None, cur=None): """Splits contents into nb_partitions, and returns one of these based on partition_id (which must be in [0, nb_partitions-1]) There is no guarantee on how the partitioning is done, or the result order. Args: partition_id (int): index of the partition to fetch nb_partitions (int): total number of partitions to split into limit (int): Limit result (default to 1000) page_token (Optional[str]): opaque token used for pagination. Returns: a dict with keys: - contents (List[dict]): iterable of contents in the partition. - **next_page_token** (Optional[str]): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. """ if limit is None: raise ValueError('Development error: limit should not be None') (start, end) = get_partition_bounds_bytes( partition_id, nb_partitions, SHA1_SIZE) if page_token: start = hash_to_bytes(page_token) if end is None: end = b'\xff'*SHA1_SIZE result = self.content_get_range(start, end, limit) result2 = { 'contents': result['contents'], 'next_page_token': None, } if result['next']: result2['next_page_token'] = hash_to_hex(result['next']) return result2 @remote_api_endpoint('content/metadata') @timed @db_transaction(statement_timeout=500) def content_get_metadata( self, contents: List[bytes], db=None, cur=None) -> Dict[bytes, List[Dict]]: """Retrieve content metadata in bulk Args: content: iterable of content identifiers (sha1) Returns: a dict with keys the content's sha1 and the associated value either the existing content's metadata or None if the content does not exist. """ result: Dict[bytes, List[Dict]] = {sha1: [] for sha1 in contents} for row in db.content_get_metadata_from_sha1s(contents, cur): content_meta = dict(zip(db.content_get_metadata_keys, row)) result[content_meta['sha1']].append(content_meta) return result @remote_api_endpoint('content/missing') @timed @db_transaction_generator() def content_missing(self, content, key_hash='sha1', db=None, cur=None): """List content missing from storage Args: content ([dict]): iterable of dictionaries whose keys are either 'length' or an item of :data:`swh.model.hashutil.ALGORITHMS`; mapped to the corresponding checksum (or length). key_hash (str): name of the column to use as hash id result (default: 'sha1') Returns: iterable ([bytes]): missing content ids (as per the key_hash column) Raises: TODO: an exception when we get a hash collision. 
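# --- Illustrative pagination sketch (editor's addition, not part of the patch) ---
# Exhausting one partition of the content table with the page_token contract
# documented above for content_get_partition().
def iter_content_partition(storage, partition_id, nb_partitions, limit=1000):
    """Yield every content row of the given partition (sketch)."""
    page_token = None
    while True:
        page = storage.content_get_partition(
            partition_id, nb_partitions, limit=limit, page_token=page_token)
        yield from page['contents']
        page_token = page['next_page_token']
        if page_token is None:
            break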
""" keys = db.content_hash_keys if key_hash not in keys: raise ValueError("key_hash should be one of %s" % keys) key_hash_idx = keys.index(key_hash) if not content: return for obj in db.content_missing_from_list(content, cur): yield obj[key_hash_idx] @remote_api_endpoint('content/missing/sha1') @timed @db_transaction_generator() def content_missing_per_sha1(self, contents, db=None, cur=None): """List content missing from storage based only on sha1. Args: contents: Iterable of sha1 to check for absence. Returns: iterable: missing ids Raises: TODO: an exception when we get a hash collision. """ for obj in db.content_missing_per_sha1(contents, cur): yield obj[0] @remote_api_endpoint('content/skipped/missing') @timed @db_transaction_generator() def skipped_content_missing(self, contents, db=None, cur=None): """List skipped_content missing from storage Args: content: iterable of dictionaries containing the data for each checksum algorithm. Returns: iterable: missing signatures """ for content in db.skipped_content_missing(contents, cur): yield dict(zip(db.content_hash_keys, content)) @remote_api_endpoint('content/present') @timed @db_transaction() def content_find(self, content, db=None, cur=None): """Find a content hash in db. Args: content: a dictionary representing one content hash, mapping checksum algorithm names (see swh.model.hashutil.ALGORITHMS) to checksum values Returns: a triplet (sha1, sha1_git, sha256) if the content exist or None otherwise. Raises: ValueError: in case the key of the dictionary is not sha1, sha1_git nor sha256. """ if not set(content).intersection(ALGORITHMS): raise ValueError('content keys must contain at least one of: ' 'sha1, sha1_git, sha256, blake2s256') contents = db.content_find(sha1=content.get('sha1'), sha1_git=content.get('sha1_git'), sha256=content.get('sha256'), blake2s256=content.get('blake2s256'), cur=cur) return [dict(zip(db.content_find_cols, content)) for content in contents] @remote_api_endpoint('content/get_random') @timed @db_transaction() def content_get_random(self, db=None, cur=None): """Finds a random content id. Returns: a sha1_git """ return db.content_get_random(cur) @remote_api_endpoint('directory/add') @timed @process_metrics @db_transaction() def directory_add(self, directories, db=None, cur=None): """Add directories to the storage Args: directories (iterable): iterable of dictionaries representing the individual directories to add. Each dict has the following keys: - id (sha1_git): the id of the directory to add - entries (list): list of dicts for each entry in the directory. 
Each dict has the following keys: - name (bytes) - type (one of 'file', 'dir', 'rev'): type of the directory entry (file, directory, revision) - target (sha1_git): id of the object pointed at by the directory entry - perms (int): entry permissions Returns: Summary dict of keys with associated count as values: directory:add: Number of directories actually added """ directories = list(directories) summary = {'directory:add': 0} dirs = set() dir_entries = { 'file': defaultdict(list), 'dir': defaultdict(list), 'rev': defaultdict(list), } for cur_dir in directories: dir_id = cur_dir['id'] dirs.add(dir_id) for src_entry in cur_dir['entries']: entry = src_entry.copy() entry['dir_id'] = dir_id if entry['type'] not in ('file', 'dir', 'rev'): raise ValueError( 'Entry type must be file, dir, or rev; not %s' % entry['type']) dir_entries[entry['type']][dir_id].append(entry) dirs_missing = set(self.directory_missing(dirs, db=db, cur=cur)) if not dirs_missing: return summary if self.journal_writer: self.journal_writer.write_additions( 'directory', (dir_ for dir_ in directories if dir_['id'] in dirs_missing)) # Copy directory ids dirs_missing_dict = ({'id': dir} for dir in dirs_missing) db.mktemp('directory', cur) db.copy_to(dirs_missing_dict, 'tmp_directory', ['id'], cur) # Copy entries for entry_type, entry_list in dir_entries.items(): entries = itertools.chain.from_iterable( entries_for_dir for dir_id, entries_for_dir in entry_list.items() if dir_id in dirs_missing) db.mktemp_dir_entry(entry_type) db.copy_to( entries, 'tmp_directory_entry_%s' % entry_type, ['target', 'name', 'perms', 'dir_id'], cur, ) # Do the final copy db.directory_add_from_temp(cur) summary['directory:add'] = len(dirs_missing) return summary @remote_api_endpoint('directory/missing') @timed @db_transaction_generator() def directory_missing(self, directories, db=None, cur=None): """List directories missing from storage Args: directories (iterable): an iterable of directory ids Yields: missing directory ids """ for obj in db.directory_missing_from_list(directories, cur): yield obj[0] @remote_api_endpoint('directory/ls') @timed @db_transaction_generator(statement_timeout=20000) def directory_ls(self, directory, recursive=False, db=None, cur=None): """Get entries for one directory. Args: - directory: the directory to list entries from. - recursive: if flag on, this list recursively from this directory. Returns: List of entries for such directory. If `recursive=True`, names in the path of a dir/file not at the root are concatenated with a slash (`/`). """ if recursive: res_gen = db.directory_walk(directory, cur=cur) else: res_gen = db.directory_walk_one(directory, cur=cur) for line in res_gen: yield dict(zip(db.directory_ls_cols, line)) @remote_api_endpoint('directory/path') @timed @db_transaction(statement_timeout=2000) def directory_entry_get_by_path(self, directory, paths, db=None, cur=None): """Get the directory entry (either file or dir) from directory with path. Args: - directory: sha1 of the top level directory - paths: path to lookup from the top level directory. From left (top) to right (bottom). Returns: The corresponding directory entry if found, None otherwise. """ res = db.directory_entry_get_by_path(directory, paths, cur) if res: return dict(zip(db.directory_ls_cols, res)) @remote_api_endpoint('directory/get_random') @timed @db_transaction() def directory_get_random(self, db=None, cur=None): """Finds a random directory id. 
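# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# Adding a one-entry directory via directory_add() and listing it back with
# directory_ls(). The `id` and `target` values are placeholder sha1_git
# identifiers; real ones are intrinsic hashes computed with swh.model.
directory = {
    'id': bytes.fromhex('aa' * 20),          # placeholder directory id
    'entries': [{
        'name': b'hello.txt',
        'type': 'file',
        'target': bytes.fromhex('bb' * 20),  # placeholder content sha1_git
        'perms': 0o100644,
    }],
}
storage.directory_add([directory])
entries = list(storage.directory_ls(directory['id']))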
Returns: a sha1_git """ return db.directory_get_random(cur) @remote_api_endpoint('revision/add') @timed @process_metrics @db_transaction() def revision_add(self, revisions, db=None, cur=None): """Add revisions to the storage Args: revisions (Iterable[dict]): iterable of dictionaries representing the individual revisions to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the revision to add - **date** (:class:`dict`): date the revision was written - **committer_date** (:class:`dict`): date the revision got added to the origin - **type** (one of 'git', 'tar'): type of the revision added - **directory** (:class:`sha1_git`): the directory the revision points at - **message** (:class:`bytes`): the message associated with the revision - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **committer** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email - **metadata** (:class:`jsonb`): extra information as dictionary - **synthetic** (:class:`bool`): revision's nature (tarball, directory creates synthetic revision`) - **parents** (:class:`list[sha1_git]`): the parents of this revision date dictionaries have the form defined in :mod:`swh.model`. Returns: Summary dict of keys with associated count as values revision:add: New objects actually stored in db """ revisions = list(revisions) summary = {'revision:add': 0} revisions_missing = set(self.revision_missing( set(revision['id'] for revision in revisions), db=db, cur=cur)) if not revisions_missing: return summary db.mktemp_revision(cur) revisions_filtered = [ revision for revision in revisions if revision['id'] in revisions_missing] if self.journal_writer: self.journal_writer.write_additions('revision', revisions_filtered) revisions_filtered = map(converters.revision_to_db, revisions_filtered) parents_filtered = [] db.copy_to( revisions_filtered, 'tmp_revision', db.revision_add_cols, cur, lambda rev: parents_filtered.extend(rev['parents'])) db.revision_add_from_temp(cur) db.copy_to(parents_filtered, 'revision_history', ['id', 'parent_id', 'parent_rank'], cur) return {'revision:add': len(revisions_missing)} @remote_api_endpoint('revision/missing') @timed @db_transaction_generator() def revision_missing(self, revisions, db=None, cur=None): """List revisions missing from storage Args: revisions (iterable): revision ids Yields: missing revision ids """ if not revisions: return for obj in db.revision_missing_from_list(revisions, cur): yield obj[0] @remote_api_endpoint('revision') @timed @db_transaction_generator(statement_timeout=1000) def revision_get(self, revisions, db=None, cur=None): """Get all revisions from storage Args: revisions: an iterable of revision ids Returns: iterable: an iterable of revisions as dictionaries (or None if the revision doesn't exist) """ for line in db.revision_get_from_list(revisions, cur): data = converters.db_to_revision( dict(zip(db.revision_get_cols, line)) ) if not data['type']: yield None continue yield data @remote_api_endpoint('revision/log') @timed @db_transaction_generator(statement_timeout=2000) def revision_log(self, revisions, limit=None, db=None, cur=None): """Fetch revision entry from the given root revisions. Args: revisions: array of root revision to lookup limit: limitation on the output result. Default to None. Yields: List of revision log from such revisions root. 
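# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# A minimal revision pointing at a directory, for revision_add() as documented
# above. The `id` is a placeholder sha1_git, and the date dicts are written in
# the normalized form used by swh.model (an assumption of this sketch).
author = {'name': b'Jane Doe', 'email': b'jane@example.org',
          'fullname': b'Jane Doe <jane@example.org>'}
date = {'timestamp': {'seconds': 1577836800, 'microseconds': 0},
        'offset': 0, 'negative_utc': False}
revision = {
    'id': bytes.fromhex('cc' * 20),          # placeholder revision id
    'date': date, 'committer_date': date,
    'author': author, 'committer': author,
    'type': 'git', 'directory': bytes.fromhex('aa' * 20),
    'message': b'Initial commit', 'metadata': None,
    'synthetic': False, 'parents': [],
}
storage.revision_add([revision])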
""" for line in db.revision_log(revisions, limit, cur): data = converters.db_to_revision( dict(zip(db.revision_get_cols, line)) ) if not data['type']: yield None continue yield data @remote_api_endpoint('revision/shortlog') @timed @db_transaction_generator(statement_timeout=2000) def revision_shortlog(self, revisions, limit=None, db=None, cur=None): """Fetch the shortlog for the given revisions Args: revisions: list of root revisions to lookup limit: depth limitation for the output Yields: a list of (id, parents) tuples. """ yield from db.revision_shortlog(revisions, limit, cur) @remote_api_endpoint('revision/get_random') @timed @db_transaction() def revision_get_random(self, db=None, cur=None): """Finds a random revision id. Returns: a sha1_git """ return db.revision_get_random(cur) @remote_api_endpoint('release/add') @timed @process_metrics @db_transaction() def release_add(self, releases, db=None, cur=None): """Add releases to the storage Args: releases (Iterable[dict]): iterable of dictionaries representing the individual releases to add. Each dict has the following keys: - **id** (:class:`sha1_git`): id of the release to add - **revision** (:class:`sha1_git`): id of the revision the release points to - **date** (:class:`dict`): the date the release was made - **name** (:class:`bytes`): the name of the release - **comment** (:class:`bytes`): the comment associated with the release - **author** (:class:`Dict[str, bytes]`): dictionary with keys: name, fullname, email the date dictionary has the form defined in :mod:`swh.model`. Returns: Summary dict of keys with associated count as values release:add: New objects contents actually stored in db """ releases = list(releases) summary = {'release:add': 0} release_ids = set(release['id'] for release in releases) releases_missing = set(self.release_missing(release_ids, db=db, cur=cur)) if not releases_missing: return summary db.mktemp_release(cur) releases_missing = list(releases_missing) releases_filtered = [ release for release in releases if release['id'] in releases_missing ] if self.journal_writer: self.journal_writer.write_additions('release', releases_filtered) releases_filtered = map(converters.release_to_db, releases_filtered) db.copy_to(releases_filtered, 'tmp_release', db.release_add_cols, cur) db.release_add_from_temp(cur) return {'release:add': len(releases_missing)} @remote_api_endpoint('release/missing') @timed @db_transaction_generator() def release_missing(self, releases, db=None, cur=None): """List releases missing from storage Args: releases: an iterable of release ids Returns: a list of missing release ids """ if not releases: return for obj in db.release_missing_from_list(releases, cur): yield obj[0] @remote_api_endpoint('release') @timed @db_transaction_generator(statement_timeout=500) def release_get(self, releases, db=None, cur=None): """Given a list of sha1, return the releases's information Args: releases: list of sha1s Yields: dicts with the same keys as those given to `release_add` (or ``None`` if a release does not exist) """ for release in db.release_get_from_list(releases, cur): data = converters.db_to_release( dict(zip(db.release_get_cols, release)) ) yield data if data['target_type'] else None @remote_api_endpoint('release/get_random') @timed @db_transaction() def release_get_random(self, db=None, cur=None): """Finds a random release id. 
Returns: a sha1_git """ return db.release_get_random(cur) @remote_api_endpoint('snapshot/add') @timed @process_metrics @db_transaction() def snapshot_add(self, snapshots, db=None, cur=None): """Add snapshots to the storage. Args: snapshot ([dict]): the snapshots to add, containing the following keys: - **id** (:class:`bytes`): id of the snapshot - **branches** (:class:`dict`): branches the snapshot contains, mapping the branch name (:class:`bytes`) to the branch target, itself a :class:`dict` (or ``None`` if the branch points to an unknown object) - **target_type** (:class:`str`): one of ``content``, ``directory``, ``revision``, ``release``, ``snapshot``, ``alias`` - **target** (:class:`bytes`): identifier of the target (currently a ``sha1_git`` for all object kinds, or the name of the target branch for aliases) Raises: ValueError: if the origin or visit id does not exist. Returns: Summary dict of keys with associated count as values snapshot:add: Count of object actually stored in db """ created_temp_table = False count = 0 for snapshot in snapshots: if not db.snapshot_exists(snapshot['id'], cur): if not created_temp_table: db.mktemp_snapshot_branch(cur) created_temp_table = True db.copy_to( ( { 'name': name, 'target': info['target'] if info else None, 'target_type': (info['target_type'] if info else None), } for name, info in snapshot['branches'].items() ), 'tmp_snapshot_branch', ['name', 'target', 'target_type'], cur, ) if self.journal_writer: self.journal_writer.write_addition('snapshot', snapshot) db.snapshot_add(snapshot['id'], cur) count += 1 return {'snapshot:add': count} @remote_api_endpoint('snapshot') @timed @db_transaction(statement_timeout=2000) def snapshot_get(self, snapshot_id, db=None, cur=None): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: a dict with three keys: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ return self.snapshot_get_branches(snapshot_id, db=db, cur=cur) @remote_api_endpoint('snapshot/by_origin_visit') @timed @db_transaction(statement_timeout=2000) def snapshot_get_by_origin_visit(self, origin, visit, db=None, cur=None): """Get the content, possibly partial, of a snapshot for the given origin visit The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: origin (int): the origin identifier visit (int): the visit identifier Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. 
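# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# A one-branch snapshot whose HEAD aliases the master branch, matching the
# structure documented above for snapshot_add(). Identifiers are placeholders.
snapshot = {
    'id': bytes.fromhex('dd' * 20),          # placeholder snapshot id
    'branches': {
        b'refs/heads/master': {'target': bytes.fromhex('cc' * 20),
                               'target_type': 'revision'},
        b'HEAD': {'target': b'refs/heads/master', 'target_type': 'alias'},
    },
}
storage.snapshot_add([snapshot])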
""" snapshot_id = db.snapshot_get_by_origin_visit(origin, visit, cur) if snapshot_id: return self.snapshot_get(snapshot_id, db=db, cur=cur) return None @remote_api_endpoint('snapshot/latest') @timed @db_transaction(statement_timeout=4000) def snapshot_get_latest(self, origin, allowed_statuses=None, db=None, cur=None): """Get the content, possibly partial, of the latest snapshot for the given origin, optionally only from visits that have one of the given allowed_statuses The branches of the snapshot are iterated in the lexicographical order of their names. .. warning:: At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method :meth:`snapshot_get_branches` should be used instead. Args: origin (str): the origin's URL allowed_statuses (list of str): list of visit statuses considered to find the latest snapshot for the visit. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. Returns: dict: a dict with three keys: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than 1000 branches. """ if isinstance(origin, int): origin = self.origin_get({'id': origin}, db=db, cur=cur) if not origin: return origin = origin['url'] origin_visit = self.origin_visit_get_latest( origin, allowed_statuses=allowed_statuses, require_snapshot=True, db=db, cur=cur) if origin_visit and origin_visit['snapshot']: snapshot = self.snapshot_get( origin_visit['snapshot'], db=db, cur=cur) if not snapshot: raise ValueError( 'last origin visit references an unknown snapshot') return snapshot @remote_api_endpoint('snapshot/count_branches') @timed @db_transaction(statement_timeout=2000) def snapshot_count_branches(self, snapshot_id, db=None, cur=None): """Count the number of branches in the snapshot with the given id Args: snapshot_id (bytes): identifier of the snapshot Returns: dict: A dict whose keys are the target types of branches and values their corresponding amount """ return dict([bc for bc in db.snapshot_count_branches(snapshot_id, cur)]) @remote_api_endpoint('snapshot/get_branches') @timed @db_transaction(statement_timeout=2000) def snapshot_get_branches(self, snapshot_id, branches_from=b'', branches_count=1000, target_types=None, db=None, cur=None): """Get the content, possibly partial, of a snapshot with the given id The branches of the snapshot are iterated in the lexicographical order of their names. Args: snapshot_id (bytes): identifier of the snapshot branches_from (bytes): optional parameter used to skip branches whose name is lesser than it before returning them branches_count (int): optional parameter used to restrain the amount of returned branches target_types (list): optional parameter used to filter the target types of branch to return (possible values that can be contained in that list are `'content', 'directory', 'revision', 'release', 'snapshot', 'alias'`) Returns: dict: None if the snapshot does not exist; a dict with three keys otherwise: * **id**: identifier of the snapshot * **branches**: a dict of branches contained in the snapshot whose keys are the branches' names. * **next_branch**: the name of the first branch not returned or :const:`None` if the snapshot has less than `branches_count` branches after `branches_from` included. 
""" if snapshot_id == EMPTY_SNAPSHOT_ID: return { 'id': snapshot_id, 'branches': {}, 'next_branch': None, } branches = {} next_branch = None fetched_branches = list(db.snapshot_get_by_id( snapshot_id, branches_from=branches_from, branches_count=branches_count+1, target_types=target_types, cur=cur, )) for branch in fetched_branches[:branches_count]: branch = dict(zip(db.snapshot_get_cols, branch)) del branch['snapshot_id'] name = branch.pop('name') if branch == {'target': None, 'target_type': None}: branch = None branches[name] = branch if len(fetched_branches) > branches_count: branch = dict(zip(db.snapshot_get_cols, fetched_branches[-1])) next_branch = branch['name'] if branches: return { 'id': snapshot_id, 'branches': branches, 'next_branch': next_branch, } return None @remote_api_endpoint('snapshot/get_random') @timed @db_transaction() def snapshot_get_random(self, db=None, cur=None): """Finds a random snapshot id. Returns: a sha1_git """ return db.snapshot_get_random(cur) @remote_api_endpoint('origin/visit/add') @timed @db_transaction() def origin_visit_add(self, origin, date, type, db=None, cur=None): """Add an origin_visit for the origin at ts with status 'ongoing'. Args: origin (str): visited origin's identifier or URL date (Union[str,datetime]): timestamp of such visit type (str): the type of loader used for the visit (hg, git, ...) Returns: dict: dictionary with keys origin and visit where: - origin: origin identifier - visit: the visit identifier for the new visit occurrence """ origin_url = origin if isinstance(date, str): # FIXME: Converge on iso8601 at some point date = dateutil.parser.parse(date) visit_id = db.origin_visit_add(origin_url, date, type, cur) if self.journal_writer: # We can write to the journal only after inserting to the # DB, because we want the id of the visit self.journal_writer.write_addition('origin_visit', { 'origin': origin_url, 'date': date, 'type': type, 'visit': visit_id, 'status': 'ongoing', 'metadata': None, 'snapshot': None}) send_metric('origin_visit:add', count=1, method_name='origin_visit') return { 'origin': origin_url, 'visit': visit_id, } @remote_api_endpoint('origin/visit/update') @timed @db_transaction() def origin_visit_update(self, origin, visit_id, status=None, metadata=None, snapshot=None, db=None, cur=None): """Update an origin_visit's status. Args: origin (str): visited origin's URL visit_id: Visit's id status: Visit's new status metadata: Data associated to the visit snapshot (sha1_git): identifier of the snapshot to add to the visit Returns: None """ if not isinstance(origin, str): raise TypeError('origin must be a string, not %r' % (origin,)) origin_url = origin visit = db.origin_visit_get(origin_url, visit_id, cur=cur) if not visit: raise ValueError('Invalid visit_id for this origin.') visit = dict(zip(db.origin_visit_get_cols, visit)) updates = {} if status and status != visit['status']: updates['status'] = status if metadata and metadata != visit['metadata']: updates['metadata'] = metadata if snapshot and snapshot != visit['snapshot']: updates['snapshot'] = snapshot if updates: if self.journal_writer: self.journal_writer.write_update('origin_visit', { **visit, **updates}) db.origin_visit_update(origin_url, visit_id, updates, cur) @remote_api_endpoint('origin/visit/upsert') @timed @db_transaction() def origin_visit_upsert(self, visits, db=None, cur=None): """Add a origin_visits with a specific id and with all its data. If there is already an origin_visit with the same `(origin_id, visit_id)`, overwrites it. 
Args: visits: iterable of dicts with keys: - origin: dict with keys either `id` or `url` - visit: origin visit id - date: timestamp of such visit - status: Visit's new status - metadata: Data associated to the visit - snapshot (sha1_git): identifier of the snapshot to add to + - **origin**: dict with keys either `id` or `url` + - **visit**: origin visit id + - **date**: timestamp of such visit + - **status**: Visit's new status + - **metadata**: Data associated to the visit + - **snapshot**: identifier of the snapshot to add to the visit """ visits = copy.deepcopy(visits) for visit in visits: if isinstance(visit['date'], str): visit['date'] = dateutil.parser.parse(visit['date']) if not isinstance(visit['origin'], str): raise TypeError("visit['origin'] must be a string, not %r" % (visit['origin'],)) if self.journal_writer: for visit in visits: self.journal_writer.write_addition('origin_visit', visit) for visit in visits: # TODO: upsert them all in a single query db.origin_visit_upsert(**visit, cur=cur) @remote_api_endpoint('origin/visit/get') @timed @db_transaction_generator(statement_timeout=500) def origin_visit_get(self, origin, last_visit=None, limit=None, db=None, cur=None): """Retrieve all the origin's visit's information. Args: origin (str): The visited origin last_visit: Starting point from which listing the next visits Default to None limit (int): Number of results to return from the last visit. Default to None Yields: List of visits. """ for line in db.origin_visit_get_all( origin, last_visit=last_visit, limit=limit, cur=cur): data = dict(zip(db.origin_visit_get_cols, line)) yield data @remote_api_endpoint('origin/visit/find_by_date') @timed @db_transaction(statement_timeout=500) def origin_visit_find_by_date(self, origin, visit_date, db=None, cur=None): """Retrieves the origin visit whose date is closest to the provided timestamp. In case of a tie, the visit with largest id is selected. Args: origin (str): The occurrence's origin (URL). target (datetime): target timestamp Returns: A visit. """ line = db.origin_visit_find_by_date(origin, visit_date, cur=cur) if line: return dict(zip(db.origin_visit_get_cols, line)) @remote_api_endpoint('origin/visit/getby') @timed @db_transaction(statement_timeout=500) def origin_visit_get_by(self, origin, visit, db=None, cur=None): """Retrieve origin visit's information. Args: origin: The occurrence's origin (identifier). Returns: The information on that particular (origin, visit) or None if it does not exist """ ori_visit = db.origin_visit_get(origin, visit, cur) if not ori_visit: return None return dict(zip(db.origin_visit_get_cols, ori_visit)) @remote_api_endpoint('origin/visit/get_latest') @timed @db_transaction(statement_timeout=4000) def origin_visit_get_latest( self, origin, allowed_statuses=None, require_snapshot=False, db=None, cur=None): """Get the latest origin visit for the given origin, optionally looking only for those with one of the given allowed_statuses or for those with a known snapshot. Args: origin (str): the origin's URL allowed_statuses (list of str): list of visit statuses considered to find the latest visit. For instance, ``allowed_statuses=['full']`` will only consider visits that have successfully run to completion. require_snapshot (bool): If True, only a visit with a snapshot will be returned. 
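# --- Illustrative usage sketch (editor's addition, not part of the patch) ---
# Looking up the visit whose date is closest to a target timestamp, with
# origin_visit_find_by_date() as documented above. The URL is an example and
# ``storage`` is assumed to be a Storage instance as built earlier.
import datetime

target = datetime.datetime(2020, 1, 1, tzinfo=datetime.timezone.utc)
visit = storage.origin_visit_find_by_date('https://example.org/repo.git', target)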
+ Returns: dict: a dict with the following keys: - origin: the URL of the origin - visit: origin visit id - type: type of loader used for the visit - date: timestamp of such visit - status: Visit's new status - metadata: Data associated to the visit - snapshot (Optional[sha1_git]): identifier of the snapshot + - **origin**: the URL of the origin + - **visit**: origin visit id + - **type**: type of loader used for the visit + - **date**: timestamp of such visit + - **status**: Visit's new status + - **metadata**: Data associated to the visit + - **snapshot** (Optional[sha1_git]): identifier of the snapshot associated to the visit """ origin_visit = db.origin_visit_get_latest( origin, allowed_statuses=allowed_statuses, require_snapshot=require_snapshot, cur=cur) if origin_visit: return dict(zip(db.origin_visit_get_cols, origin_visit)) @remote_api_endpoint('origin/visit/get_random') @timed @db_transaction() def origin_visit_get_random( self, type: str, db=None, cur=None) -> Optional[Dict[str, Any]]: """Randomly select one successful origin visit with made in the last 3 months. Returns: dict representing an origin visit, in the same format as :py:meth:`origin_visit_get`. """ result = db.origin_visit_get_random(type, cur) if result: return dict(zip(db.origin_visit_get_cols, result)) else: return None @remote_api_endpoint('object/find_by_sha1_git') @timed @db_transaction(statement_timeout=2000) def object_find_by_sha1_git(self, ids, db=None, cur=None): """Return the objects found with the given ids. Args: ids: a generator of sha1_gits Returns: dict: a mapping from id to the list of objects found. Each object found is itself a dict with keys: - sha1_git: the input id - type: the type of object found """ ret = {id: [] for id in ids} for retval in db.object_find_by_sha1_git(ids, cur=cur): if retval[1]: ret[retval[0]].append(dict(zip(db.object_find_by_sha1_git_cols, retval))) return ret @remote_api_endpoint('origin/get') @timed @db_transaction(statement_timeout=500) def origin_get(self, origins, db=None, cur=None): """Return origins, either all identified by their ids or all identified by tuples (type, url). If the url is given and the type is omitted, one of the origins with that url is returned. Args: origin: a list of dictionaries representing the individual origins to find. These dicts have the key url: - url (bytes): the url the origin points to Returns: dict: the origin dictionary with the keys: - id: origin's id - url: origin's url Raises: ValueError: if the url or the id don't exist. """ if isinstance(origins, dict): # Old API return_single = True origins = [origins] elif len(origins) == 0: return [] else: return_single = False origin_urls = [origin['url'] for origin in origins] results = db.origin_get_by_url(origin_urls, cur) results = [dict(zip(db.origin_cols, result)) for result in results] if return_single: assert len(results) == 1 if results[0]['url'] is not None: return results[0] else: return None else: return [None if res['url'] is None else res for res in results] @remote_api_endpoint('origin/get_sha1') @timed @db_transaction_generator(statement_timeout=500) def origin_get_by_sha1(self, sha1s, db=None, cur=None): """Return origins, identified by the sha1 of their URLs. Args: sha1s (list[bytes]): a list of sha1s Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`, or None if an origin matching the sha1 is not found. 
""" for line in db.origin_get_by_sha1(sha1s, cur): if line[0] is not None: yield dict(zip(db.origin_cols, line)) else: yield None @remote_api_endpoint('origin/get_range') @timed @db_transaction_generator() def origin_get_range(self, origin_from=1, origin_count=100, db=None, cur=None): """Retrieve ``origin_count`` origins whose ids are greater or equal than ``origin_from``. Origins are sorted by id before retrieving them. Args: origin_from (int): the minimum id of origins to retrieve origin_count (int): the maximum number of origins to retrieve Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`. """ for origin in db.origin_get_range(origin_from, origin_count, cur): yield dict(zip(db.origin_get_range_cols, origin)) @remote_api_endpoint('origin/list') @timed @db_transaction() def origin_list(self, page_token: Optional[str] = None, limit: int = 100, *, db=None, cur=None) -> dict: """Returns the list of origins Args: page_token: opaque token used for pagination. limit: the maximum number of results to return Returns: dict: dict with the following keys: - **next_page_token** (str, optional): opaque token to be used as `page_token` for retrieving the next page. if absent, there is no more pages to gather. - **origins** (List[dict]): list of origins, as returned by `origin_get`. """ page_token = page_token or '0' if not isinstance(page_token, str): raise TypeError('page_token must be a string.') origin_from = int(page_token) result: Dict[str, Any] = { 'origins': [ dict(zip(db.origin_get_range_cols, origin)) for origin in db.origin_get_range(origin_from, limit, cur) ], } assert len(result['origins']) <= limit if len(result['origins']) == limit: result['next_page_token'] = str(result['origins'][limit-1]['id']+1) for origin in result['origins']: del origin['id'] return result @remote_api_endpoint('origin/search') @timed @db_transaction_generator() def origin_search(self, url_pattern, offset=0, limit=50, regexp=False, with_visit=False, db=None, cur=None): """Search for origins whose urls contain a provided string pattern or match a provided regular expression. The search is performed in a case insensitive way. Args: url_pattern (str): the string pattern to search for in origin urls offset (int): number of found origins to skip before returning results limit (int): the maximum number of found origins to return regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Yields: dicts containing origin information as returned by :meth:`swh.storage.storage.Storage.origin_get`. """ for origin in db.origin_search(url_pattern, offset, limit, regexp, with_visit, cur): yield dict(zip(db.origin_cols, origin)) @remote_api_endpoint('origin/count') @timed @db_transaction() def origin_count(self, url_pattern, regexp=False, with_visit=False, db=None, cur=None): """Count origins whose urls contain a provided string pattern or match a provided regular expression. The pattern search in origin urls is performed in a case insensitive way. Args: url_pattern (str): the string pattern to search for in origin urls regexp (bool): if True, consider the provided pattern as a regular expression and return origins whose urls match it with_visit (bool): if True, filter out origins with no visit Returns: int: The number of origins matching the search criterion. 
""" return db.origin_count(url_pattern, regexp, with_visit, cur) @remote_api_endpoint('origin/add_multi') @timed @db_transaction() def origin_add(self, origins, db=None, cur=None): """Add origins to the storage Args: origins: list of dictionaries representing the individual origins, with the following keys: - type: the origin type ('git', 'svn', 'deb', ...) - url (bytes): the url the origin points to Returns: list: given origins as dict updated with their id """ origins = copy.deepcopy(list(origins)) for origin in origins: self.origin_add_one(origin, db=db, cur=cur) send_metric('origin:add', count=len(origins), method_name='origin_add') return origins @remote_api_endpoint('origin/add') @timed @db_transaction() def origin_add_one(self, origin, db=None, cur=None): """Add origin to the storage Args: origin: dictionary representing the individual origin to add. This dict has the following keys: - type (FIXME: enum TBD): the origin type ('git', 'wget', ...) - url (bytes): the url the origin points to Returns: the id of the added origin, or of the identical one that already exists. """ origin_row = list(db.origin_get_by_url([origin['url']], cur))[0] origin_url = dict(zip(db.origin_cols, origin_row))['url'] if origin_url: return origin_url if self.journal_writer: self.journal_writer.write_addition('origin', origin) origins = db.origin_add(origin['url'], cur) send_metric('origin:add', count=len(origins), method_name='origin_add') return origins @db_transaction(statement_timeout=500) def stat_counters(self, db=None, cur=None): """compute statistics about the number of tuples in various tables Returns: dict: a dictionary mapping textual labels (e.g., content) to integer values (e.g., the number of tuples in table content) """ return {k: v for (k, v) in db.stat_counters()} @db_transaction() def refresh_stat_counters(self, db=None, cur=None): """Recomputes the statistics for `stat_counters`.""" keys = [ 'content', 'directory', 'directory_entry_dir', 'directory_entry_file', 'directory_entry_rev', 'origin', 'origin_visit', 'person', 'release', 'revision', 'revision_history', 'skipped_content', 'snapshot'] for key in keys: cur.execute('select * from swh_update_counter(%s)', (key,)) @remote_api_endpoint('origin/metadata/add') @timed @db_transaction() def origin_metadata_add(self, origin_url, ts, provider, tool, metadata, db=None, cur=None): """ Add an origin_metadata for the origin at ts with provenance and metadata. 
Args: origin_url (str): the origin url for which the metadata is added ts (datetime): timestamp of the found metadata provider (int): the provider of metadata (ex:'hal') tool (int): tool used to extract metadata metadata (jsonb): the metadata retrieved at the time and location """ if isinstance(ts, str): ts = dateutil.parser.parse(ts) db.origin_metadata_add(origin_url, ts, provider, tool, metadata, cur) send_metric( 'origin_metadata:add', count=1, method_name='origin_metadata_add') @remote_api_endpoint('origin/metadata/get') @timed @db_transaction_generator(statement_timeout=500) def origin_metadata_get_by(self, origin_url, provider_type=None, db=None, cur=None): """Retrieve list of all origin_metadata entries for the origin_id Args: origin_url (str): the origin's URL provider_type (str): (optional) type of provider Returns: list of dicts: the origin_metadata dictionary with the keys: - origin_id (int): origin's id - discovery_date (datetime): timestamp of discovery - tool_id (int): metadata's extracting tool - metadata (jsonb) - provider_id (int): metadata's provider - provider_name (str) - provider_type (str) - provider_url (str) """ for line in db.origin_metadata_get_by(origin_url, provider_type, cur): yield dict(zip(db.origin_metadata_get_cols, line)) @remote_api_endpoint('tool/add') @timed @db_transaction() def tool_add(self, tools, db=None, cur=None): """Add new tools to the storage. Args: tools (iterable of :class:`dict`): Tool information to add to storage. Each tool is a :class:`dict` with the following keys: - name (:class:`str`): name of the tool - version (:class:`str`): version of the tool - configuration (:class:`dict`): configuration of the tool, must be json-encodable Returns: :class:`dict`: All the tools inserted in storage (including the internal ``id``). The order of the list is not guaranteed to match the order of the initial list. """ db.mktemp_tool(cur) db.copy_to(tools, 'tmp_tool', ['name', 'version', 'configuration'], cur) tools = db.tool_add_from_temp(cur) results = [dict(zip(db.tool_cols, line)) for line in tools] send_metric('tool:add', count=len(results), method_name='tool_add') return results @remote_api_endpoint('tool/data') @timed @db_transaction(statement_timeout=500) def tool_get(self, tool, db=None, cur=None): """Retrieve tool information. Args: tool (dict): Tool information we want to retrieve from storage. The dicts have the same keys as those used in :func:`tool_add`. Returns: dict: The full tool information if it exists (``id`` included), None otherwise. """ tool_conf = tool['configuration'] if isinstance(tool_conf, dict): tool_conf = json.dumps(tool_conf) idx = db.tool_get(tool['name'], tool['version'], tool_conf) if not idx: return None return dict(zip(db.tool_cols, idx)) @remote_api_endpoint('provider/add') @timed @db_transaction() def metadata_provider_add(self, provider_name, provider_type, provider_url, metadata, db=None, cur=None): """Add a metadata provider. Args: provider_name (str): Its name provider_type (str): Its type (eg. 
               `'deposit-client'`)
            provider_url (str): Its URL
            metadata: JSON-encodable object

        Returns:
            int: an identifier of the provider
        """
        result = db.metadata_provider_add(provider_name, provider_type,
                                          provider_url, metadata, cur)
        send_metric(
            'metadata_provider:add', count=1, method_name='metadata_provider')
        return result

    @remote_api_endpoint('provider/get')
    @timed
    @db_transaction()
    def metadata_provider_get(self, provider_id, db=None, cur=None):
        """Get a metadata provider

        Args:
            provider_id: Its identifier, as given by `metadata_provider_add`.

        Returns:
            dict: information on the matching provider (its name, type, URL
            and metadata), or None if it does not exist.
        """
        result = db.metadata_provider_get(provider_id)
        if not result:
            return None
        return dict(zip(db.metadata_provider_cols, result))

    @remote_api_endpoint('provider/getby')
    @timed
    @db_transaction()
    def metadata_provider_get_by(self, provider, db=None, cur=None):
        """Get a metadata provider, looked up by name and URL

        Args:
            provider (dict): A dictionary with keys:

                * provider_name: Its name
                * provider_url: Its URL

        Returns:
            dict: information on the matching provider (its name, type, URL
            and metadata), or None if it does not exist.
        """
        result = db.metadata_provider_get_by(provider['provider_name'],
                                             provider['provider_url'])
        if not result:
            return None
        return dict(zip(db.metadata_provider_cols, result))

    @remote_api_endpoint('algos/diff_directories')
    @timed
    def diff_directories(self, from_dir, to_dir, track_renaming=False):
        """Compute the list of file changes introduced between two arbitrary
        directories (insertion / deletion / modification / renaming of files).

        Args:
            from_dir (bytes): identifier of the directory to compare from
            to_dir (bytes): identifier of the directory to compare to
            track_renaming (bool): whether or not to track file renaming

        Returns:
            A list of dict describing the introduced file changes
            (see :func:`swh.storage.algos.diff.diff_directories`
            for more details).
        """
        return diff.diff_directories(self, from_dir, to_dir, track_renaming)

    @remote_api_endpoint('algos/diff_revisions')
    @timed
    def diff_revisions(self, from_rev, to_rev, track_renaming=False):
        """Compute the list of file changes introduced between two arbitrary
        revisions (insertion / deletion / modification / renaming of files).

        Args:
            from_rev (bytes): identifier of the revision to compare from
            to_rev (bytes): identifier of the revision to compare to
            track_renaming (bool): whether or not to track file renaming

        Returns:
            A list of dict describing the introduced file changes
            (see :func:`swh.storage.algos.diff.diff_directories`
            for more details).
        """
        return diff.diff_revisions(self, from_rev, to_rev, track_renaming)

    @remote_api_endpoint('algos/diff_revision')
    @timed
    def diff_revision(self, revision, track_renaming=False):
        """Compute the list of file changes introduced by a specific revision
        (insertion / deletion / modification / renaming of files) by comparing
        it against its first parent.

        Args:
            revision (bytes): identifier of the revision from which to
                compute the list of file changes
            track_renaming (bool): whether or not to track file renaming

        Returns:
            A list of dict describing the introduced file changes
            (see :func:`swh.storage.algos.diff.diff_directories`
            for more details).
        """
        return diff.diff_revision(self, revision, track_renaming)
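# Illustrative usage sketches for the endpoints above (not part of the API).
# They assume ``storage`` is an instance of this class connected to a
# populated database; all URLs and literal values below are made up.
#
# Walking an origin's visit history page by page, then retrieving the most
# recent successful visit that produced a snapshot:
#
#     url = 'https://example.org/git/project'
#     last = None
#     while True:
#         page = list(storage.origin_visit_get(url, last_visit=last,
#                                               limit=100))
#         if not page:
#             break
#         for visit in page:
#             print(visit['visit'], visit['date'], visit['status'])
#         last = page[-1]['visit']
#
#     latest = storage.origin_visit_get_latest(
#         url, allowed_statuses=['full'], require_snapshot=True)
#     if latest is not None:
#         snapshot_id = latest['snapshot']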
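# Enumerating every known origin with ``origin_list``, feeding the opaque
# ``next_page_token`` back until it disappears, and registering new origins
# with ``origin_add`` (origins are passed as dicts; the implementation above
# only reads their ``url``):
#
#     token = None
#     while True:
#         page = storage.origin_list(page_token=token, limit=1000)
#         for origin in page['origins']:
#             print(origin['url'])
#         token = page.get('next_page_token')
#         if token is None:
#             break
#
#     storage.origin_add([
#         {'url': 'https://example.org/git/project-a'},
#         {'url': 'https://example.org/git/project-b'},
#     ])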
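# Recording extrinsic metadata on an origin: the metadata provider and the
# extraction tool are registered first, and their identifiers are then passed
# to ``origin_metadata_add``:
#
#     provider_id = storage.metadata_provider_add(
#         'example-registry', 'registry', 'https://registry.example.org/',
#         metadata={})
#     [tool] = storage.tool_add([{
#         'name': 'example-metadata-extractor',
#         'version': '0.0.1',
#         'configuration': {},
#     }])
#     storage.origin_metadata_add(
#         'https://example.org/git/project',
#         '2020-01-01T00:00:00+00:00',   # string timestamps are parsed
#         provider_id, tool['id'],
#         {'description': 'An example project'})
#
#     entries = list(storage.origin_metadata_get_by(
#         'https://example.org/git/project', provider_type='registry'))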
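# Computing file-level changes with the diff endpoints; the placeholder
# identifiers below stand for ``sha1_git`` bytes of previously archived
# objects:
#
#     changes = storage.diff_revision(revision_id, track_renaming=True)
#     changes = storage.diff_revisions(parent_revision_id, revision_id)
#     changes = storage.diff_directories(directory_a_id, directory_b_id)
#
# Each call returns a list of dicts describing the files that were inserted,
# deleted, modified or (when ``track_renaming`` is True) renamed, in the
# format documented by :func:`swh.storage.algos.diff.diff_directories`.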