diff --git a/README.md b/README.md
index f26f274..e6eae37 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,18 @@
swh-model
=========
Implementation of the Data model of the Software Heritage project, used to
archive source code artifacts.
-This module defines the notion of Persistent Identifier (PID) and provides
-tools to compute them:
+This module defines the notion of SoftWare Heritage persistent IDentifiers
+(SWHIDs) and provides tools to compute them:
```sh
$ swh-identify fork.c kmod.c sched/deadline.c
swh:1:cnt:2e391c754ae730bd2d8520c2ab497c403220c6e3 fork.c
swh:1:cnt:0277d1216f80ae1adeed84a686ed34c9b2931fc2 kmod.c
swh:1:cnt:57b939c81bce5d06fa587df8915f05affbe22b82 sched/deadline.c
$ swh-identify --no-filename /usr/src/linux/kernel/
swh:1:dir:f9f858a48d663b3809c9e2f336412717496202ab
```
diff --git a/docs/persistent-identifiers.rst b/docs/persistent-identifiers.rst
index b0215e5..ea1781d 100644
--- a/docs/persistent-identifiers.rst
+++ b/docs/persistent-identifiers.rst
@@ -1,386 +1,386 @@
.. _persistent-identifiers:
=================================================
SoftWare Heritage persistent IDentifiers (SWHIDs)
=================================================
**version 1.5, last modified 2020-05-14**
.. contents::
:local:
:depth: 2
Overview
========
You can point to objects present in the `Software Heritage
<https://www.softwareheritage.org/>`_ `archive
<https://archive.softwareheritage.org/>`_ by means of **SoftWare Heritage
persistent IDentifiers**, or **SWHIDs** for short, which are guaranteed to
remain stable (persistent) over time. Their syntax, meaning, and usage are
described below. Note that they are identifiers and not URLs, even though
URL-based `resolvers`_ for SWHIDs are also available.
A SWHID consists of two separate parts: a mandatory *core identifier* that can
point to any software artifact (or "object") available in the Software Heritage
archive, and an optional list of *qualifiers* that specify the context in which
the object is meant to be seen or point to a subpart of the object itself.
Objects come in different types:
* contents
* directories
* revisions
* releases
* snapshots
Each object is identified by an intrinsic, type-specific object identifier that
is embedded in its SWHID as described below. The intrinsic identifiers embedded
in SWHIDs are strong cryptographic hashes computed on the entire set of object
properties. Together, these identifiers form a `Merkle structure
<https://en.wikipedia.org/wiki/Merkle_tree>`_, specifically a Merkle `DAG
<https://en.wikipedia.org/wiki/Directed_acyclic_graph>`_.
See the :ref:`Software Heritage data model <data-model>` for an overview of
object types and how they are linked together. See
:py:mod:`swh.model.identifiers` for details on how the intrinsic identifiers
embedded in SWHIDs are computed.
The optional qualifiers are of two kinds:
* **context qualifiers:** carry information about the context where a given
object is meant to be seen. This is particularly important, as the same
object can be reached in the Merkle graph following different *paths*
starting from different nodes (or *anchors*), and it may have been retrieved
from different *origins*, that may evolve between different *visits*
* **fragment qualifiers:** allow pinpointing specific subparts of an object
Syntax
======
Syntactically, SWHIDs are generated by the ``<identifier>`` entry point in the
following grammar:
.. code-block:: bnf
   <identifier> ::= <identifier_core> [ <qualifiers> ] ;
   <identifier_core> ::= "swh" ":" <scheme_version> ":" <object_type> ":" <object_id> ;
   <scheme_version> ::= "1" ;
   <object_type> ::=
       "snp" (* snapshot *)
     | "rel" (* release *)
     | "rev" (* revision *)
     | "dir" (* directory *)
     | "cnt" (* content *)
   ;
   <object_id> ::= 40 * <hex_digit> ; (* intrinsic object id, as hex-encoded SHA1 *)
   <dec_digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
   <hex_digit> ::= <dec_digit> | "a" | "b" | "c" | "d" | "e" | "f" ;
   <qualifiers> ::= ";" <qualifier> [ <qualifiers> ] ;
   <qualifier> ::=
       <context_qualifier>
     | <fragment_qualifier>
   ;
   <context_qualifier> ::=
       <origin_ctxt>
     | <visit_ctxt>
     | <anchor_ctxt>
     | <path_ctxt>
   ;
   <origin_ctxt> ::= "origin" "=" <url_escaped> ;
   <visit_ctxt> ::= "visit" "=" <identifier_core> ;
   <anchor_ctxt> ::= "anchor" "=" <identifier_core> ;
   <path_ctxt> ::= "path" "=" <path_absolute_escaped> ;
   <fragment_qualifier> ::= "lines" "=" <line_number> ["-" <line_number>] ;
   <line_number> ::= <dec_digit> + ;
   <url_escaped> ::= (* RFC 3987 IRI *)
   <path_absolute_escaped> ::= (* RFC 3987 absolute path *)
Where:
- ``<path_absolute_escaped>`` is an ``<ipath-absolute>`` from `RFC 3987`_, and
- ``<url_escaped>`` is a `RFC 3987`_ IRI

in either case all occurrences of ``;`` (and ``%``, as required by the RFC)
have been percent-encoded (as ``%3B`` and ``%25`` respectively). Other
characters *can* be percent-encoded, e.g., to improve readability and/or
embeddability of SWHIDs in other contexts.
.. _RFC 3987: https://tools.ietf.org/html/rfc3987
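As a quick sanity check, the core-identifier part of the grammar above can be
mirrored by a regular expression (a sketch for illustration only; neither the
regex nor the ``is_core_swhid`` name is part of swh.model):

```python
import re

# Mirrors <identifier_core>: "swh" ":" <scheme_version> ":" <object_type> ":" <object_id>,
# where <object_id> is 40 lowercase hex digits.
CORE_SWHID_RE = re.compile(r"^swh:1:(?:snp|rel|rev|dir|cnt):[0-9a-f]{40}$")

def is_core_swhid(s: str) -> bool:
    return CORE_SWHID_RE.match(s) is not None
```

Qualifiers are deliberately left out here; a full parser would first split them
off at the ``;`` separators before matching the core part.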
Semantics
=========
Core identifiers
----------------
``:`` is used as separator between the logical parts of core identifiers. The
``swh`` prefix makes explicit that these identifiers are related to *SoftWare
Heritage*. ``1`` (``<scheme_version>``) is the current version of this
identifier *scheme*. Future editions will use higher version numbers, possibly
breaking backward compatibility, but without breaking the resolvability of
SWHIDs that conform to previous versions of the scheme.
A SWHID points to a single object, whose type is explicitly captured by
``<object_type>``:
* ``snp`` to **snapshots**,
* ``rel`` to **releases**,
* ``rev`` to **revisions**,
* ``dir`` to **directories**,
* ``cnt`` to **contents**.
The actual object pointed to is identified by the intrinsic identifier
``<object_id>``, which is a hex-encoded (using lowercase ASCII characters) SHA1
computed on the content and metadata of the object itself, as follows:
* for **snapshots**, intrinsic identifiers are computed as per
:py:func:`swh.model.identifiers.snapshot_identifier`
* for **releases**, as per
:py:func:`swh.model.identifiers.release_identifier`
that produces the same result as a git release hash
* for **revisions**, as per
:py:func:`swh.model.identifiers.revision_identifier`
that produces the same result as a git commit hash
* for **directories**, as per
:py:func:`swh.model.identifiers.directory_identifier`
that produces the same result as a git tree hash
* for **contents**, the intrinsic identifier is the ``sha1_git`` hash returned by
:py:func:`swh.model.identifiers.content_identifier`, i.e., the SHA1 of a byte
sequence obtained by juxtaposing the ASCII string ``"blob"`` (without
quotes), a space, the length of the content as decimal digits, a NULL byte,
and the actual content of the file.
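For instance, the content identifier described in the last bullet can be
recomputed with nothing but the standard library (a minimal sketch; the
``content_swhid`` helper name is ours, not part of swh.model):

```python
import hashlib

def content_swhid(data: bytes) -> str:
    # Header: the ASCII string "blob", a space, the content length as
    # decimal digits, and a NULL byte -- followed by the content itself.
    header = b"blob %d\x00" % len(data)
    return "swh:1:cnt:" + hashlib.sha1(header + data).hexdigest()
```

Running ``git hash-object`` on a file with the same bytes produces the
identical digest, which is the Git compatibility discussed below.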
Qualifiers
----------
``;`` is used as separator between the core identifier and the optional
qualifiers, as well as between qualifiers. Each qualifier is specified as a
key/value pair, using ``=`` as a separator.
The following *context qualifiers* are available:
* **origin:** the *software origin* where an object has been found or observed
in the wild, as a URI;
* **visit:** the core identifier of a *snapshot* corresponding to a specific
*visit* of a repository containing the designated object;
* **anchor:** a *designated node* in the Merkle DAG relative to which a *path
to the object* is specified, as the core identifier of a directory, a
revision, a release or a snapshot;
* **path:** the *absolute file path*, from the *root directory* associated with
the *anchor node*, to the object; when the anchor denotes a directory or a
revision, and almost always when it's a release, the root directory is
uniquely determined; when the anchor denotes a snapshot, the root directory
is the one pointed to by ``HEAD`` (possibly indirectly), and undefined if
such a reference is missing;
The following *fragment qualifier* is available:
* **lines:** *line number(s)* of interest, usually within a content object
We recommend equipping identifiers meant to be shared with as many qualifiers as
possible. While qualifiers may be listed in any order, it is good practice to
present them in the order given above, i.e., ``origin``, ``visit``, ``anchor``,
``path``, ``lines``. Redundant information should be omitted: for example, if
the *visit* is present, and the *path* is relative to the snapshot indicated
there, then the *anchor* qualifier is superfluous; similarly, if the *path* is
empty, it may be omitted.
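The ordering recommendation above can be captured in a small helper that always
emits qualifiers in the canonical order (a sketch under our own naming, not the
swh.model API):

```python
# Recommended qualifier order: origin, visit, anchor, path, lines.
QUALIFIER_ORDER = ["origin", "visit", "anchor", "path", "lines"]

def with_qualifiers(core: str, **qualifiers: str) -> str:
    # ";" separates the core identifier from the qualifiers, and the
    # qualifiers from each other; each qualifier is a key=value pair.
    parts = [core]
    for key in QUALIFIER_ORDER:
        if key in qualifiers:
            parts.append("%s=%s" % (key, qualifiers[key]))
    return ";".join(parts)
```

Values passed in are assumed to be already percent-encoded as required by the
grammar.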
Interoperability
================
URI scheme
----------
The ``swh`` URI scheme is registered at IANA for SWHIDs. The present document
constitutes the specification of that URI scheme.
Git compatibility
-----------------
SWHIDs for contents, directories, revisions, and releases are, at present,
compatible with the `Git <https://git-scm.com/>`_ way of `computing identifiers
<https://git-scm.com/book/en/v2/Git-Internals-Git-Objects>`_ for its objects.
The ``<object_id>`` part of a SWHID for a content object is the Git blob
identifier of any file with the same content; for a revision it is the Git
commit identifier for the same revision, etc. This is not the case for
snapshot identifiers, as Git does not have a corresponding object type.
Note that Git compatibility is incidental and is not guaranteed to be
maintained in future versions of this scheme (or Git).
Examples
========
Core identifiers
----------------
* ``swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2`` points to the content
of a file containing the full text of the GPL3 license
* ``swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505`` points to a directory
containing the source code of the Darktable photography application as it was
at some point on 4 May 2017
* ``swh:1:rev:309cf2674ee7a0749978cf8265ab91a60aea0f7d`` points to a commit in
the development history of Darktable, dated 16 January 2017, that added
undo/redo supports for masks
* ``swh:1:rel:22ece559cc7cc2364edc5e5593d63ae8bd229f9f`` points to Darktable
release 2.3.0, dated 24 December 2016
* ``swh:1:snp:c7c108084bc0bf3d81436bf980b46e98bd338453`` points to a snapshot
of the entire Darktable Git repository taken on 4 May 2017 from GitHub
Identifiers with qualifiers
---------------------------
* The following `SWHID
  <https://archive.softwareheritage.org/swh:1:cnt:4d99d2d18326621ccdd70f5ea66c2e2ac236ad8b;origin=https://gitorious.org/ocamlp3l/ocamlp3l_cvs.git;visit=swh:1:snp:d7f1b9eb7ccb596c2622c4780febaa02549830f9;anchor=swh:1:rev:2db189928c94d62a3b4757b3eec68f0a4d4113f0;path=/Examples/SimpleFarm/simplefarm.ml;lines=9-15>`_
denotes the lines 9 to 15 of a file content that can be found at absolute
path ``/Examples/SimpleFarm/simplefarm.ml`` from the root directory of the
revision ``swh:1:rev:2db189928c94d62a3b4757b3eec68f0a4d4113f0`` that is
contained in the snapshot
``swh:1:snp:d7f1b9eb7ccb596c2622c4780febaa02549830f9`` taken from the origin
``https://gitorious.org/ocamlp3l/ocamlp3l_cvs.git``:
.. code-block:: url
swh:1:cnt:4d99d2d18326621ccdd70f5ea66c2e2ac236ad8b;
origin=https://gitorious.org/ocamlp3l/ocamlp3l_cvs.git;
visit=swh:1:snp:d7f1b9eb7ccb596c2622c4780febaa02549830f9;
anchor=swh:1:rev:2db189928c94d62a3b4757b3eec68f0a4d4113f0;
path=/Examples/SimpleFarm/simplefarm.ml;
lines=9-15
* Here is an example of a `SWHID
  <https://archive.softwareheritage.org/swh:1:cnt:f10371aa7b8ccabca8479196d6cd640676fd4a04;origin=https://github.com/web-platform-tests/wpt;visit=swh:1:snp:b37d435721bbd450624165f334724e3585346499;anchor=swh:1:rev:259d0612af038d14f2cd889a14a3adb6c9e96d96;path=/html/semantics/document-metadata/the-meta-element/pragma-directives/attr-meta-http-equiv-refresh/support/x%3Burl=foo/>`_
with a file path that requires percent-escaping:
.. code-block:: url
swh:1:cnt:f10371aa7b8ccabca8479196d6cd640676fd4a04;
origin=https://github.com/web-platform-tests/wpt;
visit=swh:1:snp:b37d435721bbd450624165f334724e3585346499;
anchor=swh:1:rev:259d0612af038d14f2cd889a14a3adb6c9e96d96;
path=/html/semantics/document-metadata/the-meta-element/pragma-directives/attr-meta-http-equiv-refresh/support/x%3Burl=foo/
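Only ``;`` and ``%`` *must* be escaped in qualifier values, as in the
``x%3Burl=foo`` path segment above; a helper for that could look like this
(``escape_qualifier_value`` is a hypothetical name, shown for illustration):

```python
def escape_qualifier_value(value: str) -> str:
    # "%" must be escaped first, so that the "%" characters introduced
    # when escaping ";" are not themselves escaped a second time.
    return value.replace("%", "%25").replace(";", "%3B")
```

Additional characters may be percent-encoded on top of this for readability or
embeddability, as the spec allows.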
Implementation
==============
Computing
---------
An important property of any SWHID is that its core identifier is *intrinsic*:
it can be *computed from the object itself*, without having to rely on any
third party. An implementation that allows one to do so locally is the
`swh identify <https://docs.softwareheritage.org/devel/swh-model/cli.html>`_
tool, available from the `swh.model <https://pypi.org/project/swh.model/>`_
Python package under the GPL license.
SWHIDs are also automatically computed by Software Heritage for all archived
objects as part of its archival activity, and can be looked up via the project
`Web interface <https://archive.softwareheritage.org/>`_.
This has various practical implications:
* when a software artifact is obtained from Software Heritage by resolving a
  SWHID, it is straightforward to verify that it is exactly the intended one:
  just compute the core identifier from the artifact itself, and check that it
  is the same as the core identifier part of the SWHID
* the core identifier of a software artifact can be computed *before* its
  archival on Software Heritage
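The first implication can be sketched in a few lines: recompute the identifier
from the retrieved bytes and compare it with the SWHID's ``<object_id>`` field
(for brevity this hypothetical checker handles content objects only):

```python
import hashlib

def verify_content_swhid(data: bytes, swhid: str) -> bool:
    # Take the core identifier (everything before any qualifier) and
    # extract its <object_id> field, the fourth ':'-separated part.
    core = swhid.split(";")[0]
    object_id = core.split(":")[3]
    # Recompute the sha1_git of the bytes and compare.
    recomputed = hashlib.sha1(b"blob %d\x00" % len(data) + data).hexdigest()
    return recomputed == object_id
```

Any tampering with the retrieved bytes changes the recomputed digest, so the
comparison fails without consulting any third party.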
Resolvers
---------
Software Heritage resolver
~~~~~~~~~~~~~~~~~~~~~~~~~~
SWHIDs can be resolved using the Software Heritage `Web interface
<https://archive.softwareheritage.org/>`_. In particular, the **root endpoint**
``/`` can be given a SWHID and will lead to the browsing page of the
corresponding object, like this:
``https://archive.softwareheritage.org/<SWHID>``.
A **dedicated** ``/resolve`` **endpoint** of the Software Heritage `Web API
<https://archive.softwareheritage.org/api/>`_ is also available to
-programmatically resolve SWHIDs; see: :http:get:`/api/1/resolve/(swh_id)/`.
+programmatically resolve SWHIDs; see: :http:get:`/api/1/resolve/(swhid)/`.
Examples:
* `<https://archive.softwareheritage.org/swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2>`_
* `<https://archive.softwareheritage.org/swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505>`_
* `<https://archive.softwareheritage.org/swh:1:rev:309cf2674ee7a0749978cf8265ab91a60aea0f7d>`_
* `<https://archive.softwareheritage.org/swh:1:rel:22ece559cc7cc2364edc5e5593d63ae8bd229f9f>`_
* `<https://archive.softwareheritage.org/swh:1:snp:c7c108084bc0bf3d81436bf980b46e98bd338453>`_
* `<https://archive.softwareheritage.org/swh:1:cnt:4d99d2d18326621ccdd70f5ea66c2e2ac236ad8b;origin=https://gitorious.org/ocamlp3l/ocamlp3l_cvs.git;visit=swh:1:snp:d7f1b9eb7ccb596c2622c4780febaa02549830f9;anchor=swh:1:rev:2db189928c94d62a3b4757b3eec68f0a4d4113f0;path=/Examples/SimpleFarm/simplefarm.ml;lines=9-15>`_
* `<https://archive.softwareheritage.org/swh:1:cnt:f10371aa7b8ccabca8479196d6cd640676fd4a04;origin=https://github.com/web-platform-tests/wpt;visit=swh:1:snp:b37d435721bbd450624165f334724e3585346499;anchor=swh:1:rev:259d0612af038d14f2cd889a14a3adb6c9e96d96;path=/html/semantics/document-metadata/the-meta-element/pragma-directives/attr-meta-http-equiv-refresh/support/x%3Burl=foo/>`_
Third-party resolvers
~~~~~~~~~~~~~~~~~~~~~
The following **third party resolvers** support SWHID resolution:
* `Identifiers.org <https://identifiers.org>`_ (registry identifier
  ``MIR:00000655``)
* `Name-to-Thing (N2T) <https://n2t.net/>`_

Note that resolution via Identifiers.org currently only supports *core
identifiers*, due to syntactic incompatibilities with qualifiers.
Examples:
* `<https://n2t.net/swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2>`_
* `<https://n2t.net/swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505>`_
* `<https://n2t.net/swh:1:rev:309cf2674ee7a0749978cf8265ab91a60aea0f7d>`_
* `<https://n2t.net/swh:1:rel:22ece559cc7cc2364edc5e5593d63ae8bd229f9f>`_
* `<https://n2t.net/swh:1:snp:c7c108084bc0bf3d81436bf980b46e98bd338453>`_
* `<https://identifiers.org/swh/swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505>`_
* `<https://identifiers.org/swh/swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2>`_
References
==========
* Roberto Di Cosmo, Morane Gruenpeter, Stefano Zacchiroli. *Identifiers for
  Digital Objects: the Case of Software Source Code Preservation*. In
  Proceedings of iPRES 2018: 15th International Conference on Digital
  Preservation, Boston, MA, USA, September 2018, 9 pages.
* Roberto Di Cosmo, Morane Gruenpeter, Stefano Zacchiroli. *Referencing Source
  Code Artifacts: a Separate Concern in Software Citation*. In Computing in
  Science and Engineering, volume 22, issue 2, pages 33-43. ISSN 1521-9615,
  IEEE, March 2020.
diff --git a/mypy.ini b/mypy.ini
index 5467ded..71ae7f3 100644
--- a/mypy.ini
+++ b/mypy.ini
@@ -1,26 +1,29 @@
[mypy]
namespace_packages = True
warn_unused_ignores = True
# 3rd party libraries without stubs (yet)
[mypy-attrs_strict.*] # a bit sad, but...
ignore_missing_imports = True
+[mypy-deprecated.*]
+ignore_missing_imports = True
+
[mypy-django.*]  # false positive, only used by hypothesis' extras
ignore_missing_imports = True
[mypy-dulwich.*]
ignore_missing_imports = True
[mypy-iso8601.*]
ignore_missing_imports = True
[mypy-pkg_resources.*]
ignore_missing_imports = True
[mypy-pyblake2.*]
ignore_missing_imports = True
[mypy-pytest.*]
ignore_missing_imports = True
diff --git a/requirements.txt b/requirements.txt
index b70187b..a5d782c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,10 +1,11 @@
# Add here external Python modules dependencies, one per line. Module names
# should match https://pypi.python.org/pypi names. For the full spec of
# dependency lines, see https://pip.readthedocs.org/en/1.1/requirements.html
attrs
attrs_strict >= 0.0.7
+deprecated
hypothesis
iso8601
python-dateutil
typing_extensions
vcversioner
diff --git a/swh/model/cli.py b/swh/model/cli.py
index ad7e2f5..a545a70 100644
--- a/swh/model/cli.py
+++ b/swh/model/cli.py
@@ -1,216 +1,220 @@
# Copyright (C) 2018-2019 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import click
import dulwich.repo
import os
import sys
from functools import partial
from urllib.parse import urlparse
from swh.model import hashutil
-from swh.model import identifiers as pids
+from swh.model.identifiers import (
+ origin_identifier,
+ snapshot_identifier,
+ parse_swhid,
+ swhid,
+ SWHID,
+ CONTENT,
+ DIRECTORY,
+)
from swh.model.exceptions import ValidationError
from swh.model.from_disk import Content, Directory
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
# Mapping between dulwich types and Software Heritage ones. Used by snapshot ID
# computation.
_DULWICH_TYPES = {
b"blob": "content",
b"tree": "directory",
b"commit": "revision",
b"tag": "release",
}
-class PidParamType(click.ParamType):
+class SWHIDParamType(click.ParamType):
name = "persistent identifier"
def convert(self, value, param, ctx):
try:
- pids.parse_persistent_identifier(value)
+ parse_swhid(value)
return value # return as string, as we need just that
except ValidationError as e:
self.fail("%s is not a valid SWHID. %s." % (value, e), param, ctx)
-def pid_of_file(path):
+def swhid_of_file(path):
object = Content.from_file(path=path).get_data()
- return pids.persistent_identifier(pids.CONTENT, object)
+ return swhid(CONTENT, object)
-def pid_of_file_content(data):
+def swhid_of_file_content(data):
object = Content.from_bytes(mode=644, data=data).get_data()
- return pids.persistent_identifier(pids.CONTENT, object)
+ return swhid(CONTENT, object)
-def pid_of_dir(path):
+def swhid_of_dir(path):
object = Directory.from_disk(path=path).get_data()
- return pids.persistent_identifier(pids.DIRECTORY, object)
+ return swhid(DIRECTORY, object)
-def pid_of_origin(url):
- pid = pids.PersistentId(
- object_type="origin", object_id=pids.origin_identifier({"url": url})
- )
- return str(pid)
+def swhid_of_origin(url):
+ swhid = SWHID(object_type="origin", object_id=origin_identifier({"url": url}))
+ return str(swhid)
-def pid_of_git_repo(path):
+def swhid_of_git_repo(path):
repo = dulwich.repo.Repo(path)
branches = {}
for ref, target in repo.refs.as_dict().items():
obj = repo[target]
if obj:
branches[ref] = {
"target": hashutil.bytehex_to_hash(target),
"target_type": _DULWICH_TYPES[obj.type_name],
}
else:
branches[ref] = None
for ref, target in repo.refs.get_symrefs().items():
branches[ref] = {
"target": target,
"target_type": "alias",
}
snapshot = {"branches": branches}
- pid = pids.PersistentId(
- object_type="snapshot", object_id=pids.snapshot_identifier(snapshot)
- )
- return str(pid)
+ swhid = SWHID(object_type="snapshot", object_id=snapshot_identifier(snapshot))
+ return str(swhid)
def identify_object(obj_type, follow_symlinks, obj):
if obj_type == "auto":
if obj == "-" or os.path.isfile(obj):
obj_type = "content"
elif os.path.isdir(obj):
obj_type = "directory"
else:
try: # URL parsing
if urlparse(obj).scheme:
obj_type = "origin"
else:
raise ValueError
except ValueError:
raise click.BadParameter("cannot detect object type for %s" % obj)
- pid = None
+ swhid = None
if obj == "-":
content = sys.stdin.buffer.read()
- pid = pid_of_file_content(content)
+ swhid = swhid_of_file_content(content)
elif obj_type in ["content", "directory"]:
path = obj.encode(sys.getfilesystemencoding())
if follow_symlinks and os.path.islink(obj):
path = os.path.realpath(obj)
if obj_type == "content":
- pid = pid_of_file(path)
+ swhid = swhid_of_file(path)
elif obj_type == "directory":
- pid = pid_of_dir(path)
+ swhid = swhid_of_dir(path)
elif obj_type == "origin":
- pid = pid_of_origin(obj)
+ swhid = swhid_of_origin(obj)
elif obj_type == "snapshot":
- pid = pid_of_git_repo(obj)
+ swhid = swhid_of_git_repo(obj)
else: # shouldn't happen, due to option validation
raise click.BadParameter("invalid object type: " + obj_type)
# note: we return original obj instead of path here, to preserve user-given
# file name in output
- return (obj, pid)
+ return (obj, swhid)
@click.command(context_settings=CONTEXT_SETTINGS)
@click.option(
"--dereference/--no-dereference",
"follow_symlinks",
default=True,
help="follow (or not) symlinks for OBJECTS passed as arguments "
+ "(default: follow)",
)
@click.option(
"--filename/--no-filename",
"show_filename",
default=True,
help="show/hide file name (default: show)",
)
@click.option(
"--type",
"-t",
"obj_type",
default="auto",
type=click.Choice(["auto", "content", "directory", "origin", "snapshot"]),
help="type of object to identify (default: auto)",
)
@click.option(
"--verify",
"-v",
metavar="SWHID",
- type=PidParamType(),
+ type=SWHIDParamType(),
help="reference identifier to be compared with computed one",
)
@click.argument("objects", nargs=-1, required=True)
def identify(obj_type, verify, show_filename, follow_symlinks, objects):
"""Compute the Software Heritage persistent identifier (SWHID) for the given
source code object(s).
For more details about SWHIDs see:
\b
https://docs.softwareheritage.org/devel/swh-model/persistent-identifiers.html
Tip: you can pass "-" to identify the content of standard input.
\b
Examples:
\b
$ swh identify fork.c kmod.c sched/deadline.c
swh:1:cnt:2e391c754ae730bd2d8520c2ab497c403220c6e3 fork.c
swh:1:cnt:0277d1216f80ae1adeed84a686ed34c9b2931fc2 kmod.c
swh:1:cnt:57b939c81bce5d06fa587df8915f05affbe22b82 sched/deadline.c
\b
$ swh identify --no-filename /usr/src/linux/kernel/
swh:1:dir:f9f858a48d663b3809c9e2f336412717496202ab
\b
$ git clone --mirror https://forge.softwareheritage.org/source/helloworld.git
$ swh identify --type snapshot helloworld.git/
swh:1:snp:510aa88bdc517345d258c1fc2babcd0e1f905e93 helloworld.git
""" # NoQA # overlong lines in shell examples are fine
if verify and len(objects) != 1:
raise click.BadParameter("verification requires a single object")
results = map(partial(identify_object, obj_type, follow_symlinks), objects)
if verify:
- pid = next(results)[1]
- if verify == pid:
- click.echo("SWHID match: %s" % pid)
+ swhid = next(results)[1]
+ if verify == swhid:
+ click.echo("SWHID match: %s" % swhid)
sys.exit(0)
else:
- click.echo("SWHID mismatch: %s != %s" % (verify, pid))
+ click.echo("SWHID mismatch: %s != %s" % (verify, swhid))
sys.exit(1)
else:
- for (obj, pid) in results:
- msg = pid
+ for (obj, swhid) in results:
+ msg = swhid
if show_filename:
- msg = "%s\t%s" % (pid, os.fsdecode(obj))
+ msg = "%s\t%s" % (swhid, os.fsdecode(obj))
click.echo(msg)
if __name__ == "__main__":
identify()
diff --git a/swh/model/identifiers.py b/swh/model/identifiers.py
index 4fe45bd..de2082d 100644
--- a/swh/model/identifiers.py
+++ b/swh/model/identifiers.py
@@ -1,824 +1,870 @@
# Copyright (C) 2015-2019 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import binascii
import datetime
import hashlib
from functools import lru_cache
from typing import Any, Dict, NamedTuple
+from deprecated import deprecated
+
from .exceptions import ValidationError
from .fields.hashes import validate_sha1
from .hashutil import hash_git_data, hash_to_hex, MultiHash
ORIGIN = "origin"
SNAPSHOT = "snapshot"
REVISION = "revision"
RELEASE = "release"
DIRECTORY = "directory"
CONTENT = "content"
-PID_NAMESPACE = "swh"
-PID_VERSION = 1
-PID_TYPES = ["ori", "snp", "rel", "rev", "dir", "cnt"]
-PID_SEP = ":"
-PID_CTXT_SEP = ";"
+SWHID_NAMESPACE = "swh"
+SWHID_VERSION = 1
+SWHID_TYPES = ["ori", "snp", "rel", "rev", "dir", "cnt"]
+SWHID_SEP = ":"
+SWHID_CTXT_SEP = ";"
+
+# deprecated variables
+PID_NAMESPACE = SWHID_NAMESPACE
+PID_VERSION = SWHID_VERSION
+PID_TYPES = SWHID_TYPES
+PID_SEP = SWHID_SEP
+PID_CTXT_SEP = SWHID_CTXT_SEP
@lru_cache()
def identifier_to_bytes(identifier):
"""Convert a text identifier to bytes.
Args:
identifier: an identifier, either a 40-char hexadecimal string or a
bytes object of length 20
Returns:
The length 20 bytestring corresponding to the given identifier
Raises:
ValueError: if the identifier is of an unexpected type or length.
"""
if isinstance(identifier, bytes):
if len(identifier) != 20:
raise ValueError(
"Wrong length for bytes identifier %s, expected 20" % len(identifier)
)
return identifier
if isinstance(identifier, str):
if len(identifier) != 40:
raise ValueError(
"Wrong length for str identifier %s, expected 40" % len(identifier)
)
return bytes.fromhex(identifier)
raise ValueError(
"Wrong type for identifier %s, expected bytes or str"
% identifier.__class__.__name__
)
@lru_cache()
def identifier_to_str(identifier):
"""Convert an identifier to an hexadecimal string.
Args:
identifier: an identifier, either a 40-char hexadecimal string or a
bytes object of length 20
Returns:
The length 40 string corresponding to the given identifier, hex encoded
Raises:
ValueError: if the identifier is of an unexpected type or length.
"""
if isinstance(identifier, str):
if len(identifier) != 40:
raise ValueError(
"Wrong length for str identifier %s, expected 40" % len(identifier)
)
return identifier
if isinstance(identifier, bytes):
if len(identifier) != 20:
raise ValueError(
"Wrong length for bytes identifier %s, expected 20" % len(identifier)
)
return binascii.hexlify(identifier).decode()
raise ValueError(
"Wrong type for identifier %s, expected bytes or str"
% identifier.__class__.__name__
)
def content_identifier(content):
"""Return the intrinsic identifier for a content.
A content's identifier is the sha1, sha1_git and sha256 checksums of its
data.
Args:
content: a content conforming to the Software Heritage schema
Returns:
A dictionary with all the hashes for the data
Raises:
KeyError: if the content doesn't have a data member.
"""
return MultiHash.from_data(content["data"]).digest()
def directory_entry_sort_key(entry):
"""The sorting key for tree entries"""
if entry["type"] == "dir":
return entry["name"] + b"/"
else:
return entry["name"]
@lru_cache()
def _perms_to_bytes(perms):
"""Convert the perms value to its bytes representation"""
oc = oct(perms)[2:]
return oc.encode("ascii")
def escape_newlines(snippet):
"""Escape the newlines present in snippet according to git rules.
New lines in git manifests are escaped by indenting the next line by one
space.
"""
if b"\n" in snippet:
return b"\n ".join(snippet.split(b"\n"))
else:
return snippet
def directory_identifier(directory):
"""Return the intrinsic identifier for a directory.
A directory's identifier is the tree sha1 à la git of a directory listing,
using the following algorithm, which is equivalent to the git algorithm for
trees:
1. Entries of the directory are sorted using the name (or the name with '/'
appended for directory entries) as key, in bytes order.
2. For each entry of the directory, the following bytes are output:
- the octal representation of the permissions for the entry (stored in
the 'perms' member), which is a representation of the entry type:
- b'100644' (int 33188) for files
- b'100755' (int 33261) for executable files
- b'120000' (int 40960) for symbolic links
- b'40000' (int 16384) for directories
- b'160000' (int 57344) for references to revisions
- an ascii space (b'\x20')
- the entry's name (as raw bytes), stored in the 'name' member
- a null byte (b'\x00')
- the 20 byte long identifier of the object pointed at by the entry,
stored in the 'target' member:
- for files or executable files: their blob sha1_git
- for symbolic links: the blob sha1_git of a file containing the link
destination
- for directories: their intrinsic identifier
- for revisions: their intrinsic identifier
(Note that there is no separator between entries)
"""
components = []
for entry in sorted(directory["entries"], key=directory_entry_sort_key):
components.extend(
[
_perms_to_bytes(entry["perms"]),
b"\x20",
entry["name"],
b"\x00",
identifier_to_bytes(entry["target"]),
]
)
return identifier_to_str(hash_git_data(b"".join(components), "tree"))
def format_date(date):
"""Convert a date object into an UTC timestamp encoded as ascii bytes.
Git stores timestamps as an integer number of seconds since the UNIX epoch.
However, Software Heritage stores timestamps as an integer number of
microseconds (postgres type "datetime with timezone").
Therefore, we print timestamps with no microseconds as integers, and
timestamps with microseconds as floating point values. We elide the
trailing zeroes from microsecond values, to "future-proof" our
representation if we ever need more precision in timestamps.
"""
if not isinstance(date, dict):
raise ValueError("format_date only supports dicts, %r received" % date)
seconds = date.get("seconds", 0)
microseconds = date.get("microseconds", 0)
if not microseconds:
return str(seconds).encode()
else:
float_value = "%d.%06d" % (seconds, microseconds)
return float_value.rstrip("0").encode()
@lru_cache()
def format_offset(offset, negative_utc=None):
"""Convert an integer number of minutes into an offset representation.
The offset representation is [+-]hhmm where:
- hh is the number of hours;
- mm is the number of minutes.
A null offset is represented as +0000.
"""
if offset < 0 or offset == 0 and negative_utc:
sign = "-"
else:
sign = "+"
hours = abs(offset) // 60
minutes = abs(offset) % 60
t = "%s%02d%02d" % (sign, hours, minutes)
return t.encode()
def normalize_timestamp(time_representation):
"""Normalize a time representation for processing by Software Heritage
This function supports a numeric timestamp (representing a number of
seconds since the UNIX epoch, 1970-01-01 at 00:00 UTC), a
:obj:`datetime.datetime` object (with timezone information), or a
normalized Software Heritage time representation (idempotency).
Args:
time_representation: the representation of a timestamp
Returns:
dict: a normalized dictionary with three keys:
- timestamp: a dict with two optional keys:
- seconds: the integral number of seconds since the UNIX epoch
- microseconds: the integral number of microseconds
- offset: the timezone offset as a number of minutes relative to
UTC
- negative_utc: a boolean representing whether the offset is -0000
when offset = 0.
"""
if time_representation is None:
return None
negative_utc = False
if isinstance(time_representation, dict):
ts = time_representation["timestamp"]
if isinstance(ts, dict):
seconds = ts.get("seconds", 0)
microseconds = ts.get("microseconds", 0)
elif isinstance(ts, int):
seconds = ts
microseconds = 0
else:
raise ValueError(
"normalize_timestamp received non-integer timestamp member:" " %r" % ts
)
offset = time_representation["offset"]
if "negative_utc" in time_representation:
negative_utc = time_representation["negative_utc"]
if negative_utc is None:
negative_utc = False
elif isinstance(time_representation, datetime.datetime):
seconds = int(time_representation.timestamp())
microseconds = time_representation.microsecond
utcoffset = time_representation.utcoffset()
if utcoffset is None:
raise ValueError(
"normalize_timestamp received datetime without timezone: %s"
% time_representation
)
# utcoffset is an integer number of minutes
seconds_offset = utcoffset.total_seconds()
offset = int(seconds_offset) // 60
elif isinstance(time_representation, int):
seconds = time_representation
microseconds = 0
offset = 0
else:
raise ValueError(
"normalize_timestamp received non-integer timestamp:"
" %r" % time_representation
)
return {
"timestamp": {"seconds": seconds, "microseconds": microseconds,},
"offset": offset,
"negative_utc": negative_utc,
}
def format_author(author):
"""Format the specification of an author.
An author is either a byte string (passed unchanged), or a dict with three
keys, fullname, name and email.
If the fullname exists, return it; if it doesn't, we construct a fullname
using the following heuristics: if the name value is None, we return the
email in angle brackets, else, we return the name, a space, and the email
in angle brackets.
"""
if isinstance(author, bytes) or author is None:
return author
if "fullname" in author:
return author["fullname"]
ret = []
if author["name"] is not None:
ret.append(author["name"])
if author["email"] is not None:
ret.append(b"".join([b"<", author["email"], b">"]))
return b" ".join(ret)
def format_author_line(header, author, date_offset):
"""Format an author line according to git standards.
An author line has three components:
- a header, describing the type of author (author, committer, tagger)
- a name and email, which is an arbitrary bytestring
- optionally, a timestamp with UTC offset specification
The author line is formatted thus::
`header` `name and email`[ `timestamp` `utc_offset`]
The timestamp is encoded as a (decimal) number of seconds since the UNIX
epoch (1970-01-01 at 00:00 UTC). As an extension to the git format, we
support fractional timestamps, using a dot as the separator for the decimal
part.
The utc offset is a number of minutes encoded as '[+-]HHMM'. Note some
tools can pass a negative offset corresponding to the UTC timezone
('-0000'), which is valid and is encoded as such.
For convenience, this function returns the whole line with its trailing
newline.
Args:
header: the header of the author line (one of 'author', 'committer',
'tagger')
author: an author specification (dict with two bytes values: name and
email, or byte value)
date_offset: a normalized date/time representation as returned by
:func:`normalize_timestamp`.
Returns:
the newline-terminated byte string containing the author line
"""
ret = [header.encode(), b" ", escape_newlines(format_author(author))]
date_offset = normalize_timestamp(date_offset)
if date_offset is not None:
date_f = format_date(date_offset["timestamp"])
offset_f = format_offset(date_offset["offset"], date_offset["negative_utc"])
ret.extend([b" ", date_f, b" ", offset_f])
ret.append(b"\n")
return b"".join(ret)
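As an illustration of the layout described in the docstring (the name, timestamp, and offset below are made up), an author line with an integer timestamp and a minute offset can be assembled by hand:

```python
# Assemble `header` `name and email`[ `timestamp` `utc_offset`].
# The offset is a number of minutes from UTC, rendered as [+-]HHMM.
def author_line(header, fullname, seconds, offset_minutes):
    sign = "-" if offset_minutes < 0 else "+"
    m = abs(offset_minutes)
    offset_f = "%s%02d%02d" % (sign, m // 60, m % 60)
    return b"%s %s %d %s\n" % (header.encode(), fullname, seconds, offset_f.encode())

print(author_line("author", b"Jane Doe <jane@example.org>", 1437047495, -420))
# b'author Jane Doe <jane@example.org> 1437047495 -0700\n'
```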
def revision_identifier(revision):
"""Return the intrinsic identifier for a revision.
The fields used for the revision identifier computation are:
- directory
- parents
- author
- author_date
- committer
- committer_date
- metadata -> extra_headers
- message
A revision's identifier is the 'git'-checksum of a commit manifest
constructed as follows (newlines are a single ASCII newline character)::
tree <directory identifier>
[for each parent in parents]
parent <parent identifier>
[end for each parents]
author <author> <author_date>
committer <committer> <committer_date>
[for each key, value in extra_headers]
<key> <value>
[end for each extra_headers]
The directory identifier is the ascii representation of its hexadecimal
encoding.
Author and committer are formatted with the :func:`format_author` function.
Dates are formatted with the :func:`format_date` function and offsets with
the :func:`format_offset` function.
Extra headers are an ordered list of [key, value] pairs. Keys are strings
and get encoded to utf-8 for identifier computation. Values are either byte
strings, unicode strings (that get encoded to utf-8), or integers (that get
encoded to their utf-8 decimal representation).
Multiline extra header values are escaped by indenting the continuation
lines with one ascii space.
If the message is None, the manifest ends with the last header. Else, the
message is appended to the headers after an empty line.
The checksum of the full manifest is computed using the 'commit' git object
type.
"""
components = [
b"tree ",
identifier_to_str(revision["directory"]).encode(),
b"\n",
]
for parent in revision["parents"]:
if parent:
components.extend(
[b"parent ", identifier_to_str(parent).encode(), b"\n",]
)
components.extend(
[
format_author_line("author", revision["author"], revision["date"]),
format_author_line(
"committer", revision["committer"], revision["committer_date"]
),
]
)
# Handle extra headers
metadata = revision.get("metadata")
if not metadata:
metadata = {}
for key, value in metadata.get("extra_headers", []):
# Integer values: decimal representation
if isinstance(value, int):
value = str(value).encode("utf-8")
# Unicode string values: utf-8 encoding
if isinstance(value, str):
value = value.encode("utf-8")
# encode the key to utf-8
components.extend([key.encode("utf-8"), b" ", escape_newlines(value), b"\n"])
if revision["message"] is not None:
components.extend([b"\n", revision["message"]])
commit_raw = b"".join(components)
return identifier_to_str(hash_git_data(commit_raw, "commit"))
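The 'git'-checksum mentioned above is a sha1 salted with the object type and payload length; a minimal standalone sketch (equivalent to `git hash-object -t <type> --stdin`), checked against the well-known empty-tree and empty-blob identifiers:

```python
import hashlib

# sha1 over "<type> <payload length>\x00<payload>", hex-encoded.
def git_object_hash(object_type, payload):
    header = b"%s %d\x00" % (object_type.encode(), len(payload))
    return hashlib.sha1(header + payload).hexdigest()

print(git_object_hash("tree", b""))   # empty tree
# 4b825dc642cb6eb9a060e54bf8d69288fbee4904
print(git_object_hash("blob", b""))   # empty blob
# e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```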
def target_type_to_git(target_type):
"""Convert a Software Heritage target type to a git object type"""
return {
"content": b"blob",
"directory": b"tree",
"revision": b"commit",
"release": b"tag",
"snapshot": b"refs",
}[target_type]
def release_identifier(release):
"""Return the intrinsic identifier for a release."""
components = [
b"object ",
identifier_to_str(release["target"]).encode(),
b"\n",
b"type ",
target_type_to_git(release["target_type"]),
b"\n",
b"tag ",
release["name"],
b"\n",
]
if "author" in release and release["author"]:
components.append(
format_author_line("tagger", release["author"], release["date"])
)
if release["message"] is not None:
components.extend([b"\n", release["message"]])
return identifier_to_str(hash_git_data(b"".join(components), "tag"))
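The tag manifest built above can be illustrated end to end with the same type-and-length salting git uses; everything below (target, name, message) is made-up stand-in data, with the empty-tree hash as the target:

```python
import hashlib

# Build a minimal git tag manifest (no tagger line, as when the
# release has no author) and hash it as a 'tag' object.
def tag_hash(target_hex, target_type, name, message):
    manifest = b"object %s\ntype %s\ntag %s\n\n%s" % (
        target_hex, target_type, name, message
    )
    header = b"tag %d\x00" % len(manifest)
    return hashlib.sha1(header + manifest).hexdigest()

print(tag_hash(b"4b825dc642cb6eb9a060e54bf8d69288fbee4904",
               b"tree", b"v0.0.0", b"illustrative message\n"))
```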
def snapshot_identifier(snapshot, *, ignore_unresolved=False):
"""Return the intrinsic identifier for a snapshot.
Snapshots are a set of named branches, which are pointers to objects at any
level of the Software Heritage DAG.
As well as pointing to other objects in the Software Heritage DAG, branches
can also be *alias*es, in which case their target is the name of another
branch in the same snapshot, or *dangling*, in which case the target is
unknown (and represented by the ``None`` value).
A snapshot identifier is a salted sha1 (using the git hashing algorithm
with the ``snapshot`` object type) of a manifest following the algorithm:
1. Branches are sorted using the name as key, in bytes order.
2. For each branch, the following bytes are output:
- the type of the branch target:
- ``content``, ``directory``, ``revision``, ``release`` or ``snapshot``
for the corresponding entries in the DAG;
- ``alias`` for branches referencing another branch;
- ``dangling`` for dangling branches
- an ascii space (``\\x20``)
- the branch name (as raw bytes)
- a null byte (``\\x00``)
- the length of the target identifier, as an ascii-encoded decimal number
(``20`` for current intrinsic identifiers, ``0`` for dangling
branches, the length of the target branch name for branch aliases)
- a colon (``:``)
- the identifier of the target object pointed at by the branch,
stored in the 'target' member:
- for contents: their *sha1_git*
- for directories, revisions, releases or snapshots: their intrinsic
identifier
- for branch aliases, the name of the target branch (as raw bytes)
- for dangling branches, the empty string
Note that, akin to directory manifests, there is no separator between
entries. Because of symbolic branches, identifiers are of arbitrary
length but are length-encoded to avoid ambiguity.
Args:
snapshot (dict): the snapshot of which to compute the identifier. A
single entry is needed, ``'branches'``, which is itself a :class:`dict`
mapping each branch to its target
ignore_unresolved (bool): if `True`, ignore unresolved branch aliases.
Returns:
str: the intrinsic identifier for `snapshot`
"""
unresolved = []
lines = []
for name, target in sorted(snapshot["branches"].items()):
if not target:
target_type = b"dangling"
target_id = b""
elif target["target_type"] == "alias":
target_type = b"alias"
target_id = target["target"]
if target_id not in snapshot["branches"] or target_id == name:
unresolved.append((name, target_id))
else:
target_type = target["target_type"].encode()
target_id = identifier_to_bytes(target["target"])
lines.extend(
[
target_type,
b"\x20",
name,
b"\x00",
("%d:" % len(target_id)).encode(),
target_id,
]
)
if unresolved and not ignore_unresolved:
raise ValueError(
"Branch aliases unresolved: %s"
% ", ".join("%s -> %s" % x for x in unresolved),
unresolved,
)
return identifier_to_str(hash_git_data(b"".join(lines), "snapshot"))
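A single manifest entry from the algorithm above can be sketched as follows (the branch names are illustrative):

```python
# One snapshot-manifest entry: target type, an ascii space, the raw
# branch name, a NUL byte, then the length-prefixed target.
def branch_entry(target_type, name, target_id):
    return b"%s\x20%s\x00%d:%s" % (target_type, name, len(target_id), target_id)

print(branch_entry(b"alias", b"HEAD", b"refs/heads/master"))
# b'alias HEAD\x0017:refs/heads/master'
print(branch_entry(b"dangling", b"refs/broken", b""))
# b'dangling refs/broken\x000:'
```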
def origin_identifier(origin):
"""Return the intrinsic identifier for an origin.
An origin's identifier is the sha1 checksum of the entire origin URL.
"""
return hashlib.sha1(origin["url"].encode("utf-8")).hexdigest()
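Unlike the other identifiers, this is a plain (unsalted) sha1. For instance, the origin URL used in the test suite hashes as follows:

```python
import hashlib

# Origin identifiers use a plain sha1 of the UTF-8 encoded URL,
# with no git object-type salting.
url = "https://github.com/torvalds/linux"
print(hashlib.sha1(url.encode("utf-8")).hexdigest())
# b63a575fe3faab7692c9f38fb09d4bb45651bb0f
```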
_object_type_map = {
ORIGIN: {"short_name": "ori", "key_id": "id"},
SNAPSHOT: {"short_name": "snp", "key_id": "id"},
RELEASE: {"short_name": "rel", "key_id": "id"},
REVISION: {"short_name": "rev", "key_id": "id"},
DIRECTORY: {"short_name": "dir", "key_id": "id"},
CONTENT: {"short_name": "cnt", "key_id": "sha1_git"},
}
-_PersistentId = NamedTuple(
- "PersistentId",
+_SWHID = NamedTuple(
+ "SWHID",
[
("namespace", str),
("scheme_version", int),
("object_type", str),
("object_id", str),
("metadata", Dict[str, Any]),
],
)
-class PersistentId(_PersistentId):
+class SWHID(_SWHID):
"""
- Named tuple holding the relevant info associated to a Software Heritage
- persistent identifier.
+ Named tuple holding the relevant info associated to a SoftWare Heritage
+ persistent IDentifier (SWHID)
Args:
- namespace (str): the namespace of the identifier, defaults to 'swh'
+ namespace (str): the namespace of the identifier, defaults to ``swh``
scheme_version (int): the scheme version of the identifier,
defaults to 1
object_type (str): the type of object the identifier points to,
- either 'content', 'directory', 'release', 'revision' or 'snapshot'
- object_id (dict/bytes/str): object's dict representation or
- object identifier
+ either ``content``, ``directory``, ``release``, ``revision`` or ``snapshot``
+ object_id (str): object's identifier
metadata (dict): optional dict filled with metadata related to
pointed object
Raises:
- swh.model.exceptions.ValidationError: In case of invalid object type
- or id
+ swh.model.exceptions.ValidationError: In case of invalid object type or id
Once created, it contains the following attributes:
Attributes:
namespace (str): the namespace of the identifier
scheme_version (int): the scheme version of the identifier
object_type (str): the type of object the identifier points to
object_id (str): hexadecimal representation of the object hash
metadata (dict): metadata related to the pointed object
- To get the raw persistent identifier string from an instance of
- this named tuple, use the :func:`str` function::
+ To get the raw SWHID string from an instance of this named tuple,
+ use the :func:`str` function::
- pid = PersistentId(
+ swhid = SWHID(
object_type='content',
object_id='8ff44f081d43176474b267de5451f2c2e88089d0'
)
- pid_str = str(pid)
+ swhid_str = str(swhid)
# 'swh:1:cnt:8ff44f081d43176474b267de5451f2c2e88089d0'
"""
__slots__ = ()
def __new__(
cls,
- namespace=PID_NAMESPACE,
- scheme_version=PID_VERSION,
- object_type="",
- object_id="",
- metadata={},
+ namespace: str = SWHID_NAMESPACE,
+ scheme_version: int = SWHID_VERSION,
+ object_type: str = "",
+ object_id: str = "",
+ metadata: Dict[str, Any] = {},
):
o = _object_type_map.get(object_type)
if not o:
raise ValidationError(
"Wrong input: Supported types are %s" % (list(_object_type_map.keys()))
)
- if namespace != PID_NAMESPACE:
+ if namespace != SWHID_NAMESPACE:
raise ValidationError(
- "Wrong format: only supported namespace is '%s'" % PID_NAMESPACE
+ "Wrong format: only supported namespace is '%s'" % SWHID_NAMESPACE
)
- if scheme_version != PID_VERSION:
+ if scheme_version != SWHID_VERSION:
raise ValidationError(
- "Wrong format: only supported version is %d" % PID_VERSION
+ "Wrong format: only supported version is %d" % SWHID_VERSION
)
+
# internal swh representation resolution
if isinstance(object_id, dict):
object_id = object_id[o["key_id"]]
+
validate_sha1(object_id) # can raise if invalid hash
object_id = hash_to_hex(object_id)
- return super(cls, PersistentId).__new__(
+ return super().__new__(
cls, namespace, scheme_version, object_type, object_id, metadata
)
- def __str__(self):
+ def __str__(self) -> str:
o = _object_type_map.get(self.object_type)
- pid = PID_SEP.join(
+ assert o
+ swhid = SWHID_SEP.join(
[self.namespace, str(self.scheme_version), o["short_name"], self.object_id]
)
if self.metadata:
for k, v in self.metadata.items():
- pid += "%s%s=%s" % (PID_CTXT_SEP, k, v)
- return pid
+ swhid += "%s%s=%s" % (SWHID_CTXT_SEP, k, v)
+ return swhid
+
+
+@deprecated("Use swh.model.identifiers.SWHID instead")
+class PersistentId(SWHID):
+ """
+ Named tuple holding the relevant info associated to a SoftWare Heritage
+ persistent IDentifier.
+
+ .. deprecated:: 0.3.8
+ Use :class:`swh.model.identifiers.SWHID` instead
+
+ """
+
+ def __new__(cls, *args, **kwargs):
+ return super(cls, PersistentId).__new__(cls, *args, **kwargs)
-def persistent_identifier(object_type, object_id, scheme_version=1, metadata={}):
- """Compute :ref:`SWHID ` persistent identifiers.
+def swhid(
+ object_type: str,
+ object_id: str,
+ scheme_version: int = 1,
+ metadata: Dict[str, Any] = {},
+) -> str:
+ """Compute :ref:`persistent-identifiers`
Args:
- object_type (str): object's type, either 'content', 'directory',
- 'release', 'revision' or 'snapshot'
- object_id (dict/bytes/str): object's dict representation or object
- identifier
- scheme_version (int): persistent identifier scheme version,
- defaults to 1
- metadata (dict): metadata related to the pointed object
+ object_type: object's type, either ``content``, ``directory``,
+ ``release``, ``revision`` or ``snapshot``
+ object_id: object's identifier
+ scheme_version: SWHID scheme version, defaults to 1
+ metadata: metadata related to the pointed object
Raises:
- swh.model.exceptions.ValidationError: In case of invalid object type
- or id
+ swh.model.exceptions.ValidationError: In case of invalid object type or id
Returns:
- str: the persistent identifier
+ the SWHID of the object
"""
- pid = PersistentId(
+ swhid = SWHID(
scheme_version=scheme_version,
object_type=object_type,
object_id=object_id,
metadata=metadata,
)
- return str(pid)
+ return str(swhid)
+
+@deprecated("Use swh.model.identifiers.swhid instead")
+def persistent_identifier(*args, **kwargs) -> str:
+ """Compute :ref:`persistent-identifiers`
+
+ .. deprecated:: 0.3.8
+ Use :func:`swh.model.identifiers.swhid` instead
+
+ """
+ return swhid(*args, **kwargs)
-def parse_persistent_identifier(persistent_id):
- """Parse :ref:`SWHID ` persistent identifiers.
+
+def parse_swhid(swhid: str) -> SWHID:
+ """Parse :ref:`persistent-identifiers`.
Args:
- persistent_id (str): A persistent identifier
+ swhid (str): the SWHID string to parse
Raises:
swh.model.exceptions.ValidationError: in case of:
* missing mandatory values (4)
* invalid namespace supplied
* invalid version supplied
* invalid type supplied
* missing hash
* invalid hash identifier supplied
Returns:
- PersistentId: a named tuple holding the parsing result
+ a named tuple holding the parsing result
"""
- # <pid>;<contextual-information>
- persistent_id_parts = persistent_id.split(PID_CTXT_SEP)
- pid_data = persistent_id_parts.pop(0).split(":")
+ # <swhid>;<contextual-information>
+ swhid_parts = swhid.split(SWHID_CTXT_SEP)
+ swhid_data = swhid_parts.pop(0).split(":")
- if len(pid_data) != 4:
+ if len(swhid_data) != 4:
raise ValidationError("Wrong format: There should be 4 mandatory values")
# Checking for parsing errors
- _ns, _version, _type, _id = pid_data
- pid_data[1] = int(pid_data[1])
+ _ns, _version, _type, _id = swhid_data
for otype, data in _object_type_map.items():
if _type == data["short_name"]:
- pid_data[2] = otype
+ _type = otype
break
if not _id:
raise ValidationError("Wrong format: Identifier should be present")
- persistent_id_metadata = {}
- for part in persistent_id_parts:
+ _metadata = {}
+ for part in swhid_parts:
try:
key, val = part.split("=")
- persistent_id_metadata[key] = val
+ _metadata[key] = val
except Exception:
msg = "Contextual data is badly formatted, form key=val expected"
raise ValidationError(msg)
- pid_data.append(persistent_id_metadata)
- return PersistentId(*pid_data)
+ return SWHID(_ns, int(_version), _type, _id, _metadata)
+
+
+@deprecated("Use swh.model.identifiers.parse_swhid instead")
+def parse_persistent_identifier(persistent_id: str) -> PersistentId:
+ """Parse :ref:`persistent-identifiers`.
+
+ .. deprecated:: 0.3.8
+ Use :func:`swh.model.identifiers.parse_swhid` instead
+ """
+ return PersistentId(**parse_swhid(persistent_id)._asdict())
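The two-stage split performed by `parse_swhid` above (qualifiers after `;`, then four colon-separated core fields) can be sketched standalone, without the validation the real function performs:

```python
# Split 'swh:1:cnt:<hex>;key=val;...' into its parts. No validation
# here; the real parser checks namespace, version, type and hash.
def split_swhid(s):
    core, *qualifiers = s.split(";")
    ns, version, otype, oid = core.split(":")
    metadata = dict(q.split("=", 1) for q in qualifiers)
    return ns, int(version), otype, oid, metadata

print(split_swhid("swh:1:cnt:8ff44f081d43176474b267de5451f2c2e88089d0;lines=5-10"))
# ('swh', 1, 'cnt', '8ff44f081d43176474b267de5451f2c2e88089d0', {'lines': '5-10'})
```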
diff --git a/swh/model/tests/test_cli.py b/swh/model/tests/test_cli.py
index 21eb8c5..b65ea03 100644
--- a/swh/model/tests/test_cli.py
+++ b/swh/model/tests/test_cli.py
@@ -1,148 +1,148 @@
# Copyright (C) 2018-2019 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import os
import tarfile
import tempfile
import unittest
from click.testing import CliRunner
import pytest
from swh.model import cli
from swh.model.hashutil import hash_to_hex
from swh.model.tests.test_from_disk import DataMixin
@pytest.mark.fs
class TestIdentify(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
self.runner = CliRunner()
- def assertPidOK(self, result, pid):
+ def assertSWHID(self, result, swhid):
self.assertEqual(result.exit_code, 0)
- self.assertEqual(result.output.split()[0], pid)
+ self.assertEqual(result.output.split()[0], swhid)
def test_no_args(self):
result = self.runner.invoke(cli.identify)
self.assertNotEqual(result.exit_code, 0)
def test_content_id(self):
"""identify file content"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(cli.identify, ["--type", "content", path])
- self.assertPidOK(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
+ self.assertSWHID(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
def test_content_id_from_stdin(self):
"""identify file content"""
self.make_contents(self.tmpdir_name)
for _, content in self.contents.items():
result = self.runner.invoke(cli.identify, ["-"], input=content["data"])
- self.assertPidOK(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
+ self.assertSWHID(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
def test_directory_id(self):
"""identify an entire directory"""
self.make_from_tarball(self.tmpdir_name)
path = os.path.join(self.tmpdir_name, b"sample-folder")
result = self.runner.invoke(cli.identify, ["--type", "directory", path])
- self.assertPidOK(result, "swh:1:dir:e8b0f1466af8608c8a3fb9879db172b887e80759")
+ self.assertSWHID(result, "swh:1:dir:e8b0f1466af8608c8a3fb9879db172b887e80759")
def test_snapshot_id(self):
"""identify a snapshot"""
tarball = os.path.join(
os.path.dirname(__file__), "data", "repos", "sample-repo.tgz"
)
with tempfile.TemporaryDirectory(prefix="swh.model.cli") as d:
with tarfile.open(tarball, "r:gz") as t:
t.extractall(d)
repo_dir = os.path.join(d, "sample-repo")
result = self.runner.invoke(
cli.identify, ["--type", "snapshot", repo_dir]
)
- self.assertPidOK(
+ self.assertSWHID(
result, "swh:1:snp:abc888898124270905a0ef3c67e872ce08e7e0c1"
)
def test_origin_id(self):
"""identify an origin URL"""
url = "https://github.com/torvalds/linux"
result = self.runner.invoke(cli.identify, ["--type", "origin", url])
- self.assertPidOK(result, "swh:1:ori:b63a575fe3faab7692c9f38fb09d4bb45651bb0f")
+ self.assertSWHID(result, "swh:1:ori:b63a575fe3faab7692c9f38fb09d4bb45651bb0f")
def test_symlink(self):
"""identify symlink --- both itself and target"""
regular = os.path.join(self.tmpdir_name, b"foo.txt")
link = os.path.join(self.tmpdir_name, b"bar.txt")
open(regular, "w").write("foo\n")
os.symlink(os.path.basename(regular), link)
result = self.runner.invoke(cli.identify, [link])
- self.assertPidOK(result, "swh:1:cnt:257cc5642cb1a054f08cc83f2d943e56fd3ebe99")
+ self.assertSWHID(result, "swh:1:cnt:257cc5642cb1a054f08cc83f2d943e56fd3ebe99")
result = self.runner.invoke(cli.identify, ["--no-dereference", link])
- self.assertPidOK(result, "swh:1:cnt:996f1789ff67c0e3f69ef5933a55d54c5d0e9954")
+ self.assertSWHID(result, "swh:1:cnt:996f1789ff67c0e3f69ef5933a55d54c5d0e9954")
def test_show_filename(self):
"""filename is shown by default"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(cli.identify, ["--type", "content", path])
self.assertEqual(result.exit_code, 0)
self.assertEqual(
result.output.rstrip(),
"swh:1:cnt:%s\t%s" % (hash_to_hex(content["sha1_git"]), path.decode()),
)
def test_hide_filename(self):
"""filename is hidden upon request"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(
cli.identify, ["--type", "content", "--no-filename", path]
)
- self.assertPidOK(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
+ self.assertSWHID(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
def test_auto_content(self):
"""automatic object type detection: content"""
with tempfile.NamedTemporaryFile(prefix="swh.model.cli") as f:
result = self.runner.invoke(cli.identify, [f.name])
self.assertEqual(result.exit_code, 0)
self.assertRegex(result.output, r"^swh:\d+:cnt:")
def test_auto_directory(self):
"""automatic object type detection: directory"""
with tempfile.TemporaryDirectory(prefix="swh.model.cli") as dirname:
result = self.runner.invoke(cli.identify, [dirname])
self.assertEqual(result.exit_code, 0)
self.assertRegex(result.output, r"^swh:\d+:dir:")
def test_auto_origin(self):
"""automatic object type detection: origin"""
result = self.runner.invoke(cli.identify, ["https://github.com/torvalds/linux"])
self.assertEqual(result.exit_code, 0)
self.assertRegex(result.output, r"^swh:\d+:ori:")
def test_verify_content(self):
"""identifier verification"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
expected_id = "swh:1:cnt:" + hash_to_hex(content["sha1_git"])
# match
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(cli.identify, ["--verify", expected_id, path])
self.assertEqual(result.exit_code, 0)
# mismatch
with open(path, "a") as f:
f.write("trailing garbage to make verification fail")
result = self.runner.invoke(cli.identify, ["--verify", expected_id, path])
self.assertEqual(result.exit_code, 1)
diff --git a/swh/model/tests/test_identifiers.py b/swh/model/tests/test_identifiers.py
index fac86bd..6edb26c 100644
--- a/swh/model/tests/test_identifiers.py
+++ b/swh/model/tests/test_identifiers.py
@@ -1,1072 +1,1070 @@
# Copyright (C) 2015-2018 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import binascii
import datetime
import pytest
import unittest
from swh.model import hashutil, identifiers
from swh.model.exceptions import ValidationError
from swh.model.hashutil import hash_to_bytes as _x
from swh.model.identifiers import (
CONTENT,
DIRECTORY,
RELEASE,
REVISION,
SNAPSHOT,
- PersistentId,
+ SWHID,
normalize_timestamp,
)
class UtilityFunctionsIdentifier(unittest.TestCase):
def setUp(self):
self.str_id = "c2e41aae41ac17bd4a650770d6ee77f62e52235b"
self.bytes_id = binascii.unhexlify(self.str_id)
self.bad_type_id = object()
def test_identifier_to_bytes(self):
for id in [self.str_id, self.bytes_id]:
self.assertEqual(identifiers.identifier_to_bytes(id), self.bytes_id)
# wrong length
with self.assertRaises(ValueError) as cm:
identifiers.identifier_to_bytes(id[:-2])
self.assertIn("length", str(cm.exception))
with self.assertRaises(ValueError) as cm:
identifiers.identifier_to_bytes(self.bad_type_id)
self.assertIn("type", str(cm.exception))
def test_identifier_to_str(self):
for id in [self.str_id, self.bytes_id]:
self.assertEqual(identifiers.identifier_to_str(id), self.str_id)
# wrong length
with self.assertRaises(ValueError) as cm:
identifiers.identifier_to_str(id[:-2])
self.assertIn("length", str(cm.exception))
with self.assertRaises(ValueError) as cm:
identifiers.identifier_to_str(self.bad_type_id)
self.assertIn("type", str(cm.exception))
class UtilityFunctionsDateOffset(unittest.TestCase):
def setUp(self):
self.dates = {
b"1448210036": {"seconds": 1448210036, "microseconds": 0,},
b"1448210036.002342": {"seconds": 1448210036, "microseconds": 2342,},
b"1448210036.12": {"seconds": 1448210036, "microseconds": 120000,},
}
self.broken_dates = [
1448210036.12,
]
self.offsets = {
0: b"+0000",
-630: b"-1030",
800: b"+1320",
}
def test_format_date(self):
for date_repr, date in self.dates.items():
self.assertEqual(identifiers.format_date(date), date_repr)
def test_format_date_fail(self):
for date in self.broken_dates:
with self.assertRaises(ValueError):
identifiers.format_date(date)
def test_format_offset(self):
for offset, res in self.offsets.items():
self.assertEqual(identifiers.format_offset(offset), res)
class ContentIdentifier(unittest.TestCase):
def setUp(self):
self.content = {
"status": "visible",
"length": 5,
"data": b"1984\n",
"ctime": datetime.datetime(
2015, 11, 22, 16, 33, 56, tzinfo=datetime.timezone.utc
),
}
self.content_id = hashutil.MultiHash.from_data(self.content["data"]).digest()
def test_content_identifier(self):
self.assertEqual(identifiers.content_identifier(self.content), self.content_id)
directory_example = {
"id": "d7ed3d2c31d608823be58b1cbe57605310615231",
"entries": [
{
"type": "file",
"perms": 33188,
"name": b"README",
"target": _x("37ec8ea2110c0b7a32fbb0e872f6e7debbf95e21"),
},
{
"type": "file",
"perms": 33188,
"name": b"Rakefile",
"target": _x("3bb0e8592a41ae3185ee32266c860714980dbed7"),
},
{
"type": "dir",
"perms": 16384,
"name": b"app",
"target": _x("61e6e867f5d7ba3b40540869bc050b0c4fed9e95"),
},
{
"type": "file",
"perms": 33188,
"name": b"1.megabyte",
"target": _x("7c2b2fbdd57d6765cdc9d84c2d7d333f11be7fb3"),
},
{
"type": "dir",
"perms": 16384,
"name": b"config",
"target": _x("591dfe784a2e9ccc63aaba1cb68a765734310d98"),
},
{
"type": "dir",
"perms": 16384,
"name": b"public",
"target": _x("9588bf4522c2b4648bfd1c61d175d1f88c1ad4a5"),
},
{
"type": "file",
"perms": 33188,
"name": b"development.sqlite3",
"target": _x("e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"),
},
{
"type": "dir",
"perms": 16384,
"name": b"doc",
"target": _x("154705c6aa1c8ead8c99c7915373e3c44012057f"),
},
{
"type": "dir",
"perms": 16384,
"name": b"db",
"target": _x("85f157bdc39356b7bc7de9d0099b4ced8b3b382c"),
},
{
"type": "dir",
"perms": 16384,
"name": b"log",
"target": _x("5e3d3941c51cce73352dff89c805a304ba96fffe"),
},
{
"type": "dir",
"perms": 16384,
"name": b"script",
"target": _x("1b278423caf176da3f3533592012502aa10f566c"),
},
{
"type": "dir",
"perms": 16384,
"name": b"test",
"target": _x("035f0437c080bfd8711670b3e8677e686c69c763"),
},
{
"type": "dir",
"perms": 16384,
"name": b"vendor",
"target": _x("7c0dc9ad978c1af3f9a4ce061e50f5918bd27138"),
},
{
"type": "rev",
"perms": 57344,
"name": b"will_paginate",
"target": _x("3d531e169db92a16a9a8974f0ae6edf52e52659e"),
},
# in git order, the dir named "order" should be between the files
# named "order." and "order0"
{
"type": "dir",
"perms": 16384,
"name": b"order",
"target": _x("62cdb7020ff920e5aa642c3d4066950dd1f01f4d"),
},
{
"type": "file",
"perms": 16384,
"name": b"order.",
"target": _x("0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"),
},
{
"type": "file",
"perms": 16384,
"name": b"order0",
"target": _x("bbe960a25ea311d21d40669e93df2003ba9b90a2"),
},
],
}
class DirectoryIdentifier(unittest.TestCase):
def setUp(self):
self.directory = directory_example
self.empty_directory = {
"id": "4b825dc642cb6eb9a060e54bf8d69288fbee4904",
"entries": [],
}
def test_dir_identifier(self):
self.assertEqual(
identifiers.directory_identifier(self.directory), self.directory["id"]
)
def test_dir_identifier_entry_order(self):
# Reverse order of entries, check the id is still the same.
directory = {"entries": reversed(self.directory["entries"])}
self.assertEqual(
identifiers.directory_identifier(directory), self.directory["id"]
)
def test_dir_identifier_empty_directory(self):
self.assertEqual(
identifiers.directory_identifier(self.empty_directory),
self.empty_directory["id"],
)
linus_tz = datetime.timezone(datetime.timedelta(minutes=-420))
revision_example = {
"id": "bc0195aad0daa2ad5b0d76cce22b167bc3435590",
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"committer_date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"message": b"Linux 4.2-rc2\n",
"type": "git",
"synthetic": False,
}
class RevisionIdentifier(unittest.TestCase):
def setUp(self):
gpgsig = b"""\
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.13 (Darwin)
iQIcBAABAgAGBQJVJcYsAAoJEBiY3kIkQRNJVAUQAJ8/XQIfMqqC5oYeEFfHOPYZ
L7qy46bXHVBa9Qd8zAJ2Dou3IbI2ZoF6/Et89K/UggOycMlt5FKV/9toWyuZv4Po
L682wonoxX99qvVTHo6+wtnmYO7+G0f82h+qHMErxjP+I6gzRNBvRr+SfY7VlGdK
wikMKOMWC5smrScSHITnOq1Ews5pe3N7qDYMzK0XVZmgDoaem4RSWMJs4My/qVLN
e0CqYWq2A22GX7sXl6pjneJYQvcAXUX+CAzp24QnPSb+Q22Guj91TcxLFcHCTDdn
qgqMsEyMiisoglwrCbO+D+1xq9mjN9tNFWP66SQ48mrrHYTBV5sz9eJyDfroJaLP
CWgbDTgq6GzRMehHT3hXfYS5NNatjnhkNISXR7pnVP/obIi/vpWh5ll6Gd8q26z+
a/O41UzOaLTeNI365MWT4/cnXohVLRG7iVJbAbCxoQmEgsYMRc/pBAzWJtLfcB2G
jdTswYL6+MUdL8sB9pZ82D+BP/YAdHe69CyTu1lk9RT2pYtI/kkfjHubXBCYEJSG
+VGllBbYG6idQJpyrOYNRJyrDi9yvDJ2W+S0iQrlZrxzGBVGTB/y65S8C+2WTBcE
lf1Qb5GDsQrZWgD+jtWTywOYHtCBwyCKSAXxSARMbNPeak9WPlcW/Jmu+fUcMe2x
dg1KdHOa34shrKDaOVzW
=od6m
-----END PGP SIGNATURE-----"""
self.revision = revision_example
self.revision_none_metadata = {
"id": "bc0195aad0daa2ad5b0d76cce22b167bc3435590",
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
},
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
},
"committer_date": datetime.datetime(
2015, 7, 12, 15, 10, 30, tzinfo=linus_tz
),
"message": b"Linux 4.2-rc2\n",
"metadata": None,
}
self.synthetic_revision = {
"id": b"\xb2\xa7\xe1&\x04\x92\xe3D\xfa\xb3\xcb\xf9\x1b\xc1<\x91"
b"\xe0T&\xfd",
"author": {
"name": b"Software Heritage",
"email": b"robot@softwareheritage.org",
},
"date": {
"timestamp": {"seconds": 1437047495},
"offset": 0,
"negative_utc": False,
},
"type": "tar",
"committer": {
"name": b"Software Heritage",
"email": b"robot@softwareheritage.org",
},
"committer_date": 1437047495,
"synthetic": True,
"parents": [None],
"message": b"synthetic revision message\n",
"directory": b"\xd1\x1f\x00\xa6\xa0\xfe\xa6\x05SA\xd2U\x84\xb5\xa9"
b"e\x16\xc0\xd2\xb8",
"metadata": {
"original_artifact": [
{
"archive_type": "tar",
"name": "gcc-5.2.0.tar.bz2",
"sha1_git": "39d281aff934d44b439730057e55b055e206a586",
"sha1": "fe3f5390949d47054b613edc36c557eb1d51c18e",
"sha256": "5f835b04b5f7dd4f4d2dc96190ec1621b8d89f"
"2dc6f638f9f8bc1b1014ba8cad",
}
]
},
}
# cat commit.txt | git hash-object -t commit --stdin
self.revision_with_extra_headers = {
"id": "010d34f384fa99d047cdd5e2f41e56e5c2feee45",
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"committer_date": datetime.datetime(
2015, 7, 12, 15, 10, 30, tzinfo=linus_tz
),
"message": b"Linux 4.2-rc2\n",
"metadata": {
"extra_headers": [
["svn-repo-uuid", "046f1af7-66c2-d61b-5410-ce57b7db7bff"],
["svn-revision", 10],
]
},
}
self.revision_with_gpgsig = {
"id": "44cc742a8ca17b9c279be4cc195a93a6ef7a320e",
"directory": _x("b134f9b7dc434f593c0bab696345548b37de0558"),
"parents": [
_x("689664ae944b4692724f13b709a4e4de28b54e57"),
_x("c888305e1efbaa252d01b4e5e6b778f865a97514"),
],
"author": {
"name": b"Jiang Xin",
"email": b"worldhello.net@gmail.com",
"fullname": b"Jiang Xin <worldhello.net@gmail.com>",
},
"date": {"timestamp": 1428538899, "offset": 480,},
"committer": {"name": b"Jiang Xin", "email": b"worldhello.net@gmail.com",},
"committer_date": {"timestamp": 1428538899, "offset": 480,},
"metadata": {"extra_headers": [["gpgsig", gpgsig],],},
"message": b"""Merge branch 'master' of git://github.com/alexhenrie/git-po
* 'master' of git://github.com/alexhenrie/git-po:
l10n: ca.po: update translation
""",
}
self.revision_no_message = {
"id": "4cfc623c9238fa92c832beed000ce2d003fd8333",
"directory": _x("b134f9b7dc434f593c0bab696345548b37de0558"),
"parents": [
_x("689664ae944b4692724f13b709a4e4de28b54e57"),
_x("c888305e1efbaa252d01b4e5e6b778f865a97514"),
],
"author": {
"name": b"Jiang Xin",
"email": b"worldhello.net@gmail.com",
"fullname": b"Jiang Xin <worldhello.net@gmail.com>",
},
"date": {"timestamp": 1428538899, "offset": 480,},
"committer": {"name": b"Jiang Xin", "email": b"worldhello.net@gmail.com",},
"committer_date": {"timestamp": 1428538899, "offset": 480,},
"message": None,
}
self.revision_empty_message = {
"id": "7442cd78bd3b4966921d6a7f7447417b7acb15eb",
"directory": _x("b134f9b7dc434f593c0bab696345548b37de0558"),
"parents": [
_x("689664ae944b4692724f13b709a4e4de28b54e57"),
_x("c888305e1efbaa252d01b4e5e6b778f865a97514"),
],
"author": {
"name": b"Jiang Xin",
"email": b"worldhello.net@gmail.com",
"fullname": b"Jiang Xin <worldhello.net@gmail.com>",
},
"date": {"timestamp": 1428538899, "offset": 480,},
"committer": {"name": b"Jiang Xin", "email": b"worldhello.net@gmail.com",},
"committer_date": {"timestamp": 1428538899, "offset": 480,},
"message": b"",
}
self.revision_only_fullname = {
"id": "010d34f384fa99d047cdd5e2f41e56e5c2feee45",
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
"author": {"fullname": b"Linus Torvalds ",},
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"fullname": b"Linus Torvalds ",
},
"committer_date": datetime.datetime(
2015, 7, 12, 15, 10, 30, tzinfo=linus_tz
),
"message": b"Linux 4.2-rc2\n",
"metadata": {
"extra_headers": [
["svn-repo-uuid", "046f1af7-66c2-d61b-5410-ce57b7db7bff"],
["svn-revision", 10],
]
},
}
def test_revision_identifier(self):
self.assertEqual(
identifiers.revision_identifier(self.revision),
identifiers.identifier_to_str(self.revision["id"]),
)
def test_revision_identifier_none_metadata(self):
self.assertEqual(
identifiers.revision_identifier(self.revision_none_metadata),
identifiers.identifier_to_str(self.revision_none_metadata["id"]),
)
def test_revision_identifier_synthetic(self):
self.assertEqual(
identifiers.revision_identifier(self.synthetic_revision),
identifiers.identifier_to_str(self.synthetic_revision["id"]),
)
def test_revision_identifier_with_extra_headers(self):
self.assertEqual(
identifiers.revision_identifier(self.revision_with_extra_headers),
identifiers.identifier_to_str(self.revision_with_extra_headers["id"]),
)
def test_revision_identifier_with_gpgsig(self):
self.assertEqual(
identifiers.revision_identifier(self.revision_with_gpgsig),
identifiers.identifier_to_str(self.revision_with_gpgsig["id"]),
)
def test_revision_identifier_no_message(self):
self.assertEqual(
identifiers.revision_identifier(self.revision_no_message),
identifiers.identifier_to_str(self.revision_no_message["id"]),
)
def test_revision_identifier_empty_message(self):
self.assertEqual(
identifiers.revision_identifier(self.revision_empty_message),
identifiers.identifier_to_str(self.revision_empty_message["id"]),
)
def test_revision_identifier_only_fullname(self):
self.assertEqual(
identifiers.revision_identifier(self.revision_only_fullname),
identifiers.identifier_to_str(self.revision_only_fullname["id"]),
)
release_example = {
"id": "2b10839e32c4c476e9d94492756bb1a3e1ec4aa8",
"target": b't\x1b"R\xa5\xe1Ml`\xa9\x13\xc7z`\x99\xab\xe7:\x85J',
"target_type": "revision",
"name": b"v2.6.14",
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@g5.osdl.org",
"fullname": b"Linus Torvalds ",
},
"date": datetime.datetime(2005, 10, 27, 17, 2, 33, tzinfo=linus_tz),
"message": b"""\
Linux 2.6.14 release
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (GNU/Linux)
iD8DBQBDYWq6F3YsRnbiHLsRAmaeAJ9RCez0y8rOBbhSv344h86l/VVcugCeIhO1
wdLOnvj91G4wxYqrvThthbE=
=7VeT
-----END PGP SIGNATURE-----
""",
"synthetic": False,
}
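# Aside (illustrative sketch, not swh.model code): release identifiers such as
# release_example["id"] above are git-compatible annotated-tag hashes, i.e.
# sha1 over a "tag <length>\0" header followed by the serialized tag manifest.
# The helper name below is hypothetical and only shows the hashing scheme.
import hashlib


def git_tag_sha1_sketch(manifest: bytes) -> str:
    # git object hashing: sha1(b"tag <len>\0" + manifest)
    return hashlib.sha1(b"tag %d\x00" % len(manifest) + manifest).hexdigest()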
class ReleaseIdentifier(unittest.TestCase):
def setUp(self):
linus_tz = datetime.timezone(datetime.timedelta(minutes=-420))
self.release = release_example
self.release_no_author = {
"id": b"&y\x1a\x8b\xcf\x0em3\xf4:\xefv\x82\xbd\xb5U#mV\xde",
"target": "9ee1c939d1cb936b1f98e8d81aeffab57bae46ab",
"target_type": "revision",
"name": b"v2.6.12",
"message": b"""\
This is the final 2.6.12 release
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)
iD8DBQBCsykyF3YsRnbiHLsRAvPNAJ482tCZwuxp/bJRz7Q98MHlN83TpACdHr37
o6X/3T+vm8K3bf3driRr34c=
=sBHn
-----END PGP SIGNATURE-----
""",
"synthetic": False,
}
self.release_no_message = {
"id": "b6f4f446715f7d9543ef54e41b62982f0db40045",
"target": "9ee1c939d1cb936b1f98e8d81aeffab57bae46ab",
"target_type": "revision",
"name": b"v2.6.12",
"author": {"name": b"Linus Torvalds", "email": b"torvalds@g5.osdl.org",},
"date": datetime.datetime(2005, 10, 27, 17, 2, 33, tzinfo=linus_tz),
"message": None,
}
self.release_empty_message = {
"id": "71a0aea72444d396575dc25ac37fec87ee3c6492",
"target": "9ee1c939d1cb936b1f98e8d81aeffab57bae46ab",
"target_type": "revision",
"name": b"v2.6.12",
"author": {"name": b"Linus Torvalds", "email": b"torvalds@g5.osdl.org",},
"date": datetime.datetime(2005, 10, 27, 17, 2, 33, tzinfo=linus_tz),
"message": b"",
}
self.release_negative_utc = {
"id": "97c8d2573a001f88e72d75f596cf86b12b82fd01",
"name": b"20081029",
"target": "54e9abca4c77421e2921f5f156c9fe4a9f7441c7",
"target_type": "revision",
"date": {
"timestamp": {"seconds": 1225281976},
"offset": 0,
"negative_utc": True,
},
"author": {
"name": b"Otavio Salvador",
"email": b"otavio@debian.org",
"id": 17640,
},
"synthetic": False,
"message": b"tagging version 20081029\n\nr56558\n",
}
self.release_newline_in_author = {
"author": {
"email": b"esycat@gmail.com",
"fullname": b"Eugene Janusov\n",
"name": b"Eugene Janusov\n",
},
"date": {
"negative_utc": None,
"offset": 600,
"timestamp": {"microseconds": 0, "seconds": 1377480558,},
},
"id": b"\\\x98\xf5Y\xd04\x16-\xe2->\xbe\xb9T3\xe6\xf8\x88R1",
"message": b"Release of v0.3.2.",
"name": b"0.3.2",
"synthetic": False,
"target": (b"\xc0j\xa3\xd9;x\xa2\x86\\I5\x17" b"\x000\xf8\xc2\xd79o\xd3"),
"target_type": "revision",
}
self.release_snapshot_target = dict(self.release)
self.release_snapshot_target["target_type"] = "snapshot"
self.release_snapshot_target["id"] = "c29c3ddcc6769a04e54dd69d63a6fdcbc566f850"
def test_release_identifier(self):
self.assertEqual(
identifiers.release_identifier(self.release),
identifiers.identifier_to_str(self.release["id"]),
)
def test_release_identifier_no_author(self):
self.assertEqual(
identifiers.release_identifier(self.release_no_author),
identifiers.identifier_to_str(self.release_no_author["id"]),
)
def test_release_identifier_no_message(self):
self.assertEqual(
identifiers.release_identifier(self.release_no_message),
identifiers.identifier_to_str(self.release_no_message["id"]),
)
def test_release_identifier_empty_message(self):
self.assertEqual(
identifiers.release_identifier(self.release_empty_message),
identifiers.identifier_to_str(self.release_empty_message["id"]),
)
def test_release_identifier_negative_utc(self):
self.assertEqual(
identifiers.release_identifier(self.release_negative_utc),
identifiers.identifier_to_str(self.release_negative_utc["id"]),
)
def test_release_identifier_newline_in_author(self):
self.assertEqual(
identifiers.release_identifier(self.release_newline_in_author),
identifiers.identifier_to_str(self.release_newline_in_author["id"]),
)
def test_release_identifier_snapshot_target(self):
self.assertEqual(
identifiers.release_identifier(self.release_snapshot_target),
identifiers.identifier_to_str(self.release_snapshot_target["id"]),
)
snapshot_example = {
"id": _x("6e65b86363953b780d92b0a928f3e8fcdd10db36"),
"branches": {
b"directory": {
"target": _x("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"),
"target_type": "directory",
},
b"content": {
"target": _x("fe95a46679d128ff167b7c55df5d02356c5a1ae1"),
"target_type": "content",
},
b"alias": {"target": b"revision", "target_type": "alias",},
b"revision": {
"target": _x("aafb16d69fd30ff58afdd69036a26047f3aebdc6"),
"target_type": "revision",
},
b"release": {
"target": _x("7045404f3d1c54e6473c71bbb716529fbad4be24"),
"target_type": "release",
},
b"snapshot": {
"target": _x("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"),
"target_type": "snapshot",
},
b"dangling": None,
},
}
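# Aside (illustrative sketch, not swh.model code): the content identifiers
# exercised in these tests are git-compatible blob hashes, i.e. sha1 over a
# "blob <length>\0" header plus the raw bytes.  The helper name is
# hypothetical; e.g. the empty blob hashes to git's well-known e69de29b... id.
import hashlib


def git_blob_sha1_sketch(data: bytes) -> str:
    return hashlib.sha1(b"blob %d\x00" % len(data) + data).hexdigest()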
class SnapshotIdentifier(unittest.TestCase):
def setUp(self):
super().setUp()
self.empty = {
"id": "1a8893e6a86f444e8be8e7bda6cb34fb1735a00e",
"branches": {},
}
self.dangling_branch = {
"id": "c84502e821eb21ed84e9fd3ec40973abc8b32353",
"branches": {b"HEAD": None,},
}
self.unresolved = {
"id": "84b4548ea486e4b0a7933fa541ff1503a0afe1e0",
"branches": {b"foo": {"target": b"bar", "target_type": "alias",},},
}
self.all_types = snapshot_example
def test_empty_snapshot(self):
self.assertEqual(
identifiers.snapshot_identifier(self.empty),
identifiers.identifier_to_str(self.empty["id"]),
)
def test_dangling_branch(self):
self.assertEqual(
identifiers.snapshot_identifier(self.dangling_branch),
identifiers.identifier_to_str(self.dangling_branch["id"]),
)
def test_unresolved(self):
with self.assertRaisesRegex(ValueError, "b'foo' -> b'bar'"):
identifiers.snapshot_identifier(self.unresolved)
def test_unresolved_force(self):
self.assertEqual(
identifiers.snapshot_identifier(self.unresolved, ignore_unresolved=True,),
identifiers.identifier_to_str(self.unresolved["id"]),
)
def test_all_types(self):
self.assertEqual(
identifiers.snapshot_identifier(self.all_types),
identifiers.identifier_to_str(self.all_types["id"]),
)
    def test_swhid(self):
_snapshot_id = _x("c7c108084bc0bf3d81436bf980b46e98bd338453")
_release_id = "22ece559cc7cc2364edc5e5593d63ae8bd229f9f"
_revision_id = "309cf2674ee7a0749978cf8265ab91a60aea0f7d"
_directory_id = "d198bc9d7a6bcf6db04f476d29314f157507d505"
_content_id = "94a9ed024d3859793618152ea559a168bbcbb5e2"
_snapshot = {"id": _snapshot_id}
_release = {"id": _release_id}
_revision = {"id": _revision_id}
_directory = {"id": _directory_id}
_content = {"sha1_git": _content_id}
        for full_type, _hash, expected_swhid, version, _meta in [
(
SNAPSHOT,
_snapshot_id,
"swh:1:snp:c7c108084bc0bf3d81436bf980b46e98bd338453",
None,
{},
),
(
RELEASE,
_release_id,
"swh:1:rel:22ece559cc7cc2364edc5e5593d63ae8bd229f9f",
1,
{},
),
(
REVISION,
_revision_id,
"swh:1:rev:309cf2674ee7a0749978cf8265ab91a60aea0f7d",
None,
{},
),
(
DIRECTORY,
_directory_id,
"swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505",
None,
{},
),
(
CONTENT,
_content_id,
"swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2",
1,
{},
),
(
SNAPSHOT,
_snapshot,
"swh:1:snp:c7c108084bc0bf3d81436bf980b46e98bd338453",
None,
{},
),
(
RELEASE,
_release,
"swh:1:rel:22ece559cc7cc2364edc5e5593d63ae8bd229f9f",
1,
{},
),
(
REVISION,
_revision,
"swh:1:rev:309cf2674ee7a0749978cf8265ab91a60aea0f7d",
None,
{},
),
(
DIRECTORY,
_directory,
"swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505",
None,
{},
),
(
CONTENT,
_content,
"swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2",
1,
{},
),
(
CONTENT,
_content,
"swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2;origin=1",
1,
{"origin": "1"},
),
]:
if version:
                actual_value = identifiers.swhid(
full_type, _hash, version, metadata=_meta
)
else:
                actual_value = identifiers.swhid(full_type, _hash, metadata=_meta)
            self.assertEqual(actual_value, expected_swhid)
    def test_swhid_wrong_input(self):
_snapshot_id = "notahash4bc0bf3d81436bf980b46e98bd338453"
_snapshot = {"id": _snapshot_id}
for _type, _hash in [
(SNAPSHOT, _snapshot_id),
(SNAPSHOT, _snapshot),
("foo", ""),
]:
with self.assertRaises(ValidationError):
                identifiers.swhid(_type, _hash)
    def test_parse_swhid(self):
        for swhid, _type, _version, _hash in [
(
"swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2",
CONTENT,
1,
"94a9ed024d3859793618152ea559a168bbcbb5e2",
),
(
"swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505",
DIRECTORY,
1,
"d198bc9d7a6bcf6db04f476d29314f157507d505",
),
(
"swh:1:rev:309cf2674ee7a0749978cf8265ab91a60aea0f7d",
REVISION,
1,
"309cf2674ee7a0749978cf8265ab91a60aea0f7d",
),
(
"swh:1:rel:22ece559cc7cc2364edc5e5593d63ae8bd229f9f",
RELEASE,
1,
"22ece559cc7cc2364edc5e5593d63ae8bd229f9f",
),
(
"swh:1:snp:c7c108084bc0bf3d81436bf980b46e98bd338453",
SNAPSHOT,
1,
"c7c108084bc0bf3d81436bf980b46e98bd338453",
),
]:
            expected_result = SWHID(
namespace="swh",
scheme_version=_version,
object_type=_type,
object_id=_hash,
metadata={},
)
            actual_result = identifiers.parse_swhid(swhid)
self.assertEqual(actual_result, expected_result)
        for swhid, _type, _version, _hash, _metadata in [
(
"swh:1:cnt:9c95815d9e9d91b8dae8e05d8bbc696fe19f796b;lines=1-18;origin=https://github.com/python/cpython", # noqa
CONTENT,
1,
"9c95815d9e9d91b8dae8e05d8bbc696fe19f796b",
{"lines": "1-18", "origin": "https://github.com/python/cpython"},
),
(
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;origin=deb://Debian/packages/linuxdoc-tools", # noqa
DIRECTORY,
1,
"0b6959356d30f1a4e9b7f6bca59b9a336464c03d",
{"origin": "deb://Debian/packages/linuxdoc-tools"},
),
]:
            expected_result = SWHID(
namespace="swh",
scheme_version=_version,
object_type=_type,
object_id=_hash,
metadata=_metadata,
)
            actual_result = identifiers.parse_swhid(swhid)
self.assertEqual(actual_result, expected_result)
    def test_parse_swhid_parsing_error(self):
        for swhid in [
("swh:1:cnt"),
("swh:1:"),
("swh:"),
("swh:1:cnt:"),
("foo:1:cnt:abc8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh:2:dir:def8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh:1:foo:fed8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;invalid;" "malformed"),
("swh:1:snp:gh6959356d30f1a4e9b7f6bca59b9a336464c03d"),
("swh:1:snp:foo"),
]:
with self.assertRaises(ValidationError):
                identifiers.parse_swhid(swhid)
    def test_swhid_class_validation_error(self):
for _ns, _version, _type, _id in [
("foo", 1, CONTENT, "abc8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh", 2, DIRECTORY, "def8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh", 1, "foo", "fed8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh", 1, SNAPSHOT, "gh6959356d30f1a4e9b7f6bca59b9a336464c03d"),
]:
with self.assertRaises(ValidationError):
                SWHID(
namespace=_ns,
scheme_version=_version,
object_type=_type,
object_id=_id,
)
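# Aside (illustrative sketch of the core-SWHID syntax exercised above; the
# real parser is identifiers.parse_swhid, and the names below are
# hypothetical): a core SWHID reads "swh:1:<snp|rel|rev|dir|cnt>:<40 hex
# digits>".  This standalone regex-based parser accepts only that core form.
import re

_CORE_SWHID_RE = re.compile(
    r"^swh:(?P<version>\d+):(?P<type>snp|rel|rev|dir|cnt):(?P<id>[0-9a-f]{40})$"
)


def parse_core_swhid_sketch(s):
    m = _CORE_SWHID_RE.match(s)
    if m is None:
        raise ValueError("not a core SWHID: %r" % (s,))
    return int(m.group("version")), m.group("type"), m.group("id")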
class OriginIdentifier(unittest.TestCase):
def setUp(self):
self.origin = {
"url": "https://github.com/torvalds/linux",
}
    def test_origin_identifier(self):
self.assertEqual(
identifiers.origin_identifier(self.origin),
"b63a575fe3faab7692c9f38fb09d4bb45651bb0f",
)
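# Aside (illustrative sketch; the intrinsic algorithm is an assumption, not a
# statement about swh.model internals): origin identifiers like the one
# asserted above look like plain sha1 digests of the origin URL bytes.  The
# hypothetical helper below mirrors that idea in a standalone way.
import hashlib


def origin_sha1_sketch(url: str) -> str:
    return hashlib.sha1(url.encode()).hexdigest()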
TS_DICTS = [
(
{"timestamp": 12345, "offset": 0},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": False,
},
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": False},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": False,
},
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": False},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": False,
},
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": None},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": False,
},
),
(
{"timestamp": {"seconds": 12345}, "offset": 0, "negative_utc": None},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": False,
},
),
(
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": None,
},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": False,
},
),
(
{
"timestamp": {"seconds": 12345, "microseconds": 100},
"offset": 0,
"negative_utc": None,
},
{
"timestamp": {"seconds": 12345, "microseconds": 100},
"offset": 0,
"negative_utc": False,
},
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": True},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": True,
},
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": None},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": False,
},
),
]
@pytest.mark.parametrize("dict_input,expected", TS_DICTS)
def test_normalize_timestamp_dict(dict_input, expected):
assert normalize_timestamp(dict_input) == expected
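# Aside (illustrative sketch of the normalization behavior the TS_DICTS pairs
# above encode; the real implementation is normalize_timestamp, and this
# helper is hypothetical): a bare integer timestamp becomes a
# seconds/microseconds pair, missing microseconds default to 0, and a missing
# or None negative_utc defaults to False.
def normalize_ts_sketch(d):
    ts = d["timestamp"]
    if isinstance(ts, int):
        ts = {"seconds": ts, "microseconds": 0}
    else:
        ts = {"seconds": ts["seconds"], "microseconds": ts.get("microseconds", 0)}
    return {
        "timestamp": ts,
        "offset": d["offset"],
        "negative_utc": bool(d.get("negative_utc") or False),
    }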
TS_DICTS_INVALID_TIMESTAMP = [
{"timestamp": 1.2, "offset": 0},
{"timestamp": "1", "offset": 0},
# these below should really also trigger a ValueError...
# {"timestamp": {"seconds": "1"}, "offset": 0},
# {"timestamp": {"seconds": 1.2}, "offset": 0},
# {"timestamp": {"seconds": 1.2}, "offset": 0},
]
@pytest.mark.parametrize("dict_input", TS_DICTS_INVALID_TIMESTAMP)
def test_normalize_timestamp_dict_invalid_timestamp(dict_input):
with pytest.raises(ValueError, match="non-integer timestamp"):
normalize_timestamp(dict_input)