diff --git a/.git-blame-ignore-revs b/.git-blame-ignore-revs
new file mode 100644
index 0000000..bc1957e
--- /dev/null
+++ b/.git-blame-ignore-revs
@@ -0,0 +1,5 @@
+# Enable black
+bf3f1cec8685c8f480ddd95027852f8caa10b8e3
+
+# python: Reformat code with black 22.3.0
+4c39334b2aa9f782950aaee72781dc1df9d37550
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 05398bb..d0b93d3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,42 +1,43 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.1.0
hooks:
- id: trailing-whitespace
- id: check-json
- id: check-yaml
- repo: https://gitlab.com/pycqa/flake8
rev: 4.0.1
hooks:
- id: flake8
+ additional_dependencies: [flake8-bugbear==22.3.23]
- repo: https://github.com/codespell-project/codespell
rev: v2.1.0
hooks:
- id: codespell
name: Check source code spelling
stages: [commit]
- id: codespell
name: Check commit message spelling
stages: [commit-msg]
- repo: local
hooks:
- id: mypy
name: mypy
entry: mypy
args: [swh]
pass_filenames: false
language: system
types: [python]
- repo: https://github.com/PyCQA/isort
rev: 5.10.1
hooks:
- id: isort
- repo: https://github.com/python/black
- rev: 19.10b0
+ rev: 22.3.0
hooks:
- id: black
diff --git a/PKG-INFO b/PKG-INFO
index 2719958..e632705 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,46 +1,46 @@
Metadata-Version: 2.1
Name: swh.model
-Version: 6.0.1
+Version: 6.1.0
Summary: Software Heritage data model
Home-page: https://forge.softwareheritage.org/diffusion/DMOD/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-model
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-model/
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: cli
Provides-Extra: testing-minimal
Provides-Extra: testing
License-File: LICENSE
License-File: AUTHORS
swh-model
=========
Implementation of the Data model of the Software Heritage project, used to
archive source code artifacts.
This module defines the notion of SoftWare Heritage persistent IDentifiers
(SWHIDs) and provides tools to compute them:
```sh
$ swh-identify fork.c kmod.c sched/deadline.c
swh:1:cnt:2e391c754ae730bd2d8520c2ab497c403220c6e3 fork.c
swh:1:cnt:0277d1216f80ae1adeed84a686ed34c9b2931fc2 kmod.c
swh:1:cnt:57b939c81bce5d06fa587df8915f05affbe22b82 sched/deadline.c
$ swh-identify --no-filename /usr/src/linux/kernel/
swh:1:dir:f9f858a48d663b3809c9e2f336412717496202ab
```
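For reference, the same identifiers can be computed from Python. The sketch below (a minimal example, assuming `swh.model` is installed) uses the `Content.from_bytes` API that `swh identify` itself relies on, with the empty content whose git blob hash is well known:

```python
from swh.model.from_disk import Content

# Hash an in-memory content the same way `swh identify -` does.
content = Content.from_bytes(mode=0o100644, data=b"")
# The empty blob maps to git's well-known object id.
print(content.swhid())  # swh:1:cnt:e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```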
diff --git a/bin/swh-revhash b/bin/swh-revhash
index 56b587d..ad76f9e 100755
--- a/bin/swh-revhash
+++ b/bin/swh-revhash
@@ -1,31 +1,29 @@
#!/usr/bin/env python3
# Use:
# swh-revhash 'tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\nparent 22c0fa5195a53f2e733ec75a9b6e9d1624a8b771\nauthor seanius <seanius@3187e211-bb14-4c82-9596-0b59d67cd7f4> 1138341044 +0000\ncommitter seanius <seanius@3187e211-bb14-4c82-9596-0b59d67cd7f4> 1138341044 +0000\n\nmaking dir structure...\n' # noqa
# output: 17a631d474f49bbebfdf3d885dcde470d7faafd7
# To compare with git:
# git-revhash 'tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\nparent 22c0fa5195a53f2e733ec75a9b6e9d1624a8b771\nauthor seanius <seanius@3187e211-bb14-4c82-9596-0b59d67cd7f4> 1138341044 +0000\ncommitter seanius <seanius@3187e211-bb14-4c82-9596-0b59d67cd7f4> 1138341044 +0000\n\nmaking dir structure...\n' # noqa
# output: 17a631d474f49bbebfdf3d885dcde470d7faafd7
import sys
from swh.model import hashutil, identifiers
def revhash(revision_raw):
- """Compute the revision hash.
-
- """
+ """Compute the revision hash."""
# HACK: string have somehow their \n expanded to \\n
if b"\\n" in revision_raw:
revision_raw = revision_raw.replace(b"\\n", b"\n")
h = hashutil.hash_git_data(revision_raw, "commit")
return identifiers.identifier_to_str(h)
if __name__ == "__main__":
revision_raw = sys.argv[1].encode("utf-8")
print(revhash(revision_raw))
diff --git a/setup.cfg b/setup.cfg
index 1d722c2..f65ba0a 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,8 +1,9 @@
[flake8]
-ignore = E203,E231,W503
+select = C,E,F,W,B950
+ignore = E203,E231,E501,W503
max-line-length = 88
[egg_info]
tag_build =
tag_date = 0
diff --git a/swh.model.egg-info/PKG-INFO b/swh.model.egg-info/PKG-INFO
index 2719958..e632705 100644
--- a/swh.model.egg-info/PKG-INFO
+++ b/swh.model.egg-info/PKG-INFO
@@ -1,46 +1,46 @@
Metadata-Version: 2.1
Name: swh.model
-Version: 6.0.1
+Version: 6.1.0
Summary: Software Heritage data model
Home-page: https://forge.softwareheritage.org/diffusion/DMOD/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
Project-URL: Funding, https://www.softwareheritage.org/donate
Project-URL: Source, https://forge.softwareheritage.org/source/swh-model
Project-URL: Documentation, https://docs.softwareheritage.org/devel/swh-model/
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: cli
Provides-Extra: testing-minimal
Provides-Extra: testing
License-File: LICENSE
License-File: AUTHORS
swh-model
=========
Implementation of the Data model of the Software Heritage project, used to
archive source code artifacts.
This module defines the notion of SoftWare Heritage persistent IDentifiers
(SWHIDs) and provides tools to compute them:
```sh
$ swh-identify fork.c kmod.c sched/deadline.c
swh:1:cnt:2e391c754ae730bd2d8520c2ab497c403220c6e3 fork.c
swh:1:cnt:0277d1216f80ae1adeed84a686ed34c9b2931fc2 kmod.c
swh:1:cnt:57b939c81bce5d06fa587df8915f05affbe22b82 sched/deadline.c
$ swh-identify --no-filename /usr/src/linux/kernel/
swh:1:dir:f9f858a48d663b3809c9e2f336412717496202ab
```
diff --git a/swh.model.egg-info/SOURCES.txt b/swh.model.egg-info/SOURCES.txt
index bc820c0..31e7e3b 100644
--- a/swh.model.egg-info/SOURCES.txt
+++ b/swh.model.egg-info/SOURCES.txt
@@ -1,85 +1,86 @@
+.git-blame-ignore-revs
.gitignore
.pre-commit-config.yaml
AUTHORS
CODE_OF_CONDUCT.md
CONTRIBUTORS
LICENSE
MANIFEST.in
Makefile
Makefile.local
README.md
mypy.ini
pyproject.toml
pytest.ini
requirements-cli.txt
requirements-test.txt
requirements.txt
setup.cfg
setup.py
tox.ini
bin/git-revhash
bin/swh-hashtree
bin/swh-revhash
docs/.gitignore
docs/Makefile
docs/Makefile.local
docs/cli.rst
docs/conf.py
docs/data-model.rst
docs/iana-swh-template.txt
docs/index.rst
docs/persistent-identifiers.rst
docs/_static/.placeholder
docs/_templates/.placeholder
docs/images/.gitignore
docs/images/Makefile
docs/images/swh-merkle-dag.dia
swh/__init__.py
swh.model.egg-info/PKG-INFO
swh.model.egg-info/SOURCES.txt
swh.model.egg-info/dependency_links.txt
swh.model.egg-info/entry_points.txt
swh.model.egg-info/requires.txt
swh.model.egg-info/top_level.txt
swh/model/__init__.py
swh/model/cli.py
swh/model/collections.py
swh/model/exceptions.py
swh/model/from_disk.py
swh/model/git_objects.py
swh/model/hashutil.py
swh/model/hypothesis_strategies.py
swh/model/identifiers.py
swh/model/merkle.py
swh/model/model.py
swh/model/py.typed
swh/model/swhids.py
swh/model/toposort.py
swh/model/validators.py
swh/model/fields/__init__.py
swh/model/fields/compound.py
swh/model/fields/hashes.py
swh/model/fields/simple.py
swh/model/tests/__init__.py
swh/model/tests/generate_testdata.py
swh/model/tests/generate_testdata_from_disk.py
swh/model/tests/swh_model_data.py
swh/model/tests/test_cli.py
swh/model/tests/test_collections.py
swh/model/tests/test_from_disk.py
swh/model/tests/test_generate_testdata.py
swh/model/tests/test_hashutil.py
swh/model/tests/test_hypothesis_strategies.py
swh/model/tests/test_identifiers.py
swh/model/tests/test_merkle.py
swh/model/tests/test_model.py
swh/model/tests/test_swh_model_data.py
swh/model/tests/test_swhids.py
swh/model/tests/test_toposort.py
swh/model/tests/test_validators.py
swh/model/tests/data/dir-folders/sample-folder.tgz
swh/model/tests/data/repos/sample-repo.tgz
swh/model/tests/fields/__init__.py
swh/model/tests/fields/test_compound.py
swh/model/tests/fields/test_hashes.py
swh/model/tests/fields/test_simple.py
\ No newline at end of file
diff --git a/swh.model.egg-info/requires.txt b/swh.model.egg-info/requires.txt
index 6ec0311..abf9475 100644
--- a/swh.model.egg-info/requires.txt
+++ b/swh.model.egg-info/requires.txt
@@ -1,29 +1,28 @@
attrs!=21.1.0
attrs_strict>=0.0.7
deprecated
hypothesis
iso8601
python-dateutil
typing_extensions
[cli]
swh.core>=0.3
Click
dulwich
[testing]
click
pytest
pytz
types-python-dateutil
types-pytz
swh.core>=0.3
-Click
dulwich
[testing-minimal]
click
pytest
pytz
types-python-dateutil
types-pytz
diff --git a/swh/model/cli.py b/swh/model/cli.py
index 583d5c1..6220661 100644
--- a/swh/model/cli.py
+++ b/swh/model/cli.py
@@ -1,307 +1,310 @@
# Copyright (C) 2018-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import os
import sys
from typing import Dict, Iterable, Optional
# WARNING: do not import unnecessary things here to keep cli startup time under
# control
try:
import click
except ImportError:
print(
"Cannot run swh-identify; the Click package is not installed."
"Please install 'swh.model[cli]' for full functionality.",
file=sys.stderr,
)
exit(1)
try:
from swh.core.cli import swh as swh_cli_group
except ImportError:
# stub so that swh-identify can be used when swh-core isn't installed
swh_cli_group = click # type: ignore
from swh.model.from_disk import Directory
from swh.model.swhids import CoreSWHID
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
# Mapping between dulwich types and Software Heritage ones. Used by snapshot ID
# computation.
_DULWICH_TYPES = {
b"blob": "content",
b"tree": "directory",
b"commit": "revision",
b"tag": "release",
}
class CoreSWHIDParamType(click.ParamType):
"""Click argument that accepts a core SWHID and returns them as
- :class:`swh.model.swhids.CoreSWHID` instances """
+ :class:`swh.model.swhids.CoreSWHID` instances"""
name = "SWHID"
def convert(self, value, param, ctx) -> CoreSWHID:
from swh.model.exceptions import ValidationError
try:
return CoreSWHID.from_string(value)
except ValidationError as e:
self.fail(f'"{value}" is not a valid core SWHID: {e}', param, ctx)
def swhid_of_file(path) -> CoreSWHID:
from swh.model.from_disk import Content
object = Content.from_file(path=path)
return object.swhid()
def swhid_of_file_content(data) -> CoreSWHID:
from swh.model.from_disk import Content
object = Content.from_bytes(mode=644, data=data)
return object.swhid()
def model_of_dir(path: bytes, exclude_patterns: Iterable[bytes] = None) -> Directory:
from swh.model.from_disk import accept_all_directories, ignore_directories_patterns
dir_filter = (
ignore_directories_patterns(path, exclude_patterns)
if exclude_patterns
else accept_all_directories
)
return Directory.from_disk(path=path, dir_filter=dir_filter)
def swhid_of_dir(path: bytes, exclude_patterns: Iterable[bytes] = None) -> CoreSWHID:
obj = model_of_dir(path, exclude_patterns)
return obj.swhid()
def swhid_of_origin(url):
from swh.model.model import Origin
return Origin(url).swhid()
def swhid_of_git_repo(path) -> CoreSWHID:
try:
import dulwich.repo
except ImportError:
raise click.ClickException(
"Cannot compute snapshot identifier; the Dulwich package is not installed. "
"Please install 'swh.model[cli]' for full functionality.",
)
from swh.model import hashutil
from swh.model.model import Snapshot
repo = dulwich.repo.Repo(path)
branches: Dict[bytes, Optional[Dict]] = {}
for ref, target in repo.refs.as_dict().items():
obj = repo[target]
if obj:
branches[ref] = {
"target": hashutil.bytehex_to_hash(target),
"target_type": _DULWICH_TYPES[obj.type_name],
}
else:
branches[ref] = None
for ref, target in repo.refs.get_symrefs().items():
branches[ref] = {
"target": target,
"target_type": "alias",
}
snapshot = {"branches": branches}
return Snapshot.from_dict(snapshot).swhid()
def identify_object(
obj_type: str, follow_symlinks: bool, exclude_patterns: Iterable[bytes], obj
) -> str:
from urllib.parse import urlparse
if obj_type == "auto":
if obj == "-" or os.path.isfile(obj):
obj_type = "content"
elif os.path.isdir(obj):
obj_type = "directory"
else:
try: # URL parsing
if urlparse(obj).scheme:
obj_type = "origin"
else:
raise ValueError
except ValueError:
raise click.BadParameter("cannot detect object type for %s" % obj)
if obj == "-":
content = sys.stdin.buffer.read()
swhid = str(swhid_of_file_content(content))
elif obj_type in ["content", "directory"]:
path = obj.encode(sys.getfilesystemencoding())
if follow_symlinks and os.path.islink(obj):
path = os.path.realpath(obj)
if obj_type == "content":
swhid = str(swhid_of_file(path))
elif obj_type == "directory":
swhid = str(swhid_of_dir(path, exclude_patterns))
elif obj_type == "origin":
swhid = str(swhid_of_origin(obj))
elif obj_type == "snapshot":
swhid = str(swhid_of_git_repo(obj))
else: # shouldn't happen, due to option validation
raise click.BadParameter("invalid object type: " + obj_type)
# note: we return original obj instead of path here, to preserve user-given
# file name in output
return swhid
@swh_cli_group.command(context_settings=CONTEXT_SETTINGS)
@click.option(
"--dereference/--no-dereference",
"follow_symlinks",
default=True,
help="follow (or not) symlinks for OBJECTS passed as arguments "
+ "(default: follow)",
)
@click.option(
"--filename/--no-filename",
"show_filename",
default=True,
help="show/hide file name (default: show)",
)
@click.option(
"--type",
"-t",
"obj_type",
default="auto",
type=click.Choice(["auto", "content", "directory", "origin", "snapshot"]),
help="type of object to identify (default: auto)",
)
@click.option(
"--exclude",
"-x",
"exclude_patterns",
metavar="PATTERN",
multiple=True,
help="Exclude directories using glob patterns \
(e.g., ``*.git`` to exclude all .git directories)",
)
@click.option(
"--verify",
"-v",
metavar="SWHID",
type=CoreSWHIDParamType(),
help="reference identifier to be compared with computed one",
)
@click.option(
- "-r", "--recursive", is_flag=True, help="compute SWHID recursively",
+ "-r",
+ "--recursive",
+ is_flag=True,
+ help="compute SWHID recursively",
)
@click.argument("objects", nargs=-1, required=True)
def identify(
obj_type,
verify,
show_filename,
follow_symlinks,
objects,
exclude_patterns,
recursive,
):
"""Compute the Software Heritage persistent identifier (SWHID) for the given
source code object(s).
For more details about SWHIDs see:
https://docs.softwareheritage.org/devel/swh-model/persistent-identifiers.html
Tip: you can pass "-" to identify the content of standard input.
Examples::
$ swh identify fork.c kmod.c sched/deadline.c
swh:1:cnt:2e391c754ae730bd2d8520c2ab497c403220c6e3 fork.c
swh:1:cnt:0277d1216f80ae1adeed84a686ed34c9b2931fc2 kmod.c
swh:1:cnt:57b939c81bce5d06fa587df8915f05affbe22b82 sched/deadline.c
$ swh identify --no-filename /usr/src/linux/kernel/
swh:1:dir:f9f858a48d663b3809c9e2f336412717496202ab
$ git clone --mirror https://forge.softwareheritage.org/source/helloworld.git
$ swh identify --type snapshot helloworld.git/
swh:1:snp:510aa88bdc517345d258c1fc2babcd0e1f905e93 helloworld.git
"""
from functools import partial
import logging
if exclude_patterns:
exclude_patterns = set(pattern.encode() for pattern in exclude_patterns)
if verify and len(objects) != 1:
raise click.BadParameter("verification requires a single object")
if recursive and not os.path.isdir(objects[0]):
recursive = False
logging.warn("recursive option disabled, input is not a directory object")
if recursive:
if verify:
raise click.BadParameter(
"verification of recursive object identification is not supported"
)
if not obj_type == ("auto" or "directory"):
raise click.BadParameter(
"recursive identification is supported only for directories"
)
path = os.fsencode(objects[0])
dir_obj = model_of_dir(path, exclude_patterns)
for sub_obj in dir_obj.iter_tree():
path_name = "path" if "path" in sub_obj.data.keys() else "data"
path = os.fsdecode(sub_obj.data[path_name])
swhid = str(sub_obj.swhid())
msg = f"{swhid}\t{path}" if show_filename else f"{swhid}"
click.echo(msg)
else:
results = zip(
objects,
map(
partial(identify_object, obj_type, follow_symlinks, exclude_patterns),
objects,
),
)
if verify:
swhid = next(results)[1]
if str(verify) == swhid:
click.echo("SWHID match: %s" % swhid)
sys.exit(0)
else:
click.echo("SWHID mismatch: %s != %s" % (verify, swhid))
sys.exit(1)
else:
for (obj, swhid) in results:
msg = swhid
if show_filename:
msg = "%s\t%s" % (swhid, os.fsdecode(obj))
click.echo(msg)
if __name__ == "__main__":
identify()
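The `--verify` option above compares the computed identifier against a reference `CoreSWHID`. A minimal sketch of that parsing step, using the `swh.model.swhids` API imported by this module:

```python
from swh.model.swhids import CoreSWHID, ObjectType

# Parse a directory SWHID (the example value comes from the README above).
swhid = CoreSWHID.from_string("swh:1:dir:f9f858a48d663b3809c9e2f336412717496202ab")
assert swhid.object_type == ObjectType.DIRECTORY
# str() round-trips to the canonical textual form used for comparison above.
assert str(swhid) == "swh:1:dir:f9f858a48d663b3809c9e2f336412717496202ab"
```

An invalid string raises `swh.model.exceptions.ValidationError`, which `CoreSWHIDParamType.convert` above turns into a Click usage error.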
diff --git a/swh/model/fields/compound.py b/swh/model/fields/compound.py
index 90b4685..dcdbfd6 100644
--- a/swh/model/fields/compound.py
+++ b/swh/model/fields/compound.py
@@ -1,125 +1,128 @@
# Copyright (C) 2015 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
from collections import defaultdict
import itertools
from ..exceptions import NON_FIELD_ERRORS, ValidationError
def validate_against_schema(model, schema, value):
"""Validate a value for the given model against the given schema.
Args:
model: the name of the model
schema: the schema to validate against
value: the value to validate
Returns:
True if the value is correct against the schema
Raises:
ValidationError if the value does not validate against the schema
"""
if not isinstance(value, dict):
raise ValidationError(
"Unexpected type %(type)s for %(model)s, expected dict",
- params={"model": model, "type": value.__class__.__name__,},
+ params={
+ "model": model,
+ "type": value.__class__.__name__,
+ },
code="model-unexpected-type",
)
errors = defaultdict(list)
for key, (mandatory, validators) in itertools.chain(
((k, v) for k, v in schema.items() if k != NON_FIELD_ERRORS),
[(NON_FIELD_ERRORS, (False, schema.get(NON_FIELD_ERRORS, [])))],
):
if not validators:
continue
if not isinstance(validators, list):
validators = [validators]
validated_value = value
if key != NON_FIELD_ERRORS:
try:
validated_value = value[key]
except KeyError:
if mandatory:
errors[key].append(
ValidationError(
"Field %(field)s is mandatory",
params={"field": key},
code="model-field-mandatory",
)
)
continue
else:
if errors:
# Don't validate the whole object if some fields are broken
continue
for validator in validators:
try:
valid = validator(validated_value)
except ValidationError as e:
errors[key].append(e)
else:
if not valid:
errdata = {
"validator": validator.__name__,
}
if key == NON_FIELD_ERRORS:
errmsg = (
"Validation of model %(model)s failed in " "%(validator)s"
)
errdata["model"] = model
errcode = "model-validation-failed"
else:
errmsg = (
"Validation of field %(field)s failed in " "%(validator)s"
)
errdata["field"] = key
errcode = "field-validation-failed"
errors[key].append(
ValidationError(errmsg, params=errdata, code=errcode)
)
if errors:
raise ValidationError(dict(errors))
return True
def validate_all_keys(value, keys):
"""Validate that all the given keys are present in value"""
missing_keys = set(keys) - set(value)
if missing_keys:
missing_fields = ", ".join(sorted(missing_keys))
raise ValidationError(
"Missing mandatory fields %(missing_fields)s",
params={"missing_fields": missing_fields},
code="missing-mandatory-field",
)
return True
def validate_any_key(value, keys):
"""Validate that any of the given keys is present in value"""
present_keys = set(keys) & set(value)
if not present_keys:
missing_fields = ", ".join(sorted(keys))
raise ValidationError(
"Must contain one of the alternative fields %(missing_fields)s",
params={"missing_fields": missing_fields},
code="missing-alternative-field",
)
return True
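To illustrate how `validate_against_schema` consumes a schema, here is a minimal sketch; the field names and the `origin` model name are made up for the example, and the validators come from `swh.model.fields.simple`:

```python
from swh.model.exceptions import ValidationError
from swh.model.fields.compound import validate_against_schema
from swh.model.fields.simple import validate_int, validate_str

# A schema maps each field to (mandatory, validator or list of validators).
schema = {
    "url": (True, validate_str),
    "visits": (False, [validate_int]),
}

assert validate_against_schema("origin", schema, {"url": "https://example.org"})

try:
    validate_against_schema("origin", schema, {"visits": 3})
except ValidationError as e:
    print(e)  # the mandatory "url" field is missing
```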
diff --git a/swh/model/fields/hashes.py b/swh/model/fields/hashes.py
index 9b5ee4a..46c94ec 100644
--- a/swh/model/fields/hashes.py
+++ b/swh/model/fields/hashes.py
@@ -1,116 +1,118 @@
# Copyright (C) 2015 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import string
from ..exceptions import ValidationError
def validate_hash(value, hash_type):
"""Validate that the given value represents a hash of the given hash_type.
Args:
value: the value to check
hash_type: the type of hash the value is representing
Returns:
True if the hash validates
Raises:
ValueError if the hash does not validate
"""
hash_lengths = {
"sha1": 20,
"sha1_git": 20,
"sha256": 32,
}
hex_digits = set(string.hexdigits)
if hash_type not in hash_lengths:
raise ValidationError(
"Unexpected hash type %(hash_type)s, expected one of" " %(hash_types)s",
params={
"hash_type": hash_type,
"hash_types": ", ".join(sorted(hash_lengths)),
},
code="unexpected-hash-type",
)
if isinstance(value, str):
errors = []
extra_chars = set(value) - hex_digits
if extra_chars:
errors.append(
ValidationError(
"Unexpected characters `%(unexpected_chars)s' for hash "
"type %(hash_type)s",
params={
"unexpected_chars": ", ".join(sorted(extra_chars)),
"hash_type": hash_type,
},
code="unexpected-hash-contents",
)
)
length = len(value)
expected_length = 2 * hash_lengths[hash_type]
if length != expected_length:
errors.append(
ValidationError(
"Unexpected length %(length)d for hash type "
"%(hash_type)s, expected %(expected_length)d",
params={
"length": length,
"expected_length": expected_length,
"hash_type": hash_type,
},
code="unexpected-hash-length",
)
)
if errors:
raise ValidationError(errors)
return True
if isinstance(value, bytes):
length = len(value)
expected_length = hash_lengths[hash_type]
if length != expected_length:
raise ValidationError(
"Unexpected length %(length)d for hash type "
"%(hash_type)s, expected %(expected_length)d",
params={
"length": length,
"expected_length": expected_length,
"hash_type": hash_type,
},
code="unexpected-hash-length",
)
return True
raise ValidationError(
"Unexpected type %(type)s for hash, expected str or bytes",
- params={"type": value.__class__.__name__,},
+ params={
+ "type": value.__class__.__name__,
+ },
code="unexpected-hash-value-type",
)
def validate_sha1(sha1):
"""Validate that sha1 is a valid sha1 hash"""
return validate_hash(sha1, "sha1")
def validate_sha1_git(sha1_git):
"""Validate that sha1_git is a valid sha1_git hash"""
return validate_hash(sha1_git, "sha1_git")
def validate_sha256(sha256):
"""Validate that sha256 is a valid sha256 hash"""
return validate_hash(sha256, "sha256")
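A short usage sketch for the validators above (the hex value is the sha1 of the empty string):

```python
from swh.model.exceptions import ValidationError
from swh.model.fields.hashes import validate_sha1

assert validate_sha1("da39a3ee5e6b4b0d3255bfef95601890afd80709")  # 40 hex digits
assert validate_sha1(b"\x00" * 20)                                # 20 raw bytes

try:
    validate_sha1("not-a-hash")
except ValidationError as e:
    print(e)  # reports both unexpected characters and unexpected length
```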
diff --git a/swh/model/fields/simple.py b/swh/model/fields/simple.py
index 98fcc11..71fe1d7 100644
--- a/swh/model/fields/simple.py
+++ b/swh/model/fields/simple.py
@@ -1,79 +1,82 @@
# Copyright (C) 2015 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import datetime
import numbers
from ..exceptions import ValidationError
def validate_type(value, type):
"""Validate that value is an integer"""
if not isinstance(value, type):
if isinstance(type, tuple):
typestr = "one of %s" % ", ".join(typ.__name__ for typ in type)
else:
typestr = type.__name__
raise ValidationError(
"Unexpected type %(type)s, expected %(expected_type)s",
- params={"type": value.__class__.__name__, "expected_type": typestr,},
+ params={
+ "type": value.__class__.__name__,
+ "expected_type": typestr,
+ },
code="unexpected-type",
)
return True
def validate_int(value):
"""Validate that the given value is an int"""
return validate_type(value, numbers.Integral)
def validate_str(value):
"""Validate that the given value is a string"""
return validate_type(value, str)
def validate_bytes(value):
"""Validate that the given value is a bytes object"""
return validate_type(value, bytes)
def validate_datetime(value):
"""Validate that the given value is either a datetime, or a numeric number
of seconds since the UNIX epoch."""
errors = []
try:
validate_type(value, (datetime.datetime, numbers.Real))
except ValidationError as e:
errors.append(e)
if isinstance(value, datetime.datetime) and value.tzinfo is None:
errors.append(
ValidationError(
"Datetimes must be timezone-aware in swh",
code="datetime-without-tzinfo",
)
)
if errors:
raise ValidationError(errors)
return True
def validate_enum(value, expected_values):
"""Validate that value is contained in expected_values"""
if value not in expected_values:
raise ValidationError(
"Unexpected value %(value)s, expected one of %(expected_values)s",
params={
"value": value,
"expected_values": ", ".join(sorted(expected_values)),
},
code="unexpected-value",
)
return True
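And a quick sketch of the simple validators, reusing the epoch timestamp from the `swh-revhash` example above:

```python
import datetime

from swh.model.exceptions import ValidationError
from swh.model.fields.simple import validate_datetime, validate_enum

assert validate_datetime(datetime.datetime.now(tz=datetime.timezone.utc))
assert validate_datetime(1138341044)  # plain epoch seconds are accepted too

try:
    validate_datetime(datetime.datetime(2015, 1, 1))  # naive datetime
except ValidationError:
    pass  # "Datetimes must be timezone-aware in swh"

assert validate_enum("visible", {"visible", "hidden"})
```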
diff --git a/swh/model/from_disk.py b/swh/model/from_disk.py
index 43a9c71..9ef7afa 100644
--- a/swh/model/from_disk.py
+++ b/swh/model/from_disk.py
@@ -1,563 +1,561 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
"""Conversion from filesystem tree to SWH objects.
This module allows reading a tree of directories and files from a local
filesystem, and convert them to in-memory data structures, which can then
be exported to SWH data model objects, as defined in :mod:`swh.model.model`.
"""
import datetime
import enum
import fnmatch
import glob
import os
import re
import stat
from typing import Any, Iterable, Iterator, List, Optional, Pattern, Tuple
import attr
from attrs_strict import type_validator
from typing_extensions import Final
from . import model
from .exceptions import InvalidDirectoryPath
from .git_objects import directory_entry_sort_key
from .hashutil import MultiHash, hash_to_hex
from .merkle import MerkleLeaf, MerkleNode
from .swhids import CoreSWHID, ObjectType
@attr.s(frozen=True, slots=True)
class DiskBackedContent(model.BaseContent):
"""Content-like class, which allows lazy-loading data from the disk."""
object_type: Final = "content_file"
sha1 = attr.ib(type=bytes, validator=type_validator())
sha1_git = attr.ib(type=model.Sha1Git, validator=type_validator())
sha256 = attr.ib(type=bytes, validator=type_validator())
blake2s256 = attr.ib(type=bytes, validator=type_validator())
length = attr.ib(type=int, validator=type_validator())
status = attr.ib(
type=str,
validator=attr.validators.in_(["visible", "hidden"]),
default="visible",
)
ctime = attr.ib(
type=Optional[datetime.datetime],
validator=type_validator(),
default=None,
eq=False,
)
path = attr.ib(type=Optional[bytes], default=None)
@classmethod
def from_dict(cls, d):
return cls(**d)
def __attrs_post_init__(self):
if self.path is None:
raise TypeError("path must not be None.")
def with_data(self) -> model.Content:
args = self.to_dict()
del args["path"]
assert self.path is not None
with open(self.path, "rb") as fd:
return model.Content.from_dict({**args, "data": fd.read()})
class DentryPerms(enum.IntEnum):
"""Admissible permissions for directory entries."""
content = 0o100644
"""Content"""
executable_content = 0o100755
"""Executable content (e.g. executable script)"""
symlink = 0o120000
"""Symbolic link"""
directory = 0o040000
"""Directory"""
revision = 0o160000
"""Revision (e.g. submodule)"""
def mode_to_perms(mode):
"""Convert a file mode to a permission compatible with Software Heritage
directory entries
Args:
mode (int): a file mode as returned by :func:`os.stat` in
:attr:`os.stat_result.st_mode`
Returns:
DentryPerms: one of the following values:
:const:`DentryPerms.content`: plain file
:const:`DentryPerms.executable_content`: executable file
:const:`DentryPerms.symlink`: symbolic link
:const:`DentryPerms.directory`: directory
"""
if stat.S_ISLNK(mode):
return DentryPerms.symlink
if stat.S_ISDIR(mode):
return DentryPerms.directory
else:
# file is executable in any way
if mode & (0o111):
return DentryPerms.executable_content
else:
return DentryPerms.content
class Content(MerkleLeaf):
"""Representation of a Software Heritage content as a node in a Merkle tree.
The current Merkle hash for the Content nodes is the `sha1_git`, which
makes it consistent with what :class:`Directory` uses for its own hash
computation.
"""
__slots__ = [] # type: List[str]
object_type: Final = "content"
@classmethod
def from_bytes(cls, *, mode, data):
"""Convert data (raw :class:`bytes`) to a Software Heritage content entry
Args:
mode (int): a file mode (passed to :func:`mode_to_perms`)
data (bytes): raw contents of the file
"""
ret = MultiHash.from_data(data).digest()
ret["length"] = len(data)
ret["perms"] = mode_to_perms(mode)
ret["data"] = data
ret["status"] = "visible"
return cls(ret)
@classmethod
def from_symlink(cls, *, path, mode):
"""Convert a symbolic link to a Software Heritage content entry"""
return cls.from_bytes(mode=mode, data=os.readlink(path))
@classmethod
def from_file(cls, *, path, max_content_length=None):
"""Compute the Software Heritage content entry corresponding to an
on-disk file.
The returned dictionary contains keys useful for both:
- loading the content in the archive (hashes, `length`)
- using the content as a directory entry in a directory
Args:
save_path (bool): add the file path to the entry
max_content_length (Optional[int]): if given, all contents larger
than this will be skipped.
"""
file_stat = os.lstat(path)
mode = file_stat.st_mode
length = file_stat.st_size
too_large = max_content_length is not None and length > max_content_length
if stat.S_ISLNK(mode):
# Symbolic link: return a file whose contents are the link target
if too_large:
# Unlike large contents, we can't stream symlinks to
# MultiHash, and we don't want to fit them in memory if
# they exceed max_content_length either.
# Thankfully, this should not happen for reasonable values of
# max_content_length because of OS/filesystem limitations,
# so let's just raise an error.
raise Exception(f"Symlink too large ({length} bytes)")
return cls.from_symlink(path=path, mode=mode)
elif not stat.S_ISREG(mode):
# not a regular file: return the empty file instead
return cls.from_bytes(mode=mode, data=b"")
if too_large:
skip_reason = "Content too large"
else:
skip_reason = None
hashes = MultiHash.from_path(path).digest()
if skip_reason:
ret = {
**hashes,
"status": "absent",
"reason": skip_reason,
}
else:
ret = {
**hashes,
"status": "visible",
}
ret["path"] = path
ret["perms"] = mode_to_perms(mode)
ret["length"] = length
obj = cls(ret)
return obj
def swhid(self) -> CoreSWHID:
- """Return node identifier as a SWHID
- """
+ """Return node identifier as a SWHID"""
return CoreSWHID(object_type=ObjectType.CONTENT, object_id=self.hash)
def __repr__(self):
return "Content(id=%s)" % hash_to_hex(self.hash)
def compute_hash(self):
return self.data["sha1_git"]
def to_model(self) -> model.BaseContent:
"""Builds a `model.BaseContent` object based on this leaf."""
data = self.get_data().copy()
data.pop("perms", None)
if data["status"] == "absent":
data.pop("path", None)
return model.SkippedContent.from_dict(data)
elif "data" in data:
return model.Content.from_dict(data)
else:
return DiskBackedContent.from_dict(data)
def accept_all_directories(dirpath: str, dirname: str, entries: Iterable[Any]) -> bool:
"""Default filter for :func:`Directory.from_disk` accepting all
directories
Args:
dirname (bytes): directory name
entries (list): directory entries
"""
return True
def ignore_empty_directories(
dirpath: str, dirname: str, entries: Iterable[Any]
) -> bool:
"""Filter for :func:`directory_to_objects` ignoring empty directories
Args:
dirname (bytes): directory name
entries (list): directory entries
Returns:
True if the directory is not empty, false if the directory is empty
"""
return bool(entries)
def ignore_named_directories(names, *, case_sensitive=True):
"""Filter for :func:`directory_to_objects` to ignore directories named one
of names.
Args:
names (list of bytes): names to ignore
case_sensitive (bool): whether to do the filtering in a case sensitive
way
Returns:
a directory filter for :func:`directory_to_objects`
"""
if not case_sensitive:
names = [name.lower() for name in names]
def named_filter(
dirpath: str,
dirname: str,
entries: Iterable[Any],
names: Iterable[Any] = names,
case_sensitive: bool = case_sensitive,
):
if case_sensitive:
return dirname not in names
else:
return dirname.lower() not in names
return named_filter
# TODO: `extract_regex_objs` has been copied and adapted from `swh.scanner`.
# In the future `swh.scanner` should use the `swh.model` version and remove its own.
def extract_regex_objs(
root_path: bytes, patterns: Iterable[bytes]
) -> Iterator[Pattern[bytes]]:
"""Generates a regex object for each pattern given in input and checks if
- the path is a subdirectory or relative to the root path.
+ the path is a subdirectory or relative to the root path.
- Args:
- root_path (bytes): path to the root directory
- patterns (list of byte): patterns to match
+ Args:
+ root_path (bytes): path to the root directory
+ patterns (list of byte): patterns to match
- Yields:
- an SRE_Pattern object
+ Yields:
+ an SRE_Pattern object
"""
absolute_root_path = os.path.abspath(root_path)
for pattern in patterns:
for path in glob.glob(pattern):
absolute_path = os.path.abspath(path)
if not absolute_path.startswith(absolute_root_path):
error_msg = (
b'The path "' + path + b'" is not a subdirectory or relative '
b'to the root directory path: "' + root_path + b'"'
)
raise InvalidDirectoryPath(error_msg)
regex = fnmatch.translate((pattern.decode()))
yield re.compile(regex.encode())
def ignore_directories_patterns(root_path: bytes, patterns: Iterable[bytes]):
"""Filter for :func:`directory_to_objects` to ignore directories
matching certain patterns.
Args:
root_path (bytes): path of the root directory
patterns (list of byte): patterns to ignore
Returns:
a directory filter for :func:`directory_to_objects`
"""
sre_patterns = set(extract_regex_objs(root_path, patterns))
def pattern_filter(
dirpath: bytes,
dirname: bytes,
entries: Iterable[Any],
patterns: Iterable[Any] = sre_patterns,
root_path: bytes = os.path.abspath(root_path),
):
full_path = os.path.abspath(dirpath)
relative_path = os.path.relpath(full_path, root_path)
return not any([pattern.match(relative_path) for pattern in patterns])
return pattern_filter
def iter_directory(
directory,
) -> Tuple[List[model.Content], List[model.SkippedContent], List[model.Directory]]:
"""Return the directory listing from a disk-memory directory instance.
Raises:
TypeError in case an unexpected object type is listed.
Returns:
Tuple of respectively iterable of content, skipped content and directories.
"""
contents: List[model.Content] = []
skipped_contents: List[model.SkippedContent] = []
directories: List[model.Directory] = []
for obj in directory.iter_tree():
obj = obj.to_model()
obj_type = obj.object_type
if obj_type in (model.Content.object_type, DiskBackedContent.object_type):
# FIXME: read the data from disk later (when the
# storage buffer is flushed).
obj = obj.with_data()
contents.append(obj)
elif obj_type == model.SkippedContent.object_type:
skipped_contents.append(obj)
elif obj_type == model.Directory.object_type:
directories.append(obj)
else:
raise TypeError(f"Unexpected object type from disk: {obj}")
return contents, skipped_contents, directories
class Directory(MerkleNode):
"""Representation of a Software Heritage directory as a node in a Merkle Tree.
This class can be used to generate, from an on-disk directory, all the
objects that need to be sent to the Software Heritage archive.
The :func:`from_disk` constructor allows you to generate the data structure
from a directory on disk. The resulting :class:`Directory` can then be
manipulated as a dictionary, using the path as key.
The :func:`collect` method is used to retrieve all the objects that need to
be added to the Software Heritage archive since the last collection, by
class (contents and directories).
When using the dict-like methods to update the contents of the directory,
the affected levels of hierarchy are reset and can be collected again using
the same method. This enables the efficient collection of updated nodes,
for instance when the client is applying diffs.
"""
__slots__ = ["__entries"]
object_type: Final = "directory"
@classmethod
def from_disk(
cls, *, path, dir_filter=accept_all_directories, max_content_length=None
):
"""Compute the Software Heritage objects for a given directory tree
Args:
path (bytes): the directory to traverse
data (bool): whether to add the data to the content objects
save_path (bool): whether to add the path to the content objects
dir_filter (function): a filter to ignore some directories by
name or contents. Takes two arguments: dirname and entries, and
returns True if the directory should be added, False if the
directory should be ignored.
max_content_length (Optional[int]): if given, all contents larger
than this will be skipped.
"""
top_path = path
dirs = {}
for root, dentries, fentries in os.walk(top_path, topdown=False):
entries = {}
# Join fentries and dentries in the same processing, as symbolic
# links to directories appear in dentries...
for name in fentries + dentries:
path = os.path.join(root, name)
if not os.path.isdir(path) or os.path.islink(path):
content = Content.from_file(
path=path, max_content_length=max_content_length
)
entries[name] = content
else:
if dir_filter(path, name, dirs[path].entries):
entries[name] = dirs[path]
dirs[root] = cls({"name": os.path.basename(root), "path": root})
dirs[root].update(entries)
return dirs[top_path]
def __init__(self, data=None):
super().__init__(data=data)
self.__entries = None
def invalidate_hash(self):
self.__entries = None
super().invalidate_hash()
@staticmethod
def child_to_directory_entry(name, child):
if child.object_type == "directory":
return {
"type": "dir",
"perms": DentryPerms.directory,
"target": child.hash,
"name": name,
}
elif child.object_type == "content":
return {
"type": "file",
"perms": child.data["perms"],
"target": child.hash,
"name": name,
}
else:
raise ValueError(f"unknown child {child}")
def get_data(self, **kwargs):
return {
"id": self.hash,
"entries": self.entries,
}
@property
def entries(self):
"""Child nodes, sorted by name in the same way
:func:`swh.model.git_objects.directory_git_object` does."""
if self.__entries is None:
self.__entries = sorted(
(
self.child_to_directory_entry(name, child)
for name, child in self.items()
),
key=directory_entry_sort_key,
)
return self.__entries
def swhid(self) -> CoreSWHID:
- """Return node identifier as a SWHID
- """
+ """Return node identifier as a SWHID"""
return CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=self.hash)
def compute_hash(self):
return model.Directory.from_dict({"entries": self.entries}).id
def to_model(self) -> model.Directory:
"""Builds a `model.Directory` object based on this node;
ignoring its children."""
return model.Directory.from_dict(self.get_data())
def __getitem__(self, key):
if not isinstance(key, bytes):
raise ValueError("Can only get a bytes from Directory")
# Convenience shortcut
if key == b"":
return self
if b"/" not in key:
return super().__getitem__(key)
else:
key1, key2 = key.split(b"/", 1)
return self.__getitem__(key1)[key2]
def __setitem__(self, key, value):
if not isinstance(key, bytes):
raise ValueError("Can only set a bytes Directory entry")
if not isinstance(value, (Content, Directory)):
raise ValueError(
"Can only set a Directory entry to a Content or " "Directory"
)
if key == b"":
raise ValueError("Directory entry must have a name")
if b"\x00" in key:
raise ValueError("Directory entry name must not contain nul bytes")
if b"/" not in key:
return super().__setitem__(key, value)
else:
key1, key2 = key.rsplit(b"/", 1)
self[key1].__setitem__(key2, value)
def __delitem__(self, key):
if not isinstance(key, bytes):
raise ValueError("Can only delete a bytes Directory entry")
if b"/" not in key:
super().__delitem__(key)
else:
key1, key2 = key.rsplit(b"/", 1)
del self[key1][key2]
def __contains__(self, key):
if b"/" not in key:
return super().__contains__(key)
else:
key1, key2 = key.split(b"/", 1)
return super().__contains__(key1) and self[key1].__contains__(key2)
def __repr__(self):
return "Directory(id=%s, entries=[%s])" % (
hash_to_hex(self.hash),
", ".join(str(entry) for entry in self),
)
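Putting `Directory.from_disk` and `iter_directory` together, a minimal end-to-end sketch (the temporary tree and file name are made up for the example):

```python
import os
import tempfile

from swh.model.from_disk import Directory, iter_directory

with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "hello.txt"), "wb") as f:
        f.write(b"hello world\n")
    # Build the Merkle tree for the on-disk directory, then list its objects.
    root = Directory.from_disk(path=os.fsencode(tmp))
    print(root.swhid())  # swh:1:dir:...
    contents, skipped_contents, directories = iter_directory(root)
    print(len(contents), len(skipped_contents), len(directories))  # 1 0 1
```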
diff --git a/swh/model/git_objects.py b/swh/model/git_objects.py
index 7b36da6..d0f7bf8 100644
--- a/swh/model/git_objects.py
+++ b/swh/model/git_objects.py
@@ -1,626 +1,638 @@
# Copyright (C) 2015-2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
"""
Converts SWH model objects to git(-like) objects
Most of the functions in this module take as argument an object from
:mod:`swh.model.model`, and format it like a git object.
They are the inverse functions of those in :mod:`swh.loader.git.converters`,
but with extensions, as SWH's model is a superset of Git's:
* extensions of existing types (eg. revision/commit and release/tag dates
can be expressed with precision up to milliseconds, to support formatting
Mercurial objects)
* new types, for SWH's specific needs (:class:`swh.model.model.RawExtrinsicMetadata`
and :class:`swh.model.model.ExtID`)
* support for somewhat corrupted git objects that we need to reproduce
This is used for two purposes:
* Format manifests that can be hashed to produce :ref:`intrinsic identifiers
<persistent-identifiers>`
* Write git objects to reproduce git repositories that were ingested in the archive.
"""
from __future__ import annotations
import datetime
from functools import lru_cache
from typing import Dict, Iterable, List, Optional, Tuple, Union, cast
import warnings
from . import model
from .collections import ImmutableDict
from .hashutil import git_object_header, hash_to_bytehex
def directory_entry_sort_key(entry: model.DirectoryEntry):
"""The sorting key for tree entries"""
if isinstance(entry, dict):
# For backward compatibility
entry = model.DirectoryEntry.from_dict(entry)
if entry.type == "dir":
return entry.name + b"/"
else:
return entry.name
@lru_cache()
def _perms_to_bytes(perms):
"""Convert the perms value to its canonical bytes representation"""
oc = oct(perms)[2:]
return oc.encode("ascii")
def escape_newlines(snippet):
"""Escape the newlines present in snippet according to git rules.
New lines in git manifests are escaped by indenting the next line by one
space.
"""
if b"\n" in snippet:
return b"\n ".join(snippet.split(b"\n"))
else:
return snippet
def format_date(date: model.Timestamp) -> bytes:
"""Convert a date object into an UTC timestamp encoded as ascii bytes.
Git stores timestamps as an integer number of seconds since the UNIX epoch.
However, Software Heritage stores timestamps as an integer number of
microseconds (postgres type "datetime with timezone").
Therefore, we print timestamps with no microseconds as integers, and
timestamps with microseconds as floating point values. We elide the
trailing zeroes from microsecond values, to "future-proof" our
representation if we ever need more precision in timestamps.
"""
if isinstance(date, dict):
# For backward compatibility
date = model.Timestamp.from_dict(date)
if not date.microseconds:
return str(date.seconds).encode()
else:
float_value = "%d.%06d" % (date.seconds, date.microseconds)
return float_value.rstrip("0").encode()
def normalize_timestamp(time_representation):
"""Normalize a time representation for processing by Software Heritage
This function supports a numeric timestamp (representing a number of
seconds since the UNIX epoch, 1970-01-01 at 00:00 UTC), a
:obj:`datetime.datetime` object (with timezone information), or a
normalized Software Heritage time representation (idempotency).
Args:
time_representation: the representation of a timestamp
Returns:
dict: a normalized dictionary with three keys:
- timestamp: a dict with two optional keys:
- seconds: the integral number of seconds since the UNIX epoch
- microseconds: the integral number of microseconds
- offset: the timezone offset as a number of minutes relative to
UTC
- negative_utc: a boolean representing whether the offset is -0000
when offset = 0.
"""
if time_representation is None:
return None
else:
return model.TimestampWithTimezone.from_dict(time_representation).to_dict()
def directory_git_object(directory: Union[Dict, model.Directory]) -> bytes:
"""Formats a directory as a git tree.
A directory's identifier is the tree sha1 à la git of a directory listing,
using the following algorithm, which is equivalent to the git algorithm for
trees:
1. Entries of the directory are sorted using the name (or the name with '/'
appended for directory entries) as key, in bytes order.
2. For each entry of the directory, the following bytes are output:
- the octal representation of the permissions for the entry (stored in
the 'perms' member), which is a representation of the entry type:
- b'100644' (int 33188) for files
- b'100755' (int 33261) for executable files
- b'120000' (int 40960) for symbolic links
- b'40000' (int 16384) for directories
- b'160000' (int 57344) for references to revisions
- an ascii space (b'\x20')
- the entry's name (as raw bytes), stored in the 'name' member
- a null byte (b'\x00')
- the 20 byte long identifier of the object pointed at by the entry,
stored in the 'target' member:
- for files or executable files: their blob sha1_git
- for symbolic links: the blob sha1_git of a file containing the link
destination
- for directories: their intrinsic identifier
- for revisions: their intrinsic identifier
(Note that there is no separator between entries)
"""
if isinstance(directory, dict):
# For backward compatibility
warnings.warn(
"directory_git_object's argument should be a swh.model.model.Directory "
"object.",
DeprecationWarning,
stacklevel=2,
)
directory = model.Directory.from_dict(directory)
directory = cast(model.Directory, directory)
components = []
for entry in sorted(directory.entries, key=directory_entry_sort_key):
components.extend(
- [_perms_to_bytes(entry.perms), b"\x20", entry.name, b"\x00", entry.target,]
+ [
+ _perms_to_bytes(entry.perms),
+ b"\x20",
+ entry.name,
+ b"\x00",
+ entry.target,
+ ]
)
return format_git_object_from_parts("tree", components)
def format_git_object_from_headers(
git_type: str,
headers: Iterable[Tuple[bytes, bytes]],
message: Optional[bytes] = None,
) -> bytes:
"""Format a git_object comprised of a git header and a manifest,
which is itself a sequence of `headers`, and an optional `message`.
The git_object format, compatible with the git format for tag and commit
objects, is as follows:
- for each `key`, `value` in `headers`, emit:
- the `key`, literally
- an ascii space (``\\x20``)
- the `value`, with newlines escaped using :func:`escape_newlines`,
- an ascii newline (``\\x0a``)
- if the `message` is not None, emit:
- an ascii newline (``\\x0a``)
- the `message`, literally
Args:
headers: a sequence of key/value headers stored in the manifest;
message: an optional message used to trail the manifest.
Returns:
the formatted git_object as bytes
"""
entries: List[bytes] = []
for key, value in headers:
entries.extend((key, b" ", escape_newlines(value), b"\n"))
if message is not None:
entries.extend((b"\n", message))
concatenated_entries = b"".join(entries)
header = git_object_header(git_type, len(concatenated_entries))
return header + concatenated_entries
def format_git_object_from_parts(git_type: str, parts: Iterable[bytes]) -> bytes:
"""Similar to :func:`format_git_object_from_headers`, but for manifests made of
a flat list of entries, instead of key-value + message, ie. trees and snapshots."""
concatenated_parts = b"".join(parts)
header = git_object_header(git_type, len(concatenated_parts))
return header + concatenated_parts
def format_author_data(
author: model.Person, date_offset: Optional[model.TimestampWithTimezone]
) -> bytes:
"""Format authorship data according to git standards.
Git authorship data has two components:
- an author specification, usually a name and email, but in practice an
arbitrary bytestring
- optionally, a timestamp with a UTC offset specification
The authorship data is formatted thus::
`name and email`[ `timestamp` `utc_offset`]
The timestamp is encoded as a (decimal) number of seconds since the UNIX
epoch (1970-01-01 at 00:00 UTC). As an extension to the git format, we
support fractional timestamps, using a dot as the separator for the decimal
part.
The utc offset is a number of minutes encoded as '[+-]HHMM'. Note that some
tools can pass a negative offset corresponding to the UTC timezone
('-0000'), which is valid and is encoded as such.
Returns:
the byte string containing the authorship data
"""
ret = [author.fullname]
if date_offset is not None:
date_f = format_date(date_offset.timestamp)
ret.extend([b" ", date_f, b" ", date_offset.offset_bytes])
return b"".join(ret)
def revision_git_object(revision: Union[Dict, model.Revision]) -> bytes:
"""Formats a revision as a git tree.
The fields used for the revision identifier computation are:
- directory
- parents
- author
- author_date
- committer
- committer_date
- extra_headers or metadata -> extra_headers
- message
A revision's identifier is the 'git'-checksum of a commit manifest
constructed as follows (newlines are a single ASCII newline character)::
tree <directory identifier>
[for each parent in parents]
parent <parent identifier>
[end for each parents]
author <author> <author_date>
committer <committer> <committer_date>
[for each key, value in extra_headers]
<key> <encoded value>
[end for each extra_headers]
<message>
The directory identifier is the ascii representation of its hexadecimal
encoding.
Author and committer are formatted using the :attr:`Person.fullname` attribute only.
Dates are formatted with the :func:`format_offset` function.
Extra headers are an ordered list of [key, value] pairs. Keys are strings
and get encoded to utf-8 for identifier computation. Values are either byte
strings, unicode strings (that get encoded to utf-8), or integers (that get
encoded to their utf-8 decimal representation).
Multiline extra header values are escaped by indenting the continuation
lines with one ascii space.
If the message is None, the manifest ends with the last header. Else, the
message is appended to the headers after an empty line.
The checksum of the full manifest is computed using the 'commit' git object
type.
"""
if isinstance(revision, dict):
# For backward compatibility
warnings.warn(
"revision_git_object's argument should be a swh.model.model.Revision "
"object.",
DeprecationWarning,
stacklevel=2,
)
revision = model.Revision.from_dict(revision)
revision = cast(model.Revision, revision)
headers = [(b"tree", hash_to_bytehex(revision.directory))]
for parent in revision.parents:
if parent:
headers.append((b"parent", hash_to_bytehex(parent)))
if revision.author is not None:
headers.append((b"author", format_author_data(revision.author, revision.date)))
if revision.committer is not None:
headers.append(
(
b"committer",
format_author_data(revision.committer, revision.committer_date),
)
)
# Handle extra headers
metadata = revision.metadata or ImmutableDict()
extra_headers = revision.extra_headers or ()
if not extra_headers and "extra_headers" in metadata:
extra_headers = metadata["extra_headers"]
headers.extend(extra_headers)
return format_git_object_from_headers("commit", headers, revision.message)
def target_type_to_git(target_type: model.ObjectType) -> bytes:
"""Convert a software heritage target type to a git object type"""
return {
model.ObjectType.CONTENT: b"blob",
model.ObjectType.DIRECTORY: b"tree",
model.ObjectType.REVISION: b"commit",
model.ObjectType.RELEASE: b"tag",
model.ObjectType.SNAPSHOT: b"refs",
}[target_type]
def release_git_object(release: Union[Dict, model.Release]) -> bytes:
if isinstance(release, dict):
# For backward compatibility
warnings.warn(
"release_git_object's argument should be a swh.model.model.Directory "
"object.",
DeprecationWarning,
stacklevel=2,
)
release = model.Release.from_dict(release)
release = cast(model.Release, release)
headers = [
(b"object", hash_to_bytehex(release.target)),
(b"type", target_type_to_git(release.target_type)),
(b"tag", release.name),
]
if release.author is not None:
headers.append((b"tagger", format_author_data(release.author, release.date)))
return format_git_object_from_headers("tag", headers, release.message)
def snapshot_git_object(snapshot: Union[Dict, model.Snapshot]) -> bytes:
"""Formats a snapshot as a git-like object.
Snapshots are a set of named branches, which are pointers to objects at any
level of the Software Heritage DAG.
As well as pointing to other objects in the Software Heritage DAG, branches
can also be *alias*es, in which case their target is the name of another
branch in the same snapshot, or *dangling*, in which case the target is
unknown (and represented by the ``None`` value).
A snapshot identifier is a salted sha1 (using the git hashing algorithm
with the ``snapshot`` object type) of a manifest following the algorithm:
1. Branches are sorted using the name as key, in bytes order.
2. For each branch, the following bytes are output:
- the type of the branch target:
- ``content``, ``directory``, ``revision``, ``release`` or ``snapshot``
for the corresponding entries in the DAG;
- ``alias`` for branches referencing another branch;
- ``dangling`` for dangling branches
- an ascii space (``\\x20``)
- the branch name (as raw bytes)
- a null byte (``\\x00``)
- the length of the target identifier, as an ascii-encoded decimal number
(``20`` for current intrinsic identifiers, ``0`` for dangling
branches, the length of the target branch name for branch aliases)
- a colon (``:``)
- the identifier of the target object pointed at by the branch,
stored in the 'target' member:
- for contents: their *sha1_git*
- for directories, revisions, releases or snapshots: their intrinsic
identifier
- for branch aliases, the name of the target branch (as raw bytes)
- for dangling branches, the empty string
Note that, akin to directory manifests, there is no separator between
entries. Because of symbolic branches, identifiers are of arbitrary
length but are length-encoded to avoid ambiguity.
"""
if isinstance(snapshot, dict):
# For backward compatibility
warnings.warn(
"snapshot_git_object's argument should be a swh.model.model.Snapshot "
"object.",
DeprecationWarning,
stacklevel=2,
)
snapshot = model.Snapshot.from_dict(snapshot)
snapshot = cast(model.Snapshot, snapshot)
unresolved = []
lines = []
for name, target in sorted(snapshot.branches.items()):
if not target:
target_type = b"dangling"
target_id = b""
elif target.target_type == model.TargetType.ALIAS:
target_type = b"alias"
target_id = target.target
if target_id not in snapshot.branches or target_id == name:
unresolved.append((name, target_id))
else:
target_type = target.target_type.value.encode()
target_id = target.target
lines.extend(
[
target_type,
b"\x20",
name,
b"\x00",
("%d:" % len(target_id)).encode(),
target_id,
]
)
if unresolved:
raise ValueError(
"Branch aliases unresolved: %s"
% ", ".join("%r -> %r" % x for x in unresolved),
unresolved,
)
return format_git_object_from_parts("snapshot", lines)
def raw_extrinsic_metadata_git_object(
metadata: Union[Dict, model.RawExtrinsicMetadata]
) -> bytes:
"""Formats RawExtrinsicMetadata as a git-like object.
A raw_extrinsic_metadata identifier is a salted sha1 (using the git
hashing algorithm with the ``raw_extrinsic_metadata`` object type) of
a manifest following the format::
target $ExtendedSwhid
discovery_date $Timestamp
authority $StrWithoutSpaces $IRI
fetcher $Str $Version
format $StrWithoutSpaces
origin $IRI <- optional
visit $IntInDecimal <- optional
snapshot $CoreSwhid <- optional
release $CoreSwhid <- optional
revision $CoreSwhid <- optional
path $Bytes <- optional
directory $CoreSwhid <- optional
$MetadataBytes
$IRI must be RFC 3987 IRIs (so they may contain newlines, which are escaped as
described below)
$StrWithoutSpaces and $Version are ASCII strings, and may not contain spaces.
$Str is a UTF-8 string.
$CoreSwhid are core SWHIDs, as defined in :ref:`persistent-identifiers`.
$ExtendedSwhid is a core SWHID, with extra types allowed ('ori' for
origins and 'emd' for raw extrinsic metadata)
$Timestamp is a decimal representation of the rounded-down integer number of
seconds since the UNIX epoch (1970-01-01 00:00:00 UTC),
with no leading '0' (unless the timestamp value is zero) and no timezone.
It may be negative by prefixing it with a '-', which must not be followed
by a '0'.
Newlines in $Bytes, $Str, and $IRI are escaped as with other git fields,
i.e. by adding a space after them.
"""
if isinstance(metadata, dict):
# For backward compatibility
warnings.warn(
"raw_extrinsic_metadata_git_object's argument should be a "
"swh.model.model.RawExtrinsicMetadata object.",
DeprecationWarning,
stacklevel=2,
)
metadata = model.RawExtrinsicMetadata.from_dict(metadata)
metadata = cast(model.RawExtrinsicMetadata, metadata)
# equivalent to using math.floor(dt.timestamp()) to round down,
# as int(dt.timestamp()) rounds toward zero,
# which would map two different seconds onto the 0 timestamp.
#
# This should never be an issue in practice as Software Heritage didn't
# start collecting metadata before 2015.
timestamp = (
metadata.discovery_date.astimezone(datetime.timezone.utc)
.replace(microsecond=0)
.timestamp()
)
assert timestamp.is_integer()
headers = [
(b"target", str(metadata.target).encode()),
(b"discovery_date", str(int(timestamp)).encode("ascii")),
(
b"authority",
f"{metadata.authority.type.value} {metadata.authority.url}".encode(),
),
- (b"fetcher", f"{metadata.fetcher.name} {metadata.fetcher.version}".encode(),),
+ (
+ b"fetcher",
+ f"{metadata.fetcher.name} {metadata.fetcher.version}".encode(),
+ ),
(b"format", metadata.format.encode()),
]
for key in (
"origin",
"visit",
"snapshot",
"release",
"revision",
"path",
"directory",
):
if getattr(metadata, key, None) is not None:
value: bytes
if key == "path":
value = getattr(metadata, key)
else:
value = str(getattr(metadata, key)).encode()
headers.append((key.encode("ascii"), value))
return format_git_object_from_headers(
"raw_extrinsic_metadata", headers, metadata.metadata
)
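# Illustrative sketch, not part of the upstream diff: what the manifest looks like
# for a minimal RawExtrinsicMetadata object, following the format documented
# above. All values (SWHID, date, authority URL, fetcher name/version, payload)
# are made up for the example.
#
#     target swh:1:dir:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
#     discovery_date 1611574071
#     authority forge https://example.org/forge
#     fetcher swh.loader.example 1.0.0
#     format json
#
#     {"license": "GPL-3.0-or-later"}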
def extid_git_object(extid: model.ExtID) -> bytes:
"""Formats an extid as a gi-like object.
An ExtID identifier is a salted sha1 (using the git hashing algorithm with
the ``extid`` object type) of a manifest following the format:
```
extid_type $StrWithoutSpaces
[extid_version $Str]
extid $Bytes
target $CoreSwhid
```
$StrWithoutSpaces is an ASCII string, and may not contain spaces.
Newlines in $Bytes are escaped as with other git fields, i.e. by adding a
space after them.
The extid_version line is only generated if the version is non-zero.
"""
headers = [
(b"extid_type", extid.extid_type.encode("ascii")),
]
extid_version = extid.extid_version
if extid_version != 0:
headers.append((b"extid_version", str(extid_version).encode("ascii")))
headers.extend(
- [(b"extid", extid.extid), (b"target", str(extid.target).encode("ascii")),]
+ [
+ (b"extid", extid.extid),
+ (b"target", str(extid.target).encode("ascii")),
+ ]
)
return format_git_object_from_headers("extid", headers)
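# Illustrative sketch, not part of the upstream diff: the manifest produced for a
# hypothetical ExtID mapping an external identifier to a revision SWHID (the
# extid_type, the raw bytes and the SWHID below are made up):
#
#     extid_type hg-nodeid
#     extid <20 raw bytes of the external id>
#     target swh:1:rev:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
#
# The extid_version header is omitted because the version is zero in this example.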
diff --git a/swh/model/hashutil.py b/swh/model/hashutil.py
index 8740787..106e7c0 100644
--- a/swh/model/hashutil.py
+++ b/swh/model/hashutil.py
@@ -1,353 +1,351 @@
# Copyright (C) 2015-2018 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
"""Module in charge of hashing function definitions. This is the base
module use to compute swh's hashes.
Only a subset of hashing algorithms is supported as defined in the
ALGORITHMS set. Any provided algorithms not in that list will result
in a ValueError explaining the error.
This module defines a MultiHash class to ease the softwareheritage
hashing algorithms computation. This allows to compute hashes from
file object, path, data using a similar interface as what the standard
hashlib module provides.
Basic usage examples:
- file object: MultiHash.from_file(
file_object, hash_names=DEFAULT_ALGORITHMS).digest()
- path (filepath): MultiHash.from_path(b'foo').hexdigest()
- data (bytes): MultiHash.from_data(b'foo').bytehexdigest()
"Complex" usage, defining a swh hashlib instance first:
- To compute the length, add 'length' to the set of algorithms to
compute, for example:
.. code-block:: python
h = MultiHash(hash_names=set({'length'}).union(DEFAULT_ALGORITHMS))
with open(filepath, 'rb') as f:
h.update(f.read(HASH_BLOCK_SIZE))
hashes = h.digest() # returns a dict of {hash_algo_name: hash_in_bytes}
- To write data while computing its hashes (from a stream), for example:
.. code-block:: python
h = MultiHash(length=length)
with open(filepath, 'wb') as f:
for chunk in r.iter_content(): # r a stream of sort
h.update(chunk)
f.write(chunk)
hashes = h.hexdigest() # returns a dict of {hash_algo_name: hash_in_hex}
"""
import binascii
import functools
import hashlib
from io import BytesIO
import os
from typing import Callable, Dict, Optional
ALGORITHMS = set(["sha1", "sha256", "sha1_git", "blake2s256", "blake2b512", "md5"])
"""Hashing algorithms supported by this module"""
DEFAULT_ALGORITHMS = set(["sha1", "sha256", "sha1_git", "blake2s256"])
"""Algorithms computed by default when calling the functions from this module.
Subset of :const:`ALGORITHMS`.
"""
HASH_BLOCK_SIZE = 32768
"""Block size for streaming hash computations made in this module"""
_blake2_hash_cache = {} # type: Dict[str, Callable]
class MultiHash:
"""Hashutil class to support multiple hashes computation.
Args:
hash_names (set): Set of hash algorithms (+ optionally length)
to compute hashes (cf. DEFAULT_ALGORITHMS)
length (int): Length of the total sum of chunks to read
If 'length' is provided in the set of algorithms, the total length read is also
computed and returned.
"""
def __init__(self, hash_names=DEFAULT_ALGORITHMS, length=None):
self.state = {}
self.track_length = False
for name in hash_names:
if name == "length":
self.state["length"] = 0
self.track_length = True
else:
self.state[name] = _new_hash(name, length)
@classmethod
def from_state(cls, state, track_length):
ret = cls([])
ret.state = state
ret.track_length = track_length
return ret
@classmethod
def from_file(cls, fobj, hash_names=DEFAULT_ALGORITHMS, length=None):
ret = cls(length=length, hash_names=hash_names)
while True:
chunk = fobj.read(HASH_BLOCK_SIZE)
if not chunk:
break
ret.update(chunk)
return ret
@classmethod
def from_path(cls, path, hash_names=DEFAULT_ALGORITHMS):
length = os.path.getsize(path)
with open(path, "rb") as f:
ret = cls.from_file(f, hash_names=hash_names, length=length)
return ret
@classmethod
def from_data(cls, data, hash_names=DEFAULT_ALGORITHMS):
length = len(data)
fobj = BytesIO(data)
return cls.from_file(fobj, hash_names=hash_names, length=length)
def update(self, chunk):
for name, h in self.state.items():
if name == "length":
continue
h.update(chunk)
if self.track_length:
self.state["length"] += len(chunk)
def digest(self):
return {
name: h.digest() if name != "length" else h
for name, h in self.state.items()
}
def hexdigest(self):
return {
name: h.hexdigest() if name != "length" else h
for name, h in self.state.items()
}
def bytehexdigest(self):
return {
name: hash_to_bytehex(h.digest()) if name != "length" else h
for name, h in self.state.items()
}
def copy(self):
copied_state = {
name: h.copy() if name != "length" else h for name, h in self.state.items()
}
return self.from_state(copied_state, self.track_length)
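# Illustrative sketch, not part of the upstream diff: hashing a small payload with
# the default algorithms, and tracking the length as a pseudo-algorithm. Imports
# are written as an external caller would use them.
from swh.model.hashutil import DEFAULT_ALGORITHMS, MultiHash

digests = MultiHash.from_data(b"foo").hexdigest()
# digests maps each algorithm name to its hex digest, e.g. digests["sha1_git"]

with_length = MultiHash.from_data(
    b"foo", hash_names=DEFAULT_ALGORITHMS | {"length"}
).digest()
assert with_length["length"] == 3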
def _new_blake2_hash(algo):
- """Return a function that initializes a blake2 hash.
-
- """
+ """Return a function that initializes a blake2 hash."""
if algo in _blake2_hash_cache:
return _blake2_hash_cache[algo]()
lalgo = algo.lower()
if not lalgo.startswith("blake2"):
raise ValueError("Algorithm %s is not a blake2 hash" % algo)
blake_family = lalgo[:7]
digest_size = None
if lalgo[7:]:
try:
digest_size, remainder = divmod(int(lalgo[7:]), 8)
except ValueError:
raise ValueError("Unknown digest size for algo %s" % algo) from None
if remainder:
raise ValueError(
"Digest size for algorithm %s must be a multiple of 8" % algo
)
blake2 = getattr(hashlib, blake_family)
_blake2_hash_cache[algo] = lambda: blake2(digest_size=digest_size)
return _blake2_hash_cache[algo]()
def _new_hashlib_hash(algo):
"""Initialize a digest object from hashlib.
Handle the swh-specific names for the blake2-related algorithms
"""
if algo.startswith("blake2"):
return _new_blake2_hash(algo)
else:
return hashlib.new(algo)
def git_object_header(git_type: str, length: int) -> bytes:
"""Returns the header for a git object of the given type and length.
The header of a git object consists of:
- The type of the object (encoded in ASCII)
- One ASCII space (\x20)
- The length of the object (decimal encoded in ASCII)
- One NUL byte
Args:
git_type: the type of the git object (one of 'blob', 'tree', 'commit',
'tag', 'snapshot', 'raw_extrinsic_metadata', 'extid')
length: the length of the git object being encoded
Returns:
the git object header, as bytes
"""
git_object_types = {
"blob",
"tree",
"commit",
"tag",
"snapshot",
"raw_extrinsic_metadata",
"extid",
}
if git_type not in git_object_types:
raise ValueError(
"Unexpected git object type %s, expected one of %s"
% (git_type, ", ".join(sorted(git_object_types)))
)
return ("%s %d\0" % (git_type, length)).encode("ascii")
def _new_hash(algo: str, length: Optional[int] = None):
"""Initialize a digest object (as returned by python's hashlib) for
the requested algorithm. See the constant ALGORITHMS for the list
of supported algorithms. If a git-specific hashing algorithm is
requested (e.g., "sha1_git"), the hashing object will be pre-fed
with the needed header; for this to work, length must be given.
Args:
algo (str): a hashing algorithm (one of ALGORITHMS)
length (int): the length of the hashed payload (needed for
git-specific algorithms)
Returns:
a hashutil.hash object
Raises:
ValueError if algo is unknown, or length is missing for a git-specific
hash.
"""
if algo not in ALGORITHMS:
raise ValueError(
"Unexpected hashing algorithm %s, expected one of %s"
% (algo, ", ".join(sorted(ALGORITHMS)))
)
if algo.endswith("_git"):
if length is None:
raise ValueError("Missing length for git hashing algorithm")
base_algo = algo[:-4]
h = _new_hashlib_hash(base_algo)
h.update(git_object_header("blob", length))
return h
return _new_hashlib_hash(algo)
def hash_git_data(data, git_type, base_algo="sha1"):
"""Hash the given data as a git object of type git_type.
Args:
data: a bytes object
git_type: the git object type
base_algo: the base hashing algorithm used (default: sha1)
Returns: a dict mapping each algorithm to a bytes digest
Raises:
ValueError if the git_type is unexpected.
"""
h = _new_hashlib_hash(base_algo)
h.update(git_object_header(git_type, len(data)))
h.update(data)
return h.digest()
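# Illustrative sketch, not part of the upstream diff: the header and git-style
# hash for a 3-byte blob; this matches what `git hash-object` computes for the
# same content.
import hashlib

from swh.model.hashutil import git_object_header, hash_git_data

assert git_object_header("blob", 3) == b"blob 3\x00"
assert hash_git_data(b"foo", "blob") == hashlib.sha1(b"blob 3\x00foo").digest()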
@functools.lru_cache()
def hash_to_hex(hash):
"""Converts a hash (in hex or bytes form) to its hexadecimal ascii form
Args:
hash (str or bytes): a :class:`bytes` hash or a :class:`str` containing
the hexadecimal form of the hash
Returns:
str: the hexadecimal form of the hash
"""
if isinstance(hash, str):
return hash
return binascii.hexlify(hash).decode("ascii")
@functools.lru_cache()
def hash_to_bytehex(hash):
"""Converts a hash to its hexadecimal bytes representation
Args:
hash (bytes): a :class:`bytes` hash
Returns:
bytes: the hexadecimal form of the hash, as :class:`bytes`
"""
return binascii.hexlify(hash)
@functools.lru_cache()
def hash_to_bytes(hash):
"""Converts a hash (in hex or bytes form) to its raw bytes form
Args:
hash (str or bytes): a :class:`bytes` hash or a :class:`str` containing
the hexadecimal form of the hash
Returns:
bytes: the :class:`bytes` form of the hash
"""
if isinstance(hash, bytes):
return hash
return bytes.fromhex(hash)
@functools.lru_cache()
def bytehex_to_hash(hex):
"""Converts a hexadecimal bytes representation of a hash to that hash
Args:
hex (bytes): a :class:`bytes` containing the hexadecimal form of the
hash encoded in ascii
Returns:
bytes: the :class:`bytes` form of the hash
"""
return hash_to_bytes(hex.decode())
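# Illustrative sketch, not part of the upstream diff: round-tripping between the
# three hash representations handled by the helpers above (the hex value is made
# up).
from swh.model.hashutil import (
    bytehex_to_hash,
    hash_to_bytehex,
    hash_to_bytes,
    hash_to_hex,
)

hex_hash = "aa" * 20
raw = hash_to_bytes(hex_hash)  # 20 raw bytes
assert hash_to_hex(raw) == hex_hash
assert hash_to_bytehex(raw) == hex_hash.encode("ascii")
assert bytehex_to_hash(hash_to_bytehex(raw)) == raw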
diff --git a/swh/model/hypothesis_strategies.py b/swh/model/hypothesis_strategies.py
index 54f552c..dabecf9 100644
--- a/swh/model/hypothesis_strategies.py
+++ b/swh/model/hypothesis_strategies.py
@@ -1,589 +1,590 @@
# Copyright (C) 2019-2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import datetime
import string
from typing import Sequence
from hypothesis import assume
from hypothesis.extra.dateutil import timezones
from hypothesis.strategies import (
binary,
booleans,
builds,
characters,
composite,
datetimes,
dictionaries,
from_regex,
integers,
just,
lists,
none,
one_of,
sampled_from,
sets,
text,
tuples,
)
from .from_disk import DentryPerms
from .model import (
BaseContent,
Content,
Directory,
DirectoryEntry,
MetadataAuthority,
MetadataFetcher,
ObjectType,
Origin,
OriginVisit,
OriginVisitStatus,
Person,
RawExtrinsicMetadata,
Release,
Revision,
RevisionType,
SkippedContent,
Snapshot,
SnapshotBranch,
TargetType,
Timestamp,
TimestampWithTimezone,
)
from .swhids import ExtendedObjectType, ExtendedSWHID
pgsql_alphabet = characters(
blacklist_categories=("Cs",), blacklist_characters=["\u0000"]
) # postgresql does not like these
def optional(strategy):
return one_of(none(), strategy)
def pgsql_text():
return text(alphabet=pgsql_alphabet)
def sha1_git():
return binary(min_size=20, max_size=20)
def sha1():
return binary(min_size=20, max_size=20)
def binaries_without_bytes(blacklist: Sequence[int]):
"""Like hypothesis.strategies.binary, but takes a sequence of bytes that
should not be included."""
return lists(sampled_from([i for i in range(256) if i not in blacklist])).map(bytes)
@composite
def extended_swhids(draw):
object_type = draw(sampled_from(ExtendedObjectType))
object_id = draw(sha1_git())
return ExtendedSWHID(object_type=object_type, object_id=object_id)
def aware_datetimes():
# datetimes in Software Heritage are not used for software artifacts
# (which may be much older than 2000), but only for objects like scheduler
# task runs, and origin visits, which were created by Software Heritage,
# so at least in 2015.
# We're forbidding old datetimes, because until 1956, many timezones had seconds
# in their "UTC offsets" (see
# <https://en.wikipedia.org/wiki/Time_zone#Worldwide_time_zones>), which is not
# encodable in ISO8601; and we need our datetimes to be ISO8601-encodable in the
# RPC protocol
min_value = datetime.datetime(2000, 1, 1, 0, 0, 0)
return datetimes(min_value=min_value, timezones=timezones())
@composite
def iris(draw):
protocol = draw(sampled_from(["git", "http", "https", "deb"]))
domain = draw(from_regex(r"\A([a-z]([a-z0-9é🏛️-]*)\.){1,3}([a-z0-9é])+\Z"))
return "%s://%s" % (protocol, domain)
@composite
def persons_d(draw):
fullname = draw(binary())
email = draw(optional(binary()))
name = draw(optional(binary()))
assume(not (len(fullname) == 32 and email is None and name is None))
return dict(fullname=fullname, name=name, email=email)
def persons():
return persons_d().map(Person.from_dict)
def timestamps_d():
max_seconds = datetime.datetime.max.replace(
tzinfo=datetime.timezone.utc
).timestamp()
min_seconds = datetime.datetime.min.replace(
tzinfo=datetime.timezone.utc
).timestamp()
return builds(
dict,
seconds=integers(min_seconds, max_seconds),
microseconds=integers(0, 1000000),
)
def timestamps():
return timestamps_d().map(Timestamp.from_dict)
@composite
def timestamps_with_timezone_d(
draw,
timestamp=timestamps_d(),
offset=integers(min_value=-14 * 60, max_value=14 * 60),
negative_utc=booleans(),
):
timestamp = draw(timestamp)
offset = draw(offset)
negative_utc = draw(negative_utc)
assume(not (negative_utc and offset))
return dict(timestamp=timestamp, offset=offset, negative_utc=negative_utc)
timestamps_with_timezone = timestamps_with_timezone_d().map(
TimestampWithTimezone.from_dict
)
def origins_d():
return builds(dict, url=iris())
def origins():
return origins_d().map(Origin.from_dict)
def origin_visits_d():
return builds(
dict,
visit=integers(1, 1000),
origin=iris(),
date=aware_datetimes(),
type=pgsql_text(),
)
def origin_visits():
return origin_visits_d().map(OriginVisit.from_dict)
def metadata_dicts():
return dictionaries(pgsql_text(), pgsql_text())
def origin_visit_statuses_d():
return builds(
dict,
visit=integers(1, 1000),
origin=iris(),
type=optional(sampled_from(["git", "svn", "pypi", "debian"])),
status=sampled_from(
["created", "ongoing", "full", "partial", "not_found", "failed"]
),
date=aware_datetimes(),
snapshot=optional(sha1_git()),
metadata=optional(metadata_dicts()),
)
def origin_visit_statuses():
return origin_visit_statuses_d().map(OriginVisitStatus.from_dict)
@composite
def releases_d(draw):
target_type = sampled_from([x.value for x in ObjectType])
name = binary()
message = optional(binary())
synthetic = booleans()
target = sha1_git()
metadata = optional(revision_metadata())
d = draw(
one_of(
# None author/date:
builds(
dict,
name=name,
message=message,
synthetic=synthetic,
author=none(),
date=none(),
target=target,
target_type=target_type,
metadata=metadata,
),
# non-None author/date:
builds(
dict,
name=name,
message=message,
synthetic=synthetic,
date=timestamps_with_timezone_d(),
author=persons_d(),
target=target,
target_type=target_type,
metadata=metadata,
),
# it is also possible for date to be None but not author, but let's not
# overwhelm hypothesis with this edge case
)
)
raw_manifest = draw(optional(binary()))
if raw_manifest:
d["raw_manifest"] = raw_manifest
return d
def releases():
return releases_d().map(Release.from_dict)
revision_metadata = metadata_dicts
def extra_headers():
return lists(
tuples(binary(min_size=0, max_size=50), binary(min_size=0, max_size=500))
).map(tuple)
@composite
def revisions_d(draw):
d = draw(
one_of(
# None author/committer/date/committer_date
builds(
dict,
message=optional(binary()),
synthetic=booleans(),
author=none(),
committer=none(),
date=none(),
committer_date=none(),
parents=tuples(sha1_git()),
directory=sha1_git(),
type=sampled_from([x.value for x in RevisionType]),
metadata=optional(revision_metadata()),
extra_headers=extra_headers(),
),
# non-None author/committer/date/committer_date
builds(
dict,
message=optional(binary()),
synthetic=booleans(),
author=persons_d(),
committer=persons_d(),
date=timestamps_with_timezone_d(),
committer_date=timestamps_with_timezone_d(),
parents=tuples(sha1_git()),
directory=sha1_git(),
type=sampled_from([x.value for x in RevisionType]),
metadata=optional(revision_metadata()),
extra_headers=extra_headers(),
),
# There are many other combinations, but let's not overwhelm hypothesis
# with these edge cases
)
)
# TODO: metadata['extra_headers'] can have binary keys and values
raw_manifest = draw(optional(binary()))
if raw_manifest:
d["raw_manifest"] = raw_manifest
return d
def revisions():
return revisions_d().map(Revision.from_dict)
def directory_entries_d():
return one_of(
builds(
dict,
name=binaries_without_bytes(b"/"),
target=sha1_git(),
type=just("file"),
perms=one_of(
integers(min_value=0o100000, max_value=0o100777), # regular file
integers(min_value=0o120000, max_value=0o120777), # symlink
),
),
builds(
dict,
name=binaries_without_bytes(b"/"),
target=sha1_git(),
type=just("dir"),
perms=integers(
min_value=DentryPerms.directory,
max_value=DentryPerms.directory + 0o777,
),
),
builds(
dict,
name=binaries_without_bytes(b"/"),
target=sha1_git(),
type=just("rev"),
perms=integers(
- min_value=DentryPerms.revision, max_value=DentryPerms.revision + 0o777,
+ min_value=DentryPerms.revision,
+ max_value=DentryPerms.revision + 0o777,
),
),
)
def directory_entries():
return directory_entries_d().map(DirectoryEntry)
@composite
def directories_d(draw):
d = draw(builds(dict, entries=tuples(directory_entries_d())))
raw_manifest = draw(optional(binary()))
if raw_manifest:
d["raw_manifest"] = raw_manifest
return d
def directories():
return directories_d().map(Directory.from_dict)
def contents_d():
return one_of(present_contents_d(), skipped_contents_d())
def contents():
return one_of(present_contents(), skipped_contents())
def present_contents_d():
return builds(
dict,
data=binary(max_size=4096),
ctime=optional(aware_datetimes()),
status=one_of(just("visible"), just("hidden")),
)
def present_contents():
return present_contents_d().map(lambda d: Content.from_data(**d))
@composite
def skipped_contents_d(draw):
result = BaseContent._hash_data(draw(binary(max_size=4096)))
result.pop("data")
nullify_attrs = draw(
sets(sampled_from(["sha1", "sha1_git", "sha256", "blake2s256"]))
)
for k in nullify_attrs:
result[k] = None
result["reason"] = draw(pgsql_text())
result["status"] = "absent"
result["ctime"] = draw(optional(aware_datetimes()))
return result
def skipped_contents():
return skipped_contents_d().map(SkippedContent.from_dict)
def branch_names():
return binary(min_size=1)
def branch_targets_object_d():
return builds(
dict,
target=sha1_git(),
target_type=sampled_from(
[x.value for x in TargetType if x.value not in ("alias",)]
),
)
def branch_targets_alias_d():
return builds(
dict, target=sha1_git(), target_type=just("alias")
) # TargetType.ALIAS.value))
def branch_targets_d(*, only_objects=False):
if only_objects:
return branch_targets_object_d()
else:
return one_of(branch_targets_alias_d(), branch_targets_object_d())
def branch_targets(*, only_objects=False):
return builds(SnapshotBranch.from_dict, branch_targets_d(only_objects=only_objects))
@composite
def snapshots_d(draw, *, min_size=0, max_size=100, only_objects=False):
branches = draw(
dictionaries(
keys=branch_names(),
values=optional(branch_targets_d(only_objects=only_objects)),
min_size=min_size,
max_size=max_size,
)
)
if not only_objects:
# Make sure aliases point to actual branches
unresolved_aliases = {
branch: target["target"]
for branch, target in branches.items()
if (
target
and target["target_type"] == "alias"
and target["target"] not in branches
)
}
for alias_name, alias_target in unresolved_aliases.items():
# Override alias branch with one pointing to a real object
# if max_size constraint is reached
alias = alias_target if len(branches) < max_size else alias_name
branches[alias] = draw(branch_targets_d(only_objects=True))
# Ensure no cycles between aliases
while True:
try:
snapshot = Snapshot.from_dict(
{
"branches": {
name: branch or None for (name, branch) in branches.items()
}
}
)
except ValueError as e:
for (source, target) in e.args[1]:
branches[source] = draw(branch_targets_d(only_objects=True))
else:
break
return snapshot.to_dict()
def snapshots(*, min_size=0, max_size=100, only_objects=False):
return snapshots_d(
min_size=min_size, max_size=max_size, only_objects=only_objects
).map(Snapshot.from_dict)
def metadata_authorities():
return builds(MetadataAuthority, url=iris(), metadata=just(None))
def metadata_fetchers():
return builds(
MetadataFetcher,
name=text(min_size=1, alphabet=string.printable),
version=text(
min_size=1,
alphabet=string.ascii_letters + string.digits + string.punctuation,
),
metadata=just(None),
)
def raw_extrinsic_metadata():
return builds(
RawExtrinsicMetadata,
target=extended_swhids(),
discovery_date=aware_datetimes(),
authority=metadata_authorities(),
fetcher=metadata_fetchers(),
format=text(min_size=1, alphabet=string.printable),
)
def raw_extrinsic_metadata_d():
return raw_extrinsic_metadata().map(RawExtrinsicMetadata.to_dict)
def objects(blacklist_types=("origin_visit_status",), split_content=False):
"""generates a random couple (type, obj)
where obj is an instance of the Model class corresponding to obj_type.
`blacklist_types` is a list of object types to exclude from the strategy.
If `split_content` is True, generates Content and SkippedContent under different
obj_type, resp. "content" and "skipped_content".
"""
strategies = [
("origin", origins),
("origin_visit", origin_visits),
("origin_visit_status", origin_visit_statuses),
("snapshot", snapshots),
("release", releases),
("revision", revisions),
("directory", directories),
("raw_extrinsic_metadata", raw_extrinsic_metadata),
]
if split_content:
strategies.append(("content", present_contents))
strategies.append(("skipped_content", skipped_contents))
else:
strategies.append(("content", contents))
args = [
obj_gen().map(lambda x, obj_type=obj_type: (obj_type, x))
for (obj_type, obj_gen) in strategies
if obj_type not in blacklist_types
]
return one_of(*args)
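# Illustrative sketch, not part of the upstream diff: plugging the strategy into a
# hypothesis test, with imports written as an external test module would use them
# (hypothesis is a testing dependency of swh.model). The round-trip property
# asserted below is only an example.
from hypothesis import given

from swh.model.hypothesis_strategies import objects


@given(objects(blacklist_types=("origin_visit_status",)))
def test_to_dict_from_dict_roundtrip(typed_object):
    (obj_type, obj) = typed_object
    assert type(obj).from_dict(obj.to_dict()) == obj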
def object_dicts(blacklist_types=("origin_visit_status",), split_content=False):
"""generates a random couple (type, dict)
where dict is suitable for <ModelForType>.from_dict() factory methods.
`blacklist_types` is a list of object types to exclude from the strategy.
If `split_content` is True, generates Content and SkippedContent under different
obj_type, resp. "content" and "skipped_content".
"""
strategies = [
("origin", origins_d),
("origin_visit", origin_visits_d),
("origin_visit_status", origin_visit_statuses_d),
("snapshot", snapshots_d),
("release", releases_d),
("revision", revisions_d),
("directory", directories_d),
("raw_extrinsic_metadata", raw_extrinsic_metadata_d),
]
if split_content:
strategies.append(("content", present_contents_d))
strategies.append(("skipped_content", skipped_contents_d))
else:
strategies.append(("content", contents_d))
args = [
obj_gen().map(lambda x, obj_type=obj_type: (obj_type, x))
for (obj_type, obj_gen) in strategies
if obj_type not in blacklist_types
]
return one_of(*args)
diff --git a/swh/model/merkle.py b/swh/model/merkle.py
index 8934ad1..ab6b8ea 100644
--- a/swh/model/merkle.py
+++ b/swh/model/merkle.py
@@ -1,315 +1,315 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
"""Merkle tree data structure"""
import abc
from collections.abc import Mapping
from typing import Dict, Iterator, List, Set
def deep_update(left, right):
"""Recursively update the left mapping with deeply nested values from the right
mapping.
This function is useful to merge the results of several calls to
:func:`MerkleNode.collect`.
Arguments:
left: a mapping (modified by the update operation)
right: a mapping
Returns:
the left mapping, updated with nested values from the right mapping
Example:
>>> a = {
... 'key1': {
... 'key2': {
... 'key3': 'value1/2/3',
... },
... },
... }
>>> deep_update(a, {
... 'key1': {
... 'key2': {
... 'key4': 'value1/2/4',
... },
... },
... }) == {
... 'key1': {
... 'key2': {
... 'key3': 'value1/2/3',
... 'key4': 'value1/2/4',
... },
... },
... }
True
>>> deep_update(a, {
... 'key1': {
... 'key2': {
... 'key3': 'newvalue1/2/3',
... },
... },
... }) == {
... 'key1': {
... 'key2': {
... 'key3': 'newvalue1/2/3',
... 'key4': 'value1/2/4',
... },
... },
... }
True
"""
for key, rvalue in right.items():
if isinstance(rvalue, Mapping):
new_lvalue = deep_update(left.get(key, {}), rvalue)
left[key] = new_lvalue
else:
left[key] = rvalue
return left
class MerkleNode(dict, metaclass=abc.ABCMeta):
"""Representation of a node in a Merkle Tree.
A (generalized) `Merkle Tree`_ is a tree in which every node is labeled
with a hash of its own data and the hash of its children.
.. _Merkle Tree: https://en.wikipedia.org/wiki/Merkle_tree
In pseudocode::
node.hash = hash(node.data
+ sum(child.hash for child in node.children))
This class efficiently implements the Merkle Tree data structure on top of
a Python :class:`dict`, minimizing hash computations and new data
collections when updating nodes.
Node data is stored in the :attr:`data` attribute, while (named) children
are stored as items of the underlying dictionary.
Addition, update and removal of objects are instrumented to automatically
invalidate the hashes of the current node as well as its registered
parents; it also resets the collection status of the objects so that the updated
objects can be collected.
The collection of updated data from the tree is implemented through the
:func:`collect` function and associated helpers.
"""
__slots__ = ["parents", "data", "__hash", "collected"]
data: Dict
"""data associated to the current node"""
parents: List
"""known parents of the current node"""
collected: bool
"""whether the current node has been collected"""
def __init__(self, data=None):
super().__init__()
self.parents = []
self.data = data
self.__hash = None
self.collected = False
def __eq__(self, other):
return (
isinstance(other, MerkleNode)
and super().__eq__(other)
and self.data == other.data
)
def __ne__(self, other):
return not self.__eq__(other)
def invalidate_hash(self):
"""Invalidate the cached hash of the current node."""
if not self.__hash:
return
self.__hash = None
self.collected = False
for parent in self.parents:
parent.invalidate_hash()
def update_hash(self, *, force=False):
"""Recursively compute the hash of the current node.
Args:
force (bool): invalidate the cache and force the computation for
this node and all children.
"""
if self.__hash and not force:
return self.__hash
if force:
self.invalidate_hash()
for child in self.values():
child.update_hash(force=force)
self.__hash = self.compute_hash()
return self.__hash
@property
def hash(self):
"""The hash of the current node, as calculated by
:func:`compute_hash`.
"""
return self.update_hash()
@abc.abstractmethod
def compute_hash(self):
"""Compute the hash of the current node.
The hash should depend on the data of the node, as well as on hashes
of the children nodes.
"""
raise NotImplementedError("Must implement compute_hash method")
def __setitem__(self, name, new_child):
"""Add a child, invalidating the current hash"""
self.invalidate_hash()
super().__setitem__(name, new_child)
new_child.parents.append(self)
def __delitem__(self, name):
"""Remove a child, invalidating the current hash"""
if name in self:
self.invalidate_hash()
self[name].parents.remove(self)
super().__delitem__(name)
else:
raise KeyError(name)
def update(self, new_children):
"""Add several named children from a dictionary"""
if not new_children:
return
self.invalidate_hash()
for name, new_child in new_children.items():
new_child.parents.append(self)
if name in self:
self[name].parents.remove(self)
super().update(new_children)
def get_data(self, **kwargs):
"""Retrieve and format the collected data for the current node, for use by
:func:`collect`.
Can be overridden, for instance when you want the collected data to
contain information about the child nodes.
Arguments:
kwargs: allow subclasses to alter behaviour depending on how
:func:`collect` is called.
Returns:
data formatted for :func:`collect`
"""
return self.data
def collect_node(self, **kwargs):
"""Collect the data for the current node, for use by :func:`collect`.
Arguments:
kwargs: passed as-is to :func:`get_data`.
Returns:
A :class:`dict` compatible with :func:`collect`.
"""
if not self.collected:
self.collected = True
return {self.object_type: {self.hash: self.get_data(**kwargs)}}
else:
return {}
def collect(self, **kwargs):
"""Collect the data for all nodes in the subtree rooted at `self`.
The data is deduplicated by type and by hash.
Arguments:
kwargs: passed as-is to :func:`get_data`.
Returns:
A :class:`dict` with the following structure::
{
'typeA': {
node1.hash: node1.get_data(),
node2.hash: node2.get_data(),
},
'typeB': {
node3.hash: node3.get_data(),
...
},
...
}
"""
ret = self.collect_node(**kwargs)
for child in self.values():
deep_update(ret, child.collect(**kwargs))
return ret
def reset_collect(self):
"""Recursively unmark collected nodes in the subtree rooted at `self`.
This lets the caller use :func:`collect` again.
"""
self.collected = False
for child in self.values():
child.reset_collect()
def iter_tree(self, dedup=True) -> Iterator["MerkleNode"]:
"""Yields all children nodes, recursively. Common nodes are deduplicated
- by default (deduplication can be turned off setting the given argument
- 'dedup' to False).
+ by default (deduplication can be turned off setting the given argument
+ 'dedup' to False).
"""
yield from self._iter_tree(set(), dedup)
def _iter_tree(self, seen: Set[bytes], dedup) -> Iterator["MerkleNode"]:
if self.hash not in seen:
if dedup:
seen.add(self.hash)
yield self
for child in self.values():
yield from child._iter_tree(seen=seen, dedup=dedup)
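# Illustrative sketch, not part of the upstream diff: a toy MerkleNode subclass
# whose hash is the sha1 of its own data followed by its children's hashes. The
# class and the values below are hypothetical and only demonstrate the API.
import hashlib


class ToyNode(MerkleNode):
    object_type = "toy"

    def compute_hash(self):
        h = hashlib.sha1(self.data or b"")
        for name in sorted(self):
            h.update(self[name].hash)
        return h.digest()


root = ToyNode(b"root")
root[b"child"] = ToyNode(b"child data")
first_hash = root.hash  # computed lazily and cached
root[b"other"] = ToyNode(b"more")  # adding a child invalidates the cached hash
assert root.hash != first_hash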
class MerkleLeaf(MerkleNode):
"""A leaf to a Merkle tree.
A Merkle leaf is simply a Merkle node with children disabled.
"""
__slots__ = [] # type: List[str]
def __setitem__(self, name, child):
raise ValueError("%s is a leaf" % self.__class__.__name__)
def __getitem__(self, name):
raise ValueError("%s is a leaf" % self.__class__.__name__)
def __delitem__(self, name):
raise ValueError("%s is a leaf" % self.__class__.__name__)
def update(self, new_children):
"""Children update operation. Disabled for leaves."""
raise ValueError("%s is a leaf" % self.__class__.__name__)
diff --git a/swh/model/model.py b/swh/model/model.py
index 3eca674..508d41c 100644
--- a/swh/model/model.py
+++ b/swh/model/model.py
@@ -1,1515 +1,1549 @@
# Copyright (C) 2018-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
"""
Implementation of Software Heritage's data model
See :ref:`data-model` for an overview of the data model.
The classes defined in this module are immutable
`attrs objects <https://attrs.org/>`__ and enums.
All classes define a ``from_dict`` class method and a ``to_dict``
method to convert between them and msgpack-serializable objects.
"""
from abc import ABCMeta, abstractmethod
import datetime
from enum import Enum
import hashlib
-from typing import Any, Dict, Iterable, Optional, Tuple, TypeVar, Union
+from typing import Any, Dict, Iterable, Optional, Tuple, Type, TypeVar, Union
import attr
from attrs_strict import AttributeTypeError
import dateutil.parser
import iso8601
from typing_extensions import Final
from . import git_objects
from .collections import ImmutableDict
from .hashutil import DEFAULT_ALGORITHMS, MultiHash, hash_to_hex
from .swhids import CoreSWHID
from .swhids import ExtendedObjectType as SwhidExtendedObjectType
from .swhids import ExtendedSWHID
from .swhids import ObjectType as SwhidObjectType
class MissingData(Exception):
"""Raised by `Content.with_data` when it has no way of fetching the
data (but not when fetching the data fails)."""
pass
KeyType = Union[Dict[str, str], Dict[str, bytes], bytes]
"""The type returned by BaseModel.unique_key()."""
SHA1_SIZE = 20
_OFFSET_CHARS = frozenset(b"+-0123456789")
# TODO: Limit this to 20 bytes
Sha1Git = bytes
Sha1 = bytes
KT = TypeVar("KT")
VT = TypeVar("VT")
def hash_repr(h: bytes) -> str:
if h is None:
return "None"
else:
return f"hash_to_bytes('{hash_to_hex(h)}')"
def freeze_optional_dict(
d: Union[None, Dict[KT, VT], ImmutableDict[KT, VT]] # type: ignore
) -> Optional[ImmutableDict[KT, VT]]:
if isinstance(d, dict):
return ImmutableDict(d)
else:
return d
def dictify(value):
"Helper function used by BaseModel.to_dict()"
if isinstance(value, BaseModel):
return value.to_dict()
elif isinstance(value, (CoreSWHID, ExtendedSWHID)):
return str(value)
elif isinstance(value, Enum):
return value.value
elif isinstance(value, (dict, ImmutableDict)):
return {k: dictify(v) for k, v in value.items()}
elif isinstance(value, tuple):
return tuple(dictify(v) for v in value)
else:
return value
def _check_type(type_, value):
if type_ is object or type_ is Any:
return True
if type_ is None:
return value is None
origin = getattr(type_, "__origin__", None)
# Non-generic type, check it directly
if origin is None:
# This is functionally equivalent to using just this:
# return isinstance(value, type)
# but using type equality before isinstance allows very quick checks
# when the exact class is used (which is the overwhelming majority of cases)
# while still allowing subclasses to be used.
return type(value) == type_ or isinstance(value, type_)
# Check the type of the value itself
#
# For the same reason as above, this condition is functionally equivalent to:
# if origin is not Union and not isinstance(value, origin):
if origin is not Union and type(value) != origin and not isinstance(value, origin):
return False
# Then, if it's a container, check its items.
if origin is tuple:
args = type_.__args__
if len(args) == 2 and args[1] is Ellipsis:
# Infinite tuple
return all(_check_type(args[0], item) for item in value)
else:
# Finite tuple
if len(args) != len(value):
return False
return all(
_check_type(item_type, item) for (item_type, item) in zip(args, value)
)
elif origin is Union:
args = type_.__args__
return any(_check_type(variant, value) for variant in args)
elif origin is ImmutableDict:
(key_type, value_type) = type_.__args__
return all(
_check_type(key_type, key) and _check_type(value_type, value)
for (key, value) in value.items()
)
else:
# No need to check dict or list, because they are converted to ImmutableDict
# and tuple respectively.
raise NotImplementedError(f"Type-checking {type_}")
def type_validator():
"""Like attrs_strict.type_validator(), but stricter.
It is an attrs validator, which checks attributes have the specified type,
using type equality instead of ``isinstance()``, for improved performance
"""
def validator(instance, attribute, value):
if not _check_type(attribute.type, value):
raise AttributeTypeError(value, attribute)
return validator
ModelType = TypeVar("ModelType", bound="BaseModel")
class BaseModel:
"""Base class for SWH model classes.
Provides serialization/deserialization to/from Python dictionaries,
that are suitable for JSON/msgpack-like formats."""
__slots__ = ()
def to_dict(self):
"""Wrapper of `attr.asdict` that can be overridden by subclasses
that have special handling of some of the fields."""
return dictify(attr.asdict(self, recurse=False))
@classmethod
def from_dict(cls, d):
"""Takes a dictionary representing a tree of SWH objects, and
recursively builds the corresponding objects."""
return cls(**d)
def anonymize(self: ModelType) -> Optional[ModelType]:
"""Returns an anonymized version of the object, if needed.
If the object model does not need/support anonymization, returns None.
"""
return None
def unique_key(self) -> KeyType:
"""Returns a unique key for this object, that can be used for
deduplication."""
raise NotImplementedError(f"unique_key for {self}")
def check(self) -> None:
"""Performs internal consistency checks, and raises an error if one fails."""
attr.validate(self)
def _compute_hash_from_manifest(manifest: bytes) -> Sha1Git:
return hashlib.new("sha1", manifest).digest()
class HashableObject(metaclass=ABCMeta):
"""Mixin to automatically compute object identifier hash when
the associated model is instantiated."""
__slots__ = ()
id: Sha1Git
def compute_hash(self) -> bytes:
"""Derived model classes must implement this to compute
the object hash.
This method is called by the object initialization if the `id`
attribute is set to an empty value.
"""
return self._compute_hash_from_attributes()
@abstractmethod
def _compute_hash_from_attributes(self) -> Sha1Git:
raise NotImplementedError(f"_compute_hash_from_attributes for {self}")
def __attrs_post_init__(self):
if not self.id:
obj_id = self.compute_hash()
object.__setattr__(self, "id", obj_id)
def unique_key(self) -> KeyType:
return self.id
def check(self) -> None:
super().check() # type: ignore
if self.id != self.compute_hash():
raise ValueError("'id' does not match recomputed hash.")
class HashableObjectWithManifest(HashableObject):
"""Derived class of HashableObject, for objects that may need to store
verbatim git objects as ``raw_manifest`` to preserve original hashes."""
__slots__ = ()
raw_manifest: Optional[bytes] = None
"""Stores the original content of git objects when they cannot be faithfully
represented using only the other attributes.
This should only be used as a last resort, and only set in the Git loader,
for objects too corrupt to fit the data model."""
def to_dict(self):
d = super().to_dict()
if d["raw_manifest"] is None:
del d["raw_manifest"]
return d
def compute_hash(self) -> bytes:
"""Derived model classes must implement this to compute
the object hash.
This method is called by the object initialization if the `id`
attribute is set to an empty value.
"""
if self.raw_manifest is None:
return super().compute_hash()
else:
return _compute_hash_from_manifest(self.raw_manifest)
def check(self) -> None:
super().check()
if (
self.raw_manifest is not None
and self.id == self._compute_hash_from_attributes()
):
raise ValueError(
f"{self} has a non-none raw_manifest attribute, but does not need it."
)
@attr.s(frozen=True, slots=True)
class Person(BaseModel):
"""Represents the author/committer of a revision or release."""
object_type: Final = "person"
fullname = attr.ib(type=bytes, validator=type_validator())
name = attr.ib(type=Optional[bytes], validator=type_validator(), eq=False)
email = attr.ib(type=Optional[bytes], validator=type_validator(), eq=False)
@classmethod
def from_fullname(cls, fullname: bytes):
"""Returns a Person object, by guessing the name and email from the
fullname, in the `name <email>` format.
The fullname is left unchanged."""
if fullname is None:
raise TypeError("fullname is None.")
name: Optional[bytes]
email: Optional[bytes]
try:
open_bracket = fullname.index(b"<")
except ValueError:
name = fullname
email = None
else:
raw_name = fullname[:open_bracket]
raw_email = fullname[open_bracket + 1 :]
if not raw_name:
name = None
else:
name = raw_name.strip()
try:
close_bracket = raw_email.rindex(b">")
except ValueError:
email = raw_email
else:
email = raw_email[:close_bracket]
- return Person(name=name or None, email=email or None, fullname=fullname,)
+ return Person(
+ name=name or None,
+ email=email or None,
+ fullname=fullname,
+ )
def anonymize(self) -> "Person":
"""Returns an anonymized version of the Person object.
Anonymization simply returns a Person whose fullname is the hash of the
original fullname, with unset name and email.
"""
return Person(
- fullname=hashlib.sha256(self.fullname).digest(), name=None, email=None,
+ fullname=hashlib.sha256(self.fullname).digest(),
+ name=None,
+ email=None,
)
@classmethod
def from_dict(cls, d):
"""
If the fullname is missing, construct a fullname
using the following heuristics: if the name value is None, we return the
email in angle brackets, else, we return the name, a space, and the email
in angle brackets.
"""
if "fullname" not in d:
parts = []
if d["name"] is not None:
parts.append(d["name"])
if d["email"] is not None:
parts.append(b"".join([b"<", d["email"], b">"]))
fullname = b" ".join(parts)
d = {**d, "fullname": fullname}
d = {"name": None, "email": None, **d}
return super().from_dict(d)
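# Illustrative sketch, not part of the upstream diff: how Person.from_fullname
# splits a classic `name <email>` byte string; the person below is made up, and
# the fullname is always kept verbatim.
example_person = Person.from_fullname(b"Jane Doe <jdoe@example.org>")
assert example_person.name == b"Jane Doe"
assert example_person.email == b"jdoe@example.org"
assert example_person.fullname == b"Jane Doe <jdoe@example.org>"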
@attr.s(frozen=True, slots=True)
class Timestamp(BaseModel):
"""Represents a naive timestamp from a VCS."""
object_type: Final = "timestamp"
seconds = attr.ib(type=int, validator=type_validator())
microseconds = attr.ib(type=int, validator=type_validator())
@seconds.validator
def check_seconds(self, attribute, value):
"""Check that seconds fit in a 64-bits signed integer."""
- if not (-(2 ** 63) <= value < 2 ** 63):
+ if not (-(2**63) <= value < 2**63):
raise ValueError("Seconds must be a signed 64-bits integer.")
@microseconds.validator
def check_microseconds(self, attribute, value):
"""Checks that microseconds are positive and < 1000000."""
- if not (0 <= value < 10 ** 6):
+ if not (0 <= value < 10**6):
raise ValueError("Microseconds must be in [0, 1000000[.")
@attr.s(frozen=True, slots=True)
class TimestampWithTimezone(BaseModel):
"""Represents a TZ-aware timestamp from a VCS."""
object_type: Final = "timestamp_with_timezone"
timestamp = attr.ib(type=Timestamp, validator=type_validator())
offset_bytes = attr.ib(type=bytes, validator=type_validator())
"""Raw git representation of the timezone, as an offset from UTC.
It should follow this format: ``+HHMM`` or ``-HHMM`` (including ``+0000`` and
``-0000``).
However, when created from git objects, it must be the exact bytes used in the
original objects, so it may differ from this format when they do.
"""
@classmethod
def from_numeric_offset(
cls, timestamp: Timestamp, offset: int, negative_utc: bool
) -> "TimestampWithTimezone":
"""Returns a :class:`TimestampWithTimezone` instance from the old dictionary
format (with ``offset`` and ``negative_utc`` instead of ``offset_bytes``).
"""
negative = offset < 0 or negative_utc
(hours, minutes) = divmod(abs(offset), 60)
offset_bytes = f"{'-' if negative else '+'}{hours:02}{minutes:02}".encode()
tstz = TimestampWithTimezone(timestamp=timestamp, offset_bytes=offset_bytes)
assert tstz.offset_minutes() == offset, (tstz.offset_minutes(), offset)
return tstz
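# Illustrative sketch, not part of the upstream diff: how the old numeric
# (offset, negative_utc) representation maps to offset_bytes (the timestamps
# below are made up).
#
#     TimestampWithTimezone.from_numeric_offset(
#         Timestamp(seconds=0, microseconds=0), offset=120, negative_utc=False
#     ).offset_bytes == b"+0200"
#     TimestampWithTimezone.from_numeric_offset(
#         Timestamp(seconds=0, microseconds=0), offset=0, negative_utc=True
#     ).offset_bytes == b"-0000"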
@classmethod
def from_dict(
cls, time_representation: Union[Dict, datetime.datetime, int]
) -> "TimestampWithTimezone":
"""Builds a TimestampWithTimezone from any of the formats
accepted by :func:`swh.model.normalize_timestamp`."""
# TODO: this accepts way more types than just dicts; find a better
# name
if isinstance(time_representation, dict):
ts = time_representation["timestamp"]
if isinstance(ts, dict):
seconds = ts.get("seconds", 0)
microseconds = ts.get("microseconds", 0)
elif isinstance(ts, int):
seconds = ts
microseconds = 0
else:
raise ValueError(
f"TimestampWithTimezone.from_dict received non-integer timestamp "
f"member {ts!r}"
)
timestamp = Timestamp(seconds=seconds, microseconds=microseconds)
if "offset_bytes" in time_representation:
return cls(
timestamp=timestamp,
offset_bytes=time_representation["offset_bytes"],
)
else:
# old format
offset = time_representation["offset"]
negative_utc = time_representation.get("negative_utc") or False
return cls.from_numeric_offset(timestamp, offset, negative_utc)
elif isinstance(time_representation, datetime.datetime):
# TODO: warn when using from_dict() on a datetime
utcoffset = time_representation.utcoffset()
time_representation = time_representation.astimezone(datetime.timezone.utc)
microseconds = time_representation.microsecond
if microseconds:
time_representation = time_representation.replace(microsecond=0)
seconds = int(time_representation.timestamp())
if utcoffset is None:
raise ValueError(
f"TimestampWithTimezone.from_dict received datetime without "
f"timezone: {time_representation}"
)
# utcoffset is an integer number of minutes
seconds_offset = utcoffset.total_seconds()
offset = int(seconds_offset) // 60
# TODO: warn if remainder is not zero
return cls.from_numeric_offset(
Timestamp(seconds=seconds, microseconds=microseconds), offset, False
)
elif isinstance(time_representation, int):
# TODO: warn when using from_dict() on an int
seconds = time_representation
timestamp = Timestamp(seconds=time_representation, microseconds=0)
return cls(timestamp=timestamp, offset_bytes=b"+0000")
else:
raise ValueError(
f"TimestampWithTimezone.from_dict received non-integer timestamp: "
f"{time_representation!r}"
)
@classmethod
def from_datetime(cls, dt: datetime.datetime) -> "TimestampWithTimezone":
return cls.from_dict(dt)
def to_datetime(self) -> datetime.datetime:
"""Convert to a datetime (with a timezone set to the recorded fixed UTC offset)
Beware that this conversion can be lossy: ``-0000`` and 'weird' offsets
cannot be represented. Also note that it may fail due to type overflow.
"""
timestamp = datetime.datetime.fromtimestamp(
self.timestamp.seconds,
datetime.timezone(datetime.timedelta(minutes=self.offset_minutes())),
)
timestamp = timestamp.replace(microsecond=self.timestamp.microseconds)
return timestamp
@classmethod
def from_iso8601(cls, s):
- """Builds a TimestampWithTimezone from an ISO8601-formatted string.
- """
+ """Builds a TimestampWithTimezone from an ISO8601-formatted string."""
dt = iso8601.parse_date(s)
tstz = cls.from_datetime(dt)
if dt.tzname() == "-00:00":
assert tstz.offset_bytes == b"+0000"
tstz = attr.evolve(tstz, offset_bytes=b"-0000")
return tstz
@staticmethod
def _parse_offset_bytes(offset_bytes: bytes) -> int:
"""Parses an ``offset_bytes`` value (in Git's ``[+-]HHMM`` format),
and returns the corresponding numeric values (in number of minutes).
Tries to account for some mistakes in the format, to support incorrect
Git implementations.
>>> TimestampWithTimezone._parse_offset_bytes(b"+0000")
0
>>> TimestampWithTimezone._parse_offset_bytes(b"-0000")
0
>>> TimestampWithTimezone._parse_offset_bytes(b"+0200")
120
>>> TimestampWithTimezone._parse_offset_bytes(b"-0200")
-120
>>> TimestampWithTimezone._parse_offset_bytes(b"+200")
120
>>> TimestampWithTimezone._parse_offset_bytes(b"-200")
-120
>>> TimestampWithTimezone._parse_offset_bytes(b"+02")
120
>>> TimestampWithTimezone._parse_offset_bytes(b"-02")
-120
>>> TimestampWithTimezone._parse_offset_bytes(b"+0010")
10
>>> TimestampWithTimezone._parse_offset_bytes(b"-0010")
-10
>>> TimestampWithTimezone._parse_offset_bytes(b"+200000000000000000")
0
>>> TimestampWithTimezone._parse_offset_bytes(b"+0160") # 60 minutes...
0
"""
offset_str = offset_bytes.decode()
assert offset_str[0] in "+-"
sign = int(offset_str[0] + "1")
if len(offset_str) <= 3:
hours = int(offset_str[1:])
minutes = 0
else:
hours = int(offset_str[1:-2])
minutes = int(offset_str[-2:])
offset = sign * (hours * 60 + minutes)
- if (0 <= minutes <= 59) and (-(2 ** 15) <= offset < 2 ** 15):
+ if (0 <= minutes <= 59) and (-(2**15) <= offset < 2**15):
return offset
else:
# can't parse it to a reasonable value; give up and pretend it's UTC.
return 0
def offset_minutes(self):
"""Returns the offset, as a number of minutes since UTC.
>>> TimestampWithTimezone(
... Timestamp(seconds=1642765364, microseconds=0), offset_bytes=b"+0000"
... ).offset_minutes()
0
>>> TimestampWithTimezone(
... Timestamp(seconds=1642765364, microseconds=0), offset_bytes=b"+0200"
... ).offset_minutes()
120
>>> TimestampWithTimezone(
... Timestamp(seconds=1642765364, microseconds=0), offset_bytes=b"-0200"
... ).offset_minutes()
-120
>>> TimestampWithTimezone(
... Timestamp(seconds=1642765364, microseconds=0), offset_bytes=b"+0530"
... ).offset_minutes()
330
"""
return self._parse_offset_bytes(self.offset_bytes)
@attr.s(frozen=True, slots=True)
class Origin(HashableObject, BaseModel):
"""Represents a software source: a VCS and an URL."""
object_type: Final = "origin"
url = attr.ib(type=str, validator=type_validator())
id = attr.ib(type=Sha1Git, validator=type_validator(), default=b"")
def unique_key(self) -> KeyType:
return {"url": self.url}
def _compute_hash_from_attributes(self) -> bytes:
return _compute_hash_from_manifest(self.url.encode("utf-8"))
def swhid(self) -> ExtendedSWHID:
"""Returns a SWHID representing this origin."""
return ExtendedSWHID(
- object_type=SwhidExtendedObjectType.ORIGIN, object_id=self.id,
+ object_type=SwhidExtendedObjectType.ORIGIN,
+ object_id=self.id,
)
@attr.s(frozen=True, slots=True)
class OriginVisit(BaseModel):
"""Represents an origin visit with a given type at a given point in time, by a
SWH loader."""
object_type: Final = "origin_visit"
origin = attr.ib(type=str, validator=type_validator())
date = attr.ib(type=datetime.datetime, validator=type_validator())
type = attr.ib(type=str, validator=type_validator())
"""Should not be set before calling 'origin_visit_add()'."""
visit = attr.ib(type=Optional[int], validator=type_validator(), default=None)
@date.validator
def check_date(self, attribute, value):
"""Checks the date has a timezone."""
if value is not None and value.tzinfo is None:
raise ValueError("date must be a timezone-aware datetime.")
def to_dict(self):
"""Serializes the date as a string and omits the visit id if it is
`None`."""
ov = super().to_dict()
if ov["visit"] is None:
del ov["visit"]
return ov
def unique_key(self) -> KeyType:
return {"origin": self.origin, "date": str(self.date)}
@attr.s(frozen=True, slots=True)
class OriginVisitStatus(BaseModel):
- """Represents a visit update of an origin at a given point in time.
-
- """
+ """Represents a visit update of an origin at a given point in time."""
object_type: Final = "origin_visit_status"
origin = attr.ib(type=str, validator=type_validator())
visit = attr.ib(type=int, validator=type_validator())
date = attr.ib(type=datetime.datetime, validator=type_validator())
status = attr.ib(
type=str,
validator=attr.validators.in_(
["created", "ongoing", "full", "partial", "not_found", "failed"]
),
)
snapshot = attr.ib(
type=Optional[Sha1Git], validator=type_validator(), repr=hash_repr
)
# Type is optional to be able to use it before adding it to the database model
type = attr.ib(type=Optional[str], validator=type_validator(), default=None)
metadata = attr.ib(
type=Optional[ImmutableDict[str, object]],
validator=type_validator(),
converter=freeze_optional_dict,
default=None,
)
@date.validator
def check_date(self, attribute, value):
"""Checks the date has a timezone."""
if value is not None and value.tzinfo is None:
raise ValueError("date must be a timezone-aware datetime.")
def unique_key(self) -> KeyType:
return {"origin": self.origin, "visit": str(self.visit), "date": str(self.date)}
class TargetType(Enum):
"""The type of content pointed to by a snapshot branch. Usually a
revision or an alias."""
CONTENT = "content"
DIRECTORY = "directory"
REVISION = "revision"
RELEASE = "release"
SNAPSHOT = "snapshot"
ALIAS = "alias"
def __repr__(self):
return f"TargetType.{self.name}"
class ObjectType(Enum):
"""The type of content pointed to by a release. Usually a revision"""
CONTENT = "content"
DIRECTORY = "directory"
REVISION = "revision"
RELEASE = "release"
SNAPSHOT = "snapshot"
def __repr__(self):
return f"ObjectType.{self.name}"
@attr.s(frozen=True, slots=True)
class SnapshotBranch(BaseModel):
"""Represents one of the branches of a snapshot."""
object_type: Final = "snapshot_branch"
target = attr.ib(type=bytes, validator=type_validator(), repr=hash_repr)
target_type = attr.ib(type=TargetType, validator=type_validator())
@target.validator
def check_target(self, attribute, value):
"""Checks the target type is not an alias, checks the target is a
valid sha1_git."""
if self.target_type != TargetType.ALIAS and self.target is not None:
if len(value) != 20:
raise ValueError("Wrong length for bytes identifier: %d" % len(value))
@classmethod
def from_dict(cls, d):
return cls(target=d["target"], target_type=TargetType(d["target_type"]))
@attr.s(frozen=True, slots=True)
class Snapshot(HashableObject, BaseModel):
"""Represents the full state of an origin at a given point in time."""
object_type: Final = "snapshot"
branches = attr.ib(
type=ImmutableDict[bytes, Optional[SnapshotBranch]],
validator=type_validator(),
converter=freeze_optional_dict,
)
id = attr.ib(type=Sha1Git, validator=type_validator(), default=b"", repr=hash_repr)
def _compute_hash_from_attributes(self) -> bytes:
return _compute_hash_from_manifest(git_objects.snapshot_git_object(self))
@classmethod
def from_dict(cls, d):
d = d.copy()
return cls(
branches=ImmutableDict(
(name, SnapshotBranch.from_dict(branch) if branch else None)
for (name, branch) in d.pop("branches").items()
),
**d,
)
def swhid(self) -> CoreSWHID:
"""Returns a SWHID representing this object."""
return CoreSWHID(object_type=SwhidObjectType.SNAPSHOT, object_id=self.id)
@attr.s(frozen=True, slots=True)
class Release(HashableObjectWithManifest, BaseModel):
object_type: Final = "release"
name = attr.ib(type=bytes, validator=type_validator())
message = attr.ib(type=Optional[bytes], validator=type_validator())
target = attr.ib(type=Optional[Sha1Git], validator=type_validator(), repr=hash_repr)
target_type = attr.ib(type=ObjectType, validator=type_validator())
synthetic = attr.ib(type=bool, validator=type_validator())
author = attr.ib(type=Optional[Person], validator=type_validator(), default=None)
date = attr.ib(
type=Optional[TimestampWithTimezone], validator=type_validator(), default=None
)
metadata = attr.ib(
type=Optional[ImmutableDict[str, object]],
validator=type_validator(),
converter=freeze_optional_dict,
default=None,
)
id = attr.ib(type=Sha1Git, validator=type_validator(), default=b"", repr=hash_repr)
raw_manifest = attr.ib(type=Optional[bytes], default=None)
def _compute_hash_from_attributes(self) -> bytes:
return _compute_hash_from_manifest(git_objects.release_git_object(self))
@author.validator
def check_author(self, attribute, value):
"""If the author is `None`, checks the date is `None` too."""
if self.author is None and self.date is not None:
raise ValueError("release date must be None if author is None.")
def to_dict(self):
rel = super().to_dict()
if rel["metadata"] is None:
del rel["metadata"]
return rel
@classmethod
def from_dict(cls, d):
d = d.copy()
if d.get("author"):
d["author"] = Person.from_dict(d["author"])
if d.get("date"):
d["date"] = TimestampWithTimezone.from_dict(d["date"])
return cls(target_type=ObjectType(d.pop("target_type")), **d)
def swhid(self) -> CoreSWHID:
"""Returns a SWHID representing this object."""
return CoreSWHID(object_type=SwhidObjectType.RELEASE, object_id=self.id)
def anonymize(self) -> "Release":
"""Returns an anonymized version of the Release object.
Anonymization consists in replacing the author with an anonymized Person object.
"""
author = self.author and self.author.anonymize()
return attr.evolve(self, author=author)
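# Illustrative sketch (all values are made up): a release pointing at a
# revision; anonymize() replaces the author with a hashed Person object.
_example_release = Release(
    name=b"v1.0.0",
    message=b"first release",
    target=b"\x04" * 20,
    target_type=ObjectType.REVISION,
    synthetic=False,
    author=Person(
        fullname=b"Jane Doe <jane@example.org>",
        name=b"Jane Doe",
        email=b"jane@example.org",
    ),
    date=TimestampWithTimezone(
        timestamp=Timestamp(seconds=1234567890, microseconds=0),
        offset_bytes=b"+0200",
    ),
)
assert _example_release.anonymize().author != _example_release.author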
class RevisionType(Enum):
GIT = "git"
TAR = "tar"
DSC = "dsc"
SUBVERSION = "svn"
MERCURIAL = "hg"
CVS = "cvs"
BAZAAR = "bzr"
def __repr__(self):
return f"RevisionType.{self.name}"
def tuplify_extra_headers(value: Iterable):
return tuple((k, v) for k, v in value)
@attr.s(frozen=True, slots=True)
class Revision(HashableObjectWithManifest, BaseModel):
object_type: Final = "revision"
message = attr.ib(type=Optional[bytes], validator=type_validator())
author = attr.ib(type=Optional[Person], validator=type_validator())
committer = attr.ib(type=Optional[Person], validator=type_validator())
date = attr.ib(type=Optional[TimestampWithTimezone], validator=type_validator())
committer_date = attr.ib(
type=Optional[TimestampWithTimezone], validator=type_validator()
)
type = attr.ib(type=RevisionType, validator=type_validator())
directory = attr.ib(type=Sha1Git, validator=type_validator(), repr=hash_repr)
synthetic = attr.ib(type=bool, validator=type_validator())
metadata = attr.ib(
type=Optional[ImmutableDict[str, object]],
validator=type_validator(),
converter=freeze_optional_dict,
default=None,
)
parents = attr.ib(type=Tuple[Sha1Git, ...], validator=type_validator(), default=())
id = attr.ib(type=Sha1Git, validator=type_validator(), default=b"", repr=hash_repr)
extra_headers = attr.ib(
type=Tuple[Tuple[bytes, bytes], ...],
validator=type_validator(),
converter=tuplify_extra_headers,
default=(),
)
raw_manifest = attr.ib(type=Optional[bytes], default=None)
def __attrs_post_init__(self):
super().__attrs_post_init__()
# ensure metadata is a deep copy of whatever was given, and if needed
# extract extra_headers from there
if self.metadata:
metadata = self.metadata
if not self.extra_headers and "extra_headers" in metadata:
(extra_headers, metadata) = metadata.copy_pop("extra_headers")
object.__setattr__(
- self, "extra_headers", tuplify_extra_headers(extra_headers),
+ self,
+ "extra_headers",
+ tuplify_extra_headers(extra_headers),
)
attr.validate(self)
object.__setattr__(self, "metadata", metadata)
def _compute_hash_from_attributes(self) -> bytes:
return _compute_hash_from_manifest(git_objects.revision_git_object(self))
@author.validator
def check_author(self, attribute, value):
"""If the author is `None`, checks the date is `None` too."""
if self.author is None and self.date is not None:
raise ValueError("revision date must be None if author is None.")
@committer.validator
def check_committer(self, attribute, value):
"""If the committer is `None`, checks the committer_date is `None` too."""
if self.committer is None and self.committer_date is not None:
raise ValueError(
"revision committer_date must be None if committer is None."
)
@classmethod
def from_dict(cls, d):
d = d.copy()
date = d.pop("date")
if date:
date = TimestampWithTimezone.from_dict(date)
committer_date = d.pop("committer_date")
if committer_date:
committer_date = TimestampWithTimezone.from_dict(committer_date)
author = d.pop("author")
if author:
author = Person.from_dict(author)
committer = d.pop("committer")
if committer:
committer = Person.from_dict(committer)
return cls(
author=author,
committer=committer,
date=date,
committer_date=committer_date,
type=RevisionType(d.pop("type")),
parents=tuple(d.pop("parents")), # for BW compat
**d,
)
def swhid(self) -> CoreSWHID:
"""Returns a SWHID representing this object."""
return CoreSWHID(object_type=SwhidObjectType.REVISION, object_id=self.id)
def anonymize(self) -> "Revision":
"""Returns an anonymized version of the Revision object.
Anonymization consists in replacing the author and committer with an anonymized
Person object.
"""
return attr.evolve(
self,
author=None if self.author is None else self.author.anonymize(),
committer=None if self.committer is None else self.committer.anonymize(),
)
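# Illustrative sketch (all values are made up) of the legacy-metadata handling
# in __attrs_post_init__ above: an "extra_headers" key found inside `metadata`
# is moved to the dedicated attribute.
_example_revision = Revision(
    message=b"initial import",
    author=None,
    committer=None,
    date=None,
    committer_date=None,
    type=RevisionType.GIT,
    directory=b"\x02" * 20,
    synthetic=True,
    metadata={
        "original_artifact": [],
        "extra_headers": [(b"gpgsig", b"fake signature")],
    },
)
assert _example_revision.extra_headers == ((b"gpgsig", b"fake signature"),)
assert "extra_headers" not in _example_revision.metadata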
@attr.s(frozen=True, slots=True)
class DirectoryEntry(BaseModel):
object_type: Final = "directory_entry"
name = attr.ib(type=bytes, validator=type_validator())
type = attr.ib(type=str, validator=attr.validators.in_(["file", "dir", "rev"]))
target = attr.ib(type=Sha1Git, validator=type_validator(), repr=hash_repr)
perms = attr.ib(type=int, validator=type_validator(), converter=int, repr=oct)
"""Usually one of the values of `swh.model.from_disk.DentryPerms`."""
@name.validator
def check_name(self, attribute, value):
if b"/" in value:
raise ValueError(f"{value!r} is not a valid directory entry name.")
@attr.s(frozen=True, slots=True)
class Directory(HashableObjectWithManifest, BaseModel):
object_type: Final = "directory"
entries = attr.ib(type=Tuple[DirectoryEntry, ...], validator=type_validator())
id = attr.ib(type=Sha1Git, validator=type_validator(), default=b"", repr=hash_repr)
raw_manifest = attr.ib(type=Optional[bytes], default=None)
def _compute_hash_from_attributes(self) -> bytes:
return _compute_hash_from_manifest(git_objects.directory_git_object(self))
@entries.validator
def check_entries(self, attribute, value):
seen = set()
for entry in value:
if entry.name in seen:
# Cannot use self.swhid() here, self.id may be None
raise ValueError(
f"swh:1:dir:{hash_to_hex(self.id)} has duplicated entry name: "
f"{entry.name!r}"
)
seen.add(entry.name)
@classmethod
def from_dict(cls, d):
d = d.copy()
return cls(
entries=tuple(
DirectoryEntry.from_dict(entry) for entry in d.pop("entries")
),
**d,
)
def swhid(self) -> CoreSWHID:
"""Returns a SWHID representing this object."""
return CoreSWHID(object_type=SwhidObjectType.DIRECTORY, object_id=self.id)
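# Illustrative sketch (made-up target): entries with duplicated names are
# rejected by the validator above, while a well-formed directory gets a SWHID.
_entry = DirectoryEntry(
    name=b"README", type="file", target=b"\x03" * 20, perms=0o100644
)
_duplicate_rejected = False
try:
    Directory(entries=(_entry, _entry))
except ValueError:
    _duplicate_rejected = True
assert _duplicate_rejected
assert str(Directory(entries=(_entry,)).swhid()).startswith("swh:1:dir:")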
@attr.s(frozen=True, slots=True)
class BaseContent(BaseModel):
status = attr.ib(
type=str, validator=attr.validators.in_(["visible", "hidden", "absent"])
)
@staticmethod
def _hash_data(data: bytes):
"""Hash some data, returning most of the fields of a content object"""
d = MultiHash.from_data(data).digest()
d["data"] = data
d["length"] = len(data)
return d
@classmethod
def from_dict(cls, d, use_subclass=True):
if use_subclass:
# Chooses a subclass to instantiate instead.
if d["status"] == "absent":
return SkippedContent.from_dict(d)
else:
return Content.from_dict(d)
else:
return super().from_dict(d)
def get_hash(self, hash_name):
if hash_name not in DEFAULT_ALGORITHMS:
raise ValueError("{} is not a valid hash name.".format(hash_name))
return getattr(self, hash_name)
def hashes(self) -> Dict[str, bytes]:
"""Returns a dictionary {hash_name: hash_value}"""
return {algo: getattr(self, algo) for algo in DEFAULT_ALGORITHMS}
@attr.s(frozen=True, slots=True)
class Content(BaseContent):
object_type: Final = "content"
sha1 = attr.ib(type=bytes, validator=type_validator(), repr=hash_repr)
sha1_git = attr.ib(type=Sha1Git, validator=type_validator(), repr=hash_repr)
sha256 = attr.ib(type=bytes, validator=type_validator(), repr=hash_repr)
blake2s256 = attr.ib(type=bytes, validator=type_validator(), repr=hash_repr)
length = attr.ib(type=int, validator=type_validator())
status = attr.ib(
type=str,
validator=attr.validators.in_(["visible", "hidden"]),
default="visible",
)
data = attr.ib(type=Optional[bytes], validator=type_validator(), default=None)
ctime = attr.ib(
type=Optional[datetime.datetime],
validator=type_validator(),
default=None,
eq=False,
)
@length.validator
def check_length(self, attribute, value):
"""Checks the length is positive."""
if value < 0:
raise ValueError("Length must be positive.")
@ctime.validator
def check_ctime(self, attribute, value):
"""Checks the ctime has a timezone."""
if value is not None and value.tzinfo is None:
raise ValueError("ctime must be a timezone-aware datetime.")
def to_dict(self):
content = super().to_dict()
if content["data"] is None:
del content["data"]
if content["ctime"] is None:
del content["ctime"]
return content
@classmethod
def from_data(cls, data, status="visible", ctime=None) -> "Content":
"""Generate a Content from a given `data` byte string.
This populates the Content with the hashes and length for the data
passed as argument, as well as the data itself.
"""
d = cls._hash_data(data)
d["status"] = status
d["ctime"] = ctime
return cls(**d)
@classmethod
def from_dict(cls, d):
if isinstance(d.get("ctime"), str):
d = d.copy()
d["ctime"] = dateutil.parser.parse(d["ctime"])
return super().from_dict(d, use_subclass=False)
def with_data(self) -> "Content":
"""Loads the `data` attribute; meaning that it is guaranteed not to
be None after this call.
This call is almost a no-op, but subclasses may overload this method
        to lazy-load data (e.g. from disk or objstorage)."""
if self.data is None:
raise MissingData("Content data is None.")
return self
def unique_key(self) -> KeyType:
return self.sha1 # TODO: use a dict of hashes
def swhid(self) -> CoreSWHID:
"""Returns a SWHID representing this object."""
return CoreSWHID(object_type=SwhidObjectType.CONTENT, object_id=self.sha1_git)
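# Illustrative sketch of from_data() above: the hashes and length are derived
# from the payload, and the SWHID is based on sha1_git (the git blob hash).
_example_content = Content.from_data(b"hello world\n")
assert _example_content.length == 12
assert (
    str(_example_content.swhid())
    == "swh:1:cnt:3b18e512dba79e4c8300dd08aeb37f8e728b8dad"
)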
@attr.s(frozen=True, slots=True)
class SkippedContent(BaseContent):
object_type: Final = "skipped_content"
sha1 = attr.ib(type=Optional[bytes], validator=type_validator(), repr=hash_repr)
sha1_git = attr.ib(
type=Optional[Sha1Git], validator=type_validator(), repr=hash_repr
)
sha256 = attr.ib(type=Optional[bytes], validator=type_validator(), repr=hash_repr)
blake2s256 = attr.ib(
type=Optional[bytes], validator=type_validator(), repr=hash_repr
)
length = attr.ib(type=Optional[int], validator=type_validator())
status = attr.ib(type=str, validator=attr.validators.in_(["absent"]))
reason = attr.ib(type=Optional[str], validator=type_validator(), default=None)
origin = attr.ib(type=Optional[str], validator=type_validator(), default=None)
ctime = attr.ib(
type=Optional[datetime.datetime],
validator=type_validator(),
default=None,
eq=False,
)
@reason.validator
def check_reason(self, attribute, value):
"""Checks the reason is full if status != absent."""
assert self.reason == value
if value is None:
raise ValueError("Must provide a reason if content is absent.")
@length.validator
def check_length(self, attribute, value):
"""Checks the length is positive or -1."""
if value < -1:
raise ValueError("Length must be positive or -1.")
@ctime.validator
def check_ctime(self, attribute, value):
"""Checks the ctime has a timezone."""
if value is not None and value.tzinfo is None:
raise ValueError("ctime must be a timezone-aware datetime.")
def to_dict(self):
content = super().to_dict()
if content["origin"] is None:
del content["origin"]
if content["ctime"] is None:
del content["ctime"]
return content
@classmethod
def from_data(
cls, data: bytes, reason: str, ctime: Optional[datetime.datetime] = None
) -> "SkippedContent":
"""Generate a SkippedContent from a given `data` byte string.
This populates the SkippedContent with the hashes and length for the
data passed as argument.
You can use `attr.evolve` on such a generated content to nullify some
of its attributes, e.g. for tests.
"""
d = cls._hash_data(data)
del d["data"]
d["status"] = "absent"
d["reason"] = reason
d["ctime"] = ctime
return cls(**d)
@classmethod
def from_dict(cls, d):
d2 = d.copy()
if d2.pop("data", None) is not None:
raise ValueError('SkippedContent has no "data" attribute %r' % d)
return super().from_dict(d2, use_subclass=False)
def unique_key(self) -> KeyType:
return self.hashes()
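# Illustrative sketch of from_data() above: the hashes are kept but the data
# itself is dropped, and a reason for skipping the content is mandatory.
_example_skipped = SkippedContent.from_data(
    b"some oversized blob", reason="too large"
)
assert _example_skipped.status == "absent"
assert "data" not in _example_skipped.to_dict()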
class MetadataAuthorityType(Enum):
DEPOSIT_CLIENT = "deposit_client"
FORGE = "forge"
REGISTRY = "registry"
def __repr__(self):
return f"MetadataAuthorityType.{self.name}"
@attr.s(frozen=True, slots=True)
class MetadataAuthority(BaseModel):
"""Represents an entity that provides metadata about an origin or
software artifact."""
object_type: Final = "metadata_authority"
type = attr.ib(type=MetadataAuthorityType, validator=type_validator())
url = attr.ib(type=str, validator=type_validator())
metadata = attr.ib(
type=Optional[ImmutableDict[str, Any]],
default=None,
validator=type_validator(),
converter=freeze_optional_dict,
)
def to_dict(self):
d = super().to_dict()
if d["metadata"] is None:
del d["metadata"]
return d
@classmethod
def from_dict(cls, d):
d = {
**d,
"type": MetadataAuthorityType(d["type"]),
}
return super().from_dict(d)
def unique_key(self) -> KeyType:
return {"type": self.type.value, "url": self.url}
@attr.s(frozen=True, slots=True)
class MetadataFetcher(BaseModel):
"""Represents a software component used to fetch metadata from a metadata
authority, and ingest them into the Software Heritage archive."""
object_type: Final = "metadata_fetcher"
name = attr.ib(type=str, validator=type_validator())
version = attr.ib(type=str, validator=type_validator())
metadata = attr.ib(
type=Optional[ImmutableDict[str, Any]],
default=None,
validator=type_validator(),
converter=freeze_optional_dict,
)
def to_dict(self):
d = super().to_dict()
if d["metadata"] is None:
del d["metadata"]
return d
def unique_key(self) -> KeyType:
return {"name": self.name, "version": self.version}
def normalize_discovery_date(value: Any) -> datetime.datetime:
if not isinstance(value, datetime.datetime):
raise TypeError("discovery_date must be a timezone-aware datetime.")
if value.tzinfo is None:
raise ValueError("discovery_date must be a timezone-aware datetime.")
# Normalize timezone to utc, and truncate microseconds to 0
return value.astimezone(datetime.timezone.utc).replace(microsecond=0)
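# Illustrative sketch of the converter above: timezone-aware datetimes are
# normalized to UTC and truncated to whole seconds.
_tz_plus_2 = datetime.timezone(datetime.timedelta(hours=2))
assert normalize_discovery_date(
    datetime.datetime(2021, 6, 1, 12, 30, 45, 999999, tzinfo=_tz_plus_2)
) == datetime.datetime(2021, 6, 1, 10, 30, 45, tzinfo=datetime.timezone.utc)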
@attr.s(frozen=True, slots=True)
class RawExtrinsicMetadata(HashableObject, BaseModel):
object_type: Final = "raw_extrinsic_metadata"
# target object
target = attr.ib(type=ExtendedSWHID, validator=type_validator())
# source
discovery_date = attr.ib(type=datetime.datetime, converter=normalize_discovery_date)
authority = attr.ib(type=MetadataAuthority, validator=type_validator())
fetcher = attr.ib(type=MetadataFetcher, validator=type_validator())
# the metadata itself
format = attr.ib(type=str, validator=type_validator())
metadata = attr.ib(type=bytes, validator=type_validator())
# context
origin = attr.ib(type=Optional[str], default=None, validator=type_validator())
visit = attr.ib(type=Optional[int], default=None, validator=type_validator())
snapshot = attr.ib(
type=Optional[CoreSWHID], default=None, validator=type_validator()
)
release = attr.ib(
type=Optional[CoreSWHID], default=None, validator=type_validator()
)
revision = attr.ib(
type=Optional[CoreSWHID], default=None, validator=type_validator()
)
path = attr.ib(type=Optional[bytes], default=None, validator=type_validator())
directory = attr.ib(
type=Optional[CoreSWHID], default=None, validator=type_validator()
)
id = attr.ib(type=Sha1Git, validator=type_validator(), default=b"", repr=hash_repr)
def _compute_hash_from_attributes(self) -> bytes:
return _compute_hash_from_manifest(
git_objects.raw_extrinsic_metadata_git_object(self)
)
@origin.validator
def check_origin(self, attribute, value):
if value is None:
return
if self.target.object_type not in (
SwhidExtendedObjectType.SNAPSHOT,
SwhidExtendedObjectType.RELEASE,
SwhidExtendedObjectType.REVISION,
SwhidExtendedObjectType.DIRECTORY,
SwhidExtendedObjectType.CONTENT,
):
raise ValueError(
f"Unexpected 'origin' context for "
f"{self.target.object_type.name.lower()} object: {value}"
)
if value.startswith("swh:"):
# Technically this is valid; but:
# 1. SWHIDs are URIs, not URLs
# 2. if a SWHID gets here, it's very likely to be a mistake
# (and we can remove this check if it turns out there is a
# legitimate use for it).
raise ValueError(f"SWHID used as context origin URL: {value}")
@visit.validator
def check_visit(self, attribute, value):
if value is None:
return
if self.target.object_type not in (
SwhidExtendedObjectType.SNAPSHOT,
SwhidExtendedObjectType.RELEASE,
SwhidExtendedObjectType.REVISION,
SwhidExtendedObjectType.DIRECTORY,
SwhidExtendedObjectType.CONTENT,
):
raise ValueError(
f"Unexpected 'visit' context for "
f"{self.target.object_type.name.lower()} object: {value}"
)
if self.origin is None:
raise ValueError("'origin' context must be set if 'visit' is.")
if value <= 0:
raise ValueError("Nonpositive visit id")
@snapshot.validator
def check_snapshot(self, attribute, value):
if value is None:
return
if self.target.object_type not in (
SwhidExtendedObjectType.RELEASE,
SwhidExtendedObjectType.REVISION,
SwhidExtendedObjectType.DIRECTORY,
SwhidExtendedObjectType.CONTENT,
):
raise ValueError(
f"Unexpected 'snapshot' context for "
f"{self.target.object_type.name.lower()} object: {value}"
)
self._check_swhid(SwhidObjectType.SNAPSHOT, value)
@release.validator
def check_release(self, attribute, value):
if value is None:
return
if self.target.object_type not in (
SwhidExtendedObjectType.REVISION,
SwhidExtendedObjectType.DIRECTORY,
SwhidExtendedObjectType.CONTENT,
):
raise ValueError(
f"Unexpected 'release' context for "
f"{self.target.object_type.name.lower()} object: {value}"
)
self._check_swhid(SwhidObjectType.RELEASE, value)
@revision.validator
def check_revision(self, attribute, value):
if value is None:
return
if self.target.object_type not in (
SwhidExtendedObjectType.DIRECTORY,
SwhidExtendedObjectType.CONTENT,
):
raise ValueError(
f"Unexpected 'revision' context for "
f"{self.target.object_type.name.lower()} object: {value}"
)
self._check_swhid(SwhidObjectType.REVISION, value)
@path.validator
def check_path(self, attribute, value):
if value is None:
return
if self.target.object_type not in (
SwhidExtendedObjectType.DIRECTORY,
SwhidExtendedObjectType.CONTENT,
):
raise ValueError(
f"Unexpected 'path' context for "
f"{self.target.object_type.name.lower()} object: {value}"
)
@directory.validator
def check_directory(self, attribute, value):
if value is None:
return
if self.target.object_type not in (SwhidExtendedObjectType.CONTENT,):
raise ValueError(
f"Unexpected 'directory' context for "
f"{self.target.object_type.name.lower()} object: {value}"
)
self._check_swhid(SwhidObjectType.DIRECTORY, value)
def _check_swhid(self, expected_object_type, swhid):
if isinstance(swhid, str):
raise ValueError(f"Expected SWHID, got a string: {swhid}")
if swhid.object_type != expected_object_type:
raise ValueError(
f"Expected SWHID type '{expected_object_type.name.lower()}', "
f"got '{swhid.object_type.name.lower()}' in {swhid}"
)
def to_dict(self):
d = super().to_dict()
context_keys = (
"origin",
"visit",
"snapshot",
"release",
"revision",
"directory",
"path",
)
for context_key in context_keys:
if d[context_key] is None:
del d[context_key]
return d
@classmethod
def from_dict(cls, d):
d = {
**d,
"target": ExtendedSWHID.from_string(d["target"]),
"authority": MetadataAuthority.from_dict(d["authority"]),
"fetcher": MetadataFetcher.from_dict(d["fetcher"]),
}
swhid_keys = ("snapshot", "release", "revision", "directory")
for swhid_key in swhid_keys:
if d.get(swhid_key):
d[swhid_key] = CoreSWHID.from_string(d[swhid_key])
return super().from_dict(d)
def swhid(self) -> ExtendedSWHID:
"""Returns a SWHID representing this RawExtrinsicMetadata object."""
return ExtendedSWHID(
object_type=SwhidExtendedObjectType.RAW_EXTRINSIC_METADATA,
object_id=self.id,
)
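# Illustrative sketch (authority, fetcher and URLs are made up): metadata
# attached to a content, with an 'origin' context allowed by the validators
# above.
_example_emd = RawExtrinsicMetadata(
    target=ExtendedSWHID.from_string("swh:1:cnt:" + "00" * 20),
    discovery_date=datetime.datetime(2022, 4, 1, tzinfo=datetime.timezone.utc),
    authority=MetadataAuthority(
        type=MetadataAuthorityType.FORGE, url="https://forge.example.org/"
    ),
    fetcher=MetadataFetcher(name="example-loader", version="1.2.3"),
    format="json",
    metadata=b'{"license": "GPL-3.0"}',
    origin="https://forge.example.org/project.git",
)
assert str(_example_emd.swhid()).startswith("swh:1:emd:")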
@attr.s(frozen=True, slots=True)
class ExtID(HashableObject, BaseModel):
object_type: Final = "extid"
extid_type = attr.ib(type=str, validator=type_validator())
extid = attr.ib(type=bytes, validator=type_validator())
target = attr.ib(type=CoreSWHID, validator=type_validator())
extid_version = attr.ib(type=int, validator=type_validator(), default=0)
id = attr.ib(type=Sha1Git, validator=type_validator(), default=b"", repr=hash_repr)
@classmethod
def from_dict(cls, d):
return cls(
extid=d["extid"],
extid_type=d["extid_type"],
target=CoreSWHID.from_string(d["target"]),
extid_version=d.get("extid_version", 0),
)
def _compute_hash_from_attributes(self) -> bytes:
return _compute_hash_from_manifest(git_objects.extid_git_object(self))
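# Illustrative sketch (made-up node id): an ExtID links an identifier from an
# external system to the SWHID of the corresponding archived object.
_example_extid = ExtID(
    extid_type="hg-nodeid",
    extid=b"\x2a" * 20,
    target=CoreSWHID.from_string("swh:1:rev:" + "00" * 20),
)
assert _example_extid.extid_version == 0  # default version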
+
+
+# Note: we need the type ignore stanza here because mypy cannot figure out
+# that all subclasses of BaseModel do have an object_type attribute, even if
+# BaseModel itself does not (because these are Final)
+SWH_MODEL_OBJECT_TYPES: Dict[str, Type[BaseModel]] = {
+ cls.object_type: cls # type: ignore
+ for cls in (
+ Person,
+ Timestamp,
+ TimestampWithTimezone,
+ Origin,
+ OriginVisit,
+ OriginVisitStatus,
+ Snapshot,
+ SnapshotBranch,
+ Release,
+ Revision,
+ Directory,
+ DirectoryEntry,
+ Content,
+ SkippedContent,
+ MetadataAuthority,
+ MetadataFetcher,
+ RawExtrinsicMetadata,
+ ExtID,
+ )
+}
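# Illustrative sketch (payload is made up): the mapping above lets callers
# resolve a model class from an object-type string known only at runtime.
_cls = SWH_MODEL_OBJECT_TYPES["origin"]
_obj = _cls.from_dict({"url": "https://example.org/repo.git"})
assert isinstance(_obj, Origin)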
diff --git a/swh/model/tests/fields/test_compound.py b/swh/model/tests/fields/test_compound.py
index 352bba9..23b05b2 100644
--- a/swh/model/tests/fields/test_compound.py
+++ b/swh/model/tests/fields/test_compound.py
@@ -1,238 +1,242 @@
# Copyright (C) 2015 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import datetime
import unittest
from swh.model.exceptions import NON_FIELD_ERRORS, ValidationError
from swh.model.fields import compound, simple
class ValidateCompound(unittest.TestCase):
def setUp(self):
def validate_always(model):
return True
def validate_never(model):
return False
self.test_model = "test model"
self.test_schema = {
"int": (True, simple.validate_int),
"str": (True, simple.validate_str),
"str2": (True, simple.validate_str),
"datetime": (False, simple.validate_datetime),
NON_FIELD_ERRORS: validate_always,
}
self.test_schema_shortcut = self.test_schema.copy()
self.test_schema_shortcut[NON_FIELD_ERRORS] = validate_never
self.test_schema_field_failed = self.test_schema.copy()
self.test_schema_field_failed["int"] = (
True,
[simple.validate_int, validate_never],
)
self.test_value = {
"str": "value1",
"str2": "value2",
"int": 42,
"datetime": datetime.datetime(
1990, 1, 1, 12, 0, 0, tzinfo=datetime.timezone.utc
),
}
self.test_value_missing = {
"str": "value1",
}
self.test_value_str_error = {
"str": 1984,
"str2": "value2",
"int": 42,
"datetime": datetime.datetime(
1990, 1, 1, 12, 0, 0, tzinfo=datetime.timezone.utc
),
}
self.test_value_missing_keys = {"int"}
self.test_value_wrong_type = 42
self.present_keys = set(self.test_value)
self.missing_keys = {"missingkey1", "missingkey2"}
def test_validate_any_key(self):
self.assertTrue(compound.validate_any_key(self.test_value, self.present_keys))
self.assertTrue(
compound.validate_any_key(
self.test_value, self.present_keys | self.missing_keys
)
)
def test_validate_any_key_missing(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_any_key(self.test_value, self.missing_keys)
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertEqual(exc.code, "missing-alternative-field")
self.assertEqual(
exc.params["missing_fields"], ", ".join(sorted(self.missing_keys))
)
def test_validate_all_keys(self):
self.assertTrue(compound.validate_all_keys(self.test_value, self.present_keys))
def test_validate_all_keys_missing(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_all_keys(self.test_value, self.missing_keys)
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertEqual(exc.code, "missing-mandatory-field")
self.assertEqual(
exc.params["missing_fields"], ", ".join(sorted(self.missing_keys))
)
with self.assertRaises(ValidationError) as cm:
compound.validate_all_keys(
self.test_value, self.present_keys | self.missing_keys
)
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertEqual(exc.code, "missing-mandatory-field")
self.assertEqual(
exc.params["missing_fields"], ", ".join(sorted(self.missing_keys))
)
def test_validate_against_schema(self):
self.assertTrue(
compound.validate_against_schema(
self.test_model, self.test_schema, self.test_value
)
)
def test_validate_against_schema_wrong_type(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_against_schema(
self.test_model, self.test_schema, self.test_value_wrong_type
)
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertEqual(exc.code, "model-unexpected-type")
self.assertEqual(exc.params["model"], self.test_model)
self.assertEqual(
exc.params["type"], self.test_value_wrong_type.__class__.__name__
)
def test_validate_against_schema_mandatory_keys(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_against_schema(
self.test_model, self.test_schema, self.test_value_missing
)
# The exception should be of the form:
# ValidationError({
# 'mandatory_key1': [ValidationError('model-field-mandatory')],
# 'mandatory_key2': [ValidationError('model-field-mandatory')],
# })
exc = cm.exception
self.assertIsInstance(str(exc), str)
for key in self.test_value_missing_keys:
nested_key = exc.error_dict[key]
self.assertIsInstance(nested_key, list)
self.assertEqual(len(nested_key), 1)
nested = nested_key[0]
self.assertIsInstance(nested, ValidationError)
self.assertEqual(nested.code, "model-field-mandatory")
self.assertEqual(nested.params["field"], key)
def test_validate_whole_schema_shortcut_previous_error(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_against_schema(
- self.test_model, self.test_schema_shortcut, self.test_value_missing,
+ self.test_model,
+ self.test_schema_shortcut,
+ self.test_value_missing,
)
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertNotIn(NON_FIELD_ERRORS, exc.error_dict)
def test_validate_whole_schema(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_against_schema(
- self.test_model, self.test_schema_shortcut, self.test_value,
+ self.test_model,
+ self.test_schema_shortcut,
+ self.test_value,
)
# The exception should be of the form:
# ValidationError({
# NON_FIELD_ERRORS: [ValidationError('model-validation-failed')],
# })
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertEqual(set(exc.error_dict.keys()), {NON_FIELD_ERRORS})
non_field_errors = exc.error_dict[NON_FIELD_ERRORS]
self.assertIsInstance(non_field_errors, list)
self.assertEqual(len(non_field_errors), 1)
nested = non_field_errors[0]
self.assertIsInstance(nested, ValidationError)
self.assertEqual(nested.code, "model-validation-failed")
self.assertEqual(nested.params["model"], self.test_model)
self.assertEqual(nested.params["validator"], "validate_never")
def test_validate_against_schema_field_error(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_against_schema(
self.test_model, self.test_schema, self.test_value_str_error
)
# The exception should be of the form:
# ValidationError({
# 'str': [ValidationError('unexpected-type')],
# })
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertEqual(set(exc.error_dict.keys()), {"str"})
str_errors = exc.error_dict["str"]
self.assertIsInstance(str_errors, list)
self.assertEqual(len(str_errors), 1)
nested = str_errors[0]
self.assertIsInstance(nested, ValidationError)
self.assertEqual(nested.code, "unexpected-type")
def test_validate_against_schema_field_failed(self):
with self.assertRaises(ValidationError) as cm:
compound.validate_against_schema(
self.test_model, self.test_schema_field_failed, self.test_value
)
# The exception should be of the form:
# ValidationError({
# 'int': [ValidationError('field-validation-failed')],
# })
exc = cm.exception
self.assertIsInstance(str(exc), str)
self.assertEqual(set(exc.error_dict.keys()), {"int"})
int_errors = exc.error_dict["int"]
self.assertIsInstance(int_errors, list)
self.assertEqual(len(int_errors), 1)
nested = int_errors[0]
self.assertIsInstance(nested, ValidationError)
self.assertEqual(nested.code, "field-validation-failed")
self.assertEqual(nested.params["validator"], "validate_never")
self.assertEqual(nested.params["field"], "int")
diff --git a/swh/model/tests/swh_model_data.py b/swh/model/tests/swh_model_data.py
index 03b9ca7..382b643 100644
--- a/swh/model/tests/swh_model_data.py
+++ b/swh/model/tests/swh_model_data.py
@@ -1,435 +1,475 @@
# Copyright (C) 2019-2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import datetime
from typing import Dict, Sequence
import attr
from swh.model.hashutil import MultiHash, hash_to_bytes
from swh.model.model import (
BaseModel,
Content,
Directory,
DirectoryEntry,
ExtID,
MetadataAuthority,
MetadataAuthorityType,
MetadataFetcher,
ObjectType,
Origin,
OriginVisit,
OriginVisitStatus,
Person,
RawExtrinsicMetadata,
Release,
Revision,
RevisionType,
SkippedContent,
Snapshot,
SnapshotBranch,
TargetType,
Timestamp,
TimestampWithTimezone,
)
from swh.model.swhids import ExtendedSWHID
UTC = datetime.timezone.utc
CONTENTS = [
Content(
length=4,
data=f"foo{i}".encode(),
status="visible",
**MultiHash.from_data(f"foo{i}".encode()).digest(),
)
for i in range(10)
] + [
Content(
length=14,
data=f"forbidden foo{i}".encode(),
status="hidden",
**MultiHash.from_data(f"forbidden foo{i}".encode()).digest(),
)
for i in range(10)
]
SKIPPED_CONTENTS = [
SkippedContent(
length=4,
status="absent",
reason=f"because chr({i}) != '*'",
**MultiHash.from_data(f"bar{i}".encode()).digest(),
)
for i in range(2)
]
duplicate_content1 = Content(
length=4,
sha1=hash_to_bytes("44973274ccef6ab4dfaaf86599792fa9c3fe4689"),
sha1_git=b"another-foo",
blake2s256=b"another-bar",
sha256=b"another-baz",
status="visible",
)
# Craft a sha1 collision
sha1_array = bytearray(duplicate_content1.sha1_git)
sha1_array[0] += 1
duplicate_content2 = attr.evolve(duplicate_content1, sha1_git=bytes(sha1_array))
DUPLICATE_CONTENTS = [duplicate_content1, duplicate_content2]
COMMITTERS = [
Person(fullname=b"foo", name=b"foo", email=b""),
Person(fullname=b"bar", name=b"bar", email=b""),
]
DATES = [
TimestampWithTimezone(
- timestamp=Timestamp(seconds=1234567891, microseconds=0,), offset_bytes=b"+0200",
+ timestamp=Timestamp(
+ seconds=1234567891,
+ microseconds=0,
+ ),
+ offset_bytes=b"+0200",
),
TimestampWithTimezone(
- timestamp=Timestamp(seconds=1234567892, microseconds=0,), offset_bytes=b"+0200",
+ timestamp=Timestamp(
+ seconds=1234567892,
+ microseconds=0,
+ ),
+ offset_bytes=b"+0200",
),
]
REVISIONS = [
Revision(
id=hash_to_bytes("66c7c1cd9673275037140f2abff7b7b11fc9439c"),
message=b"hello",
date=DATES[0],
committer=COMMITTERS[0],
author=COMMITTERS[0],
committer_date=DATES[0],
type=RevisionType.GIT,
directory=b"\x01" * 20,
synthetic=False,
metadata=None,
parents=(
hash_to_bytes("9b918dd063cec85c2bc63cc7f167e29f5894dcbc"),
hash_to_bytes("757f38bdcd8473aaa12df55357f5e2f1a318e672"),
),
),
Revision(
id=hash_to_bytes("c7f96242d73c267adc77c2908e64e0c1cb6a4431"),
message=b"hello again",
date=DATES[1],
committer=COMMITTERS[1],
author=COMMITTERS[1],
committer_date=DATES[1],
type=RevisionType.MERCURIAL,
directory=b"\x02" * 20,
synthetic=False,
metadata=None,
parents=(),
extra_headers=((b"foo", b"bar"),),
),
Revision(
id=hash_to_bytes("51580d63b8dcc0ec73e74994e66896858542840a"),
message=b"hello",
date=DATES[0],
committer=COMMITTERS[0],
author=COMMITTERS[0],
committer_date=DATES[0],
type=RevisionType.GIT,
directory=b"\x01" * 20,
synthetic=False,
metadata=None,
parents=(hash_to_bytes("9b918dd063cec85c2bc63cc7f167e29f5894dcbc"),),
raw_manifest=(
b"commit 207\x00"
b"tree 0101010101010101010101010101010101010101\n"
b"parent 9B918DD063CEC85C2BC63CC7F167E29F5894DCBC" # upper-cased
b"nauthor foo 1234567891 +0200\n"
b"committer foo 1234567891 +0200"
b"\n\nhello"
),
),
]
EXTIDS = [
- ExtID(extid_type="git256", extid=b"\x03" * 32, target=REVISIONS[0].swhid(),),
- ExtID(extid_type="hg", extid=b"\x04" * 20, target=REVISIONS[1].swhid(),),
+ ExtID(
+ extid_type="git256",
+ extid=b"\x03" * 32,
+ target=REVISIONS[0].swhid(),
+ ),
+ ExtID(
+ extid_type="hg",
+ extid=b"\x04" * 20,
+ target=REVISIONS[1].swhid(),
+ ),
ExtID(
extid_type="hg-nodeid",
extid=b"\x05" * 20,
target=REVISIONS[1].swhid(),
extid_version=1,
),
]
RELEASES = [
Release(
id=hash_to_bytes("8059dc4e17fcd0e51ca3bcd6b80f4577d281fd08"),
name=b"v0.0.1",
date=TimestampWithTimezone(
- timestamp=Timestamp(seconds=1234567890, microseconds=0,),
+ timestamp=Timestamp(
+ seconds=1234567890,
+ microseconds=0,
+ ),
offset_bytes=b"+0200",
),
author=COMMITTERS[0],
target_type=ObjectType.REVISION,
target=b"\x04" * 20,
message=b"foo",
synthetic=False,
),
Release(
id=hash_to_bytes("ee4d20e80af850cc0f417d25dc5073792c5010d2"),
name=b"this-is-a/tag/1.0",
date=None,
author=None,
target_type=ObjectType.DIRECTORY,
target=b"\x05" * 20,
message=b"bar",
synthetic=False,
),
Release(
id=hash_to_bytes("1cdd1e87234b6f066d0855a3b5b567638a55d583"),
name=b"v0.0.1",
date=TimestampWithTimezone(
- timestamp=Timestamp(seconds=1234567890, microseconds=0,),
+ timestamp=Timestamp(
+ seconds=1234567890,
+ microseconds=0,
+ ),
offset_bytes=b"+0200",
),
author=COMMITTERS[0],
target_type=ObjectType.REVISION,
target=b"\x04" * 20,
message=b"foo",
synthetic=False,
raw_manifest=(
b"tag 102\x00"
b"object 0404040404040404040404040404040404040404\n"
b"type commit\n"
b"tag v0.0.1\n"
b"tagger foo 1234567890 +200" # missing leading 0 for timezone
b"\n\nfoo"
),
),
]
ORIGINS = [
- Origin(url="https://somewhere.org/den/fox",),
- Origin(url="https://overtherainbow.org/fox/den",),
+ Origin(
+ url="https://somewhere.org/den/fox",
+ ),
+ Origin(
+ url="https://overtherainbow.org/fox/den",
+ ),
]
ORIGIN_VISITS = [
OriginVisit(
origin=ORIGINS[0].url,
date=datetime.datetime(2013, 5, 7, 4, 20, 39, 369271, tzinfo=UTC),
visit=1,
type="git",
),
OriginVisit(
origin=ORIGINS[1].url,
date=datetime.datetime(2014, 11, 27, 17, 20, 39, tzinfo=UTC),
visit=1,
type="hg",
),
OriginVisit(
origin=ORIGINS[0].url,
date=datetime.datetime(2018, 11, 27, 17, 20, 39, tzinfo=UTC),
visit=2,
type="git",
),
OriginVisit(
origin=ORIGINS[0].url,
date=datetime.datetime(2018, 11, 27, 17, 20, 39, tzinfo=UTC),
visit=3,
type="git",
),
OriginVisit(
origin=ORIGINS[1].url,
date=datetime.datetime(2015, 11, 27, 17, 20, 39, tzinfo=UTC),
visit=2,
type="hg",
),
]
# The origin-visit-status dates need to be shifted slightly into the future
# compared to their origin-visit counterparts. Otherwise, on the storage side,
# we hit the "on conflict ignore" policy, because origin-visit-add creates an
# origin-visit-status with the same {origin, visit, date} parameters as the
# origin-visit.
ORIGIN_VISIT_STATUSES = [
OriginVisitStatus(
origin=ORIGINS[0].url,
date=datetime.datetime(2013, 5, 7, 4, 20, 39, 432222, tzinfo=UTC),
visit=1,
type="git",
status="ongoing",
snapshot=None,
metadata=None,
),
OriginVisitStatus(
origin=ORIGINS[1].url,
date=datetime.datetime(2014, 11, 27, 17, 21, 12, tzinfo=UTC),
visit=1,
type="hg",
status="ongoing",
snapshot=None,
metadata=None,
),
OriginVisitStatus(
origin=ORIGINS[0].url,
date=datetime.datetime(2018, 11, 27, 17, 20, 59, tzinfo=UTC),
visit=2,
type="git",
status="ongoing",
snapshot=None,
metadata=None,
),
OriginVisitStatus(
origin=ORIGINS[0].url,
date=datetime.datetime(2018, 11, 27, 17, 20, 49, tzinfo=UTC),
visit=3,
type="git",
status="full",
snapshot=hash_to_bytes("9e78d7105c5e0f886487511e2a92377b4ee4c32a"),
metadata=None,
),
OriginVisitStatus(
origin=ORIGINS[1].url,
date=datetime.datetime(2015, 11, 27, 17, 22, 18, tzinfo=UTC),
visit=2,
type="hg",
status="partial",
snapshot=hash_to_bytes("0e7f84ede9a254f2cd55649ad5240783f557e65f"),
metadata=None,
),
]
DIRECTORIES = [
Directory(id=hash_to_bytes("4b825dc642cb6eb9a060e54bf8d69288fbee4904"), entries=()),
Directory(
id=hash_to_bytes("87b339104f7dc2a8163dec988445e3987995545f"),
entries=(
DirectoryEntry(
name=b"file1.ext",
perms=0o644,
type="file",
target=CONTENTS[0].sha1_git,
),
DirectoryEntry(
name=b"dir1",
perms=0o755,
type="dir",
target=hash_to_bytes("4b825dc642cb6eb9a060e54bf8d69288fbee4904"),
),
DirectoryEntry(
- name=b"subprepo1", perms=0o160000, type="rev", target=REVISIONS[1].id,
+ name=b"subprepo1",
+ perms=0o160000,
+ type="rev",
+ target=REVISIONS[1].id,
),
),
),
Directory(
id=hash_to_bytes("d135a91ac82a754e7f4bdeff8d56ef06d921eb7d"),
entries=(
DirectoryEntry(
- name=b"file1.ext", perms=0o644, type="file", target=b"\x11" * 20,
+ name=b"file1.ext",
+ perms=0o644,
+ type="file",
+ target=b"\x11" * 20,
),
),
raw_manifest=(
b"tree 34\x00"
+ b"00644 file1.ext\x00" # added two leading zeros
+ b"\x11" * 20
),
),
]
SNAPSHOTS = [
Snapshot(
id=hash_to_bytes("9e78d7105c5e0f886487511e2a92377b4ee4c32a"),
branches={
b"master": SnapshotBranch(
target_type=TargetType.REVISION, target=REVISIONS[0].id
)
},
),
Snapshot(
id=hash_to_bytes("0e7f84ede9a254f2cd55649ad5240783f557e65f"),
branches={
b"target/revision": SnapshotBranch(
- target_type=TargetType.REVISION, target=REVISIONS[0].id,
+ target_type=TargetType.REVISION,
+ target=REVISIONS[0].id,
),
b"target/alias": SnapshotBranch(
target_type=TargetType.ALIAS, target=b"target/revision"
),
b"target/directory": SnapshotBranch(
- target_type=TargetType.DIRECTORY, target=DIRECTORIES[0].id,
+ target_type=TargetType.DIRECTORY,
+ target=DIRECTORIES[0].id,
),
b"target/release": SnapshotBranch(
target_type=TargetType.RELEASE, target=RELEASES[0].id
),
b"target/snapshot": SnapshotBranch(
target_type=TargetType.SNAPSHOT,
target=hash_to_bytes("9e78d7105c5e0f886487511e2a92377b4ee4c32a"),
),
},
),
]
METADATA_AUTHORITIES = [
MetadataAuthority(
- type=MetadataAuthorityType.FORGE, url="http://example.org/", metadata={},
+ type=MetadataAuthorityType.FORGE,
+ url="http://example.org/",
+ metadata={},
),
]
METADATA_FETCHERS = [
- MetadataFetcher(name="test-fetcher", version="1.0.0", metadata={},)
+ MetadataFetcher(
+ name="test-fetcher",
+ version="1.0.0",
+ metadata={},
+ )
]
RAW_EXTRINSIC_METADATA = [
RawExtrinsicMetadata(
target=Origin("http://example.org/foo.git").swhid(),
discovery_date=datetime.datetime(2020, 7, 30, 17, 8, 20, tzinfo=UTC),
authority=attr.evolve(METADATA_AUTHORITIES[0], metadata=None),
fetcher=attr.evolve(METADATA_FETCHERS[0], metadata=None),
format="json",
metadata=b'{"foo": "bar"}',
),
RawExtrinsicMetadata(
target=ExtendedSWHID.from_string(str(CONTENTS[0].swhid())),
discovery_date=datetime.datetime(2020, 7, 30, 17, 8, 20, tzinfo=UTC),
authority=attr.evolve(METADATA_AUTHORITIES[0], metadata=None),
fetcher=attr.evolve(METADATA_FETCHERS[0], metadata=None),
format="json",
metadata=b'{"foo": "bar"}',
),
]
TEST_OBJECTS: Dict[str, Sequence[BaseModel]] = {
"content": CONTENTS,
"directory": DIRECTORIES,
"extid": EXTIDS,
"metadata_authority": METADATA_AUTHORITIES,
"metadata_fetcher": METADATA_FETCHERS,
"origin": ORIGINS,
"origin_visit": ORIGIN_VISITS,
"origin_visit_status": ORIGIN_VISIT_STATUSES,
"raw_extrinsic_metadata": RAW_EXTRINSIC_METADATA,
"release": RELEASES,
"revision": REVISIONS,
"snapshot": SNAPSHOTS,
"skipped_content": SKIPPED_CONTENTS,
}
SAMPLE_FOLDER_SWHIDS = [
"swh:1:dir:e8b0f1466af8608c8a3fb9879db172b887e80759",
"swh:1:cnt:7d5c08111e21c8a9f71540939998551683375fad",
"swh:1:cnt:68769579c3eaadbe555379b9c3538e6628bae1eb",
"swh:1:cnt:e86b45e538d9b6888c969c89fbd22a85aa0e0366",
"swh:1:dir:3c1f578394f4623f74a0ba7fe761729f59fc6ec4",
"swh:1:dir:c3020f6bf135a38c6df3afeb5fb38232c5e07087",
"swh:1:cnt:133693b125bad2b4ac318535b84901ebb1f6b638",
"swh:1:dir:4b825dc642cb6eb9a060e54bf8d69288fbee4904",
"swh:1:cnt:19102815663d23f8b75a47e7a01965dcdc96468c",
"swh:1:dir:2b41c40f0d1fbffcba12497db71fba83fcca96e5",
"swh:1:cnt:8185dfb2c0c2c597d16f75a8a0c37668567c3d7e",
"swh:1:cnt:7c4c57ba9ff496ad179b8f65b1d286edbda34c9a",
"swh:1:cnt:acac326ddd63b0bc70840659d4ac43619484e69f",
]
diff --git a/swh/model/tests/test_cli.py b/swh/model/tests/test_cli.py
index eeb5a63..ad42349 100644
--- a/swh/model/tests/test_cli.py
+++ b/swh/model/tests/test_cli.py
@@ -1,211 +1,213 @@
# Copyright (C) 2018-2019 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import os
import sys
import tarfile
import tempfile
import unittest
import unittest.mock
from click.testing import CliRunner
import pytest
from swh.model import cli
from swh.model.hashutil import hash_to_hex
from swh.model.tests.swh_model_data import SAMPLE_FOLDER_SWHIDS
from swh.model.tests.test_from_disk import DataMixin
@pytest.mark.fs
class TestIdentify(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
self.runner = CliRunner()
def assertSWHID(self, result, swhid):
self.assertEqual(result.exit_code, 0, result.output)
self.assertEqual(result.output.split()[0], swhid)
def test_no_args(self):
result = self.runner.invoke(cli.identify)
self.assertNotEqual(result.exit_code, 0)
def test_content_id(self):
"""identify file content"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(cli.identify, ["--type", "content", path])
self.assertSWHID(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
def test_content_id_from_stdin(self):
"""identify file content"""
self.make_contents(self.tmpdir_name)
for _, content in self.contents.items():
result = self.runner.invoke(cli.identify, ["-"], input=content["data"])
self.assertSWHID(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
def test_directory_id(self):
"""identify an entire directory"""
self.make_from_tarball(self.tmpdir_name)
path = os.path.join(self.tmpdir_name, b"sample-folder")
result = self.runner.invoke(cli.identify, ["--type", "directory", path])
self.assertSWHID(result, "swh:1:dir:e8b0f1466af8608c8a3fb9879db172b887e80759")
@pytest.mark.requires_optional_deps
def test_snapshot_id(self):
"""identify a snapshot"""
tarball = os.path.join(
os.path.dirname(__file__), "data", "repos", "sample-repo.tgz"
)
with tempfile.TemporaryDirectory(prefix="swh.model.cli") as d:
with tarfile.open(tarball, "r:gz") as t:
t.extractall(d)
repo_dir = os.path.join(d, "sample-repo")
result = self.runner.invoke(
cli.identify, ["--type", "snapshot", repo_dir]
)
self.assertSWHID(
result, "swh:1:snp:abc888898124270905a0ef3c67e872ce08e7e0c1"
)
def test_snapshot_without_dulwich(self):
"""checks swh-identify returns a 'nice' message instead of a traceback
when dulwich is not installed"""
with unittest.mock.patch.dict(sys.modules, {"dulwich": None}):
with tempfile.TemporaryDirectory(prefix="swh.model.cli") as d:
result = self.runner.invoke(
- cli.identify, ["--type", "snapshot", d], catch_exceptions=False,
+ cli.identify,
+ ["--type", "snapshot", d],
+ catch_exceptions=False,
)
assert result.exit_code == 1
assert "'swh.model[cli]'" in result.output
def test_origin_id(self):
"""identify an origin URL"""
url = "https://github.com/torvalds/linux"
result = self.runner.invoke(cli.identify, ["--type", "origin", url])
self.assertSWHID(result, "swh:1:ori:b63a575fe3faab7692c9f38fb09d4bb45651bb0f")
def test_symlink(self):
"""identify symlink --- both itself and target"""
regular = os.path.join(self.tmpdir_name, b"foo.txt")
link = os.path.join(self.tmpdir_name, b"bar.txt")
open(regular, "w").write("foo\n")
os.symlink(os.path.basename(regular), link)
result = self.runner.invoke(cli.identify, [link])
self.assertSWHID(result, "swh:1:cnt:257cc5642cb1a054f08cc83f2d943e56fd3ebe99")
result = self.runner.invoke(cli.identify, ["--no-dereference", link])
self.assertSWHID(result, "swh:1:cnt:996f1789ff67c0e3f69ef5933a55d54c5d0e9954")
def test_show_filename(self):
"""filename is shown by default"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(cli.identify, ["--type", "content", path])
self.assertEqual(result.exit_code, 0)
self.assertEqual(
result.output.rstrip(),
"swh:1:cnt:%s\t%s" % (hash_to_hex(content["sha1_git"]), path.decode()),
)
def test_hide_filename(self):
"""filename is hidden upon request"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(
cli.identify, ["--type", "content", "--no-filename", path]
)
self.assertSWHID(result, "swh:1:cnt:" + hash_to_hex(content["sha1_git"]))
def test_auto_content(self):
"""automatic object type detection: content"""
with tempfile.NamedTemporaryFile(prefix="swh.model.cli") as f:
result = self.runner.invoke(cli.identify, [f.name])
self.assertEqual(result.exit_code, 0)
self.assertRegex(result.output, r"^swh:\d+:cnt:")
def test_auto_directory(self):
"""automatic object type detection: directory"""
with tempfile.TemporaryDirectory(prefix="swh.model.cli") as dirname:
result = self.runner.invoke(cli.identify, [dirname])
self.assertEqual(result.exit_code, 0)
self.assertRegex(result.output, r"^swh:\d+:dir:")
def test_auto_origin(self):
"""automatic object type detection: origin"""
result = self.runner.invoke(cli.identify, ["https://github.com/torvalds/linux"])
self.assertEqual(result.exit_code, 0, result.output)
self.assertRegex(result.output, r"^swh:\d+:ori:")
def test_verify_content(self):
"""identifier verification"""
self.make_contents(self.tmpdir_name)
for filename, content in self.contents.items():
expected_id = "swh:1:cnt:" + hash_to_hex(content["sha1_git"])
# match
path = os.path.join(self.tmpdir_name, filename)
result = self.runner.invoke(cli.identify, ["--verify", expected_id, path])
self.assertEqual(result.exit_code, 0, result.output)
# mismatch
with open(path, "a") as f:
f.write("trailing garbage to make verification fail")
result = self.runner.invoke(cli.identify, ["--verify", expected_id, path])
self.assertEqual(result.exit_code, 1)
def test_exclude(self):
"""exclude patterns"""
self.make_from_tarball(self.tmpdir_name)
path = os.path.join(self.tmpdir_name, b"sample-folder")
excluded_dir = os.path.join(path, b"excluded_dir\x96")
os.mkdir(excluded_dir)
with open(os.path.join(excluded_dir, b"some_file"), "w") as f:
f.write("content")
result = self.runner.invoke(
cli.identify, ["--type", "directory", "--exclude", "excluded_*", path]
)
self.assertSWHID(result, "swh:1:dir:e8b0f1466af8608c8a3fb9879db172b887e80759")
def test_recursive_directory(self):
self.make_from_tarball(self.tmpdir_name)
path = os.path.join(self.tmpdir_name, b"sample-folder")
result = self.runner.invoke(cli.identify, ["--recursive", path])
self.assertEqual(result.exit_code, 0, result.output)
result = result.output.split()
result_swhids = []
        # get all SWHIDs from the result (output tokens alternate SWHID, filename)
for i in range(0, len(result)):
if i % 2 == 0:
result_swhids.append(result[i])
assert len(result_swhids) == len(SAMPLE_FOLDER_SWHIDS)
for swhid in SAMPLE_FOLDER_SWHIDS:
assert swhid in result_swhids
def test_recursive_directory_no_filename(self):
self.make_from_tarball(self.tmpdir_name)
path = os.path.join(self.tmpdir_name, b"sample-folder")
result = self.runner.invoke(
cli.identify, ["--recursive", "--no-filename", path]
)
self.assertEqual(result.exit_code, 0, result.output)
result_swhids = result.output.split()
assert len(result_swhids) == len(SAMPLE_FOLDER_SWHIDS)
for swhid in SAMPLE_FOLDER_SWHIDS:
assert swhid in result_swhids
diff --git a/swh/model/tests/test_from_disk.py b/swh/model/tests/test_from_disk.py
index fa3d8b6..b7674d4 100644
--- a/swh/model/tests/test_from_disk.py
+++ b/swh/model/tests/test_from_disk.py
@@ -1,996 +1,1001 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
from collections import defaultdict
import os
import tarfile
import tempfile
from typing import ClassVar, Optional
import unittest
import pytest
from swh.model import from_disk, model
from swh.model.from_disk import Content, DentryPerms, Directory, DiskBackedContent
from swh.model.hashutil import DEFAULT_ALGORITHMS, hash_to_bytes, hash_to_hex
TEST_DATA = os.path.join(os.path.dirname(__file__), "data")
class ModeToPerms(unittest.TestCase):
def setUp(self):
super().setUp()
# Generate a full permissions map
self.perms_map = {}
# Symlinks
for i in range(0o120000, 0o127777 + 1):
self.perms_map[i] = DentryPerms.symlink
# Directories
for i in range(0o040000, 0o047777 + 1):
self.perms_map[i] = DentryPerms.directory
# Other file types: socket, regular file, block device, character
# device, fifo all map to regular files
for ft in [0o140000, 0o100000, 0o060000, 0o020000, 0o010000]:
for i in range(ft, ft + 0o7777 + 1):
if i & 0o111:
# executable bits are set
self.perms_map[i] = DentryPerms.executable_content
else:
self.perms_map[i] = DentryPerms.content
def test_exhaustive_mode_to_perms(self):
for fmode, perm in self.perms_map.items():
self.assertEqual(perm, from_disk.mode_to_perms(fmode))
class TestDiskBackedContent(unittest.TestCase):
def test_with_data(self):
expected_content = model.Content(
length=42,
status="visible",
data=b"foo bar",
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
with tempfile.NamedTemporaryFile(mode="w+b") as fd:
content = DiskBackedContent(
length=42,
status="visible",
path=fd.name,
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
fd.write(b"foo bar")
fd.seek(0)
content_with_data = content.with_data()
assert expected_content == content_with_data
def test_lazy_data(self):
with tempfile.NamedTemporaryFile(mode="w+b") as fd:
fd.write(b"foo")
fd.seek(0)
content = DiskBackedContent(
length=42,
status="visible",
path=fd.name,
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
fd.write(b"bar")
fd.seek(0)
content_with_data = content.with_data()
fd.write(b"baz")
fd.seek(0)
assert content_with_data.data == b"bar"
def test_with_data_cannot_read(self):
with tempfile.NamedTemporaryFile(mode="w+b") as fd:
content = DiskBackedContent(
length=42,
status="visible",
path=fd.name,
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
with pytest.raises(OSError):
content.with_data()
def test_missing_path(self):
with pytest.raises(TypeError):
DiskBackedContent(
length=42,
status="visible",
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
with pytest.raises(TypeError):
DiskBackedContent(
length=42,
status="visible",
path=None,
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
class DataMixin:
maxDiff = None # type: ClassVar[Optional[int]]
def setUp(self):
self.tmpdir = tempfile.TemporaryDirectory(prefix="swh.model.from_disk")
self.tmpdir_name = os.fsencode(self.tmpdir.name)
self.contents = {
b"file": {
"data": b"42\n",
"sha1": hash_to_bytes("34973274ccef6ab4dfaaf86599792fa9c3fe4689"),
"sha256": hash_to_bytes(
"084c799cd551dd1d8d5c5f9a5d593b2e"
"931f5e36122ee5c793c1d08a19839cc0"
),
"sha1_git": hash_to_bytes("d81cc0710eb6cf9efd5b920a8453e1e07157b6cd"),
"blake2s256": hash_to_bytes(
"d5fe1939576527e42cfd76a9455a2432"
"fe7f56669564577dd93c4280e76d661d"
),
"length": 3,
"mode": 0o100644,
},
}
self.symlinks = {
b"symlink": {
"data": b"target",
"blake2s256": hash_to_bytes(
"595d221b30fdd8e10e2fdf18376e688e"
"9f18d56fd9b6d1eb6a822f8c146c6da6"
),
"sha1": hash_to_bytes("0e8a3ad980ec179856012b7eecf4327e99cd44cd"),
"sha1_git": hash_to_bytes("1de565933b05f74c75ff9a6520af5f9f8a5a2f1d"),
"sha256": hash_to_bytes(
"34a04005bcaf206eec990bd9637d9fdb"
"6725e0a0c0d4aebf003f17f4c956eb5c"
),
"length": 6,
"perms": DentryPerms.symlink,
}
}
self.specials = {
b"fifo": os.mkfifo,
}
self.empty_content = {
"data": b"",
"length": 0,
"blake2s256": hash_to_bytes(
"69217a3079908094e11121d042354a7c" "1f55b6482ca1a51e1b250dfd1ed0eef9"
),
"sha1": hash_to_bytes("da39a3ee5e6b4b0d3255bfef95601890afd80709"),
"sha1_git": hash_to_bytes("e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"),
"sha256": hash_to_bytes(
"e3b0c44298fc1c149afbf4c8996fb924" "27ae41e4649b934ca495991b7852b855"
),
"perms": DentryPerms.content,
}
self.empty_directory = {
"id": hash_to_bytes("4b825dc642cb6eb9a060e54bf8d69288fbee4904"),
"entries": [],
}
# Generated with generate_testdata_from_disk
self.tarball_contents = {
b"": {
"entries": [
{
"name": b"bar",
"perms": DentryPerms.directory,
"target": hash_to_bytes(
"3c1f578394f4623f74a0ba7fe761729f59fc6ec4"
),
"type": "dir",
},
{
"name": b"empty-folder",
"perms": DentryPerms.directory,
"target": hash_to_bytes(
"4b825dc642cb6eb9a060e54bf8d69288fbee4904"
),
"type": "dir",
},
{
"name": b"foo",
"perms": DentryPerms.directory,
"target": hash_to_bytes(
"2b41c40f0d1fbffcba12497db71fba83fcca96e5"
),
"type": "dir",
},
{
"name": b"link-to-another-quote",
"perms": DentryPerms.symlink,
"target": hash_to_bytes(
"7d5c08111e21c8a9f71540939998551683375fad"
),
"type": "file",
},
{
"name": b"link-to-binary",
"perms": DentryPerms.symlink,
"target": hash_to_bytes(
"e86b45e538d9b6888c969c89fbd22a85aa0e0366"
),
"type": "file",
},
{
"name": b"link-to-foo",
"perms": DentryPerms.symlink,
"target": hash_to_bytes(
"19102815663d23f8b75a47e7a01965dcdc96468c"
),
"type": "file",
},
{
"name": b"some-binary",
"perms": DentryPerms.executable_content,
"target": hash_to_bytes(
"68769579c3eaadbe555379b9c3538e6628bae1eb"
),
"type": "file",
},
],
"id": hash_to_bytes("e8b0f1466af8608c8a3fb9879db172b887e80759"),
},
b"bar": {
"entries": [
{
"name": b"barfoo",
"perms": DentryPerms.directory,
"target": hash_to_bytes(
"c3020f6bf135a38c6df3afeb5fb38232c5e07087"
),
"type": "dir",
}
],
"id": hash_to_bytes("3c1f578394f4623f74a0ba7fe761729f59fc6ec4"),
},
b"bar/barfoo": {
"entries": [
{
"name": b"another-quote.org",
"perms": DentryPerms.content,
"target": hash_to_bytes(
"133693b125bad2b4ac318535b84901ebb1f6b638"
),
"type": "file",
}
],
"id": hash_to_bytes("c3020f6bf135a38c6df3afeb5fb38232c5e07087"),
},
b"bar/barfoo/another-quote.org": {
"blake2s256": hash_to_bytes(
"d26c1cad82d43df0bffa5e7be11a60e3"
"4adb85a218b433cbce5278b10b954fe8"
),
"length": 72,
"perms": DentryPerms.content,
"sha1": hash_to_bytes("90a6138ba59915261e179948386aa1cc2aa9220a"),
"sha1_git": hash_to_bytes("133693b125bad2b4ac318535b84901ebb1f6b638"),
"sha256": hash_to_bytes(
"3db5ae168055bcd93a4d08285dc99ffe"
"e2883303b23fac5eab850273a8ea5546"
),
},
b"empty-folder": {
"entries": [],
"id": hash_to_bytes("4b825dc642cb6eb9a060e54bf8d69288fbee4904"),
},
b"foo": {
"entries": [
{
"name": b"barfoo",
"perms": DentryPerms.symlink,
"target": hash_to_bytes(
"8185dfb2c0c2c597d16f75a8a0c37668567c3d7e"
),
"type": "file",
},
{
"name": b"quotes.md",
"perms": DentryPerms.content,
"target": hash_to_bytes(
"7c4c57ba9ff496ad179b8f65b1d286edbda34c9a"
),
"type": "file",
},
{
"name": b"rel-link-to-barfoo",
"perms": DentryPerms.symlink,
"target": hash_to_bytes(
"acac326ddd63b0bc70840659d4ac43619484e69f"
),
"type": "file",
},
],
"id": hash_to_bytes("2b41c40f0d1fbffcba12497db71fba83fcca96e5"),
},
b"foo/barfoo": {
"blake2s256": hash_to_bytes(
"e1252f2caa4a72653c4efd9af871b62b"
"f2abb7bb2f1b0e95969204bd8a70d4cd"
),
"data": b"bar/barfoo",
"length": 10,
"perms": DentryPerms.symlink,
"sha1": hash_to_bytes("9057ee6d0162506e01c4d9d5459a7add1fedac37"),
"sha1_git": hash_to_bytes("8185dfb2c0c2c597d16f75a8a0c37668567c3d7e"),
"sha256": hash_to_bytes(
"29ad3f5725321b940332c78e403601af"
"ff61daea85e9c80b4a7063b6887ead68"
),
},
b"foo/quotes.md": {
"blake2s256": hash_to_bytes(
"bf7ce4fe304378651ee6348d3e9336ed"
"5ad603d33e83c83ba4e14b46f9b8a80b"
),
"length": 66,
"perms": DentryPerms.content,
"sha1": hash_to_bytes("1bf0bb721ac92c18a19b13c0eb3d741cbfadebfc"),
"sha1_git": hash_to_bytes("7c4c57ba9ff496ad179b8f65b1d286edbda34c9a"),
"sha256": hash_to_bytes(
"caca942aeda7b308859eb56f909ec96d"
"07a499491690c453f73b9800a93b1659"
),
},
b"foo/rel-link-to-barfoo": {
"blake2s256": hash_to_bytes(
"d9c327421588a1cf61f316615005a2e9"
"c13ac3a4e96d43a24138d718fa0e30db"
),
"data": b"../bar/barfoo",
"length": 13,
"perms": DentryPerms.symlink,
"sha1": hash_to_bytes("dc51221d308f3aeb2754db48391b85687c2869f4"),
"sha1_git": hash_to_bytes("acac326ddd63b0bc70840659d4ac43619484e69f"),
"sha256": hash_to_bytes(
"8007d20db2af40435f42ddef4b8ad76b"
"80adbec26b249fdf0473353f8d99df08"
),
},
b"link-to-another-quote": {
"blake2s256": hash_to_bytes(
"2d0e73cea01ba949c1022dc10c8a43e6"
"6180639662e5dc2737b843382f7b1910"
),
"data": b"bar/barfoo/another-quote.org",
"length": 28,
"perms": DentryPerms.symlink,
"sha1": hash_to_bytes("cbeed15e79599c90de7383f420fed7acb48ea171"),
"sha1_git": hash_to_bytes("7d5c08111e21c8a9f71540939998551683375fad"),
"sha256": hash_to_bytes(
"e6e17d0793aa750a0440eb9ad5b80b25"
"8076637ef0fb68f3ac2e59e4b9ac3ba6"
),
},
b"link-to-binary": {
"blake2s256": hash_to_bytes(
"9ce18b1adecb33f891ca36664da676e1"
"2c772cc193778aac9a137b8dc5834b9b"
),
"data": b"some-binary",
"length": 11,
"perms": DentryPerms.symlink,
"sha1": hash_to_bytes("d0248714948b3a48a25438232a6f99f0318f59f1"),
"sha1_git": hash_to_bytes("e86b45e538d9b6888c969c89fbd22a85aa0e0366"),
"sha256": hash_to_bytes(
"14126e97d83f7d261c5a6889cee73619"
"770ff09e40c5498685aba745be882eff"
),
},
b"link-to-foo": {
"blake2s256": hash_to_bytes(
"08d6cad88075de8f192db097573d0e82"
"9411cd91eb6ec65e8fc16c017edfdb74"
),
"data": b"foo",
"length": 3,
"perms": DentryPerms.symlink,
"sha1": hash_to_bytes("0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"),
"sha1_git": hash_to_bytes("19102815663d23f8b75a47e7a01965dcdc96468c"),
"sha256": hash_to_bytes(
"2c26b46b68ffc68ff99b453c1d304134"
"13422d706483bfa0f98a5e886266e7ae"
),
},
b"some-binary": {
"blake2s256": hash_to_bytes(
"922e0f7015035212495b090c27577357"
"a740ddd77b0b9e0cd23b5480c07a18c6"
),
"length": 5,
"perms": DentryPerms.executable_content,
"sha1": hash_to_bytes("0bbc12d7f4a2a15b143da84617d95cb223c9b23c"),
"sha1_git": hash_to_bytes("68769579c3eaadbe555379b9c3538e6628bae1eb"),
"sha256": hash_to_bytes(
"bac650d34a7638bb0aeb5342646d24e3"
"b9ad6b44c9b383621faa482b990a367d"
),
},
}
def tearDown(self):
self.tmpdir.cleanup()
def assertContentEqual(self, left, right, *, check_path=False): # noqa
if not isinstance(left, Content):
raise ValueError("%s is not a Content" % left)
if isinstance(right, Content):
right = right.get_data()
# Compare dictionaries
keys = DEFAULT_ALGORITHMS | {
"length",
"perms",
}
if check_path:
keys |= {"path"}
failed = []
for key in keys:
try:
lvalue = left.data[key]
if key == "perms" and "perms" not in right:
rvalue = from_disk.mode_to_perms(right["mode"])
else:
rvalue = right[key]
except KeyError:
failed.append(key)
continue
if lvalue != rvalue:
failed.append(key)
if failed:
raise self.failureException(
"Content mismatched:\n"
+ "\n".join(
"content[%s] = %r != %r" % (key, left.data.get(key), right.get(key))
for key in failed
)
)
def assertDirectoryEqual(self, left, right): # NoQA
if not isinstance(left, Directory):
raise ValueError("%s is not a Directory" % left)
if isinstance(right, Directory):
right = right.get_data()
assert left.entries == right["entries"]
assert left.hash == right["id"]
assert left.to_model() == model.Directory.from_dict(right)
def make_contents(self, directory):
for filename, content in self.contents.items():
path = os.path.join(directory, filename)
with open(path, "wb") as f:
f.write(content["data"])
os.chmod(path, content["mode"])
def make_symlinks(self, directory):
for filename, symlink in self.symlinks.items():
path = os.path.join(directory, filename)
os.symlink(symlink["data"], path)
def make_specials(self, directory):
for filename, fn in self.specials.items():
path = os.path.join(directory, filename)
fn(path)
def make_from_tarball(self, directory):
tarball = os.path.join(TEST_DATA, "dir-folders", "sample-folder.tgz")
with tarfile.open(tarball, "r:gz") as f:
f.extractall(os.fsdecode(directory))
class TestContent(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
def test_data_to_content(self):
for filename, content in self.contents.items():
conv_content = Content.from_bytes(
mode=content["mode"], data=content["data"]
)
self.assertContentEqual(conv_content, content)
self.assertIn(hash_to_hex(conv_content.hash), repr(conv_content))
def test_content_swhid(self):
for _, content in self.contents.items():
content_res = Content.from_bytes(mode=content["mode"], data=content["data"])
content_swhid = "swh:1:cnt:" + hash_to_hex(content["sha1_git"])
assert str(content_res.swhid()) == content_swhid
class TestDirectory(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
def test_directory_swhid(self):
directory_swhid = "swh:1:dir:" + hash_to_hex(self.empty_directory["id"])
directory = Directory.from_disk(path=self.tmpdir_name)
assert str(directory.swhid()) == directory_swhid
class SymlinkToContent(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
self.make_symlinks(self.tmpdir_name)
def test_symlink_to_content(self):
for filename, symlink in self.symlinks.items():
path = os.path.join(self.tmpdir_name, filename)
perms = 0o120000
conv_content = Content.from_symlink(path=path, mode=perms)
self.assertContentEqual(conv_content, symlink)
def test_symlink_to_base_model(self):
for filename, symlink in self.symlinks.items():
path = os.path.join(self.tmpdir_name, filename)
perms = 0o120000
model_content = Content.from_symlink(path=path, mode=perms).to_model()
right = symlink.copy()
for key in ("perms", "path", "mode"):
right.pop(key, None)
right["status"] = "visible"
assert model_content == model.Content.from_dict(right)
class FileToContent(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
self.make_contents(self.tmpdir_name)
self.make_symlinks(self.tmpdir_name)
self.make_specials(self.tmpdir_name)
def test_symlink_to_content(self):
for filename, symlink in self.symlinks.items():
path = os.path.join(self.tmpdir_name, filename)
conv_content = Content.from_file(path=path)
self.assertContentEqual(conv_content, symlink)
def test_file_to_content(self):
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
conv_content = Content.from_file(path=path)
self.assertContentEqual(conv_content, content)
def test_special_to_content(self):
for filename in self.specials:
path = os.path.join(self.tmpdir_name, filename)
conv_content = Content.from_file(path=path)
self.assertContentEqual(conv_content, self.empty_content)
for path in ["/dev/null", "/dev/zero"]:
conv_content = Content.from_file(path=path)
self.assertContentEqual(conv_content, self.empty_content)
def test_symlink_to_content_model(self):
for filename, symlink in self.symlinks.items():
path = os.path.join(self.tmpdir_name, filename)
model_content = Content.from_file(path=path).to_model()
right = symlink.copy()
for key in ("perms", "path", "mode"):
right.pop(key, None)
right["status"] = "visible"
assert model_content == model.Content.from_dict(right)
def test_file_to_content_model(self):
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
model_content = Content.from_file(path=path).to_model()
right = content.copy()
for key in ("perms", "mode"):
right.pop(key, None)
assert model_content.with_data() == model.Content.from_dict(right)
right["path"] = path
del right["data"]
assert model_content == DiskBackedContent.from_dict(right)
def test_special_to_content_model(self):
for filename in self.specials:
path = os.path.join(self.tmpdir_name, filename)
model_content = Content.from_file(path=path).to_model()
right = self.empty_content.copy()
for key in ("perms", "path", "mode"):
right.pop(key, None)
right["status"] = "visible"
assert model_content == model.Content.from_dict(right)
for path in ["/dev/null", "/dev/zero"]:
model_content = Content.from_file(path=path).to_model()
right = self.empty_content.copy()
for key in ("perms", "path", "mode"):
right.pop(key, None)
right["status"] = "visible"
assert model_content == model.Content.from_dict(right)
def test_symlink_max_length(self):
for max_content_length in [4, 10]:
for filename, symlink in self.symlinks.items():
path = os.path.join(self.tmpdir_name, filename)
content = Content.from_file(path=path)
if content.data["length"] > max_content_length:
with pytest.raises(Exception, match="too large"):
Content.from_file(
path=path, max_content_length=max_content_length
)
else:
limited_content = Content.from_file(
path=path, max_content_length=max_content_length
)
assert content == limited_content
def test_file_max_length(self):
for max_content_length in [2, 4]:
for filename, content in self.contents.items():
path = os.path.join(self.tmpdir_name, filename)
content = Content.from_file(path=path)
limited_content = Content.from_file(
path=path, max_content_length=max_content_length
)
assert content.data["length"] == limited_content.data["length"]
assert content.data["status"] == "visible"
if content.data["length"] > max_content_length:
assert limited_content.data["status"] == "absent"
assert limited_content.data["reason"] == "Content too large"
else:
assert limited_content.data["status"] == "visible"
def test_special_file_max_length(self):
for max_content_length in [None, 0, 1]:
for filename in self.specials:
path = os.path.join(self.tmpdir_name, filename)
content = Content.from_file(path=path)
limited_content = Content.from_file(
path=path, max_content_length=max_content_length
)
assert limited_content == content
def test_file_to_content_with_path(self):
for filename, content in self.contents.items():
content_w_path = content.copy()
path = os.path.join(self.tmpdir_name, filename)
content_w_path["path"] = path
conv_content = Content.from_file(path=path)
self.assertContentEqual(conv_content, content_w_path, check_path=True)
@pytest.mark.fs
class DirectoryToObjects(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
contents = os.path.join(self.tmpdir_name, b"contents")
os.mkdir(contents)
self.make_contents(contents)
symlinks = os.path.join(self.tmpdir_name, b"symlinks")
os.mkdir(symlinks)
self.make_symlinks(symlinks)
specials = os.path.join(self.tmpdir_name, b"specials")
os.mkdir(specials)
self.make_specials(specials)
empties = os.path.join(self.tmpdir_name, b"empty1", b"empty2")
os.makedirs(empties)
def test_directory_to_objects(self):
directory = Directory.from_disk(path=self.tmpdir_name)
for name, value in self.contents.items():
self.assertContentEqual(directory[b"contents/" + name], value)
for name, value in self.symlinks.items():
self.assertContentEqual(directory[b"symlinks/" + name], value)
for name in self.specials:
self.assertContentEqual(
- directory[b"specials/" + name], self.empty_content,
+ directory[b"specials/" + name],
+ self.empty_content,
)
self.assertEqual(
- directory[b"empty1/empty2"].get_data(), self.empty_directory,
+ directory[b"empty1/empty2"].get_data(),
+ self.empty_directory,
)
# Raise on nonexistent file
with self.assertRaisesRegex(KeyError, "b'nonexistent'"):
directory[b"empty1/nonexistent"]
# Raise on nonexistent directory
with self.assertRaisesRegex(KeyError, "b'nonexistentdir'"):
directory[b"nonexistentdir/file"]
objs = directory.collect()
self.assertCountEqual(["content", "directory"], objs)
self.assertEqual(len(objs["directory"]), 6)
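# contents and symlinks all have distinct data; the special files all map to the same empty content, hence the +1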
self.assertEqual(
len(objs["content"]), len(self.contents) + len(self.symlinks) + 1
)
def test_directory_to_objects_ignore_empty(self):
directory = Directory.from_disk(
path=self.tmpdir_name, dir_filter=from_disk.ignore_empty_directories
)
for name, value in self.contents.items():
self.assertContentEqual(directory[b"contents/" + name], value)
for name, value in self.symlinks.items():
self.assertContentEqual(directory[b"symlinks/" + name], value)
for name in self.specials:
self.assertContentEqual(
- directory[b"specials/" + name], self.empty_content,
+ directory[b"specials/" + name],
+ self.empty_content,
)
# empty directories have been ignored recursively
with self.assertRaisesRegex(KeyError, "b'empty1'"):
directory[b"empty1"]
with self.assertRaisesRegex(KeyError, "b'empty1'"):
directory[b"empty1/empty2"]
objs = directory.collect()
self.assertCountEqual(["content", "directory"], objs)
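# only 4 directories remain: the root plus contents/, symlinks/ and specials/, since empty1/ and empty1/empty2 were pruned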
self.assertEqual(len(objs["directory"]), 4)
self.assertEqual(
len(objs["content"]), len(self.contents) + len(self.symlinks) + 1
)
def test_directory_to_objects_ignore_name(self):
directory = Directory.from_disk(
path=self.tmpdir_name,
dir_filter=from_disk.ignore_named_directories([b"symlinks"]),
)
for name, value in self.contents.items():
self.assertContentEqual(directory[b"contents/" + name], value)
for name in self.specials:
self.assertContentEqual(
- directory[b"specials/" + name], self.empty_content,
+ directory[b"specials/" + name],
+ self.empty_content,
)
self.assertEqual(
- directory[b"empty1/empty2"].get_data(), self.empty_directory,
+ directory[b"empty1/empty2"].get_data(),
+ self.empty_directory,
)
with self.assertRaisesRegex(KeyError, "b'symlinks'"):
directory[b"symlinks"]
objs = directory.collect()
self.assertCountEqual(["content", "directory"], objs)
self.assertEqual(len(objs["directory"]), 5)
self.assertEqual(len(objs["content"]), len(self.contents) + 1)
def test_directory_to_objects_ignore_name_case(self):
directory = Directory.from_disk(
path=self.tmpdir_name,
dir_filter=from_disk.ignore_named_directories(
[b"symLiNks"], case_sensitive=False
),
)
for name, value in self.contents.items():
self.assertContentEqual(directory[b"contents/" + name], value)
for name in self.specials:
self.assertContentEqual(
- directory[b"specials/" + name], self.empty_content,
+ directory[b"specials/" + name],
+ self.empty_content,
)
self.assertEqual(
- directory[b"empty1/empty2"].get_data(), self.empty_directory,
+ directory[b"empty1/empty2"].get_data(),
+ self.empty_directory,
)
with self.assertRaisesRegex(KeyError, "b'symlinks'"):
directory[b"symlinks"]
objs = directory.collect()
self.assertCountEqual(["content", "directory"], objs)
self.assertEqual(len(objs["directory"]), 5)
self.assertEqual(len(objs["content"]), len(self.contents) + 1)
def test_directory_entry_order(self):
with tempfile.TemporaryDirectory() as dirname:
dirname = os.fsencode(dirname)
open(os.path.join(dirname, b"foo."), "a")
open(os.path.join(dirname, b"foo0"), "a")
os.mkdir(os.path.join(dirname, b"foo"))
directory = Directory.from_disk(path=dirname)
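# git sorts tree entries as if directory names ended with "/", so the directory b"foo" sorts between the files b"foo." and b"foo0"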
assert [entry["name"] for entry in directory.entries] == [
b"foo.",
b"foo",
b"foo0",
]
@pytest.mark.fs
class TarballTest(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
self.make_from_tarball(self.tmpdir_name)
def test_contents_match(self):
directory = Directory.from_disk(
path=os.path.join(self.tmpdir_name, b"sample-folder")
)
for name, expected in self.tarball_contents.items():
obj = directory[name]
if isinstance(obj, Content):
self.assertContentEqual(obj, expected)
elif isinstance(obj, Directory):
self.assertDirectoryEqual(obj, expected)
else:
raise self.failureException("Unknown type for %s" % obj)
class TarballIterDirectory(DataMixin, unittest.TestCase):
def setUp(self):
super().setUp()
self.make_from_tarball(self.tmpdir_name)
def test_iter_directory(self):
- """Iter from_disk.directory should yield the full arborescence tree
-
- """
+ """Iter from_disk.directory should yield the full arborescence tree"""
directory = Directory.from_disk(
path=os.path.join(self.tmpdir_name, b"sample-folder")
)
contents, skipped_contents, directories = from_disk.iter_directory(directory)
expected_nb = defaultdict(int)
for name in self.tarball_contents.keys():
obj = directory[name]
expected_nb[obj.object_type] += 1
assert len(contents) == expected_nb["content"] and len(contents) > 0
assert len(skipped_contents) == 0
assert len(directories) == expected_nb["directory"] and len(directories) > 0
class DirectoryManipulation(DataMixin, unittest.TestCase):
def test_directory_access_nested(self):
d = Directory()
d[b"a"] = Directory()
d[b"a/b"] = Directory()
self.assertEqual(d[b"a/b"].get_data(), self.empty_directory)
def test_directory_del_nested(self):
d = Directory()
d[b"a"] = Directory()
d[b"a/b"] = Directory()
with self.assertRaisesRegex(KeyError, "b'c'"):
del d[b"a/b/c"]
with self.assertRaisesRegex(KeyError, "b'level2'"):
del d[b"a/level2/c"]
del d[b"a/b"]
self.assertEqual(d[b"a"].get_data(), self.empty_directory)
def test_directory_access_self(self):
d = Directory()
self.assertIs(d, d[b""])
self.assertIs(d, d[b"/"])
self.assertIs(d, d[b"//"])
def test_directory_access_wrong_type(self):
d = Directory()
with self.assertRaisesRegex(ValueError, "bytes from Directory"):
d["foo"]
with self.assertRaisesRegex(ValueError, "bytes from Directory"):
d[42]
def test_directory_repr(self):
entries = [b"a", b"b", b"c"]
d = Directory()
for entry in entries:
d[entry] = Directory()
r = repr(d)
self.assertIn(hash_to_hex(d.hash), r)
for entry in entries:
self.assertIn(str(entry), r)
def test_directory_set_wrong_type_name(self):
d = Directory()
with self.assertRaisesRegex(ValueError, "bytes Directory entry"):
d["foo"] = Directory()
with self.assertRaisesRegex(ValueError, "bytes Directory entry"):
d[42] = Directory()
def test_directory_set_nul_in_name(self):
d = Directory()
with self.assertRaisesRegex(ValueError, "nul bytes"):
d[b"\x00\x01"] = Directory()
def test_directory_set_empty_name(self):
d = Directory()
with self.assertRaisesRegex(ValueError, "must have a name"):
d[b""] = Directory()
with self.assertRaisesRegex(ValueError, "must have a name"):
d[b"/"] = Directory()
def test_directory_set_wrong_type(self):
d = Directory()
with self.assertRaisesRegex(ValueError, "Content or Directory"):
d[b"entry"] = object()
def test_directory_del_wrong_type(self):
d = Directory()
with self.assertRaisesRegex(ValueError, "bytes Directory entry"):
del d["foo"]
with self.assertRaisesRegex(ValueError, "bytes Directory entry"):
del d[42]
def test_directory_contains(self):
d = Directory()
d[b"a"] = Directory()
d[b"a/b"] = Directory()
d[b"a/b/c"] = Directory()
d[b"a/b/c/d"] = Content()
self.assertIn(b"a", d)
self.assertIn(b"a/b", d)
self.assertIn(b"a/b/c", d)
self.assertIn(b"a/b/c/d", d)
self.assertNotIn(b"b", d)
self.assertNotIn(b"b/c", d)
self.assertNotIn(b"b/c/d", d)
diff --git a/swh/model/tests/test_identifiers.py b/swh/model/tests/test_identifiers.py
index 6214584..793e6d5 100644
--- a/swh/model/tests/test_identifiers.py
+++ b/swh/model/tests/test_identifiers.py
@@ -1,1187 +1,1339 @@
# Copyright (C) 2015-2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import datetime
import hashlib
from typing import Dict
import unittest
import pytest
from swh.model import git_objects, hashutil
from swh.model.hashutil import hash_to_bytes as _x
from swh.model.model import (
Content,
Directory,
ExtID,
Origin,
RawExtrinsicMetadata,
Release,
Revision,
Snapshot,
TimestampWithTimezone,
)
def remove_id(d: Dict) -> Dict:
"""Returns a (shallow) copy of a dict with the 'id' key removed."""
d = d.copy()
if "id" in d:
del d["id"]
return d
class UtilityFunctionsDateOffset(unittest.TestCase):
def setUp(self):
self.dates = {
- b"1448210036": {"seconds": 1448210036, "microseconds": 0,},
- b"1448210036.002342": {"seconds": 1448210036, "microseconds": 2342,},
- b"1448210036.12": {"seconds": 1448210036, "microseconds": 120000,},
+ b"1448210036": {
+ "seconds": 1448210036,
+ "microseconds": 0,
+ },
+ b"1448210036.002342": {
+ "seconds": 1448210036,
+ "microseconds": 2342,
+ },
+ b"1448210036.12": {
+ "seconds": 1448210036,
+ "microseconds": 120000,
+ },
}
def test_format_date(self):
for date_repr, date in self.dates.items():
self.assertEqual(git_objects.format_date(date), date_repr)
content_example = {
"status": "visible",
"length": 5,
"data": b"1984\n",
"ctime": datetime.datetime(2015, 11, 22, 16, 33, 56, tzinfo=datetime.timezone.utc),
}
class ContentIdentifier(unittest.TestCase):
def setUp(self):
self.content_id = hashutil.MultiHash.from_data(content_example["data"]).digest()
def test_content_identifier(self):
self.assertEqual(
Content.from_data(content_example["data"]).hashes(), self.content_id
)
directory_example = {
"id": _x("d7ed3d2c31d608823be58b1cbe57605310615231"),
"entries": [
{
"type": "file",
"perms": 33188,
"name": b"README",
"target": _x("37ec8ea2110c0b7a32fbb0e872f6e7debbf95e21"),
},
{
"type": "file",
"perms": 33188,
"name": b"Rakefile",
"target": _x("3bb0e8592a41ae3185ee32266c860714980dbed7"),
},
{
"type": "dir",
"perms": 16384,
"name": b"app",
"target": _x("61e6e867f5d7ba3b40540869bc050b0c4fed9e95"),
},
{
"type": "file",
"perms": 33188,
"name": b"1.megabyte",
"target": _x("7c2b2fbdd57d6765cdc9d84c2d7d333f11be7fb3"),
},
{
"type": "dir",
"perms": 16384,
"name": b"config",
"target": _x("591dfe784a2e9ccc63aaba1cb68a765734310d98"),
},
{
"type": "dir",
"perms": 16384,
"name": b"public",
"target": _x("9588bf4522c2b4648bfd1c61d175d1f88c1ad4a5"),
},
{
"type": "file",
"perms": 33188,
"name": b"development.sqlite3",
"target": _x("e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"),
},
{
"type": "dir",
"perms": 16384,
"name": b"doc",
"target": _x("154705c6aa1c8ead8c99c7915373e3c44012057f"),
},
{
"type": "dir",
"perms": 16384,
"name": b"db",
"target": _x("85f157bdc39356b7bc7de9d0099b4ced8b3b382c"),
},
{
"type": "dir",
"perms": 16384,
"name": b"log",
"target": _x("5e3d3941c51cce73352dff89c805a304ba96fffe"),
},
{
"type": "dir",
"perms": 16384,
"name": b"script",
"target": _x("1b278423caf176da3f3533592012502aa10f566c"),
},
{
"type": "dir",
"perms": 16384,
"name": b"test",
"target": _x("035f0437c080bfd8711670b3e8677e686c69c763"),
},
{
"type": "dir",
"perms": 16384,
"name": b"vendor",
"target": _x("7c0dc9ad978c1af3f9a4ce061e50f5918bd27138"),
},
{
"type": "rev",
"perms": 57344,
"name": b"will_paginate",
"target": _x("3d531e169db92a16a9a8974f0ae6edf52e52659e"),
},
# in git order, the dir named "order" should be between the files
# named "order." and "order0"
{
"type": "dir",
"perms": 16384,
"name": b"order",
"target": _x("62cdb7020ff920e5aa642c3d4066950dd1f01f4d"),
},
{
"type": "file",
"perms": 16384,
"name": b"order.",
"target": _x("0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"),
},
{
"type": "file",
"perms": 16384,
"name": b"order0",
"target": _x("bbe960a25ea311d21d40669e93df2003ba9b90a2"),
},
],
}
class DirectoryIdentifier(unittest.TestCase):
def setUp(self):
self.directory = directory_example
self.empty_directory = {
"id": "4b825dc642cb6eb9a060e54bf8d69288fbee4904",
"entries": [],
}
def test_dir_identifier(self):
self.assertEqual(Directory.from_dict(self.directory).id, self.directory["id"])
self.assertEqual(
- Directory.from_dict(remove_id(self.directory)).id, self.directory["id"],
+ Directory.from_dict(remove_id(self.directory)).id,
+ self.directory["id"],
)
def test_dir_identifier_entry_order(self):
# Reverse order of entries, check the id is still the same.
directory = {"entries": reversed(self.directory["entries"])}
self.assertEqual(
- Directory.from_dict(remove_id(directory)).id, self.directory["id"],
+ Directory.from_dict(remove_id(directory)).id,
+ self.directory["id"],
)
def test_dir_identifier_empty_directory(self):
self.assertEqual(
Directory.from_dict(remove_id(self.empty_directory)).id,
_x(self.empty_directory["id"]),
)
linus_tz = datetime.timezone(datetime.timedelta(minutes=-420))
revision_example = {
"id": _x("bc0195aad0daa2ad5b0d76cce22b167bc3435590"),
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"committer_date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"message": b"Linux 4.2-rc2\n",
"type": "git",
"synthetic": False,
}
class RevisionIdentifier(unittest.TestCase):
def setUp(self):
gpgsig = b"""\
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.13 (Darwin)
iQIcBAABAgAGBQJVJcYsAAoJEBiY3kIkQRNJVAUQAJ8/XQIfMqqC5oYeEFfHOPYZ
L7qy46bXHVBa9Qd8zAJ2Dou3IbI2ZoF6/Et89K/UggOycMlt5FKV/9toWyuZv4Po
L682wonoxX99qvVTHo6+wtnmYO7+G0f82h+qHMErxjP+I6gzRNBvRr+SfY7VlGdK
wikMKOMWC5smrScSHITnOq1Ews5pe3N7qDYMzK0XVZmgDoaem4RSWMJs4My/qVLN
e0CqYWq2A22GX7sXl6pjneJYQvcAXUX+CAzp24QnPSb+Q22Guj91TcxLFcHCTDdn
qgqMsEyMiisoglwrCbO+D+1xq9mjN9tNFWP66SQ48mrrHYTBV5sz9eJyDfroJaLP
CWgbDTgq6GzRMehHT3hXfYS5NNatjnhkNISXR7pnVP/obIi/vpWh5ll6Gd8q26z+
a/O41UzOaLTeNI365MWT4/cnXohVLRG7iVJbAbCxoQmEgsYMRc/pBAzWJtLfcB2G
jdTswYL6+MUdL8sB9pZ82D+BP/YAdHe69CyTu1lk9RT2pYtI/kkfjHubXBCYEJSG
+VGllBbYG6idQJpyrOYNRJyrDi9yvDJ2W+S0iQrlZrxzGBVGTB/y65S8C+2WTBcE
lf1Qb5GDsQrZWgD+jtWTywOYHtCBwyCKSAXxSARMbNPeak9WPlcW/Jmu+fUcMe2x
dg1KdHOa34shrKDaOVzW
=od6m
-----END PGP SIGNATURE-----"""
self.revision = revision_example
self.revision_none_metadata = {
"id": _x("bc0195aad0daa2ad5b0d76cce22b167bc3435590"),
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
},
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
},
"committer_date": datetime.datetime(
2015, 7, 12, 15, 10, 30, tzinfo=linus_tz
),
"message": b"Linux 4.2-rc2\n",
"type": "git",
"synthetic": False,
"metadata": None,
}
self.synthetic_revision = {
"id": _x("b2a7e1260492e344fab3cbf91bc13c91e05426fd"),
"author": {
"name": b"Software Heritage",
"email": b"robot@softwareheritage.org",
},
- "date": {"timestamp": {"seconds": 1437047495}, "offset_bytes": b"+0000",},
+ "date": {
+ "timestamp": {"seconds": 1437047495},
+ "offset_bytes": b"+0000",
+ },
"type": "tar",
"committer": {
"name": b"Software Heritage",
"email": b"robot@softwareheritage.org",
},
"committer_date": 1437047495,
"synthetic": True,
"parents": [],
"message": b"synthetic revision message\n",
"directory": _x("d11f00a6a0fea6055341d25584b5a96516c0d2b8"),
"metadata": {
"original_artifact": [
{
"archive_type": "tar",
"name": "gcc-5.2.0.tar.bz2",
"sha1_git": "39d281aff934d44b439730057e55b055e206a586",
"sha1": "fe3f5390949d47054b613edc36c557eb1d51c18e",
"sha256": "5f835b04b5f7dd4f4d2dc96190ec1621b8d89f"
"2dc6f638f9f8bc1b1014ba8cad",
}
]
},
}
# cat commit.txt | git hash-object -t commit --stdin
self.revision_with_extra_headers = {
"id": _x("010d34f384fa99d047cdd5e2f41e56e5c2feee45"),
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"name": b"Linus Torvalds",
"email": b"torvalds@linux-foundation.org",
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"committer_date": datetime.datetime(
2015, 7, 12, 15, 10, 30, tzinfo=linus_tz
),
"message": b"Linux 4.2-rc2\n",
"type": "git",
"synthetic": False,
"extra_headers": (
(b"svn-repo-uuid", b"046f1af7-66c2-d61b-5410-ce57b7db7bff"),
(b"svn-revision", b"10"),
),
}
self.revision_with_gpgsig = {
"id": _x("44cc742a8ca17b9c279be4cc195a93a6ef7a320e"),
"directory": _x("b134f9b7dc434f593c0bab696345548b37de0558"),
"parents": [
_x("689664ae944b4692724f13b709a4e4de28b54e57"),
_x("c888305e1efbaa252d01b4e5e6b778f865a97514"),
],
"author": {
"name": b"Jiang Xin",
"email": b"worldhello.net@gmail.com",
"fullname": b"Jiang Xin <worldhello.net@gmail.com>",
},
- "date": {"timestamp": 1428538899, "offset": 480,},
- "committer": {"name": b"Jiang Xin", "email": b"worldhello.net@gmail.com",},
- "committer_date": {"timestamp": 1428538899, "offset": 480,},
+ "date": {
+ "timestamp": 1428538899,
+ "offset": 480,
+ },
+ "committer": {
+ "name": b"Jiang Xin",
+ "email": b"worldhello.net@gmail.com",
+ },
+ "committer_date": {
+ "timestamp": 1428538899,
+ "offset": 480,
+ },
"extra_headers": ((b"gpgsig", gpgsig),),
"message": b"""Merge branch 'master' of git://github.com/alexhenrie/git-po
* 'master' of git://github.com/alexhenrie/git-po:
l10n: ca.po: update translation
""",
"type": "git",
"synthetic": False,
}
self.revision_no_message = {
"id": _x("4cfc623c9238fa92c832beed000ce2d003fd8333"),
"directory": _x("b134f9b7dc434f593c0bab696345548b37de0558"),
"parents": [
_x("689664ae944b4692724f13b709a4e4de28b54e57"),
_x("c888305e1efbaa252d01b4e5e6b778f865a97514"),
],
"author": {
"name": b"Jiang Xin",
"email": b"worldhello.net@gmail.com",
"fullname": b"Jiang Xin <worldhello.net@gmail.com>",
},
- "date": {"timestamp": 1428538899, "offset": 480,},
- "committer": {"name": b"Jiang Xin", "email": b"worldhello.net@gmail.com",},
- "committer_date": {"timestamp": 1428538899, "offset": 480,},
+ "date": {
+ "timestamp": 1428538899,
+ "offset": 480,
+ },
+ "committer": {
+ "name": b"Jiang Xin",
+ "email": b"worldhello.net@gmail.com",
+ },
+ "committer_date": {
+ "timestamp": 1428538899,
+ "offset": 480,
+ },
"message": None,
"type": "git",
"synthetic": False,
}
self.revision_empty_message = {
"id": _x("7442cd78bd3b4966921d6a7f7447417b7acb15eb"),
"directory": _x("b134f9b7dc434f593c0bab696345548b37de0558"),
"parents": [
_x("689664ae944b4692724f13b709a4e4de28b54e57"),
_x("c888305e1efbaa252d01b4e5e6b778f865a97514"),
],
"author": {
"name": b"Jiang Xin",
"email": b"worldhello.net@gmail.com",
"fullname": b"Jiang Xin <worldhello.net@gmail.com>",
},
- "date": {"timestamp": 1428538899, "offset": 480,},
- "committer": {"name": b"Jiang Xin", "email": b"worldhello.net@gmail.com",},
- "committer_date": {"timestamp": 1428538899, "offset": 480,},
+ "date": {
+ "timestamp": 1428538899,
+ "offset": 480,
+ },
+ "committer": {
+ "name": b"Jiang Xin",
+ "email": b"worldhello.net@gmail.com",
+ },
+ "committer_date": {
+ "timestamp": 1428538899,
+ "offset": 480,
+ },
"message": b"",
"type": "git",
"synthetic": False,
}
self.revision_only_fullname = {
"id": _x("010d34f384fa99d047cdd5e2f41e56e5c2feee45"),
"directory": _x("85a74718d377195e1efd0843ba4f3260bad4fe07"),
"parents": [_x("01e2d0627a9a6edb24c37db45db5ecb31e9de808")],
- "author": {"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",},
+ "author": {
+ "fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
+ },
"date": datetime.datetime(2015, 7, 12, 15, 10, 30, tzinfo=linus_tz),
"committer": {
"fullname": b"Linus Torvalds <torvalds@linux-foundation.org>",
},
"committer_date": datetime.datetime(
2015, 7, 12, 15, 10, 30, tzinfo=linus_tz
),
"message": b"Linux 4.2-rc2\n",
"type": "git",
"synthetic": False,
"extra_headers": (
(b"svn-repo-uuid", b"046f1af7-66c2-d61b-5410-ce57b7db7bff"),
(b"svn-revision", b"10"),
),
}
def test_revision_identifier(self):
self.assertEqual(
- Revision.from_dict(self.revision).id, self.revision["id"],
+ Revision.from_dict(self.revision).id,
+ self.revision["id"],
)
self.assertEqual(
- Revision.from_dict(remove_id(self.revision)).id, self.revision["id"],
+ Revision.from_dict(remove_id(self.revision)).id,
+ self.revision["id"],
)
def test_revision_identifier_none_metadata(self):
self.assertEqual(
Revision.from_dict(remove_id(self.revision_none_metadata)).id,
self.revision_none_metadata["id"],
)
def test_revision_identifier_synthetic(self):
self.assertEqual(
Revision.from_dict(remove_id(self.synthetic_revision)).id,
self.synthetic_revision["id"],
)
def test_revision_identifier_with_extra_headers(self):
self.assertEqual(
Revision.from_dict(remove_id(self.revision_with_extra_headers)).id,
self.revision_with_extra_headers["id"],
)
def test_revision_identifier_with_gpgsig(self):
self.assertEqual(
Revision.from_dict(remove_id(self.revision_with_gpgsig)).id,
self.revision_with_gpgsig["id"],
)
def test_revision_identifier_no_message(self):
self.assertEqual(
Revision.from_dict(remove_id(self.revision_no_message)).id,
self.revision_no_message["id"],
)
def test_revision_identifier_empty_message(self):
self.assertEqual(
Revision.from_dict(remove_id(self.revision_empty_message)).id,
self.revision_empty_message["id"],
)
def test_revision_identifier_only_fullname(self):
self.assertEqual(
Revision.from_dict(remove_id(self.revision_only_fullname)).id,
self.revision_only_fullname["id"],
)
release_example = {
"id": _x("2b10839e32c4c476e9d94492756bb1a3e1ec4aa8"),
"target": _x("741b2252a5e14d6c60a913c77a6099abe73a854a"),
"target_type": "revision",
"name": b"v2.6.14",
"author": {
"name": b"Linus Torvalds",
"email": b"torvalds@g5.osdl.org",
"fullname": b"Linus Torvalds <torvalds@g5.osdl.org>",
},
"date": datetime.datetime(2005, 10, 27, 17, 2, 33, tzinfo=linus_tz),
"message": b"""\
Linux 2.6.14 release
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (GNU/Linux)
iD8DBQBDYWq6F3YsRnbiHLsRAmaeAJ9RCez0y8rOBbhSv344h86l/VVcugCeIhO1
wdLOnvj91G4wxYqrvThthbE=
=7VeT
-----END PGP SIGNATURE-----
""",
"synthetic": False,
}
class ReleaseIdentifier(unittest.TestCase):
def setUp(self):
linus_tz = datetime.timezone(datetime.timedelta(minutes=-420))
self.release = release_example
self.release_no_author = {
"id": _x("26791a8bcf0e6d33f43aef7682bdb555236d56de"),
"target": _x("9ee1c939d1cb936b1f98e8d81aeffab57bae46ab"),
"target_type": "revision",
"name": b"v2.6.12",
"message": b"""\
This is the final 2.6.12 release
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)
iD8DBQBCsykyF3YsRnbiHLsRAvPNAJ482tCZwuxp/bJRz7Q98MHlN83TpACdHr37
o6X/3T+vm8K3bf3driRr34c=
=sBHn
-----END PGP SIGNATURE-----
""",
"synthetic": False,
}
self.release_no_message = {
"id": _x("b6f4f446715f7d9543ef54e41b62982f0db40045"),
"target": _x("9ee1c939d1cb936b1f98e8d81aeffab57bae46ab"),
"target_type": "revision",
"name": b"v2.6.12",
- "author": {"name": b"Linus Torvalds", "email": b"torvalds@g5.osdl.org",},
+ "author": {
+ "name": b"Linus Torvalds",
+ "email": b"torvalds@g5.osdl.org",
+ },
"date": datetime.datetime(2005, 10, 27, 17, 2, 33, tzinfo=linus_tz),
"message": None,
"synthetic": False,
}
self.release_empty_message = {
"id": _x("71a0aea72444d396575dc25ac37fec87ee3c6492"),
"target": _x("9ee1c939d1cb936b1f98e8d81aeffab57bae46ab"),
"target_type": "revision",
"name": b"v2.6.12",
- "author": {"name": b"Linus Torvalds", "email": b"torvalds@g5.osdl.org",},
+ "author": {
+ "name": b"Linus Torvalds",
+ "email": b"torvalds@g5.osdl.org",
+ },
"date": datetime.datetime(2005, 10, 27, 17, 2, 33, tzinfo=linus_tz),
"message": b"",
"synthetic": False,
}
self.release_negative_utc = {
"id": _x("97c8d2573a001f88e72d75f596cf86b12b82fd01"),
"name": b"20081029",
"target": _x("54e9abca4c77421e2921f5f156c9fe4a9f7441c7"),
"target_type": "revision",
- "date": {"timestamp": {"seconds": 1225281976}, "offset_bytes": b"-0000",},
- "author": {"name": b"Otavio Salvador", "email": b"otavio@debian.org",},
+ "date": {
+ "timestamp": {"seconds": 1225281976},
+ "offset_bytes": b"-0000",
+ },
+ "author": {
+ "name": b"Otavio Salvador",
+ "email": b"otavio@debian.org",
+ },
"synthetic": False,
"message": b"tagging version 20081029\n\nr56558\n",
}
self.release_newline_in_author = {
"author": {
"email": b"esycat@gmail.com",
"fullname": b"Eugene Janusov\n<esycat@gmail.com>",
"name": b"Eugene Janusov\n",
},
"date": {
"offset_bytes": b"+1000",
- "timestamp": {"microseconds": 0, "seconds": 1377480558,},
+ "timestamp": {
+ "microseconds": 0,
+ "seconds": 1377480558,
+ },
},
"id": _x("5c98f559d034162de22d3ebeb95433e6f8885231"),
"message": b"Release of v0.3.2.",
"name": b"0.3.2",
"synthetic": False,
"target": _x("c06aa3d93b78a2865c4935170030f8c2d7396fd3"),
"target_type": "revision",
}
self.release_snapshot_target = dict(self.release)
self.release_snapshot_target["target_type"] = "snapshot"
self.release_snapshot_target["id"] = _x(
"c29c3ddcc6769a04e54dd69d63a6fdcbc566f850"
)
def test_release_identifier(self):
self.assertEqual(
- Release.from_dict(self.release).id, self.release["id"],
+ Release.from_dict(self.release).id,
+ self.release["id"],
)
self.assertEqual(
- Release.from_dict(remove_id(self.release)).id, self.release["id"],
+ Release.from_dict(remove_id(self.release)).id,
+ self.release["id"],
)
def test_release_identifier_no_author(self):
self.assertEqual(
Release.from_dict(remove_id(self.release_no_author)).id,
self.release_no_author["id"],
)
def test_release_identifier_no_message(self):
self.assertEqual(
Release.from_dict(remove_id(self.release_no_message)).id,
self.release_no_message["id"],
)
def test_release_identifier_empty_message(self):
self.assertEqual(
Release.from_dict(remove_id(self.release_empty_message)).id,
self.release_empty_message["id"],
)
def test_release_identifier_negative_utc(self):
self.assertEqual(
Release.from_dict(remove_id(self.release_negative_utc)).id,
self.release_negative_utc["id"],
)
def test_release_identifier_newline_in_author(self):
self.assertEqual(
Release.from_dict(remove_id(self.release_newline_in_author)).id,
self.release_newline_in_author["id"],
)
def test_release_identifier_snapshot_target(self):
self.assertEqual(
Release.from_dict(self.release_snapshot_target).id,
self.release_snapshot_target["id"],
)
snapshot_example = {
"id": _x("6e65b86363953b780d92b0a928f3e8fcdd10db36"),
"branches": {
b"directory": {
"target": _x("1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8"),
"target_type": "directory",
},
b"content": {
"target": _x("fe95a46679d128ff167b7c55df5d02356c5a1ae1"),
"target_type": "content",
},
- b"alias": {"target": b"revision", "target_type": "alias",},
+ b"alias": {
+ "target": b"revision",
+ "target_type": "alias",
+ },
b"revision": {
"target": _x("aafb16d69fd30ff58afdd69036a26047f3aebdc6"),
"target_type": "revision",
},
b"release": {
"target": _x("7045404f3d1c54e6473c71bbb716529fbad4be24"),
"target_type": "release",
},
b"snapshot": {
"target": _x("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"),
"target_type": "snapshot",
},
b"dangling": None,
},
}
class SnapshotIdentifier(unittest.TestCase):
def setUp(self):
super().setUp()
self.empty = {
"id": _x("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e"),
"branches": {},
}
self.dangling_branch = {
"id": _x("c84502e821eb21ed84e9fd3ec40973abc8b32353"),
- "branches": {b"HEAD": None,},
+ "branches": {
+ b"HEAD": None,
+ },
}
self.unresolved = {
"id": _x("84b4548ea486e4b0a7933fa541ff1503a0afe1e0"),
- "branches": {b"foo": {"target": b"bar", "target_type": "alias",},},
+ "branches": {
+ b"foo": {
+ "target": b"bar",
+ "target_type": "alias",
+ },
+ },
}
self.all_types = snapshot_example
def test_empty_snapshot(self):
self.assertEqual(
- Snapshot.from_dict(remove_id(self.empty)).id, self.empty["id"],
+ Snapshot.from_dict(remove_id(self.empty)).id,
+ self.empty["id"],
)
def test_dangling_branch(self):
self.assertEqual(
Snapshot.from_dict(remove_id(self.dangling_branch)).id,
self.dangling_branch["id"],
)
def test_unresolved(self):
with self.assertRaisesRegex(ValueError, "b'foo' -> b'bar'"):
Snapshot.from_dict(remove_id(self.unresolved))
def test_all_types(self):
self.assertEqual(
- Snapshot.from_dict(remove_id(self.all_types)).id, self.all_types["id"],
+ Snapshot.from_dict(remove_id(self.all_types)).id,
+ self.all_types["id"],
)
authority_example = {
"type": "forge",
"url": "https://forge.softwareheritage.org/",
}
fetcher_example = {
"name": "swh-phabricator-metadata-fetcher",
"version": "0.0.1",
}
metadata_example = {
"target": "swh:1:cnt:568aaf43d83b2c3df8067f3bedbb97d83260be6d",
"discovery_date": datetime.datetime(
2021, 1, 25, 11, 27, 51, tzinfo=datetime.timezone.utc
),
"authority": authority_example,
"fetcher": fetcher_example,
"format": "json",
"metadata": b'{"foo": "bar"}',
}
class RawExtrinsicMetadataIdentifier(unittest.TestCase):
def setUp(self):
super().setUp()
self.minimal = metadata_example
self.maximal = {
**self.minimal,
"origin": "https://forge.softwareheritage.org/source/swh-model/",
"visit": 42,
"snapshot": "swh:1:snp:" + "00" * 20,
"release": "swh:1:rel:" + "01" * 20,
"revision": "swh:1:rev:" + "02" * 20,
"path": b"/abc/def",
"directory": "swh:1:dir:" + "03" * 20,
}
def test_minimal(self):
git_object = (
b"raw_extrinsic_metadata 210\0"
b"target swh:1:cnt:568aaf43d83b2c3df8067f3bedbb97d83260be6d\n"
b"discovery_date 1611574071\n"
b"authority forge https://forge.softwareheritage.org/\n"
b"fetcher swh-phabricator-metadata-fetcher 0.0.1\n"
b"format json\n"
b"\n"
b'{"foo": "bar"}'
)
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(self.minimal)
),
git_object,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(self.minimal).id,
hashlib.sha1(git_object).digest(),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(self.minimal).id,
_x("5c13f20ba336e44549baf3d7b9305b027ec9f43d"),
)
def test_maximal(self):
git_object = (
b"raw_extrinsic_metadata 533\0"
b"target swh:1:cnt:568aaf43d83b2c3df8067f3bedbb97d83260be6d\n"
b"discovery_date 1611574071\n"
b"authority forge https://forge.softwareheritage.org/\n"
b"fetcher swh-phabricator-metadata-fetcher 0.0.1\n"
b"format json\n"
b"origin https://forge.softwareheritage.org/source/swh-model/\n"
b"visit 42\n"
b"snapshot swh:1:snp:0000000000000000000000000000000000000000\n"
b"release swh:1:rel:0101010101010101010101010101010101010101\n"
b"revision swh:1:rev:0202020202020202020202020202020202020202\n"
b"path /abc/def\n"
b"directory swh:1:dir:0303030303030303030303030303030303030303\n"
b"\n"
b'{"foo": "bar"}'
)
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(self.maximal)
),
git_object,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(self.maximal).id,
hashlib.sha1(git_object).digest(),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(self.maximal).id,
_x("f96966e1093d15236a31fde07e47d5b1c9428049"),
)
def test_nonascii_path(self):
metadata = {
**self.minimal,
"path": b"/ab\nc/d\xf0\x9f\xa4\xb7e\x00f",
}
git_object = (
b"raw_extrinsic_metadata 231\0"
b"target swh:1:cnt:568aaf43d83b2c3df8067f3bedbb97d83260be6d\n"
b"discovery_date 1611574071\n"
b"authority forge https://forge.softwareheritage.org/\n"
b"fetcher swh-phabricator-metadata-fetcher 0.0.1\n"
b"format json\n"
b"path /ab\n"
b" c/d\xf0\x9f\xa4\xb7e\x00f\n"
b"\n"
b'{"foo": "bar"}'
)
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(metadata)
),
git_object,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
hashlib.sha1(git_object).digest(),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
_x("7cc83fd1912176510c083f5df43f01b09af4b333"),
)
def test_timezone_insensitive(self):
"""Checks the timezone of the datetime.datetime does not affect the
hashed git_object."""
utc_plus_one = datetime.timezone(datetime.timedelta(hours=1))
metadata = {
**self.minimal,
"discovery_date": datetime.datetime(
- 2021, 1, 25, 12, 27, 51, tzinfo=utc_plus_one,
+ 2021,
+ 1,
+ 25,
+ 12,
+ 27,
+ 51,
+ tzinfo=utc_plus_one,
),
}
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(self.minimal)
),
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(metadata)
),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(self.minimal).id,
RawExtrinsicMetadata.from_dict(metadata).id,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
_x("5c13f20ba336e44549baf3d7b9305b027ec9f43d"),
)
def test_microsecond_insensitive(self):
"""Checks the microseconds of the datetime.datetime does not affect the
hashed manifest."""
metadata = {
**self.minimal,
"discovery_date": datetime.datetime(
- 2021, 1, 25, 11, 27, 51, 123456, tzinfo=datetime.timezone.utc,
+ 2021,
+ 1,
+ 25,
+ 11,
+ 27,
+ 51,
+ 123456,
+ tzinfo=datetime.timezone.utc,
),
}
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(self.minimal)
),
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(metadata)
),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(self.minimal).id,
RawExtrinsicMetadata.from_dict(metadata).id,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
_x("5c13f20ba336e44549baf3d7b9305b027ec9f43d"),
)
def test_noninteger_timezone(self):
"""Checks the discovery_date is translated to UTC before truncating
microseconds"""
tz = datetime.timezone(datetime.timedelta(microseconds=-42))
metadata = {
**self.minimal,
"discovery_date": datetime.datetime(
- 2021, 1, 25, 11, 27, 50, 1_000_000 - 42, tzinfo=tz,
+ 2021,
+ 1,
+ 25,
+ 11,
+ 27,
+ 50,
+ 1_000_000 - 42,
+ tzinfo=tz,
),
}
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(self.minimal)
),
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(metadata)
),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(self.minimal).id,
RawExtrinsicMetadata.from_dict(metadata).id,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
_x("5c13f20ba336e44549baf3d7b9305b027ec9f43d"),
)
def test_negative_timestamp(self):
metadata = {
**self.minimal,
"discovery_date": datetime.datetime(
- 1960, 1, 25, 11, 27, 51, tzinfo=datetime.timezone.utc,
+ 1960,
+ 1,
+ 25,
+ 11,
+ 27,
+ 51,
+ tzinfo=datetime.timezone.utc,
),
}
git_object = (
b"raw_extrinsic_metadata 210\0"
b"target swh:1:cnt:568aaf43d83b2c3df8067f3bedbb97d83260be6d\n"
b"discovery_date -313504329\n"
b"authority forge https://forge.softwareheritage.org/\n"
b"fetcher swh-phabricator-metadata-fetcher 0.0.1\n"
b"format json\n"
b"\n"
b'{"foo": "bar"}'
)
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(metadata)
),
git_object,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
hashlib.sha1(git_object).digest(),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
_x("895d0821a2991dd376ddc303424aceb7c68280f9"),
)
def test_epoch(self):
metadata = {
**self.minimal,
"discovery_date": datetime.datetime(
- 1970, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc,
+ 1970,
+ 1,
+ 1,
+ 0,
+ 0,
+ 0,
+ tzinfo=datetime.timezone.utc,
),
}
git_object = (
b"raw_extrinsic_metadata 201\0"
b"target swh:1:cnt:568aaf43d83b2c3df8067f3bedbb97d83260be6d\n"
b"discovery_date 0\n"
b"authority forge https://forge.softwareheritage.org/\n"
b"fetcher swh-phabricator-metadata-fetcher 0.0.1\n"
b"format json\n"
b"\n"
b'{"foo": "bar"}'
)
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(metadata)
),
git_object,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
hashlib.sha1(git_object).digest(),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
_x("27a53df54ace35ebd910493cdc70b334d6b7cb88"),
)
def test_negative_epoch(self):
metadata = {
**self.minimal,
"discovery_date": datetime.datetime(
- 1969, 12, 31, 23, 59, 59, 1, tzinfo=datetime.timezone.utc,
+ 1969,
+ 12,
+ 31,
+ 23,
+ 59,
+ 59,
+ 1,
+ tzinfo=datetime.timezone.utc,
),
}
git_object = (
b"raw_extrinsic_metadata 202\0"
b"target swh:1:cnt:568aaf43d83b2c3df8067f3bedbb97d83260be6d\n"
b"discovery_date -1\n"
b"authority forge https://forge.softwareheritage.org/\n"
b"fetcher swh-phabricator-metadata-fetcher 0.0.1\n"
b"format json\n"
b"\n"
b'{"foo": "bar"}'
)
self.assertEqual(
git_objects.raw_extrinsic_metadata_git_object(
RawExtrinsicMetadata.from_dict(metadata)
),
git_object,
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
hashlib.sha1(git_object).digest(),
)
self.assertEqual(
RawExtrinsicMetadata.from_dict(metadata).id,
_x("be7154a8fd49d87f81547ea634d1e2152907d089"),
)
origin_example = {
"url": "https://github.com/torvalds/linux",
}
class OriginIdentifier(unittest.TestCase):
def test_content_identifier(self):
self.assertEqual(
Origin.from_dict(origin_example).id,
_x("b63a575fe3faab7692c9f38fb09d4bb45651bb0f"),
)
# Format: [
# (
# input1,
# expected_output1,
# ),
# (
# input2,
# expected_output2,
# ),
# ...
# ]
TS_DICTS = [
# with current input dict format (offset_bytes)
(
{"timestamp": 12345, "offset_bytes": b"+0000"},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
(
{"timestamp": 12345, "offset_bytes": b"-0000"},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"-0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"-0000",
+ },
),
(
{"timestamp": 12345, "offset_bytes": b"+0200"},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0200",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0200",
+ },
),
(
{"timestamp": 12345, "offset_bytes": b"-0200"},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"-0200",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"-0200",
+ },
),
(
{"timestamp": 12345, "offset_bytes": b"--700"},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"--700",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"--700",
+ },
),
(
{"timestamp": 12345, "offset_bytes": b"1234567"},
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset_bytes": b"1234567",
},
),
# with old-style input dicts (numeric offset + optional negative_utc):
(
{"timestamp": 12345, "offset": 0},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": False},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": False},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": None},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
(
{"timestamp": {"seconds": 12345}, "offset": 0, "negative_utc": None},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
(
{
"timestamp": {"seconds": 12345, "microseconds": 0},
"offset": 0,
"negative_utc": None,
},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
(
{
"timestamp": {"seconds": 12345, "microseconds": 100},
"offset": 0,
"negative_utc": None,
},
{
"timestamp": {"seconds": 12345, "microseconds": 100},
"offset_bytes": b"+0000",
},
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": True},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"-0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"-0000",
+ },
),
(
{"timestamp": 12345, "offset": 0, "negative_utc": None},
- {"timestamp": {"seconds": 12345, "microseconds": 0}, "offset_bytes": b"+0000",},
+ {
+ "timestamp": {"seconds": 12345, "microseconds": 0},
+ "offset_bytes": b"+0000",
+ },
),
]
@pytest.mark.parametrize("dict_input,expected", TS_DICTS)
def test_normalize_timestamp_dict(dict_input, expected):
assert TimestampWithTimezone.from_dict(dict_input).to_dict() == expected
TS_DICTS_INVALID_TIMESTAMP = [
{"timestamp": 1.2, "offset": 0},
{"timestamp": "1", "offset": 0},
# these below should really also trigger a ValueError...
# {"timestamp": {"seconds": "1"}, "offset": 0},
# {"timestamp": {"seconds": 1.2}, "offset": 0},
# {"timestamp": {"seconds": 1.2}, "offset": 0},
]
@pytest.mark.parametrize("dict_input", TS_DICTS_INVALID_TIMESTAMP)
def test_normalize_timestamp_dict_invalid_timestamp(dict_input):
with pytest.raises(ValueError, match="non-integer timestamp"):
TimestampWithTimezone.from_dict(dict_input)
UTC = datetime.timezone.utc
TS_TIMEZONES = [
datetime.timezone.min,
datetime.timezone(datetime.timedelta(hours=-1)),
UTC,
datetime.timezone(datetime.timedelta(minutes=+60)),
datetime.timezone.max,
]
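# expected UTC offsets (in minutes) and their byte encodings, one per timezone in TS_TIMEZONES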
TS_TZ_EXPECTED = [-1439, -60, 0, 60, 1439]
TS_TZ_BYTES_EXPECTED = [b"-2359", b"-0100", b"+0000", b"+0100", b"+2359"]
TS_DATETIMES = [
datetime.datetime(2020, 2, 27, 14, 39, 19, tzinfo=UTC),
datetime.datetime(2120, 12, 31, 23, 59, 59, tzinfo=UTC),
datetime.datetime(1610, 5, 14, 15, 43, 0, tzinfo=UTC),
]
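# expected POSIX timestamps (in seconds) for each datetime in TS_DATETIMES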
TS_DT_EXPECTED = [1582814359, 4765132799, -11348929020]
@pytest.mark.parametrize("date, seconds", zip(TS_DATETIMES, TS_DT_EXPECTED))
@pytest.mark.parametrize(
"tz, offset, offset_bytes", zip(TS_TIMEZONES, TS_TZ_EXPECTED, TS_TZ_BYTES_EXPECTED)
)
@pytest.mark.parametrize("microsecond", [0, 1, 10, 100, 1000, 999999])
def test_normalize_timestamp_datetime(
date, seconds, tz, offset, offset_bytes, microsecond
):
date = date.astimezone(tz).replace(microsecond=microsecond)
assert TimestampWithTimezone.from_dict(date).to_dict() == {
"timestamp": {"seconds": seconds, "microseconds": microsecond},
"offset_bytes": offset_bytes,
}
def test_extid_identifier_bwcompat():
extid_dict = {
"extid_type": "test-type",
"extid": b"extid",
"target": "swh:1:dir:" + "00" * 20,
}
assert ExtID.from_dict(extid_dict).id == _x(
"b9295e1931c31e40a7e3e1e967decd1c89426455"
)
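# extid_version defaults to 0: making it explicit keeps the same id, bumping it changes the id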
assert (
ExtID.from_dict({**extid_dict, "extid_version": 0}).id
== ExtID.from_dict(extid_dict).id
)
assert (
ExtID.from_dict({**extid_dict, "extid_version": 1}).id
!= ExtID.from_dict(extid_dict).id
)
diff --git a/swh/model/tests/test_merkle.py b/swh/model/tests/test_merkle.py
index 32de872..52edb2c 100644
--- a/swh/model/tests/test_merkle.py
+++ b/swh/model/tests/test_merkle.py
@@ -1,255 +1,267 @@
# Copyright (C) 2017-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import unittest
from swh.model import merkle
class MerkleTestNode(merkle.MerkleNode):
object_type = "tested_merkle_node_type"
def __init__(self, data):
super().__init__(data)
self.compute_hash_called = 0
def compute_hash(self):
self.compute_hash_called += 1
child_data = [child + b"=" + self[child].hash for child in sorted(self)]
return b"hash(" + b", ".join([self.data["value"]] + child_data) + b")"
class MerkleTestLeaf(merkle.MerkleLeaf):
object_type = "tested_merkle_leaf_type"
def __init__(self, data):
super().__init__(data)
self.compute_hash_called = 0
def compute_hash(self):
self.compute_hash_called += 1
return b"hash(" + self.data["value"] + b")"
class TestMerkleLeaf(unittest.TestCase):
def setUp(self):
self.data = {"value": b"value"}
self.instance = MerkleTestLeaf(self.data)
def test_equality(self):
leaf1 = MerkleTestLeaf(self.data)
leaf2 = MerkleTestLeaf(self.data)
leaf3 = MerkleTestLeaf({})
self.assertEqual(leaf1, leaf2)
self.assertNotEqual(leaf1, leaf3)
def test_hash(self):
self.assertEqual(self.instance.compute_hash_called, 0)
instance_hash = self.instance.hash
self.assertEqual(self.instance.compute_hash_called, 1)
instance_hash2 = self.instance.hash
self.assertEqual(self.instance.compute_hash_called, 1)
self.assertEqual(instance_hash, instance_hash2)
def test_data(self):
self.assertEqual(self.instance.get_data(), self.data)
def test_collect(self):
collected = self.instance.collect()
self.assertEqual(
collected,
{
self.instance.object_type: {
self.instance.hash: self.instance.get_data(),
},
},
)
collected2 = self.instance.collect()
self.assertEqual(collected2, {})
self.instance.reset_collect()
collected3 = self.instance.collect()
self.assertEqual(collected, collected3)
def test_leaf(self):
with self.assertRaisesRegex(ValueError, "is a leaf"):
self.instance[b"key1"] = "Test"
with self.assertRaisesRegex(ValueError, "is a leaf"):
del self.instance[b"key1"]
with self.assertRaisesRegex(ValueError, "is a leaf"):
self.instance[b"key1"]
with self.assertRaisesRegex(ValueError, "is a leaf"):
self.instance.update(self.data)
class TestMerkleNode(unittest.TestCase):
maxDiff = None
def setUp(self):
self.root = MerkleTestNode({"value": b"root"})
self.nodes = {b"root": self.root}
for i in (b"a", b"b", b"c"):
value = b"root/" + i
- node = MerkleTestNode({"value": value,})
+ node = MerkleTestNode(
+ {
+ "value": value,
+ }
+ )
self.root[i] = node
self.nodes[value] = node
for j in (b"a", b"b", b"c"):
value2 = value + b"/" + j
- node2 = MerkleTestNode({"value": value2,})
+ node2 = MerkleTestNode(
+ {
+ "value": value2,
+ }
+ )
node[j] = node2
self.nodes[value2] = node2
for k in (b"a", b"b", b"c"):
value3 = value2 + b"/" + k
- node3 = MerkleTestNode({"value": value3,})
+ node3 = MerkleTestNode(
+ {
+ "value": value3,
+ }
+ )
node2[k] = node3
self.nodes[value3] = node3
def test_equality(self):
node1 = merkle.MerkleNode({"foo": b"bar"})
node2 = merkle.MerkleNode({"foo": b"bar"})
node3 = merkle.MerkleNode({})
self.assertEqual(node1, node2)
self.assertNotEqual(node1, node3, node1 == node3)
node1["foo"] = node3
self.assertNotEqual(node1, node2)
node2["foo"] = node3
self.assertEqual(node1, node2)
def test_hash(self):
for node in self.nodes.values():
self.assertEqual(node.compute_hash_called, 0)
# Root hash will compute hash for all the nodes
hash = self.root.hash
for node in self.nodes.values():
self.assertEqual(node.compute_hash_called, 1)
self.assertIn(node.data["value"], hash)
# Should use the cached value
hash2 = self.root.hash
self.assertEqual(hash, hash2)
for node in self.nodes.values():
self.assertEqual(node.compute_hash_called, 1)
# Should still use the cached value
hash3 = self.root.update_hash(force=False)
self.assertEqual(hash, hash3)
for node in self.nodes.values():
self.assertEqual(node.compute_hash_called, 1)
# Force update of the cached value for a deeply nested node
self.root[b"a"][b"b"].update_hash(force=True)
for key, node in self.nodes.items():
# update_hash rehashes all children
if key.startswith(b"root/a/b"):
self.assertEqual(node.compute_hash_called, 2)
else:
self.assertEqual(node.compute_hash_called, 1)
hash4 = self.root.hash
self.assertEqual(hash, hash4)
for key, node in self.nodes.items():
# update_hash also invalidates all parents
if key in (b"root", b"root/a") or key.startswith(b"root/a/b"):
self.assertEqual(node.compute_hash_called, 2)
else:
self.assertEqual(node.compute_hash_called, 1)
def test_collect(self):
collected = self.root.collect()
self.assertEqual(len(collected[self.root.object_type]), len(self.nodes))
for node in self.nodes.values():
self.assertTrue(node.collected)
collected2 = self.root.collect()
self.assertEqual(collected2, {})
def test_iter_tree_with_deduplication(self):
nodes = list(self.root.iter_tree())
self.assertCountEqual(nodes, self.nodes.values())
def test_iter_tree_without_deduplication(self):
# duplicate existing hash in merkle tree
self.root[b"d"] = MerkleTestNode({"value": b"root/c/c/c"})
nodes_dedup = list(self.root.iter_tree())
nodes = list(self.root.iter_tree(dedup=False))
assert nodes != nodes_dedup
assert len(nodes) == len(nodes_dedup) + 1
def test_get(self):
for key in (b"a", b"b", b"c"):
self.assertEqual(self.root[key], self.nodes[b"root/" + key])
with self.assertRaisesRegex(KeyError, "b'nonexistent'"):
self.root[b"nonexistent"]
def test_del(self):
hash_root = self.root.hash
hash_a = self.nodes[b"root/a"].hash
del self.root[b"a"][b"c"]
hash_root2 = self.root.hash
hash_a2 = self.nodes[b"root/a"].hash
self.assertNotEqual(hash_root, hash_root2)
self.assertNotEqual(hash_a, hash_a2)
self.assertEqual(self.nodes[b"root/a/c"].parents, [])
with self.assertRaisesRegex(KeyError, "b'nonexistent'"):
del self.root[b"nonexistent"]
def test_update(self):
hash_root = self.root.hash
hash_b = self.root[b"b"].hash
new_children = {
b"c": MerkleTestNode({"value": b"root/b/new_c"}),
b"d": MerkleTestNode({"value": b"root/b/d"}),
}
# collect all nodes
self.root.collect()
self.root[b"b"].update(new_children)
# Ensure everyone got reparented
self.assertEqual(new_children[b"c"].parents, [self.root[b"b"]])
self.assertEqual(new_children[b"d"].parents, [self.root[b"b"]])
self.assertEqual(self.nodes[b"root/b/c"].parents, [])
hash_root2 = self.root.hash
self.assertNotEqual(hash_root, hash_root2)
self.assertIn(b"root/b/new_c", hash_root2)
self.assertIn(b"root/b/d", hash_root2)
hash_b2 = self.root[b"b"].hash
self.assertNotEqual(hash_b, hash_b2)
for key, node in self.nodes.items():
if key in (b"root", b"root/b"):
self.assertEqual(node.compute_hash_called, 2)
else:
self.assertEqual(node.compute_hash_called, 1)
# Ensure we collected root, root/b, and both new children
collected_after_update = self.root.collect()
self.assertCountEqual(
collected_after_update[MerkleTestNode.object_type],
[
self.nodes[b"root"].hash,
self.nodes[b"root/b"].hash,
new_children[b"c"].hash,
new_children[b"d"].hash,
],
)
# test that a noop update doesn't invalidate anything
self.root[b"a"][b"b"].update({})
self.assertEqual(self.root.collect(), {})
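# A minimal usage sketch of the MerkleNode behaviour exercised above, reusing
# the MerkleTestNode subclass defined earlier in this file (illustrative only):
# .hash is computed lazily and cached, collect() returns each node once per
# collection round, and reset_collect() starts a new round.
root = MerkleTestNode({"value": b"root"})
root[b"a"] = MerkleTestNode({"value": b"root/a"})
first_hash = root.hash
assert root.hash == first_hash       # second access uses the cached value
batch = root.collect()               # {object_type: {hash: data, ...}}, both nodes
assert root.collect() == {}          # nothing left to collect until a reset
root.reset_collect()                 # after this, collect() returns them again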
diff --git a/swh/model/tests/test_model.py b/swh/model/tests/test_model.py
index 2e7fa77..8c058b9 100644
--- a/swh/model/tests/test_model.py
+++ b/swh/model/tests/test_model.py
@@ -1,1518 +1,1641 @@
# Copyright (C) 2019-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import collections
import copy
import datetime
import hashlib
from typing import Any, List, Optional, Tuple, Union
import attr
from attrs_strict import AttributeTypeError
import dateutil
from hypothesis import given
from hypothesis.strategies import binary
import pytest
from swh.model.collections import ImmutableDict
from swh.model.from_disk import DentryPerms
import swh.model.git_objects
from swh.model.hashutil import MultiHash, hash_to_bytes
import swh.model.hypothesis_strategies as strategies
import swh.model.model
from swh.model.model import (
BaseModel,
Content,
Directory,
DirectoryEntry,
MetadataAuthority,
MetadataAuthorityType,
MetadataFetcher,
MissingData,
Origin,
OriginVisit,
OriginVisitStatus,
Person,
RawExtrinsicMetadata,
Release,
Revision,
SkippedContent,
Snapshot,
TargetType,
Timestamp,
TimestampWithTimezone,
type_validator,
)
import swh.model.swhids
from swh.model.swhids import CoreSWHID, ExtendedSWHID, ObjectType
from swh.model.tests.swh_model_data import TEST_OBJECTS
from swh.model.tests.test_identifiers import (
TS_DATETIMES,
TS_TIMEZONES,
directory_example,
metadata_example,
release_example,
revision_example,
snapshot_example,
)
EXAMPLE_HASH = hash_to_bytes("94a9ed024d3859793618152ea559a168bbcbb5e2")
@given(strategies.objects())
def test_todict_inverse_fromdict(objtype_and_obj):
(obj_type, obj) = objtype_and_obj
if obj_type in ("origin", "origin_visit"):
return
obj_as_dict = obj.to_dict()
obj_as_dict_copy = copy.deepcopy(obj_as_dict)
# Check the composition of to_dict and from_dict is the identity
assert obj == type(obj).from_dict(obj_as_dict)
# Check from_dict() does not change the input dict
assert obj_as_dict == obj_as_dict_copy
# Check the composition of from_dict and to_dict is the identity
assert obj_as_dict == type(obj).from_dict(obj_as_dict).to_dict()
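# Concrete illustration of the dict round-trip checked above (hypothetical,
# hand-picked values): to_dict() then from_dict() rebuilds an equal object.
_ts_example = Timestamp(seconds=1582810759, microseconds=0)
assert _ts_example.to_dict() == {"seconds": 1582810759, "microseconds": 0}
assert Timestamp.from_dict(_ts_example.to_dict()) == _ts_example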
@given(strategies.objects())
def test_repr(objtype_and_obj):
"""Checks every model object has a working repr(), and that it can be eval()uated
(so that printed objects can be copy-pasted to write test cases.)"""
(obj_type, obj) = objtype_and_obj
r = repr(obj)
env = {
"tzutc": lambda: datetime.timezone.utc,
"tzfile": dateutil.tz.tzfile,
"hash_to_bytes": hash_to_bytes,
**swh.model.swhids.__dict__,
**swh.model.model.__dict__,
}
assert eval(r, env) == obj
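# e.g. (hypothetical value): the attrs-generated repr is valid Python, so a
# printed object can be pasted back into a test and evaluated to an equal one.
_person_example = Person.from_fullname(b"tony <ynot@dagobah>")
assert eval(repr(_person_example)) == _person_example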
@attr.s
class Cls1:
pass
@attr.s
class Cls2(Cls1):
pass
_custom_namedtuple = collections.namedtuple("_custom_namedtuple", "a b")
class _custom_tuple(tuple):
pass
# List of (type, valid_values, invalid_values)
_TYPE_VALIDATOR_PARAMETERS: List[Tuple[Any, List[Any], List[Any]]] = [
# base types:
(
bool,
[True, False],
[-1, 0, 1, 42, 1000, None, "123", 0.0, (), ("foo",), ImmutableDict()],
),
(
int,
[-1, 0, 1, 42, 1000, DentryPerms.directory, True, False],
[None, "123", 0.0, (), ImmutableDict()],
),
(
float,
[-1.0, 0.0, 1.0, float("infinity"), float("NaN")],
[True, False, None, 1, "1.2", (), ImmutableDict()],
),
(
bytes,
[b"", b"123"],
[None, bytearray(b"\x12\x34"), "123", 0, 123, (), (1, 2, 3), ImmutableDict()],
),
(str, ["", "123"], [None, b"123", b"", 0, (), (1, 2, 3), ImmutableDict()]),
(None, [None], [b"", b"123", "", "foo", 0, 123, ImmutableDict(), float("NaN")]),
# unions:
(
Optional[int],
[None, -1, 0, 1, 42, 1000, DentryPerms.directory],
["123", 0.0, (), ImmutableDict()],
),
(
Optional[bytes],
[None, b"", b"123"],
["123", "", 0, (), (1, 2, 3), ImmutableDict()],
),
(
Union[str, bytes],
["", "123", b"123", b""],
[None, 0, (), (1, 2, 3), ImmutableDict()],
),
(
Union[str, bytes, None],
["", "123", b"123", b"", None],
[0, (), (1, 2, 3), ImmutableDict()],
),
# tuples
(
Tuple[str, str],
[("foo", "bar"), ("", ""), _custom_namedtuple("", ""), _custom_tuple(("", ""))],
[("foo",), ("foo", "bar", "baz"), ("foo", 42), (42, "foo")],
),
(
Tuple[str, ...],
[
("foo",),
("foo", "bar"),
("", ""),
("foo", "bar", "baz"),
_custom_namedtuple("", ""),
_custom_tuple(("", "")),
],
[("foo", 42), (42, "foo")],
),
# composite generic:
(
Tuple[Union[str, int], Union[str, int]],
[("foo", "foo"), ("foo", 42), (42, "foo"), (42, 42)],
[("foo", b"bar"), (b"bar", "foo")],
),
(
Union[Tuple[str, str], Tuple[int, int]],
[("foo", "foo"), (42, 42)],
[("foo", b"bar"), (b"bar", "foo"), ("foo", 42), (42, "foo")],
),
(
Tuple[Tuple[bytes, bytes], ...],
[(), ((b"foo", b"bar"),), ((b"foo", b"bar"), (b"baz", b"qux"))],
[((b"foo", "bar"),), ((b"foo", b"bar"), ("baz", b"qux"))],
),
# standard types:
(
datetime.datetime,
[
datetime.datetime(2021, 12, 15, 12, 59, 27),
datetime.datetime(2021, 12, 15, 12, 59, 27, tzinfo=datetime.timezone.utc),
],
[None, 123],
),
# ImmutableDict
(
ImmutableDict[str, int],
[
ImmutableDict(),
ImmutableDict({"foo": 42}),
ImmutableDict({"foo": 42, "bar": 123}),
],
[ImmutableDict({"foo": "bar"}), ImmutableDict({42: 123})],
),
# Any:
- (object, [-1, 0, 1, 42, 1000, None, "123", 0.0, (), ImmutableDict()], [],),
- (Any, [-1, 0, 1, 42, 1000, None, "123", 0.0, (), ImmutableDict()], [],),
+ (
+ object,
+ [-1, 0, 1, 42, 1000, None, "123", 0.0, (), ImmutableDict()],
+ [],
+ ),
+ (
+ Any,
+ [-1, 0, 1, 42, 1000, None, "123", 0.0, (), ImmutableDict()],
+ [],
+ ),
(
ImmutableDict[Any, int],
[
ImmutableDict(),
ImmutableDict({"foo": 42}),
ImmutableDict({"foo": 42, "bar": 123}),
ImmutableDict({42: 123}),
],
[ImmutableDict({"foo": "bar"})],
),
(
ImmutableDict[str, Any],
[
ImmutableDict(),
ImmutableDict({"foo": 42}),
ImmutableDict({"foo": "bar"}),
ImmutableDict({"foo": 42, "bar": 123}),
],
[ImmutableDict({42: 123})],
),
# attr objects:
(
Timestamp,
- [Timestamp(seconds=123, microseconds=0),],
+ [
+ Timestamp(seconds=123, microseconds=0),
+ ],
[None, "2021-09-28T11:27:59", 123],
),
- (Cls1, [Cls1(), Cls2()], [None, b"abcd"],),
+ (
+ Cls1,
+ [Cls1(), Cls2()],
+ [None, b"abcd"],
+ ),
# enums:
(
TargetType,
[TargetType.CONTENT, TargetType.ALIAS],
["content", "alias", 123, None],
),
]
@pytest.mark.parametrize(
"type_,value",
[
pytest.param(type_, value, id=f"type={type_}, value={value}")
for (type_, values, _) in _TYPE_VALIDATOR_PARAMETERS
for value in values
],
)
def test_type_validator_valid(type_, value):
type_validator()(None, attr.ib(type=type_), value)
@pytest.mark.parametrize(
"type_,value",
[
pytest.param(type_, value, id=f"type={type_}, value={value}")
for (type_, _, values) in _TYPE_VALIDATOR_PARAMETERS
for value in values
],
)
def test_type_validator_invalid(type_, value):
with pytest.raises(AttributeTypeError):
type_validator()(None, attr.ib(type=type_), value)
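# Sketch of how type_validator() is typically attached to an attrs class
# (hypothetical class, not part of the model): the validator reads the
# attribute's declared type and raises AttributeTypeError on a mismatch.
@attr.s
class _ValidatedExample:
    count = attr.ib(type=int, validator=type_validator())

_ValidatedExample(count=42)  # passes validation
with pytest.raises(AttributeTypeError):
    _ValidatedExample(count="not an int")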
@pytest.mark.parametrize("object_type, objects", TEST_OBJECTS.items())
def test_swh_model_todict_fromdict(object_type, objects):
"""checks model objects in swh_model_data are in correct shape"""
assert objects
for obj in objects:
# Check the composition of from_dict and to_dict is the identity
obj_as_dict = obj.to_dict()
assert obj == type(obj).from_dict(obj_as_dict)
assert obj_as_dict == type(obj).from_dict(obj_as_dict).to_dict()
def test_unique_key():
url = "http://example.org/"
date = datetime.datetime.now(tz=datetime.timezone.utc)
id_ = b"42" * 10
assert Origin(url=url).unique_key() == {"url": url}
assert OriginVisit(origin=url, date=date, type="git").unique_key() == {
"origin": url,
"date": str(date),
}
assert OriginVisitStatus(
origin=url, visit=42, date=date, status="created", snapshot=None
- ).unique_key() == {"origin": url, "visit": "42", "date": str(date),}
+ ).unique_key() == {
+ "origin": url,
+ "visit": "42",
+ "date": str(date),
+ }
assert Snapshot.from_dict({**snapshot_example, "id": id_}).unique_key() == id_
assert Release.from_dict({**release_example, "id": id_}).unique_key() == id_
assert Revision.from_dict({**revision_example, "id": id_}).unique_key() == id_
assert Directory.from_dict({**directory_example, "id": id_}).unique_key() == id_
assert (
RawExtrinsicMetadata.from_dict({**metadata_example, "id": id_}).unique_key()
== id_
)
cont = Content.from_data(b"foo")
assert cont.unique_key().hex() == "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"
kwargs = {
**cont.to_dict(),
"reason": "foo",
"status": "absent",
}
del kwargs["data"]
assert SkippedContent(**kwargs).unique_key() == cont.hashes()
# Anonymization
@given(strategies.objects())
def test_anonymization(objtype_and_obj):
(obj_type, obj) = objtype_and_obj
def check_person(p):
if p is not None:
assert p.name is None
assert p.email is None
assert len(p.fullname) == 32
anon_obj = obj.anonymize()
if obj_type == "person":
assert anon_obj is not None
check_person(anon_obj)
elif obj_type == "release":
assert anon_obj is not None
check_person(anon_obj.author)
elif obj_type == "revision":
assert anon_obj is not None
check_person(anon_obj.author)
check_person(anon_obj.committer)
else:
assert anon_obj is None
# Origin, OriginVisit, OriginVisitStatus
@given(strategies.origins())
def test_todict_origins(origin):
obj = origin.to_dict()
assert "type" not in obj
assert type(origin)(url=origin.url) == type(origin).from_dict(obj)
@given(strategies.origin_visits())
def test_todict_origin_visits(origin_visit):
obj = origin_visit.to_dict()
assert origin_visit == type(origin_visit).from_dict(obj)
def test_origin_visit_naive_datetime():
with pytest.raises(ValueError, match="must be a timezone-aware datetime"):
OriginVisit(
- origin="http://foo/", date=datetime.datetime.now(), type="git",
+ origin="http://foo/",
+ date=datetime.datetime.now(),
+ type="git",
)
@given(strategies.origin_visit_statuses())
def test_todict_origin_visit_statuses(origin_visit_status):
obj = origin_visit_status.to_dict()
assert origin_visit_status == type(origin_visit_status).from_dict(obj)
def test_origin_visit_status_naive_datetime():
with pytest.raises(ValueError, match="must be a timezone-aware datetime"):
OriginVisitStatus(
origin="http://foo/",
visit=42,
date=datetime.datetime.now(),
status="ongoing",
snapshot=None,
)
# Timestamp
@given(strategies.timestamps())
def test_timestamps_strategy(timestamp):
attr.validate(timestamp)
def test_timestamp_seconds():
attr.validate(Timestamp(seconds=0, microseconds=0))
with pytest.raises(AttributeTypeError):
Timestamp(seconds="0", microseconds=0)
- attr.validate(Timestamp(seconds=2 ** 63 - 1, microseconds=0))
+ attr.validate(Timestamp(seconds=2**63 - 1, microseconds=0))
with pytest.raises(ValueError):
- Timestamp(seconds=2 ** 63, microseconds=0)
+ Timestamp(seconds=2**63, microseconds=0)
- attr.validate(Timestamp(seconds=-(2 ** 63), microseconds=0))
+ attr.validate(Timestamp(seconds=-(2**63), microseconds=0))
with pytest.raises(ValueError):
- Timestamp(seconds=-(2 ** 63) - 1, microseconds=0)
+ Timestamp(seconds=-(2**63) - 1, microseconds=0)
def test_timestamp_microseconds():
attr.validate(Timestamp(seconds=0, microseconds=0))
with pytest.raises(AttributeTypeError):
Timestamp(seconds=0, microseconds="0")
- attr.validate(Timestamp(seconds=0, microseconds=10 ** 6 - 1))
+ attr.validate(Timestamp(seconds=0, microseconds=10**6 - 1))
with pytest.raises(ValueError):
- Timestamp(seconds=0, microseconds=10 ** 6)
+ Timestamp(seconds=0, microseconds=10**6)
with pytest.raises(ValueError):
Timestamp(seconds=0, microseconds=-1)
def test_timestamp_from_dict():
assert Timestamp.from_dict({"seconds": 10, "microseconds": 5})
with pytest.raises(AttributeTypeError):
Timestamp.from_dict({"seconds": "10", "microseconds": 5})
with pytest.raises(AttributeTypeError):
Timestamp.from_dict({"seconds": 10, "microseconds": "5"})
with pytest.raises(ValueError):
Timestamp.from_dict({"seconds": 0, "microseconds": -1})
- Timestamp.from_dict({"seconds": 0, "microseconds": 10 ** 6 - 1})
+ Timestamp.from_dict({"seconds": 0, "microseconds": 10**6 - 1})
with pytest.raises(ValueError):
- Timestamp.from_dict({"seconds": 0, "microseconds": 10 ** 6})
+ Timestamp.from_dict({"seconds": 0, "microseconds": 10**6})
# TimestampWithTimezone
def test_timestampwithtimezone():
ts = Timestamp(seconds=0, microseconds=0)
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"+0000")
attr.validate(tstz)
assert tstz.offset_minutes() == 0
assert tstz.offset_bytes == b"+0000"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"+0010")
attr.validate(tstz)
assert tstz.offset_minutes() == 10
assert tstz.offset_bytes == b"+0010"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"-0010")
attr.validate(tstz)
assert tstz.offset_minutes() == -10
assert tstz.offset_bytes == b"-0010"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"-0000")
attr.validate(tstz)
assert tstz.offset_minutes() == 0
assert tstz.offset_bytes == b"-0000"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"-1030")
attr.validate(tstz)
assert tstz.offset_minutes() == -630
assert tstz.offset_bytes == b"-1030"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"+1320")
attr.validate(tstz)
assert tstz.offset_minutes() == 800
assert tstz.offset_bytes == b"+1320"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"+200")
attr.validate(tstz)
assert tstz.offset_minutes() == 120
assert tstz.offset_bytes == b"+200"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"+02")
attr.validate(tstz)
assert tstz.offset_minutes() == 120
assert tstz.offset_bytes == b"+02"
tstz = TimestampWithTimezone(timestamp=ts, offset_bytes=b"+2000000000")
attr.validate(tstz)
assert tstz.offset_minutes() == 0
assert tstz.offset_bytes == b"+2000000000"
with pytest.raises(AttributeTypeError):
TimestampWithTimezone(timestamp=datetime.datetime.now(), offset_bytes=b"+0000")
with pytest.raises((AttributeTypeError, TypeError)):
TimestampWithTimezone(timestamp=ts, offset_bytes=0)
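# Illustrative reimplementation of the offset_minutes() behaviour asserted
# above (not necessarily the library's own parser): a well-formed [+-]H[H][MM]
# offset is decoded to minutes, anything else falls back to 0.
import re

_OFFSET_RE = re.compile(rb"^([+-])(\d{1,2})(\d{2})?$")

def _offset_minutes_sketch(offset_bytes: bytes) -> int:
    m = _OFFSET_RE.match(offset_bytes)
    if m is None:
        return 0  # e.g. b"+2000000000" above
    sign = -1 if m.group(1) == b"-" else 1
    hours, minutes = int(m.group(2)), int(m.group(3) or b"0")
    return sign * (hours * 60 + minutes)

assert _offset_minutes_sketch(b"-1030") == -630
assert _offset_minutes_sketch(b"+200") == 120 == _offset_minutes_sketch(b"+02")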
def test_timestampwithtimezone_from_datetime():
# Typical case
tz = datetime.timezone(datetime.timedelta(minutes=+60))
date = datetime.datetime(2020, 2, 27, 14, 39, 19, tzinfo=tz)
tstz = TimestampWithTimezone.from_datetime(date)
assert tstz == TimestampWithTimezone(
- timestamp=Timestamp(seconds=1582810759, microseconds=0,), offset_bytes=b"+0100"
+ timestamp=Timestamp(
+ seconds=1582810759,
+ microseconds=0,
+ ),
+ offset_bytes=b"+0100",
)
# Typical case (close to epoch)
tz = datetime.timezone(datetime.timedelta(minutes=+60))
date = datetime.datetime(1970, 1, 1, 1, 0, 5, tzinfo=tz)
tstz = TimestampWithTimezone.from_datetime(date)
assert tstz == TimestampWithTimezone(
- timestamp=Timestamp(seconds=5, microseconds=0,), offset_bytes=b"+0100"
+ timestamp=Timestamp(
+ seconds=5,
+ microseconds=0,
+ ),
+ offset_bytes=b"+0100",
)
# non-integer number of seconds before UNIX epoch
date = datetime.datetime(
1969, 12, 31, 23, 59, 59, 100000, tzinfo=datetime.timezone.utc
)
tstz = TimestampWithTimezone.from_datetime(date)
assert tstz == TimestampWithTimezone(
- timestamp=Timestamp(seconds=-1, microseconds=100000,), offset_bytes=b"+0000"
+ timestamp=Timestamp(
+ seconds=-1,
+ microseconds=100000,
+ ),
+ offset_bytes=b"+0000",
)
# non-integer number of seconds in both the timestamp and the offset
tz = datetime.timezone(datetime.timedelta(microseconds=-600000))
date = datetime.datetime(1969, 12, 31, 23, 59, 59, 600000, tzinfo=tz)
tstz = TimestampWithTimezone.from_datetime(date)
assert tstz == TimestampWithTimezone(
- timestamp=Timestamp(seconds=0, microseconds=200000,), offset_bytes=b"+0000"
+ timestamp=Timestamp(
+ seconds=0,
+ microseconds=200000,
+ ),
+ offset_bytes=b"+0000",
)
# timezone offset with a non-integer number of seconds, for a date just before
# the epoch: the sub-second offset is dropped and the timestamp is floored to
# the previous second, so the result is the same as for 1969-12-31T23:59:59.100000Z
tz = datetime.timezone(datetime.timedelta(microseconds=900000))
date = datetime.datetime(1970, 1, 1, 0, 0, 0, tzinfo=tz)
tstz = TimestampWithTimezone.from_datetime(date)
assert tstz == TimestampWithTimezone(
- timestamp=Timestamp(seconds=-1, microseconds=100000,), offset_bytes=b"+0000"
+ timestamp=Timestamp(
+ seconds=-1,
+ microseconds=100000,
+ ),
+ offset_bytes=b"+0000",
)
def test_timestampwithtimezone_from_naive_datetime():
date = datetime.datetime(2020, 2, 27, 14, 39, 19)
with pytest.raises(ValueError, match="datetime without timezone"):
TimestampWithTimezone.from_datetime(date)
def test_timestampwithtimezone_from_iso8601():
date = "2020-02-27 14:39:19.123456+0100"
tstz = TimestampWithTimezone.from_iso8601(date)
assert tstz == TimestampWithTimezone(
- timestamp=Timestamp(seconds=1582810759, microseconds=123456,),
+ timestamp=Timestamp(
+ seconds=1582810759,
+ microseconds=123456,
+ ),
offset_bytes=b"+0100",
)
def test_timestampwithtimezone_from_iso8601_negative_utc():
date = "2020-02-27 13:39:19-0000"
tstz = TimestampWithTimezone.from_iso8601(date)
assert tstz == TimestampWithTimezone(
- timestamp=Timestamp(seconds=1582810759, microseconds=0,), offset_bytes=b"-0000"
+ timestamp=Timestamp(
+ seconds=1582810759,
+ microseconds=0,
+ ),
+ offset_bytes=b"-0000",
)
@pytest.mark.parametrize("date", TS_DATETIMES)
@pytest.mark.parametrize("tz", TS_TIMEZONES)
@pytest.mark.parametrize("microsecond", [0, 1, 10, 100, 1000, 999999])
def test_timestampwithtimezone_to_datetime(date, tz, microsecond):
date = date.replace(tzinfo=tz, microsecond=microsecond)
tstz = TimestampWithTimezone.from_datetime(date)
assert tstz.to_datetime() == date
assert tstz.to_datetime().utcoffset() == date.utcoffset()
def test_person_from_fullname():
- """The author should have name, email and fullname filled.
-
- """
+ """The author should have name, email and fullname filled."""
actual_person = Person.from_fullname(b"tony <ynot@dagobah>")
assert actual_person == Person(
- fullname=b"tony <ynot@dagobah>", name=b"tony", email=b"ynot@dagobah",
+ fullname=b"tony <ynot@dagobah>",
+ name=b"tony",
+ email=b"ynot@dagobah",
)
def test_person_from_fullname_no_email():
- """The author and fullname should be the same as the input (author).
-
- """
+ """The author and fullname should be the same as the input (author)."""
actual_person = Person.from_fullname(b"tony")
- assert actual_person == Person(fullname=b"tony", name=b"tony", email=None,)
+ assert actual_person == Person(
+ fullname=b"tony",
+ name=b"tony",
+ email=None,
+ )
def test_person_from_fullname_empty_person():
"""Empty person has only its fullname filled with the empty
byte-string.
"""
actual_person = Person.from_fullname(b"")
- assert actual_person == Person(fullname=b"", name=None, email=None,)
+ assert actual_person == Person(
+ fullname=b"",
+ name=None,
+ email=None,
+ )
def test_git_author_line_to_author():
# edge case out of the way
with pytest.raises(TypeError):
Person.from_fullname(None)
tests = {
- b"a <b@c.com>": Person(name=b"a", email=b"b@c.com", fullname=b"a <b@c.com>",),
+ b"a <b@c.com>": Person(
+ name=b"a",
+ email=b"b@c.com",
+ fullname=b"a <b@c.com>",
+ ),
b"<foo@bar.com>": Person(
- name=None, email=b"foo@bar.com", fullname=b"<foo@bar.com>",
+ name=None,
+ email=b"foo@bar.com",
+ fullname=b"<foo@bar.com>",
),
b"malformed <email": Person(
name=b"malformed", email=b"email", fullname=b"malformed <email"
),
b'malformed <"<br"@ckets>': Person(
name=b"malformed",
email=b'"<br"@ckets',
fullname=b'malformed <"<br"@ckets>',
),
b"trailing <sp@c.e> ": Person(
- name=b"trailing", email=b"sp@c.e", fullname=b"trailing <sp@c.e> ",
+ name=b"trailing",
+ email=b"sp@c.e",
+ fullname=b"trailing <sp@c.e> ",
+ ),
+ b"no<sp@c.e>": Person(
+ name=b"no",
+ email=b"sp@c.e",
+ fullname=b"no<sp@c.e>",
),
- b"no<sp@c.e>": Person(name=b"no", email=b"sp@c.e", fullname=b"no<sp@c.e>",),
b" more <sp@c.es>": Person(
- name=b"more", email=b"sp@c.es", fullname=b" more <sp@c.es>",
+ name=b"more",
+ email=b"sp@c.es",
+ fullname=b" more <sp@c.es>",
+ ),
+ b" <>": Person(
+ name=None,
+ email=None,
+ fullname=b" <>",
),
- b" <>": Person(name=None, email=None, fullname=b" <>",),
}
for person in sorted(tests):
expected_person = tests[person]
assert expected_person == Person.from_fullname(person)
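# One way to reproduce the splitting behaviour asserted above (illustrative,
# not the library's actual parser): the fullname is kept verbatim; the part
# before the first b"<" becomes the name, the part between the first b"<" and
# the last b">" (or the end of the string if b">" is missing) becomes the
# email, and empty parts become None.
def _split_fullname_sketch(fullname: bytes):
    if b"<" not in fullname:
        return fullname.strip() or None, None
    before, _, after = fullname.partition(b"<")
    email = after[: after.rindex(b">")] if b">" in after else after
    return before.strip() or None, email or None

assert _split_fullname_sketch(b"a <b@c.com>") == (b"a", b"b@c.com")
assert _split_fullname_sketch(b"malformed <email") == (b"malformed", b"email")
assert _split_fullname_sketch(b" <>") == (None, None)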
def test_person_comparison():
- """Check only the fullname attribute is used to compare Person objects
-
- """
+ """Check only the fullname attribute is used to compare Person objects"""
person = Person(fullname=b"p1", name=None, email=None)
assert attr.evolve(person, name=b"toto") == person
assert attr.evolve(person, email=b"toto@example.com") == person
person = Person(fullname=b"", name=b"toto", email=b"toto@example.com")
assert attr.evolve(person, fullname=b"dude") != person
# Content
def test_content_get_hash():
hashes = dict(sha1=b"foo", sha1_git=b"bar", sha256=b"baz", blake2s256=b"qux")
c = Content(length=42, status="visible", **hashes)
for (hash_name, hash_) in hashes.items():
assert c.get_hash(hash_name) == hash_
def test_content_hashes():
hashes = dict(sha1=b"foo", sha1_git=b"bar", sha256=b"baz", blake2s256=b"qux")
c = Content(length=42, status="visible", **hashes)
assert c.hashes() == hashes
def test_content_data():
c = Content(
length=42,
status="visible",
data=b"foo",
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
assert c.with_data() == c
def test_content_data_missing():
c = Content(
length=42,
status="visible",
sha1=b"foo",
sha1_git=b"bar",
sha256=b"baz",
blake2s256=b"qux",
)
with pytest.raises(MissingData):
c.with_data()
@given(strategies.present_contents_d())
def test_content_from_dict(content_d):
c = Content.from_data(**content_d)
assert c
assert c.ctime == content_d["ctime"]
content_d2 = c.to_dict()
c2 = Content.from_dict(content_d2)
assert c2.ctime == c.ctime
def test_content_from_dict_str_ctime():
# test with ctime as a string
n = datetime.datetime(2020, 5, 6, 12, 34, tzinfo=datetime.timezone.utc)
content_d = {
"ctime": n.isoformat(),
"data": b"",
"length": 0,
"sha1": b"\x00",
"sha256": b"\x00",
"sha1_git": b"\x00",
"blake2s256": b"\x00",
}
c = Content.from_dict(content_d)
assert c.ctime == n
def test_content_from_dict_str_naive_ctime():
# test with ctime as a string
n = datetime.datetime(2020, 5, 6, 12, 34)
content_d = {
"ctime": n.isoformat(),
"data": b"",
"length": 0,
"sha1": b"\x00",
"sha256": b"\x00",
"sha1_git": b"\x00",
"blake2s256": b"\x00",
}
with pytest.raises(ValueError, match="must be a timezone-aware datetime."):
Content.from_dict(content_d)
@given(binary(max_size=4096))
def test_content_from_data(data):
c = Content.from_data(data)
assert c.data == data
assert c.length == len(data)
assert c.status == "visible"
for key, value in MultiHash.from_data(data).digest().items():
assert getattr(c, key) == value
@given(binary(max_size=4096))
def test_hidden_content_from_data(data):
c = Content.from_data(data, status="hidden")
assert c.data == data
assert c.length == len(data)
assert c.status == "hidden"
for key, value in MultiHash.from_data(data).digest().items():
assert getattr(c, key) == value
def test_content_naive_datetime():
c = Content.from_data(b"foo")
with pytest.raises(ValueError, match="must be a timezone-aware datetime"):
Content(
- **c.to_dict(), ctime=datetime.datetime.now(),
+ **c.to_dict(),
+ ctime=datetime.datetime.now(),
)
# SkippedContent
@given(binary(max_size=4096))
def test_skipped_content_from_data(data):
c = SkippedContent.from_data(data, reason="reason")
assert c.reason == "reason"
assert c.length == len(data)
assert c.status == "absent"
for key, value in MultiHash.from_data(data).digest().items():
assert getattr(c, key) == value
@given(strategies.skipped_contents_d())
def test_skipped_content_origin_is_str(skipped_content_d):
assert SkippedContent.from_dict(skipped_content_d)
skipped_content_d["origin"] = "http://path/to/origin"
assert SkippedContent.from_dict(skipped_content_d)
skipped_content_d["origin"] = Origin(url="http://path/to/origin")
with pytest.raises(ValueError, match="origin"):
SkippedContent.from_dict(skipped_content_d)
def test_skipped_content_naive_datetime():
c = SkippedContent.from_data(b"foo", reason="reason")
with pytest.raises(ValueError, match="must be a timezone-aware datetime"):
SkippedContent(
- **c.to_dict(), ctime=datetime.datetime.now(),
+ **c.to_dict(),
+ ctime=datetime.datetime.now(),
)
# Directory
@given(strategies.directories().filter(lambda d: d.raw_manifest is None))
def test_directory_check(directory):
directory.check()
directory2 = attr.evolve(directory, id=b"\x00" * 20)
with pytest.raises(ValueError, match="does not match recomputed hash"):
directory2.check()
directory2 = attr.evolve(
directory, raw_manifest=swh.model.git_objects.directory_git_object(directory)
)
with pytest.raises(
ValueError, match="non-none raw_manifest attribute, but does not need it."
):
directory2.check()
@given(strategies.directories().filter(lambda d: d.raw_manifest is None))
def test_directory_raw_manifest(directory):
assert "raw_manifest" not in directory.to_dict()
raw_manifest = b"foo"
id_ = hashlib.new("sha1", raw_manifest).digest()
directory2 = attr.evolve(directory, raw_manifest=raw_manifest)
assert directory2.to_dict()["raw_manifest"] == raw_manifest
with pytest.raises(ValueError, match="does not match recomputed hash"):
directory2.check()
directory2 = attr.evolve(directory, raw_manifest=raw_manifest, id=id_)
assert directory2.id is not None
assert directory2.id == id_ != directory.id
assert directory2.to_dict()["raw_manifest"] == raw_manifest
directory2.check()
def test_directory_entry_name_validation():
with pytest.raises(ValueError, match="valid directory entry name."):
DirectoryEntry(name=b"foo/", type="dir", target=b"\x00" * 20, perms=0),
def test_directory_duplicate_entry_name():
entries = (
DirectoryEntry(name=b"foo", type="file", target=b"\x00" * 20, perms=0),
DirectoryEntry(name=b"foo", type="dir", target=b"\x01" * 20, perms=1),
)
with pytest.raises(ValueError, match="duplicated entry name"):
Directory(entries=entries)
entries = (
DirectoryEntry(name=b"foo", type="file", target=b"\x00" * 20, perms=0),
DirectoryEntry(name=b"foo", type="file", target=b"\x00" * 20, perms=0),
)
with pytest.raises(ValueError, match="duplicated entry name"):
Directory(entries=entries)
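# Usage sketch (hypothetical entry): when no id is passed, the model fills it
# in with the intrinsic hash, so a freshly built Directory satisfies
# id == compute_hash(), the same invariant the swh_model_data tests check for
# the shipped test dataset.
_dir_example = Directory(
    entries=(
        DirectoryEntry(
            name=b"README",
            type="file",
            target=b"\x00" * 20,
            perms=DentryPerms.content,
        ),
    )
)
assert _dir_example.id == _dir_example.compute_hash()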
# Release
@given(strategies.releases().filter(lambda rel: rel.raw_manifest is None))
def test_release_check(release):
release.check()
release2 = attr.evolve(release, id=b"\x00" * 20)
with pytest.raises(ValueError, match="does not match recomputed hash"):
release2.check()
release2 = attr.evolve(
release, raw_manifest=swh.model.git_objects.release_git_object(release)
)
with pytest.raises(
ValueError, match="non-none raw_manifest attribute, but does not need it."
):
release2.check()
@given(strategies.releases().filter(lambda rev: rev.raw_manifest is None))
def test_release_raw_manifest(release):
raw_manifest = b"foo"
id_ = hashlib.new("sha1", raw_manifest).digest()
release2 = attr.evolve(release, raw_manifest=raw_manifest)
assert release2.to_dict()["raw_manifest"] == raw_manifest
with pytest.raises(ValueError, match="does not match recomputed hash"):
release2.check()
release2 = attr.evolve(release, raw_manifest=raw_manifest, id=id_)
assert release2.id is not None
assert release2.id == id_ != release.id
assert release2.to_dict()["raw_manifest"] == raw_manifest
release2.check()
# Revision
@given(strategies.revisions().filter(lambda rev: rev.raw_manifest is None))
def test_revision_check(revision):
revision.check()
revision2 = attr.evolve(revision, id=b"\x00" * 20)
with pytest.raises(ValueError, match="does not match recomputed hash"):
revision2.check()
revision2 = attr.evolve(
revision, raw_manifest=swh.model.git_objects.revision_git_object(revision)
)
with pytest.raises(
ValueError, match="non-none raw_manifest attribute, but does not need it."
):
revision2.check()
@given(strategies.revisions().filter(lambda rev: rev.raw_manifest is None))
def test_revision_raw_manifest(revision):
raw_manifest = b"foo"
id_ = hashlib.new("sha1", raw_manifest).digest()
revision2 = attr.evolve(revision, raw_manifest=raw_manifest)
assert revision2.to_dict()["raw_manifest"] == raw_manifest
with pytest.raises(ValueError, match="does not match recomputed hash"):
revision2.check()
revision2 = attr.evolve(revision, raw_manifest=raw_manifest, id=id_)
assert revision2.id is not None
assert revision2.id == id_ != revision.id
assert revision2.to_dict()["raw_manifest"] == raw_manifest
revision2.check()
def test_revision_extra_headers_no_headers():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev = Revision.from_dict(rev_dict)
rev_dict = attr.asdict(rev, recurse=False)
rev_model = Revision(**rev_dict)
assert rev_model.metadata is None
assert rev_model.extra_headers == ()
rev_dict["metadata"] = {
"something": "somewhere",
"some other thing": "stranger",
}
rev_model = Revision(**rev_dict)
assert rev_model.metadata == rev_dict["metadata"]
assert rev_model.extra_headers == ()
def test_revision_extra_headers_with_headers():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev = Revision.from_dict(rev_dict)
rev_dict = attr.asdict(rev, recurse=False)
rev_dict["metadata"] = {
"something": "somewhere",
"some other thing": "stranger",
}
extra_headers = (
(b"header1", b"value1"),
(b"header2", b"42"),
(b"header3", b"should I?\x00"),
(b"header1", b"again"),
)
rev_dict["extra_headers"] = extra_headers
rev_model = Revision(**rev_dict)
assert "extra_headers" not in rev_model.metadata
assert rev_model.extra_headers == extra_headers
def test_revision_extra_headers_in_metadata():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev = Revision.from_dict(rev_dict)
rev_dict = attr.asdict(rev, recurse=False)
rev_dict["metadata"] = {
"something": "somewhere",
"some other thing": "stranger",
}
extra_headers = (
(b"header1", b"value1"),
(b"header2", b"42"),
(b"header3", b"should I?\x00"),
(b"header1", b"again"),
)
# check the bw-compat init hook does the job
# i.e. extra_headers are given in the metadata field
rev_dict["metadata"]["extra_headers"] = extra_headers
rev_model = Revision(**rev_dict)
assert "extra_headers" not in rev_model.metadata
assert rev_model.extra_headers == extra_headers
def test_revision_extra_headers_as_lists():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev = Revision.from_dict(rev_dict)
rev_dict = attr.asdict(rev, recurse=False)
rev_dict["metadata"] = {}
extra_headers = (
(b"header1", b"value1"),
(b"header2", b"42"),
(b"header3", b"should I?\x00"),
(b"header1", b"again"),
)
# check Revision.extra_headers tuplify does the job
rev_dict["extra_headers"] = [list(x) for x in extra_headers]
rev_model = Revision(**rev_dict)
assert "extra_headers" not in rev_model.metadata
assert rev_model.extra_headers == extra_headers
def test_revision_extra_headers_type_error():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev = Revision.from_dict(rev_dict)
orig_rev_dict = attr.asdict(rev, recurse=False)
orig_rev_dict["metadata"] = {
"something": "somewhere",
"some other thing": "stranger",
}
extra_headers = (
("header1", b"value1"),
(b"header2", 42),
("header1", "again"),
)
# check headers one at a time
# if given as extra_header
for extra_header in extra_headers:
rev_dict = copy.deepcopy(orig_rev_dict)
rev_dict["extra_headers"] = (extra_header,)
with pytest.raises(AttributeTypeError):
Revision(**rev_dict)
# if given as metadata
for extra_header in extra_headers:
rev_dict = copy.deepcopy(orig_rev_dict)
rev_dict["metadata"]["extra_headers"] = (extra_header,)
with pytest.raises(AttributeTypeError):
Revision(**rev_dict)
def test_revision_extra_headers_from_dict():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev_model = Revision.from_dict(rev_dict)
assert rev_model.metadata is None
assert rev_model.extra_headers == ()
rev_dict["metadata"] = {
"something": "somewhere",
"some other thing": "stranger",
}
rev_model = Revision.from_dict(rev_dict)
assert rev_model.metadata == rev_dict["metadata"]
assert rev_model.extra_headers == ()
extra_headers = (
(b"header1", b"value1"),
(b"header2", b"42"),
(b"header3", b"should I?\nmaybe\x00\xff"),
(b"header1", b"again"),
)
rev_dict["extra_headers"] = extra_headers
rev_model = Revision.from_dict(rev_dict)
assert "extra_headers" not in rev_model.metadata
assert rev_model.extra_headers == extra_headers
def test_revision_extra_headers_in_metadata_from_dict():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev_dict["metadata"] = {
"something": "somewhere",
"some other thing": "stranger",
}
extra_headers = (
(b"header1", b"value1"),
(b"header2", b"42"),
(b"header3", b"should I?\nmaybe\x00\xff"),
(b"header1", b"again"),
)
# check the bw-compat init hook does the job
rev_dict["metadata"]["extra_headers"] = extra_headers
rev_model = Revision.from_dict(rev_dict)
assert "extra_headers" not in rev_model.metadata
assert rev_model.extra_headers == extra_headers
def test_revision_extra_headers_as_lists_from_dict():
rev_dict = revision_example.copy()
rev_dict.pop("id")
rev_model = Revision.from_dict(rev_dict)
rev_dict["metadata"] = {
"something": "somewhere",
"some other thing": "stranger",
}
extra_headers = (
(b"header1", b"value1"),
(b"header2", b"42"),
(b"header3", b"should I?\nmaybe\x00\xff"),
(b"header1", b"again"),
)
# check Revision.extra_headers converter does the job
rev_dict["extra_headers"] = [list(x) for x in extra_headers]
rev_model = Revision.from_dict(rev_dict)
assert "extra_headers" not in rev_model.metadata
assert rev_model.extra_headers == extra_headers
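# Usage sketch following the pattern of the tests above (hypothetical header
# values): whether the extra headers are passed through the legacy metadata
# dict or as the dedicated field, they end up as a tuple of (bytes, bytes)
# pairs in .extra_headers and are removed from .metadata.
_rev_d = revision_example.copy()
_rev_d.pop("id")
_rev_d["metadata"] = {"something": "somewhere", "extra_headers": ((b"k", b"v"),)}
_rev = Revision.from_dict(_rev_d)
assert _rev.extra_headers == ((b"k", b"v"),)
assert "extra_headers" not in _rev.metadata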
def test_revision_no_author_or_committer_from_dict():
rev_dict = revision_example.copy()
rev_dict["author"] = rev_dict["date"] = None
rev_dict["committer"] = rev_dict["committer_date"] = None
rev_model = Revision.from_dict(rev_dict)
assert rev_model.to_dict() == {
**rev_dict,
"parents": tuple(rev_dict["parents"]),
"extra_headers": (),
"metadata": None,
}
def test_revision_none_author_or_committer():
rev_dict = revision_example.copy()
rev_dict["author"] = None
with pytest.raises(ValueError, match=".*date must be None if author is None.*"):
Revision.from_dict(rev_dict)
rev_dict = revision_example.copy()
rev_dict["committer"] = None
with pytest.raises(
ValueError, match=".*committer_date must be None if committer is None.*"
):
Revision.from_dict(rev_dict)
@given(strategies.objects(split_content=True))
def test_object_type(objtype_and_obj):
obj_type, obj = objtype_and_obj
assert obj_type == obj.object_type
def test_object_type_is_final():
object_types = set()
def check_final(cls):
if hasattr(cls, "object_type"):
assert cls.object_type not in object_types
object_types.add(cls.object_type)
if cls.__subclasses__():
assert not hasattr(cls, "object_type")
for subcls in cls.__subclasses__():
check_final(subcls)
check_final(BaseModel)
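# Shared fixtures for the RawExtrinsicMetadata tests below: a forge metadata
# authority and a fetcher, the target SWHIDs (one content, one origin), and
# the common constructor fields (discovery date, format, raw metadata bytes)
# reused by every test.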
_metadata_authority = MetadataAuthority(
- type=MetadataAuthorityType.FORGE, url="https://forge.softwareheritage.org",
+ type=MetadataAuthorityType.FORGE,
+ url="https://forge.softwareheritage.org",
+)
+_metadata_fetcher = MetadataFetcher(
+ name="test-fetcher",
+ version="0.0.1",
)
-_metadata_fetcher = MetadataFetcher(name="test-fetcher", version="0.0.1",)
_content_swhid = ExtendedSWHID.from_string(
"swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2"
)
_origin_url = "https://forge.softwareheritage.org/source/swh-model.git"
_origin_swhid = ExtendedSWHID.from_string(
"swh:1:ori:94a9ed024d3859793618152ea559a168bbcbb5e2"
)
_dummy_qualifiers = {"origin": "https://example.com", "lines": "42"}
_common_metadata_fields = dict(
discovery_date=datetime.datetime(
2021, 1, 29, 13, 57, 9, tzinfo=datetime.timezone.utc
),
authority=_metadata_authority,
fetcher=_metadata_fetcher,
format="json",
metadata=b'{"origin": "https://example.com", "lines": "42"}',
)
def test_metadata_valid():
"""Checks valid RawExtrinsicMetadata objects don't raise an error."""
# Simplest case
RawExtrinsicMetadata(target=_origin_swhid, **_common_metadata_fields)
# Object with an SWHID
RawExtrinsicMetadata(
- target=_content_swhid, **_common_metadata_fields,
+ target=_content_swhid,
+ **_common_metadata_fields,
)
def test_metadata_to_dict():
"""Checks valid RawExtrinsicMetadata objects don't raise an error."""
common_fields = {
"authority": {"type": "forge", "url": "https://forge.softwareheritage.org"},
- "fetcher": {"name": "test-fetcher", "version": "0.0.1",},
+ "fetcher": {
+ "name": "test-fetcher",
+ "version": "0.0.1",
+ },
"discovery_date": _common_metadata_fields["discovery_date"],
"format": "json",
"metadata": b'{"origin": "https://example.com", "lines": "42"}',
}
- m = RawExtrinsicMetadata(target=_origin_swhid, **_common_metadata_fields,)
+ m = RawExtrinsicMetadata(
+ target=_origin_swhid,
+ **_common_metadata_fields,
+ )
assert m.to_dict() == {
"target": str(_origin_swhid),
"id": b"@j\xc9\x01\xbc\x1e#p*\xf3q9\xa7u\x97\x00\x14\x02xa",
**common_fields,
}
assert RawExtrinsicMetadata.from_dict(m.to_dict()) == m
- m = RawExtrinsicMetadata(target=_content_swhid, **_common_metadata_fields,)
+ m = RawExtrinsicMetadata(
+ target=_content_swhid,
+ **_common_metadata_fields,
+ )
assert m.to_dict() == {
"target": "swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2",
"id": b"\xbc\xa3U\xddf\x19U\xc5\xd2\xd7\xdfK\xd7c\x1f\xa8\xfeh\x992",
**common_fields,
}
assert RawExtrinsicMetadata.from_dict(m.to_dict()) == m
hash_hex = "6162" * 10
hash_bin = b"ab" * 10
m = RawExtrinsicMetadata(
target=_content_swhid,
**_common_metadata_fields,
origin="https://example.org/",
snapshot=CoreSWHID(object_type=ObjectType.SNAPSHOT, object_id=hash_bin),
release=CoreSWHID(object_type=ObjectType.RELEASE, object_id=hash_bin),
revision=CoreSWHID(object_type=ObjectType.REVISION, object_id=hash_bin),
path=b"/foo/bar",
directory=CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=hash_bin),
)
assert m.to_dict() == {
"target": "swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2",
"id": b"\x14l\xb0\x1f\xb9\xc0{)\xc7\x0f\xbd\xc0*,YZ\xf5C\xab\xfc",
**common_fields,
"origin": "https://example.org/",
"snapshot": f"swh:1:snp:{hash_hex}",
"release": f"swh:1:rel:{hash_hex}",
"revision": f"swh:1:rev:{hash_hex}",
"path": b"/foo/bar",
"directory": f"swh:1:dir:{hash_hex}",
}
assert RawExtrinsicMetadata.from_dict(m.to_dict()) == m
def test_metadata_invalid_target():
"""Checks various invalid values for the 'target' field."""
# SWHID passed as string instead of SWHID
with pytest.raises(ValueError, match="target must be.*ExtendedSWHID"):
RawExtrinsicMetadata(
target="swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2",
**_common_metadata_fields,
)
def test_metadata_naive_datetime():
with pytest.raises(ValueError, match="must be a timezone-aware datetime"):
RawExtrinsicMetadata(
target=_origin_swhid,
**{**_common_metadata_fields, "discovery_date": datetime.datetime.now()},
)
def test_metadata_validate_context_origin():
"""Checks validation of RawExtrinsicMetadata.origin."""
# Origins can't have an 'origin' context
with pytest.raises(
ValueError, match="Unexpected 'origin' context for origin object"
):
RawExtrinsicMetadata(
- target=_origin_swhid, origin=_origin_url, **_common_metadata_fields,
+ target=_origin_swhid,
+ origin=_origin_url,
+ **_common_metadata_fields,
)
# but all other types can
RawExtrinsicMetadata(
- target=_content_swhid, origin=_origin_url, **_common_metadata_fields,
+ target=_content_swhid,
+ origin=_origin_url,
+ **_common_metadata_fields,
)
# SWHIDs aren't valid origin URLs
with pytest.raises(ValueError, match="SWHID used as context origin URL"):
RawExtrinsicMetadata(
target=_content_swhid,
origin="swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2",
**_common_metadata_fields,
)
def test_metadata_validate_context_visit():
"""Checks validation of RawExtrinsicMetadata.visit."""
# Origins can't have a 'visit' context
with pytest.raises(
ValueError, match="Unexpected 'visit' context for origin object"
):
RawExtrinsicMetadata(
- target=_origin_swhid, visit=42, **_common_metadata_fields,
+ target=_origin_swhid,
+ visit=42,
+ **_common_metadata_fields,
)
# but all other types can
RawExtrinsicMetadata(
- target=_content_swhid, origin=_origin_url, visit=42, **_common_metadata_fields,
+ target=_content_swhid,
+ origin=_origin_url,
+ visit=42,
+ **_common_metadata_fields,
)
# Missing 'origin'
with pytest.raises(ValueError, match="'origin' context must be set if 'visit' is"):
RawExtrinsicMetadata(
- target=_content_swhid, visit=42, **_common_metadata_fields,
+ target=_content_swhid,
+ visit=42,
+ **_common_metadata_fields,
)
# visit id must be positive
with pytest.raises(ValueError, match="Nonpositive visit id"):
RawExtrinsicMetadata(
target=_content_swhid,
origin=_origin_url,
visit=-42,
**_common_metadata_fields,
)
def test_metadata_validate_context_snapshot():
"""Checks validation of RawExtrinsicMetadata.snapshot."""
# Origins can't have a 'snapshot' context
with pytest.raises(
ValueError, match="Unexpected 'snapshot' context for origin object"
):
RawExtrinsicMetadata(
target=_origin_swhid,
snapshot=CoreSWHID(
- object_type=ObjectType.SNAPSHOT, object_id=EXAMPLE_HASH,
+ object_type=ObjectType.SNAPSHOT,
+ object_id=EXAMPLE_HASH,
),
**_common_metadata_fields,
)
# but content can
RawExtrinsicMetadata(
target=_content_swhid,
snapshot=CoreSWHID(object_type=ObjectType.SNAPSHOT, object_id=EXAMPLE_HASH),
**_common_metadata_fields,
)
# SWHID type doesn't match the expected type of this context key
with pytest.raises(
ValueError, match="Expected SWHID type 'snapshot', got 'content'"
):
RawExtrinsicMetadata(
target=_content_swhid,
- snapshot=CoreSWHID(object_type=ObjectType.CONTENT, object_id=EXAMPLE_HASH,),
+ snapshot=CoreSWHID(
+ object_type=ObjectType.CONTENT,
+ object_id=EXAMPLE_HASH,
+ ),
**_common_metadata_fields,
)
def test_metadata_validate_context_release():
"""Checks validation of RawExtrinsicMetadata.release."""
# Origins can't have a 'release' context
with pytest.raises(
ValueError, match="Unexpected 'release' context for origin object"
):
RawExtrinsicMetadata(
target=_origin_swhid,
- release=CoreSWHID(object_type=ObjectType.RELEASE, object_id=EXAMPLE_HASH,),
+ release=CoreSWHID(
+ object_type=ObjectType.RELEASE,
+ object_id=EXAMPLE_HASH,
+ ),
**_common_metadata_fields,
)
# but content can
RawExtrinsicMetadata(
target=_content_swhid,
release=CoreSWHID(object_type=ObjectType.RELEASE, object_id=EXAMPLE_HASH),
**_common_metadata_fields,
)
# SWHID type doesn't match the expected type of this context key
with pytest.raises(
ValueError, match="Expected SWHID type 'release', got 'content'"
):
RawExtrinsicMetadata(
target=_content_swhid,
- release=CoreSWHID(object_type=ObjectType.CONTENT, object_id=EXAMPLE_HASH,),
+ release=CoreSWHID(
+ object_type=ObjectType.CONTENT,
+ object_id=EXAMPLE_HASH,
+ ),
**_common_metadata_fields,
)
def test_metadata_validate_context_revision():
"""Checks validation of RawExtrinsicMetadata.revision."""
# Origins can't have a 'revision' context
with pytest.raises(
ValueError, match="Unexpected 'revision' context for origin object"
):
RawExtrinsicMetadata(
target=_origin_swhid,
revision=CoreSWHID(
- object_type=ObjectType.REVISION, object_id=EXAMPLE_HASH,
+ object_type=ObjectType.REVISION,
+ object_id=EXAMPLE_HASH,
),
**_common_metadata_fields,
)
# but content can
RawExtrinsicMetadata(
target=_content_swhid,
revision=CoreSWHID(object_type=ObjectType.REVISION, object_id=EXAMPLE_HASH),
**_common_metadata_fields,
)
# SWHID type doesn't match the expected type of this context key
with pytest.raises(
ValueError, match="Expected SWHID type 'revision', got 'content'"
):
RawExtrinsicMetadata(
target=_content_swhid,
- revision=CoreSWHID(object_type=ObjectType.CONTENT, object_id=EXAMPLE_HASH,),
+ revision=CoreSWHID(
+ object_type=ObjectType.CONTENT,
+ object_id=EXAMPLE_HASH,
+ ),
**_common_metadata_fields,
)
def test_metadata_validate_context_path():
"""Checks validation of RawExtrinsicMetadata.path."""
# Origins can't have a 'path' context
with pytest.raises(ValueError, match="Unexpected 'path' context for origin object"):
RawExtrinsicMetadata(
- target=_origin_swhid, path=b"/foo/bar", **_common_metadata_fields,
+ target=_origin_swhid,
+ path=b"/foo/bar",
+ **_common_metadata_fields,
)
# but content can
RawExtrinsicMetadata(
- target=_content_swhid, path=b"/foo/bar", **_common_metadata_fields,
+ target=_content_swhid,
+ path=b"/foo/bar",
+ **_common_metadata_fields,
)
def test_metadata_validate_context_directory():
"""Checks validation of RawExtrinsicMetadata.directory."""
# Origins can't have a 'directory' context
with pytest.raises(
ValueError, match="Unexpected 'directory' context for origin object"
):
RawExtrinsicMetadata(
target=_origin_swhid,
directory=CoreSWHID(
- object_type=ObjectType.DIRECTORY, object_id=EXAMPLE_HASH,
+ object_type=ObjectType.DIRECTORY,
+ object_id=EXAMPLE_HASH,
),
**_common_metadata_fields,
)
# but content can
RawExtrinsicMetadata(
target=_content_swhid,
- directory=CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=EXAMPLE_HASH,),
+ directory=CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=EXAMPLE_HASH,
+ ),
**_common_metadata_fields,
)
# SWHID type doesn't match the expected type of this context key
with pytest.raises(
ValueError, match="Expected SWHID type 'directory', got 'content'"
):
RawExtrinsicMetadata(
target=_content_swhid,
directory=CoreSWHID(
- object_type=ObjectType.CONTENT, object_id=EXAMPLE_HASH,
+ object_type=ObjectType.CONTENT,
+ object_id=EXAMPLE_HASH,
),
**_common_metadata_fields,
)
def test_metadata_normalize_discovery_date():
fields_copy = {**_common_metadata_fields}
truncated_date = fields_copy.pop("discovery_date")
assert truncated_date.microsecond == 0
# Check for TypeError on disabled object type: we removed attrs_strict's
# type_validator
with pytest.raises(TypeError):
RawExtrinsicMetadata(
target=_content_swhid, discovery_date="not a datetime", **fields_copy
)
# Check for truncation to integral second
date_with_us = truncated_date.replace(microsecond=42)
md = RawExtrinsicMetadata(
- target=_content_swhid, discovery_date=date_with_us, **fields_copy,
+ target=_content_swhid,
+ discovery_date=date_with_us,
+ **fields_copy,
)
assert md.discovery_date == truncated_date
assert md.discovery_date.tzinfo == datetime.timezone.utc
# Check that the timezone gets normalized. Timezones can be offset by a
# non-integral number of seconds, so we need to handle that.
timezone = datetime.timezone(offset=datetime.timedelta(hours=2))
date_with_tz = truncated_date.astimezone(timezone)
assert date_with_tz.tzinfo != datetime.timezone.utc
md = RawExtrinsicMetadata(
- target=_content_swhid, discovery_date=date_with_tz, **fields_copy,
+ target=_content_swhid,
+ discovery_date=date_with_tz,
+ **fields_copy,
)
assert md.discovery_date == truncated_date
assert md.discovery_date.tzinfo == datetime.timezone.utc
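# e.g. (hypothetical local date): microseconds are truncated and the timezone
# is normalized to UTC, so this is the same instant as the shared fixture date.
_local_date = datetime.datetime(
    2021, 1, 29, 15, 57, 9, 123456,
    tzinfo=datetime.timezone(datetime.timedelta(hours=2)),
)
_md_example = RawExtrinsicMetadata(
    target=_content_swhid,
    **{**_common_metadata_fields, "discovery_date": _local_date},
)
assert _md_example.discovery_date == datetime.datetime(
    2021, 1, 29, 13, 57, 9, tzinfo=datetime.timezone.utc
)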
diff --git a/swh/model/tests/test_swh_model_data.py b/swh/model/tests/test_swh_model_data.py
index 5e2521d..e9d7374 100644
--- a/swh/model/tests/test_swh_model_data.py
+++ b/swh/model/tests/test_swh_model_data.py
@@ -1,54 +1,54 @@
# Copyright (C) 2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import attr
import pytest
from swh.model.tests.swh_model_data import TEST_OBJECTS
@pytest.mark.parametrize("object_type, objects", TEST_OBJECTS.items())
def test_swh_model_data(object_type, objects):
"""checks model objects in swh_model_data are in correct shape"""
assert objects
for obj in objects:
assert obj.object_type == object_type
attr.validate(obj)
@pytest.mark.parametrize(
- "object_type", ("directory", "revision", "release", "snapshot"),
+ "object_type",
+ ("directory", "revision", "release", "snapshot"),
)
def test_swh_model_data_hash(object_type):
for obj in TEST_OBJECTS[object_type]:
assert (
obj.compute_hash() == obj.id
), f"{obj.compute_hash().hex()} != {obj.id.hex()}"
def test_ensure_visit_status_date_consistency():
"""ensure origin-visit-status dates are more recent than their visit counterpart
The origin-visit-status dates need to be shifted slightly into the future relative
to their origin-visit counterparts; otherwise, storage-wise, we hit the "on
conflict ignore" policy (because origin-visit-add creates an origin-visit-status
with the same {origin, visit, date} parameters as the origin-visit).
"""
visits = TEST_OBJECTS["origin_visit"]
visit_statuses = TEST_OBJECTS["origin_visit_status"]
for visit, visit_status in zip(visits, visit_statuses):
assert visit.origin == visit_status.origin
assert visit.visit == visit_status.visit
assert visit.date < visit_status.date
def test_ensure_visit_status_snapshot_consistency():
- """ensure origin-visit-status snapshots exist in the test dataset
- """
+ """ensure origin-visit-status snapshots exist in the test dataset"""
snapshots = [snp.id for snp in TEST_OBJECTS["snapshot"]]
for visit_status in TEST_OBJECTS["origin_visit_status"]:
if visit_status.snapshot:
assert visit_status.snapshot in snapshots
diff --git a/swh/model/tests/test_swhids.py b/swh/model/tests/test_swhids.py
index 34a55f6..5c3cab2 100644
--- a/swh/model/tests/test_swhids.py
+++ b/swh/model/tests/test_swhids.py
@@ -1,638 +1,801 @@
# Copyright (C) 2015-2021 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
import itertools
import attr
import pytest
from swh.model.exceptions import ValidationError
from swh.model.hashutil import hash_to_bytes as _x
from swh.model.swhids import (
SWHID_QUALIFIERS,
CoreSWHID,
ExtendedObjectType,
ExtendedSWHID,
ObjectType,
QualifiedSWHID,
)
dummy_qualifiers = {"origin": "https://example.com", "lines": "42"}
# SWHIDs that are outright invalid, no matter the context
INVALID_SWHIDS = [
"swh:1:cnt",
"swh:1:",
"swh:",
"swh:1:cnt:",
"foo:1:cnt:abc8bc9d7a6bcf6db04f476d29314f157507d505",
"swh:2:dir:def8bc9d7a6bcf6db04f476d29314f157507d505",
"swh:1:foo:fed8bc9d7a6bcf6db04f476d29314f157507d505",
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;invalid;malformed",
"swh:1:snp:gh6959356d30f1a4e9b7f6bca59b9a336464c03d",
"swh:1:snp:foo",
# wrong qualifier: ori should be origin
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;ori=something;anchor=1;visit=1;path=/", # noqa
# wrong qualifier: anc should be anchor
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;origin=something;anc=1;visit=1;path=/", # noqa
# wrong qualifier: vis should be visit
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;origin=something;anchor=1;vis=1;path=/", # noqa
# wrong qualifier: pa should be path
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;origin=something;anchor=1;visit=1;pa=/", # noqa
# wrong qualifier: line should be lines
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;line=10;origin=something;anchor=1;visit=1;path=/", # noqa
# wrong qualifier value: it contains a space before or after
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;origin= https://some-url", # noqa
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;origin=something;anchor=some-anchor ", # noqa
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;origin=something;anchor=some-anchor ;visit=1", # noqa
# invalid swhid: whitespaces
"swh :1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;ori=something;anchor=1;visit=1;path=/", # noqa
"swh: 1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;ori=something;anchor=1;visit=1;path=/", # noqa
"swh: 1: dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;ori=something;anchor=1;visit=1;path=/", # noqa
"swh:1: dir: 0b6959356d30f1a4e9b7f6bca59b9a336464c03d",
"swh:1: dir: 0b6959356d30f1a4e9b7f6bca59b9a336464c03d; origin=blah",
"swh:1: dir: 0b6959356d30f1a4e9b7f6bca59b9a336464c03d;lines=12",
# other whitespaces
"swh\t:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;lines=12",
"swh:1\n:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;lines=12",
"swh:1:\rdir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;lines=12",
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d\f;lines=12",
"swh:1:dir:0b6959356d30f1a4e9b7f6bca59b9a336464c03d;lines=12\v",
]
SWHID_CLASSES = [CoreSWHID, QualifiedSWHID, ExtendedSWHID]
@pytest.mark.parametrize(
"invalid_swhid,swhid_class", itertools.product(INVALID_SWHIDS, SWHID_CLASSES)
)
def test_swhid_parsing_error(invalid_swhid, swhid_class):
"""Tests SWHID strings that are invalid for all SWHID classes do raise
a ValidationError"""
with pytest.raises(ValidationError):
swhid_class.from_string(invalid_swhid)
# string SWHIDs, and how they should be parsed by each of the classes,
# or None if the class does not support it
HASH = "94a9ed024d3859793618152ea559a168bbcbb5e2"
VALID_SWHIDS = [
(
f"swh:1:cnt:{HASH}",
- CoreSWHID(object_type=ObjectType.CONTENT, object_id=_x(HASH),),
- QualifiedSWHID(object_type=ObjectType.CONTENT, object_id=_x(HASH),),
- ExtendedSWHID(object_type=ExtendedObjectType.CONTENT, object_id=_x(HASH),),
+ CoreSWHID(
+ object_type=ObjectType.CONTENT,
+ object_id=_x(HASH),
+ ),
+ QualifiedSWHID(
+ object_type=ObjectType.CONTENT,
+ object_id=_x(HASH),
+ ),
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.CONTENT,
+ object_id=_x(HASH),
+ ),
),
(
f"swh:1:dir:{HASH}",
- CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=_x(HASH),),
- QualifiedSWHID(object_type=ObjectType.DIRECTORY, object_id=_x(HASH),),
- ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=_x(HASH),),
+ CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=_x(HASH),
+ ),
+ QualifiedSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=_x(HASH),
+ ),
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=_x(HASH),
+ ),
),
(
f"swh:1:rev:{HASH}",
- CoreSWHID(object_type=ObjectType.REVISION, object_id=_x(HASH),),
- QualifiedSWHID(object_type=ObjectType.REVISION, object_id=_x(HASH),),
- ExtendedSWHID(object_type=ExtendedObjectType.REVISION, object_id=_x(HASH),),
+ CoreSWHID(
+ object_type=ObjectType.REVISION,
+ object_id=_x(HASH),
+ ),
+ QualifiedSWHID(
+ object_type=ObjectType.REVISION,
+ object_id=_x(HASH),
+ ),
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.REVISION,
+ object_id=_x(HASH),
+ ),
),
(
f"swh:1:rel:{HASH}",
- CoreSWHID(object_type=ObjectType.RELEASE, object_id=_x(HASH),),
- QualifiedSWHID(object_type=ObjectType.RELEASE, object_id=_x(HASH),),
- ExtendedSWHID(object_type=ExtendedObjectType.RELEASE, object_id=_x(HASH),),
+ CoreSWHID(
+ object_type=ObjectType.RELEASE,
+ object_id=_x(HASH),
+ ),
+ QualifiedSWHID(
+ object_type=ObjectType.RELEASE,
+ object_id=_x(HASH),
+ ),
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.RELEASE,
+ object_id=_x(HASH),
+ ),
),
(
f"swh:1:snp:{HASH}",
- CoreSWHID(object_type=ObjectType.SNAPSHOT, object_id=_x(HASH),),
- QualifiedSWHID(object_type=ObjectType.SNAPSHOT, object_id=_x(HASH),),
- ExtendedSWHID(object_type=ExtendedObjectType.SNAPSHOT, object_id=_x(HASH),),
+ CoreSWHID(
+ object_type=ObjectType.SNAPSHOT,
+ object_id=_x(HASH),
+ ),
+ QualifiedSWHID(
+ object_type=ObjectType.SNAPSHOT,
+ object_id=_x(HASH),
+ ),
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.SNAPSHOT,
+ object_id=_x(HASH),
+ ),
),
(
f"swh:1:cnt:{HASH};origin=https://github.com/python/cpython;lines=1-18",
None, # CoreSWHID does not allow qualifiers
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
origin="https://github.com/python/cpython",
lines=(1, 18),
),
None, # Neither does ExtendedSWHID
),
(
f"swh:1:cnt:{HASH};origin=https://github.com/python/cpython;lines=1-18/",
None, # likewise
None,
None, # likewise
),
(
f"swh:1:cnt:{HASH};origin=https://github.com/python/cpython;lines=18",
None, # likewise
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
origin="https://github.com/python/cpython",
lines=(18, None),
),
None, # likewise
),
(
f"swh:1:dir:{HASH};origin=deb://Debian/packages/linuxdoc-tools",
None, # likewise
QualifiedSWHID(
object_type=ObjectType.DIRECTORY,
object_id=_x(HASH),
origin="deb://Debian/packages/linuxdoc-tools",
),
None, # likewise
),
(
f"swh:1:ori:{HASH}",
None, # CoreSWHID does not allow origin pseudo-SWHIDs
None, # Neither does QualifiedSWHID
- ExtendedSWHID(object_type=ExtendedObjectType.ORIGIN, object_id=_x(HASH),),
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.ORIGIN,
+ object_id=_x(HASH),
+ ),
),
(
f"swh:1:emd:{HASH}",
None, # likewise for metadata pseudo-SWHIDs
None, # Neither does QualifiedSWHID
ExtendedSWHID(
- object_type=ExtendedObjectType.RAW_EXTRINSIC_METADATA, object_id=_x(HASH),
+ object_type=ExtendedObjectType.RAW_EXTRINSIC_METADATA,
+ object_id=_x(HASH),
),
),
(
f"swh:1:emd:{HASH};origin=https://github.com/python/cpython",
None, # CoreSWHID does not allow metadata pseudo-SWHIDs or qualifiers
None, # QualifiedSWHID does not allow metadata pseudo-SWHIDs
None, # ExtendedSWHID does not allow qualifiers
),
]
@pytest.mark.parametrize(
"string,core,qualified,extended",
[
pytest.param(string, core, qualified, extended, id=string)
for (string, core, qualified, extended) in VALID_SWHIDS
],
)
def test_parse_unparse_swhids(string, core, qualified, extended):
"""Tests parsing and serializing valid SWHIDs with the various SWHID classes."""
classes = [CoreSWHID, QualifiedSWHID, ExtendedSWHID]
for (cls, parsed_swhid) in zip(classes, [core, qualified, extended]):
if parsed_swhid is None:
# This class should not accept this SWHID
with pytest.raises(ValidationError) as excinfo:
cls.from_string(string)
# Check string serialization for exception
assert str(excinfo.value) is not None
else:
# This class should accept this SWHID
assert cls.from_string(string) == parsed_swhid
# Also check serialization
assert string == str(parsed_swhid)
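# Usage sketch of the round-trip checked above (hypothetical helper reusing HASH
# and this module's imports): parsing then serializing a qualified SWHID yields
# the original string, and the qualifiers become typed attributes.
def _example_round_trip():
    s = f"swh:1:cnt:{HASH};origin=https://github.com/python/cpython;lines=1-18"
    swhid = QualifiedSWHID.from_string(s)
    assert swhid.origin == "https://github.com/python/cpython"
    assert swhid.lines == (1, 18)
    assert str(swhid) == s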
@pytest.mark.parametrize(
"core,extended",
[
pytest.param(core, extended, id=string)
for (string, core, qualified, extended) in VALID_SWHIDS
if core is not None
],
)
def test_core_to_extended(core, extended):
assert core.to_extended() == extended
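# Conversion sketch for the test above (hypothetical helper): widening a CoreSWHID
# keeps the object id and maps the object type onto the extended enum.
def _example_to_extended():
    core = CoreSWHID(object_type=ObjectType.RELEASE, object_id=_x(HASH))
    extended = core.to_extended()
    assert isinstance(extended, ExtendedSWHID)
    assert extended.object_type == ExtendedObjectType.RELEASE
    assert extended.object_id == core.object_id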
@pytest.mark.parametrize(
"ns,version,type,id,qualifiers",
[
("foo", 1, ObjectType.CONTENT, "abc8bc9d7a6bcf6db04f476d29314f157507d505", {}),
("swh", 2, ObjectType.CONTENT, "def8bc9d7a6bcf6db04f476d29314f157507d505", {}),
("swh", 1, ObjectType.DIRECTORY, "aaaa", {}),
],
)
def test_QualifiedSWHID_validation_error(ns, version, type, id, qualifiers):
with pytest.raises(ValidationError):
QualifiedSWHID(
namespace=ns,
scheme_version=version,
object_type=type,
object_id=_x(id),
**qualifiers,
)
@pytest.mark.parametrize(
"object_type,qualifiers,expected",
[
# No qualifier:
(ObjectType.CONTENT, {}, f"swh:1:cnt:{HASH}"),
# origin:
(ObjectType.CONTENT, {"origin": None}, f"swh:1:cnt:{HASH}"),
(ObjectType.CONTENT, {"origin": 42}, ValueError),
# visit:
(
ObjectType.CONTENT,
{"visit": f"swh:1:snp:{HASH}"},
f"swh:1:cnt:{HASH};visit=swh:1:snp:{HASH}",
),
(
ObjectType.CONTENT,
{"visit": CoreSWHID(object_type=ObjectType.SNAPSHOT, object_id=_x(HASH))},
f"swh:1:cnt:{HASH};visit=swh:1:snp:{HASH}",
),
(ObjectType.CONTENT, {"visit": 42}, TypeError),
- (ObjectType.CONTENT, {"visit": f"swh:1:rel:{HASH}"}, ValidationError,),
+ (
+ ObjectType.CONTENT,
+ {"visit": f"swh:1:rel:{HASH}"},
+ ValidationError,
+ ),
(
ObjectType.CONTENT,
{"visit": CoreSWHID(object_type=ObjectType.RELEASE, object_id=_x(HASH))},
ValidationError,
),
# anchor:
(
ObjectType.CONTENT,
{"anchor": f"swh:1:snp:{HASH}"},
f"swh:1:cnt:{HASH};anchor=swh:1:snp:{HASH}",
),
(
ObjectType.CONTENT,
{"anchor": CoreSWHID(object_type=ObjectType.SNAPSHOT, object_id=_x(HASH))},
f"swh:1:cnt:{HASH};anchor=swh:1:snp:{HASH}",
),
(
ObjectType.CONTENT,
{"anchor": f"swh:1:dir:{HASH}"},
f"swh:1:cnt:{HASH};anchor=swh:1:dir:{HASH}",
),
(
ObjectType.CONTENT,
{"anchor": CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=_x(HASH))},
f"swh:1:cnt:{HASH};anchor=swh:1:dir:{HASH}",
),
(ObjectType.CONTENT, {"anchor": 42}, TypeError),
- (ObjectType.CONTENT, {"anchor": f"swh:1:cnt:{HASH}"}, ValidationError,),
+ (
+ ObjectType.CONTENT,
+ {"anchor": f"swh:1:cnt:{HASH}"},
+ ValidationError,
+ ),
(
ObjectType.CONTENT,
{"anchor": CoreSWHID(object_type=ObjectType.CONTENT, object_id=_x(HASH))},
ValidationError,
),
# path:
- (ObjectType.CONTENT, {"path": b"/foo"}, f"swh:1:cnt:{HASH};path=/foo",),
+ (
+ ObjectType.CONTENT,
+ {"path": b"/foo"},
+ f"swh:1:cnt:{HASH};path=/foo",
+ ),
(
ObjectType.CONTENT,
{"path": b"/foo;bar"},
f"swh:1:cnt:{HASH};path=/foo%3Bbar",
),
- (ObjectType.CONTENT, {"path": "/foo"}, f"swh:1:cnt:{HASH};path=/foo",),
+ (
+ ObjectType.CONTENT,
+ {"path": "/foo"},
+ f"swh:1:cnt:{HASH};path=/foo",
+ ),
(
ObjectType.CONTENT,
{"path": "/foo;bar"},
f"swh:1:cnt:{HASH};path=/foo%3Bbar",
),
(ObjectType.CONTENT, {"path": 42}, Exception),
# lines:
- (ObjectType.CONTENT, {"lines": (42, None)}, f"swh:1:cnt:{HASH};lines=42",),
- (ObjectType.CONTENT, {"lines": (21, 42)}, f"swh:1:cnt:{HASH};lines=21-42",),
- (ObjectType.CONTENT, {"lines": 42}, TypeError,),
- (ObjectType.CONTENT, {"lines": (None, 42)}, ValueError,),
- (ObjectType.CONTENT, {"lines": ("42", None)}, ValueError,),
+ (
+ ObjectType.CONTENT,
+ {"lines": (42, None)},
+ f"swh:1:cnt:{HASH};lines=42",
+ ),
+ (
+ ObjectType.CONTENT,
+ {"lines": (21, 42)},
+ f"swh:1:cnt:{HASH};lines=21-42",
+ ),
+ (
+ ObjectType.CONTENT,
+ {"lines": 42},
+ TypeError,
+ ),
+ (
+ ObjectType.CONTENT,
+ {"lines": (None, 42)},
+ ValueError,
+ ),
+ (
+ ObjectType.CONTENT,
+ {"lines": ("42", None)},
+ ValueError,
+ ),
],
)
def test_QualifiedSWHID_init(object_type, qualifiers, expected):
"""Tests validation and converters of qualifiers"""
if isinstance(expected, type):
assert issubclass(expected, Exception)
with pytest.raises(expected):
QualifiedSWHID(object_type=object_type, object_id=_x(HASH), **qualifiers)
else:
assert isinstance(expected, str)
swhid = QualifiedSWHID(
object_type=object_type, object_id=_x(HASH), **qualifiers
)
# Check the built object has the right serialization
assert expected == str(swhid)
# Check the internal state of the object is the same as if parsed from a string
assert QualifiedSWHID.from_string(expected) == swhid
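# Constructor sketch matching the parametrized cases above (hypothetical helper):
# qualifier values are validated and converted at build time, e.g. a "visit" given
# as a string is parsed into a CoreSWHID and a str path is encoded to bytes.
def _example_qualifier_converters():
    swhid = QualifiedSWHID(
        object_type=ObjectType.CONTENT,
        object_id=_x(HASH),
        visit=f"swh:1:snp:{HASH}",
        path="/foo",
        lines=(21, 42),
    )
    assert swhid.visit == CoreSWHID(
        object_type=ObjectType.SNAPSHOT, object_id=_x(HASH)
    )
    assert swhid.path == b"/foo"
    assert ";lines=21-42" in str(swhid)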
def test_QualifiedSWHID_hash():
object_id = _x("94a9ed024d3859793618152ea559a168bbcbb5e2")
assert hash(
QualifiedSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id)
) == hash(QualifiedSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id))
assert hash(
QualifiedSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id, **dummy_qualifiers,
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ **dummy_qualifiers,
)
) == hash(
QualifiedSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id, **dummy_qualifiers,
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ **dummy_qualifiers,
)
)
# Different order of the dictionary, so the underlying order of the tuple in
# ImmutableDict is different.
assert hash(
QualifiedSWHID(
object_type=ObjectType.DIRECTORY,
object_id=object_id,
origin="https://example.com",
lines=(42, None),
)
) == hash(
QualifiedSWHID(
object_type=ObjectType.DIRECTORY,
object_id=object_id,
lines=(42, None),
origin="https://example.com",
)
)
def test_QualifiedSWHID_eq():
object_id = _x("94a9ed024d3859793618152ea559a168bbcbb5e2")
assert QualifiedSWHID(
object_type=ObjectType.DIRECTORY, object_id=object_id
) == QualifiedSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id)
assert QualifiedSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id, **dummy_qualifiers,
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ **dummy_qualifiers,
) == QualifiedSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id, **dummy_qualifiers,
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ **dummy_qualifiers,
)
assert QualifiedSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id, **dummy_qualifiers,
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ **dummy_qualifiers,
) == QualifiedSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id, **dummy_qualifiers,
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ **dummy_qualifiers,
)
QUALIFIED_SWHIDS = [
# origin:
(
f"swh:1:cnt:{HASH};origin=https://github.com/python/cpython",
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
origin="https://github.com/python/cpython",
),
),
(
f"swh:1:cnt:{HASH};origin=https://example.org/foo%3Bbar%25baz",
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
origin="https://example.org/foo%3Bbar%25baz",
),
),
(
f"swh:1:cnt:{HASH};origin=https://example.org?project=test",
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
origin="https://example.org?project=test",
),
),
# visit:
(
f"swh:1:cnt:{HASH};visit=swh:1:snp:{HASH}",
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
visit=CoreSWHID(object_type=ObjectType.SNAPSHOT, object_id=_x(HASH)),
),
),
- (f"swh:1:cnt:{HASH};visit=swh:1:rel:{HASH}", None,),
+ (
+ f"swh:1:cnt:{HASH};visit=swh:1:rel:{HASH}",
+ None,
+ ),
# anchor:
(
f"swh:1:cnt:{HASH};anchor=swh:1:dir:{HASH}",
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
anchor=CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=_x(HASH)),
),
),
(
f"swh:1:cnt:{HASH};anchor=swh:1:rev:{HASH}",
QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
anchor=CoreSWHID(object_type=ObjectType.REVISION, object_id=_x(HASH)),
),
),
(
f"swh:1:cnt:{HASH};anchor=swh:1:cnt:{HASH}",
None, # 'cnt' is not valid in anchor
),
(
f"swh:1:cnt:{HASH};anchor=swh:1:ori:{HASH}",
None, # 'ori' is not valid in a CoreSWHID
),
# path:
(
f"swh:1:cnt:{HASH};path=/foo",
QualifiedSWHID(
object_type=ObjectType.CONTENT, object_id=_x(HASH), path=b"/foo"
),
),
(
f"swh:1:cnt:{HASH};path=/foo%3Bbar",
QualifiedSWHID(
object_type=ObjectType.CONTENT, object_id=_x(HASH), path=b"/foo;bar"
),
),
(
f"swh:1:cnt:{HASH};path=/foo%25bar",
QualifiedSWHID(
object_type=ObjectType.CONTENT, object_id=_x(HASH), path=b"/foo%bar"
),
),
(
f"swh:1:cnt:{HASH};path=/foo/bar%3Dbaz",
QualifiedSWHID(
object_type=ObjectType.CONTENT, object_id=_x(HASH), path=b"/foo/bar=baz"
),
),
# lines
(
f"swh:1:cnt:{HASH};lines=1-18",
QualifiedSWHID(
- object_type=ObjectType.CONTENT, object_id=_x(HASH), lines=(1, 18),
+ object_type=ObjectType.CONTENT,
+ object_id=_x(HASH),
+ lines=(1, 18),
),
),
(
f"swh:1:cnt:{HASH};lines=18",
QualifiedSWHID(
- object_type=ObjectType.CONTENT, object_id=_x(HASH), lines=(18, None),
+ object_type=ObjectType.CONTENT,
+ object_id=_x(HASH),
+ lines=(18, None),
),
),
- (f"swh:1:cnt:{HASH};lines=", None,),
- (f"swh:1:cnt:{HASH};lines=aa", None,),
- (f"swh:1:cnt:{HASH};lines=18-aa", None,),
+ (
+ f"swh:1:cnt:{HASH};lines=",
+ None,
+ ),
+ (
+ f"swh:1:cnt:{HASH};lines=aa",
+ None,
+ ),
+ (
+ f"swh:1:cnt:{HASH};lines=18-aa",
+ None,
+ ),
]
@pytest.mark.parametrize("string,parsed", QUALIFIED_SWHIDS)
def test_QualifiedSWHID_parse_serialize_qualifiers(string, parsed):
"""Tests parsing and serializing valid SWHIDs with the various SWHID classes."""
if parsed is None:
with pytest.raises(ValidationError):
print(repr(QualifiedSWHID.from_string(string)))
else:
assert QualifiedSWHID.from_string(string) == parsed
assert str(parsed) == string
def test_QualifiedSWHID_serialize_origin():
"""Checks that semicolon in origins are escaped."""
string = f"swh:1:cnt:{HASH};origin=https://example.org/foo%3Bbar%25baz"
swhid = QualifiedSWHID(
object_type=ObjectType.CONTENT,
object_id=_x(HASH),
origin="https://example.org/foo;bar%25baz",
)
assert str(swhid) == string
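# Escaping sketch for the check above (hypothetical helper): a raw ";" in an
# origin would terminate the qualifier, so it is percent-encoded on serialization.
def _example_origin_escaping():
    swhid = QualifiedSWHID(
        object_type=ObjectType.CONTENT,
        object_id=_x(HASH),
        origin="https://example.org/foo;bar",
    )
    assert str(swhid) == f"swh:1:cnt:{HASH};origin=https://example.org/foo%3Bbar"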
def test_QualifiedSWHID_attributes():
"""Checks the set of QualifiedSWHID attributes match the SWHID_QUALIFIERS
constant."""
assert set(attr.fields_dict(QualifiedSWHID)) == {
"namespace",
"scheme_version",
"object_type",
"object_id",
*SWHID_QUALIFIERS,
}
@pytest.mark.parametrize(
"ns,version,type,id",
[
("foo", 1, ObjectType.CONTENT, "abc8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh", 2, ObjectType.CONTENT, "def8bc9d7a6bcf6db04f476d29314f157507d505"),
("swh", 1, ObjectType.DIRECTORY, "aaaa"),
],
)
def test_CoreSWHID_validation_error(ns, version, type, id):
with pytest.raises(ValidationError):
CoreSWHID(
- namespace=ns, scheme_version=version, object_type=type, object_id=_x(id),
+ namespace=ns,
+ scheme_version=version,
+ object_type=type,
+ object_id=_x(id),
)
def test_CoreSWHID_hash():
object_id = _x("94a9ed024d3859793618152ea559a168bbcbb5e2")
assert hash(
CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id)
) == hash(CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id))
assert hash(
- CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id,)
- ) == hash(CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id,))
+ CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ )
+ ) == hash(
+ CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ )
+ )
# CoreSWHID has no qualifiers, so there is no ordering to vary here; this
# simply repeats the equality check with identical arguments.
assert hash(
- CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id,)
- ) == hash(CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id,))
+ CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ )
+ ) == hash(
+ CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ )
+ )
def test_CoreSWHID_eq():
object_id = _x("94a9ed024d3859793618152ea559a168bbcbb5e2")
assert CoreSWHID(
object_type=ObjectType.DIRECTORY, object_id=object_id
) == CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id)
assert CoreSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id,
- ) == CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id,)
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ ) == CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ )
assert CoreSWHID(
- object_type=ObjectType.DIRECTORY, object_id=object_id,
- ) == CoreSWHID(object_type=ObjectType.DIRECTORY, object_id=object_id,)
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ ) == CoreSWHID(
+ object_type=ObjectType.DIRECTORY,
+ object_id=object_id,
+ )
@pytest.mark.parametrize(
"ns,version,type,id",
[
(
"foo",
1,
ExtendedObjectType.CONTENT,
"abc8bc9d7a6bcf6db04f476d29314f157507d505",
),
(
"swh",
2,
ExtendedObjectType.CONTENT,
"def8bc9d7a6bcf6db04f476d29314f157507d505",
),
("swh", 1, ExtendedObjectType.DIRECTORY, "aaaa"),
],
)
def test_ExtendedSWHID_validation_error(ns, version, type, id):
with pytest.raises(ValidationError):
ExtendedSWHID(
- namespace=ns, scheme_version=version, object_type=type, object_id=_x(id),
+ namespace=ns,
+ scheme_version=version,
+ object_type=type,
+ object_id=_x(id),
)
def test_ExtendedSWHID_hash():
object_id = _x("94a9ed024d3859793618152ea559a168bbcbb5e2")
assert hash(
ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id)
) == hash(
ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id)
)
assert hash(
- ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,)
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ )
) == hash(
- ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,)
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ )
)
# ExtendedSWHID has no qualifiers, so there is no ordering to vary here; this
# simply repeats the equality check with identical arguments.
assert hash(
- ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,)
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ )
) == hash(
- ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,)
+ ExtendedSWHID(
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ )
)
def test_ExtendedSWHID_eq():
object_id = _x("94a9ed024d3859793618152ea559a168bbcbb5e2")
assert ExtendedSWHID(
object_type=ExtendedObjectType.DIRECTORY, object_id=object_id
) == ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id)
assert ExtendedSWHID(
- object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,
- ) == ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,)
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ ) == ExtendedSWHID(
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ )
assert ExtendedSWHID(
- object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,
- ) == ExtendedSWHID(object_type=ExtendedObjectType.DIRECTORY, object_id=object_id,)
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ ) == ExtendedSWHID(
+ object_type=ExtendedObjectType.DIRECTORY,
+ object_id=object_id,
+ )
def test_object_types():
"""Checks ExtendedObjectType is a superset of ObjectType"""
for member in ObjectType:
assert getattr(ExtendedObjectType, member.name).value == member.value
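# Superset sketch (hypothetical helper): each core type has an extended counterpart
# with the same three-letter value, so CoreSWHID.to_extended() is always well defined.
def _example_object_type_superset():
    assert ObjectType.SNAPSHOT.value == ExtendedObjectType.SNAPSHOT.value == "snp"
    assert {m.name for m in ObjectType} <= {m.name for m in ExtendedObjectType}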
diff --git a/tox.ini b/tox.ini
index 5211a7c..cf034c9 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,82 +1,83 @@
[tox]
envlist=black,flake8,mypy,py3-{minimal,full}
[testenv]
extras =
full: testing
minimal: testing-minimal
deps =
pytest-cov
commands =
pytest \
--doctest-modules \
full: --cov={envsitepackagesdir}/swh/model --cov-branch {posargs} \
full: {envsitepackagesdir}/swh/model
minimal: {envsitepackagesdir}/swh/model/tests/test_cli.py -m 'not requires_optional_deps'
[testenv:py3]
skip_install = true
deps = tox
commands =
tox -e py3-full -- {posargs}
tox -e py3-minimal -- {posargs}
[testenv:black]
skip_install = true
deps =
- black==19.10b0
+ black==22.3.0
commands =
{envpython} -m black --check swh
[testenv:flake8]
skip_install = true
deps =
- flake8
+ flake8==4.0.1
+ flake8-bugbear==22.3.23
commands =
{envpython} -m flake8
[testenv:mypy]
extras =
testing
deps =
mypy==0.920
commands =
mypy swh
# Build documentation outside swh-environment using the current
# git HEAD of swh-docs; this is executed on CI for each diff to
# prevent breaking the doc build.
[testenv:sphinx]
whitelist_externals = make
usedevelop = true
extras =
testing
deps =
# fetch and install swh-docs in develop mode
-e git+https://forge.softwareheritage.org/source/swh-docs#egg=swh.docs
setenv =
SWH_PACKAGE_DOC_TOX_BUILD = 1
# turn warnings into errors
SPHINXOPTS = -W
commands =
make -I ../.tox/sphinx/src/swh-docs/swh/ -C docs
# build documentation only inside swh-environment using local state
# of swh-docs package
[testenv:sphinx-dev]
whitelist_externals = make
usedevelop = true
extras =
testing
deps =
# install swh-docs in develop mode
-e ../swh-docs
setenv =
SWH_PACKAGE_DOC_TOX_BUILD = 1
# turn warnings into errors
SPHINXOPTS = -W
commands =
make -I ../.tox/sphinx-dev/src/swh-docs/swh/ -C docs
