diff --git a/PKG-INFO b/PKG-INFO
index e87727ab..5568062e 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,153 +1,153 @@
 Metadata-Version: 2.1
 Name: swh.storage
-Version: 0.0.117
+Version: 0.0.118
 Summary: Software Heritage storage manager
 Home-page: https://forge.softwareheritage.org/diffusion/DSTO/
 Author: Software Heritage developers
 Author-email: swh-devel@inria.fr
 License: UNKNOWN
 Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
 Project-URL: Funding, https://www.softwareheritage.org/donate
 Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage
 Description: swh-storage
         ===========
         
         Abstraction layer over the archive, allowing access to all stored
         source code artifacts as well as their metadata.
         
         See the
         [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html)
         for more details.
         
         Tests
         -----
         
         Python tests for this module include tests that cannot be run without
         a local PostgreSQL database. You are not obliged to run those tests,
         though:
         
         - `make test`:      will run all tests
         - `make test-nodb`: will run only tests that do not need a local DB
         - `make test-db`:   will run only tests that do need a local DB
         
         If you do want to run DB-related tests, you should ensure you have
         access, with sufficient privileges, to a PostgreSQL database.
         
         ### Using your system database
         
         You need to ensure that your user is authorized to create and drop
         DBs, in particular DBs named "softwareheritage-test" and
         "softwareheritage-dev".
         
         Note: the testdata repository (swh-storage-testdata) is not required any more.
         
         ### Using pifpaf
         
         [pifpaf](https://github.com/jd/pifpaf) is a suite of fixtures and a
         command-line tool that allows starting and stopping daemons for quick,
         throw-away usage.
         
         It can be used to run tests that need a PostgreSQL database without
         any further configuration, and without special access to a running
         database:
         
         ```bash
         
         $ pifpaf run postgresql make test-db
         [snip]
         ----------------------------------------------------------------------
         Ran 124 tests in 56.203s
         
         OK
         ```
         
         Note that pifpaf is not yet available as a Debian package, so you may have to
         install it in a venv.
         
         
         Development
         -----------
         
         A test server can be run locally for tests.
         
         ### Sample configuration
         
         In either /etc/softwareheritage/storage/storage.yml,
         ~/.config/swh/storage.yml or ~/.swh/storage.yml:
         
         ```
         storage:
           cls: local
           args:
             db: "dbname=softwareheritage-dev user=<user>"
             objstorage:
               cls: pathslicing
               args:
                 root: /home/storage/swh-storage/
                 slicing: 0:2/2:4/4:6
         ```
         
         This configuration uses:
         
         - a local storage instance whose DB connection points to the local
           softwareheritage-dev instance
         
         - a local pathslicing objstorage instance whose:
         
           - root path is /home/storage/swh-storage
         
           - slicing scheme is 0:2/2:4/4:6. This means that a content's
             identifier (its sha1) determines where it is stored on disk: the
             first directory level is named after its first 2 hex characters,
             the second level after the next 2, and the third level after the
             next 2; the file holding the raw content is then named after the
             complete hash. For example, 00062f8bd330715c4f819373653d97b3cd34394c
             will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c
         
         Note that the 'root' path should exist on disk.
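         
         To illustrate, here is a minimal sketch of the path computation (a
         hypothetical `content_path` helper written for this README, not the
         actual swh-objstorage implementation):
         
         ```
         import os
         
         def content_path(root, hex_id, slicing='0:2/2:4/4:6'):
             """Compute the on-disk path of a content for a slicing scheme."""
             # Each 'i:j' slice names one directory level with hex_id[i:j].
             dirs = [hex_id[int(i):int(j)]
                     for i, j in (s.split(':') for s in slicing.split('/'))]
             # The file holding the raw content is named after the full hash.
             return os.path.join(root, *dirs, hex_id)
         
         assert content_path('/home/storage/swh-storage',
                             '00062f8bd330715c4f819373653d97b3cd34394c') == (
             '/home/storage/swh-storage/00/06/2f/'
             '00062f8bd330715c4f819373653d97b3cd34394c')
         ```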
         
         
         ### Run server
         
         Command:
         ```
         python3 -m swh.storage.api.server ~/.config/swh/storage.yml
         ```
         
         This runs a local swh-storage API server on port 5002.
         
         
         ### And then what?
         
         In your upper layer (loader-git, loader-svn, etc.), you can define a
         remote storage with the following snippet of YAML configuration:
         
         ```
         storage:
           cls: remote
           args:
             url: http://localhost:5002/
         ```
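         
         To check the server from Python, the same settings can be used
         programmatically. Below is a minimal sketch, assuming the server
         above is running and that `get_storage`, swh.storage's factory
         function, takes the `cls` and `args` values straight from the YAML
         snippet:
         
         ```
         from swh.storage import get_storage
         
         storage = get_storage(cls='remote',
                               args={'url': 'http://localhost:5002/'})
         
         # An all-zero sha1 is almost surely absent from the archive, so it
         # should be reported back as missing.
         print(list(storage.content_missing_per_sha1([bytes(20)])))
         ```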
         
         You could directly define a local storage with the following snippet:
         
         ```
         storage:
           cls: local
           args:
             db: service=swh-dev
             objstorage:
               cls: pathslicing
               args:
                 root: /home/storage/swh-storage/
                 slicing: 0:2/2:4/4:6
         ```
         
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
 Classifier: Operating System :: OS Independent
 Classifier: Development Status :: 5 - Production/Stable
 Description-Content-Type: text/markdown
-Provides-Extra: listener
-Provides-Extra: schemata
 Provides-Extra: testing
+Provides-Extra: schemata
+Provides-Extra: listener
diff --git a/swh.storage.egg-info/PKG-INFO b/swh.storage.egg-info/PKG-INFO
index e87727ab..5568062e 100644
--- a/swh.storage.egg-info/PKG-INFO
+++ b/swh.storage.egg-info/PKG-INFO
@@ -1,153 +1,153 @@
 Metadata-Version: 2.1
 Name: swh.storage
-Version: 0.0.117
+Version: 0.0.118
 Summary: Software Heritage storage manager
 Home-page: https://forge.softwareheritage.org/diffusion/DSTO/
 Author: Software Heritage developers
 Author-email: swh-devel@inria.fr
 License: UNKNOWN
 Project-URL: Bug Reports, https://forge.softwareheritage.org/maniphest
 Project-URL: Funding, https://www.softwareheritage.org/donate
 Project-URL: Source, https://forge.softwareheritage.org/source/swh-storage
 Description: swh-storage
         ===========
         
         Abstraction layer over the archive, allowing access to all stored
         source code artifacts as well as their metadata.
         
         See the
         [documentation](https://docs.softwareheritage.org/devel/swh-storage/index.html)
         for more details.
         
         Tests
         -----
         
         Python tests for this module include tests that cannot be run without
         a local PostgreSQL database. You are not obliged to run those tests,
         though:
         
         - `make test`:      will run all tests
         - `make test-nodb`: will run only tests that do not need a local DB
         - `make test-db`:   will run only tests that do need a local DB
         
         If you do want to run DB-related tests, you should ensure you have
         access, with sufficient privileges, to a PostgreSQL database.
         
         ### Using your system database
         
         You need to ensure that your user is authorized to create and drop
         DBs, in particular DBs named "softwareheritage-test" and
         "softwareheritage-dev".
         
         Note: the testdata repository (swh-storage-testdata) is not required any more.
         
         ### Using pifpaf
         
         [pifpaf](https://github.com/jd/pifpaf) is a suite of fixtures and a
         command-line tool that allows starting and stopping daemons for quick,
         throw-away usage.
         
         It can be used to run tests that need a PostgreSQL database without
         any further configuration, and without special access to a running
         database:
         
         ```bash
         
         $ pifpaf run postgresql make test-db
         [snip]
         ----------------------------------------------------------------------
         Ran 124 tests in 56.203s
         
         OK
         ```
         
         Note that pifpaf is not yet available as a Debian package, so you may have to
         install it in a venv.
         
         
         Development
         -----------
         
         A test server can be run locally for tests.
         
         ### Sample configuration
         
         In either /etc/softwareheritage/storage/storage.yml,
         ~/.config/swh/storage.yml or ~/.swh/storage.yml:
         
         ```
         storage:
           cls: local
           args:
             db: "dbname=softwareheritage-dev user=<user>"
             objstorage:
               cls: pathslicing
               args:
                 root: /home/storage/swh-storage/
                 slicing: 0:2/2:4/4:6
         ```
         
         This configuration uses:
         
         - a local storage instance whose DB connection points to the local
           softwareheritage-dev instance
         
         - a local pathslicing objstorage instance whose:
         
           - root path is /home/storage/swh-storage
         
           - slicing scheme is 0:2/2:4/4:6. This means that a content's
             identifier (its sha1) determines where it is stored on disk: the
             first directory level is named after its first 2 hex characters,
             the second level after the next 2, and the third level after the
             next 2; the file holding the raw content is then named after the
             complete hash. For example, 00062f8bd330715c4f819373653d97b3cd34394c
             will be stored at 00/06/2f/00062f8bd330715c4f819373653d97b3cd34394c
         
         Note that the 'root' path should exist on disk.
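         
         To illustrate, here is a minimal sketch of the path computation (a
         hypothetical `content_path` helper written for this README, not the
         actual swh-objstorage implementation):
         
         ```
         import os
         
         def content_path(root, hex_id, slicing='0:2/2:4/4:6'):
             """Compute the on-disk path of a content for a slicing scheme."""
             # Each 'i:j' slice names one directory level with hex_id[i:j].
             dirs = [hex_id[int(i):int(j)]
                     for i, j in (s.split(':') for s in slicing.split('/'))]
             # The file holding the raw content is named after the full hash.
             return os.path.join(root, *dirs, hex_id)
         
         assert content_path('/home/storage/swh-storage',
                             '00062f8bd330715c4f819373653d97b3cd34394c') == (
             '/home/storage/swh-storage/00/06/2f/'
             '00062f8bd330715c4f819373653d97b3cd34394c')
         ```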
         
         
         ### Run server
         
         Command:
         ```
         python3 -m swh.storage.api.server ~/.config/swh/storage.yml
         ```
         
         This runs a local swh-storage API server on port 5002.
         
         
         ### And then what?
         
         In your upper layer (loader-git, loader-svn, etc.), you can define a
         remote storage with the following snippet of YAML configuration:
         
         ```
         storage:
           cls: remote
           args:
             url: http://localhost:5002/
         ```
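         
         To check the server from Python, the same settings can be used
         programmatically. Below is a minimal sketch, assuming the server
         above is running and that `get_storage`, swh.storage's factory
         function, takes the `cls` and `args` values straight from the YAML
         snippet:
         
         ```
         from swh.storage import get_storage
         
         storage = get_storage(cls='remote',
                               args={'url': 'http://localhost:5002/'})
         
         # An all-zero sha1 is almost surely absent from the archive, so it
         # should be reported back as missing.
         print(list(storage.content_missing_per_sha1([bytes(20)])))
         ```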
         
         You could directly define a local storage with the following snippet:
         
         ```
         storage:
           cls: local
           args:
             db: service=swh-dev
             objstorage:
               cls: pathslicing
               args:
                 root: /home/storage/swh-storage/
                 slicing: 0:2/2:4/4:6
         ```
         
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
 Classifier: Operating System :: OS Independent
 Classifier: Development Status :: 5 - Production/Stable
 Description-Content-Type: text/markdown
-Provides-Extra: listener
-Provides-Extra: schemata
 Provides-Extra: testing
+Provides-Extra: schemata
+Provides-Extra: listener
diff --git a/swh.storage.egg-info/requires.txt b/swh.storage.egg-info/requires.txt
index ac81d8c1..ac46d3fe 100644
--- a/swh.storage.egg-info/requires.txt
+++ b/swh.storage.egg-info/requires.txt
@@ -1,20 +1,20 @@
-aiohttp
 click
 flask
 psycopg2
 python-dateutil
+vcversioner
+aiohttp
 swh.core>=0.0.48
 swh.model>=0.0.27
 swh.objstorage>=0.0.17
 swh.scheduler>=0.0.14
-vcversioner
 
 [listener]
 kafka_python
 
 [schemata]
 SQLAlchemy
 
 [testing]
 hypothesis>=3.11.0
 pytest
diff --git a/swh/storage/db.py b/swh/storage/db.py
index 9cb3ec7e..62541fac 100644
--- a/swh/storage/db.py
+++ b/swh/storage/db.py
@@ -1,1059 +1,1039 @@
 # Copyright (C) 2015-2018  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import binascii
 import datetime
 import enum
 import functools
 import json
 import os
 import select
 import threading
 
 from contextlib import contextmanager
 
 import psycopg2
 import psycopg2.extras
 
 from .db_utils import execute_values_generator
 
 TMP_CONTENT_TABLE = 'tmp_content'
 
 
 psycopg2.extras.register_uuid()
 
 
 def stored_procedure(stored_proc):
     """decorator to execute remote stored procedure, specified as argument
 
     Generally, the body of the decorated function should be empty. If it is
     not, the stored procedure will be executed first, then the function body.
 
     """
     def wrap(meth):
         @functools.wraps(meth)
         def _meth(self, *args, **kwargs):
             cur = kwargs.get('cur', None)
             self._cursor(cur).execute('SELECT %s()' % stored_proc)
             meth(self, *args, **kwargs)
         return _meth
     return wrap
 
 
 def jsonize(value):
     """Convert a value to a psycopg2 JSON object if necessary"""
     if isinstance(value, dict):
         return psycopg2.extras.Json(value)
 
     return value
 
 
 def entry_to_bytes(entry):
     """Convert an entry coming from the database to bytes"""
     if isinstance(entry, memoryview):
         return entry.tobytes()
     if isinstance(entry, list):
         return [entry_to_bytes(value) for value in entry]
     return entry
 
 
 def line_to_bytes(line):
     """Convert a line coming from the database to bytes"""
     if not line:
         return line
     if isinstance(line, dict):
         return {k: entry_to_bytes(v) for k, v in line.items()}
     return line.__class__(entry_to_bytes(entry) for entry in line)
 
 
 def cursor_to_bytes(cursor):
     """Yield all the data from a cursor as bytes"""
     yield from (line_to_bytes(line) for line in cursor)
 
 
 def execute_values_to_bytes(*args, **kwargs):
     """Like execute_values_generator, but converts each line to bytes"""
     for line in execute_values_generator(*args, **kwargs):
         yield line_to_bytes(line)
 
 
 class BaseDb:
     """Base class for swh.storage.*Db.
 
     cf. swh.storage.db.Db, swh.archiver.db.ArchiverDb
 
     """
 
     @classmethod
     def connect(cls, *args, **kwargs):
         """factory method to create a DB proxy
 
         Accepts all arguments of psycopg2.connect; only some specific
         options are documented below.
 
         Args:
             connstring: libpq connection string
 
         """
         conn = psycopg2.connect(*args, **kwargs)
         return cls(conn)
 
     @classmethod
     def from_pool(cls, pool):
         return cls(pool.getconn(), pool=pool)
 
     def _cursor(self, cur_arg):
         """get a cursor: from cur_arg if given, or a fresh one otherwise
 
         meant to avoid boilerplate if/then/else in methods that proxy stored
         procedures
 
         """
         if cur_arg is not None:
             return cur_arg
         # elif self.cur is not None:
         #     return self.cur
         else:
             return self.conn.cursor()
 
     def __init__(self, conn, pool=None):
         """create a DB proxy
 
         Args:
             conn: psycopg2 connection to the SWH DB
             pool: psycopg2 pool of connections
 
         """
         self.conn = conn
         self.pool = pool
 
     def __del__(self):
         if self.pool:
             self.pool.putconn(self.conn)
 
     @contextmanager
     def transaction(self):
         """context manager to execute within a DB transaction
 
         Yields:
             a psycopg2 cursor
 
         """
         with self.conn.cursor() as cur:
             try:
                 yield cur
                 self.conn.commit()
             except Exception:
                 if not self.conn.closed:
                     self.conn.rollback()
                 raise
 
     def copy_to(self, items, tblname, columns, cur=None, item_cb=None):
         """Copy items' entries to table tblname with columns information.
 
         Args:
             items (iterable): iterable of dictionaries of data to copy
               into tblname
             tblname (str): destination table's name
             columns ([str]): keys to access data in items and also the
               column names in the destination table.
             item_cb (fn): optional function to apply to each item
 
         """
         def escape(data):
             if data is None:
                 return ''
             if isinstance(data, bytes):
                 return '\\x%s' % binascii.hexlify(data).decode('ascii')
             elif isinstance(data, str):
                 return '"%s"' % data.replace('"', '""')
             elif isinstance(data, datetime.datetime):
                 # We escape twice to make sure the string generated by
                 # isoformat gets escaped
                 return escape(data.isoformat())
             elif isinstance(data, dict):
                 return escape(json.dumps(data))
             elif isinstance(data, list):
                 return escape("{%s}" % ','.join(escape(d) for d in data))
             elif isinstance(data, psycopg2.extras.Range):
                 # We escape twice here too, so that we make sure
                 # everything gets passed to copy properly
                 return escape(
                     '%s%s,%s%s' % (
                         '[' if data.lower_inc else '(',
                         '-infinity' if data.lower_inf else escape(data.lower),
                         'infinity' if data.upper_inf else escape(data.upper),
                         ']' if data.upper_inc else ')',
                     )
                 )
             elif isinstance(data, enum.IntEnum):
                 return escape(int(data))
             else:
                 # We don't escape here to make sure we pass literals properly
                 return str(data)
 
         read_file, write_file = os.pipe()
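         # A writer thread feeds the read end of the pipe to Postgres via
         # COPY FROM STDIN, while the current thread formats each item as a
         # CSV row into the write end; rows are thus streamed rather than
         # materialized in memory all at once.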
 
         def writer():
             cursor = self._cursor(cur)
             with open(read_file, 'r') as f:
                 cursor.copy_expert('COPY %s (%s) FROM STDIN CSV' % (
                     tblname, ', '.join(columns)), f)
 
         write_thread = threading.Thread(target=writer)
         write_thread.start()
 
         try:
             with open(write_file, 'w') as f:
                 for d in items:
                     if item_cb is not None:
                         item_cb(d)
                     line = [escape(d.get(k)) for k in columns]
                     f.write(','.join(line))
                     f.write('\n')
         finally:
             # No problem bubbling up exceptions, but we still need to make sure
             # we finish copying, even though we're probably going to cancel the
             # transaction.
             write_thread.join()
 
     def mktemp(self, tblname, cur=None):
         self._cursor(cur).execute('SELECT swh_mktemp(%s)', (tblname,))
 
 
 class Db(BaseDb):
     """Proxy to the SWH DB, with wrappers around stored procedures
 
     """
     def mktemp_dir_entry(self, entry_type, cur=None):
         self._cursor(cur).execute('SELECT swh_mktemp_dir_entry(%s)',
                                   (('directory_entry_%s' % entry_type),))
 
     @stored_procedure('swh_mktemp_revision')
     def mktemp_revision(self, cur=None): pass
 
     @stored_procedure('swh_mktemp_release')
     def mktemp_release(self, cur=None): pass
 
     @stored_procedure('swh_mktemp_snapshot_branch')
     def mktemp_snapshot_branch(self, cur=None): pass
 
     def register_listener(self, notify_queue, cur=None):
         """Register a listener for NOTIFY queue `notify_queue`"""
         self._cursor(cur).execute("LISTEN %s" % notify_queue)
 
     def listen_notifies(self, timeout):
         """Listen to notifications for `timeout` seconds"""
         if select.select([self.conn], [], [], timeout) == ([], [], []):
             return
         else:
             self.conn.poll()
             while self.conn.notifies:
                 yield self.conn.notifies.pop(0)
 
     @stored_procedure('swh_content_add')
     def content_add_from_temp(self, cur=None): pass
 
     @stored_procedure('swh_directory_add')
     def directory_add_from_temp(self, cur=None): pass
 
     @stored_procedure('swh_skipped_content_add')
     def skipped_content_add_from_temp(self, cur=None): pass
 
     @stored_procedure('swh_revision_add')
     def revision_add_from_temp(self, cur=None): pass
 
     @stored_procedure('swh_release_add')
     def release_add_from_temp(self, cur=None): pass
 
     def content_update_from_temp(self, keys_to_update, cur=None):
         cur = self._cursor(cur)
         cur.execute("""select swh_content_update(ARRAY[%s] :: text[])""" %
                     keys_to_update)
 
     content_get_metadata_keys = [
         'sha1', 'sha1_git', 'sha256', 'blake2s256', 'length', 'status']
 
     skipped_content_keys = [
         'sha1', 'sha1_git', 'sha256', 'blake2s256',
         'length', 'reason', 'status', 'origin']
 
     def content_get_metadata_from_sha1s(self, sha1s, cur=None):
         cur = self._cursor(cur)
 
         yield from execute_values_to_bytes(
             cur, """
             select t.sha1, %s from (values %%s) as t (sha1)
             left join content using (sha1)
             """ % ', '.join(self.content_get_metadata_keys[1:]),
             ((sha1,) for sha1 in sha1s),
         )
 
     def content_get_range(self, start, end, limit=None, cur=None):
         """Retrieve contents within range [start, end].
 
         """
         cur = self._cursor(cur)
         query = """select %s from content
                    where %%s <= sha1 and sha1 <= %%s
                    order by sha1
                    limit %%s""" % ', '.join(self.content_get_metadata_keys)
         cur.execute(query, (start, end, limit))
         yield from cursor_to_bytes(cur)
 
     content_hash_keys = ['sha1', 'sha1_git', 'sha256', 'blake2s256']
 
     def content_missing_from_list(self, contents, cur=None):
         cur = self._cursor(cur)
 
         keys = ', '.join(self.content_hash_keys)
         equality = ' AND '.join(
             ('t.%s = c.%s' % (key, key))
             for key in self.content_hash_keys
         )
 
         yield from execute_values_to_bytes(
             cur, """
             SELECT %s
             FROM (VALUES %%s) as t(%s)
             WHERE NOT EXISTS (
                 SELECT 1 FROM content c
                 WHERE %s
             )
             """ % (keys, keys, equality),
             (tuple(c[key] for key in self.content_hash_keys) for c in contents)
         )
 
     def content_missing_per_sha1(self, sha1s, cur=None):
         cur = self._cursor(cur)
 
         yield from execute_values_to_bytes(cur, """
         SELECT t.sha1 FROM (VALUES %s) AS t(sha1)
         WHERE NOT EXISTS (
             SELECT 1 FROM content c WHERE c.sha1 = t.sha1
         )""", ((sha1,) for sha1 in sha1s))
 
     def skipped_content_missing_from_temp(self, cur=None):
         cur = self._cursor(cur)
 
         cur.execute("""SELECT sha1, sha1_git, sha256, blake2s256
                        FROM swh_skipped_content_missing()""")
 
         yield from cursor_to_bytes(cur)
 
     def snapshot_exists(self, snapshot_id, cur=None):
         """Check whether a snapshot with the given id exists"""
         cur = self._cursor(cur)
 
         cur.execute("""SELECT 1 FROM snapshot where id=%s""", (snapshot_id,))
 
         return bool(cur.fetchone())
 
     def snapshot_add(self, origin, visit, snapshot_id, cur=None):
         """Add a snapshot for origin/visit from the temporary table"""
         cur = self._cursor(cur)
 
         cur.execute("""SELECT swh_snapshot_add(%s, %s, %s)""",
                     (origin, visit, snapshot_id))
 
     snapshot_count_cols = ['target_type', 'count']
 
     def snapshot_count_branches(self, snapshot_id, cur=None):
         cur = self._cursor(cur)
         query = """\
            SELECT %s FROM swh_snapshot_count_branches(%%s)
         """ % ', '.join(self.snapshot_count_cols)
 
         cur.execute(query, (snapshot_id,))
 
         yield from cursor_to_bytes(cur)
 
     snapshot_get_cols = ['snapshot_id', 'name', 'target', 'target_type']
 
     def snapshot_get_by_id(self, snapshot_id, branches_from=b'',
                            branches_count=None, target_types=None,
                            cur=None):
         cur = self._cursor(cur)
         query = """\
            SELECT %s
            FROM swh_snapshot_get_by_id(%%s, %%s, %%s, %%s :: snapshot_target[])
         """ % ', '.join(self.snapshot_get_cols)
 
         cur.execute(query, (snapshot_id, branches_from, branches_count,
                             target_types))
 
         yield from cursor_to_bytes(cur)
 
     def snapshot_get_by_origin_visit(self, origin_id, visit_id, cur=None):
         cur = self._cursor(cur)
         query = """\
            SELECT swh_snapshot_get_by_origin_visit(%s, %s)
         """
 
         cur.execute(query, (origin_id, visit_id))
         ret = cur.fetchone()
         if ret:
             return line_to_bytes(ret)[0]
 
     content_find_cols = ['sha1', 'sha1_git', 'sha256', 'blake2s256', 'length',
                          'ctime', 'status']
 
     def content_find(self, sha1=None, sha1_git=None, sha256=None,
                      blake2s256=None, cur=None):
         """Find the content optionally on a combination of the following
         checksums sha1, sha1_git, sha256 or blake2s256.
 
         Args:
             sha1: sha1 content
             git_sha1: the sha1 computed `a la git` sha1 of the content
             sha256: sha256 content
             blake2s256: blake2s256 content
 
         Returns:
             The tuple (sha1, sha1_git, sha256, blake2s256) if found or None.
 
         """
         cur = self._cursor(cur)
 
         cur.execute("""SELECT %s
                        FROM swh_content_find(%%s, %%s, %%s, %%s)
                        LIMIT 1""" % ','.join(self.content_find_cols),
                     (sha1, sha1_git, sha256, blake2s256))
 
         content = line_to_bytes(cur.fetchone())
         if set(content) == {None}:
             return None
         else:
             return content
 
     def directory_missing_from_list(self, directories, cur=None):
         cur = self._cursor(cur)
         yield from execute_values_to_bytes(
             cur, """
             SELECT id FROM (VALUES %s) as t(id)
             WHERE NOT EXISTS (
                 SELECT 1 FROM directory d WHERE d.id = t.id
             )
             """, ((id,) for id in directories))
 
     directory_ls_cols = ['dir_id', 'type', 'target', 'name', 'perms',
                          'status', 'sha1', 'sha1_git', 'sha256', 'length']
 
     def directory_walk_one(self, directory, cur=None):
         cur = self._cursor(cur)
         cols = ', '.join(self.directory_ls_cols)
         query = 'SELECT %s FROM swh_directory_walk_one(%%s)' % cols
         cur.execute(query, (directory,))
         yield from cursor_to_bytes(cur)
 
     def directory_walk(self, directory, cur=None):
         cur = self._cursor(cur)
         cols = ', '.join(self.directory_ls_cols)
         query = 'SELECT %s FROM swh_directory_walk(%%s)' % cols
         cur.execute(query, (directory,))
         yield from cursor_to_bytes(cur)
 
     def directory_entry_get_by_path(self, directory, paths, cur=None):
         """Retrieve a directory entry by path.
 
         """
         cur = self._cursor(cur)
 
         cols = ', '.join(self.directory_ls_cols)
         query = (
             'SELECT %s FROM swh_find_directory_entry_by_path(%%s, %%s)' % cols)
         cur.execute(query, (directory, paths))
 
         data = cur.fetchone()
         if set(data) == {None}:
             return None
         return line_to_bytes(data)
 
     def revision_missing_from_list(self, revisions, cur=None):
         cur = self._cursor(cur)
 
         yield from execute_values_to_bytes(
             cur, """
             SELECT id FROM (VALUES %s) as t(id)
             WHERE NOT EXISTS (
                 SELECT 1 FROM revision r WHERE r.id = t.id
             )
             """, ((id,) for id in revisions))
 
     revision_add_cols = [
         'id', 'date', 'date_offset', 'date_neg_utc_offset', 'committer_date',
         'committer_date_offset', 'committer_date_neg_utc_offset', 'type',
         'directory', 'message', 'author_fullname', 'author_name',
         'author_email', 'committer_fullname', 'committer_name',
         'committer_email', 'metadata', 'synthetic',
     ]
 
     revision_get_cols = revision_add_cols + [
         'author_id', 'committer_id', 'parents']
 
     def origin_visit_add(self, origin, ts, cur=None):
         """Add a new origin_visit for origin origin at timestamp ts with
         status 'ongoing'.
 
         Args:
             origin: origin concerned by the visit
             ts: the date of the visit
 
         Returns:
             The id of the new visit for that origin
 
         """
         cur = self._cursor(cur)
         cur.execute('SELECT swh_origin_visit_add(%s, %s)',
                     (origin, ts))
         return cur.fetchone()[0]
 
     def origin_visit_update(self, origin, visit_id, status,
                             metadata, cur=None):
         """Update origin_visit's status."""
         cur = self._cursor(cur)
         update = """UPDATE origin_visit
                     SET status=%s, metadata=%s
                     WHERE origin=%s AND visit=%s"""
         cur.execute(update, (status, jsonize(metadata), origin, visit_id))
 
     origin_visit_get_cols = ['origin', 'visit', 'date', 'status', 'metadata',
                              'snapshot']
 
     def origin_visit_get_all(self, origin_id,
                              last_visit=None, limit=None, cur=None):
         """Retrieve all visits for origin with id origin_id.
 
         Args:
             origin_id: the origin's identifier
 
         Yields:
             The visits for that origin
 
         """
         cur = self._cursor(cur)
 
         if last_visit:
             extra_condition = 'and visit > %s'
             args = (origin_id, last_visit, limit)
         else:
             extra_condition = ''
             args = (origin_id, limit)
 
         query = """\
         SELECT %s,
             (select id from snapshot where object_id = snapshot_id) as snapshot
         FROM origin_visit
         WHERE origin=%%s %s
         order by visit asc
         limit %%s""" % (
             ', '.join(self.origin_visit_get_cols[:-1]), extra_condition
         )
 
         cur.execute(query, args)
 
         yield from cursor_to_bytes(cur)
 
     def origin_visit_get(self, origin_id, visit_id, cur=None):
         """Retrieve information on visit visit_id of origin origin_id.
 
         Args:
             origin_id: the origin concerned
             visit_id: the identifier of the visit for that origin
 
         Returns:
             The origin_visit information
 
         """
         cur = self._cursor(cur)
 
         query = """\
             SELECT %s,
                 (select id from snapshot where object_id = snapshot_id)
                 as snapshot
             FROM origin_visit
             WHERE origin = %%s AND visit = %%s
             """ % (', '.join(self.origin_visit_get_cols[:-1]))
 
         cur.execute(query, (origin_id, visit_id))
         r = cur.fetchall()
         if not r:
             return None
         return line_to_bytes(r[0])
 
     def origin_visit_exists(self, origin_id, visit_id, cur=None):
         """Check whether an origin visit with the given ids exists"""
         cur = self._cursor(cur)
 
         query = "SELECT 1 FROM origin_visit where origin = %s AND visit = %s"
 
         cur.execute(query, (origin_id, visit_id))
 
         return bool(cur.fetchone())
 
     def origin_visit_get_latest_snapshot(self, origin_id,
                                          allowed_statuses=None,
                                          cur=None):
         """Retrieve the most recent origin_visit which references a snapshot
 
         Args:
             origin_id: the origin concerned
             allowed_statuses: the visit statuses allowed for the returned visit
 
         Returns:
             The origin_visit information, or None if no visit matches.
         """
         cur = self._cursor(cur)
 
         extra_clause = ""
         if allowed_statuses:
             extra_clause = cur.mogrify("AND status IN %s",
                                        (tuple(allowed_statuses),)).decode()
 
         query = """\
             SELECT %s,
                 (select id from snapshot where object_id = snapshot_id)
                 as snapshot
             FROM origin_visit
             WHERE
                 origin = %%s AND snapshot_id is not null %s
             ORDER BY date DESC, visit DESC
             LIMIT 1
             """ % (', '.join(self.origin_visit_get_cols[:-1]), extra_clause)
 
         cur.execute(query, (origin_id,))
         r = cur.fetchone()
         if not r:
             return None
         return line_to_bytes(r)
 
     @staticmethod
     def mangle_query_key(key, main_table):
         if key == 'id':
             return 't.id'
         if key == 'parents':
             return '''
             ARRAY(
             SELECT rh.parent_id::bytea
             FROM revision_history rh
             WHERE rh.id = t.id
             ORDER BY rh.parent_rank
             )'''
         if '_' not in key:
             return '%s.%s' % (main_table, key)
 
         head, tail = key.split('_', 1)
         if (head in ('author', 'committer')
                 and tail in ('name', 'email', 'id', 'fullname')):
             return '%s.%s' % (head, tail)
 
         return '%s.%s' % (main_table, key)
 
     def revision_get_from_list(self, revisions, cur=None):
         cur = self._cursor(cur)
 
         query_keys = ', '.join(
             self.mangle_query_key(k, 'revision')
             for k in self.revision_get_cols
         )
 
         yield from execute_values_to_bytes(
             cur, """
             SELECT %s FROM (VALUES %%s) as t(id)
             LEFT JOIN revision ON t.id = revision.id
             LEFT JOIN person author ON revision.author = author.id
             LEFT JOIN person committer ON revision.committer = committer.id
             """ % query_keys,
             ((id,) for id in revisions))
 
     def revision_log(self, root_revisions, limit=None, cur=None):
         cur = self._cursor(cur)
 
         query = """SELECT %s
                    FROM swh_revision_log(%%s, %%s)
                 """ % ', '.join(self.revision_get_cols)
 
         cur.execute(query, (root_revisions, limit))
         yield from cursor_to_bytes(cur)
 
     revision_shortlog_cols = ['id', 'parents']
 
     def revision_shortlog(self, root_revisions, limit=None, cur=None):
         cur = self._cursor(cur)
 
         query = """SELECT %s
                    FROM swh_revision_list(%%s, %%s)
                 """ % ', '.join(self.revision_shortlog_cols)
 
         cur.execute(query, (root_revisions, limit))
         yield from cursor_to_bytes(cur)
 
     def release_missing_from_list(self, releases, cur=None):
         cur = self._cursor(cur)
         yield from execute_values_to_bytes(
             cur, """
             SELECT id FROM (VALUES %s) as t(id)
             WHERE NOT EXISTS (
                 SELECT 1 FROM release r WHERE r.id = t.id
             )
             """, ((id,) for id in releases))
 
     object_find_by_sha1_git_cols = ['sha1_git', 'type', 'id', 'object_id']
 
     def object_find_by_sha1_git(self, ids, cur=None):
         cur = self._cursor(cur)
 
         yield from execute_values_to_bytes(
             cur, """
             WITH t (id) AS (VALUES %s),
             known_objects as ((
                 select
                   id as sha1_git,
                   'release'::object_type as type,
                   id,
                   object_id
                 from release r
                 where exists (select 1 from t where t.id = r.id)
             ) union all (
                 select
                   id as sha1_git,
                   'revision'::object_type as type,
                   id,
                   object_id
                 from revision r
                 where exists (select 1 from t where t.id = r.id)
             ) union all (
                 select
                   id as sha1_git,
                   'directory'::object_type as type,
                   id,
                   object_id
                 from directory d
                 where exists (select 1 from t where t.id = d.id)
             ) union all (
                 select
                   sha1_git as sha1_git,
                   'content'::object_type as type,
                   sha1 as id,
                   object_id
                 from content c
                 where exists (select 1 from t where t.id = c.sha1_git)
             ))
             select t.id as sha1_git, k.type, k.id, k.object_id
             from t
             left join known_objects k on t.id = k.sha1_git
             """,
             ((id,) for id in ids)
         )
 
     def stat_counters(self, cur=None):
         cur = self._cursor(cur)
         cur.execute('SELECT * FROM swh_stat_counters()')
         yield from cur
 
     fetch_history_cols = ['origin', 'date', 'status', 'result', 'stdout',
                           'stderr', 'duration']
 
     def create_fetch_history(self, fetch_history, cur=None):
         """Create a fetch_history entry with the data in fetch_history"""
         cur = self._cursor(cur)
         query = '''INSERT INTO fetch_history (%s)
                    VALUES (%s) RETURNING id''' % (
             ','.join(self.fetch_history_cols),
             ','.join(['%s'] * len(self.fetch_history_cols))
         )
         cur.execute(query, [fetch_history.get(col) for col in
                             self.fetch_history_cols])
 
         return cur.fetchone()[0]
 
     def get_fetch_history(self, fetch_history_id, cur=None):
         """Get a fetch_history entry with the given id"""
         cur = self._cursor(cur)
         query = '''SELECT %s FROM fetch_history WHERE id=%%s''' % (
             ', '.join(self.fetch_history_cols),
         )
         cur.execute(query, (fetch_history_id,))
 
         data = cur.fetchone()
 
         if not data:
             return None
 
         ret = {'id': fetch_history_id}
         for i, col in enumerate(self.fetch_history_cols):
             ret[col] = data[i]
 
         return ret
 
     def update_fetch_history(self, fetch_history, cur=None):
         """Update the fetch_history entry from the data in fetch_history"""
         cur = self._cursor(cur)
         query = '''UPDATE fetch_history
                    SET %s
                    WHERE id=%%s''' % (
             ','.join('%s=%%s' % col for col in self.fetch_history_cols)
         )
         cur.execute(query, [jsonize(fetch_history.get(col)) for col in
                             self.fetch_history_cols + ['id']])
 
     def origin_add(self, type, url, cur=None):
         """Insert a new origin and return the new identifier."""
         cur = self._cursor(cur)
         insert = """INSERT INTO origin (type, url) values (%s, %s)
                     RETURNING id"""
 
         cur.execute(insert, (type, url))
         return cur.fetchone()[0]
 
     origin_cols = ['id', 'type', 'url']
 
     def origin_get_with(self, type, url, cur=None):
         """Retrieve the origin id from its type and url if found."""
         cur = self._cursor(cur)
 
         query = """SELECT %s
                    FROM origin
                    WHERE type=%%s AND url=%%s
                 """ % ','.join(self.origin_cols)
 
         cur.execute(query, (type, url))
         data = cur.fetchone()
         if data:
             return line_to_bytes(data)
         return None
 
     def origin_get(self, id, cur=None):
         """Retrieve the origin per its identifier.
 
         """
         cur = self._cursor(cur)
 
         query = """SELECT %s
                    FROM origin WHERE id=%%s
                 """ % ','.join(self.origin_cols)
 
         cur.execute(query, (id,))
         data = cur.fetchone()
         if data:
             return line_to_bytes(data)
         return None
 
     def origin_search(self, url_pattern, offset=0, limit=50,
                       regexp=False, with_visit=False, cur=None):
         """Search for origins whose urls contain a provided string pattern
         or match a provided regular expression.
         The search is performed in a case-insensitive way.
 
         Args:
             url_pattern (str): the string pattern to search for in origin urls
             offset (int): number of found origins to skip before returning
                 results
             limit (int): the maximum number of found origins to return
             regexp (bool): if True, consider the provided pattern as a regular
                 expression and return origins whose urls match it
             with_visit (bool): if True, filter out origins with no visit
 
         """
         cur = self._cursor(cur)
         origin_cols = ','.join(self.origin_cols)
         query = """SELECT %s
                    FROM origin
                    WHERE """
         if with_visit:
             query += """
                    EXISTS (SELECT 1 from origin_visit WHERE origin=origin.id)
                    AND """
         query += """
                    url %s %%s
                    ORDER BY id
                    OFFSET %%s LIMIT %%s"""
 
         if not regexp:
             query = query % (origin_cols, 'ILIKE')
             query_params = ('%'+url_pattern+'%', offset, limit)
         else:
             query = query % (origin_cols, '~*')
             query_params = (url_pattern, offset, limit)
 
         cur.execute(query, query_params)
         yield from cursor_to_bytes(cur)
 
     person_cols = ['fullname', 'name', 'email']
     person_get_cols = person_cols + ['id']
 
-    def person_add(self, person, cur=None):
-        """Add a person identified by its name and email.
-
-        Returns:
-            The new person's id
-
-        """
-        cur = self._cursor(cur)
-
-        query_new_person = '''\
-        INSERT INTO person(%s)
-        VALUES (%s)
-        RETURNING id''' % (
-            ', '.join(self.person_cols),
-            ', '.join('%s' for i in range(len(self.person_cols)))
-        )
-        cur.execute(query_new_person,
-                    [person[col] for col in self.person_cols])
-        return cur.fetchone()[0]
-
     def person_get(self, ids, cur=None):
         """Retrieve the persons identified by the list of ids.
 
         """
         cur = self._cursor(cur)
 
         query = """SELECT %s
                    FROM person
                    WHERE id IN %%s""" % ', '.join(self.person_get_cols)
 
         cur.execute(query, (tuple(ids),))
         yield from cursor_to_bytes(cur)
 
     release_add_cols = [
         'id', 'target', 'target_type', 'date', 'date_offset',
         'date_neg_utc_offset', 'name', 'comment', 'synthetic',
         'author_fullname', 'author_name', 'author_email',
     ]
     release_get_cols = release_add_cols + ['author_id']
 
     def release_get_from_list(self, releases, cur=None):
         cur = self._cursor(cur)
         query_keys = ', '.join(
             self.mangle_query_key(k, 'release')
             for k in self.release_get_cols
         )
 
         yield from execute_values_to_bytes(
             cur, """
             SELECT %s FROM (VALUES %%s) as t(id)
             LEFT JOIN release ON t.id = release.id
             LEFT JOIN person author ON release.author = author.id
             """ % query_keys,
             ((id,) for id in releases))
 
     def origin_metadata_add(self, origin, ts, provider, tool,
                             metadata, cur=None):
         """ Add an origin_metadata for the origin at ts with provider, tool and
         metadata.
 
         Args:
             origin (int): the origin's id for which the metadata is added
             ts (datetime): time when the metadata was found
             provider (int): the metadata provider identifier
             tool (int): the tool's identifier used to extract metadata
             metadata (jsonb): the metadata retrieved at the time and location
 
         Returns:
             id (int): the origin_metadata unique id
 
         """
         cur = self._cursor(cur)
         insert = """INSERT INTO origin_metadata (origin_id, discovery_date,
                     provider_id, tool_id, metadata) values (%s, %s, %s, %s, %s)
                     RETURNING id"""
         cur.execute(insert, (origin, ts, provider, tool, jsonize(metadata)))
 
         return cur.fetchone()[0]
 
     origin_metadata_get_cols = ['origin_id', 'discovery_date',
                                 'tool_id', 'metadata', 'provider_id',
                                 'provider_name', 'provider_type',
                                 'provider_url']
 
     def origin_metadata_get_by(self, origin_id, provider_type=None, cur=None):
         """Retrieve all origin_metadata entries for one origin_id
 
         """
         cur = self._cursor(cur)
         if not provider_type:
             query = '''SELECT %s
                        FROM swh_origin_metadata_get_by_origin(
                             %%s)''' % (','.join(
                                           self.origin_metadata_get_cols))
 
             cur.execute(query, (origin_id, ))
 
         else:
             query = '''SELECT %s
                        FROM swh_origin_metadata_get_by_provider_type(
                             %%s, %%s)''' % (','.join(
                                           self.origin_metadata_get_cols))
 
             cur.execute(query, (origin_id, provider_type))
 
         yield from cursor_to_bytes(cur)
 
     tool_cols = ['id', 'name', 'version', 'configuration']
 
     @stored_procedure('swh_mktemp_tool')
     def mktemp_tool(self, cur=None):
         pass
 
     def tool_add_from_temp(self, cur=None):
         cur = self._cursor(cur)
         cur.execute("SELECT %s from swh_tool_add()" % (
             ','.join(self.tool_cols), ))
         yield from cursor_to_bytes(cur)
 
     def tool_get(self, name, version, configuration, cur=None):
         cur = self._cursor(cur)
         cur.execute('''select %s
                        from tool
                        where name=%%s and
                              version=%%s and
                              configuration=%%s''' % (
                                  ','.join(self.tool_cols)),
                     (name, version, configuration))
 
         data = cur.fetchone()
         if not data:
             return None
         return line_to_bytes(data)
 
     metadata_provider_cols = ['id', 'provider_name', 'provider_type',
                               'provider_url', 'metadata']
 
     def metadata_provider_add(self, provider_name, provider_type,
                               provider_url, metadata, cur=None):
         """Insert a new provider and return the new identifier."""
         cur = self._cursor(cur)
         insert = """INSERT INTO metadata_provider (provider_name, provider_type,
                     provider_url, metadata) values (%s, %s, %s, %s)
                     RETURNING id"""
 
         cur.execute(insert, (provider_name, provider_type, provider_url,
                     jsonize(metadata)))
         return cur.fetchone()[0]
 
     def metadata_provider_get(self, provider_id, cur=None):
         cur = self._cursor(cur)
         cur.execute('''select %s
                        from metadata_provider
                        where id=%%s ''' % (
                                  ','.join(self.metadata_provider_cols)),
                     (provider_id, ))
 
         data = cur.fetchone()
         if not data:
             return None
         return line_to_bytes(data)
 
     def metadata_provider_get_by(self, provider_name, provider_url,
                                  cur=None):
         cur = self._cursor(cur)
         cur.execute('''select %s
                        from metadata_provider
                        where provider_name=%%s and
                              provider_url=%%s''' % (
                                  ','.join(self.metadata_provider_cols)),
                     (provider_name, provider_url))
 
         data = cur.fetchone()
         if not data:
             return None
         return line_to_bytes(data)
diff --git a/swh/storage/in_memory.py b/swh/storage/in_memory.py
index 44085c1f..fa6a017f 100644
--- a/swh/storage/in_memory.py
+++ b/swh/storage/in_memory.py
@@ -1,1221 +1,1262 @@
 # Copyright (C) 2015-2018  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import re
 import bisect
 import dateutil
 import collections
 from collections import defaultdict
 import copy
 import datetime
 import itertools
 import random
 import warnings
 
 from swh.model.hashutil import DEFAULT_ALGORITHMS
 from swh.model.identifiers import normalize_timestamp
 from swh.objstorage import get_objstorage
 from swh.objstorage.exc import ObjNotFoundError
 
 # Max block size of contents to return
 BULK_BLOCK_CONTENT_LEN_MAX = 10000
 
 
 def now():
     return datetime.datetime.now(tz=datetime.timezone.utc)
 
 
 class Storage:
     def __init__(self):
         self._contents = {}
         self._content_indexes = defaultdict(lambda: defaultdict(set))
 
         self._directories = {}
         self._revisions = {}
         self._releases = {}
         self._snapshots = {}
         self._origins = []
         self._origin_visits = []
+        self._persons = []
         self._origin_metadata = defaultdict(list)
         self._tools = {}
         self._metadata_providers = {}
         self._objects = defaultdict(list)
 
         # ideally we would want a skip list for both fast inserts and searches
         self._sorted_sha1s = []
 
         self.objstorage = get_objstorage('memory', {})
 
     def check_config(self, *, check_write):
         """Check that the storage is configured and ready to go."""
         return True
 
     def content_add(self, contents):
         """Add content blobs to the storage
 
         Args:
             contents (iterable): iterable of dictionaries representing
                 individual pieces of content to add. Each dictionary has the
                 following keys:
 
                 - data (bytes): the actual content
                 - length (int): content length (default: -1)
                 - one key for each checksum algorithm in
                   :data:`swh.model.hashutil.DEFAULT_ALGORITHMS`, mapped to the
                   corresponding checksum
                 - status (str): one of visible, hidden, absent
                 - reason (str): if status = absent, the reason why
                 - origin (int): if status = absent, the origin we saw the
                   content in
 
         """
         for content in contents:
             key = self._content_key(content)
             if key in self._contents:
                 continue
             for algorithm in DEFAULT_ALGORITHMS:
                 if content[algorithm] in self._content_indexes[algorithm]:
                     from . import HashCollision
                     raise HashCollision(algorithm, content[algorithm], key)
             for algorithm in DEFAULT_ALGORITHMS:
                 self._content_indexes[algorithm][content[algorithm]].add(key)
             self._objects[content['sha1_git']].append(
                 ('content', content['sha1']))
             self._contents[key] = copy.deepcopy(content)
             self._contents[key]['ctime'] = now()
             bisect.insort(self._sorted_sha1s, content['sha1'])
             if self._contents[key]['status'] == 'visible':
                 content_data = self._contents[key].pop('data')
                 self.objstorage.add(content_data, content['sha1'])
 
     def content_get(self, ids):
         """Retrieve in bulk contents and their data.
 
         This function may yield more blobs than provided sha1 identifiers,
         in case they collide.
 
         Args:
             ids: iterable of sha1 identifiers
 
         Yields:
             Dict[str, bytes]: Generates streams of contents as dict with their
                 raw data:
 
                 - sha1 (bytes): content id
                 - data (bytes): content's raw data
 
         Raises:
             ValueError: when more than BULK_BLOCK_CONTENT_LEN_MAX contents
                 are requested
 
         """
         # FIXME: Make this method support slicing the `data`.
         if len(ids) > BULK_BLOCK_CONTENT_LEN_MAX:
             raise ValueError(
                 "Sending at most %s contents." % BULK_BLOCK_CONTENT_LEN_MAX)
         for obj_id in ids:
             try:
                 data = self.objstorage.get(obj_id)
             except ObjNotFoundError:
                 yield None
                 continue
 
             yield {'sha1': obj_id, 'data': data}
 
     def content_get_range(self, start, end, limit=1000, db=None, cur=None):
         """Retrieve contents within range [start, end] bound by limit.
 
         Note that this function may return more than one blob per hash. The
         limit is enforced with multiplicity (ie. two blobs with the same hash
         will count twice toward the limit).
 
         Args:
             **start** (bytes): Starting identifier range (expected smaller
                            than end)
             **end** (bytes): Ending identifier range (expected larger
                              than start)
             **limit** (int): Limit result (default to 1000)
 
         Returns:
             a dict with keys:
             - contents [dict]: iterable of contents within the range.
             - next (bytes): the sha1 of the next content to fetch if more
               contents remain in the range, None otherwise
 
         """
         if limit is None:
             raise ValueError('Development error: limit should not be None')
         from_index = bisect.bisect_left(self._sorted_sha1s, start)
         sha1s = itertools.islice(self._sorted_sha1s, from_index, None)
         sha1s = ((sha1, content_key)
                  for sha1 in sha1s
                  for content_key in self._content_indexes['sha1'][sha1])
         matched = []
         next_content = None
         for sha1, key in sha1s:
             if sha1 > end:
                 break
             if len(matched) >= limit:
                 next_content = sha1
                 break
             matched.append({
                 **self._contents[key],
             })
         return {
             'contents': matched,
             'next': next_content,
         }
 
     def content_get_metadata(self, sha1s):
         """Retrieve content metadata in bulk
 
         Args:
             sha1s: iterable of content identifiers (sha1)
 
         Returns:
             an iterable with content metadata corresponding to the given ids
         """
         # FIXME: the return value should be a mapping from search key to found
         # content*s*
         for sha1 in sha1s:
             if sha1 in self._content_indexes['sha1']:
                 objs = self._content_indexes['sha1'][sha1]
                 # FIXME: rather than selecting one of the objects with that
                 # hash, we should return all of them. See:
                 # https://forge.softwareheritage.org/D645?id=1994#inline-3389
                 key = random.sample(objs, 1)[0]
                 data = copy.deepcopy(self._contents[key])
                 data.pop('ctime')
                 yield data
             else:
                 # FIXME: should really be None
                 yield {
                     'sha1': sha1,
                     'sha1_git': None,
                     'sha256': None,
                     'blake2s256': None,
                     'length': None,
                     'status': None,
                 }
 
     def content_find(self, content):
         if not set(content).intersection(DEFAULT_ALGORITHMS):
             raise ValueError('content keys must contain at least one of: '
                              '%s' % ', '.join(sorted(DEFAULT_ALGORITHMS)))
         found = []
         for algo in DEFAULT_ALGORITHMS:
             hash = content.get(algo)
             if hash and hash in self._content_indexes[algo]:
                 found.append(self._content_indexes[algo][hash])
         if not found:
             return
         keys = list(set.intersection(*found))
 
         # FIXME: should really be a list of all the objects found
         return copy.deepcopy(self._contents[keys[0]])
 
     def content_missing(self, contents, key_hash='sha1'):
         """List content missing from storage
 
         Args:
             contents ([dict]): iterable of dictionaries whose keys are
                                either 'length' or an item of
                                :data:`swh.model.hashutil.ALGORITHMS`,
                                mapped to the corresponding checksum
                                (or length).
 
             key_hash (str): name of the column to use as the hash id in the
                             result (default: 'sha1')
 
         Returns:
             iterable ([bytes]): missing content ids (as per the
             key_hash column)
         """
         for content in contents:
             for (algo, hash_) in content.items():
                 if algo not in DEFAULT_ALGORITHMS:
                     continue
                 if hash_ not in self._content_indexes.get(algo, []):
                     yield content[key_hash]
                     break
             else:
                 # content_find cannot return None here, because we checked
                 # above that there is a content with matching hashes.
                 if self.content_find(content)['status'] == 'missing':
                     yield content[key_hash]
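
     # Illustrative sketch: content_missing is typically used to filter a
     # batch before ingestion (``contents`` is a hypothetical iterable of
     # content dicts carrying the usual checksum keys):
     #
     #     missing_sha1s = set(storage.content_missing(contents))
     #     to_add = [c for c in contents if c['sha1'] in missing_sha1s]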
 
     def content_missing_per_sha1(self, contents):
         """List content missing from storage based only on sha1.
 
         Args:
             contents: Iterable of sha1s to check for absence.
 
         Returns:
             iterable: missing ids
 
         Raises:
             TODO: an exception when we get a hash collision.
 
         """
         for content in contents:
             if content not in self._content_indexes['sha1']:
                 yield content
 
     def directory_add(self, directories):
         """Add directories to the storage
 
         Args:
             directories (iterable): iterable of dictionaries representing the
                 individual directories to add. Each dict has the following
                 keys:
 
                 - id (sha1_git): the id of the directory to add
                 - entries (list): list of dicts for each entry in the
                       directory.  Each dict has the following keys:
 
                       - name (bytes)
                       - type (one of 'file', 'dir', 'rev'): type of the
                         directory entry (file, directory, revision)
                       - target (sha1_git): id of the object pointed at by the
                         directory entry
                       - perms (int): entry permissions
         """
         for directory in directories:
             if directory['id'] not in self._directories:
                 self._directories[directory['id']] = copy.deepcopy(directory)
                 self._objects[directory['id']].append(
                     ('directory', directory['id']))
 
     def directory_missing(self, directory_ids):
         """List directories missing from storage
 
         Args:
             directory_ids (iterable): an iterable of directory ids
 
         Yields:
             missing directory ids
 
         """
         for id in directory_ids:
             if id not in self._directories:
                 yield id
 
     def _join_dentry_to_content(self, dentry):
         keys = (
             'status',
             'sha1',
             'sha1_git',
             'sha256',
             'length',
         )
         ret = dict.fromkeys(keys)
         ret.update(dentry)
         if ret['type'] == 'file':
             content = self.content_find({'sha1_git': ret['target']})
             if content:
                 for key in keys:
                     ret[key] = content[key]
         return ret
 
     def _directory_ls(self, directory_id, recursive, prefix=b''):
         if directory_id in self._directories:
             for entry in self._directories[directory_id]['entries']:
                 ret = self._join_dentry_to_content(entry)
                 ret['name'] = prefix + ret['name']
                 ret['dir_id'] = directory_id
                 yield ret
                 if recursive and ret['type'] == 'dir':
                     yield from self._directory_ls(
                         ret['target'], True, prefix + ret['name'] + b'/')
 
     def directory_ls(self, directory_id, recursive=False):
         """Get entries for one directory.
 
         Args:
             - directory_id: the directory to list entries from.
             - recursive: if set, recursively list entries from this
               directory.

         Returns:
             List of entries for the directory.
 
         If `recursive=True`, names in the path of a dir/file not at the
         root are concatenated with a slash (`/`).
         """
         yield from self._directory_ls(directory_id, recursive)
 
     def directory_entry_get_by_path(self, directory, paths):
         """Get the directory entry (either file or dir) from directory with path.
 
         Args:
             - directory: sha1 of the top level directory
             - paths: list of path components to look up, from left (top)
               to right (bottom).
 
         Returns:
             The corresponding directory entry if found, None otherwise.
 
         """
         if not paths:
             return
 
         contents = list(self.directory_ls(directory))
 
         if not contents:
             return
 
         def _get_entry(entries, name):
             for entry in entries:
                 if entry['name'] == name:
                     return entry
 
         first_item = _get_entry(contents, paths[0])
 
         if len(paths) == 1:
             return first_item
 
         if not first_item or first_item['type'] != 'dir':
             return
 
         return self.directory_entry_get_by_path(
                 first_item['target'], paths[1:])
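
     # Illustrative sketch (``dir_id`` is a hypothetical sha1_git of a
     # known directory); path components are bytes, top to bottom:
     #
     #     entry = storage.directory_entry_get_by_path(
     #         dir_id, [b'src', b'main.c'])
     #     if entry and entry['type'] == 'file':
     #         blob_id = entry['target']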
 
     def revision_add(self, revisions):
         """Add revisions to the storage
 
         Args:
             revisions (Iterable[dict]): iterable of dictionaries representing
                 the individual revisions to add. Each dict has the following
                 keys:
 
                 - **id** (:class:`sha1_git`): id of the revision to add
                 - **date** (:class:`dict`): date the revision was written
                 - **committer_date** (:class:`dict`): date the revision got
                   added to the origin
                 - **type** (one of 'git', 'tar'): type of the
                   revision added
                 - **directory** (:class:`sha1_git`): the directory the
                   revision points at
                 - **message** (:class:`bytes`): the message associated with
                   the revision
                 - **author** (:class:`Dict[str, bytes]`): dictionary with
                   keys: name, fullname, email
                 - **committer** (:class:`Dict[str, bytes]`): dictionary with
                   keys: name, fullname, email
                 - **metadata** (:class:`jsonb`): extra information as
                   dictionary
                 - **synthetic** (:class:`bool`): revision's nature (tarball
                   or directory ingestion creates a synthetic revision)
                 - **parents** (:class:`list[sha1_git]`): the parents of
                   this revision
 
         date dictionaries have the form defined in :mod:`swh.model`.
         """
         for revision in revisions:
             if revision['id'] not in self._revisions:
                 self._revisions[revision['id']] = rev = copy.deepcopy(revision)
+                self._person_add(rev['committer'])
+                self._person_add(rev['author'])
                 rev['date'] = normalize_timestamp(rev.get('date'))
                 rev['committer_date'] = normalize_timestamp(
                         rev.get('committer_date'))
                 self._objects[revision['id']].append(
                     ('revision', revision['id']))
 
     def revision_missing(self, revision_ids):
         """List revisions missing from storage
 
         Args:
             revision_ids (iterable): revision ids
 
         Yields:
             missing revision ids
 
         """
         for id in revision_ids:
             if id not in self._revisions:
                 yield id
 
     def revision_get(self, revision_ids):
         for id in revision_ids:
             yield copy.deepcopy(self._revisions.get(id))
 
     def _get_parent_revs(self, rev_id, seen, limit):
         if limit and len(seen) >= limit:
             return
         if rev_id in seen:
             return
         seen.add(rev_id)
         yield self._revisions[rev_id]
         for parent in self._revisions[rev_id]['parents']:
             yield from self._get_parent_revs(parent, seen, limit)
 
     def revision_log(self, revision_ids, limit=None):
         """Fetch revision entry from the given root revisions.
 
         Args:
             revisions: array of root revision to lookup
             limit: limitation on the output result. Default to None.
 
         Yields:
             List of revision log from such revisions root.
 
         """
         seen = set()
         for rev_id in revision_ids:
             yield from self._get_parent_revs(rev_id, seen, limit)
 
     def revision_shortlog(self, revisions, limit=None):
         """Fetch the shortlog for the given revisions
 
         Args:
             revisions: list of root revisions to look up
             limit: depth limitation for the output
 
         Yields:
             (id, parents) tuples.
 
         """
         yield from ((rev['id'], rev['parents'])
                     for rev in self.revision_log(revisions, limit))
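
     # Illustrative sketch (``head`` is a hypothetical sha1_git of a known
     # revision): walk at most ten revisions of history from a root:
     #
     #     for rev in storage.revision_log([head], limit=10):
     #         ...  # rev is a revision dict, as stored by revision_add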
 
     def release_add(self, releases):
         """Add releases to the storage
 
         Args:
             releases (Iterable[dict]): iterable of dictionaries representing
                 the individual releases to add. Each dict has the following
                 keys:
 
                 - **id** (:class:`sha1_git`): id of the release to add
                 - **revision** (:class:`sha1_git`): id of the revision the
                   release points to
                 - **date** (:class:`dict`): the date the release was made
                 - **name** (:class:`bytes`): the name of the release
                 - **comment** (:class:`bytes`): the comment associated with
                   the release
                 - **author** (:class:`Dict[str, bytes]`): dictionary with
                   keys: name, fullname, email
 
         the date dictionary has the form defined in :mod:`swh.model`.
         """
         for rel in releases:
+            rel = copy.deepcopy(rel)
             rel['date'] = normalize_timestamp(rel['date'])
+            self._person_add(rel['author'])
             self._objects[rel['id']].append(
                 ('release', rel['id']))
-        self._releases.update((rel['id'], rel) for rel in releases)
+            self._releases[rel['id']] = rel
 
     def release_missing(self, releases):
         """List releases missing from storage
 
         Args:
             releases: an iterable of release ids
 
         Returns:
             a list of missing release ids
 
         """
         yield from (rel for rel in releases if rel not in self._releases)
 
     def release_get(self, releases):
         """Given a list of sha1, return the releases's information
 
         Args:
             releases: list of sha1s
 
         Yields:
             dicts with the same keys as those given to `release_add`
             (or ``None`` if a release does not exist)
 
         """
         for rel_id in releases:
             yield copy.deepcopy(self._releases.get(rel_id))
 
     def snapshot_add(self, origin, visit, snapshot):
         """Add a snapshot for the given origin/visit couple
 
         Args:
             origin (int): id of the origin
             visit (int): id of the visit
             snapshot (dict): the snapshot to add to the visit, containing the
               following keys:
 
               - **id** (:class:`bytes`): id of the snapshot
               - **branches** (:class:`dict`): branches the snapshot contains,
                 mapping the branch name (:class:`bytes`) to the branch target,
                 itself a :class:`dict` (or ``None`` if the branch points to an
                 unknown object)
 
                 - **target_type** (:class:`str`): one of ``content``,
                   ``directory``, ``revision``, ``release``,
                   ``snapshot``, ``alias``
                 - **target** (:class:`bytes`): identifier of the target
                   (currently a ``sha1_git`` for all object kinds, or the name
                   of the target branch for aliases)
 
         Raises:
             ValueError: if the origin's or visit's identifier does not exist.
         """
         snapshot_id = snapshot['id']
         if snapshot_id not in self._snapshots:
             self._snapshots[snapshot_id] = {
                 'origin': origin,
                 'visit': visit,
                 'id': snapshot_id,
                 'branches': copy.deepcopy(snapshot['branches']),
                 '_sorted_branch_names': sorted(snapshot['branches'])
                 }
             self._objects[snapshot_id].append(('snapshot', snapshot_id))
         if origin <= len(self._origin_visits) and \
            visit <= len(self._origin_visits[origin-1]):
             self._origin_visits[origin-1][visit-1]['snapshot'] = snapshot_id
         else:
             raise ValueError('Origin with id %s does not exist or has no visit'
                              ' with id %s' % (origin, visit))
 
     def snapshot_get(self, snapshot_id):
         """Get the content, possibly partial, of a snapshot with the given id
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         .. warning:: At most 1000 branches contained in the snapshot will be
             returned for performance reasons. In order to browse the whole
             set of branches, the method :meth:`snapshot_get_branches`
             should be used instead.
 
         Args:
             snapshot_id (bytes): identifier of the snapshot
         Returns:
             dict: a dict with three keys:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not returned
                   or :const:`None` if the snapshot has fewer than 1000
                   branches.
         """
         return self.snapshot_get_branches(snapshot_id)
 
     def snapshot_get_by_origin_visit(self, origin, visit):
         """Get the content, possibly partial, of a snapshot for the given origin visit
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         .. warning:: At most 1000 branches contained in the snapshot will be
             returned for performance reasons. In order to browse the whole
             set of branches, the method :meth:`snapshot_get_branches`
             should be used instead.
 
         Args:
             origin (int): the origin's identifier
             visit (int): the visit's identifier
         Returns:
             dict: None if the snapshot does not exist;
               a dict with three keys otherwise:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not returned
                   or :const:`None` if the snapshot has fewer than 1000
                   branches.
 
         """
         if origin > len(self._origins) or \
            visit > len(self._origin_visits[origin-1]):
             return None
         snapshot_id = self._origin_visits[origin-1][visit-1]['snapshot']
         if snapshot_id:
             return self.snapshot_get(snapshot_id)
         else:
             return None
 
     def snapshot_get_latest(self, origin, allowed_statuses=None):
         """Get the content, possibly partial, of the latest snapshot for the
         given origin, optionally only from visits that have one of the given
         allowed_statuses
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         .. warning:: At most 1000 branches contained in the snapshot will be
             returned for performance reasons. In order to browse the whole
             set of branches, the method :meth:`snapshot_get_branches`
             should be used instead.
 
         Args:
             origin (int): the origin's identifier
             allowed_statuses (list of str): list of visit statuses considered
                 to find the latest snapshot for the visit. For instance,
                 ``allowed_statuses=['full']`` will only consider visits that
                 have successfully run to completion.
         Returns:
             dict: a dict with three keys:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not returned
                   or :const:`None` if the snapshot has fewer than 1000
                   branches.
         """
         visits = self._origin_visits[origin-1]
         if allowed_statuses is not None:
             visits = [visit for visit in visits
                       if visit['status'] in allowed_statuses]
         snapshot = None
         for visit in sorted(visits, key=lambda v: (v['date'], v['visit']),
                             reverse=True):
             snapshot_id = visit['snapshot']
             snapshot = self.snapshot_get(snapshot_id)
             if snapshot:
                 break
 
         return snapshot
 
     def snapshot_count_branches(self, snapshot_id, db=None, cur=None):
         """Count the number of branches in the snapshot with the given id
 
         Args:
             snapshot_id (bytes): identifier of the snapshot
 
         Returns:
             dict: A dict whose keys are the target types of branches and
             whose values are their corresponding counts
         """
         branches = list(self._snapshots[snapshot_id]['branches'].values())
         return collections.Counter(branch['target_type'] if branch else None
                                    for branch in branches)
 
     def snapshot_get_branches(self, snapshot_id, branches_from=b'',
                               branches_count=1000, target_types=None):
         """Get the content, possibly partial, of a snapshot with the given id
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         Args:
             snapshot_id (bytes): identifier of the snapshot
             branches_from (bytes): optional parameter used to skip branches
                 whose name is lexicographically less than this value
             branches_count (int): optional parameter used to limit
                 the number of returned branches
             target_types (list): optional parameter used to filter the
                 target types of branches to return (possible values that can be
                 contained in that list are `'content', 'directory',
                 'revision', 'release', 'snapshot', 'alias'`)
         Returns:
             dict: None if the snapshot does not exist;
               a dict with three keys otherwise:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not returned
                   or :const:`None` if the snapshot has fewer than
                   `branches_count` branches from `branches_from` (inclusive).
         """
         snapshot = self._snapshots.get(snapshot_id)
         if snapshot is None:
             return None
         sorted_branch_names = snapshot['_sorted_branch_names']
         from_index = bisect.bisect_left(
                 sorted_branch_names, branches_from)
         if target_types:
             next_branch = None
             branches = {}
             for branch_name in sorted_branch_names[from_index:]:
                 branch = snapshot['branches'][branch_name]
                 if branch and branch['target_type'] in target_types:
                     if len(branches) < branches_count:
                         branches[branch_name] = branch
                     else:
                         next_branch = branch_name
                         break
         else:
             # As there is no 'target_types', we can do this much faster
             to_index = from_index + branches_count
             returned_branch_names = sorted_branch_names[from_index:to_index]
             branches = {branch_name: snapshot['branches'][branch_name]
                         for branch_name in returned_branch_names}
             if to_index >= len(sorted_branch_names):
                 next_branch = None
             else:
                 next_branch = sorted_branch_names[to_index]
         return {
                 'id': snapshot_id,
                 'branches': branches,
                 'next_branch': next_branch,
                 }
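
     # A minimal branch-paging sketch (illustrative; ``snapshot_id`` is a
     # hypothetical id of a stored snapshot):
     #
     #     name = b''
     #     while name is not None:
     #         part = storage.snapshot_get_branches(
     #             snapshot_id, branches_from=name)
     #         ...  # consume part['branches']
     #         name = part['next_branch']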
 
     def object_find_by_sha1_git(self, ids, db=None, cur=None):
         """Return the objects found with the given ids.
 
         Args:
             ids: a generator of sha1_gits
 
         Returns:
             dict: a mapping from id to the list of objects found. Each object
             found is itself a dict with keys:
 
             - sha1_git: the input id
             - type: the type of object found
             - id: the id of the object found
             - object_id: the numeric id of the object found.
 
         """
         ret = {}
         for id_ in ids:
             objs = self._objects.get(id_, [])
             ret[id_] = [{
                     'sha1_git': id_,
                     'type': obj[0],
                     'id': obj[1],
                     'object_id': id_,
                     } for obj in objs]
         return ret
 
     def origin_get(self, origin):
         """Return the origin either identified by its id or its tuple
         (type, url).
 
         Args:
             origin: dictionary representing the individual origin to find.
                 This dict has either the keys type and url:
 
                 - type (FIXME: enum TBD): the origin type ('git', 'wget', ...)
                 - url (bytes): the url the origin points to
 
                 or the id:
 
                 - id (int): the origin's identifier
 
         Returns:
             dict: the origin dictionary with the keys:
 
             - id: origin's id
             - type: origin's type
             - url: origin's url
 
         Raises:
             ValueError: if the keys match neither ('type' and 'url') nor 'id'.
 
         """
         if 'id' in origin:
             origin_id = origin['id']
         elif 'type' in origin and 'url' in origin:
             origin_id = self._origin_id(origin)
         else:
             raise ValueError('Origin must have either id or (type and url).')
         origin = None
         # self._origin_id can return None
         if origin_id is not None:
             origin = copy.deepcopy(self._origins[origin_id-1])
             origin['id'] = origin_id
         return origin
 
     def origin_search(self, url_pattern, offset=0, limit=50,
                       regexp=False, with_visit=False, db=None, cur=None):
         """Search for origins whose urls contain a provided string pattern
         or match a provided regular expression.
         The search is performed in a case-insensitive way.
 
         Args:
             url_pattern (str): the string pattern to search for in origin urls
             offset (int): number of found origins to skip before returning
                 results
             limit (int): the maximum number of found origins to return
             regexp (bool): if True, consider the provided pattern as a regular
                 expression and return origins whose urls match it
             with_visit (bool): if True, filter out origins with no visit
 
         Returns:
             An iterable of dict containing origin information as returned
             by :meth:`swh.storage.storage.Storage.origin_get`.
         """
         origins = self._origins
         if regexp:
             # the docstring documents the search as case-insensitive
             pat = re.compile(url_pattern, re.IGNORECASE)
             origins = [orig for orig in origins if pat.match(orig['url'])]
         else:
             pattern = url_pattern.lower()
             origins = [orig for orig in origins
                        if pattern in orig['url'].lower()]
         if with_visit:
             origins = [orig for orig in origins
                        if len(self._origin_visits[orig['id']-1]) > 0]
         origins = copy.deepcopy(origins[offset:offset+limit])
         return origins
 
     def origin_add(self, origins):
         """Add origins to the storage
 
         Args:
             origins: list of dictionaries representing the individual origins,
                 with the following keys:
 
                 - type: the origin type ('git', 'svn', 'deb', ...)
                 - url (bytes): the url the origin points to
 
         Returns:
             list: the given origins, as dicts updated with their id
 
         """
         origins = copy.deepcopy(origins)
         for origin in origins:
             origin['id'] = self.origin_add_one(origin)
         return origins
 
     def origin_add_one(self, origin):
         """Add origin to the storage
 
         Args:
             origin: dictionary representing the individual origin to add. This
                 dict has the following keys:
 
                 - type (FIXME: enum TBD): the origin type ('git', 'wget', ...)
                 - url (bytes): the url the origin points to
 
         Returns:
             the id of the added origin, or of the identical one that already
             exists.
 
         """
         origin = copy.deepcopy(origin)
         assert 'id' not in origin
         origin_id = self._origin_id(origin)
         if origin_id is None:
             # origin ids are in the range [1, +inf[
             origin_id = len(self._origins) + 1
             origin['id'] = origin_id
             self._origins.append(origin)
             self._origin_visits.append([])
             key = (origin['type'], origin['url'])
             self._objects[key].append(('origin', origin_id))
         return origin_id
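
     # Illustrative sketch (the url is a placeholder): create an origin,
     # then open a visit on it; string dates are parsed by
     # origin_visit_add:
     #
     #     origin_id = storage.origin_add_one(
     #         {'type': 'git', 'url': 'https://example.org/repo.git'})
     #     visit = storage.origin_visit_add(
     #         origin_id, '2018-01-01T00:00:00+00:00')
     #     # visit == {'origin': origin_id, 'visit': 1} on a first visit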
 
     def fetch_history_start(self, origin_id):
         """Add an entry for origin origin_id in fetch_history. Returns the id
         of the added fetch_history entry
         """
         pass
 
     def fetch_history_end(self, fetch_history_id, data):
         """Close the fetch_history entry with id `fetch_history_id`, replacing
            its data with `data`.
         """
         pass
 
     def fetch_history_get(self, fetch_history_id):
         """Get the fetch_history entry with id `fetch_history_id`.
         """
         raise NotImplementedError('fetch_history_get is deprecated, use '
                                   'origin_visit_get instead.')
 
     def origin_visit_add(self, origin, date=None, *, ts=None):
         """Add an origin_visit for the origin at date with status 'ongoing'.
 
         Args:
             origin (int): visited origin's identifier
             date: timestamp of such visit
 
         Returns:
             dict: dictionary with keys origin and visit where:
 
             - origin: origin's identifier
             - visit: the visit's identifier for the new visit occurrence
 
         """
         if ts is None:
             if date is None:
                 raise TypeError('origin_visit_add expected 2 arguments.')
         else:
             assert date is None
             warnings.warn("argument 'ts' of origin_visit_add was renamed "
                           "to 'date' in v0.0.109.",
                           DeprecationWarning)
             date = ts
 
         if isinstance(date, str):
             date = dateutil.parser.parse(date)
 
         visit_ret = None
         if origin <= len(self._origin_visits):
             # visit ids are in the range [1, +inf[
             visit_id = len(self._origin_visits[origin-1]) + 1
             status = 'ongoing'
             visit = {
                 'origin': origin,
                 'date': date,
                 'status': status,
                 'snapshot': None,
                 'metadata': None,
                 'visit': visit_id
             }
-            self._origin_visits[origin-1].append(copy.deepcopy(visit))
+            self._origin_visits[origin-1].append(visit)
             visit_ret = {
                 'origin': origin,
                 'visit': visit_id,
             }
 
         return visit_ret
 
     def origin_visit_update(self, origin, visit_id, status, metadata=None):
         """Update an origin_visit's status.
 
         Args:
             origin (int): visited origin's identifier
             visit_id (int): visit's identifier
             status: visit's new status
             metadata: data associated to the visit
 
         Returns:
             None
 
         """
         if origin > len(self._origin_visits) or \
            visit_id > len(self._origin_visits[origin-1]):
             return
         self._origin_visits[origin-1][visit_id-1].update({
             'status': status,
             'metadata': metadata})
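
     # Illustrative end-of-visit sketch (``snapshot`` is a hypothetical
     # snapshot dict as accepted by snapshot_add): record the snapshot,
     # then mark the visit as successfully completed:
     #
     #     storage.snapshot_add(origin_id, visit_id, snapshot)
     #     storage.origin_visit_update(origin_id, visit_id, 'full')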
 
     def origin_visit_get(self, origin, last_visit=None, limit=None):
         """Retrieve all the origin's visit's information.
 
         Args:
             origin (int): the origin's identifier
             last_visit (int): visit's id from which listing the next ones,
                 default to None
             limit (int): maximum number of results to return,
                 default to None
 
         Yields:
             the origin's visit dictionaries.
 
         """
         visits = self._origin_visits[origin-1]
         if last_visit is not None:
             visits = visits[last_visit:]
         if limit is not None:
             visits = visits[:limit]
         for visit in visits:
             visit_id = visit['visit']
-            yield self._origin_visits[origin-1][visit_id-1]
+            yield copy.deepcopy(self._origin_visits[origin-1][visit_id-1])
 
     def origin_visit_get_by(self, origin, visit):
         """Retrieve origin visit's information.
 
         Args:
             origin (int): the origin's identifier
             visit (int): the visit's identifier
 
         Returns:
             The information on that particular (origin, visit) or None if
             it does not exist
 
         """
         origin_visit = None
         if origin <= len(self._origin_visits) and \
            visit <= len(self._origin_visits[origin-1]):
             origin_visit = self._origin_visits[origin-1][visit-1]
-        return origin_visit
+        return copy.deepcopy(origin_visit)
+
+    def person_get(self, person):
+        """Return the persons identified by their ids.
+
+        Args:
+            person: iterable of person ids.
+
+        Yields:
+            the persons corresponding to the ids; None for unknown ids.
+
+        """
+        for p in person:
+            if 0 <= (p - 1) < len(self._persons):
+                yield dict(self._persons[p - 1], id=p)
+            else:
+                yield None
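+
+    # Illustrative sketch: persons are registered as a side effect of
+    # revision_add/release_add (through _person_add); their ids can then
+    # be resolved in bulk, unknown ids yielding None:
+    #
+    #     first_two = list(storage.person_get([1, 2]))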
 
     def stat_counters(self):
         """compute statistics about the number of tuples in various tables
 
         Returns:
             dict: a dictionary mapping textual labels (e.g., content) to
             integer values (e.g., the number of tuples in table content)
 
         """
         keys = (
             'content',
             'directory',
             'origin',
             'origin_visit',
             'person',
             'release',
             'revision',
             'skipped_content',
             'snapshot'
             )
         stats = {key: 0 for key in keys}
         stats.update(collections.Counter(
             obj_type
             for (obj_type, obj_id)
             in itertools.chain(*self._objects.values())))
         return stats
 
     def refresh_stat_counters(self):
         """Recomputes the statistics for `stat_counters`."""
         pass
 
     def origin_metadata_add(self, origin_id, ts, provider, tool, metadata,
                             db=None, cur=None):
         """ Add an origin_metadata for the origin at ts with provenance and
         metadata.
 
         Args:
             origin_id (int): the origin's id for which the metadata is added
             ts (datetime): timestamp of the found metadata
             provider: id of the provider of metadata (e.g. 'hal')
             tool: id of the tool used to extract metadata
             metadata (jsonb): the metadata retrieved at the time and location
         """
         if isinstance(ts, str):
             ts = dateutil.parser.parse(ts)
 
         origin_metadata = {
                 'origin_id': origin_id,
                 'discovery_date': ts,
                 'tool_id': tool,
                 'metadata': metadata,
                 'provider_id': provider,
                 }
         self._origin_metadata[origin_id].append(origin_metadata)
         return None
 
     def origin_metadata_get_by(self, origin_id, provider_type=None, db=None,
                                cur=None):
         """Retrieve list of all origin_metadata entries for the origin_id
 
         Args:
             origin_id (int): the unique origin's identifier
             provider_type (str): (optional) type of provider
 
         Returns:
             list of dicts: the origin_metadata dictionary with the keys:
 
             - origin_id (int): origin's identifier
             - discovery_date (datetime): timestamp of discovery
             - tool_id (int): metadata's extracting tool
             - metadata (jsonb)
             - provider_id (int): metadata's provider
             - provider_name (str)
             - provider_type (str)
             - provider_url (str)
 
         """
         metadata = []
         for item in self._origin_metadata[origin_id]:
             item = copy.deepcopy(item)
             provider = self.metadata_provider_get(item['provider_id'])
             for attr in ('name', 'type', 'url'):
                 item['provider_' + attr] = provider[attr]
             metadata.append(item)
         return metadata
 
     def tool_add(self, tools):
         """Add new tools to the storage.
 
         Args:
             tools (iterable of :class:`dict`): Tool information to add to
               storage. Each tool is a :class:`dict` with the following keys:
 
               - name (:class:`str`): name of the tool
               - version (:class:`str`): version of the tool
               - configuration (:class:`dict`): configuration of the tool,
                 must be json-encodable
 
         Yields:
             :class:`dict`: All the tools inserted in storage
             (including the internal ``id``). The order of the results is not
             guaranteed to match the order of the input list.
 
         """
         inserted = []
         for tool in tools:
             key = self._tool_key(tool)
             assert 'id' not in tool
             record = copy.deepcopy(tool)
             record['id'] = key  # TODO: remove this
             if key not in self._tools:
                 self._tools[key] = record
             inserted.append(copy.deepcopy(self._tools[key]))
 
         yield from inserted
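
     # Illustrative sketch (tool name/version are placeholders): tool_add
     # is idempotent and returns the stored records, ids included:
     #
     #     tool = {'name': 'some-tool', 'version': '1.0',
     #             'configuration': {}}
     #     stored, = storage.tool_add([tool])
     #     assert storage.tool_get(tool)['id'] == stored['id']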
 
     def tool_get(self, tool):
         """Retrieve tool information.
 
         Args:
             tool (dict): Tool information we want to retrieve from storage.
               The dicts have the same keys as those used in :func:`tool_add`.
 
         Returns:
             dict: The full tool information if it exists (``id`` included),
             None otherwise.
 
         """
         return self._tools.get(self._tool_key(tool))
 
     def metadata_provider_add(self, provider_name, provider_type, provider_url,
                               metadata):
         """Add a metadata provider.
 
         Args:
             provider_name (str): Its name
             provider_type (str): Its type
             provider_url (str): Its URL
             metadata: JSON-encodable object
 
         Returns:
             an identifier of the provider
         """
         provider = {
                 'name': provider_name,
                 'type': provider_type,
                 'url': provider_url,
                 'metadata': metadata,
                 }
         key = self._metadata_provider_key(provider)
         provider['id'] = key
         self._metadata_providers[key] = provider
         return key
 
     def metadata_provider_get(self, provider_id, db=None, cur=None):
         """Get a metadata provider
 
         Args:
             provider_id: Its identifier, as given by `metadata_provider_add`.
 
         Returns:
             dict: same as `metadata_provider_add`;
                   or None if it does not exist.
         """
         return self._metadata_providers.get(provider_id)
 
     def metadata_provider_get_by(self, provider, db=None, cur=None):
         """Get a metadata provider
 
         Args:
             provider (dict): a dict with keys 'provider_name' and
                 'provider_url'
 
         Returns:
             dict: same as `metadata_provider_add`;
                   or None if it does not exist.
         """
         key = self._metadata_provider_key({
             'name': provider['provider_name'],
             'url': provider['provider_url']})
         return self._metadata_providers.get(key)
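
     # Illustrative metadata flow sketch (provider/tool values are
     # placeholders; ``discovery_date`` is a datetime):
     #
     #     provider_id = storage.metadata_provider_add(
     #         'hal', 'deposit-client', 'https://hal.example.org/', {})
     #     storage.origin_metadata_add(origin_id, discovery_date,
     #                                 provider_id, tool_id, {'title': 'x'})
     #     entries = storage.origin_metadata_get_by(origin_id)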
 
     def _origin_id(self, origin):
         origin_id = None
         for stored_origin in self._origins:
             if stored_origin['type'] == origin['type'] and \
                stored_origin['url'] == origin['url']:
                 origin_id = stored_origin['id']
                 break
         return origin_id
 
+    def _person_add(self, person):
+        """Add a person to storage.
+
+        Note: Private method, do not use outside of this class.
+
+        Args:
+            person: dictionary with keys fullname, name and email; it is
+                updated in place with the stored person's 'id'.
+
+        """
+        key = ('person', person['fullname'])
+        if key not in self._objects:
+            person_id = len(self._persons) + 1
+            self._persons.append(dict(person))
+            self._objects[key].append(('person', person_id))
+        else:
+            person_id = self._objects[key][0][1]
+            p = next(self.person_get([person_id]))
+            person.update(p.items())
+        person['id'] = person_id
+
     @staticmethod
     def _content_key(content):
         """A stable key for a content"""
         return tuple(content.get(key) for key in sorted(DEFAULT_ALGORITHMS))
 
     @staticmethod
     def _tool_key(tool):
         return (tool['name'], tool['version'],
                 tuple(sorted(tool['configuration'].items())))
 
     @staticmethod
     def _metadata_provider_key(provider):
         return (provider['name'], provider['url'])
diff --git a/swh/storage/storage.py b/swh/storage/storage.py
index 5feae04e..51b5ff0d 100644
--- a/swh/storage/storage.py
+++ b/swh/storage/storage.py
@@ -1,1484 +1,1466 @@
 # Copyright (C) 2015-2018  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 
 from collections import defaultdict
 from concurrent.futures import ThreadPoolExecutor
 import datetime
 import itertools
 import json
 import warnings
 
 import dateutil.parser
 import psycopg2
 import psycopg2.pool
 
 from . import converters
 from .common import db_transaction_generator, db_transaction
 from .db import Db
 from .exc import StorageDBError
 from .algos import diff
 
 from swh.model.hashutil import ALGORITHMS, hash_to_bytes
 from swh.objstorage import get_objstorage
 from swh.objstorage.exc import ObjNotFoundError
 
 # Max block size of contents to return
 BULK_BLOCK_CONTENT_LEN_MAX = 10000
 
 EMPTY_SNAPSHOT_ID = hash_to_bytes('1a8893e6a86f444e8be8e7bda6cb34fb1735a00e')
 """Identifier for the empty snapshot"""
 
 
 class Storage():
     """SWH storage proxy, encompassing DB and object storage
 
     """
 
     def __init__(self, db, objstorage, min_pool_conns=1, max_pool_conns=10):
         """
         Args:
             db: either a libpq connection string, or a psycopg2 connection
             objstorage: configuration dict for the object storage, passed
                 to :func:`swh.objstorage.get_objstorage`
 
         """
         try:
             if isinstance(db, psycopg2.extensions.connection):
                 self._pool = None
                 self._db = Db(db)
             else:
                 self._pool = psycopg2.pool.ThreadedConnectionPool(
                     min_pool_conns, max_pool_conns, db
                 )
                 self._db = None
         except psycopg2.OperationalError as e:
             raise StorageDBError(e)
 
         self.objstorage = get_objstorage(**objstorage)
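
     # Instantiation sketch (connection string and objstorage configuration
     # are placeholders; a 'memory' objstorage class is assumed to be
     # provided by swh.objstorage):
     #
     #     storage = Storage(
     #         db='dbname=softwareheritage-dev',
     #         objstorage={'cls': 'memory', 'args': {}})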
 
     def get_db(self):
         if self._db:
             return self._db
         else:
             return Db.from_pool(self._pool)
 
     def check_config(self, *, check_write):
         """Check that the storage is configured and ready to go."""
 
         if not self.objstorage.check_config(check_write=check_write):
             return False
 
         # Check permissions on one of the tables
         with self.get_db().transaction() as cur:
             if check_write:
                 check = 'INSERT'
             else:
                 check = 'SELECT'
 
             cur.execute(
                 "select has_table_privilege(current_user, 'content', %s)",
                 (check,)
             )
             return cur.fetchone()[0]
 
     def content_add(self, content):
         """Add content blobs to the storage
 
         Note: in case of DB errors, objects might have already been added to
         the object storage and will not be removed. Since addition to the
         object storage is idempotent, that should not be a problem.
 
         Args:
             content (iterable): iterable of dictionaries representing
                 individual pieces of content to add. Each dictionary has the
                 following keys:
 
                 - data (bytes): the actual content
                 - length (int): content length (default: -1)
                 - one key for each checksum algorithm in
                   :data:`swh.model.hashutil.ALGORITHMS`, mapped to the
                   corresponding checksum
                 - status (str): one of visible, hidden, absent
                 - reason (str): if status = absent, the reason why
                 - origin (int): if status = absent, the origin we saw the
                   content in
 
         """
         db = self.get_db()
 
         def _unique_key(hash, keys=db.content_hash_keys):
             """Given a hash (tuple or dict), return a unique key from the
                aggregation of keys.
 
             """
             if isinstance(hash, tuple):
                 return hash
             return tuple([hash[k] for k in keys])
 
         content_by_status = defaultdict(list)
         for d in content:
             if 'status' not in d:
                 d['status'] = 'visible'
             if 'length' not in d:
                 d['length'] = -1
             content_by_status[d['status']].append(d)
 
         content_with_data = content_by_status['visible']
         content_without_data = content_by_status['absent']
 
         missing_content = set(self.content_missing(content_with_data))
         missing_skipped = set(_unique_key(hashes) for hashes
                               in self.skipped_content_missing(
                                   content_without_data))
 
         def add_to_objstorage():
             data = {
                 cont['sha1']: cont['data']
                 for cont in content_with_data
                 if cont['sha1'] in missing_content
             }
             self.objstorage.add_batch(data)
 
         with db.transaction() as cur:
             with ThreadPoolExecutor(max_workers=1) as executor:
                 added_to_objstorage = executor.submit(add_to_objstorage)
                 if missing_content:
                     # create temporary table for metadata injection
                     db.mktemp('content', cur)
 
                     content_filtered = (cont for cont in content_with_data
                                         if cont['sha1'] in missing_content)
 
                     db.copy_to(content_filtered, 'tmp_content',
                                db.content_get_metadata_keys, cur)
 
                     # move metadata in place
                     try:
                         db.content_add_from_temp(cur)
                     except psycopg2.IntegrityError as e:
                         from . import HashCollision
                         if e.diag.sqlstate == '23505' and \
                                 e.diag.table_name == 'content':
                             constraint_to_hash_name = {
                                 'content_pkey': 'sha1',
                                 'content_sha1_git_idx': 'sha1_git',
                                 'content_sha256_idx': 'sha256',
                                 }
                             colliding_hash_name = constraint_to_hash_name \
                                 .get(e.diag.constraint_name)
                             raise HashCollision(colliding_hash_name)
                         else:
                             raise
 
                 if missing_skipped:
                     missing_filtered = (
                         cont for cont in content_without_data
                         if _unique_key(cont) in missing_skipped
                     )
 
                     db.mktemp('skipped_content', cur)
                     db.copy_to(missing_filtered, 'tmp_skipped_content',
                                db.skipped_content_keys, cur)
 
                     # move metadata in place
                     db.skipped_content_add_from_temp(cur)
 
                 # Wait for objstorage addition before returning from the
                 # transaction, bubbling up any exception
                 added_to_objstorage.result()
 
     @db_transaction()
     def content_update(self, content, keys=[], db=None, cur=None):
         """Update content blobs to the storage. Does nothing for unknown
         contents or skipped ones.
 
         Args:
             content (iterable): iterable of dictionaries representing
                 individual pieces of content to update. Each dictionary has the
                 following keys:
 
                 - data (bytes): the actual content
                 - length (int): content length (default: -1)
                 - one key for each checksum algorithm in
                   :data:`swh.model.hashutil.ALGORITHMS`, mapped to the
                   corresponding checksum
                 - status (str): one of visible, hidden, absent
 
             keys (list): List of keys (str) whose values need an update, e.g.,
                 new hash column
 
         """
         # TODO: Add a check on input keys. How to properly implement
         # this? We don't know yet the new columns.
 
         db.mktemp('content', cur)
         select_keys = list(set(db.content_get_metadata_keys).union(set(keys)))
         db.copy_to(content, 'tmp_content', select_keys, cur)
         db.content_update_from_temp(keys_to_update=keys,
                                     cur=cur)
 
     def content_get(self, content):
         """Retrieve in bulk contents and their data.
 
         This generator currently yields exactly as many items as there are
         provided sha1 identifiers, but callers should not rely on this.
 
         It may also yield `None` values in case an object was not found.
 
         Args:
             content: iterable of sha1s
 
         Yields:
             Dict[str, bytes]: Generates streams of contents as dict with their
                 raw data:
 
                 - sha1 (bytes): content id
                 - data (bytes): content's raw data
 
         Raises:
             ValueError: in case too many contents are requested
             (cf. BULK_BLOCK_CONTENT_LEN_MAX).
 
         """
         # FIXME: Make this method support slicing the `data`.
         if len(content) > BULK_BLOCK_CONTENT_LEN_MAX:
             raise ValueError(
                 "Send at maximum %s contents." % BULK_BLOCK_CONTENT_LEN_MAX)
 
         for obj_id in content:
             try:
                 data = self.objstorage.get(obj_id)
             except ObjNotFoundError:
                 yield None
                 continue
 
             yield {'sha1': obj_id, 'data': data}
 
     @db_transaction()
     def content_get_range(self, start, end, limit=1000, db=None, cur=None):
         """Retrieve contents within range [start, end] bound by limit.
 
         Note that this function may return more than one blob per hash. The
         limit is enforced with multiplicity (i.e. two blobs with the same hash
         will count twice toward the limit).
 
         Args:
             **start** (bytes): Starting identifier range (expected to be
                            smaller than end)
             **end** (bytes): Ending identifier range (expected to be larger
                              than start)
             **limit** (int): Limit result (defaults to 1000)
 
         Returns:
             a dict with keys:
             - contents [dict]: iterable of contents within the range.
             - next (bytes): the next sha1 to start from if contents remain
               in the range, None otherwise
 
         """
         if limit is None:
             raise ValueError('Development error: limit should not be None')
         contents = []
         next_content = None
         for counter, content_row in enumerate(
                 db.content_get_range(start, end, limit+1, cur)):
             content = dict(zip(db.content_get_metadata_keys, content_row))
             if counter >= limit:
                 # record this extra content's sha1 as the start of the next page
                 next_content = content['sha1']
                 break
             contents.append(content)
         return {
             'contents': contents,
             'next': next_content,
         }
 
     @db_transaction_generator(statement_timeout=500)
     def content_get_metadata(self, content, db=None, cur=None):
         """Retrieve content metadata in bulk
 
         Args:
             content: iterable of content identifiers (sha1)
 
         Returns:
             an iterable with content metadata corresponding to the given ids
         """
         for metadata in db.content_get_metadata_from_sha1s(content, cur):
             yield dict(zip(db.content_get_metadata_keys, metadata))
 
     @db_transaction_generator()
     def content_missing(self, content, key_hash='sha1', db=None, cur=None):
         """List content missing from storage
 
         Args:
             content ([dict]): iterable of dictionaries whose keys are
                               either 'length' or an item of
                               :data:`swh.model.hashutil.ALGORITHMS`,
                               mapped to the corresponding checksum
                               (or length).
 
             key_hash (str): name of the column to use as the hash id in the
                             result (default: 'sha1')
 
         Returns:
             iterable ([bytes]): missing content ids (as per the
             key_hash column)
 
         Raises:
             TODO: an exception when we get a hash collision.
 
         """
         keys = db.content_hash_keys
 
         if key_hash not in keys:
             raise ValueError("key_hash should be one of %s" % keys)
 
         key_hash_idx = keys.index(key_hash)
 
         if not content:
             return
 
         for obj in db.content_missing_from_list(content, cur):
             yield obj[key_hash_idx]
 
     @db_transaction_generator()
     def content_missing_per_sha1(self, contents, db=None, cur=None):
         """List content missing from storage based only on sha1.
 
         Args:
             contents: Iterable of sha1s to check for absence.
 
         Returns:
             iterable: missing ids
 
         Raises:
             TODO: an exception when we get a hash collision.
 
         """
         for obj in db.content_missing_per_sha1(contents, cur):
             yield obj[0]
 
     @db_transaction_generator()
     def skipped_content_missing(self, content, db=None, cur=None):
         """List skipped_content missing from storage
 
         Args:
             content: iterable of dictionaries containing the data for each
                 checksum algorithm.
 
         Returns:
             iterable: missing signatures
 
         """
         keys = db.content_hash_keys
 
         db.mktemp('skipped_content', cur)
         db.copy_to(content, 'tmp_skipped_content',
                    keys + ['length', 'reason'], cur)
 
         yield from db.skipped_content_missing_from_temp(cur)
 
     @db_transaction()
     def content_find(self, content, db=None, cur=None):
         """Find a content hash in db.
 
         Args:
             content: a dictionary representing one content hash, mapping
                 checksum algorithm names (see swh.model.hashutil.ALGORITHMS) to
                 checksum values
 
         Returns:
             a dict with the content's checksums and metadata if the content
             exists, or None otherwise.
 
         Raises:
             ValueError: in case the dictionary contains none of sha1,
                 sha1_git, sha256, blake2s256.
 
         """
         if not set(content).intersection(ALGORITHMS):
             raise ValueError('content keys must contain at least one of: '
                              'sha1, sha1_git, sha256, blake2s256')
 
         c = db.content_find(sha1=content.get('sha1'),
                             sha1_git=content.get('sha1_git'),
                             sha256=content.get('sha256'),
                             blake2s256=content.get('blake2s256'),
                             cur=cur)
         if c:
             return dict(zip(db.content_find_cols, c))
         return None
 
     def directory_add(self, directories):
         """Add directories to the storage
 
         Args:
             directories (iterable): iterable of dictionaries representing the
                 individual directories to add. Each dict has the following
                 keys:
 
                 - id (sha1_git): the id of the directory to add
                 - entries (list): list of dicts for each entry in the
                       directory.  Each dict has the following keys:
 
                       - name (bytes)
                       - type (one of 'file', 'dir', 'rev'): type of the
                         directory entry (file, directory, revision)
                       - target (sha1_git): id of the object pointed at by the
                         directory entry
                       - perms (int): entry permissions
         """
         dirs = set()
         dir_entries = {
             'file': defaultdict(list),
             'dir': defaultdict(list),
             'rev': defaultdict(list),
         }
 
         for cur_dir in directories:
             dir_id = cur_dir['id']
             dirs.add(dir_id)
             for src_entry in cur_dir['entries']:
                 entry = src_entry.copy()
                 entry['dir_id'] = dir_id
                 dir_entries[entry['type']][dir_id].append(entry)
 
         dirs_missing = set(self.directory_missing(dirs))
         if not dirs_missing:
             return
 
         db = self.get_db()
         with db.transaction() as cur:
             # Copy directory ids
             dirs_missing_dict = ({'id': dir} for dir in dirs_missing)
             db.mktemp('directory', cur)
             db.copy_to(dirs_missing_dict, 'tmp_directory', ['id'], cur)
 
             # Copy entries
             for entry_type, entry_list in dir_entries.items():
                 entries = itertools.chain.from_iterable(
                     entries_for_dir
                     for dir_id, entries_for_dir
                     in entry_list.items()
                     if dir_id in dirs_missing)
 
                 db.mktemp_dir_entry(entry_type)
 
                 db.copy_to(
                     entries,
                     'tmp_directory_entry_%s' % entry_type,
                     ['target', 'name', 'perms', 'dir_id'],
                     cur,
                 )
 
             # Do the final copy
             db.directory_add_from_temp(cur)
 
     @db_transaction_generator()
     def directory_missing(self, directories, db=None, cur=None):
         """List directories missing from storage
 
         Args:
             directories (iterable): an iterable of directory ids
 
         Yields:
             missing directory ids
 
         """
         for obj in db.directory_missing_from_list(directories, cur):
             yield obj[0]
 
     @db_transaction_generator(statement_timeout=20000)
     def directory_ls(self, directory, recursive=False, db=None, cur=None):
         """Get entries for one directory.
 
         Args:
             - directory: the directory to list entries from.
             - recursive: if True, list entries recursively from this directory.
 
         Yields:
             The directory entries, as dicts.
 
         If `recursive=True`, names in the path of a dir/file not at the
         root are concatenated with a slash (`/`).
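
         Example (sketch; ``dir_id`` is an assumed directory id)::

             for entry in storage.directory_ls(dir_id, recursive=True):
                 print(entry['name'], entry['type'])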
 
         """
         if recursive:
             res_gen = db.directory_walk(directory, cur=cur)
         else:
             res_gen = db.directory_walk_one(directory, cur=cur)
 
         for line in res_gen:
             yield dict(zip(db.directory_ls_cols, line))
 
     @db_transaction(statement_timeout=2000)
     def directory_entry_get_by_path(self, directory, paths, db=None, cur=None):
         """Get the directory entry (either file or dir) from directory with path.
 
         Args:
             - directory: sha1 of the top level directory
             - paths: path to lookup from the top level directory. From left
               (top) to right (bottom).
 
         Returns:
             The corresponding directory entry if found, None otherwise.
 
         """
         res = db.directory_entry_get_by_path(directory, paths, cur)
         if res:
             return dict(zip(db.directory_ls_cols, res))
 
     def revision_add(self, revisions):
         """Add revisions to the storage
 
         Args:
             revisions (Iterable[dict]): iterable of dictionaries representing
                 the individual revisions to add. Each dict has the following
                 keys:
 
                 - **id** (:class:`sha1_git`): id of the revision to add
                 - **date** (:class:`dict`): date the revision was written
                 - **committer_date** (:class:`dict`): date the revision got
                   added to the origin
                 - **type** (one of 'git', 'tar'): type of the
                   revision added
                 - **directory** (:class:`sha1_git`): the directory the
                   revision points at
                 - **message** (:class:`bytes`): the message associated with
                   the revision
                 - **author** (:class:`Dict[str, bytes]`): dictionary with
                   keys: name, fullname, email
                 - **committer** (:class:`Dict[str, bytes]`): dictionary with
                   keys: name, fullname, email
                 - **metadata** (:class:`jsonb`): extra information as
                   dictionary
                 - **synthetic** (:class:`bool`): whether the revision is
                   synthetic (e.g. created from a tarball or a plain
                   directory)
                 - **parents** (:class:`list[sha1_git]`): the parents of
                   this revision
 
         date dictionaries have the form defined in :mod:`swh.model`.
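
         Example (sketch; ``rev_id``, ``dir_id``, ``author`` and
         ``date_dict`` are placeholder values)::

             storage.revision_add([{
                 'id': rev_id,
                 'date': date_dict,
                 'committer_date': date_dict,
                 'type': 'git',
                 'directory': dir_id,
                 'message': b'initial import',
                 'author': author,
                 'committer': author,
                 'metadata': None,
                 'synthetic': False,
                 'parents': [],
             }])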
         """
         db = self.get_db()
 
         revisions_missing = set(self.revision_missing(
             set(revision['id'] for revision in revisions)))
 
         if not revisions_missing:
             return
 
         with db.transaction() as cur:
             db.mktemp_revision(cur)
 
             revisions_filtered = (
                 converters.revision_to_db(revision) for revision in revisions
                 if revision['id'] in revisions_missing)
 
             parents_filtered = []
 
             db.copy_to(
                 revisions_filtered, 'tmp_revision', db.revision_add_cols,
                 cur,
                 lambda rev: parents_filtered.extend(rev['parents']))
 
             db.revision_add_from_temp(cur)
 
             db.copy_to(parents_filtered, 'revision_history',
                        ['id', 'parent_id', 'parent_rank'], cur)
 
     @db_transaction_generator()
     def revision_missing(self, revisions, db=None, cur=None):
         """List revisions missing from storage
 
         Args:
             revisions (iterable): revision ids
 
         Yields:
             missing revision ids
 
         """
         if not revisions:
             return
 
         for obj in db.revision_missing_from_list(revisions, cur):
             yield obj[0]
 
     @db_transaction_generator(statement_timeout=500)
     def revision_get(self, revisions, db=None, cur=None):
         """Get all revisions from storage
 
         Args:
             revisions: an iterable of revision ids
 
         Yields:
             revisions as dictionaries (or None if the revision doesn't
             exist)
 
         """
         for line in db.revision_get_from_list(revisions, cur):
             data = converters.db_to_revision(
                 dict(zip(db.revision_get_cols, line))
             )
             if not data['type']:
                 yield None
                 continue
             yield data
 
     @db_transaction_generator(statement_timeout=2000)
     def revision_log(self, revisions, limit=None, db=None, cur=None):
         """Fetch revision entry from the given root revisions.
 
         Args:
             revisions: array of root revision to lookup
             limit: maximum number of revisions to return. Defaults to None.
 
         Yields:
             The revision log entries, walking back from the given roots.
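
         Example (sketch; ``rev_id`` is an assumed root revision id)::

             for rev in storage.revision_log([rev_id], limit=10):
                 if rev is not None:
                     print(rev['id'], rev['message'])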
 
         """
         for line in db.revision_log(revisions, limit, cur):
             data = converters.db_to_revision(
                 dict(zip(db.revision_get_cols, line))
             )
             if not data['type']:
                 yield None
                 continue
             yield data
 
     @db_transaction_generator(statement_timeout=2000)
     def revision_shortlog(self, revisions, limit=None, db=None, cur=None):
         """Fetch the shortlog for the given revisions
 
         Args:
             revisions: list of root revisions to lookup
             limit: depth limitation for the output
 
         Yields:
             a list of (id, parents) tuples.
 
         """
 
         yield from db.revision_shortlog(revisions, limit, cur)
 
     def release_add(self, releases):
         """Add releases to the storage
 
         Args:
             releases (Iterable[dict]): iterable of dictionaries representing
                 the individual releases to add. Each dict has the following
                 keys:
 
                 - **id** (:class:`sha1_git`): id of the release to add
                 - **revision** (:class:`sha1_git`): id of the revision the
                   release points to
                 - **date** (:class:`dict`): the date the release was made
                 - **name** (:class:`bytes`): the name of the release
                 - **comment** (:class:`bytes`): the comment associated with
                   the release
                 - **author** (:class:`Dict[str, bytes]`): dictionary with
                   keys: name, fullname, email
 
         the date dictionary has the form defined in :mod:`swh.model`.
         """
         db = self.get_db()
 
         release_ids = set(release['id'] for release in releases)
         releases_missing = set(self.release_missing(release_ids))
 
         if not releases_missing:
             return
 
         with db.transaction() as cur:
             db.mktemp_release(cur)
 
             releases_filtered = (
                 converters.release_to_db(release) for release in releases
                 if release['id'] in releases_missing
             )
 
             db.copy_to(releases_filtered, 'tmp_release', db.release_add_cols,
                        cur)
 
             db.release_add_from_temp(cur)
 
     @db_transaction_generator()
     def release_missing(self, releases, db=None, cur=None):
         """List releases missing from storage
 
         Args:
             releases: an iterable of release ids
 
         Yields:
             missing release ids
 
         """
         if not releases:
             return
 
         for obj in db.release_missing_from_list(releases, cur):
             yield obj[0]
 
     @db_transaction_generator(statement_timeout=500)
     def release_get(self, releases, db=None, cur=None):
         """Given a list of sha1, return the releases's information
 
         Args:
             releases: list of sha1s
 
         Yields:
             dicts with the same keys as those given to `release_add`
             (or ``None`` if a release does not exist)
 
         """
         for release in db.release_get_from_list(releases, cur):
             data = converters.db_to_release(
                 dict(zip(db.release_get_cols, release))
             )
             yield data if data['target_type'] else None
 
     @db_transaction()
     def snapshot_add(self, origin, visit, snapshot,
                      db=None, cur=None):
         """Add a snapshot for the given origin/visit couple
 
         Args:
             origin (int): id of the origin
             visit (int): id of the visit
             snapshot (dict): the snapshot to add to the visit, containing the
               following keys:
 
               - **id** (:class:`bytes`): id of the snapshot
               - **branches** (:class:`dict`): branches the snapshot contains,
                 mapping the branch name (:class:`bytes`) to the branch target,
                 itself a :class:`dict` (or ``None`` if the branch points to an
                 unknown object)
 
                 - **target_type** (:class:`str`): one of ``content``,
                   ``directory``, ``revision``, ``release``,
                   ``snapshot``, ``alias``
                 - **target** (:class:`bytes`): identifier of the target
                   (currently a ``sha1_git`` for all object kinds, or the name
                   of the target branch for aliases)
 
         Raises:
             ValueError: if the origin or visit id does not exist.
         """
         if not db.snapshot_exists(snapshot['id'], cur):
             db.mktemp_snapshot_branch(cur)
             db.copy_to(
                 (
                     {
                         'name': name,
                         'target': info['target'] if info else None,
                         'target_type': info['target_type'] if info else None,
                     }
                     for name, info in snapshot['branches'].items()
                 ),
                 'tmp_snapshot_branch',
                 ['name', 'target', 'target_type'],
                 cur,
             )
         if not db.origin_visit_exists(origin, visit):
             raise ValueError('No origin visit with ids (%s, %s)' %
                              (origin, visit))
 
         db.snapshot_add(origin, visit, snapshot['id'], cur)
 
     @db_transaction(statement_timeout=2000)
     def snapshot_get(self, snapshot_id, db=None, cur=None):
         """Get the content, possibly partial, of a snapshot with the given id
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         .. warning:: At most 1000 branches contained in the snapshot will be
             returned for performance reasons. In order to browse the whole
             set of branches, the method :meth:`snapshot_get_branches`
             should be used instead.
 
         Args:
             snapshot_id (bytes): identifier of the snapshot
         Returns:
             dict: a dict with three keys:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not returned
                   or :const:`None` if the snapshot has fewer than 1000
                   branches.
         """
 
         return self.snapshot_get_branches(snapshot_id, db=db, cur=cur)
 
     @db_transaction(statement_timeout=2000)
     def snapshot_get_by_origin_visit(self, origin, visit, db=None, cur=None):
         """Get the content, possibly partial, of a snapshot for the given origin visit
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         .. warning:: At most 1000 branches contained in the snapshot will be
             returned for performance reasons. In order to browse the whole
             set of branches, the method :meth:`snapshot_get_branches`
             should be used instead.
 
         Args:
             origin (int): the origin identifier
             visit (int): the visit identifier
         Returns:
             dict: None if the snapshot does not exist;
               a dict with three keys otherwise:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not returned
                   or :const:`None` if the snapshot has fewer than 1000
                   branches.
 
         """
         snapshot_id = db.snapshot_get_by_origin_visit(origin, visit, cur)
 
         if snapshot_id:
             return self.snapshot_get(snapshot_id, db=db, cur=cur)
 
         return None
 
     @db_transaction(statement_timeout=2000)
     def snapshot_get_latest(self, origin, allowed_statuses=None, db=None,
                             cur=None):
         """Get the content, possibly partial, of the latest snapshot for the
         given origin, optionally only from visits that have one of the given
         allowed_statuses
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         .. warning:: At most 1000 branches contained in the snapshot will be
             returned for performance reasons. In order to browse the whole
             set of branches, the method :meth:`snapshot_get_branches`
             should be used instead.
 
         Args:
             origin (int): the origin identifier
             allowed_statuses (list of str): list of visit statuses considered
                 to find the latest snapshot for the visit. For instance,
                 ``allowed_statuses=['full']`` will only consider visits that
                 have successfully run to completion.
         Returns:
             dict: a dict with three keys:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not returned
                   or :const:`None` if the snapshot has fewer than 1000
                   branches.
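
         Example (sketch; ``origin_id`` is an assumed origin identifier;
         only visits that ran to completion are considered)::

             snap = storage.snapshot_get_latest(
                 origin_id, allowed_statuses=['full'])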
         """
         origin_visit = db.origin_visit_get_latest_snapshot(
             origin, allowed_statuses=allowed_statuses, cur=cur)
         if origin_visit:
             origin_visit = dict(zip(db.origin_visit_get_cols, origin_visit))
             return self.snapshot_get(origin_visit['snapshot'], db=db, cur=cur)
 
     @db_transaction(statement_timeout=2000)
     def snapshot_count_branches(self, snapshot_id, db=None, cur=None):
         """Count the number of branches in the snapshot with the given id
 
         Args:
             snapshot_id (bytes): identifier of the snapshot
 
         Returns:
             dict: A dict mapping the target types of the branches to the
             number of branches of that type
         """
         return dict(db.snapshot_count_branches(snapshot_id, cur))
 
     @db_transaction(statement_timeout=2000)
     def snapshot_get_branches(self, snapshot_id, branches_from=b'',
                               branches_count=1000, target_types=None,
                               db=None, cur=None):
         """Get the content, possibly partial, of a snapshot with the given id
 
         The branches of the snapshot are iterated in the lexicographical
         order of their names.
 
         Args:
             snapshot_id (bytes): identifier of the snapshot
             branches_from (bytes): optional parameter used to skip branches
                 whose names are lexicographically smaller than this value
             branches_count (int): optional parameter used to limit
                 the number of returned branches
             target_types (list): optional parameter used to filter the
                 target types of branch to return (possible values that can be
                 contained in that list are `'content', 'directory',
                 'revision', 'release', 'snapshot', 'alias'`)
         Returns:
             dict: None if the snapshot does not exist;
               a dict with three keys otherwise:
                 * **id**: identifier of the snapshot
                 * **branches**: a dict of branches contained in the snapshot
                   whose keys are the branches' names.
                 * **next_branch**: the name of the first branch not
                   returned, or :const:`None` if the snapshot has fewer
                   than `branches_count` branches from `branches_from`
                   onwards (included).
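
         Example (paging sketch; ``snap_id`` stands for a snapshot
         identifier)::

             branches = {}
             name = b''
             while True:
                 part = storage.snapshot_get_branches(
                     snap_id, branches_from=name)
                 if part is None:
                     break  # unknown snapshot
                 branches.update(part['branches'])
                 name = part['next_branch']
                 if name is None:
                     break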
         """
         if snapshot_id == EMPTY_SNAPSHOT_ID:
             return {
                 'id': snapshot_id,
                 'branches': {},
                 'next_branch': None,
             }
 
         branches = {}
         next_branch = None
 
         fetched_branches = list(db.snapshot_get_by_id(
             snapshot_id, branches_from=branches_from,
             branches_count=branches_count+1, target_types=target_types,
             cur=cur,
         ))
         for branch in fetched_branches[:branches_count]:
             branch = dict(zip(db.snapshot_get_cols, branch))
             del branch['snapshot_id']
             name = branch.pop('name')
             if branch == {'target': None, 'target_type': None}:
                 branch = None
             branches[name] = branch
 
         if len(fetched_branches) > branches_count:
             branch = dict(zip(db.snapshot_get_cols, fetched_branches[-1]))
             next_branch = branch['name']
 
         if branches:
             return {
                 'id': snapshot_id,
                 'branches': branches,
                 'next_branch': next_branch,
             }
 
         return None
 
     @db_transaction()
     def origin_visit_add(self, origin, date=None, db=None, cur=None, *,
                          ts=None):
         """Add an origin_visit for the origin at ts with status 'ongoing'.
 
         Args:
             origin: Visited Origin id
             date: timestamp of the visit
 
         Returns:
             dict: dictionary with keys origin and visit where:
 
             - origin: origin identifier
             - visit: the visit identifier for the new visit occurrence
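
         Example (sketch of a typical loader sequence; ``origin_id``
         and ``now`` are assumed values)::

             visit = storage.origin_visit_add(origin_id, date=now)
             # ... load the origin's artifacts ...
             storage.origin_visit_update(
                 origin_id, visit['visit'], status='full')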
 
         """
         if ts is None:
             if date is None:
                 raise TypeError('origin_visit_add expected 2 arguments.')
         else:
             assert date is None
             warnings.warn("argument 'ts' of origin_visit_add was renamed "
                           "to 'date' in v0.0.109.",
                           DeprecationWarning)
             date = ts
 
         if isinstance(date, str):
             date = dateutil.parser.parse(date)
 
         return {
             'origin': origin,
             'visit': db.origin_visit_add(origin, date, cur)
         }
 
     @db_transaction()
     def origin_visit_update(self, origin, visit_id, status, metadata=None,
                             db=None, cur=None):
         """Update an origin_visit's status.
 
         Args:
             origin: Visited Origin id
             visit_id: Visit's id
             status: Visit's new status
             metadata: Data associated to the visit
 
         Returns:
             None
 
         """
         return db.origin_visit_update(origin, visit_id, status, metadata, cur)
 
     @db_transaction_generator(statement_timeout=500)
     def origin_visit_get(self, origin, last_visit=None, limit=None, db=None,
                          cur=None):
         """Retrieve all the origin's visit's information.
 
         Args:
             origin (int): The occurrence's origin (identifier).
             last_visit: Starting point from which to list the next visits.
                 Defaults to None
             limit (int): Number of results to return from the last visit.
                 Defaults to None
 
         Yields:
             visit information dicts, one per visit.
 
         """
         for line in db.origin_visit_get_all(
                 origin, last_visit=last_visit, limit=limit, cur=cur):
             data = dict(zip(db.origin_visit_get_cols, line))
             yield data
 
     @db_transaction(statement_timeout=500)
     def origin_visit_get_by(self, origin, visit, db=None, cur=None):
         """Retrieve origin visit's information.
 
         Args:
             origin: The occurrence's origin (identifier).
             visit: The visit identifier.
 
         Returns:
             The information on that particular (origin, visit) or None if
             it does not exist
 
         """
         ori_visit = db.origin_visit_get(origin, visit, cur)
         if not ori_visit:
             return None
 
         return dict(zip(db.origin_visit_get_cols, ori_visit))
 
     @db_transaction(statement_timeout=2000)
     def object_find_by_sha1_git(self, ids, db=None, cur=None):
         """Return the objects found with the given ids.
 
         Args:
             ids: a generator of sha1_gits
 
         Returns:
             dict: a mapping from id to the list of objects found. Each object
             found is itself a dict with keys:
 
             - sha1_git: the input id
             - type: the type of object found
             - id: the id of the object found
             - object_id: the numeric id of the object found.
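
         Example (sketch; ``ids`` stands for a list of sha1_git
         values)::

             found = storage.object_find_by_sha1_git(ids)
             for id_, objects in found.items():
                 for obj in objects:
                     print(id_, obj['type'])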
 
         """
         ret = {id: [] for id in ids}
 
         for retval in db.object_find_by_sha1_git(ids, cur=cur):
             if retval[1]:
                 ret[retval[0]].append(dict(zip(db.object_find_by_sha1_git_cols,
                                                retval)))
 
         return ret
 
     origin_keys = ['id', 'type', 'url']
 
     @db_transaction(statement_timeout=500)
     def origin_get(self, origin, db=None, cur=None):
         """Return the origin either identified by its id or its tuple
         (type, url).
 
         Args:
             origin: dictionary representing the individual origin to find.
                 This dict has either the keys type and url:
 
                 - type (FIXME: enum TBD): the origin type ('git', 'wget', ...)
                 - url (bytes): the url the origin points to
 
                 or the id:
 
                 - id: the origin id
 
         Returns:
             dict: the origin dictionary with the keys:
 
             - id: origin's id
             - type: origin's type
             - url: origin's url
 
         Raises:
             ValueError: if the dict contains neither the id key nor both
                 the type and url keys.
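
         Example (sketch; the id and url values are illustrative)::

             storage.origin_get({'id': 42})
             storage.origin_get({'type': 'git',
                                 'url': 'https://example.com/repo.git'})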
 
         """
         origin_id = origin.get('id')
         if origin_id:  # check lookup per id first
             ori = db.origin_get(origin_id, cur)
         elif 'type' in origin and 'url' in origin:  # or lookup per type, url
             ori = db.origin_get_with(origin['type'], origin['url'], cur)
         else:  # unsupported lookup
             raise ValueError('Origin must have either id or (type and url).')
 
         if ori:
             return dict(zip(self.origin_keys, ori))
         return None
 
     @db_transaction_generator()
     def origin_search(self, url_pattern, offset=0, limit=50,
                       regexp=False, with_visit=False, db=None, cur=None):
         """Search for origins whose urls contain a provided string pattern
         or match a provided regular expression.
         The search is performed in a case insensitive way.
 
         Args:
             url_pattern (str): the string pattern to search for in origin urls
             offset (int): number of found origins to skip before returning
                 results
             limit (int): the maximum number of found origins to return
             regexp (bool): if True, consider the provided pattern as a regular
                 expression and return origins whose urls match it
             with_visit (bool): if True, filter out origins with no visit
 
         Yields:
             dicts containing origin information, as returned by
             :meth:`swh.storage.storage.Storage.origin_get`.
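
         Example (sketch; the pattern is illustrative)::

             for origin in storage.origin_search('example.com/',
                                                 with_visit=True):
                 print(origin['url'])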
         """
         for origin in db.origin_search(url_pattern, offset, limit,
                                        regexp, with_visit, cur):
             yield dict(zip(self.origin_keys, origin))
 
-    @db_transaction()
-    def _person_add(self, person, db=None, cur=None):
-        """Add a person in storage.
-
-        Note: Internal function for now, do not use outside of this module.
-
-        Do not do anything fancy in case a person already exists.
-        Please adapt code if more checks are needed.
-
-        Args:
-            person: dictionary with keys name and email.
-
-        Returns:
-            Id of the new person.
-
-        """
-        return db.person_add(person)
-
     @db_transaction_generator(statement_timeout=500)
     def person_get(self, person, db=None, cur=None):
         """Return the persons identified by their ids.
 
         Args:
             person: array of ids.
 
         Yields:
             the persons corresponding to the given ids.
 
         """
         for person in db.person_get(person):
             yield dict(zip(db.person_get_cols, person))
 
     @db_transaction()
     def origin_add(self, origins, db=None, cur=None):
         """Add origins to the storage
 
         Args:
             origins: list of dictionaries representing the individual origins,
                 with the following keys:
 
                 - type: the origin type ('git', 'svn', 'deb', ...)
                 - url (bytes): the url the origin points to
 
         Returns:
             list: given origins as dict updated with their id
 
         """
         for origin in origins:
             origin['id'] = self.origin_add_one(origin, db=db, cur=cur)
         return origins
 
     @db_transaction()
     def origin_add_one(self, origin, db=None, cur=None):
         """Add origin to the storage
 
         Args:
             origin: dictionary representing the individual origin to add. This
                 dict has the following keys:
 
                 - type (FIXME: enum TBD): the origin type ('git', 'wget', ...)
                 - url (bytes): the url the origin points to
 
         Returns:
             the id of the added origin, or of the identical one that already
             exists.
 
         """
         data = db.origin_get_with(origin['type'], origin['url'], cur)
         if data:
             return data[0]
 
         return db.origin_add(origin['type'], origin['url'], cur)
 
     @db_transaction()
     def fetch_history_start(self, origin_id, db=None, cur=None):
         """Add an entry for origin origin_id in fetch_history. Returns the id
         of the added fetch_history entry
         """
         fetch_history = {
             'origin': origin_id,
             'date': datetime.datetime.now(tz=datetime.timezone.utc),
         }
 
         return db.create_fetch_history(fetch_history, cur)
 
     @db_transaction()
     def fetch_history_end(self, fetch_history_id, data, db=None, cur=None):
         """Close the fetch_history entry with id `fetch_history_id`, replacing
            its data with `data`.
         """
         now = datetime.datetime.now(tz=datetime.timezone.utc)
         fetch_history = db.get_fetch_history(fetch_history_id, cur)
 
         if not fetch_history:
             raise ValueError('No fetch_history with id %d' % fetch_history_id)
 
         fetch_history['duration'] = now - fetch_history['date']
 
         fetch_history.update(data)
 
         db.update_fetch_history(fetch_history, cur)
 
     @db_transaction()
     def fetch_history_get(self, fetch_history_id, db=None, cur=None):
         """Get the fetch_history entry with id `fetch_history_id`.
         """
         return db.get_fetch_history(fetch_history_id, cur)
 
     @db_transaction(statement_timeout=500)
     def stat_counters(self, db=None, cur=None):
         """compute statistics about the number of tuples in various tables
 
         Returns:
             dict: a dictionary mapping textual labels (e.g., content) to
             integer values (e.g., the number of tuples in table content)
 
         """
         return dict(db.stat_counters())
 
     @db_transaction()
     def refresh_stat_counters(self, db=None, cur=None):
         """Recomputes the statistics for `stat_counters`."""
         keys = [
             'content',
             'directory',
             'directory_entry_dir',
             'directory_entry_file',
             'directory_entry_rev',
             'origin',
             'origin_visit',
             'person',
             'release',
             'revision',
             'revision_history',
             'skipped_content',
             'snapshot']
 
         for key in keys:
             cur.execute('select * from swh_update_counter(%s)', (key,))
 
     @db_transaction()
     def origin_metadata_add(self, origin_id, ts, provider, tool, metadata,
                             db=None, cur=None):
         """ Add an origin_metadata for the origin at ts with provenance and
         metadata.
 
         Args:
             origin_id (int): the origin's id for which the metadata is added
             ts (datetime): timestamp of the found metadata
             provider (int): the id of the provider of metadata (e.g. 'hal')
             tool (int): tool used to extract metadata
             metadata (jsonb): the metadata retrieved at the time and location
 
         Returns:
             id (int): the origin_metadata unique id
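
         Example (sketch; all identifiers are assumed to already exist
         in storage)::

             storage.origin_metadata_add(
                 origin_id, discovery_date, provider_id, tool_id,
                 {'name': 'project-name', 'version': '0.0.1'})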
         """
         if isinstance(ts, str):
             ts = dateutil.parser.parse(ts)
 
         return db.origin_metadata_add(origin_id, ts, provider, tool,
                                       metadata, cur)
 
     @db_transaction_generator(statement_timeout=500)
     def origin_metadata_get_by(self, origin_id, provider_type=None, db=None,
                                cur=None):
         """Retrieve list of all origin_metadata entries for the origin_id
 
         Args:
             origin_id (int): the unique origin identifier
             provider_type (str): (optional) type of provider
 
         Yields:
             the origin_metadata dictionaries, with the keys:
 
             - origin_id (int): origin's id
             - discovery_date (datetime): timestamp of discovery
             - tool_id (int): metadata's extracting tool
             - metadata (jsonb)
             - provider_id (int): metadata's provider
             - provider_name (str)
             - provider_type (str)
             - provider_url (str)
 
         """
         for line in db.origin_metadata_get_by(origin_id, provider_type, cur):
             yield dict(zip(db.origin_metadata_get_cols, line))
 
     @db_transaction_generator()
     def tool_add(self, tools, db=None, cur=None):
         """Add new tools to the storage.
 
         Args:
             tools (iterable of :class:`dict`): Tool information to add to
               storage. Each tool is a :class:`dict` with the following keys:
 
               - name (:class:`str`): name of the tool
               - version (:class:`str`): version of the tool
               - configuration (:class:`dict`): configuration of the tool,
                 must be json-encodable
 
         Yields:
             :class:`dict`: All the tools inserted in storage
             (including the internal ``id``). The order of the list is not
             guaranteed to match the order of the initial list.
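
         Example (sketch; the tool description is illustrative)::

             stored = list(storage.tool_add([{
                 'name': 'swh-deposit',
                 'version': '0.0.1',
                 'configuration': {'sword_version': '2'},
             }]))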
 
         """
         db.mktemp_tool(cur)
         db.copy_to(tools, 'tmp_tool',
                    ['name', 'version', 'configuration'],
                    cur)
 
         tools = db.tool_add_from_temp(cur)
         for line in tools:
             yield dict(zip(db.tool_cols, line))
 
     @db_transaction(statement_timeout=500)
     def tool_get(self, tool, db=None, cur=None):
         """Retrieve tool information.
 
         Args:
             tool (dict): Tool information we want to retrieve from storage.
               The dicts have the same keys as those used in :func:`tool_add`.
 
         Returns:
             dict: The full tool information if it exists (``id`` included),
             None otherwise.
 
         """
         tool_conf = tool['configuration']
         if isinstance(tool_conf, dict):
             tool_conf = json.dumps(tool_conf)
 
         idx = db.tool_get(tool['name'],
                           tool['version'],
                           tool_conf)
         if not idx:
             return None
         return dict(zip(db.tool_cols, idx))
 
     @db_transaction()
     def metadata_provider_add(self, provider_name, provider_type, provider_url,
                               metadata, db=None, cur=None):
         """Add a metadata provider.
 
         Args:
             provider_name (str): Its name
             provider_type (str): Its type (eg. `'deposit-client'`)
             provider_url (str): Its URL
             metadata: JSON-encodable object
 
         Returns:
             int: an identifier of the provider
         """
         return db.metadata_provider_add(provider_name, provider_type,
                                         provider_url, metadata, cur)
 
     @db_transaction()
     def metadata_provider_get(self, provider_id, db=None, cur=None):
         """Get a metadata provider
 
         Args:
             provider_id: Its identifier, as given by `metadata_provider_add`.
 
         Returns:
             dict: the metadata provider information,
                   or None if it does not exist.
         """
         result = db.metadata_provider_get(provider_id)
         if not result:
             return None
         return dict(zip(db.metadata_provider_cols, result))
 
     @db_transaction()
     def metadata_provider_get_by(self, provider, db=None, cur=None):
         """Get a metadata provider
 
         Args:
             provider (dict): A dictionary with keys:
                 * provider_name: Its name
                 * provider_url: Its URL
 
         Returns:
             dict: the metadata provider information,
                   or None if it does not exist.
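
         Example (sketch; the provider values are illustrative)::

             provider = storage.metadata_provider_get_by(
                 {'provider_name': 'hal',
                  'provider_url': 'http://hal.example.org'})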
         """
         result = db.metadata_provider_get_by(provider['provider_name'],
                                              provider['provider_url'])
         if not result:
             return None
         return dict(zip(db.metadata_provider_cols, result))
 
     def diff_directories(self, from_dir, to_dir, track_renaming=False):
         """Compute the list of file changes introduced between two arbitrary
         directories (insertion / deletion / modification / renaming of files).
 
         Args:
             from_dir (bytes): identifier of the directory to compare from
             to_dir (bytes): identifier of the directory to compare to
             track_renaming (bool): whether or not to track files renaming
 
         Returns:
             A list of dict describing the introduced file changes
             (see :func:`swh.storage.algos.diff.diff_directories`
             for more details).
         """
         return diff.diff_directories(self, from_dir, to_dir, track_renaming)
 
     def diff_revisions(self, from_rev, to_rev, track_renaming=False):
         """Compute the list of file changes introduced between two arbitrary
         revisions (insertion / deletion / modification / renaming of files).
 
         Args:
             from_rev (bytes): identifier of the revision to compare from
             to_rev (bytes): identifier of the revision to compare to
             track_renaming (bool): whether or not to track files renaming
 
         Returns:
             A list of dict describing the introduced file changes
             (see :func:`swh.storage.algos.diff.diff_revisions`
             for more details).
         """
         return diff.diff_revisions(self, from_rev, to_rev, track_renaming)
 
     def diff_revision(self, revision, track_renaming=False):
         """Compute the list of file changes introduced by a specific revision
         (insertion / deletion / modification / renaming of files) by comparing
         it against its first parent.
 
         Args:
             revision (bytes): identifier of the revision from which to
                 compute the list of files changes
             track_renaming (bool): whether or not to track files renaming
 
         Returns:
             A list of dict describing the introduced file changes
             (see :func:`swh.storage.algos.diff.diff_revision`
             for more details).
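
         Example (sketch; ``rev_id`` is an assumed revision id)::

             changes = storage.diff_revision(rev_id, track_renaming=True)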
         """
         return diff.diff_revision(self, revision, track_renaming)
diff --git a/swh/storage/tests/test_storage.py b/swh/storage/tests/test_storage.py
index 42890b74..b1b4fdec 100644
--- a/swh/storage/tests/test_storage.py
+++ b/swh/storage/tests/test_storage.py
@@ -1,2224 +1,2250 @@
 # Copyright (C) 2015-2018  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import copy
 import datetime
 import unittest
 import itertools
 from collections import defaultdict
 from unittest.mock import Mock, patch
 
 import pytest
 
 from hypothesis import given, strategies
 
 from swh.model import from_disk, identifiers
 from swh.model.hashutil import hash_to_bytes
 from swh.storage.tests.storage_testing import StorageTestFixture
 from swh.storage import HashCollision
 
 from .generate_data_test import gen_contents
 
 
 @pytest.mark.db
 class StorageTestDbFixture(StorageTestFixture):
     def setUp(self):
         super().setUp()
 
         db = self.test_db[self.TEST_DB_NAME]
         self.conn = db.conn
         self.cursor = db.cursor
 
         self.maxDiff = None
 
     def tearDown(self):
         self.reset_storage_tables()
         super().tearDown()
 
 
 class TestStorageData:
     def setUp(self):
         super().setUp()
 
         self.cont = {
             'data': b'42\n',
             'length': 3,
             'sha1': hash_to_bytes(
                 '34973274ccef6ab4dfaaf86599792fa9c3fe4689'),
             'sha1_git': hash_to_bytes(
                 'd81cc0710eb6cf9efd5b920a8453e1e07157b6cd'),
             'sha256': hash_to_bytes(
                 '673650f936cb3b0a2f93ce09d81be107'
                 '48b1b203c19e8176b4eefc1964a0cf3a'),
             'blake2s256': hash_to_bytes('d5fe1939576527e42cfd76a9455a2'
                                         '432fe7f56669564577dd93c4280e76d661d'),
             'status': 'visible',
         }
 
         self.cont2 = {
             'data': b'4242\n',
             'length': 5,
             'sha1': hash_to_bytes(
                 '61c2b3a30496d329e21af70dd2d7e097046d07b7'),
             'sha1_git': hash_to_bytes(
                 '36fade77193cb6d2bd826161a0979d64c28ab4fa'),
             'sha256': hash_to_bytes(
                 '859f0b154fdb2d630f45e1ecae4a8629'
                 '15435e663248bb8461d914696fc047cd'),
             'blake2s256': hash_to_bytes('849c20fad132b7c2d62c15de310adfe87be'
                                         '94a379941bed295e8141c6219810d'),
             'status': 'visible',
         }
 
         self.cont3 = {
             'data': b'424242\n',
             'length': 7,
             'sha1': hash_to_bytes(
                 '3e21cc4942a4234c9e5edd8a9cacd1670fe59f13'),
             'sha1_git': hash_to_bytes(
                 'c932c7649c6dfa4b82327d121215116909eb3bea'),
             'sha256': hash_to_bytes(
                 '92fb72daf8c6818288a35137b72155f5'
                 '07e5de8d892712ab96277aaed8cf8a36'),
             'blake2s256': hash_to_bytes('76d0346f44e5a27f6bafdd9c2befd304af'
                                         'f83780f93121d801ab6a1d4769db11'),
             'status': 'visible',
         }
 
         self.missing_cont = {
             'data': b'missing\n',
             'length': 8,
             'sha1': hash_to_bytes(
                 'f9c24e2abb82063a3ba2c44efd2d3c797f28ac90'),
             'sha1_git': hash_to_bytes(
                 '33e45d56f88993aae6a0198013efa80716fd8919'),
             'sha256': hash_to_bytes(
                 '6bbd052ab054ef222c1c87be60cd191a'
                 'ddedd24cc882d1f5f7f7be61dc61bb3a'),
             'blake2s256': hash_to_bytes('306856b8fd879edb7b6f1aeaaf8db9bbecc9'
                                         '93cd7f776c333ac3a782fa5c6eba'),
             'status': 'absent',
         }
 
         self.skipped_cont = {
             'length': 1024 * 1024 * 200,
             'sha1_git': hash_to_bytes(
                 '33e45d56f88993aae6a0198013efa80716fd8920'),
             'sha1': hash_to_bytes(
                 '43e45d56f88993aae6a0198013efa80716fd8920'),
             'sha256': hash_to_bytes(
                 '7bbd052ab054ef222c1c87be60cd191a'
                 'ddedd24cc882d1f5f7f7be61dc61bb3a'),
             'blake2s256': hash_to_bytes(
                 'ade18b1adecb33f891ca36664da676e1'
                 '2c772cc193778aac9a137b8dc5834b9b'),
             'reason': 'Content too long',
             'status': 'absent',
         }
 
         self.skipped_cont2 = {
             'length': 1024 * 1024 * 300,
             'sha1_git': hash_to_bytes(
                 '44e45d56f88993aae6a0198013efa80716fd8921'),
             'sha1': hash_to_bytes(
                 '54e45d56f88993aae6a0198013efa80716fd8920'),
             'sha256': hash_to_bytes(
                 '8cbd052ab054ef222c1c87be60cd191a'
                 'ddedd24cc882d1f5f7f7be61dc61bb3a'),
             'blake2s256': hash_to_bytes(
                 '9ce18b1adecb33f891ca36664da676e1'
                 '2c772cc193778aac9a137b8dc5834b9b'),
             'reason': 'Content too long',
             'status': 'absent',
         }
 
         self.dir = {
             'id': b'4\x013\x422\x531\x000\xf51\xe62\xa73\xff7\xc3\xa90',
             'entries': [
                 {
                     'name': b'foo',
                     'type': 'file',
                     'target': self.cont['sha1_git'],
                     'perms': from_disk.DentryPerms.content,
                 },
                 {
                     'name': b'bar\xc3',
                     'type': 'dir',
                     'target': b'12345678901234567890',
                     'perms': from_disk.DentryPerms.directory,
                 },
             ],
         }
 
         self.dir2 = {
             'id': b'4\x013\x422\x531\x000\xf51\xe62\xa73\xff7\xc3\xa95',
             'entries': [
                 {
                     'name': b'oof',
                     'type': 'file',
                     'target': self.cont2['sha1_git'],
                     'perms': from_disk.DentryPerms.content,
                 }
             ],
         }
 
         self.dir3 = {
             'id': hash_to_bytes('33e45d56f88993aae6a0198013efa80716fd8921'),
             'entries': [
                 {
                     'name': b'foo',
                     'type': 'file',
                     'target': self.cont['sha1_git'],
                     'perms': from_disk.DentryPerms.content,
                 },
                 {
                     'name': b'subdir',
                     'type': 'dir',
                     'target': self.dir['id'],
                     'perms': from_disk.DentryPerms.directory,
                 },
                 {
                     'name': b'hello',
                     'type': 'file',
                     'target': b'12345678901234567890',
                     'perms': from_disk.DentryPerms.content,
                 },
 
             ],
         }
 
         self.minus_offset = datetime.timezone(datetime.timedelta(minutes=-120))
         self.plus_offset = datetime.timezone(datetime.timedelta(minutes=120))
 
         self.revision = {
             'id': b'56789012345678901234',
             'message': b'hello',
             'author': {
                 'name': b'Nicolas Dandrimont',
                 'email': b'nicolas@example.com',
                 'fullname': b'Nicolas Dandrimont <nicolas@example.com> ',
             },
             'date': {
                 'timestamp': 1234567890,
                 'offset': 120,
                 'negative_utc': None,
             },
             'committer': {
                 'name': b'St\xc3fano Zacchiroli',
                 'email': b'stefano@example.com',
                 'fullname': b'St\xc3fano Zacchiroli <stefano@example.com>'
             },
             'committer_date': {
                 'timestamp': 1123456789,
                 'offset': 0,
                 'negative_utc': True,
             },
             'parents': [b'01234567890123456789', b'23434512345123456789'],
             'type': 'git',
             'directory': self.dir['id'],
             'metadata': {
                 'checksums': {
                     'sha1': 'tarball-sha1',
                     'sha256': 'tarball-sha256',
                 },
                 'signed-off-by': 'some-dude',
                 'extra_headers': [
                     ['gpgsig', b'test123'],
                     ['mergetags', [b'foo\\bar', b'\x22\xaf\x89\x80\x01\x00']],
                 ],
             },
             'synthetic': True
         }
 
         self.revision2 = {
             'id': b'87659012345678904321',
             'message': b'hello again',
             'author': {
                 'name': b'Roberto Dicosmo',
                 'email': b'roberto@example.com',
                 'fullname': b'Roberto Dicosmo <roberto@example.com>',
             },
             'date': {
                 'timestamp': {
                     'seconds': 1234567843,
                     'microseconds': 220000,
                 },
                 'offset': -720,
                 'negative_utc': None,
             },
             'committer': {
                 'name': b'tony',
                 'email': b'ar@dumont.fr',
                 'fullname': b'tony <ar@dumont.fr>',
             },
             'committer_date': {
                 'timestamp': 1123456789,
                 'offset': 0,
                 'negative_utc': False,
             },
             'parents': [b'01234567890123456789'],
             'type': 'git',
             'directory': self.dir2['id'],
             'metadata': None,
             'synthetic': False
         }
 
         self.revision3 = {
             'id': hash_to_bytes('7026b7c1a2af56521e951c01ed20f255fa054238'),
             'message': b'a simple revision with no parents this time',
             'author': {
                 'name': b'Roberto Dicosmo',
                 'email': b'roberto@example.com',
                 'fullname': b'Roberto Dicosmo <roberto@example.com>',
             },
             'date': {
                 'timestamp': {
                     'seconds': 1234567843,
                     'microseconds': 220000,
                 },
                 'offset': -720,
                 'negative_utc': None,
             },
             'committer': {
                 'name': b'tony',
                 'email': b'ar@dumont.fr',
                 'fullname': b'tony <ar@dumont.fr>',
             },
             'committer_date': {
                 'timestamp': 1127351742,
                 'offset': 0,
                 'negative_utc': False,
             },
             'parents': [],
             'type': 'git',
             'directory': self.dir2['id'],
             'metadata': None,
             'synthetic': True
         }
 
         self.revision4 = {
             'id': hash_to_bytes('368a48fe15b7db2383775f97c6b247011b3f14f4'),
             'message': b'parent of self.revision2',
             'author': {
                 'name': b'me',
                 'email': b'me@soft.heri',
                 'fullname': b'me <me@soft.heri>',
             },
             'date': {
                 'timestamp': {
                     'seconds': 1244567843,
                     'microseconds': 220000,
                 },
                 'offset': -720,
                 'negative_utc': None,
             },
             'committer': {
                 'name': b'committer-dude',
                 'email': b'committer@dude.com',
                 'fullname': b'committer-dude <committer@dude.com>',
             },
             'committer_date': {
                 'timestamp': {
                     'seconds': 1244567843,
                     'microseconds': 220000,
                 },
                 'offset': -720,
                 'negative_utc': None,
             },
             'parents': [self.revision3['id']],
             'type': 'git',
             'directory': self.dir['id'],
             'metadata': None,
             'synthetic': False
         }
 
         self.origin = {
             'url': 'file:///dev/null',
             'type': 'git',
         }
 
         self.origin2 = {
             'url': 'file:///dev/zero',
             'type': 'git',
         }
 
         self.provider = {
             'name': 'hal',
             'type': 'deposit-client',
             'url': 'http:///hal/inria',
             'metadata': {
                 'location': 'France'
             }
         }
 
         self.metadata_tool = {
             'name': 'swh-deposit',
             'version': '0.0.1',
             'configuration': {
                 'sword_version': '2'
             }
         }
 
         self.origin_metadata = {
             'origin': self.origin,
             'discovery_date': datetime.datetime(2015, 1, 1, 23, 0, 0,
                                                 tzinfo=datetime.timezone.utc),
             'provider': self.provider,
             'tool': 'swh-deposit',
             'metadata': {
                 'name': 'test_origin_metadata',
                 'version': '0.0.1'
              }
         }
 
         self.origin_metadata2 = {
             'origin': self.origin,
             'discovery_date': datetime.datetime(2017, 1, 1, 23, 0, 0,
                                                 tzinfo=datetime.timezone.utc),
             'provider': self.provider,
             'tool': 'swh-deposit',
             'metadata': {
                 'name': 'test_origin_metadata',
                 'version': '0.0.1'
              }
         }
 
         self.date_visit1 = datetime.datetime(2015, 1, 1, 23, 0, 0,
                                              tzinfo=datetime.timezone.utc)
 
         self.date_visit2 = datetime.datetime(2017, 1, 1, 23, 0, 0,
                                              tzinfo=datetime.timezone.utc)
 
         self.date_visit3 = datetime.datetime(2018, 1, 1, 23, 0, 0,
                                              tzinfo=datetime.timezone.utc)
 
         self.release = {
             'id': b'87659012345678901234',
             'name': b'v0.0.1',
             'author': {
                 'name': b'olasd',
                 'email': b'nic@olasd.fr',
                 'fullname': b'olasd <nic@olasd.fr>',
             },
             'date': {
                 'timestamp': 1234567890,
                 'offset': 42,
                 'negative_utc': None,
             },
             'target': b'43210987654321098765',
             'target_type': 'revision',
             'message': b'synthetic release',
             'synthetic': True,
         }
 
         self.release2 = {
             'id': b'56789012348765901234',
             'name': b'v0.0.2',
             'author': {
                 'name': b'tony',
                 'email': b'ar@dumont.fr',
                 'fullname': b'tony <ar@dumont.fr>',
             },
             'date': {
                 'timestamp': 1634366813,
                 'offset': -120,
                 'negative_utc': None,
             },
             'target': b'432109\xa9765432\xc309\x00765',
             'target_type': 'revision',
             'message': b'v0.0.2\nMisc performance improvements + bug fixes',
             'synthetic': False
         }
 
         self.release3 = {
             'id': b'87659012345678904321',
             'name': b'v0.0.2',
             'author': {
                 'name': b'tony',
                 'email': b'tony@ardumont.fr',
                 'fullname': b'tony <tony@ardumont.fr>',
             },
             'date': {
                 'timestamp': 1634336813,
                 'offset': 0,
                 'negative_utc': False,
             },
             'target': self.revision2['id'],
             'target_type': 'revision',
             'message': b'yet another synthetic release',
             'synthetic': True,
         }
 
         self.fetch_history_date = datetime.datetime(
             2015, 1, 2, 21, 0, 0,
             tzinfo=datetime.timezone.utc)
         self.fetch_history_end = datetime.datetime(
             2015, 1, 2, 23, 0, 0,
             tzinfo=datetime.timezone.utc)
 
         self.fetch_history_duration = (self.fetch_history_end -
                                        self.fetch_history_date)
 
         self.fetch_history_data = {
             'status': True,
             'result': {'foo': 'bar'},
             'stdout': 'blabla',
             'stderr': 'blablabla',
         }
 
         self.snapshot = {
             'id': hash_to_bytes('2498dbf535f882bc7f9a18fb16c9ad27fda7bab7'),
             'branches': {
                 b'master': {
                     'target': self.revision['id'],
                     'target_type': 'revision',
                 },
             },
             'next_branch': None
         }
 
         self.empty_snapshot = {
             'id': hash_to_bytes('1a8893e6a86f444e8be8e7bda6cb34fb1735a00e'),
             'branches': {},
             'next_branch': None
         }
 
         self.complete_snapshot = {
             'id': hash_to_bytes('6e65b86363953b780d92b0a928f3e8fcdd10db36'),
             'branches': {
                 b'directory': {
                     'target': hash_to_bytes(
                         '1bd0e65f7d2ff14ae994de17a1e7fe65111dcad8'),
                     'target_type': 'directory',
                 },
                 b'content': {
                     'target': hash_to_bytes(
                         'fe95a46679d128ff167b7c55df5d02356c5a1ae1'),
                     'target_type': 'content',
                 },
                 b'alias': {
                     'target': b'revision',
                     'target_type': 'alias',
                 },
                 b'revision': {
                     'target': hash_to_bytes(
                         'aafb16d69fd30ff58afdd69036a26047f3aebdc6'),
                     'target_type': 'revision',
                 },
                 b'release': {
                     'target': hash_to_bytes(
                         '7045404f3d1c54e6473c71bbb716529fbad4be24'),
                     'target_type': 'release',
                 },
                 b'snapshot': {
                     'target': hash_to_bytes(
                         '1a8893e6a86f444e8be8e7bda6cb34fb1735a00e'),
                     'target_type': 'snapshot',
                 },
                 b'dangling': None,
             },
             'next_branch': None
         }
 
 
 class CommonTestStorage(TestStorageData):
     """Base class for Storage testing.
 
     This class is used as-is to test local storage (see TestLocalStorage
     below) and remote storage (see TestRemoteStorage in
     test_remote_storage.py).
 
     The two classes must inherit from this base class separately to
     keep nosetests from running the base class's tests twice.
 
     """
     @staticmethod
     def normalize_entity(entity):
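         # Mirror what the storage backend hands back: deep-copy the entity
         # and normalize its date fields through
         # identifiers.normalize_timestamp, leaving other keys untouched.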
         entity = copy.deepcopy(entity)
         for key in ('date', 'committer_date'):
             if key in entity:
                 entity[key] = identifiers.normalize_timestamp(entity[key])
 
         return entity
 
     def test_check_config(self):
         self.assertTrue(self.storage.check_config(check_write=True))
         self.assertTrue(self.storage.check_config(check_write=False))
 
     def test_content_add(self):
         cont = self.cont
 
         self.storage.content_add([cont])
         if hasattr(self.storage, 'objstorage'):
             self.assertIn(cont['sha1'], self.storage.objstorage)
         self.cursor.execute('SELECT sha1, sha1_git, sha256, length, status'
                             ' FROM content WHERE sha1 = %s',
                             (cont['sha1'],))
         datum = self.cursor.fetchone()
         self.assertEqual(
             (datum[0].tobytes(), datum[1].tobytes(), datum[2].tobytes(),
              datum[3], datum[4]),
             (cont['sha1'], cont['sha1_git'], cont['sha256'],
              cont['length'], 'visible'))
 
     def test_content_add_collision(self):
         cont1 = self.cont
 
         # create (corrupted) content with same sha1{,_git} but != sha256
         cont1b = cont1.copy()
         sha256_array = bytearray(cont1b['sha256'])
         sha256_array[0] += 1
         cont1b['sha256'] = bytes(sha256_array)
 
         with self.assertRaises(HashCollision) as cm:
             self.storage.content_add([cont1, cont1b])
 
         self.assertIn(cm.exception.args[0], ['sha1', 'sha1_git', 'blake2s256'])
 
     def test_skipped_content_add(self):
         cont = self.skipped_cont.copy()
         cont2 = self.skipped_cont2.copy()
         cont2['blake2s256'] = None
 
         self.storage.content_add([cont, cont, cont2])
 
         self.cursor.execute('SELECT sha1, sha1_git, sha256, blake2s256, '
                             'length, status, reason '
                             'FROM skipped_content ORDER BY sha1_git')
 
         datums = self.cursor.fetchall()
 
         self.assertEqual(2, len(datums))
         datum = datums[0]
         self.assertEqual(
             (datum[0].tobytes(), datum[1].tobytes(), datum[2].tobytes(),
              datum[3].tobytes(), datum[4], datum[5], datum[6]),
             (cont['sha1'], cont['sha1_git'], cont['sha256'],
              cont['blake2s256'], cont['length'], 'absent',
              'Content too long')
         )
 
         datum2 = datums[1]
         self.assertEqual(
             (datum2[0].tobytes(), datum2[1].tobytes(), datum2[2].tobytes(),
              datum2[3], datum2[4], datum2[5], datum2[6]),
             (cont2['sha1'], cont2['sha1_git'], cont2['sha256'],
              cont2['blake2s256'], cont2['length'], 'absent',
              'Content too long')
         )
 
     @pytest.mark.property_based
     @given(strategies.sets(
         elements=strategies.sampled_from(
             ['sha256', 'sha1_git', 'blake2s256']),
         min_size=0))
     def test_content_missing(self, algos):
         algos |= {'sha1'}
         cont2 = self.cont2
         missing_cont = self.missing_cont
         self.storage.content_add([cont2])
         test_contents = [cont2]
         missing_per_hash = defaultdict(list)
         for i in range(256):
             test_content = missing_cont.copy()
             for hash in algos:
                 test_content[hash] = bytes([i]) + test_content[hash][1:]
                 missing_per_hash[hash].append(test_content[hash])
             test_contents.append(test_content)
 
         self.assertCountEqual(
             self.storage.content_missing(test_contents),
             missing_per_hash['sha1']
         )
 
         for hash in algos:
             self.assertCountEqual(
                 self.storage.content_missing(test_contents, key_hash=hash),
                 missing_per_hash[hash]
             )
 
     def test_content_missing_per_sha1(self):
         # given
         cont2 = self.cont2
         missing_cont = self.missing_cont
         self.storage.content_add([cont2])
         # when
         gen = self.storage.content_missing_per_sha1([cont2['sha1'],
                                                      missing_cont['sha1']])
 
         # then
         self.assertEqual(list(gen), [missing_cont['sha1']])
 
     def test_content_get_metadata(self):
         cont1 = self.cont.copy()
         cont2 = self.cont2.copy()
 
         self.storage.content_add([cont1, cont2])
 
         gen = self.storage.content_get_metadata([cont1['sha1'], cont2['sha1']])
 
         # we only retrieve the metadata
         cont1.pop('data')
         cont2.pop('data')
 
         self.assertCountEqual(list(gen), [cont1, cont2])
 
     def test_content_get_metadata_missing_sha1(self):
         cont1 = self.cont.copy()
         cont2 = self.cont2.copy()
 
         missing_cont = self.missing_cont.copy()
 
         self.storage.content_add([cont1, cont2])
 
         gen = self.storage.content_get_metadata([missing_cont['sha1']])
 
         # All the metadata keys are None
         missing_cont.pop('data')
         for key in list(missing_cont):
             if key != 'sha1':
                 missing_cont[key] = None
 
         self.assertEqual(list(gen), [missing_cont])
 
     @staticmethod
     def _transform_entries(dir_, *, prefix=b''):
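         # Flatten a directory's entries into the row format that
         # storage.directory_ls yields: one dict per entry, with the
         # content-specific columns (status, sha1, ..., length) left as None.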
         for ent in dir_['entries']:
             yield {
                 'dir_id': dir_['id'],
                 'type': ent['type'],
                 'target': ent['target'],
                 'name': prefix + ent['name'],
                 'perms': ent['perms'],
                 'status': None,
                 'sha1': None,
                 'sha1_git': None,
                 'sha256': None,
                 'length': None,
             }
 
     def test_directory_add(self):
         init_missing = list(self.storage.directory_missing([self.dir['id']]))
         self.assertEqual([self.dir['id']], init_missing)
 
         self.storage.directory_add([self.dir])
 
         actual_data = list(self.storage.directory_ls(self.dir['id']))
         expected_data = list(self._transform_entries(self.dir))
         self.assertCountEqual(expected_data, actual_data)
 
         after_missing = list(self.storage.directory_missing([self.dir['id']]))
         self.assertEqual([], after_missing)
 
     def test_directory_get_recursive(self):
         init_missing = list(self.storage.directory_missing([self.dir['id']]))
         self.assertEqual([self.dir['id']], init_missing)
 
         self.storage.directory_add([self.dir, self.dir2, self.dir3])
 
         actual_data = list(self.storage.directory_ls(
             self.dir['id'], recursive=True))
         expected_data = list(self._transform_entries(self.dir))
         self.assertCountEqual(expected_data, actual_data)
 
         actual_data = list(self.storage.directory_ls(
             self.dir2['id'], recursive=True))
         expected_data = list(self._transform_entries(self.dir2))
         self.assertCountEqual(expected_data, actual_data)
 
         actual_data = list(self.storage.directory_ls(
             self.dir3['id'], recursive=True))
         expected_data = list(itertools.chain(
             self._transform_entries(self.dir3),
             self._transform_entries(self.dir, prefix=b'subdir/')))
         self.assertCountEqual(expected_data, actual_data)
 
     def test_directory_entry_get_by_path(self):
         # given
         init_missing = list(self.storage.directory_missing([self.dir3['id']]))
         self.assertEqual([self.dir3['id']], init_missing)
 
         self.storage.directory_add([self.dir3])
 
         expected_entries = [
             {
                 'dir_id': self.dir3['id'],
                 'name': b'foo',
                 'type': 'file',
                 'target': self.cont['sha1_git'],
                 'sha1': None,
                 'sha1_git': None,
                 'sha256': None,
                 'status': None,
                 'perms': from_disk.DentryPerms.content,
                 'length': None,
             },
             {
                 'dir_id': self.dir3['id'],
                 'name': b'subdir',
                 'type': 'dir',
                 'target': self.dir['id'],
                 'sha1': None,
                 'sha1_git': None,
                 'sha256': None,
                 'status': None,
                 'perms': from_disk.DentryPerms.directory,
                 'length': None,
             },
             {
                 'dir_id': self.dir3['id'],
                 'name': b'hello',
                 'type': 'file',
                 'target': b'12345678901234567890',
                 'sha1': None,
                 'sha1_git': None,
                 'sha256': None,
                 'status': None,
                 'perms': from_disk.DentryPerms.content,
                 'length': None,
             },
         ]
 
         # when (all must be found here)
         for entry, expected_entry in zip(self.dir3['entries'],
                                          expected_entries):
             actual_entry = self.storage.directory_entry_get_by_path(
                 self.dir3['id'],
                 [entry['name']])
             self.assertEqual(actual_entry, expected_entry)
 
         # when (nothing should be found here since self.dir is not persisted.)
         for entry in self.dir['entries']:
             actual_entry = self.storage.directory_entry_get_by_path(
                 self.dir['id'],
                 [entry['name']])
             self.assertIsNone(actual_entry)
 
     def test_revision_add(self):
         init_missing = self.storage.revision_missing([self.revision['id']])
         self.assertEqual([self.revision['id']], list(init_missing))
 
         self.storage.revision_add([self.revision])
 
         end_missing = self.storage.revision_missing([self.revision['id']])
         self.assertEqual([], list(end_missing))
 
     def test_revision_log(self):
         # given
         # self.revision4 -is-child-of-> self.revision3
         self.storage.revision_add([self.revision3,
                                    self.revision4])
 
         # when
         actual_results = list(self.storage.revision_log(
             [self.revision4['id']]))
 
         # hack: ids generated
         for actual_result in actual_results:
             if 'id' in actual_result['author']:
                 del actual_result['author']['id']
             if 'id' in actual_result['committer']:
                 del actual_result['committer']['id']
 
         self.assertEqual(len(actual_results), 2)  # rev4 -child-> rev3
         self.assertEqual(actual_results[0],
                          self.normalize_entity(self.revision4))
         self.assertEqual(actual_results[1],
                          self.normalize_entity(self.revision3))
 
     def test_revision_log_with_limit(self):
         # given
         # self.revision4 -is-child-of-> self.revision3
         self.storage.revision_add([self.revision3,
                                    self.revision4])
         actual_results = list(self.storage.revision_log(
             [self.revision4['id']], 1))
 
         # hack: ids generated
         for actual_result in actual_results:
             if 'id' in actual_result['author']:
                 del actual_result['author']['id']
             if 'id' in actual_result['committer']:
                 del actual_result['committer']['id']
 
         self.assertEqual(len(actual_results), 1)
         self.assertEqual(actual_results[0], self.revision4)
 
     @staticmethod
     def _short_revision(revision):
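         # Project a revision onto the (id, parents) pair that
         # revision_shortlog yields for it.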
         return [revision['id'], revision['parents']]
 
     def test_revision_shortlog(self):
         # given
         # self.revision4 -is-child-of-> self.revision3
         self.storage.revision_add([self.revision3,
                                    self.revision4])
 
         # when
         actual_results = list(self.storage.revision_shortlog(
             [self.revision4['id']]))
 
         self.assertEqual(len(actual_results), 2)  # rev4 -child-> rev3
         self.assertEqual(list(actual_results[0]),
                          self._short_revision(self.revision4))
         self.assertEqual(list(actual_results[1]),
                          self._short_revision(self.revision3))
 
     def test_revision_shortlog_with_limit(self):
         # given
         # self.revision4 -is-child-of-> self.revision3
         self.storage.revision_add([self.revision3,
                                    self.revision4])
         actual_results = list(self.storage.revision_shortlog(
             [self.revision4['id']], 1))
 
         self.assertEqual(len(actual_results), 1)
         self.assertEqual(list(actual_results[0]),
                          self._short_revision(self.revision4))
 
     def test_revision_get(self):
         self.storage.revision_add([self.revision])
 
         actual_revisions = list(self.storage.revision_get(
             [self.revision['id'], self.revision2['id']]))
 
         # when
         if 'id' in actual_revisions[0]['author']:
             del actual_revisions[0]['author']['id']  # hack: ids are generated
         if 'id' in actual_revisions[0]['committer']:
             del actual_revisions[0]['committer']['id']
 
         self.assertEqual(len(actual_revisions), 2)
         self.assertEqual(actual_revisions[0],
                          self.normalize_entity(self.revision))
         self.assertIsNone(actual_revisions[1])
 
     def test_revision_get_no_parents(self):
         self.storage.revision_add([self.revision3])
 
         get = list(self.storage.revision_get([self.revision3['id']]))
 
         self.assertEqual(len(get), 1)
         self.assertEqual(get[0]['parents'], [])  # no parents on this one
 
     def test_release_add(self):
         init_missing = self.storage.release_missing([self.release['id'],
                                                      self.release2['id']])
         self.assertEqual([self.release['id'], self.release2['id']],
                          list(init_missing))
 
         self.storage.release_add([self.release, self.release2])
 
         end_missing = self.storage.release_missing([self.release['id'],
                                                     self.release2['id']])
         self.assertEqual([], list(end_missing))
 
     def test_release_get(self):
         # given
         self.storage.release_add([self.release, self.release2])
 
         # when
         actual_releases = list(self.storage.release_get([self.release['id'],
                                                          self.release2['id']]))
 
         # then
         for actual_release in actual_releases:
             if 'id' in actual_release['author']:
                 del actual_release['author']['id']  # hack: ids are generated
 
         self.assertEqual([self.normalize_entity(self.release),
                           self.normalize_entity(self.release2)],
                          [actual_releases[0], actual_releases[1]])
 
         unknown_releases = \
             list(self.storage.release_get([self.release3['id']]))
 
         self.assertIsNone(unknown_releases[0])
 
     def test_origin_add_one(self):
         origin0 = self.storage.origin_get(self.origin)
         self.assertIsNone(origin0)
 
         id = self.storage.origin_add_one(self.origin)
 
         actual_origin = self.storage.origin_get({'url': self.origin['url'],
                                                  'type': self.origin['type']})
         self.assertEqual(actual_origin['id'], id)
 
         id2 = self.storage.origin_add_one(self.origin)
 
         self.assertEqual(id, id2)
 
     def test_origin_add(self):
         origin0 = self.storage.origin_get(self.origin)
         self.assertIsNone(origin0)
 
         origin1, origin2 = self.storage.origin_add([self.origin, self.origin2])
 
         actual_origin = self.storage.origin_get({
             'url': self.origin['url'],
             'type': self.origin['type'],
         })
         self.assertEqual(actual_origin['id'], origin1['id'])
 
         actual_origin2 = self.storage.origin_get({
             'url': self.origin2['url'],
             'type': self.origin2['type'],
         })
         self.assertEqual(actual_origin2['id'], origin2['id'])
 
     def test_origin_add_twice(self):
         add1 = self.storage.origin_add([self.origin, self.origin2])
         add2 = self.storage.origin_add([self.origin, self.origin2])
 
         self.assertEqual(add1, add2)
 
     def test_origin_get(self):
         self.assertIsNone(self.storage.origin_get(self.origin))
         id = self.storage.origin_add_one(self.origin)
 
         # lookup per type and url (returns id)
         actual_origin0 = self.storage.origin_get({'url': self.origin['url'],
                                                   'type': self.origin['type']})
         self.assertEqual(actual_origin0['id'], id)
 
         # lookup per id (returns dict)
         actual_origin1 = self.storage.origin_get({'id': id})
 
         self.assertEqual(actual_origin1, {'id': id,
                                           'type': self.origin['type'],
                                           'url': self.origin['url']})
 
     def test_origin_search(self):
         found_origins = list(self.storage.origin_search(self.origin['url']))
         self.assertEqual(len(found_origins), 0)
 
         found_origins = list(self.storage.origin_search(self.origin['url'],
                                                         regexp=True))
         self.assertEqual(len(found_origins), 0)
 
         id = self.storage.origin_add_one(self.origin)
         origin_data = {'id': id,
                        'type': self.origin['type'],
                        'url': self.origin['url']}
         found_origins = list(self.storage.origin_search(self.origin['url']))
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin_data)
 
         found_origins = list(self.storage.origin_search(
             '.' + self.origin['url'][1:-1] + '.', regexp=True))
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin_data)
 
         id2 = self.storage.origin_add_one(self.origin2)
         origin2_data = {'id': id2,
                         'type': self.origin2['type'],
                         'url': self.origin2['url']}
         found_origins = list(self.storage.origin_search(self.origin2['url']))
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin2_data)
 
         found_origins = list(self.storage.origin_search(
             '.' + self.origin2['url'][1:-1] + '.', regexp=True))
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin2_data)
 
         found_origins = list(self.storage.origin_search('/'))
         self.assertEqual(len(found_origins), 2)
 
         found_origins = list(self.storage.origin_search('.*/.*', regexp=True))
         self.assertEqual(len(found_origins), 2)
 
         found_origins = list(self.storage.origin_search('/', offset=0, limit=1)) # noqa
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin_data)
 
         found_origins = list(self.storage.origin_search('.*/.*', offset=0, limit=1, regexp=True)) # noqa
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin_data)
 
         found_origins = list(self.storage.origin_search('/', offset=1, limit=1)) # noqa
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin2_data)
 
         found_origins = list(self.storage.origin_search('.*/.*', offset=1, limit=1, regexp=True)) # noqa
         self.assertEqual(len(found_origins), 1)
         self.assertEqual(found_origins[0], origin2_data)
 
     def test_origin_visit_add(self):
         # given
         self.assertIsNone(self.storage.origin_get(self.origin2))
 
         origin_id = self.storage.origin_add_one(self.origin2)
         self.assertIsNotNone(origin_id)
 
         # when
         origin_visit1 = self.storage.origin_visit_add(
             origin_id,
             date=self.date_visit2)
 
         # then
         self.assertEqual(origin_visit1['origin'], origin_id)
         self.assertIsNotNone(origin_visit1['visit'])
 
         actual_origin_visits = list(self.storage.origin_visit_get(origin_id))
         self.assertEqual(actual_origin_visits,
                          [{
                              'origin': origin_id,
                              'date': self.date_visit2,
                              'visit': origin_visit1['visit'],
                              'status': 'ongoing',
                              'metadata': None,
                              'snapshot': None,
                          }])
 
     def test_origin_visit_update(self):
         # given
         origin_id = self.storage.origin_add_one(self.origin2)
         origin_id2 = self.storage.origin_add_one(self.origin)
 
         origin_visit1 = self.storage.origin_visit_add(
             origin_id,
             date=self.date_visit2)
 
         origin_visit2 = self.storage.origin_visit_add(
             origin_id,
             date=self.date_visit3)
 
         origin_visit3 = self.storage.origin_visit_add(
             origin_id2,
             date=self.date_visit3)
 
         # when
         visit1_metadata = {
             'contents': 42,
             'directories': 22,
         }
         self.storage.origin_visit_update(
             origin_id, origin_visit1['visit'], status='full',
             metadata=visit1_metadata)
         self.storage.origin_visit_update(origin_id2, origin_visit3['visit'],
                                          status='partial')
 
         # then
         actual_origin_visits = list(self.storage.origin_visit_get(origin_id))
         self.assertEqual(actual_origin_visits, [{
             'origin': origin_visit2['origin'],
             'date': self.date_visit2,
             'visit': origin_visit1['visit'],
             'status': 'full',
             'metadata': visit1_metadata,
             'snapshot': None,
         }, {
             'origin': origin_visit2['origin'],
             'date': self.date_visit3,
             'visit': origin_visit2['visit'],
             'status': 'ongoing',
             'metadata': None,
             'snapshot': None,
         }])
 
         actual_origin_visits_bis = list(self.storage.origin_visit_get(
             origin_id, limit=1))
         self.assertEqual(actual_origin_visits_bis,
                          [{
                              'origin': origin_visit2['origin'],
                              'date': self.date_visit2,
                              'visit': origin_visit1['visit'],
                              'status': 'full',
                              'metadata': visit1_metadata,
                              'snapshot': None,
                          }])
 
         actual_origin_visits_ter = list(self.storage.origin_visit_get(
             origin_id, last_visit=origin_visit1['visit']))
         self.assertEqual(actual_origin_visits_ter,
                          [{
                              'origin': origin_visit2['origin'],
                              'date': self.date_visit3,
                              'visit': origin_visit2['visit'],
                              'status': 'ongoing',
                              'metadata': None,
                              'snapshot': None,
                          }])
 
         actual_origin_visits2 = list(self.storage.origin_visit_get(origin_id2))
         self.assertEqual(actual_origin_visits2,
                          [{
                              'origin': origin_visit3['origin'],
                              'date': self.date_visit3,
                              'visit': origin_visit3['visit'],
                              'status': 'partial',
                              'metadata': None,
                              'snapshot': None,
                          }])
 
     def test_origin_visit_get_by(self):
         origin_id = self.storage.origin_add_one(self.origin2)
         origin_id2 = self.storage.origin_add_one(self.origin)
 
         origin_visit1 = self.storage.origin_visit_add(
             origin_id,
             date=self.date_visit2)
 
         self.storage.snapshot_add(origin_id, origin_visit1['visit'],
                                   self.snapshot)
 
         # Add some other {origin, visit} entries
         self.storage.origin_visit_add(origin_id, date=self.date_visit3)
         self.storage.origin_visit_add(origin_id2, date=self.date_visit3)
 
         # when
         visit1_metadata = {
             'contents': 42,
             'directories': 22,
         }
 
         self.storage.origin_visit_update(
             origin_id, origin_visit1['visit'], status='full',
             metadata=visit1_metadata)
 
         expected_origin_visit = origin_visit1.copy()
         expected_origin_visit.update({
             'origin': origin_id,
             'visit': origin_visit1['visit'],
             'date': self.date_visit2,
             'metadata': visit1_metadata,
             'status': 'full',
             'snapshot': self.snapshot['id'],
         })
 
         # when
         actual_origin_visit1 = self.storage.origin_visit_get_by(
             origin_visit1['origin'], origin_visit1['visit'])
 
         # then
         self.assertEqual(actual_origin_visit1, expected_origin_visit)
 
     def test_origin_visit_get_by_no_result(self):
         # No result
         actual_origin_visit = self.storage.origin_visit_get_by(
             10, 999)
 
         self.assertIsNone(actual_origin_visit)
 
+    def test_person_get(self):
+        # given (persons are injected through a revision, for example)
+        self.storage.revision_add([self.revision])
+        rev = list(self.storage.revision_get([self.revision['id']]))[0]
+
+        id0 = rev['committer']['id']
+        person0 = self.revision['committer']
+
+        id1 = rev['author']['id']
+        person1 = self.revision['author']
+
+        # when
+        actual_persons = self.storage.person_get([id0, id1])
+
+        # then
+        self.assertEqual(
+            list(actual_persons), [
+                {
+                    'id': id0,
+                    'fullname': person0['fullname'],
+                    'name': person0['name'],
+                    'email': person0['email'],
+                },
+                {
+                    'id': id1,
+                    'fullname': person1['fullname'],
+                    'name': person1['name'],
+                    'email': person1['email'],
+                }
+            ])
+
+    def test_person_get_fullname_unicity(self):
+        # given (persons are injected through revisions, for example)
+        revision = self.revision
+
+        # create a revision with the same committer fullname but without
+        # name and email
+        revision2 = copy.deepcopy(self.revision2)
+        revision2['committer'] = dict(revision['committer'])
+        revision2['committer']['email'] = None
+        revision2['committer']['name'] = None
+
+        self.storage.revision_add([revision])
+        self.storage.revision_add([revision2])
+
+        # when getting added revisions
+        revisions = list(
+            self.storage.revision_get([revision['id'], revision2['id']]))
+
+        # then
+        # check committers are the same
+        self.assertEqual(revisions[0]['committer'],
+                         revisions[1]['committer'])
+
+        # check that person_get returns the same result
+        person0 = list(
+            self.storage.person_get([revisions[0]['committer']['id']]))[0]
+
+        person1 = list(
+            self.storage.person_get([revisions[1]['committer']['id']]))[0]
+
+        self.assertEqual(person0, person1)
+
     def test_snapshot_add_get_empty(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit_id = origin_visit1['visit']
 
         self.storage.snapshot_add(origin_id, visit_id, self.empty_snapshot)
 
         by_id = self.storage.snapshot_get(self.empty_snapshot['id'])
         self.assertEqual(by_id, self.empty_snapshot)
 
         by_ov = self.storage.snapshot_get_by_origin_visit(origin_id, visit_id)
         self.assertEqual(by_ov, self.empty_snapshot)
 
     def test_snapshot_add_get_complete(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit_id = origin_visit1['visit']
 
         self.storage.snapshot_add(origin_id, visit_id, self.complete_snapshot)
 
         by_id = self.storage.snapshot_get(self.complete_snapshot['id'])
         self.assertEqual(by_id, self.complete_snapshot)
 
         by_ov = self.storage.snapshot_get_by_origin_visit(origin_id, visit_id)
         self.assertEqual(by_ov, self.complete_snapshot)
 
     def test_snapshot_add_count_branches(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit_id = origin_visit1['visit']
 
         self.storage.snapshot_add(origin_id, visit_id, self.complete_snapshot)
 
         snp_id = self.complete_snapshot['id']
         snp_size = self.storage.snapshot_count_branches(snp_id)
 
         expected_snp_size = {
             'alias': 1,
             'content': 1,
             'directory': 1,
             'release': 1,
             'revision': 1,
             'snapshot': 1,
             None: 1
         }
 
         self.assertEqual(snp_size, expected_snp_size)
 
     def test_snapshot_add_get_paginated(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit_id = origin_visit1['visit']
 
         self.storage.snapshot_add(origin_id, visit_id, self.complete_snapshot)
 
         snp_id = self.complete_snapshot['id']
         branches = self.complete_snapshot['branches']
         branch_names = list(sorted(branches))
 
         snapshot = self.storage.snapshot_get_branches(snp_id,
                                                       branches_from=b'release')
 
         rel_idx = branch_names.index(b'release')
         expected_snapshot = {
             'id': snp_id,
             'branches': {
                 name: branches[name]
                 for name in branch_names[rel_idx:]
             },
             'next_branch': None,
         }
 
         self.assertEqual(snapshot, expected_snapshot)
 
         snapshot = self.storage.snapshot_get_branches(snp_id,
                                                       branches_count=1)
 
         expected_snapshot = {
             'id': snp_id,
             'branches': {
                  branch_names[0]: branches[branch_names[0]],
             },
             'next_branch': b'content',
         }
         self.assertEqual(snapshot, expected_snapshot)
 
         snapshot = self.storage.snapshot_get_branches(
             snp_id, branches_from=b'directory', branches_count=3)
 
         dir_idx = branch_names.index(b'directory')
         expected_snapshot = {
             'id': snp_id,
             'branches': {
                 name: branches[name]
                 for name in branch_names[dir_idx:dir_idx + 3]
             },
             'next_branch': branch_names[dir_idx + 3],
         }
 
         self.assertEqual(snapshot, expected_snapshot)
 
     def test_snapshot_add_get_filtered(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit_id = origin_visit1['visit']
 
         self.storage.snapshot_add(origin_id, visit_id, self.complete_snapshot)
 
         snp_id = self.complete_snapshot['id']
         branches = self.complete_snapshot['branches']
 
         snapshot = self.storage.snapshot_get_branches(
             snp_id, target_types=['release', 'revision'])
 
         expected_snapshot = {
             'id': snp_id,
             'branches': {
                 name: tgt
                 for name, tgt in branches.items()
                 if tgt and tgt['target_type'] in ['release', 'revision']
             },
             'next_branch': None,
         }
 
         self.assertEqual(snapshot, expected_snapshot)
 
         snapshot = self.storage.snapshot_get_branches(snp_id,
                                                       target_types=['alias'])
 
         expected_snapshot = {
             'id': snp_id,
             'branches': {
                 name: tgt
                 for name, tgt in branches.items()
                 if tgt and tgt['target_type'] == 'alias'
             },
             'next_branch': None,
         }
 
         self.assertEqual(snapshot, expected_snapshot)
 
     def test_snapshot_add_get(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit_id = origin_visit1['visit']
 
         self.storage.snapshot_add(origin_id, visit_id, self.snapshot)
 
         by_id = self.storage.snapshot_get(self.snapshot['id'])
         self.assertEqual(by_id, self.snapshot)
 
         by_ov = self.storage.snapshot_get_by_origin_visit(origin_id, visit_id)
         self.assertEqual(by_ov, self.snapshot)
 
         origin_visit_info = self.storage.origin_visit_get_by(origin_id,
                                                              visit_id)
         self.assertEqual(origin_visit_info['snapshot'], self.snapshot['id'])
 
     def test_snapshot_add_nonexistent_visit(self):
         origin_id = self.storage.origin_add_one(self.origin)
         visit_id = 54164461156
 
         with self.assertRaises(ValueError):
             self.storage.snapshot_add(origin_id, visit_id, self.snapshot)
 
     def test_snapshot_add_twice(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit1_id = origin_visit1['visit']
         self.storage.snapshot_add(origin_id, visit1_id, self.snapshot)
 
         by_ov1 = self.storage.snapshot_get_by_origin_visit(origin_id,
                                                            visit1_id)
         self.assertEqual(by_ov1, self.snapshot)
 
         origin_visit2 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit2)
         visit2_id = origin_visit2['visit']
 
         self.storage.snapshot_add(origin_id, visit2_id, self.snapshot)
 
         by_ov2 = self.storage.snapshot_get_by_origin_visit(origin_id,
                                                            visit2_id)
         self.assertEqual(by_ov2, self.snapshot)
 
     def test_snapshot_get_nonexistent(self):
         bogus_snapshot_id = b'bogus snapshot id 00'
         bogus_origin_id = 1
         bogus_visit_id = 1
 
         by_id = self.storage.snapshot_get(bogus_snapshot_id)
         self.assertIsNone(by_id)
 
         by_ov = self.storage.snapshot_get_by_origin_visit(bogus_origin_id,
                                                           bogus_visit_id)
         self.assertIsNone(by_ov)
 
     def test_snapshot_get_latest(self):
         origin_id = self.storage.origin_add_one(self.origin)
         origin_visit1 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit1)
         visit1_id = origin_visit1['visit']
         origin_visit2 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit2)
         visit2_id = origin_visit2['visit']
 
         # Add a visit with the same date as the previous one
         origin_visit3 = self.storage.origin_visit_add(origin_id,
                                                       self.date_visit2)
         visit3_id = origin_visit3['visit']
 
         # Two visits, both with no snapshot: latest snapshot is None
         self.assertIsNone(self.storage.snapshot_get_latest(origin_id))
 
         # Add snapshot to visit1, latest snapshot = visit 1 snapshot
         self.storage.snapshot_add(origin_id, visit1_id, self.complete_snapshot)
         self.assertEqual(self.complete_snapshot,
                          self.storage.snapshot_get_latest(origin_id))
 
         # Status filter: both visits are status=ongoing, so no snapshot
         # returned
         self.assertIsNone(
             self.storage.snapshot_get_latest(origin_id,
                                              allowed_statuses=['full'])
         )
 
         # Mark the first visit as completed and check status filter again
         self.storage.origin_visit_update(origin_id, visit1_id, status='full')
         self.assertEqual(
             self.complete_snapshot,
             self.storage.snapshot_get_latest(origin_id,
                                              allowed_statuses=['full']),
         )
 
         # Add snapshot to visit2 and check that the new snapshot is returned
         self.storage.snapshot_add(origin_id, visit2_id, self.empty_snapshot)
         self.assertEqual(self.empty_snapshot,
                          self.storage.snapshot_get_latest(origin_id))
 
         # Check that the status filter is still working
         self.assertEqual(
             self.complete_snapshot,
             self.storage.snapshot_get_latest(origin_id,
                                              allowed_statuses=['full']),
         )
 
         # Add snapshot to visit3 (same date as visit2) and check that
         # the new snapshot is returned
         self.storage.snapshot_add(origin_id, visit3_id, self.complete_snapshot)
         self.assertEqual(self.complete_snapshot,
                          self.storage.snapshot_get_latest(origin_id))
 
     def test_stat_counters(self):
         expected_keys = ['content', 'directory',
                          'origin', 'person', 'revision']
 
         # Initially, all counters are 0
 
         self.storage.refresh_stat_counters()
         counters = self.storage.stat_counters()
         self.assertTrue(set(expected_keys) <= set(counters))
         for key in expected_keys:
             self.assertEqual(counters[key], 0)
 
         # Add a content. Only the content counter should increase.
 
         self.storage.content_add([self.cont])
 
         self.storage.refresh_stat_counters()
         counters = self.storage.stat_counters()
 
         self.assertTrue(set(expected_keys) <= set(counters))
         for key in expected_keys:
             if key != 'content':
                 self.assertEqual(counters[key], 0)
         self.assertEqual(counters['content'], 1)
 
         # Add other objects. Check their counter increased as well.
 
         origin_id = self.storage.origin_add_one(self.origin2)
         origin_visit1 = self.storage.origin_visit_add(
             origin_id,
             date=self.date_visit2)
         self.storage.snapshot_add(origin_id, origin_visit1['visit'],
                                   self.snapshot)
         self.storage.directory_add([self.dir])
         self.storage.revision_add([self.revision])
 
         self.storage.refresh_stat_counters()
         counters = self.storage.stat_counters()
         self.assertEqual(counters['content'], 1)
         self.assertEqual(counters['directory'], 1)
         self.assertEqual(counters['snapshot'], 1)
         self.assertEqual(counters['origin'], 1)
         self.assertEqual(counters['revision'], 1)
+        self.assertEqual(counters['person'], 2)
 
     def test_content_find_with_present_content(self):
         # 1. lookup by sha1
         cont = self.cont
         self.storage.content_add([cont])
 
         actually_present = self.storage.content_find({'sha1': cont['sha1']})
 
         actually_present.pop('ctime')
         self.assertEqual(actually_present, {
             'sha1': cont['sha1'],
             'sha256': cont['sha256'],
             'sha1_git': cont['sha1_git'],
             'blake2s256': cont['blake2s256'],
             'length': cont['length'],
             'status': 'visible'
         })
 
         # 2. lookup by sha1_git
         actually_present = self.storage.content_find(
             {'sha1_git': cont['sha1_git']})
 
         actually_present.pop('ctime')
         self.assertEqual(actually_present, {
             'sha1': cont['sha1'],
             'sha256': cont['sha256'],
             'sha1_git': cont['sha1_git'],
             'blake2s256': cont['blake2s256'],
             'length': cont['length'],
             'status': 'visible'
         })
 
         # 3. lookup by sha256
         actually_present = self.storage.content_find(
             {'sha256': cont['sha256']})
 
         actually_present.pop('ctime')
         self.assertEqual(actually_present, {
             'sha1': cont['sha1'],
             'sha256': cont['sha256'],
             'sha1_git': cont['sha1_git'],
             'blake2s256': cont['blake2s256'],
             'length': cont['length'],
             'status': 'visible'
         })
 
         # 4. lookup by all hashes at once
         actually_present = self.storage.content_find({
             'sha1': cont['sha1'],
             'sha1_git': cont['sha1_git'],
             'sha256': cont['sha256'],
             'blake2s256': cont['blake2s256'],
         })
 
         actually_present.pop('ctime')
         self.assertEqual(actually_present, {
             'sha1': cont['sha1'],
             'sha256': cont['sha256'],
             'sha1_git': cont['sha1_git'],
             'blake2s256': cont['blake2s256'],
             'length': cont['length'],
             'status': 'visible'
         })
 
     def test_content_find_with_non_present_content(self):
         # 1. lookup by sha1 of a missing content
         missing_cont = self.missing_cont

         actually_missing = self.storage.content_find(
             {'sha1': missing_cont['sha1']})

         self.assertIsNone(actually_missing)

         # 2. lookup by sha1_git of a missing content
         actually_missing = self.storage.content_find(
             {'sha1_git': missing_cont['sha1_git']})

         self.assertIsNone(actually_missing)

         # 3. lookup by sha256 of a missing content
         actually_missing = self.storage.content_find(
             {'sha256': missing_cont['sha256']})

         self.assertIsNone(actually_missing)
 
     def test_content_find_bad_input(self):
         # 1. with bad input
         with self.assertRaises(ValueError):
             self.storage.content_find({})  # empty is bad
 
         # 2. with bad input
         with self.assertRaises(ValueError):
             self.storage.content_find(
                 {'unknown-sha1': 'something'})  # not the right key
 
     def test_object_find_by_sha1_git(self):
         sha1_gits = [b'00000000000000000000']
         expected = {
             b'00000000000000000000': [],
         }
 
         self.storage.content_add([self.cont])
         sha1_gits.append(self.cont['sha1_git'])
         expected[self.cont['sha1_git']] = [{
             'sha1_git': self.cont['sha1_git'],
             'type': 'content',
             'id': self.cont['sha1'],
         }]
 
         self.storage.directory_add([self.dir])
         sha1_gits.append(self.dir['id'])
         expected[self.dir['id']] = [{
             'sha1_git': self.dir['id'],
             'type': 'directory',
             'id': self.dir['id'],
         }]
 
         self.storage.revision_add([self.revision])
         sha1_gits.append(self.revision['id'])
         expected[self.revision['id']] = [{
             'sha1_git': self.revision['id'],
             'type': 'revision',
             'id': self.revision['id'],
         }]
 
         self.storage.release_add([self.release])
         sha1_gits.append(self.release['id'])
         expected[self.release['id']] = [{
             'sha1_git': self.release['id'],
             'type': 'release',
             'id': self.release['id'],
         }]
 
         ret = self.storage.object_find_by_sha1_git(sha1_gits)
         for val in ret.values():
             for obj in val:
                 del obj['object_id']
 
         self.assertEqual(expected, ret)
 
     def test_tool_add(self):
         tool = {
             'name': 'some-unknown-tool',
             'version': 'some-version',
             'configuration': {"debian-package": "some-package"},
         }
 
         actual_tool = self.storage.tool_get(tool)
         self.assertIsNone(actual_tool)  # does not exist
 
         # add it
         actual_tools = list(self.storage.tool_add([tool]))
 
         self.assertEqual(len(actual_tools), 1)
         actual_tool = actual_tools[0]
         self.assertIsNotNone(actual_tool)  # now it exists
         new_id = actual_tool.pop('id')
         self.assertEqual(actual_tool, tool)
 
         actual_tools2 = list(self.storage.tool_add([tool]))
         actual_tool2 = actual_tools2[0]
         self.assertIsNotNone(actual_tool2)  # now it exists
         new_id2 = actual_tool2.pop('id')
 
         self.assertEqual(new_id, new_id2)
         self.assertEqual(actual_tool, actual_tool2)
 
     def test_tool_add_multiple(self):
         tool = {
             'name': 'some-unknown-tool',
             'version': 'some-version',
             'configuration': {"debian-package": "some-package"},
         }
 
         actual_tools = list(self.storage.tool_add([tool]))
         self.assertEqual(len(actual_tools), 1)
 
         new_tools = [tool, {
             'name': 'yet-another-tool',
             'version': 'version',
             'configuration': {},
         }]
 
         actual_tools = list(self.storage.tool_add(new_tools))
         self.assertEqual(len(actual_tools), 2)
 
         # order not guaranteed, so we iterate over results to check
         for tool in actual_tools:
             _id = tool.pop('id')
             self.assertIsNotNone(_id)
             self.assertIn(tool, new_tools)
 
     def test_tool_get_missing(self):
         tool = {
             'name': 'unknown-tool',
             'version': '3.1.0rc2-31-ga2cbb8c',
             'configuration': {"command_line": "nomossa <filepath>"},
         }
 
         actual_tool = self.storage.tool_get(tool)
 
         self.assertIsNone(actual_tool)
 
     def test_tool_metadata_get_missing_context(self):
         tool = {
             'name': 'swh-metadata-translator',
             'version': '0.0.1',
             'configuration': {"context": "unknown-context"},
         }
 
         actual_tool = self.storage.tool_get(tool)
 
         self.assertIsNone(actual_tool)
 
     def test_tool_metadata_get(self):
         tool = {
             'name': 'swh-metadata-translator',
             'version': '0.0.1',
             'configuration': {"type": "local", "context": "npm"},
         }
 
         tools = list(self.storage.tool_add([tool]))
         expected_tool = tools[0]
 
         # when
         actual_tool = self.storage.tool_get(tool)
 
         # then
         self.assertEqual(expected_tool, actual_tool)
 
     def test_metadata_provider_get(self):
         # given
         no_provider = self.storage.metadata_provider_get(6459456445615)
         self.assertIsNone(no_provider)
         # when
         provider_id = self.storage.metadata_provider_add(
             self.provider['name'],
             self.provider['type'],
             self.provider['url'],
             self.provider['metadata'])
 
         actual_provider = self.storage.metadata_provider_get(provider_id)
         expected_provider = {
             'provider_name': self.provider['name'],
             'provider_url': self.provider['url']
         }
         # then (compare only the keys we constructed; the backend may
         # return additional columns such as the provider id)
         self.assertEqual(
             expected_provider,
             {k: actual_provider[k] for k in expected_provider})
 
     def test_metadata_provider_get_by(self):
         # given
         no_provider = self.storage.metadata_provider_get_by({
             'provider_name': self.provider['name'],
             'provider_url': self.provider['url']
         })
         self.assertIsNone(no_provider)
         # when
         provider_id = self.storage.metadata_provider_add(
             self.provider['name'],
             self.provider['type'],
             self.provider['url'],
             self.provider['metadata'])
 
         actual_provider = self.storage.metadata_provider_get_by({
             'provider_name': self.provider['name'],
             'provider_url': self.provider['url']
         })
         # then
         self.assertEqual(provider_id, actual_provider['id'])
 
     def test_origin_metadata_add(self):
         # given
         origin_id = self.storage.origin_add([self.origin])[0]['id']
         origin_metadata0 = list(self.storage.origin_metadata_get_by(origin_id))
         self.assertEqual(len(origin_metadata0), 0)
 
         tools = list(self.storage.tool_add([self.metadata_tool]))
         tool = tools[0]
 
         self.storage.metadata_provider_add(
                            self.provider['name'],
                            self.provider['type'],
                            self.provider['url'],
                            self.provider['metadata'])
         provider = self.storage.metadata_provider_get_by({
                             'provider_name': self.provider['name'],
                             'provider_url': self.provider['url']
                       })
 
         # when adding a metadata entry for this origin
         self.storage.origin_metadata_add(
                     origin_id,
                     self.origin_metadata['discovery_date'],
                     provider['id'],
                     tool['id'],
                     self.origin_metadata['metadata'])
         actual_om1 = list(self.storage.origin_metadata_get_by(origin_id))
         # then
         self.assertEqual(len(actual_om1), 1)
         self.assertEqual(actual_om1[0]['origin_id'], origin_id)
 
     def test_origin_metadata_get(self):
         # given
         origin_id = self.storage.origin_add([self.origin])[0]['id']
         origin_id2 = self.storage.origin_add([self.origin2])[0]['id']
 
         self.storage.metadata_provider_add(self.provider['name'],
                                            self.provider['type'],
                                            self.provider['url'],
                                            self.provider['metadata'])
         provider = self.storage.metadata_provider_get_by({
                             'provider_name': self.provider['name'],
                             'provider_url': self.provider['url']
                    })
         tool = list(self.storage.tool_add([self.metadata_tool]))[0]
         # when adding 2 metadata entries for origin_id and 1 for origin_id2
         self.storage.origin_metadata_add(
                     origin_id,
                     self.origin_metadata['discovery_date'],
                     provider['id'],
                     tool['id'],
                     self.origin_metadata['metadata'])
         self.storage.origin_metadata_add(
                     origin_id2,
                     self.origin_metadata2['discovery_date'],
                     provider['id'],
                     tool['id'],
                     self.origin_metadata2['metadata'])
         self.storage.origin_metadata_add(
                     origin_id,
                     self.origin_metadata2['discovery_date'],
                     provider['id'],
                     tool['id'],
                     self.origin_metadata2['metadata'])
         all_metadatas = list(self.storage.origin_metadata_get_by(origin_id))
         metadatas_for_origin2 = list(self.storage.origin_metadata_get_by(
                                           origin_id2))
         expected_results = [{
             'origin_id': origin_id,
             'discovery_date': datetime.datetime(
                                 2017, 1, 1, 23, 0,
                                 tzinfo=datetime.timezone.utc),
             'metadata': {
                 'name': 'test_origin_metadata',
                 'version': '0.0.1'
             },
             'provider_id': provider['id'],
             'provider_name': 'hal',
             'provider_type': 'deposit-client',
             'provider_url': 'http:///hal/inria',
             'tool_id': tool['id']
         }, {
             'origin_id': origin_id,
             'discovery_date': datetime.datetime(
                                 2015, 1, 1, 23, 0,
                                 tzinfo=datetime.timezone.utc),
             'metadata': {
                 'name': 'test_origin_metadata',
                 'version': '0.0.1'
             },
             'provider_id': provider['id'],
             'provider_name': 'hal',
             'provider_type': 'deposit-client',
             'provider_url': 'http:///hal/inria',
             'tool_id': tool['id']
         }]
 
         # then
         self.assertEqual(len(all_metadatas), 2)
         self.assertEqual(len(metadatas_for_origin2), 1)
         self.assertCountEqual(all_metadatas, expected_results)
 
     def test_origin_metadata_get_by_provider_type(self):
         # given
         origin_id = self.storage.origin_add([self.origin])[0]['id']
         origin_id2 = self.storage.origin_add([self.origin2])[0]['id']
         provider1_id = self.storage.metadata_provider_add(
                            self.provider['name'],
                            self.provider['type'],
                            self.provider['url'],
                            self.provider['metadata'])
         provider1 = self.storage.metadata_provider_get_by({
                             'provider_name': self.provider['name'],
                             'provider_url': self.provider['url']
                    })
         self.assertEqual(provider1,
                          self.storage.metadata_provider_get(provider1_id))
 
         provider2_id = self.storage.metadata_provider_add(
                             'swMATH',
                             'registry',
                             'http://www.swmath.org/',
                             {'email': 'contact@swmath.org',
                              'license': 'All rights reserved'})
         provider2 = self.storage.metadata_provider_get_by({
                             'provider_name': 'swMATH',
                             'provider_url': 'http://www.swmath.org/'
                    })
         self.assertEqual(provider2,
                          self.storage.metadata_provider_get(provider2_id))
 
         # use the only tool currently inserted through data.sql; for this
         # provider it should eventually be a crawler tool (not yet
         # implemented)
         tool = list(self.storage.tool_add([self.metadata_tool]))[0]
 
         # when adding one metadata entry per origin, via different providers
         self.storage.origin_metadata_add(
                     origin_id,
                     self.origin_metadata['discovery_date'],
                     provider1['id'],
                     tool['id'],
                     self.origin_metadata['metadata'])
         self.storage.origin_metadata_add(
                     origin_id2,
                     self.origin_metadata2['discovery_date'],
                     provider2['id'],
                     tool['id'],
                     self.origin_metadata2['metadata'])
         provider_type = 'registry'
         m_by_provider = list(self.storage.origin_metadata_get_by(
             origin_id2, provider_type))
         for item in m_by_provider:
             if 'id' in item:
                 del item['id']
         expected_results = [{
             'origin_id': origin_id2,
             'discovery_date': datetime.datetime(
                                 2017, 1, 1, 23, 0,
                                 tzinfo=datetime.timezone.utc),
             'metadata': {
                 'name': 'test_origin_metadata',
                 'version': '0.0.1'
             },
             'provider_id': provider2['id'],
             'provider_name': 'swMATH',
             'provider_type': provider_type,
             'provider_url': 'http://www.swmath.org/',
             'tool_id': tool['id']
         }]

        # then
         self.assertEqual(len(m_by_provider), 1)
         self.assertEqual(m_by_provider, expected_results)
 
 
 class CommonPropTestStorage:
     def assert_contents_ok(self, expected_contents, actual_contents,
                            keys_to_check={'sha1', 'data'}):
         """Assert that a given list of contents matches on a given set of keys.
 
         """
         for k in keys_to_check:
             expected_list = sorted([c[k] for c in expected_contents])
             actual_list = sorted([c[k] for c in actual_contents])
             self.assertEqual(actual_list, expected_list)
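
    # typical use, checking only the hashes (illustrative call):
    #   self.assert_contents_ok(expected, actual, keys_to_check={'sha1'})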
 
     @given(gen_contents(min_size=1, max_size=4))
     def test_generate_content_get(self, contents):
         # add contents to storage
         self.storage.content_add(contents)
 
         # input the list of sha1s we want from storage
         get_sha1s = [c['sha1'] for c in contents]
 
         # retrieve contents
         actual_contents = list(self.storage.content_get(get_sha1s))
 
         self.assert_contents_ok(contents, actual_contents)
 
     @given(gen_contents(min_size=1, max_size=4))
     def test_generate_content_get_metadata(self, contents):
         # add contents to storage
         self.storage.content_add(contents)
 
         # input the list of sha1s we want from storage
         get_sha1s = [c['sha1'] for c in contents]
 
         # retrieve contents
         actual_contents = list(self.storage.content_get_metadata(get_sha1s))
 
         self.assertEqual(len(actual_contents), len(contents))
 
        # check that all contents were retrieved correctly
         one_content = contents[0]
         # content_get_metadata does not return data
         keys_to_check = set(one_content.keys()) - {'data'}
         self.assert_contents_ok(contents, actual_contents,
                                 keys_to_check=keys_to_check)
 
     def test_generate_content_get_range_limit_none(self):
         """content_get_range call with wrong limit input should fail"""
         with self.assertRaises(ValueError) as e:
             self.storage.content_get_range(start=None, end=None, limit=None)
 
         self.assertEqual(e.exception.args, (
             'Development error: limit should not be None',))
 
     @given(gen_contents(min_size=1, max_size=4))
     def test_generate_content_get_range_no_limit(self, contents):
         """content_get_range returns contents within range provided"""
         self.reset_storage_tables()
         # add contents to storage
         self.storage.content_add(contents)
 
         # input the list of sha1s we want from storage
         get_sha1s = sorted([c['sha1'] for c in contents])
         start = get_sha1s[0]
         end = get_sha1s[-1]
 
         # retrieve contents
         actual_result = self.storage.content_get_range(start, end)
 
         actual_contents = actual_result['contents']
         actual_next = actual_result['next']
 
         self.assertEqual(len(contents), len(actual_contents))
         self.assertIsNone(actual_next)
 
         one_content = contents[0]
         keys_to_check = set(one_content.keys()) - {'data'}
         self.assert_contents_ok(contents, actual_contents, keys_to_check)
 
     @given(gen_contents(min_size=4, max_size=4))
     def test_generate_content_get_range_limit(self, contents):
         """content_get_range paginates results if limit exceeded"""
         self.reset_storage_tables()
         contents_map = {c['sha1']: c for c in contents}
 
         # add contents to storage
         self.storage.content_add(contents)
 
         # input the list of sha1s we want from storage
         get_sha1s = sorted([c['sha1'] for c in contents])
         start = get_sha1s[0]
         end = get_sha1s[-1]
 
        # retrieve contents, limited to one result fewer than stored
         limited_results = len(contents) - 1
         actual_result = self.storage.content_get_range(start, end,
                                                        limit=limited_results)
 
         actual_contents = actual_result['contents']
         actual_next = actual_result['next']
 
         self.assertEqual(limited_results, len(actual_contents))
         self.assertIsNotNone(actual_next)
         self.assertEqual(actual_next, get_sha1s[-1])
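        # 'next' points at the first sha1 that was not returned in this page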
 
         expected_contents = [contents_map[sha1] for sha1 in get_sha1s[:-1]]
         keys_to_check = set(contents[0].keys()) - {'data'}
         self.assert_contents_ok(expected_contents, actual_contents,
                                 keys_to_check)
 
         # retrieve next part
         actual_results2 = self.storage.content_get_range(start=end, end=end)
         actual_contents2 = actual_results2['contents']
         actual_next2 = actual_results2['next']
 
         self.assertEqual(1, len(actual_contents2))
         self.assertIsNone(actual_next2)
 
         self.assert_contents_ok([contents_map[actual_next]], actual_contents2,
                                 keys_to_check)
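
    # A minimal paging-loop sketch over content_get_range, assuming the
    # {'contents': [...], 'next': sha1-or-None} shape asserted above
    # (an illustrative helper, not part of the test suite):
    def _iterate_content_range(self, start, end, limit=1000):
        while start is not None:
            result = self.storage.content_get_range(start, end, limit=limit)
            yield from result['contents']
            # 'next' is None once the whole range has been returned
            start = result['next']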
 
 
 @pytest.mark.db
 class TestLocalStorage(CommonTestStorage, StorageTestDbFixture,
                        unittest.TestCase):
     """Test the local storage"""
 
     # Can only be tested with local storage as you can't mock
     # datetimes for the remote server
     def test_fetch_history(self):
         origin = self.storage.origin_add_one(self.origin)
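        # patch the global datetime class so that fetch_history_start
        # records a deterministic, known timestamp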
         with patch('datetime.datetime'):
             datetime.datetime.now.return_value = self.fetch_history_date
             fetch_history_id = self.storage.fetch_history_start(origin)
             datetime.datetime.now.assert_called_with(tz=datetime.timezone.utc)
 
         with patch('datetime.datetime'):
             datetime.datetime.now.return_value = self.fetch_history_end
             self.storage.fetch_history_end(fetch_history_id,
                                            self.fetch_history_data)
 
         fetch_history = self.storage.fetch_history_get(fetch_history_id)
         expected_fetch_history = self.fetch_history_data.copy()
 
         expected_fetch_history['id'] = fetch_history_id
         expected_fetch_history['origin'] = origin
         expected_fetch_history['date'] = self.fetch_history_date
         expected_fetch_history['duration'] = self.fetch_history_duration
 
         self.assertEqual(expected_fetch_history, fetch_history)
 
-    # The remote API doesn't expose _person_add
-    def test_person_get(self):
-        # given
-        person0 = {
-            'fullname': b'bob <alice@bob>',
-            'name': b'bob',
-            'email': b'alice@bob',
-        }
-        id0 = self.storage._person_add(person0)
-
-        person1 = {
-            'fullname': b'tony <tony@bob>',
-            'name': b'tony',
-            'email': b'tony@bob',
-        }
-        id1 = self.storage._person_add(person1)
-
-        # when
-        actual_persons = self.storage.person_get([id0, id1])
-
-        # given (person injection through release for example)
-        self.assertEqual(
-            list(actual_persons), [
-                {
-                    'id': id0,
-                    'fullname': person0['fullname'],
-                    'name': person0['name'],
-                    'email': person0['email'],
-                },
-                {
-                    'id': id1,
-                    'fullname': person1['fullname'],
-                    'name': person1['name'],
-                    'email': person1['email'],
-                },
-            ])
-
     # This test is only relevant on the local storage, with an actual
     # objstorage raising an exception
     def test_content_add_objstorage_exception(self):
         self.storage.objstorage.add = Mock(
             side_effect=Exception('mocked broken objstorage')
         )
 
         with self.assertRaises(Exception) as e:
             self.storage.content_add([self.cont])
 
         self.assertEqual(e.exception.args, ('mocked broken objstorage',))
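        # the failed add must not leave the content registered in the db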
         missing = list(self.storage.content_missing([self.cont]))
         self.assertEqual(missing, [self.cont['sha1']])
 
 
 @pytest.mark.db
 @pytest.mark.property_based
 class PropTestLocalStorage(CommonPropTestStorage, StorageTestDbFixture,
                            unittest.TestCase):
     pass
 
 
 class AlteringSchemaTest(TestStorageData, StorageTestDbFixture,
                          unittest.TestCase):
     """This class is dedicated for the rare case where the schema needs to
        be altered dynamically.
 
        Otherwise, the tests could be blocking when ran altogether.
 
     """
     def test_content_update(self):
         cont = copy.deepcopy(self.cont)
 
         self.storage.content_add([cont])
         # alter the sha1_git for example
         cont['sha1_git'] = hash_to_bytes(
             '3a60a5275d0333bf13468e8b3dcab90f4046e654')
 
         self.storage.content_update([cont], keys=['sha1_git'])
 
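        # read the row back directly to check that the update was applied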
         with self.storage.get_db().transaction() as cur:
             cur.execute('SELECT sha1, sha1_git, sha256, length, status'
                         ' FROM content WHERE sha1 = %s',
                         (cont['sha1'],))
             datum = cur.fetchone()
 
         self.assertEqual(
             (datum[0].tobytes(), datum[1].tobytes(), datum[2].tobytes(),
              datum[3], datum[4]),
             (cont['sha1'], cont['sha1_git'], cont['sha256'],
              cont['length'], 'visible'))
 
     def test_content_update_with_new_cols(self):
         with self.storage.get_db().transaction() as cur:
             cur.execute("""alter table content
                            add column test text default null,
                            add column test2 text default null""")
 
         cont = copy.deepcopy(self.cont2)
         self.storage.content_add([cont])
         cont['test'] = 'value-1'
         cont['test2'] = 'value-2'
 
         self.storage.content_update([cont], keys=['test', 'test2'])
         with self.storage.get_db().transaction() as cur:
             cur.execute(
                 'SELECT sha1, sha1_git, sha256, length, status, test, test2'
                 ' FROM content WHERE sha1 = %s',
                 (cont['sha1'],))
 
             datum = cur.fetchone()
 
         self.assertEqual(
             (datum[0].tobytes(), datum[1].tobytes(), datum[2].tobytes(),
              datum[3], datum[4], datum[5], datum[6]),
             (cont['sha1'], cont['sha1_git'], cont['sha256'],
              cont['length'], 'visible', cont['test'], cont['test2']))
 
         with self.storage.get_db().transaction() as cur:
             cur.execute("""alter table content drop column test,
                            drop column test2""")
diff --git a/version.txt b/version.txt
index e26fde55..2c88fec2 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-v0.0.117-0-gfc7c534
\ No newline at end of file
+v0.0.118-0-gc5cee88
\ No newline at end of file