diff --git a/README.md b/README.md
index 14d37bad..b2a61701 100644
--- a/README.md
+++ b/README.md
@@ -1,90 +1,90 @@
 # swh-web
 
 This repository holds the development of Software Heritage web applications:
 
 * swh-web API (https://archive.softwareheritage.org/api): allows querying the content of the archive through HTTP requests, with responses in JSON or YAML.
 
 * swh-web browse (https://archive.softwareheritage.org/browse): a graphical interface that eases navigation in the archive.
 
 Documentation on how to use these components, as well as the details of their URI schemes,
 can be found in the docs folder. The generated HTML documentation can be read and browsed
 at https://docs.softwareheritage.org/devel/swh-web/index.html.
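For instance, a content lookup against the swh-web API boils down to building a URL of the form `/api/1/content/<algo>:<hash>/` and issuing a GET request. A minimal sketch (the endpoint layout follows the public API documentation; double-check the exact route before relying on it):

```python
# Sketch of a swh-web API content lookup; no network call is made here.
API_ROOT = 'https://archive.softwareheritage.org/api/1'


def content_lookup_url(hash_value, algo='sha1'):
    """Build the URL for a content lookup by hash."""
    return '%s/content/%s:%s/' % (API_ROOT, algo, hash_value)


# An actual query would then be, e.g.:
#   import requests
#   metadata = requests.get(content_lookup_url('dc2830a9e7...')).json()
```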
 
 ## Technical details
 
 These applications are powered by:
 
   * [Django Web Framework](https://www.djangoproject.com/) on the backend side with the following extensions enabled:
 
     * [django-rest-framework](http://www.django-rest-framework.org/)
     * [django-webpack-loader](https://github.com/owais/django-webpack-loader)
     * [django-js-reverse](http://django-js-reverse.readthedocs.io/en/latest/)
 
   * [webpack](https://webpack.js.org/) on the frontend side for better static assets management, including:
 
     * assets dependencies management and retrieval through [yarn](https://yarnpkg.com/en/)
     * linting of custom JavaScript code (through [eslint](https://eslint.org/)) and stylesheets (through [stylelint](https://stylelint.io/))
     * use of [es6](http://es6-features.org) syntax and advanced JavaScript features like [async/await](https://javascript.info/async-await) or [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) thanks to [babel](https://babeljs.io/) (es6 to es5 transpiler and polyfills provider)
     * assets minification (using [UglifyJS](https://github.com/mishoo/UglifyJS2) and [cssnano](http://cssnano.co/)) as well as dead code elimination for production use
 
 ## How to build and run
 
 ### Requirements
 
 First you will need [Python 3](https://www.python.org) and a complete [swh development environment](https://forge.softwareheritage.org/source/swh-environment/) installed.
 
 To run the backend, you need to have the following Python 3 modules installed:
 * beautifulsoup4
 * django >= 1.10.7
 * djangorestframework >= 3.4.0
 * django_webpack_loader
 * django_js_reverse
 * docutils
 * file_magic >= 0.3.0
 * htmlmin
 * lxml
 * pygments
 * pypandoc
 * python-dateutil
 * pyyaml
 * requests
 
 To compile the frontend assets, you need to have [nodejs](https://nodejs.org/en/) >= 8.x and [yarn](https://yarnpkg.com/en/) installed. If you are on Debian stretch, you can easily install an up-to-date nodejs from the stretch-backports repository, while packages for yarn can be installed by following [these instructions](https://yarnpkg.com/en/docs/install#debian-stable).
 
 Alternatively, you can install yarn with `npm install yarn`, and add `YARN=node_modules/yarn/bin/yarn` as argument whenever you run `make`.
 
 Please note that the static assets bundles generated by webpack are not stored in the git repository. Follow the instructions below to generate them so that the frontend part of the web applications can run.
 
 ### Make targets
 
 Below is the list of available make targets that can be executed from the root directory of swh-web to build and/or run the web applications under various configurations:
 
 * **run-django-webpack-devserver**: Compile and serve non-optimized (without minification and dead code elimination) frontend static assets using [webpack-dev-server](https://github.com/webpack/webpack-dev-server) and run the django server with development settings. This is the recommended target to use when developing swh-web as it enables automatic reloading of the backend and frontend parts of the applications when modifying source files (*.py, *.js, *.css, *.html).
 
 * **run-django-webpack-dev**: Compile non-optimized (no minification, no dead code elimination) frontend static assets using webpack and run the django server with development settings. This is the recommended target when one only wants to develop the backend side of the application.
 
 * **run-django-webpack-prod**: Compile optimized (with minification and dead code elimination) frontend static assets using webpack and run the django server with production settings. This is useful to test the applications in production mode (with the difference that static assets are served by django). Production settings notably enable advanced django caching, and you will need to have [memcached](https://memcached.org/) installed for that feature to work.
 
 * **run-django-server-dev**: Run the django server with development settings but without compiling frontend static assets through webpack.
 
 * **run-django-server-prod**: Run the django server with production settings but without compiling frontend static assets through webpack.
 
 * **run-gunicorn-server**: Run the web applications with production settings in a [gunicorn](http://gunicorn.org/) worker, as they will run in a real production environment.
 
 Once one of these targets has been executed, the web applications can be accessed by pointing your browser to http://localhost:5004.
 
 ### Yarn targets
 
 Below is a list of available yarn targets that only compile the frontend static assets (no web server will be run):
 
 * **build-dev**: compile non-optimized (without minification and dead code elimination) frontend static assets and store the results in the `swh/web/static` folder.
 
 * **build**: compile optimized (with minification and dead code elimination) frontend static assets and store the results in the `swh/web/static` folder.
 
 **The build target must be executed prior to performing the Debian packaging of swh-web** in order for the package to contain the optimized assets dedicated to the production environment.
 
-To execute these targets, issue the following commmand:
+To execute these targets, issue the following command:
 
 ```
 $ yarn <target_name>
 ```
diff --git a/swh/web/browse/utils.py b/swh/web/browse/utils.py
index 83844544..4c99ce75 100644
--- a/swh/web/browse/utils.py
+++ b/swh/web/browse/utils.py
@@ -1,1120 +1,1120 @@
 # Copyright (C) 2017-2019  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU Affero General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import base64
 from collections import defaultdict
 import magic
 import pypandoc
 import stat
 import textwrap
 
 from django.core.cache import cache
 from django.utils.safestring import mark_safe
 from django.utils.html import escape
 
 from importlib import reload
 
 from swh.model.identifiers import persistent_identifier
 from swh.web.common import highlightjs, service
 from swh.web.common.exc import NotFoundExc, http_status_code_message
 from swh.web.common.origin_visits import get_origin_visit
 from swh.web.common.utils import (
     reverse, format_utc_iso_date, get_swh_persistent_id,
     swh_object_icons
 )
 from swh.web.config import get_config
 
 
 def get_directory_entries(sha1_git):
     """Function that retrieves the content of a directory
     from the archive.
 
     The directory entries are first sorted in lexicographical order.
     Sub-directories and regular files are then extracted.
 
     Args:
         sha1_git: sha1_git identifier of the directory
 
     Returns:
         A tuple whose first member corresponds to the sub-directories list
         and second member the regular files list
 
     Raises:
         NotFoundExc if the directory is not found
     """
     cache_entry_id = 'directory_entries_%s' % sha1_git
     cache_entry = cache.get(cache_entry_id)
 
     if cache_entry:
         return cache_entry
 
     entries = list(service.lookup_directory(sha1_git))
     for e in entries:
         e['perms'] = stat.filemode(e['perms'])
         if e['type'] == 'rev':
             # modify dir entry name to explicitly show it points
             # to a revision
             e['name'] = '%s @ %s' % (e['name'], e['target'][:7])
 
     dirs = [e for e in entries if e['type'] in ('dir', 'rev')]
     files = [e for e in entries if e['type'] == 'file']
 
     dirs = sorted(dirs, key=lambda d: d['name'])
     files = sorted(files, key=lambda f: f['name'])
 
     cache.set(cache_entry_id, (dirs, files))
 
     return dirs, files
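The cache-on-miss pattern above (look up the Django cache, compute on a miss, store the result) can be sketched independently, with a plain dict standing in for the Django cache backend and a caller-supplied lookup function standing in for `service.lookup_directory`:

```python
# Sketch of the cache-on-miss pattern used by get_directory_entries;
# a plain dict stands in for Django's cache backend.
_cache = {}


def cached_directory_entries(sha1_git, lookup):
    cache_key = 'directory_entries_%s' % sha1_git
    if cache_key in _cache:
        return _cache[cache_key]
    entries = lookup(sha1_git)
    # same partitioning as the real function: dirs (and rev pointers) vs files
    dirs = sorted((e for e in entries if e['type'] in ('dir', 'rev')),
                  key=lambda e: e['name'])
    files = sorted((e for e in entries if e['type'] == 'file'),
                   key=lambda e: e['name'])
    _cache[cache_key] = (dirs, files)
    return dirs, files
```

A second call with the same identifier returns the cached tuple without invoking the lookup again.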
 
 
 def get_mimetype_and_encoding_for_content(content):
     """Function that returns the mime type and the encoding associated to
     a content buffer using the magic module under the hood.
 
     Args:
         content (bytes): a content buffer
 
     Returns:
         A tuple (mimetype, encoding), for instance ('text/plain', 'us-ascii'),
         associated to the provided content.
 
     """
     while True:
         try:
             magic_result = magic.detect_from_content(content)
             mime_type = magic_result.mime_type
             encoding = magic_result.encoding
             break
         except Exception:
             # workaround an issue with the magic module, which can fail
             # if detect_from_content is called multiple times in
             # a short amount of time
             reload(magic)
 
     return mime_type, encoding
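The retry-with-recovery loop above can be expressed as a small helper; in this sketch a caller-supplied `recover` callback stands in for the `reload(magic)` workaround, and a bounded number of attempts replaces the unbounded `while True`:

```python
def detect_with_retry(detector, content, recover, max_tries=5):
    """Call detector(content), invoking recover() and retrying on
    failure, mirroring the reload-and-retry workaround above."""
    last_error = None
    for _ in range(max_tries):
        try:
            return detector(content)
        except Exception as e:
            last_error = e
            recover()
    raise last_error
```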
 
 
 # maximum authorized content size in bytes for HTML display
 # with code highlighting
 content_display_max_size = get_config()['content_display_max_size']
 
 snapshot_content_max_size = get_config()['snapshot_content_max_size']
 
 
-def _reencode_content(mimetype, encoding, content_data):
+def _re_encode_content(mimetype, encoding, content_data):
     # encode textual content to utf-8 if needed
     if mimetype.startswith('text/'):
         # probably a malformed UTF-8 content, re-encode it
         # by replacing invalid chars with a substitution one
         if encoding == 'unknown-8bit':
             content_data = content_data.decode('utf-8', 'replace')\
                                        .encode('utf-8')
         elif encoding not in ['utf-8', 'binary']:
             content_data = content_data.decode(encoding, 'replace')\
                                        .encode('utf-8')
     elif mimetype.startswith('application/octet-stream'):
         # file may detect a text content as binary
         # so try to decode it for display
         encodings = ['us-ascii']
         encodings += ['iso-8859-%s' % i for i in range(1, 17)]
         for encoding in encodings:
             try:
                 content_data = content_data.decode(encoding)\
                                            .encode('utf-8')
             except Exception:
                 pass
             else:
                 # ensure display in content view
                 mimetype = 'text/plain'
                 break
     return mimetype, content_data
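For textual contents, the decode-with-replacement step above amounts to the following standalone behaviour (invalid byte sequences are substituted rather than raising):

```python
def to_utf8(content_data, encoding):
    """Re-encode textual bytes to utf-8, replacing undecodable
    characters, as _re_encode_content does for text/* contents."""
    if encoding == 'unknown-8bit':
        # probably malformed utf-8: substitute invalid sequences
        return content_data.decode('utf-8', 'replace').encode('utf-8')
    if encoding not in ('utf-8', 'binary'):
        return content_data.decode(encoding, 'replace').encode('utf-8')
    return content_data
```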
 
 
 def request_content(query_string, max_size=content_display_max_size,
-                    raise_if_unavailable=True, reencode=True):
+                    raise_if_unavailable=True, re_encode=True):
     """Function that retrieves a content from the archive.
 
     Raw bytes content is first retrieved, then the content mime type.
     If the mime type is not stored in the archive, it will be computed
     using Python magic module.
 
     Args:
         query_string: a string of the form "[ALGO_HASH:]HASH" where
             optional ALGO_HASH can be either ``sha1``, ``sha1_git``,
             ``sha256``, or ``blake2s256`` (default to ``sha1``) and HASH
             the hexadecimal representation of the hash value
         max_size: the maximum size for a content to retrieve (defaults to
             1MB, no size limit if None)
         raise_if_unavailable: whether to raise an exception if the content
             bytes cannot be retrieved from storage (defaults to True)
         re_encode: whether to re-encode textual content to utf-8 for
             display purposes (defaults to True)
 
     Returns:
         A tuple whose first member corresponds to the content raw bytes
         and second member the content mime type
 
     Raises:
         NotFoundExc if the content is not found
     """
     content_data = service.lookup_content(query_string)
     filetype = None
     language = None
     license = None
     # requests to the indexer db may fail so properly handle
     # those cases in order to avoid content display errors
     try:
         filetype = service.lookup_content_filetype(query_string)
         language = service.lookup_content_language(query_string)
         license = service.lookup_content_license(query_string)
     except Exception:
         pass
     mimetype = 'unknown'
     encoding = 'unknown'
     if filetype:
         mimetype = filetype['mimetype']
         encoding = filetype['encoding']
         # workaround when encountering corrupted data due to implicit
         # conversion from bytea to text in the indexer db (see T818)
         # TODO: Remove that code when all data have been correctly converted
         if mimetype.startswith('\\'):
             filetype = None
 
     content_data['error_code'] = 200
     content_data['error_message'] = ''
     content_data['error_description'] = ''
 
     if not max_size or content_data['length'] < max_size:
         try:
             content_raw = service.lookup_content_raw(query_string)
         except Exception as e:
             if raise_if_unavailable:
                 raise e
             else:
                 content_data['raw_data'] = None
                 content_data['error_code'] = 404
                 content_data['error_description'] = \
                     'The bytes of the content are currently not available in the archive.' # noqa
                 content_data['error_message'] = \
                     http_status_code_message[content_data['error_code']]
         else:
             content_data['raw_data'] = content_raw['data']
 
             if not filetype:
                 mimetype, encoding = \
                     get_mimetype_and_encoding_for_content(content_data['raw_data']) # noqa
 
-            if reencode:
-                mimetype, raw_data = _reencode_content(
+            if re_encode:
+                mimetype, raw_data = _re_encode_content(
                     mimetype, encoding, content_data['raw_data'])
                 content_data['raw_data'] = raw_data
 
     else:
         content_data['raw_data'] = None
 
     content_data['mimetype'] = mimetype
     content_data['encoding'] = encoding
 
     if language:
         content_data['language'] = language['lang']
     else:
         content_data['language'] = 'not detected'
     if license:
         content_data['licenses'] = ', '.join(license['facts'][0]['licenses'])
     else:
         content_data['licenses'] = 'not detected'
 
     return content_data
 
 
 _browsers_supported_image_mimes = set(['image/gif', 'image/png',
                                        'image/jpeg', 'image/bmp',
                                        'image/webp', 'image/svg',
                                        'image/svg+xml'])
 
 
 def prepare_content_for_display(content_data, mime_type, path):
     """Function that prepares a content for HTML display.
 
     The function tries to associate a programming language to a
     content in order to perform syntax highlighting client-side
     using highlightjs. The language is determined using either
     the content filename or its mime type.
     If the mime type corresponds to an image format supported
     by web browsers, the content will be encoded in base64
     for displaying the image.
 
     Args:
         content_data (bytes): raw bytes of the content
         mime_type (string): mime type of the content
         path (string): path of the content including filename
 
     Returns:
         A dict containing the content bytes (possibly different from the one
         provided as parameter if it is an image) under the key 'content_data',
         and the corresponding highlightjs language class under the
         key 'language'.
     """
 
     language = highlightjs.get_hljs_language_from_filename(path)
 
     if not language:
         language = highlightjs.get_hljs_language_from_mime_type(mime_type)
 
     if not language:
         language = 'nohighlight'
     elif mime_type.startswith('application/'):
         mime_type = mime_type.replace('application/', 'text/')
 
     if mime_type.startswith('image/'):
         if mime_type in _browsers_supported_image_mimes:
             content_data = base64.b64encode(content_data)
         else:
             content_data = None
 
     if mime_type.startswith('image/svg'):
         mime_type = 'image/svg+xml'
 
     return {'content_data': content_data,
             'language': language,
             'mimetype': mime_type}
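The image branch above reduces to a small standalone helper: contents whose mime type browsers can render inline get base64-encoded, everything else is dropped:

```python
import base64

# Same set of browser-displayable image mime types as above.
_supported_image_mimes = {'image/gif', 'image/png', 'image/jpeg',
                          'image/bmp', 'image/webp', 'image/svg',
                          'image/svg+xml'}


def encode_image_for_display(content_data, mime_type):
    """Return base64-encoded bytes for browser-displayable images,
    None otherwise, mirroring prepare_content_for_display."""
    if mime_type not in _supported_image_mimes:
        return None
    return base64.b64encode(content_data)
```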
 
 
 def process_snapshot_branches(snapshot):
     """
     Process a dictionary describing snapshot branches: extract those
     targeting revisions and releases, put them in two different lists,
     then sort those lists in lexicographical order of the branches' names.
 
     Args:
         snapshot (dict): A dict describing a snapshot and its branches,
             as returned for instance by
             :func:`swh.web.common.service.lookup_snapshot`
 
     Returns:
         tuple: A tuple whose first member is the sorted list of branches
             targeting revisions and second member the sorted list of branches
             targeting releases
     """
     snapshot_branches = snapshot['branches']
     branches = {}
     branch_aliases = {}
     releases = {}
     revision_to_branch = defaultdict(set)
     revision_to_release = defaultdict(set)
     release_to_branch = defaultdict(set)
     for branch_name, target in snapshot_branches.items():
         if not target:
             # FIXME: display branches with an unknown target anyway
             continue
         target_id = target['target']
         target_type = target['target_type']
         if target_type == 'revision':
             branches[branch_name] = {
                 'name': branch_name,
                 'revision': target_id,
             }
             revision_to_branch[target_id].add(branch_name)
         elif target_type == 'release':
             release_to_branch[target_id].add(branch_name)
         elif target_type == 'alias':
             branch_aliases[branch_name] = target_id
         # FIXME: handle pointers to other object types
 
     def _enrich_release_branch(branch, release):
         releases[branch] = {
             'name': release['name'],
             'branch_name': branch,
             'date': format_utc_iso_date(release['date']),
             'id': release['id'],
             'message': release['message'],
             'target_type': release['target_type'],
             'target': release['target'],
         }
 
     def _enrich_revision_branch(branch, revision):
         branches[branch].update({
             'revision': revision['id'],
             'directory': revision['directory'],
             'date': format_utc_iso_date(revision['date']),
             'message': revision['message']
         })
 
     releases_info = service.lookup_release_multiple(
         release_to_branch.keys()
     )
     for release in releases_info:
         branches_to_update = release_to_branch[release['id']]
         for branch in branches_to_update:
             _enrich_release_branch(branch, release)
         if release['target_type'] == 'revision':
             revision_to_release[release['target']].update(
                 branches_to_update
             )
 
     revisions = service.lookup_revision_multiple(
         set(revision_to_branch.keys()) | set(revision_to_release.keys())
     )
 
     for revision in revisions:
         if not revision:
             continue
         for branch in revision_to_branch[revision['id']]:
             _enrich_revision_branch(branch, revision)
         for release in revision_to_release[revision['id']]:
             releases[release]['directory'] = revision['directory']
 
     for branch_alias, branch_target in branch_aliases.items():
         if branch_target in branches:
             branches[branch_alias] = dict(branches[branch_target])
         else:
             snp = service.lookup_snapshot(snapshot['id'],
                                           branches_from=branch_target,
                                           branches_count=1)
             if snp and branch_target in snp['branches']:
 
                 if snp['branches'][branch_target] is None:
                     continue
 
                 target_type = snp['branches'][branch_target]['target_type']
                 target = snp['branches'][branch_target]['target']
                 if target_type == 'revision':
                     branches[branch_alias] = snp['branches'][branch_target]
                     revision = service.lookup_revision(target)
                     _enrich_revision_branch(branch_alias, revision)
                 elif target_type == 'release':
                     release = service.lookup_release(target)
                     _enrich_release_branch(branch_alias, release)
 
         if branch_alias in branches:
             branches[branch_alias]['name'] = branch_alias
 
     ret_branches = list(sorted(branches.values(), key=lambda b: b['name']))
     ret_releases = list(sorted(releases.values(), key=lambda b: b['name']))
 
     return ret_branches, ret_releases
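The first pass of the function above, which dispatches each branch on its target type, can be sketched in isolation (alias resolution and the enrichment lookups are left out):

```python
from collections import defaultdict


def partition_branches(snapshot_branches):
    """Split snapshot branches by target type, as the first pass of
    process_snapshot_branches does (aliases resolved separately)."""
    revisions, releases, aliases = {}, defaultdict(set), {}
    for name, target in snapshot_branches.items():
        if not target:
            continue  # branches with unknown targets are skipped
        if target['target_type'] == 'revision':
            revisions[name] = target['target']
        elif target['target_type'] == 'release':
            releases[target['target']].add(name)
        elif target['target_type'] == 'alias':
            aliases[name] = target['target']
    return revisions, releases, aliases
```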
 
 
 def get_snapshot_content(snapshot_id):
     """Returns the lists of branches and releases
     associated to a swh snapshot.
     That list is put in cache in order to speed up the navigation
     in the swh-web/browse ui.
 
     .. warning:: At most 1000 branches contained in the snapshot
         will be returned for performance reasons.
 
     Args:
         snapshot_id (str): hexadecimal representation of the snapshot
             identifier
 
     Returns:
         A tuple with two members. The first one is a list of dict describing
         the snapshot branches. The second one is a list of dict describing the
         snapshot releases.
 
     Raises:
         NotFoundExc if the snapshot does not exist
     """
     cache_entry_id = 'swh_snapshot_%s' % snapshot_id
     cache_entry = cache.get(cache_entry_id)
 
     if cache_entry:
         return cache_entry['branches'], cache_entry['releases']
 
     branches = []
     releases = []
 
     if snapshot_id:
         snapshot = service.lookup_snapshot(
             snapshot_id, branches_count=snapshot_content_max_size)
         branches, releases = process_snapshot_branches(snapshot)
 
     cache.set(cache_entry_id, {
         'branches': branches,
         'releases': releases,
     })
 
     return branches, releases
 
 
 def get_origin_visit_snapshot(origin_info, visit_ts=None, visit_id=None,
                               snapshot_id=None):
     """Returns the lists of branches and releases
     associated to a swh origin for a given visit.
     The visit is expressed by a timestamp; the closest visit to the
     provided timestamp will be used.
     If no visit parameter is provided, it returns the list of branches
     found for the latest visit.
     That list is put in cache in order to speed up the navigation
     in the swh-web/browse ui.
 
     .. warning:: At most 1000 branches contained in the snapshot
         will be returned for performance reasons.
 
     Args:
         origin_info (dict): a dict filled with origin information
             (id, url, type)
         visit_ts (int or str): an ISO date string or Unix timestamp to parse
         visit_id (int): optional visit id for disambiguation in case
             several visits have the same timestamp
 
     Returns:
         A tuple with two members. The first one is a list of dict describing
         the origin branches for the given visit.
         The second one is a list of dict describing the origin releases
         for the given visit.
 
     Raises:
         NotFoundExc if the origin or its visit are not found
     """
 
     visit_info = get_origin_visit(origin_info, visit_ts, visit_id, snapshot_id)
 
     return get_snapshot_content(visit_info['snapshot'])
 
 
 def gen_link(url, link_text=None, link_attrs=None):
     """
     Utility function for generating an HTML link to insert
     in Django templates.
 
     Args:
         url (str): a URL
         link_text (str): optional text for the produced link,
             if not provided the url will be used
         link_attrs (dict): optional attributes (e.g. class)
             to add to the link
 
     Returns:
         An HTML link in the form '<a href="url">link_text</a>'
 
     """
     attrs = ' '
     if link_attrs:
         for k, v in link_attrs.items():
             attrs += '%s="%s" ' % (k, v)
     if not link_text:
         link_text = url
     link = '<a%shref="%s">%s</a>' \
         % (attrs, escape(url), escape(link_text))
     return mark_safe(link)
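Stripped of the Django specifics, the link construction above behaves as follows (`html.escape` stands in for `django.utils.html.escape`, and the `mark_safe` wrapper is omitted in this sketch):

```python
from html import escape  # stand-in for django.utils.html.escape


def build_link(url, link_text=None, link_attrs=None):
    """Plain-Python sketch of gen_link: escape url and text, and
    render optional attributes into the anchor tag."""
    attrs = ' '
    if link_attrs:
        for k, v in link_attrs.items():
            attrs += '%s="%s" ' % (k, v)
    if not link_text:
        link_text = url
    return '<a%shref="%s">%s</a>' % (attrs, escape(url), escape(link_text))
```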
 
 
 def _snapshot_context_query_params(snapshot_context):
     query_params = None
     if snapshot_context and snapshot_context['origin_info']:
         origin_info = snapshot_context['origin_info']
         query_params = {'origin': origin_info['url']}
         if 'timestamp' in snapshot_context['url_args']:
             query_params['timestamp'] = \
                  snapshot_context['url_args']['timestamp']
         if 'visit_id' in snapshot_context['query_params']:
             query_params['visit_id'] = \
                 snapshot_context['query_params']['visit_id']
     elif snapshot_context:
         query_params = {'snapshot_id': snapshot_context['snapshot_id']}
     return query_params
 
 
 def gen_person_link(person_id, person_name, snapshot_context=None,
                     link_attrs=None):
     """
     Utility function for generating a link to a person HTML view
     to insert in Django templates.
 
     Args:
         person_id (int): a person id
         person_name (str): the associated person name
         link_attrs (dict): optional attributes (e.g. class)
             to add to the link
 
     Returns:
         An HTML link in the form '<a href="person_view_url">person_name</a>'
 
     """
     query_params = _snapshot_context_query_params(snapshot_context)
     person_url = reverse('browse-person', url_args={'person_id': person_id},
                          query_params=query_params)
     return gen_link(person_url, person_name or 'None', link_attrs)
 
 
 def gen_revision_url(revision_id, snapshot_context=None):
     """
     Utility function for generating a URL to a revision.
 
     Args:
         revision_id (str): a revision id
         snapshot_context (dict): if provided, generate snapshot-dependent
             browsing url
 
     Returns:
         str: The url to browse the revision
 
     """
     query_params = _snapshot_context_query_params(snapshot_context)
 
     return reverse('browse-revision',
                    url_args={'sha1_git': revision_id},
                    query_params=query_params)
 
 
 def gen_revision_link(revision_id, shorten_id=False, snapshot_context=None,
                       link_text='Browse',
                       link_attrs={'class': 'btn btn-default btn-sm',
                                   'role': 'button'}):
     """
     Utility function for generating a link to a revision HTML view
     to insert in Django templates.
 
     Args:
         revision_id (str): a revision id
         shorten_id (boolean): whether to shorten the revision id to 7
             characters for the link text
         snapshot_context (dict): if provided, generate snapshot-dependent
             browsing link
         link_text (str): optional text for the generated link
             (the revision id will be used by default)
         link_attrs (dict): optional attributes (e.g. class)
             to add to the link
 
     Returns:
         str: An HTML link in the form '<a href="revision_url">revision_id</a>'
 
     """
     if not revision_id:
         return None
 
     revision_url = gen_revision_url(revision_id, snapshot_context)
 
     if shorten_id:
         return gen_link(revision_url, revision_id[:7], link_attrs)
     else:
         if not link_text:
             link_text = revision_id
         return gen_link(revision_url, link_text, link_attrs)
 
 
 def gen_directory_link(sha1_git, snapshot_context=None, link_text='Browse',
                        link_attrs={'class': 'btn btn-default btn-sm',
                                    'role': 'button'}):
     """
     Utility function for generating a link to a directory HTML view
     to insert in Django templates.
 
     Args:
         sha1_git (str): directory identifier
         link_text (str): optional text for the generated link
             (the directory id will be used by default)
         link_attrs (dict): optional attributes (e.g. class)
             to add to the link
 
     Returns:
         An HTML link in the form '<a href="directory_view_url">link_text</a>'
 
     """
     if not sha1_git:
         return None
 
     query_params = _snapshot_context_query_params(snapshot_context)
 
     directory_url = reverse('browse-directory',
                             url_args={'sha1_git': sha1_git},
                             query_params=query_params)
 
     if not link_text:
         link_text = sha1_git
     return gen_link(directory_url, link_text, link_attrs)
 
 
 def gen_snapshot_link(snapshot_id, snapshot_context=None, link_text='Browse',
                       link_attrs={'class': 'btn btn-default btn-sm',
                                   'role': 'button'}):
     """
     Utility function for generating a link to a snapshot HTML view
     to insert in Django templates.
 
     Args:
         snapshot_id (str): snapshot identifier
         link_text (str): optional text for the generated link
             (the snapshot id will be used by default)
         link_attrs (dict): optional attributes (e.g. class)
             to add to the link
 
     Returns:
         An HTML link in the form '<a href="snapshot_view_url">link_text</a>'
 
     """
 
     query_params = _snapshot_context_query_params(snapshot_context)
 
     snapshot_url = reverse('browse-snapshot',
                            url_args={'snapshot_id': snapshot_id},
                            query_params=query_params)
     if not link_text:
         link_text = snapshot_id
     return gen_link(snapshot_url, link_text, link_attrs)
 
 
 def gen_content_link(sha1_git, snapshot_context=None, link_text='Browse',
                      link_attrs={'class': 'btn btn-default btn-sm',
                                  'role': 'button'}):
     """
     Utility function for generating a link to a content HTML view
     to insert in Django templates.
 
     Args:
         sha1_git (str): content identifier
         link_text (str): optional text for the generated link
             (the content sha1_git will be used by default)
         link_attrs (dict): optional attributes (e.g. class)
             to add to the link
 
     Returns:
         An HTML link in the form '<a href="content_view_url">link_text</a>'
 
     """
     if not sha1_git:
         return None
 
     query_params = _snapshot_context_query_params(snapshot_context)
 
     content_url = reverse('browse-content',
                           url_args={'query_string': 'sha1_git:' + sha1_git},
                           query_params=query_params)
     if not link_text:
         link_text = sha1_git
     return gen_link(content_url, link_text, link_attrs)
 
 
 def get_revision_log_url(revision_id, snapshot_context=None):
     """
     Utility function for getting the URL for a revision log HTML view
     (possibly in the context of an origin).
 
     Args:
         revision_id (str): revision identifier the history heads to
         snapshot_context (dict): if provided, generate snapshot-dependent
             browsing link
     Returns:
         The revision log view URL
     """
     query_params = {'revision': revision_id}
     if snapshot_context and snapshot_context['origin_info']:
         origin_info = snapshot_context['origin_info']
         url_args = {'origin_url': origin_info['url']}
         if 'timestamp' in snapshot_context['url_args']:
             url_args['timestamp'] = \
                 snapshot_context['url_args']['timestamp']
         if 'visit_id' in snapshot_context['query_params']:
             query_params['visit_id'] = \
                 snapshot_context['query_params']['visit_id']
         revision_log_url = reverse('browse-origin-log',
                                    url_args=url_args,
                                    query_params=query_params)
     elif snapshot_context:
         url_args = {'snapshot_id': snapshot_context['snapshot_id']}
         revision_log_url = reverse('browse-snapshot-log',
                                    url_args=url_args,
                                    query_params=query_params)
     else:
         revision_log_url = reverse('browse-revision-log',
                                    url_args={'sha1_git': revision_id})
     return revision_log_url
 
 
 def gen_revision_log_link(revision_id, snapshot_context=None,
                           link_text='Browse',
                           link_attrs={'class': 'btn btn-default btn-sm',
                                       'role': 'button'}):
     """
     Utility function for generating a link to a revision log HTML view
     (possibly in the context of an origin) to insert in Django templates.
 
    Args:
        revision_id (str): identifier of the revision heading the log
        snapshot_context (dict): if provided, generate a snapshot-dependent
            browsing link
        link_text (str): optional text to use for the generated link
            (the revision id will be used by default)
         link_attrs (dict): optional attributes (e.g. class)
             to add to the link
 
     Returns:
         An HTML link in the form
         '<a href="revision_log_view_url">link_text</a>'
     """
     if not revision_id:
         return None
 
     revision_log_url = get_revision_log_url(revision_id, snapshot_context)
 
     if not link_text:
         link_text = revision_id
     return gen_link(revision_log_url, link_text, link_attrs)
 
 
 def gen_release_link(sha1_git, snapshot_context=None, link_text='Browse',
                      link_attrs={'class': 'btn btn-default btn-sm',
                                  'role': 'button'}):
     """
     Utility function for generating a link to a release HTML view
     to insert in Django templates.
 
    Args:
        sha1_git (str): release identifier
        snapshot_context (dict): if provided, generate a snapshot-dependent
            browsing link
        link_text (str): optional text for the generated link
            (the release id will be used by default)
        link_attrs (dict): optional attributes (e.g. class)
            to add to the link
 
     Returns:
         An HTML link in the form '<a href="release_view_url">link_text</a>'
 
     """
 
     query_params = _snapshot_context_query_params(snapshot_context)
 
     release_url = reverse('browse-release',
                           url_args={'sha1_git': sha1_git},
                           query_params=query_params)
     if not link_text:
         link_text = sha1_git
     return gen_link(release_url, link_text, link_attrs)
 
 
 def format_log_entries(revision_log, per_page, snapshot_context=None):
    """
    Utility function that processes raw revision log data for HTML display.
    Its purpose is to:

        * add links to relevant browse views
        * format dates in a human readable way
        * truncate the log messages
 
     Args:
         revision_log (list): raw revision log as returned by the swh-web api
         per_page (int): number of log entries per page
         snapshot_context (dict): if provided, generate snapshot-dependent
             browsing link
 
 
     """
     revision_log_data = []
     for i, rev in enumerate(revision_log):
         if i == per_page:
             break
         author_name = 'None'
         author_fullname = 'None'
         committer_fullname = 'None'
         if rev['author']:
             author_name = rev['author']['name'] or rev['author']['fullname']
             author_fullname = rev['author']['fullname']
         if rev['committer']:
             committer_fullname = rev['committer']['fullname']
         author_date = format_utc_iso_date(rev['date'])
         committer_date = format_utc_iso_date(rev['committer_date'])
 
         tooltip = 'revision %s\n' % rev['id']
         tooltip += 'author: %s\n' % author_fullname
         tooltip += 'author date: %s\n' % author_date
         tooltip += 'committer: %s\n' % committer_fullname
         tooltip += 'committer date: %s\n\n' % committer_date
         if rev['message']:
             tooltip += textwrap.indent(rev['message'], ' '*4)
 
         revision_log_data.append({
             'author': author_name,
             'id': rev['id'][:7],
             'message': rev['message'],
             'date': author_date,
             'commit_date': committer_date,
             'url': gen_revision_url(rev['id'], snapshot_context),
             'tooltip': tooltip
         })
     return revision_log_data
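
# The per-entry processing above can be sketched in isolation. This is a
# hypothetical, simplified version (summarize_log_entry is not part of
# swh-web): it keeps only the author fallback, the abbreviated revision
# id and the tooltip layout, and skips date formatting and URL generation.

```python
import textwrap

def summarize_log_entry(rev):
    # fall back from 'name' to 'fullname', then to the string 'None',
    # mirroring the author handling in format_log_entries
    author = rev.get('author') or {}
    author_name = author.get('name') or author.get('fullname') or 'None'
    tooltip = 'revision %s\n' % rev['id']
    if rev.get('message'):
        # indent the commit message under the metadata lines
        tooltip += textwrap.indent(rev['message'], ' ' * 4)
    return {
        'author': author_name,
        'id': rev['id'][:7],  # abbreviated revision id for display
        'tooltip': tooltip,
    }

entry = summarize_log_entry({
    'id': 'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3',
    'author': {'name': 'Jane', 'fullname': 'Jane <jane@example.org>'},
    'message': 'Fix bug',
})
```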
 
 
 # list of origin types that can be found in the swh archive
 # TODO: retrieve it dynamically in an efficient way instead
 #       of hardcoding it
 _swh_origin_types = ['git', 'svn', 'deb', 'hg', 'ftp', 'deposit',
                      'pypi', 'npm']
 
 
 def get_origin_info(origin_url, origin_type=None):
     """
    Get info about a software origin.
    Its main purpose is to automatically find an origin type
    when it is not provided as a parameter.
 
     Args:
         origin_url (str): complete url of a software origin
         origin_type (str): optional origin type
 
     Returns:
         A dict with the following entries:
             * type: the origin type
             * url: the origin url
             * id: the internal id of the origin
     """
     if origin_type:
         return service.lookup_origin({'type': origin_type,
                                       'url': origin_url})
     else:
         for origin_type in _swh_origin_types:
             try:
                 origin_info = service.lookup_origin({'type': origin_type,
                                                      'url': origin_url})
                 return origin_info
             except Exception:
                 pass
     raise NotFoundExc('Origin with url %s not found!' % escape(origin_url))
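
# The try-each-type fallback above can be illustrated standalone. In this
# sketch, lookup_origin is a stand-in for service.lookup_origin and the
# known-origins set is made up for the example; only the control flow
# matches the function above.

```python
_types = ['git', 'svn', 'hg']

def lookup_origin(query):
    # stand-in for service.lookup_origin: raise when the origin is unknown
    known = {('git', 'https://example.org/repo')}
    if (query['type'], query['url']) not in known:
        raise ValueError('not found')
    return {'type': query['type'], 'url': query['url'], 'id': 1}

def find_origin(url):
    # try each known origin type until a lookup succeeds
    for origin_type in _types:
        try:
            return lookup_origin({'type': origin_type, 'url': url})
        except Exception:
            continue
    raise ValueError('Origin with url %s not found!' % url)

info = find_origin('https://example.org/repo')
```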
 
 
 def get_snapshot_context(snapshot_id=None, origin_type=None, origin_url=None,
                          timestamp=None, visit_id=None):
     """
     Utility function to compute relevant information when navigating
     the archive in a snapshot context. The snapshot is either
     referenced by its id or it will be retrieved from an origin visit.
 
     Args:
         snapshot_id (str): hexadecimal representation of a snapshot identifier,
             all other parameters will be ignored if it is provided
         origin_type (str): the origin type (git, svn, deposit, ...)
         origin_url (str): the origin_url
             (e.g. https://github.com/(user)/(repo)/)
         timestamp (str): a datetime string for retrieving the closest
             visit of the origin
         visit_id (int): optional visit id for disambiguation in case
             of several visits with the same timestamp
 
     Returns:
         A dict with the following entries:
             * origin_info: dict containing origin information
             * visit_info: dict containing visit information
             * branches: the list of branches for the origin found
               during the visit
             * releases: the list of releases for the origin found
               during the visit
             * origin_browse_url: the url to browse the origin
             * origin_branches_url: the url to browse the origin branches
            * origin_releases_url: the url to browse the origin releases
             * origin_visit_url: the url to browse the snapshot of the origin
               found during the visit
             * url_args: dict containing url arguments to use when browsing in
               the context of the origin and its visit
 
     Raises:
         NotFoundExc: if no snapshot is found for the visit of an origin.
     """
     origin_info = None
     visit_info = None
     url_args = None
     query_params = {}
     branches = []
     releases = []
     browse_url = None
     visit_url = None
     branches_url = None
     releases_url = None
     swh_type = 'snapshot'
     if origin_url:
         swh_type = 'origin'
         origin_info = get_origin_info(origin_url, origin_type)
 
         visit_info = get_origin_visit(origin_info, timestamp, visit_id,
                                       snapshot_id)
         fmt_date = format_utc_iso_date(visit_info['date'])
         visit_info['fmt_date'] = fmt_date
         snapshot_id = visit_info['snapshot']
 
         if not snapshot_id:
             raise NotFoundExc('No snapshot associated to the visit of origin '
                               '%s on %s' % (escape(origin_url), fmt_date))
 
        # the provided timestamp is not necessarily equal to the one
        # of the retrieved visit, so get the exact one in order
        # to use it in the urls generated below
         if timestamp:
             timestamp = visit_info['date']
 
         branches, releases = \
             get_origin_visit_snapshot(origin_info, timestamp, visit_id,
                                       snapshot_id)
 
         url_args = {'origin_type': origin_type,
                     'origin_url': origin_info['url']}
 
         query_params = {'visit_id': visit_id}
 
         browse_url = reverse('browse-origin-visits',
                              url_args=url_args)
 
         if timestamp:
             url_args['timestamp'] = format_utc_iso_date(timestamp,
                                                         '%Y-%m-%dT%H:%M:%S')
         visit_url = reverse('browse-origin-directory',
                             url_args=url_args,
                             query_params=query_params)
         visit_info['url'] = visit_url
 
         branches_url = reverse('browse-origin-branches',
                                url_args=url_args,
                                query_params=query_params)
 
         releases_url = reverse('browse-origin-releases',
                                url_args=url_args,
                                query_params=query_params)
     elif snapshot_id:
         branches, releases = get_snapshot_content(snapshot_id)
         url_args = {'snapshot_id': snapshot_id}
         browse_url = reverse('browse-snapshot',
                              url_args=url_args)
         branches_url = reverse('browse-snapshot-branches',
                                url_args=url_args)
 
         releases_url = reverse('browse-snapshot-releases',
                                url_args=url_args)
 
     releases = list(reversed(releases))
 
     snapshot_size = service.lookup_snapshot_size(snapshot_id)
 
     is_empty = sum(snapshot_size.values()) == 0
 
     swh_snp_id = persistent_identifier('snapshot', snapshot_id)
 
     return {
         'swh_type': swh_type,
         'swh_object_id': swh_snp_id,
         'snapshot_id': snapshot_id,
         'snapshot_size': snapshot_size,
         'is_empty': is_empty,
         'origin_info': origin_info,
         # keep track if the origin type was provided as url argument
         'origin_type': origin_type,
         'visit_info': visit_info,
         'branches': branches,
         'releases': releases,
         'branch': None,
         'release': None,
         'browse_url': browse_url,
         'branches_url': branches_url,
         'releases_url': releases_url,
         'url_args': url_args,
         'query_params': query_params
     }
 
 
 # list of common readme names ordered by preference
 # (lower indices have higher priority)
 _common_readme_names = [
     "readme.markdown",
     "readme.md",
     "readme.rst",
     "readme.txt",
     "readme"
 ]
 
 
 def get_readme_to_display(readmes):
     """
     Process a list of readme files found in a directory
     in order to find the adequate one to display.
 
    Args:
        readmes (dict): a dict mapping readme file names to their
            sha1 checksums

    Returns:
        A tuple (readme_name, readme_url, readme_html)
     """
     readme_name = None
     readme_url = None
     readme_sha1 = None
     readme_html = None
 
     lc_readmes = {k.lower(): {'orig_name': k, 'sha1': v}
                   for k, v in readmes.items()}
 
     # look for readme names according to the preference order
     # defined by the _common_readme_names list
     for common_readme_name in _common_readme_names:
         if common_readme_name in lc_readmes:
             readme_name = lc_readmes[common_readme_name]['orig_name']
             readme_sha1 = lc_readmes[common_readme_name]['sha1']
             readme_url = reverse('browse-content-raw',
                                  url_args={'query_string': readme_sha1},
-                                 query_params={'reencode': 'true'})
+                                 query_params={'re_encode': 'true'})
             break
 
     # otherwise pick the first readme like file if any
     if not readme_name and len(readmes.items()) > 0:
         readme_name = next(iter(readmes))
         readme_sha1 = readmes[readme_name]
         readme_url = reverse('browse-content-raw',
                              url_args={'query_string': readme_sha1},
-                             query_params={'reencode': 'true'})
+                             query_params={'re_encode': 'true'})
 
     # convert rst README to html server side as there is
     # no viable solution to perform that task client side
     if readme_name and readme_name.endswith('.rst'):
         cache_entry_id = 'readme_%s' % readme_sha1
         cache_entry = cache.get(cache_entry_id)
 
         if cache_entry:
             readme_html = cache_entry
         else:
             try:
                 rst_doc = request_content(readme_sha1)
                 readme_html = pypandoc.convert_text(rst_doc['raw_data'],
                                                     'html', format='rst')
                 cache.set(cache_entry_id, readme_html)
             except Exception:
                 readme_html = 'Readme bytes are not available'
 
     return readme_name, readme_url, readme_html
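
# The case-insensitive selection above boils down to the following
# standalone sketch (pick_readme is illustrative, not part of swh-web):
# lowercase the candidate names, pick the first match in the preference
# list, and fall back to any readme-like file.

```python
_preferred = ['readme.markdown', 'readme.md', 'readme.rst',
              'readme.txt', 'readme']

def pick_readme(readmes):
    # map lowercased names back to their original spelling
    lc = {name.lower(): name for name in readmes}
    for candidate in _preferred:
        if candidate in lc:
            return lc[candidate]
    # otherwise pick the first file if any
    return next(iter(readmes), None)

chosen = pick_readme({'INSTALL': 'sha1-a', 'README.md': 'sha1-b'})
```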
 
 
 def get_swh_persistent_ids(swh_objects, snapshot_context=None):
     """
     Returns a list of dict containing info related to persistent
     identifiers of swh objects.
 
     Args:
         swh_objects (list): a list of dict with the following keys:
             * type: swh object type
                 (content/directory/release/revision/snapshot)
             * id: swh object id
         snapshot_context (dict): optional parameter describing the snapshot in
             which the object has been found
 
     Returns:
         list: a list of dict with the following keys:
             * object_type: the swh object type
                 (content/directory/release/revision/snapshot)
             * object_icon: the swh object icon to use in HTML views
             * swh_id: the computed swh object persistent identifier
             * swh_id_url: the url resolving the persistent identifier
             * show_options: boolean indicating if the persistent id options
                 must be displayed in persistent ids HTML view
     """
     swh_ids = []
     for swh_object in swh_objects:
         if not swh_object['id']:
             continue
         swh_id = get_swh_persistent_id(swh_object['type'], swh_object['id'])
         show_options = swh_object['type'] == 'content' or \
             (snapshot_context and snapshot_context['origin_info'] is not None)
 
         object_icon = swh_object_icons[swh_object['type']]
 
         swh_ids.append({
             'object_type': swh_object['type'],
             'object_icon': object_icon,
             'swh_id': swh_id,
             'swh_id_url': reverse('browse-swh-id',
                                   url_args={'swh_id': swh_id}),
             'show_options': show_options
         })
     return swh_ids
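
# The persistent identifiers handled above follow the swh:1:<code>:<id>
# layout. The real computation lives in swh.model; this standalone sketch
# (make_swh_id is hypothetical) only assumes that documented layout.

```python
# three-letter codes for the five archived object types
_type_codes = {'content': 'cnt', 'directory': 'dir', 'revision': 'rev',
               'release': 'rel', 'snapshot': 'snp'}

def make_swh_id(object_type, object_id):
    # scheme name, scheme version, object type code, hex object id
    return 'swh:1:%s:%s' % (_type_codes[object_type], object_id)

swh_id = make_swh_id('content', '94a9ed024d3859793618152ea559a168bbcbb5e2')
```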
diff --git a/swh/web/browse/views/content.py b/swh/web/browse/views/content.py
index b2a01cb0..d91f9d58 100644
--- a/swh/web/browse/views/content.py
+++ b/swh/web/browse/views/content.py
@@ -1,319 +1,319 @@
 # Copyright (C) 2017-2019  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU Affero General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import difflib
 import json
 
 from distutils.util import strtobool
 
 from django.http import HttpResponse
 from django.shortcuts import render
 from django.template.defaultfilters import filesizeformat
 
 from swh.model.hashutil import hash_to_hex
 
 from swh.web.common import query, service
 from swh.web.common.utils import (
     reverse, gen_path_info, swh_object_icons
 )
 from swh.web.common.exc import NotFoundExc, handle_view_exception
 from swh.web.browse.utils import (
     request_content, prepare_content_for_display,
     content_display_max_size, get_snapshot_context,
     get_swh_persistent_ids, gen_link, gen_directory_link
 )
 from swh.web.browse.browseurls import browse_route
 
 
 @browse_route(r'content/(?P<query_string>[0-9a-z_:]*[0-9a-f]+.)/raw/',
               view_name='browse-content-raw',
               checksum_args=['query_string'])
 def content_raw(request, query_string):
     """Django view that produces a raw display of a content identified
     by its hash value.
 
     The url that points to it is
         :http:get:`/browse/content/[(algo_hash):](hash)/raw/`
     """
     try:
-        reencode = bool(strtobool(request.GET.get('reencode', 'false')))
+        re_encode = bool(strtobool(request.GET.get('re_encode', 'false')))
         algo, checksum = query.parse_hash(query_string)
         checksum = hash_to_hex(checksum)
         content_data = request_content(query_string, max_size=None,
-                                       reencode=reencode)
+                                       re_encode=re_encode)
     except Exception as exc:
         return handle_view_exception(request, exc)
 
     filename = request.GET.get('filename', None)
     if not filename:
         filename = '%s_%s' % (algo, checksum)
 
     if content_data['mimetype'].startswith('text/') or \
        content_data['mimetype'] == 'inode/x-empty':
         response = HttpResponse(content_data['raw_data'],
                                 content_type="text/plain")
         response['Content-disposition'] = 'filename=%s' % filename
     else:
         response = HttpResponse(content_data['raw_data'],
                                 content_type='application/octet-stream')
         response['Content-disposition'] = 'attachment; filename=%s' % filename
     return response
 
 
 _auto_diff_size_limit = 20000
 
 
 @browse_route(r'content/(?P<from_query_string>.*)/diff/(?P<to_query_string>.*)', # noqa
               view_name='diff-contents')
 def _contents_diff(request, from_query_string, to_query_string):
     """
     Browse endpoint used to compute unified diffs between two contents.
 
     Diffs are generated only if the two contents are textual.
    By default, diffs whose size is greater than 20 kB will
     not be generated. To force the generation of large diffs,
     the 'force' boolean query parameter must be used.
 
     Args:
         request: input django http request
         from_query_string: a string of the form "[ALGO_HASH:]HASH" where
             optional ALGO_HASH can be either ``sha1``, ``sha1_git``,
             ``sha256``, or ``blake2s256`` (default to ``sha1``) and HASH
             the hexadecimal representation of the hash value identifying
             the first content
         to_query_string: same as above for identifying the second content
 
     Returns:
         A JSON object containing the unified diff.
 
     """
     diff_data = {}
     content_from = None
     content_to = None
     content_from_size = 0
     content_to_size = 0
     content_from_lines = []
     content_to_lines = []
     force = request.GET.get('force', 'false')
     path = request.GET.get('path', None)
     language = 'nohighlight'
 
     force = bool(strtobool(force))
 
     if from_query_string == to_query_string:
         diff_str = 'File renamed without changes'
     else:
         try:
             text_diff = True
             if from_query_string:
                 content_from = \
                     request_content(from_query_string, max_size=None)
                 content_from_display_data = \
                     prepare_content_for_display(content_from['raw_data'],
                                                 content_from['mimetype'], path)
                 language = content_from_display_data['language']
                 content_from_size = content_from['length']
                 if not (content_from['mimetype'].startswith('text/') or
                         content_from['mimetype'] == 'inode/x-empty'):
                     text_diff = False
 
             if text_diff and to_query_string:
                 content_to = request_content(to_query_string, max_size=None)
                 content_to_display_data = prepare_content_for_display(
                         content_to['raw_data'], content_to['mimetype'], path)
                 language = content_to_display_data['language']
                 content_to_size = content_to['length']
                 if not (content_to['mimetype'].startswith('text/') or
                         content_to['mimetype'] == 'inode/x-empty'):
                     text_diff = False
 
             diff_size = abs(content_to_size - content_from_size)
 
             if not text_diff:
                 diff_str = 'Diffs are not generated for non textual content'
                 language = 'nohighlight'
             elif not force and diff_size > _auto_diff_size_limit:
                 diff_str = 'Large diffs are not automatically computed'
                 language = 'nohighlight'
             else:
                 if content_from:
                     content_from_lines = \
                         content_from['raw_data'].decode('utf-8')\
                                                 .splitlines(True)
                     if content_from_lines and \
                             content_from_lines[-1][-1] != '\n':
                         content_from_lines[-1] += '[swh-no-nl-marker]\n'
 
                 if content_to:
                     content_to_lines = content_to['raw_data'].decode('utf-8')\
                                                             .splitlines(True)
                     if content_to_lines and content_to_lines[-1][-1] != '\n':
                         content_to_lines[-1] += '[swh-no-nl-marker]\n'
 
                 diff_lines = difflib.unified_diff(content_from_lines,
                                                   content_to_lines)
                 diff_str = ''.join(list(diff_lines)[2:])
         except Exception as e:
             diff_str = str(e)
 
     diff_data['diff_str'] = diff_str
     diff_data['language'] = language
     diff_data_json = json.dumps(diff_data, separators=(',', ': '))
     return HttpResponse(diff_data_json, content_type='application/json')
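
# The core diff computation above, without the Django plumbing, reduces to
# this self-contained sketch: split both texts keeping line endings, flag a
# missing trailing newline with the same marker as the view, and drop the
# two '---'/'+++' header lines emitted by difflib.

```python
import difflib

def unified_diff_str(text_from, text_to):
    def to_lines(text):
        # keep line endings so difflib reproduces them in the output
        lines = text.splitlines(True)
        if lines and not lines[-1].endswith('\n'):
            lines[-1] += '[swh-no-nl-marker]\n'
        return lines
    diff_lines = difflib.unified_diff(to_lines(text_from), to_lines(text_to))
    # drop the '--- '/'+++ ' file header lines, keep the hunks
    return ''.join(list(diff_lines)[2:])

diff_str = unified_diff_str('a\nb\n', 'a\nc\n')
```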
 
 
 @browse_route(r'content/(?P<query_string>[0-9a-z_:]*[0-9a-f]+.)/',
               view_name='browse-content',
               checksum_args=['query_string'])
 def content_display(request, query_string):
     """Django view that produces an HTML display of a content identified
     by its hash value.
 
     The url that points to it is
         :http:get:`/browse/content/[(algo_hash):](hash)/`
     """
     try:
         algo, checksum = query.parse_hash(query_string)
         checksum = hash_to_hex(checksum)
         content_data = request_content(query_string,
                                        raise_if_unavailable=False)
         origin_type = request.GET.get('origin_type', None)
         origin_url = request.GET.get('origin_url', None)
         if not origin_url:
             origin_url = request.GET.get('origin', None)
         snapshot_context = None
         if origin_url:
             try:
                 snapshot_context = get_snapshot_context(None, origin_type,
                                                         origin_url)
             except Exception:
                 raw_cnt_url = reverse('browse-content',
                                       url_args={'query_string': query_string})
                 error_message = \
                     ('The Software Heritage archive has a content '
                      'with the hash you provided but the origin '
                      'mentioned in your request appears broken: %s. '
                      'Please check the URL and try again.\n\n'
                      'Nevertheless, you can still browse the content '
                      'without origin information: %s'
                         % (gen_link(origin_url), gen_link(raw_cnt_url)))
 
                 raise NotFoundExc(error_message)
         if snapshot_context:
             snapshot_context['visit_info'] = None
     except Exception as exc:
         return handle_view_exception(request, exc)
 
     path = request.GET.get('path', None)
 
     content = None
     language = None
     mimetype = None
     if content_data['raw_data'] is not None:
         content_display_data = prepare_content_for_display(
             content_data['raw_data'], content_data['mimetype'], path)
         content = content_display_data['content_data']
         language = content_display_data['language']
         mimetype = content_display_data['mimetype']
 
     root_dir = None
     filename = None
     path_info = None
     directory_id = None
     directory_url = None
 
     query_params = {'origin': origin_url}
 
     breadcrumbs = []
 
     if path:
         split_path = path.split('/')
         root_dir = split_path[0]
         filename = split_path[-1]
         if root_dir != path:
             path = path.replace(root_dir + '/', '')
             path = path[:-len(filename)]
             path_info = gen_path_info(path)
             dir_url = reverse('browse-directory',
                               url_args={'sha1_git': root_dir},
                               query_params=query_params)
             breadcrumbs.append({'name': root_dir[:7],
                                 'url': dir_url})
             for pi in path_info:
                 dir_url = reverse('browse-directory',
                                   url_args={'sha1_git': root_dir,
                                             'path': pi['path']},
                                   query_params=query_params)
                 breadcrumbs.append({'name': pi['name'],
                                     'url': dir_url})
         breadcrumbs.append({'name': filename,
                             'url': None})
 
     if path and root_dir != path:
         dir_info = service.lookup_directory_with_path(root_dir, path)
         directory_id = dir_info['target']
     elif root_dir != path:
         directory_id = root_dir
 
     if directory_id:
         directory_url = gen_directory_link(directory_id)
 
     query_params = {'filename': filename}
 
     content_raw_url = reverse('browse-content-raw',
                               url_args={'query_string': query_string},
                               query_params=query_params)
 
     content_metadata = {
         'sha1': content_data['checksums']['sha1'],
         'sha1_git': content_data['checksums']['sha1_git'],
         'sha256': content_data['checksums']['sha256'],
         'blake2s256': content_data['checksums']['blake2s256'],
         'mimetype': content_data['mimetype'],
         'encoding': content_data['encoding'],
         'size': filesizeformat(content_data['length']),
         'language': content_data['language'],
         'licenses': content_data['licenses'],
         'filename': filename,
         'directory': directory_id,
         'context-independent directory': directory_url
     }
 
     if filename:
         content_metadata['filename'] = filename
 
     sha1_git = content_data['checksums']['sha1_git']
     swh_ids = get_swh_persistent_ids([{'type': 'content',
                                        'id': sha1_git}])
 
     heading = 'Content - %s' % sha1_git
     if breadcrumbs:
         content_path = '/'.join([bc['name'] for bc in breadcrumbs])
         heading += ' - %s' % content_path
 
     return render(request, 'browse/content.html',
                   {'heading': heading,
                    'swh_object_id': swh_ids[0]['swh_id'],
                    'swh_object_name': 'Content',
                    'swh_object_metadata': content_metadata,
                    'content': content,
                    'content_size': content_data['length'],
                    'max_content_size': content_display_max_size,
                    'mimetype': mimetype,
                    'language': language,
                    'breadcrumbs': breadcrumbs,
                    'top_right_link': {
                         'url': content_raw_url,
                         'icon': swh_object_icons['content'],
                         'text': 'Raw File'
                    },
                    'snapshot_context': snapshot_context,
                    'vault_cooking': None,
                    'show_actions_menu': True,
                    'swh_ids': swh_ids,
                    'error_code': content_data['error_code'],
                    'error_message': content_data['error_message'],
                    'error_description': content_data['error_description']},
                   status=content_data['error_code'])
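
# The breadcrumb construction in content_display can be sketched on its
# own. This illustrative helper (build_breadcrumbs and its URL pattern are
# made up for the example) shows the shape of the output: the first path
# component is the root directory id, the last is the file name, and each
# intermediate directory becomes a clickable crumb.

```python
def build_breadcrumbs(path):
    parts = path.split('/')
    root_dir, filename = parts[0], parts[-1]
    # root directory crumb with an abbreviated id
    crumbs = [{'name': root_dir[:7],
               'url': '/browse/directory/%s/' % root_dir}]
    sub_parts = []
    for name in parts[1:-1]:
        sub_parts.append(name)
        crumbs.append({'name': name,
                       'url': '/browse/directory/%s/%s/'
                              % (root_dir, '/'.join(sub_parts))})
    # the file itself is the last, non-clickable crumb
    crumbs.append({'name': filename, 'url': None})
    return crumbs

crumbs = build_breadcrumbs('abcdef1234/src/main.py')
```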
diff --git a/swh/web/common/swh_templatetags.py b/swh/web/common/swh_templatetags.py
index eab6cbef..d7a6c77e 100644
--- a/swh/web/common/swh_templatetags.py
+++ b/swh/web/common/swh_templatetags.py
@@ -1,185 +1,185 @@
 # Copyright (C) 2017-2019  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU Affero General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 import json
 import re
 
 from django import template
 from django.core.serializers.json import DjangoJSONEncoder
 from django.utils.safestring import mark_safe
 
 from docutils.core import publish_parts
 from docutils.writers.html4css1 import Writer, HTMLTranslator
 from inspect import cleandoc
 
 from swh.web.common.origin_save import get_savable_origin_types
 
 register = template.Library()
 
 
 class NoHeaderHTMLTranslator(HTMLTranslator):
     """
     Docutils translator subclass to customize the generation of HTML
     from reST-formatted docstrings
     """
     def __init__(self, document):
         super().__init__(document)
         self.body_prefix = []
         self.body_suffix = []
 
     def visit_bullet_list(self, node):
         self.context.append((self.compact_simple, self.compact_p))
         self.compact_p = None
         self.compact_simple = self.is_compactable(node)
         self.body.append(self.starttag(node, 'ul', CLASS='docstring'))
 
 
 DOCSTRING_WRITER = Writer()
 DOCSTRING_WRITER.translator_class = NoHeaderHTMLTranslator
 
 
 @register.filter
 def safe_docstring_display(docstring):
     """
     Utility function to htmlize reST-formatted documentation in browsable
     api.
     """
     docstring = cleandoc(docstring)
     return publish_parts(docstring, writer=DOCSTRING_WRITER)['html_body']
 
 
 @register.filter
 def urlize_links_and_mails(text):
    """Utility function for decorating api links in browsable api.

    Args:
        text: text whose content matching links should be transformed into
            contextual API or Browse html links.

    Returns:
        The text transformed if any link is found.
        The text as is otherwise.

    """
     try:
         if 'href="' not in text:
             text = re.sub(r'(/api/[^"<]*|/browse/[^"<]*|http.*$)',
                           r'<a href="\1">\1</a>',
                           text)
             return re.sub(r'([^ <>"]+@[^ <>"]+)',
                           r'<a href="mailto:\1">\1</a>',
                           text)
     except Exception:
         pass
 
     return text
 
 
 @register.filter
 def urlize_header_links(text):
     """Utility function for decorating headers links in browsable api.
 
     Args
         text: Text whose content contains Link header value
 
     Returns:
         The text transformed with html link if any link is found.
         The text as is otherwise.
 
     """
     links = text.split(',')
     ret = ''
     for i, link in enumerate(links):
         ret += re.sub(r'<(/api/.*|/browse/.*)>', r'<<a href="\1">\1</a>>',
                       link)
         # add one link per line and align them
         if i != len(links) - 1:
             ret += '\n     '
     return ret
 
 
 @register.filter
 def jsonify(obj):
     """Utility function for converting a django template variable
     to JSON in order to use it in script tags.
 
     Args
         obj: Any django template context variable
 
     Returns:
         JSON representation of the variable.
 
     """
     return mark_safe(json.dumps(obj, cls=DjangoJSONEncoder))
 
 
 @register.filter
 def sub(value, arg):
     """Django template filter for subtracting two numbers
 
     Args:
         value (int/float): the value to subtract from
         arg (int/float): the value to subtract
 
     Returns:
         int/float: The subtraction result
     """
     return value - arg
 
 
 @register.filter
 def mul(value, arg):
     """Django template filter for multiplying two numbers
 
     Args:
         value (int/float): the value to multiply
         arg (int/float): the multiplier
 
     Returns:
         int/float: The multiplication result
     """
     return value * arg
 
 
 @register.filter
 def key_value(dict, key):
     """Django template filter to get a value in a dictionary.
 
         Args:
             dict (dict): a dictionary
             key (str): the key whose value to look up
 
         Returns:
             The requested value in the dictionary
     """
     return dict[key]
 
 
 @register.filter
 def origin_type_savable(origin_type):
     """Django template filter to check if a save request can be
     created for a given origin type.
 
         Args:
             origin_type (str): the type of software origin
 
         Returns:
-            If the origin type is savable or not
+            Whether the origin type is saveable
     """
     return origin_type in get_savable_origin_types()
 
 
 @register.filter
 def split(value, arg):
     """Django template filter to split a string.
 
         Args:
             value (str): the string to split
             arg (str): the split separator
 
         Returns:
-            list: the splitted string parts
+            list: the split string parts
     """
     return value.split(arg)
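The arithmetic and string filters defined above are plain functions registered on Django's template library, so they can be exercised outside of Django. A minimal, framework-free sketch (filter bodies copied from the diff; the `@register.filter` decoration is omitted):

```python
# Framework-free sketch of the filters defined above; bodies mirror the
# diff. In a Django template they are applied as, e.g., {{ total|sub:1 }}.

def sub(value, arg):
    # subtract arg from value
    return value - arg

def mul(value, arg):
    # multiply value by arg
    return value * arg

def key_value(d, key):
    # dictionary lookup, as done by the key_value filter
    return d[key]

def split(value, arg):
    # split a string on the given separator
    return value.split(arg)

print(sub(10, 3))                            # → 7
print(mul(4, 2))                             # → 8
print(key_value({'sha1': 'abcd'}, 'sha1'))   # → abcd
print(split('swh/web/common', '/'))          # → ['swh', 'web', 'common']
```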
diff --git a/swh/web/tests/browse/views/test_content.py b/swh/web/tests/browse/views/test_content.py
index 1c4f4986..55fe8333 100644
--- a/swh/web/tests/browse/views/test_content.py
+++ b/swh/web/tests/browse/views/test_content.py
@@ -1,363 +1,363 @@
 # Copyright (C) 2017-2019  The Software Heritage developers
 # See the AUTHORS file at the top-level directory of this distribution
 # License: GNU Affero General Public License version 3, or any later version
 # See top-level LICENSE file for more information
 
 from unittest.mock import patch
 
 from django.utils.html import escape
 
 from hypothesis import given
 
 from swh.web.browse.utils import (
     get_mimetype_and_encoding_for_content, prepare_content_for_display,
-    _reencode_content
+    _re_encode_content
 )
 from swh.web.common.exc import NotFoundExc
 from swh.web.common.utils import reverse, get_swh_persistent_id
 from swh.web.common.utils import gen_path_info
 from swh.web.tests.strategies import (
     content, content_text_non_utf8, content_text_no_highlight,
     content_image_type, content_text, invalid_sha1, unknown_content
 )
 from swh.web.tests.testcase import WebTestCase
 
 
 class SwhBrowseContentTest(WebTestCase):
 
     @given(content_text())
     def test_content_view_text(self, content):
 
         sha1_git = content['sha1_git']
 
         url = reverse('browse-content',
                       url_args={'query_string': content['sha1']},
                       query_params={'path': content['path']})
 
         url_raw = reverse('browse-content-raw',
                           url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
 
         content_display = self._process_content_for_display(content)
         mimetype = content_display['mimetype']
 
         self.assertEqual(resp.status_code, 200)
         self.assertTemplateUsed('browse/content.html')
 
         if mimetype.startswith('text/'):
             self.assertContains(resp, '<code class="%s">' %
                                       content_display['language'])
             self.assertContains(resp, escape(content_display['content_data']))
         self.assertContains(resp, url_raw)
 
         swh_cnt_id = get_swh_persistent_id('content', sha1_git)
         swh_cnt_id_url = reverse('browse-swh-id',
                                  url_args={'swh_id': swh_cnt_id})
         self.assertContains(resp, swh_cnt_id)
         self.assertContains(resp, swh_cnt_id_url)
 
     @given(content_text_no_highlight())
     def test_content_view_text_no_highlight(self, content):
 
         sha1_git = content['sha1_git']
 
         url = reverse('browse-content',
                       url_args={'query_string': content['sha1']})
 
         url_raw = reverse('browse-content-raw',
                           url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
 
         content_display = self._process_content_for_display(content)
 
         self.assertEqual(resp.status_code, 200)
         self.assertTemplateUsed('browse/content.html')
 
         self.assertContains(resp, '<code class="nohighlight">')
         self.assertContains(resp, escape(content_display['content_data'])) # noqa
         self.assertContains(resp, url_raw)
 
         swh_cnt_id = get_swh_persistent_id('content', sha1_git)
         swh_cnt_id_url = reverse('browse-swh-id',
                                  url_args={'swh_id': swh_cnt_id})
 
         self.assertContains(resp, swh_cnt_id)
         self.assertContains(resp, swh_cnt_id_url)
 
     @given(content_text_non_utf8())
     def test_content_view_no_utf8_text(self, content):
 
         sha1_git = content['sha1_git']
 
         url = reverse('browse-content',
                       url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
 
         content_display = self._process_content_for_display(content)
 
         self.assertEqual(resp.status_code, 200)
         self.assertTemplateUsed('browse/content.html')
         swh_cnt_id = get_swh_persistent_id('content', sha1_git)
         swh_cnt_id_url = reverse('browse-swh-id',
                                  url_args={'swh_id': swh_cnt_id})
         self.assertContains(resp, swh_cnt_id_url)
         self.assertContains(resp, escape(content_display['content_data']))
 
     @given(content_image_type())
     def test_content_view_image(self, content):
 
         url = reverse('browse-content',
                       url_args={'query_string': content['sha1']})
 
         url_raw = reverse('browse-content-raw',
                           url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
 
         content_display = self._process_content_for_display(content)
         mimetype = content_display['mimetype']
         content_data = content_display['content_data']
 
         self.assertEqual(resp.status_code, 200)
         self.assertTemplateUsed('browse/content.html')
 
         self.assertContains(resp, '<img src="data:%s;base64,%s"/>'
                                   % (mimetype, content_data.decode('utf-8')))
         self.assertContains(resp, url_raw)
 
     @given(content_text())
     def test_content_view_text_with_path(self, content):
 
         path = content['path']
 
         url = reverse('browse-content',
                       url_args={'query_string': content['sha1']},
                       query_params={'path': path})
 
         resp = self.client.get(url)
         self.assertEqual(resp.status_code, 200)
         self.assertTemplateUsed('browse/content.html')
 
         self.assertContains(resp, '<nav class="bread-crumbs')
 
         content_display = self._process_content_for_display(content)
         mimetype = content_display['mimetype']
 
         if mimetype.startswith('text/'):
             hljs_language = content['hljs_language']
             self.assertContains(resp, '<code class="%s">' % hljs_language)
             self.assertContains(resp, escape(content_display['content_data']))
 
         split_path = path.split('/')
 
         root_dir_sha1 = split_path[0]
         filename = split_path[-1]
         path = path.replace(root_dir_sha1 + '/', '').replace(filename, '')
 
         path_info = gen_path_info(path)
 
         root_dir_url = reverse('browse-directory',
                                url_args={'sha1_git': root_dir_sha1})
 
         self.assertContains(resp, '<li class="swh-path">',
                             count=len(path_info)+1)
 
         self.assertContains(resp, '<a href="' + root_dir_url + '">' +
                             root_dir_sha1[:7] + '</a>')
 
         for p in path_info:
             dir_url = reverse('browse-directory',
                               url_args={'sha1_git': root_dir_sha1,
                                         'path': p['path']})
             self.assertContains(resp, '<a href="' + dir_url + '">' +
                                 p['name'] + '</a>')
 
         self.assertContains(resp, '<li>' + filename + '</li>')
 
         url_raw = reverse('browse-content-raw',
                           url_args={'query_string': content['sha1']},
                           query_params={'filename': filename})
         self.assertContains(resp, url_raw)
 
         url = reverse('browse-content',
                       url_args={'query_string': content['sha1']},
                       query_params={'path': filename})
 
         resp = self.client.get(url)
         self.assertEqual(resp.status_code, 200)
         self.assertTemplateUsed('browse/content.html')
 
         self.assertNotContains(resp, '<nav class="bread-crumbs')
 
     @given(content_text())
     def test_content_raw_text(self, content):
 
         url = reverse('browse-content-raw',
                       url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
 
         content_data = self.content_get(content['sha1'])['data']
 
         self.assertEqual(resp.status_code, 200)
         self.assertEqual(resp['Content-Type'], 'text/plain')
         self.assertEqual(resp['Content-disposition'],
                          'filename=%s_%s' % ('sha1', content['sha1']))
         self.assertEqual(resp.content, content_data)
 
         filename = content['path'].split('/')[-1]
 
         url = reverse('browse-content-raw',
                       url_args={'query_string': content['sha1']}, # noqa
                       query_params={'filename': filename})
 
         resp = self.client.get(url)
 
         self.assertEqual(resp.status_code, 200)
         self.assertEqual(resp['Content-Type'], 'text/plain')
         self.assertEqual(resp['Content-disposition'],
                          'filename=%s' % filename)
         self.assertEqual(resp.content, content_data)
 
     @given(content_text_non_utf8())
     def test_content_raw_no_utf8_text(self, content):
 
         url = reverse('browse-content-raw',
                       url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
         self.assertEqual(resp.status_code, 200)
         _, encoding = get_mimetype_and_encoding_for_content(resp.content)
         self.assertEqual(encoding, content['encoding'])
 
     @given(content_image_type())
     def test_content_raw_bin(self, content):
 
         url = reverse('browse-content-raw',
                       url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
 
         filename = content['path'].split('/')[-1]
         content_data = self.content_get(content['sha1'])['data']
 
         self.assertEqual(resp.status_code, 200)
         self.assertEqual(resp['Content-Type'], 'application/octet-stream')
         self.assertEqual(resp['Content-disposition'],
                          'attachment; filename=%s_%s' %
                          ('sha1', content['sha1']))
         self.assertEqual(resp.content, content_data)
 
         url = reverse('browse-content-raw',
                       url_args={'query_string': content['sha1']},
                       query_params={'filename': filename})
 
         resp = self.client.get(url)
 
         self.assertEqual(resp.status_code, 200)
         self.assertEqual(resp['Content-Type'], 'application/octet-stream')
         self.assertEqual(resp['Content-disposition'],
                          'attachment; filename=%s' % filename)
         self.assertEqual(resp.content, content_data)
 
     @given(invalid_sha1(), unknown_content())
     def test_content_request_errors(self, invalid_sha1, unknown_content):
 
         url = reverse('browse-content',
                       url_args={'query_string': invalid_sha1})
         resp = self.client.get(url)
         self.assertEqual(resp.status_code, 400)
         self.assertTemplateUsed('error.html')
 
         url = reverse('browse-content',
                       url_args={'query_string': unknown_content['sha1']})
         resp = self.client.get(url)
         self.assertEqual(resp.status_code, 404)
         self.assertTemplateUsed('error.html')
 
     @patch('swh.web.browse.utils.service')
     @given(content())
     def test_content_bytes_missing(self, mock_service, content):
 
         content_data = self.content_get_metadata(content['sha1'])
         content_data['data'] = None
 
         mock_service.lookup_content.return_value = content_data
         mock_service.lookup_content_filetype.side_effect = Exception()
         mock_service.lookup_content_raw.side_effect = NotFoundExc(
             'Content bytes not available!')
 
         url = reverse('browse-content',
                       url_args={'query_string': content['sha1']})
 
         resp = self.client.get(url)
 
         self.assertEqual(resp.status_code, 404)
         self.assertTemplateUsed('browse/content.html')
 
     @patch('swh.web.browse.views.content.request_content')
     def test_content_too_large(self, mock_request_content):
         stub_content_too_large_data = {
             'checksums': {
                 'sha1': '8624bcdae55baeef00cd11d5dfcfa60f68710a02',
                 'sha1_git': '94a9ed024d3859793618152ea559a168bbcbb5e2',
                 'sha256': ('8ceb4b9ee5adedde47b31e975c1d90c73ad27b6b16'
                            '5a1dcd80c7c545eb65b903'),
                 'blake2s256': ('38702b7168c7785bfe748b51b45d9856070ba90'
                                'f9dc6d90f2ea75d4356411ffe')
             },
             'length': 30000000,
             'raw_data': None,
             'mimetype': 'text/plain',
             'encoding': 'us-ascii',
             'language': 'not detected',
             'licenses': 'GPL',
             'error_code': 200,
             'error_message': '',
             'error_description': ''
         }
 
         content_sha1 = stub_content_too_large_data['checksums']['sha1']
 
         mock_request_content.return_value = stub_content_too_large_data
 
         url = reverse('browse-content',
                       url_args={'query_string': content_sha1})
 
         url_raw = reverse('browse-content-raw',
                           url_args={'query_string': content_sha1})
 
         resp = self.client.get(url)
 
         self.assertEqual(resp.status_code, 200)
         self.assertTemplateUsed('browse/content.html')
 
         self.assertContains(resp, 'Content is too large to be displayed')
         self.assertContains(resp, url_raw)
 
     def _process_content_for_display(self, content):
         content_data = self.content_get(content['sha1'])
 
         mime_type, encoding = get_mimetype_and_encoding_for_content(
             content_data['data'])
 
-        mime_type, content_data = _reencode_content(mime_type, encoding,
-                                                    content_data['data'])
+        mime_type, content_data = _re_encode_content(mime_type, encoding,
+                                                     content_data['data'])
 
         return prepare_content_for_display(content_data, mime_type,
                                            content['path'])
 
     @given(content())
     def test_content_uppercase(self, content):
         url = reverse('browse-content-uppercase-checksum',
                       url_args={'query_string': content['sha1'].upper()})
         resp = self.client.get(url)
         self.assertEqual(resp.status_code, 302)
 
         redirect_url = reverse('browse-content',
                                url_args={'query_string': content['sha1']})
 
         self.assertEqual(resp['location'], redirect_url)
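For reference, the `urlize_header_links` filter from the first file can also be exercised standalone. The function body below is copied from the diff; the sample Link header value is hypothetical:

```python
import re

# Body copied from urlize_header_links in swh_templatetags.py above:
# wraps </api/...> and </browse/...> targets of a Link header value in
# <a> tags, emitting one link per line, aligned.
def urlize_header_links(text):
    links = text.split(',')
    ret = ''
    for i, link in enumerate(links):
        ret += re.sub(r'<(/api/.*|/browse/.*)>', r'<<a href="\1">\1</a>>',
                      link)
        # add one link per line and align them
        if i != len(links) - 1:
            ret += '\n     '
    return ret

# Hypothetical Link header value with a single pagination link:
header = '</api/1/origins/?page=2>; rel="next"'
print(urlize_header_links(header))
# → <<a href="/api/1/origins/?page=2">/api/1/origins/?page=2</a>>; rel="next"
```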