diff --git a/docs/cli.rst b/docs/cli.rst
index 2ba90a1..a9420b4 100644
--- a/docs/cli.rst
+++ b/docs/cli.rst
@@ -1,8 +1,8 @@
.. _swh-fuse-cli:

Command-line interface
======================

.. click:: swh.fuse.cli:fuse
-   :prog: swh fuse
+   :prog: swh fs
   :show-nested:
diff --git a/docs/design.md b/docs/design.md
index 54173f1..ea2e979 100644
--- a/docs/design.md
+++ b/docs/design.md
@@ -1,221 +1,221 @@
# Software Heritage virtual filesystem (SwhFS) --- Design notes

```{warning}
This document describes design notes for the Software Heritage virtual
filesystem (SwhFS), which is still under active development and hence **not yet
available** for general use.
```

The [Software Heritage](https://www.softwareheritage.org/) {ref}`data model `
is a [Directed Acyclic Graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph)
(DAG) with nodes of different types that correspond to source code artifacts
such as directories, commits, etc. Using this
[FUSE](https://en.wikipedia.org/wiki/Filesystem_in_Userspace) module (*SwhFS*
for short) you can locally mount, and then navigate as a (virtual) file system,
parts of the archive identified by {ref}`Software Heritage identifiers ` (SWHIDs).

To retrieve information about the source code artifacts, SwhFS interacts over
the network with the Software Heritage archive via its {ref}`Web API `.

## Command-line interface

-    $ swh fuse mount [SWHID]...
+    $ swh fs mount [SWHID]...

will mount the Software Heritage archive at the local `<DIR>`, the *SwhFS mount
point*. From there, the user will be able to lazily load and navigate the
archive, using SWHIDs as entry points.

If one or more SWHIDs are also specified, the corresponding objects will be
pre-fetched from the archive at mount-time and available at
`<DIR>/archive/<SWHID>`.

For more details see the {ref}`CLI documentation <swh-fuse-cli>`.

## Mount point

The SwhFS mount point contains:

- `archive/`: initially empty, this directory is lazily populated with one
  entry per accessed SWHID, having actual SWHIDs as names.

- `meta/`: initially empty, this directory contains one `<SWHID>.json` file for
  each `<SWHID>` entry under `archive/`. The JSON file contains all available
  meta information about the given SWHID, as returned by the Software Heritage
  Web API for that object. Note that, in case of pagination (e.g., snapshot
  objects with many branches), the JSON file will contain a complete version
  with all pages merged together.

```{todo}
Consider sharding `<SWHID>`/`<SWHID>.json` files under `ab/cd/` dirs to avoid
exploding the number of dir entries under `archive/` and `meta/` (cf.
[T2694](https://forge.softwareheritage.org/T2694))
```

## File system representation

SWHIDs are represented differently on the file-system depending on the
associated node types in the Software Heritage graph. Details are given below,
for each node type.

### `cnt` nodes (blobs)

Content leaves (AKA blobs) are represented on disk as regular files, containing
the corresponding bytes, as archived.

Note that permissions are associated with blobs only in the context of
directories. Hence, when accessing blobs from the top-level `archive/`
directory, the permissions of the `archive/SWHID` file will be arbitrary and
not meaningful (e.g., `0644`).

### `dir` nodes (directories)

Directory nodes are represented as directories on the file-system, containing
one entry for each entry of the archived directory. Entry names and other
metadata, including permissions, will correspond to the archived entry
metadata.

Note that SwhFS is mounted read-only, no matter what the permissions say. So it
is possible that, in the context of a directory, a file is presented as
writable, whereas actually writing to it will fail with `EPERM`.
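For illustration, here is a minimal sketch (not part of SwhFS itself) of what
this looks like from a client process. It assumes a mount point at `swhfs/` and
reuses the content SWHID from the `swh fs mount` example in `swh/fuse/cli.py`;
depending on the FUSE layer the error may surface as `EROFS` rather than
`EPERM`, so both are accepted here.

```python
import errno
import os

# Hypothetical mount point and a content SWHID taken from the CLI example.
path = "swhfs/archive/swh:1:cnt:c839dea9e8e6f0528b468214348fee8669b305b2"

# Reading works like on any regular file.
with open(path, "rb") as f:
    print(f.read(32))

# The advertised mode bits may claim writability...
print(oct(os.stat(path).st_mode & 0o777))

# ...but the mount is read-only, so opening for writing fails.
try:
    os.open(path, os.O_WRONLY)
except OSError as err:
    assert err.errno in (errno.EPERM, errno.EROFS)
    print("write access denied, as expected")
```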
### `rev` nodes (commits)

Revision (AKA commit) nodes are represented on the file-system as directories
with the following entries:

- `root`: source tree at the time of the commit, as a symlink pointing into
  `archive/`, to a SWHID of type `dir`
- `parents/` (note the plural): a virtual directory containing entries named
  `1`, `2`, `3`, etc., one for each parent commit. Each of these entries is a
  symlink pointing into `archive/`, to the SWHID file for the given parent
  commit
- `parent` (note the singular): present if and only if the current commit has a
  single parent commit (which is the most common case). When present it is a
  symlink pointing into `archive/`, to the SWHID of the sole parent commit
- `meta.json`: metadata for the current node, as a symlink pointing to the
  relevant `meta/<SWHID>.json` file

### `rel` nodes (releases)

Release nodes are represented on the file-system as directories with the
following entries:

- `target`: target node, as a symlink to `archive/<SWHID>`
- `target_type`: type of the target SWHID, as a 3-letter code
- `root`: present if and only if the release points to something that
  (transitively) resolves to a directory. When present it is a symlink pointing
  into `archive/`, to the SWHID of the given directory
- `meta.json`: metadata for the current node, as a symlink pointing to the
  relevant `meta/<SWHID>.json` file

### `snp` nodes (snapshots)

Snapshot nodes are represented on the file-system as directories with one entry
for each branch in the snapshot. Branch names are mangled by replacing...

```{todo}
decide how to do branch name escaping and describe it here
```

Each entry is a symlink pointing into `archive/`, to the branch target SWHID.

## Caching

SwhFS retrieves both metadata and file contents from the Software Heritage
archive via the network. In order to obtain reasonable performance, several
caches are used to minimize network transfers.

Caches are stored on disk in SQLite DB(s) located under
`$XDG_CACHE_HOME/swh/fuse/`.

```{todo}
- potential improvement: store blobs larger than a threshold on disk as files
  rather than in SQLite, e.g., under `$XDG_CACHE_HOME/swh/fuse/objects/`
```

All caches are persistent (i.e., they survive the restart of the SwhFS process)
and global (i.e., they are shared by concurrent SwhFS processes).

We assume that no cache *invalidation* is necessary, due to intrinsic
properties of the Software Heritage archive, such as integrity verification and
append-only archive changes. To clean the caches one can just remove the
corresponding files from disk.

### Metadata cache

    SWHID → JSON metadata

The metadata cache maps each SWHID to the complete metadata of the referenced
object. This is analogous to what is available in `meta/<SWHID>.json` files
(and is generally used as the data source for returning the content of those
files).

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/metadata.sqlite`

### Blob cache

    cnt SWHID → bytes

The blob cache maps SWHIDs of type `cnt` to the bytes of their archived
content.

In general, each SWHID that has an entry in the blob cache also has a matching
entry in the metadata cache for other blob attributes (e.g., checksums, size,
etc.).

The blob cache entry for a given content object is populated, at the latest,
the first time the object is `open()`-ed. It might be populated earlier on due
to prefetching, e.g., when a directory pointing to the given content is listed
for the first time.

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/blob.sqlite`
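To make the caching layer more concrete, here is a minimal sketch of a
SQLite-backed blob cache keyed by SWHID. Only the on-disk location and the
SWHID → bytes mapping come from the notes above; the table name and schema are
assumptions made for illustration and are not the actual SwhFS layout.

```python
# Illustrative sketch only: table name and schema are assumed, not SwhFS's.
import os
import sqlite3
from pathlib import Path
from typing import Optional

cache_home = Path(os.environ.get("XDG_CACHE_HOME", Path.home() / ".cache"))
db_path = cache_home / "swh/fuse/blob.sqlite"
db_path.parent.mkdir(parents=True, exist_ok=True)

db = sqlite3.connect(str(db_path))
db.execute(
    "CREATE TABLE IF NOT EXISTS blob_cache (swhid TEXT PRIMARY KEY, data BLOB)"
)

def get_blob(swhid: str) -> Optional[bytes]:
    # Cache hit returns the archived bytes; a miss means the caller must fetch
    # the content from the Web API and store it with put_blob().
    row = db.execute(
        "SELECT data FROM blob_cache WHERE swhid = ?", (swhid,)
    ).fetchone()
    return row[0] if row else None

def put_blob(swhid: str, data: bytes) -> None:
    # No invalidation: archived contents are immutable, so entries never change.
    db.execute("INSERT OR IGNORE INTO blob_cache VALUES (?, ?)", (swhid, data))
    db.commit()
```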
### Dentry cache

    dir SWHID → directory entries

The dentry (directory entry) cache maps SWHIDs of type `dir` to the directory
entries they contain. Each entry comes with its name as well as file attributes
(i.e., all that is needed to perform a detailed directory listing).

Additional attributes of each directory entry should be looked up on an
entry-by-entry basis, possibly hitting the metadata cache.

The dentry cache for a given dir is populated, at the latest, when the content
of the directory is listed. More aggressive prefetching might happen. For
instance, when first opening a dir, a recursive listing of it can be retrieved
from the remote backend and used to recursively populate the dentry cache for
all (transitive) sub-directories.

### Parents cache

    rev SWHID → parent SWHIDs

The parents cache maps SWHIDs of type `rev` to the list of their parent
commits.

The parents cache for a given rev is populated, at the latest, when the content
of the revision virtual directory is listed. More aggressive prefetching might
happen. For instance, when first opening a rev virtual directory, a recursive
listing of all its ancestors can be retrieved from the remote backend and used
to recursively populate the parents cache for all ancestors.
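Tying this back to the `rev` node layout described earlier, the following
minimal sketch (not part of SwhFS) shows how the `parents/` virtual directory
and the singular `parent` symlink could be derived from a cached list of parent
SWHIDs. The entry names come from the notes above; the relative symlink targets
are an assumption for illustration only.

```python
# Illustrative sketch only: symlink target paths are assumptions.
from typing import Dict, List

def parents_dir_entries(parent_swhids: List[str]) -> Dict[str, str]:
    # `parents/` contains entries "1", "2", "3", ..., one per parent commit,
    # each pointing back into the top-level archive/ directory.
    return {
        str(i): f"../../archive/{swhid}"
        for i, swhid in enumerate(parent_swhids, start=1)
    }

def parent_symlink(parent_swhids: List[str]) -> Dict[str, str]:
    # The singular `parent` symlink exists if and only if there is exactly one
    # parent commit (the most common case).
    if len(parent_swhids) == 1:
        return {"parent": f"../archive/{parent_swhids[0]}"}
    return {}
```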
diff --git a/swh/fuse/cli.py b/swh/fuse/cli.py
index b14afcf..326d4ba 100644
--- a/swh/fuse/cli.py
+++ b/swh/fuse/cli.py
@@ -1,198 +1,198 @@
# Copyright (C) 2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

# WARNING: do not import unnecessary things here to keep cli startup time under
# control

import os
from pathlib import Path
from typing import Any, Dict

import click

from swh.core.cli import CONTEXT_SETTINGS
from swh.core.cli import swh as swh_cli_group
from swh.model.cli import SWHIDParamType

# All generic config code should reside in swh.core.config
DEFAULT_CONFIG_PATH = os.environ.get(
    "SWH_CONFIG_FILE", os.path.join(click.get_app_dir("swh"), "global.yml")
)

CACHE_HOME_DIR: Path = (
    Path(os.environ["XDG_CACHE_HOME"])
    if "XDG_CACHE_HOME" in os.environ
    else Path.home() / ".cache"
)

DEFAULT_CONFIG: Dict[str, Any] = {
    "cache": {
        "metadata": {"path": CACHE_HOME_DIR / "swh/fuse/metadata.sqlite"},
        "blob": {"path": CACHE_HOME_DIR / "swh/fuse/blob.sqlite"},
    },
    "web-api": {
        "url": "https://archive.softwareheritage.org/api/1",
        "auth-token": None,
    },
}


-@swh_cli_group.group(name="fuse", context_settings=CONTEXT_SETTINGS)
+@swh_cli_group.group(name="fs", context_settings=CONTEXT_SETTINGS)
@click.option(
    "-C",
    "--config-file",
    default=None,
    type=click.Path(exists=True, dir_okay=False, path_type=str),
    help=f"Configuration file (default: {DEFAULT_CONFIG_PATH})",
)
@click.pass_context
def fuse(ctx, config_file):
    """Software Heritage virtual file system"""

    import logging
    import pprint

    from swh.core import config

    if not config_file:
        config_file = DEFAULT_CONFIG_PATH

    try:
        logging.info(f"Loading configuration from: {config_file}")
        conf = config.read_raw_config(config.config_basepath(config_file))
        if not conf:
            raise ValueError(f"Cannot parse configuration file: {config_file}")

        if config_file == DEFAULT_CONFIG_PATH:
            try:
                conf = conf["swh"]["fuse"]
            except KeyError:
                pass

        # recursive merge not done by config.read
        conf = config.merge_configs(DEFAULT_CONFIG, conf)
    except Exception as err:
        logging.warning(
            f"Using default configuration (cannot load custom one: {err})"
        )
        conf = DEFAULT_CONFIG

    logging.info(f"Read configuration: \n{pprint.pformat(conf)}")
    ctx.ensure_object(dict)
    ctx.obj["config"] = conf


@fuse.command(name="mount")
@click.argument(
    "path",
    required=True,
    metavar="PATH",
    type=click.Path(exists=True, dir_okay=True, file_okay=False),
)
@click.argument("swhids", nargs=-1, metavar="[SWHID]...", type=SWHIDParamType())
@click.option(
    "-f/-d",
    "--foreground/--daemon",
    default=False,
    help="whether to run FUSE attached to the console (foreground) "
    "or daemonized in the background (default: daemon)",
)
@click.pass_context
def mount(ctx, swhids, path, foreground):
    """Mount the Software Heritage virtual file system at PATH.

    If specified, objects referenced by the given SWHIDs will be prefetched and used
    to populate the virtual file system (VFS). Otherwise the VFS will be populated
    on-demand, when accessing its content.

    \b
    Example:

    \b
    $ mkdir swhfs
-    $ swh fuse mount swhfs/
+    $ swh fs mount swhfs/
    $ grep printf swhfs/archive/swh:1:cnt:c839dea9e8e6f0528b468214348fee8669b305b2
    printf("Hello, World!");
    $
    """

    import asyncio
    from contextlib import ExitStack
    import logging
    import logging.config  # needed for logging.config.dictConfig below

    from daemon import DaemonContext

    from swh.fuse import fuse

    # TODO: set default logging settings when --log-config is not passed
    # DEFAULT_LOG_PATH = Path(".local/swh/fuse/mount.log")
    with ExitStack() as stack:
        if not foreground:
            # TODO: temporary fix until swh.core has the proper logging utilities
            # Disable logging config before daemonizing, and reset it once
            # daemonized to be sure to not close file handlers
            logging.shutdown()
            # Stay in the current working directory when spawning daemon
            cwd = os.getcwd()
            stack.enter_context(DaemonContext(working_directory=cwd))
            logging.config.dictConfig(
                {
                    "version": 1,
                    "handlers": {
                        "syslog": {
                            "class": "logging.handlers.SysLogHandler",
                            "address": "/dev/log",
                        },
                    },
                    "root": {"level": ctx.obj["log_level"], "handlers": ["syslog"]},
                }
            )

        conf = ctx.obj["config"]
        asyncio.run(fuse.main(swhids, path, conf))


@fuse.command()
@click.argument(
    "path",
    required=True,
    metavar="PATH",
    type=click.Path(exists=True, dir_okay=True, file_okay=False),
)
@click.pass_context
def umount(ctx, path):
    """Unmount a mounted virtual file system.

    Note: this is equivalent to ``fusermount -u PATH``, which can be used to
    unmount any FUSE-based virtual file system. See ``man fusermount3``.
    """

    import logging
    import subprocess

    try:
        subprocess.run(["fusermount", "-u", path], check=True)
    except subprocess.CalledProcessError as err:
        logging.error(
            f"cannot unmount virtual file system: "
            f"\"{' '.join(err.cmd)}\" returned exit status {err.returncode}"
        )
        ctx.exit(1)


@fuse.command()
@click.pass_context
def clean(ctx):
    """Clean on-disk cache(s)."""

    def rm_cache(conf, cache_name):
        try:
            conf["cache"][cache_name]["path"].unlink(missing_ok=True)
        except KeyError:
            pass

    conf = ctx.obj["config"]
    for cache_name in ["blob", "metadata"]:
        rm_cache(conf, cache_name)