diff --git a/docs/design.md b/docs/design.md
index 2b1e675..8998b66 100644
--- a/docs/design.md
+++ b/docs/design.md
@@ -1,236 +1,244 @@
# Software Heritage Filesystem (SwhFS) --- Design notes

The [Software Heritage](https://www.softwareheritage.org/) {ref}`data model ` is a
[Directed Acyclic Graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph) (DAG) with nodes of
different types that correspond to source code artifacts such as directories, commits, etc. Using
this [FUSE](https://en.wikipedia.org/wiki/Filesystem_in_Userspace) module (*SwhFS* for short) you
can locally mount, and then navigate as a filesystem, parts of the archive identified by
{ref}`Software Heritage identifiers ` (SWHIDs).

To retrieve information about the source code artifacts, SwhFS interacts over the network with the
Software Heritage archive via its {ref}`Web API `.

## Architecture

SwhFS in context ([C4](https://en.wikipedia.org/wiki/C4_model) context diagram):

```{image} images/arch-context.svg
:align: center
```

Main components of SwhFS (C4 container diagram):

```{image} images/arch-container.svg
:align: center
```

## Command-line interface

    $ swh fs mount <DIR> [SWHID]...

will mount the Software Heritage archive at the local `<DIR>`, the *SwhFS mount point*. From there,
the user will be able to lazily load and navigate the archive using SWHIDs as entry points.

If one or more SWHIDs are also specified, the corresponding objects will be pre-fetched from the
archive at mount-time and available at `<DIR>/archive/<SWHID>`.

For more details see the {ref}`CLI documentation `.

## Mount point

The SwhFS mount point contains:

- `archive/`: virtual directory that allows mounting any artifact on the fly using its SWHID as
  name. The associated metadata of the artifact from the Software Heritage Web API can also be
  accessed through the `SWHID.json` file (in case of pagination, the JSON file will contain a
  complete version with all pages merged together). Note: the archive directory cannot be listed
  with `ls`, but entries in it can be accessed (e.g., using `cat` or `cd`).

- `origin/`: initially empty, this directory is lazily populated with one entry per accessed
  origin URL, using the encoded URL as entry name. The URL encoding is done using the
  percent-encoding mechanism described in [RFC 3986](https://tools.ietf.org/html/rfc3986.html).

+- `cache/`: on-disk representation of locally cached objects and metadata. Via
+  this directory you can browse cached data and selectively remove it from the
+  cache, freeing disk space. (See `swh fs clean` in the {ref}`CLI ` to
+  completely empty the cache). The directory is populated with symlinks to: all
+  artifacts, identified by their SWHIDs and sharded by the first two characters
+  of their object id, the metadata identified by a `SWHID.json` entry, and the
+  `origin/` directory.
+
## File system representation

SWHIDs are represented differently on the file-system depending on the associated node types in
the Software Heritage graph. Details are given below, for each node type.

### `cnt` nodes (blobs)

Content leaves (AKA blobs) are represented on disk as regular files, containing the corresponding
bytes, as archived.

Note that permissions are associated with blobs only in the context of directories. Hence, when
accessing blobs from the top-level `archive/` directory, the permissions of the `archive/SWHID`
file will be arbitrary and not meaningful (e.g., `0644`).
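For illustration, the following sketch (not part of the design itself) shows how an archived blob
could be read through a mounted SwhFS instance; the mount point path and the SWHID are assumptions
chosen for the example:

```python
# Illustrative sketch: read an archived blob through the archive/ virtual directory.
# Assumes SwhFS is already mounted, e.g. with `swh fs mount /tmp/swhfs`.
from pathlib import Path

mntdir = Path("/tmp/swhfs")  # hypothetical mount point
# Example `cnt` SWHID; any valid content SWHID is looked up on the fly the same way.
blob = mntdir / "archive" / "swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2"
print(blob.read_bytes()[:80])  # first bytes of the archived file content
```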
### `dir` nodes (directories)

Directory nodes are represented as directories on the file-system, containing one entry for each
entry of the archived directory. Entry names and other metadata, including permissions, will
correspond to the archived entry metadata.

Note that SwhFS is mounted read-only, no matter what the permissions say. So it is possible that,
in the context of a directory, a file is presented as writable, whereas actually writing to it
will fail with `EPERM`.

### `rev` nodes (commits)

Revision (AKA commit) nodes are represented on the file-system as directories with the following
entries:

- `root`: source tree at the time of the commit, as a symlink pointing into `archive/`, to a SWHID
  of type `dir`
- `parents/` (note the plural): a virtual directory containing entries named `1`, `2`, `3`, etc.,
  one for each parent commit. Each of these entries is a symlink pointing into `archive/`, to the
  SWHID file for the given parent commit
- `parent` (note the singular): present if and only if the current commit has at least one parent
  commit (which is the most common case). When present it is a symlink pointing into `parents/1/`
- `history`: a virtual directory listing all its revision ancestors, sorted in reverse topological
  order. The history can be listed through `by-date/`, `by-hash/` or `by-page/`, each with its own
  sharding policy.
- `meta.json`: metadata for the current node, as a symlink pointing to the relevant
  `archive/<SWHID>.json` file

### `rel` nodes (releases)

Release nodes are represented on the file-system as directories with the following entries:

- `target`: target node, as a symlink to `archive/<SWHID>`
- `target_type`: regular file containing the type of the target SWHID
- `root`: present if and only if the release points to something that (transitively) resolves to a
  directory. When present it is a symlink pointing into `archive/` to the SWHID of the given
  directory
- `meta.json`: metadata for the current node, as a symlink pointing to the relevant
  `archive/<SWHID>.json` file

### `snp` nodes (snapshots)

Snapshot nodes are represented on the file-system as recursive directories following the branch
names structure. For example, a branch named ``refs/tags/v1.0`` will be represented as a ``refs``
directory containing a ``tags`` directory containing a ``v1.0`` symlink pointing to the branch
target SWHID.

### `ori` nodes (origins)

Origin nodes are represented on the file-system as directories with one entry for each origin
visit. The visit directories are named after the visit date (`YYYY-MM-DD`; if multiple visits
occur on the same day, only the first one is kept). Each visit directory contains a `meta.json`
with associated metadata for the origin node, and potentially a `snapshot` symlink pointing to the
visit's snapshot node.

## Caching

SwhFS retrieves both metadata and file contents from the Software Heritage archive via the
network. In order to obtain reasonable performance several caches are used to minimize network
transfer.

Caches are stored on disk in SQLite DB(s) located under `$XDG_CACHE_HOME/swh/fuse/`.

```{todo}
- potential improvement: store blobs larger than a threshold on disk as files rather than in
  SQLite, e.g., under `$XDG_CACHE_HOME/swh/fuse/objects/`
```

All caches are persistent (i.e., they survive the restart of the SwhFS process) and global (i.e.,
they are shared by concurrent SwhFS processes).

We assume that no cache *invalidation* is necessary, due to intrinsic properties of the Software
Heritage archive, such as integrity verification and append-only archive changes. To clean the
caches one can just remove the corresponding files from disk.
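As a purely illustrative example (not part of SwhFS itself), the persistent caches can be
inspected with Python's standard `sqlite3` module; the sketch below assumes the default cache
location and the `metadata_cache` table defined in `swh/fuse/cache.py`:

```python
# Illustrative sketch: list a few cached SWHIDs from the persistent metadata cache.
# Assumes SwhFS has already been used, so that the cache database exists.
import os
import sqlite3
from pathlib import Path

cache_home = Path(os.environ.get("XDG_CACHE_HOME", Path.home() / ".cache"))
db_path = cache_home / "swh" / "fuse" / "metadata.sqlite"

with sqlite3.connect(db_path) as conn:
    for (swhid,) in conn.execute("select swhid from metadata_cache limit 10"):
        print(swhid)
```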
### Metadata cache

    Artifact id → JSON metadata

The metadata cache maps each artifact to the complete metadata of the referenced object. This is
analogous to what is available in the `archive/<SWHID>.json` files (and is generally used as data
source for returning the content of those files). Artifacts are identified using their SWHIDs, or
in the case of origin visits, using their URLs.

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/metadata.sqlite`

### Blob cache

    cnt SWHID → bytes

The blob cache maps SWHIDs of type `cnt` to the bytes of their archived content.

In general, each SWHID that has an entry in the blob cache also has a matching entry in the
metadata cache for other blob attributes (e.g., checksums, size, etc.).

The blob cache entry for a given content object is populated, at the latest, the first time the
object is `open()`-d. It might be populated earlier on due to prefetching, e.g., when a directory
pointing to the given content is listed for the first time.

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/blob.sqlite`

### History cache

    rev SWHID → ancestor SWHIDs

The history cache maps SWHIDs of type `rev` to a list of `rev` SWHIDs corresponding to all its
revision ancestors, sorted in reverse topological order. Like the parents cache, the history cache
is lazily populated and can be prefetched. To efficiently store the ancestor lists, the history
cache represents ancestors as graph edges (a pair of two SWHID nodes), meaning the history cache
is shared amongst all revisions' parents.

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/metadata.sqlite`

### Direntry cache

    dir inode → directory entries

The direntry cache maps inodes representing directories to the entries they contain. Each entry
comes with its name as well as file attributes (i.e., all that is needed to perform a detailed
directory listing). Additional attributes of each directory entry should be looked up on an
entry-by-entry basis, possibly hitting the metadata cache.

The direntry cache for a given dir is populated, at the latest, when the content of the directory
is listed. More aggressive prefetching might happen. For instance, when first opening a dir a
recursive listing of it can be retrieved from the remote backend and used to recursively populate
the direntry cache for all (transitive) sub-directories.

Cache location: in-memory.

diff --git a/swh/fuse/cache.py b/swh/fuse/cache.py
index 42b7dd5..cf56d2f 100644
--- a/swh/fuse/cache.py
+++ b/swh/fuse/cache.py
@@ -1,380 +1,379 @@
# Copyright (C) 2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

from abc import ABC
from collections import OrderedDict
from dataclasses import dataclass, field
import json
import logging
from pathlib import Path
import re
import sys
from typing import Any, AsyncGenerator, Dict, List, Optional, Tuple

import aiosqlite
import dateutil.parser
from psutil import virtual_memory

from swh.fuse.fs.artifact import RevisionHistoryShardByDate
from swh.fuse.fs.entry import FuseDirEntry, FuseEntry
-from swh.fuse.fs.mountpoint import ArchiveDir, OriginDir
+from swh.fuse.fs.mountpoint import CacheDir, OriginDir
from swh.model.exceptions import ValidationError
from swh.model.identifiers import REVISION, SWHID, parse_swhid
from swh.web.client.client import ORIGIN_VISIT, typify_json


class FuseCache:
    """SwhFS retrieves both metadata and file contents from the Software Heritage archive via the network.
In order to obtain reasonable performances several caches are used to minimize network transfer. Caches are stored on disk in SQLite databases located at `$XDG_CACHE_HOME/swh/fuse/`. All caches are persistent (i.e., they survive the restart of the SwhFS process) and global (i.e., they are shared by concurrent SwhFS processes). We assume that no cache *invalidation* is necessary, due to intrinsic properties of the Software Heritage archive, such as integrity verification and append-only archive changes. To clean the caches one can just remove the corresponding files from disk. """ def __init__(self, cache_conf: Dict[str, Any]): self.cache_conf = cache_conf async def __aenter__(self): # History and raw metadata share the same SQLite db self.metadata = MetadataCache(self.cache_conf["metadata"]) self.history = HistoryCache(self.cache_conf["metadata"]) self.blob = BlobCache(self.cache_conf["blob"]) self.direntry = DirEntryCache(self.cache_conf["direntry"]) await self.metadata.__aenter__() await self.blob.__aenter__() await self.history.__aenter__() return self async def __aexit__(self, type=None, val=None, tb=None) -> None: await self.metadata.__aexit__() await self.blob.__aexit__() await self.history.__aexit__() async def get_cached_swhids(self) -> AsyncGenerator[SWHID, None]: """ Return a list of all previously cached SWHID """ # Use the metadata db since it should always contain all accessed SWHIDs metadata_cursor = await self.metadata.conn.execute( "select swhid from metadata_cache" ) swhids = await metadata_cursor.fetchall() for raw_swhid in swhids: yield parse_swhid(raw_swhid[0]) async def get_cached_visits(self) -> AsyncGenerator[str, None]: """ Return a list of all previously cached visit URL """ cursor = await self.metadata.conn.execute("select url from visits_cache") urls = await cursor.fetchall() for raw_url in urls: yield raw_url[0] class AbstractCache(ABC): """ Abstract cache implementation to share common behavior between cache types (such as: YAML config parsing, SQLite context manager) """ def __init__(self, conf: Dict[str, Any]): self.conf = conf async def __aenter__(self): # In-memory (thus temporary) caching is useful for testing purposes if self.conf.get("in-memory", False): path = "file::memory:?cache=shared" uri = True else: path = Path(self.conf["path"]) path.parent.mkdir(parents=True, exist_ok=True) uri = False self.conn = await aiosqlite.connect(path, uri=uri) return self async def __aexit__(self, type=None, val=None, tb=None) -> None: await self.conn.close() class MetadataCache(AbstractCache): """ The metadata cache map each artifact to the complete metadata of the referenced object. This is analogous to what is available in `archive/.json` file (and generally used as data source for returning the content of those files). Artifacts are identified using their SWHIDs, or in the case of origin visits, using their URLs. 
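For revision objects, an additional date string derived from the commit date is stored alongside the metadata; it is used to shard the `history/by-date/` listings (see `set()` below) and is left empty for other object types.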
""" DB_SCHEMA = """ create table if not exists metadata_cache ( swhid text, metadata blob, date text ); create index if not exists idx_metadata on metadata_cache(swhid); create table if not exists visits_cache ( url text, metadata blob ); create index if not exists idx_visits on visits_cache(url); """ async def __aenter__(self): await super().__aenter__() await self.conn.executescript(self.DB_SCHEMA) await self.conn.commit() return self async def get(self, swhid: SWHID, typify: bool = True) -> Any: cursor = await self.conn.execute( "select metadata from metadata_cache where swhid=?", (str(swhid),) ) cache = await cursor.fetchone() if cache: metadata = json.loads(cache[0]) return typify_json(metadata, swhid.object_type) if typify else metadata else: return None async def get_visits(self, url_encoded: str) -> Optional[List[Dict[str, Any]]]: cursor = await self.conn.execute( "select metadata from visits_cache where url=?", (url_encoded,) ) cache = await cursor.fetchone() if cache: visits = json.loads(cache[0]) visits_typed = [typify_json(v, ORIGIN_VISIT) for v in visits] return visits_typed else: return None async def set(self, swhid: SWHID, metadata: Any) -> None: # Fill in the date column for revisions (used as cache for history/by-date/) swhid_date = "" if swhid.object_type == REVISION: date = dateutil.parser.parse(metadata["date"]) swhid_date = RevisionHistoryShardByDate.DATE_FMT.format( year=date.year, month=date.month, day=date.day ) await self.conn.execute( "insert into metadata_cache values (?, ?, ?)", (str(swhid), json.dumps(metadata), swhid_date), ) await self.conn.commit() async def set_visits(self, url_encoded: str, visits: List[Dict[str, Any]]) -> None: await self.conn.execute( "insert into visits_cache values (?, ?)", (url_encoded, json.dumps(visits)), ) await self.conn.commit() class BlobCache(AbstractCache): """ The blob cache map SWHIDs of type `cnt` to the bytes of their archived content. The blob cache entry for a given content object is populated, at the latest, the first time the object is `read()`-d. It might be populated earlier on due to prefetching, e.g., when a directory pointing to the given content is listed for the first time. """ DB_SCHEMA = """ create table if not exists blob_cache ( swhid text, blob blob ); create index if not exists idx_blob on blob_cache(swhid); """ async def __aenter__(self): await super().__aenter__() await self.conn.executescript(self.DB_SCHEMA) await self.conn.commit() return self async def get(self, swhid: SWHID) -> Optional[bytes]: cursor = await self.conn.execute( "select blob from blob_cache where swhid=?", (str(swhid),) ) cache = await cursor.fetchone() if cache: blob = cache[0] return blob else: return None async def set(self, swhid: SWHID, blob: bytes) -> None: await self.conn.execute( "insert into blob_cache values (?, ?)", (str(swhid), blob) ) await self.conn.commit() class HistoryCache(AbstractCache): """ The history cache map SWHIDs of type `rev` to a list of `rev` SWHIDs corresponding to all its revision ancestors, sorted in reverse topological order. As the parents cache, the history cache is lazily populated and can be prefetched. To efficiently store the ancestor lists, the history cache represents ancestors as graph edges (a pair of two SWHID nodes), meaning the history cache is shared amongst all revisions parents. 
""" DB_SCHEMA = """ create table if not exists history_graph ( src text not null, dst text not null, unique(src, dst) ); create index if not exists idx_history on history_graph(src); """ async def __aenter__(self): await super().__aenter__() await self.conn.executescript(self.DB_SCHEMA) await self.conn.commit() return self HISTORY_REC_QUERY = """ with recursive dfs(node) AS ( values(?) union select history_graph.dst from history_graph join dfs on history_graph.src = dfs.node ) -- Do not keep the root node since it is not an ancestor select * from dfs limit -1 offset 1 """ async def get(self, swhid: SWHID) -> Optional[List[SWHID]]: cursor = await self.conn.execute(self.HISTORY_REC_QUERY, (str(swhid),),) cache = await cursor.fetchall() if not cache: return None history = [] for row in cache: parent = row[0] try: history.append(parse_swhid(parent)) except ValidationError: logging.warning("Cannot parse object from history cache: %s", parent) return history async def get_with_date_prefix( self, swhid: SWHID, date_prefix: str ) -> List[Tuple[SWHID, str]]: cursor = await self.conn.execute( f""" select swhid, date from ( {self.HISTORY_REC_QUERY} ) as history join metadata_cache on history.node = metadata_cache.swhid where metadata_cache.date like '{date_prefix}%' """, (str(swhid),), ) cache = await cursor.fetchall() if not cache: return [] history = [] for row in cache: parent, date = row[0], row[1] try: history.append((parse_swhid(parent), date)) except ValidationError: logging.warning("Cannot parse object from history cache: %s", parent) return history async def set(self, history: str) -> None: history = history.strip() if history: edges = [edge.split(" ") for edge in history.split("\n")] await self.conn.executemany( "insert or ignore into history_graph values (?, ?)", edges ) await self.conn.commit() class DirEntryCache: """ The direntry cache map inode representing directories to the entries they contain. Each entry comes with its name as well as file attributes (i.e., all its needed to perform a detailed directory listing). Additional attributes of each directory entry should be looked up on a entry by entry basis, possibly hitting other caches. The direntry cache for a given dir is populated, at the latest, when the content of the directory is listed. More aggressive prefetching might happen. For instance, when first opening a dir a recursive listing of it can be retrieved from the remote backend and used to recursively populate the direntry cache for all (transitive) sub-directories. 
""" @dataclass class LRU(OrderedDict): max_ram: int used_ram: int = field(init=False, default=0) def sizeof(self, value: Any) -> int: # Rough size estimate in bytes for a list of entries return len(value) * 1000 def __getitem__(self, key: Any) -> Any: value = super().__getitem__(key) self.move_to_end(key) return value def __setitem__(self, key: Any, value: Any) -> None: if key in self: self.move_to_end(key) else: self.used_ram += self.sizeof(value) super().__setitem__(key, value) while self.used_ram > self.max_ram and self: oldest = next(iter(self)) self.used_ram -= self.sizeof(oldest) del self[oldest] def __init__(self, conf: Dict[str, Any]): m = re.match(r"(\d+)\s*(.+)\s*", conf["maxram"]) if not m: logging.error("Cannot parse direntry maxram config: %s", conf["maxram"]) sys.exit(1) num = float(m.group(1)) unit = m.group(2).upper() if unit == "%": max_ram = int(num * virtual_memory().available / 100) else: units = {"B": 1, "KB": 10 ** 3, "MB": 10 ** 6, "GB": 10 ** 9} max_ram = int(float(num) * units[unit]) self.lru_cache = self.LRU(max_ram) def get(self, direntry: FuseDirEntry) -> Optional[List[FuseEntry]]: return self.lru_cache.get(direntry.inode, None) def set(self, direntry: FuseDirEntry, entries: List[FuseEntry]) -> None: - if isinstance(direntry, (ArchiveDir, OriginDir)): - # The `archive/`, and `origin/` are populated on the fly so we - # should never cache them + if isinstance(direntry, (CacheDir, OriginDir)): + # The `cache/` and `origin/` directories are populated on the fly pass elif ( isinstance(direntry, RevisionHistoryShardByDate) and not direntry.is_status_done ): # The `by-date/' directory is populated in parallel so only cache it # once it has finished fetching all data from the API pass else: self.lru_cache[direntry.inode] = entries diff --git a/swh/fuse/fs/mountpoint.py b/swh/fuse/fs/mountpoint.py index 97a4d07..fa9d277 100644 --- a/swh/fuse/fs/mountpoint.py +++ b/swh/fuse/fs/mountpoint.py @@ -1,127 +1,195 @@ # Copyright (C) 2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information from dataclasses import dataclass, field import json +from pathlib import Path import re from typing import AsyncIterator from swh.fuse.fs.artifact import OBJTYPE_GETTERS, SWHID_REGEXP, Origin -from swh.fuse.fs.entry import EntryMode, FuseDirEntry, FuseEntry, FuseFileEntry +from swh.fuse.fs.entry import ( + EntryMode, + FuseDirEntry, + FuseEntry, + FuseFileEntry, + FuseSymlinkEntry, +) from swh.model.exceptions import ValidationError from swh.model.identifiers import CONTENT, SWHID, parse_swhid +JSON_SUFFIX = ".json" + @dataclass class Root(FuseDirEntry): """ The FUSE mountpoint, consisting of the archive/ and origin/ directories """ name: str = field(init=False, default=None) mode: int = field(init=False, default=int(EntryMode.RDONLY_DIR)) depth: int = field(init=False, default=1) async def compute_entries(self) -> AsyncIterator[FuseEntry]: yield self.create_child(ArchiveDir) yield self.create_child(OriginDir) + yield self.create_child(CacheDir) @dataclass class ArchiveDir(FuseDirEntry): """ The `archive/` virtual directory allows to mount any artifact on the fly using its SWHID as name. The associated metadata of the artifact from the Software Heritage Web API can also be accessed through the `SWHID.json` file (in case of pagination, the JSON file will contain a complete version with all pages merged together). 
Note: the archive directory cannot be listed with ls, but entries in it can be accessed (e.g., using cat or cd). """ name: str = field(init=False, default="archive") mode: int = field(init=False, default=int(EntryMode.RDONLY_DIR)) ENTRIES_REGEXP = re.compile(r"^(" + SWHID_REGEXP + ")(.json)?$") - JSON_SUFFIX = ".json" async def compute_entries(self) -> AsyncIterator[FuseEntry]: return yield async def lookup(self, name: str) -> FuseEntry: # On the fly mounting of a new artifact try: - if name.endswith(self.JSON_SUFFIX): - swhid = parse_swhid(name[: -len(self.JSON_SUFFIX)]) + if name.endswith(JSON_SUFFIX): + swhid = parse_swhid(name[: -len(JSON_SUFFIX)]) return self.create_child( MetaEntry, - name=f"{swhid}{self.JSON_SUFFIX}", + name=f"{swhid}{JSON_SUFFIX}", mode=int(EntryMode.RDONLY_FILE), swhid=swhid, ) else: swhid = parse_swhid(name) await self.fuse.get_metadata(swhid) return self.create_child( OBJTYPE_GETTERS[swhid.object_type], name=str(swhid), mode=int( EntryMode.RDONLY_FILE if swhid.object_type == CONTENT else EntryMode.RDONLY_DIR ), swhid=swhid, ) except ValidationError: return None @dataclass class MetaEntry(FuseFileEntry): """ An entry for a `archive/.json` file, containing all the SWHID's metadata from the Software Heritage archive. """ swhid: SWHID async def get_content(self) -> bytes: # Make sure the metadata is in cache await self.fuse.get_metadata(self.swhid) # Retrieve raw JSON metadata from cache (un-typified) metadata = await self.fuse.cache.metadata.get(self.swhid, typify=False) json_str = json.dumps(metadata, indent=self.fuse.conf["json-indent"]) return (json_str + "\n").encode() async def size(self) -> int: return len(await self.get_content()) @dataclass class OriginDir(FuseDirEntry): """ The origin/ directory is lazily populated with one entry per accessed origin URL (mangled to create a valid UNIX filename). The URL encoding is done using the percent-encoding mechanism described in RFC 3986. """ name: str = field(init=False, default="origin") mode: int = field(init=False, default=int(EntryMode.RDONLY_DIR)) ENTRIES_REGEXP = re.compile(r"^.*%3A.*$") # %3A is the encoded version of ':' def create_child(self, url_encoded: str) -> FuseEntry: return super().create_child( Origin, name=url_encoded, mode=int(EntryMode.RDONLY_DIR), ) async def compute_entries(self) -> AsyncIterator[FuseEntry]: async for url in self.fuse.cache.get_cached_visits(): yield self.create_child(url) async def lookup(self, name: str) -> FuseEntry: entry = await super().lookup(name) if entry: return entry # On the fly mounting of new origin url try: url_encoded = name await self.fuse.get_visits(url_encoded) return self.create_child(url_encoded) except ValueError: return None + + +@dataclass +class CacheDir(FuseDirEntry): + """ The cache/ directory is an on-disk representation of locally cached + objects and metadata. Via this directory you can browse cached data and + selectively remove them from the cache, freeing disk space. (See `swh fs + clean` in the {ref}`CLI ` to completely empty the cache). The + directory is populated with symlinks to: all artifacts, identified by their + SWHIDs and sharded by the first two character of their object id, the + metadata identified by a `SWHID.json` entry, and the `origin/` directory. 
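+ For example, a cached artifact whose object id starts with `ca` is exposed
+ under `cache/ca/`, both as a symlink to its `archive/` entry and as a
+ `SWHID.json` symlink to its metadata (the prefix `ca` is only an example).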
+ """ + + name: str = field(init=False, default="cache") + mode: int = field(init=False, default=int(EntryMode.RDONLY_DIR)) + + ENTRIES_REGEXP = re.compile(r"^([a-f0-9]{2})|(" + OriginDir.name + ")$") + + @dataclass + class ArtifactShardBySwhid(FuseDirEntry): + ENTRIES_REGEXP = re.compile(r"^(" + SWHID_REGEXP + ")$") + + prefix: str = field(default="") + + async def compute_entries(self) -> AsyncIterator[FuseEntry]: + root_path = self.get_relative_root_path() + async for swhid in self.fuse.cache.get_cached_swhids(): + if not swhid.object_id.startswith(self.prefix): + continue + + yield self.create_child( + FuseSymlinkEntry, + name=str(swhid), + target=Path(root_path, f"archive/{swhid}"), + ) + yield self.create_child( + FuseSymlinkEntry, + name=f"{swhid}{JSON_SUFFIX}", + target=Path(root_path, f"archive/{swhid}{JSON_SUFFIX}"), + ) + + async def compute_entries(self) -> AsyncIterator[FuseEntry]: + prefixes = set() + async for swhid in self.fuse.cache.get_cached_swhids(): + prefixes.add(swhid.object_id[:2]) + + for prefix in prefixes: + yield self.create_child( + CacheDir.ArtifactShardBySwhid, + name=prefix, + mode=int(EntryMode.RDONLY_DIR), + prefix=prefix, + ) + + yield self.create_child( + FuseSymlinkEntry, + name=OriginDir.name, + target=Path(self.get_relative_root_path(), OriginDir.name), + ) diff --git a/swh/fuse/tests/test_mountpoint.py b/swh/fuse/tests/test_mountpoint.py index 127a476..4c64014 100644 --- a/swh/fuse/tests/test_mountpoint.py +++ b/swh/fuse/tests/test_mountpoint.py @@ -1,21 +1,30 @@ # Copyright (C) 2020 The Software Heritage developers # See the AUTHORS file at the top-level directory of this distribution # License: GNU General Public License version 3, or any later version # See top-level LICENSE file for more information import os from swh.fuse.tests.data.config import ORIGIN_URL_ENCODED, REGULAR_FILE +from swh.model.identifiers import parse_swhid def test_mountpoint(fuse_mntdir): - assert {"archive", "origin"} <= set(os.listdir(fuse_mntdir)) + assert {"archive", "cache", "origin"} <= set(os.listdir(fuse_mntdir)) def test_on_the_fly_mounting(fuse_mntdir): assert os.listdir(fuse_mntdir / "archive") == [] assert (fuse_mntdir / "archive" / REGULAR_FILE).is_file() assert (fuse_mntdir / "archive" / (REGULAR_FILE + ".json")).is_file() assert os.listdir(fuse_mntdir / "origin") == [] assert (fuse_mntdir / "origin" / ORIGIN_URL_ENCODED).is_dir() + + sharded_dir = parse_swhid(REGULAR_FILE).object_id[:2] + assert os.listdir(fuse_mntdir / "cache") == [sharded_dir, "origin"] + assert os.listdir(fuse_mntdir / "cache" / sharded_dir) == [ + REGULAR_FILE, + REGULAR_FILE + ".json", + ] + assert os.listdir(fuse_mntdir / "cache/origin") == [ORIGIN_URL_ENCODED]