diff --git a/docs/design.md b/docs/design.md
index 8e2189e..e39c26c 100644
--- a/docs/design.md
+++ b/docs/design.md
@@ -1,244 +1,240 @@
# Software Heritage Filesystem (SwhFS) --- Design notes

The [Software Heritage](https://www.softwareheritage.org/) {ref}`data model` is
a [Directed Acyclic Graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph)
(DAG) with nodes of different types that correspond to source code artifacts
such as directories, commits, etc. Using this
[FUSE](https://en.wikipedia.org/wiki/Filesystem_in_Userspace) module (*SwhFS*
for short) you can locally mount, and then navigate as a filesystem, parts of
the archive identified by {ref}`Software Heritage identifiers` (SWHIDs).

To retrieve information about the source code artifacts, SwhFS interacts over
the network with the Software Heritage archive via its {ref}`Web API`.

## Architecture

SwhFS in context ([C4](https://en.wikipedia.org/wiki/C4_model) context diagram):

```{image} images/arch-context.svg
:align: center
```

Main components of SwhFS (C4 container diagram):

```{image} images/arch-container.svg
:align: center
```

## Command-line interface

    $ swh fs mount <DIR> [SWHID]...

will mount the Software Heritage archive at the local `<DIR>`, the *SwhFS mount
point*. From there, the user will be able to lazily load and navigate the
archive, using SWHIDs as entry points.

If one or more SWHIDs are also specified, the corresponding objects will be
prefetched from the archive at mount time and available at
`<DIR>/archive/<SWHID>`.

For more details see the {ref}`CLI documentation`.

## Mount point

The SwhFS mount point contains:

- `archive/`: initially empty, this directory is lazily populated with one
  entry per accessed SWHID, having actual SWHIDs as names (possibly sharded
  into `xy/../SWHID` paths to avoid overcrowding `archive/`).

- `meta/`: initially empty, this directory contains one `<SWHID>.json` file for
  each `<SWHID>` entry under `archive/`. The JSON file contains all available
  meta information about the given SWHID, as returned by the Software Heritage
  Web API for that object. Note that, in case of pagination (e.g., snapshot
  objects with many branches), the JSON file will contain a complete version
  with all pages merged together.

- `origin/`: initially empty, this directory is lazily populated with one entry
  per accessed origin URL, having encoded URLs as names. The URL encoding is
  done using the percent-encoding mechanism described in
  [RFC 3986](https://tools.ietf.org/html/rfc3986.html).

## File system representation

SWHIDs are represented differently on the file-system depending on the
associated node types in the Software Heritage graph. Details are given below,
for each node type.

### `cnt` nodes (blobs)

Content leaves (AKA blobs) are represented on disk as regular files, containing
the corresponding bytes, as archived.

Note that permissions are associated with blobs only in the context of
directories. Hence, when accessing blobs from the top-level `archive/`
directory, the permissions of the `archive/SWHID` file will be arbitrary and
not meaningful (e.g., `0644`).

### `dir` nodes (directories)

Directory nodes are represented as directories on the file-system, containing
one entry for each entry of the archived directory. Entry names and other
metadata, including permissions, will correspond to the archived entry
metadata.

Note that SwhFS is mounted read-only, no matter what the permissions say. So it
is possible that, in the context of a directory, a file is presented as
writable, whereas actually writing to it will fail with `EPERM`.
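To make the layout concrete, here is a minimal sketch of navigating a mounted
archive from Python; the mount point path and the SWHID below are hypothetical
placeholders, not actual values:

```python
import json
from pathlib import Path

# Hypothetical mount point and SWHID, for illustration only
MNT = Path("/mnt/swhfs")
SWHID = "swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2"

# Reading archive/<SWHID> lazily fetches the blob from the archive...
blob = (MNT / "archive" / SWHID).read_bytes()

# ...and meta/<SWHID>.json exposes the corresponding Web API metadata
meta = json.loads((MNT / "meta" / f"{SWHID}.json").read_text())
print(len(blob), meta.get("length"))
```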
### `rev` nodes (commits)

Revision (AKA commit) nodes are represented on the file-system as directories
with the following entries:

- `root`: source tree at the time of the commit, as a symlink pointing into
  `archive/`, to a SWHID of type `dir`
- `parents/` (note the plural): a virtual directory containing entries named
  `1`, `2`, `3`, etc., one for each parent commit. Each of these entries is a
  symlink pointing into `archive/`, to the SWHID file for the given parent
  commit
- `parent` (note the singular): present if and only if the current commit has
  at least one parent commit (which is the most common case). When present it
  is a symlink pointing into `parents/1/`
- `history`: a virtual directory listing all its revision ancestors, sorted in
  reverse topological order. The history can be listed through `by-date/`,
  `by-hash/`, or `by-page/`, each with its own sharding policy.
- `meta.json`: metadata for the current node, as a symlink pointing to the
  relevant `meta/<SWHID>.json` file

### `rel` nodes (releases)

Release nodes are represented on the file-system as directories with the
following entries:

- `target`: target node, as a symlink to `archive/<SWHID>`
- `target_type`: regular file containing the type of the target SWHID
- `root`: present if and only if the release points to something that
  (transitively) resolves to a directory. When present it is a symlink pointing
  into `archive/` to the SWHID of the given directory
- `meta.json`: metadata for the current node, as a symlink pointing to the
  relevant `meta/<SWHID>.json` file

### `snp` nodes (snapshots)

-Snapshot nodes are represented on the file-system as directories with one entry
-for each branch in the snapshot.
-
-Branch names are subject to URL encoding, in order to avoid problematic
-characters (e.g., `/` are replaced by `%2F`).
-
-Each entry is a symlink named as the branch name, URL encoded (to avoid
-problematic characters such as `/`, which becomes `%2F`). The symlink target
-points into `archive/` to the SWHID corresponding to the branch target.
+Snapshot nodes are represented on the file-system as recursive directories
+following the branch name structure. For example, a branch named
+``refs/tags/v1.0`` will be represented as a ``refs`` directory containing a
+``tags`` directory containing a ``v1.0`` symlink pointing to the branch
+target SWHID.

### `ori` nodes (origins)

Origin nodes are represented on the file-system as directories with one entry
for each origin visit.

The visit directories are named after the visit date (`YYYY-MM-DD`; if multiple
visits occur on the same day, only the first one is kept). Each visit directory
contains a `meta.json` with associated metadata for the origin node, and
potentially a `snapshot` symlink pointing to the visit's snapshot node.

## Caching

SwhFS retrieves both metadata and file contents from the Software Heritage
archive via the network. In order to obtain reasonable performance, several
caches are used to minimize network transfers.

Caches are stored on disk in SQLite DB(s) located under
`$XDG_CACHE_HOME/swh/fuse/`.

```{todo}
- potential improvement: store blobs larger than a threshold on disk as files
  rather than in SQLite, e.g., under `$XDG_CACHE_HOME/swh/fuse/objects/`
```

All caches are persistent (i.e., they survive the restart of the SwhFS process)
and global (i.e., they are shared by concurrent SwhFS processes).

We assume that no cache *invalidation* is necessary, due to intrinsic
properties of the Software Heritage archive, such as integrity verification and
append-only archive changes.
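For illustration, a minimal sketch of locating these cache databases (assuming
the default XDG fallback of `~/.cache` when `$XDG_CACHE_HOME` is unset):

```python
import os
from pathlib import Path

# Default cache directory, honoring $XDG_CACHE_HOME when set
xdg_cache = Path(os.environ.get("XDG_CACHE_HOME", Path.home() / ".cache"))
fuse_cache = xdg_cache / "swh" / "fuse"

# Each persistent cache is a SQLite file under this directory
for db in sorted(fuse_cache.glob("*.sqlite")):
    print(db.name, db.stat().st_size, "bytes")
```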
To clean the caches one can just remove the corresponding files from disk.

### Metadata cache

    Artifact id → JSON metadata

The metadata cache maps each artifact to the complete metadata of the
referenced object. This is analogous to what is available in
`meta/<SWHID>.json` files (and is generally used as the data source for
returning the content of those files). Artifacts are identified using their
SWHIDs, or in the case of origin visits, using their URLs.

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/metadata.sqlite`

### Blob cache

    cnt SWHID → bytes

The blob cache maps SWHIDs of type `cnt` to the bytes of their archived
content.

In general, each SWHID that has an entry in the blob cache also has a matching
entry in the metadata cache for other blob attributes (e.g., checksums, size,
etc.).

The blob cache entry for a given content object is populated, at the latest,
the first time the object is `open()`-d. It might be populated earlier on due
to prefetching, e.g., when a directory pointing to the given content is listed
for the first time.

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/blob.sqlite`

### History cache

    rev SWHID → ancestor SWHIDs

The history cache maps SWHIDs of type `rev` to a list of `rev` SWHIDs
corresponding to all its revision ancestors, sorted in reverse topological
order. Like the other caches, the history cache is lazily populated and can be
prefetched. To efficiently store the ancestor lists, the history cache
represents ancestors as graph edges (a pair of two SWHID nodes), meaning the
history cache is shared amongst the histories of all revisions.

Cache location on-disk: `$XDG_CACHE_HOME/swh/fuse/metadata.sqlite`

### Direntry cache

    dir inode → directory entries

The direntry cache maps inodes representing directories to the entries they
contain. Each entry comes with its name as well as file attributes (i.e., all
that is needed to perform a detailed directory listing).

Additional attributes of each directory entry should be looked up on an
entry-by-entry basis, possibly hitting the metadata cache.

The direntry cache for a given dir is populated, at the latest, when the
content of the directory is listed. More aggressive prefetching might happen.
For instance, when first opening a dir a recursive listing of it can be
retrieved from the remote backend and used to recursively populate the direntry
cache for all (transitive) sub-directories.

Cache location: in-memory.
diff --git a/swh/fuse/fs/artifact.py b/swh/fuse/fs/artifact.py
index 54f75ac..d7ec94a 100644
--- a/swh/fuse/fs/artifact.py
+++ b/swh/fuse/fs/artifact.py
@@ -1,561 +1,579 @@
# Copyright (C) 2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information

import asyncio
from dataclasses import dataclass, field
import json
import logging
from pathlib import Path
from typing import Any, AsyncIterator, Dict, List, Optional
-import urllib.parse

from swh.fuse.fs.entry import (
    EntryMode,
    FuseDirEntry,
    FuseEntry,
    FuseFileEntry,
    FuseSymlinkEntry,
)
from swh.model.from_disk import DentryPerms
from swh.model.identifiers import CONTENT, DIRECTORY, RELEASE, REVISION, SNAPSHOT, SWHID


@dataclass
class Content(FuseFileEntry):
    """ Software Heritage content artifact.
    Attributes:
        swhid: Software Heritage persistent identifier
        prefetch: optional prefetched metadata used to set entry attributes

    Content leaves (AKA blobs) are represented on disk as regular files,
    containing the corresponding bytes, as archived.

    Note that permissions are associated with blobs only in the context of
    directories. Hence, when accessing blobs from the top-level `archive/`
    directory, the permissions of the `archive/SWHID` file will be arbitrary
    and not meaningful (e.g., `0644`).
    """

    swhid: SWHID
    prefetch: Any = None

    async def get_content(self) -> bytes:
        data = await self.fuse.get_blob(self.swhid)
        if not self.prefetch:
            self.prefetch = {"length": len(data)}
        return data

    async def size(self) -> int:
        if self.prefetch:
            return self.prefetch["length"]
        else:
            return await super().size()


@dataclass
class Directory(FuseDirEntry):
    """ Software Heritage directory artifact.

    Attributes:
        swhid: Software Heritage persistent identifier

    Directory nodes are represented as directories on the file-system,
    containing one entry for each entry of the archived directory. Entry names
    and other metadata, including permissions, will correspond to the archived
    entry metadata.

    Note that the FUSE mount is read-only, no matter what the permissions say.
    So it is possible that, in the context of a directory, a file is presented
    as writable, whereas actually writing to it will fail with `EPERM`.
    """

    swhid: SWHID

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        metadata = await self.fuse.get_metadata(self.swhid)
        for entry in metadata:
            name = entry["name"]
            swhid = entry["target"]
            mode = (
                # Archived permissions for directories are always set to
                # 0o040000 so use a read-only permission instead
                int(EntryMode.RDONLY_DIR)
                if swhid.object_type == DIRECTORY
                else entry["perms"]
            )

            # 1. Regular file
            if swhid.object_type == CONTENT:
                yield self.create_child(
                    Content,
                    name=name,
                    mode=mode,
                    swhid=swhid,
                    # The directory API has extra info we can use to set
                    # attributes without an additional Software Heritage API call
                    prefetch=entry,
                )
            # 2. Regular directory
            elif swhid.object_type == DIRECTORY:
                yield self.create_child(
                    Directory,
                    name=name,
                    mode=mode,
                    swhid=swhid,
                )
            # 3. Symlink
            elif mode == DentryPerms.symlink:
                yield self.create_child(
                    FuseSymlinkEntry,
                    name=name,
                    # Symlink target is stored in the blob content
                    target=await self.fuse.get_blob(swhid),
                )
            # 4. Submodule
            elif swhid.object_type == REVISION:
                # Make sure the revision metadata is fetched and create a
                # symlink to distinguish it from regular directories
                await self.fuse.get_metadata(swhid)
                yield self.create_child(
                    FuseSymlinkEntry,
                    name=name,
                    target=Path(self.get_relative_root_path(), f"archive/{swhid}"),
                )
            else:
                raise ValueError(f"Unknown directory entry type: {swhid.object_type}")


@dataclass
class Revision(FuseDirEntry):
    """ Software Heritage revision artifact.

    Attributes:
        swhid: Software Heritage persistent identifier

    Revision (AKA commit) nodes are represented on the file-system as
    directories with the following entries:

    - `root`: source tree at the time of the commit, as a symlink pointing
      into `archive/`, to a SWHID of type `dir`
    - `parents/` (note the plural): a virtual directory containing entries
      named `1`, `2`, `3`, etc., one for each parent commit. Each of these
      entries is a symlink pointing into `archive/`, to the SWHID file for the
      given parent commit
    - `parent` (note the singular): present if and only if the current commit
      has at least one parent commit (which is the most common case).
      When present it is a symlink pointing into `parents/1/`
    - `history`: a virtual directory listing all its revision ancestors,
      sorted in reverse topological order. The history can be listed through
      `by-date/`, `by-hash/`, or `by-page/`, each with its own sharding policy.
    - `meta.json`: metadata for the current node, as a symlink pointing to the
      relevant `meta/<SWHID>.json` file
    """

    swhid: SWHID

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        metadata = await self.fuse.get_metadata(self.swhid)
        directory = metadata["directory"]
        parents = metadata["parents"]

        root_path = self.get_relative_root_path()

        yield self.create_child(
            FuseSymlinkEntry,
            name="root",
            target=Path(root_path, f"archive/{directory}"),
        )
        yield self.create_child(
            FuseSymlinkEntry,
            name="meta.json",
            target=Path(root_path, f"meta/{self.swhid}.json"),
        )
        yield self.create_child(
            RevisionParents,
            name="parents",
            mode=int(EntryMode.RDONLY_DIR),
            parents=[x["id"] for x in parents],
        )

        if len(parents) >= 1:
            yield self.create_child(
                FuseSymlinkEntry,
                name="parent",
                target="parents/1/",
            )

        yield self.create_child(
            RevisionHistory,
            name="history",
            mode=int(EntryMode.RDONLY_DIR),
            swhid=self.swhid,
        )


@dataclass
class RevisionParents(FuseDirEntry):
    """ Revision virtual `parents/` directory """

    parents: List[SWHID]

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        root_path = self.get_relative_root_path()
        for i, parent in enumerate(self.parents):
            yield self.create_child(
                FuseSymlinkEntry,
                name=str(i + 1),
                target=Path(root_path, f"archive/{parent}"),
            )


@dataclass
class RevisionHistory(FuseDirEntry):
    """ Revision virtual `history/` directory """

    swhid: SWHID

    async def prefill_caches(self) -> None:
        history = await self.fuse.get_history(self.swhid)
        for swhid in history:
            await self.fuse.get_metadata(swhid)

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        # Run it concurrently because of the many API calls necessary
        asyncio.create_task(self.prefill_caches())

        yield self.create_child(
            RevisionHistoryShardByDate,
            name="by-date",
            mode=int(EntryMode.RDONLY_DIR),
            history_swhid=self.swhid,
        )
        yield self.create_child(
            RevisionHistoryShardByHash,
            name="by-hash",
            mode=int(EntryMode.RDONLY_DIR),
            history_swhid=self.swhid,
        )
        yield self.create_child(
            RevisionHistoryShardByPage,
            name="by-page",
            mode=int(EntryMode.RDONLY_DIR),
            history_swhid=self.swhid,
        )


@dataclass
class RevisionHistoryShardByDate(FuseDirEntry):
    """ Revision virtual `history/by-date` sharded directory """

    history_swhid: SWHID
    prefix: str = field(default="")
    is_status_done: bool = field(default=False)

    DATE_FMT = "{year:04d}/{month:02d}/{day:02d}/"

    @dataclass
    class StatusFile(FuseFileEntry):
        """ Temporary file used to indicate loading progress in by-date/ """

        name: str = field(init=False, default=".status")
        mode: int = field(init=False, default=int(EntryMode.RDONLY_FILE))
        done: int
        todo: int

        async def get_content(self) -> bytes:
            fmt = f"Done: {self.done}/{self.todo}\n"
            return fmt.encode()

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        history = await self.fuse.get_history(self.history_swhid)
        # Only check for cached revisions with the appropriate prefix, since
        # fetching all of them with the Web API would take too long
        swhids = await self.fuse.cache.history.get_with_date_prefix(
            self.history_swhid, date_prefix=self.prefix
        )

        depth = self.prefix.count("/")
        root_path = self.get_relative_root_path()
        sharded_dirs = set()

        for (swhid, sharded_name) in swhids:
            if not sharded_name.startswith(self.prefix):
                continue
            if depth == 3:
                yield self.create_child(
                    FuseSymlinkEntry,
                    name=str(swhid),
                    target=Path(root_path, f"archive/{swhid}"),
                )
            # Create sharded directories
            else:
                next_prefix = sharded_name.split("/")[depth]
                if next_prefix not in sharded_dirs:
                    sharded_dirs.add(next_prefix)
                    yield self.create_child(
                        RevisionHistoryShardByDate,
                        name=next_prefix,
                        mode=int(EntryMode.RDONLY_DIR),
                        prefix=f"{self.prefix}{next_prefix}/",
                        history_swhid=self.history_swhid,
                    )

        # TODO: store len(history) somewhere to avoid recompute?
        self.is_status_done = len(swhids) == len(history)
        if not self.is_status_done and depth == 0:
            yield self.create_child(
                RevisionHistoryShardByDate.StatusFile,
                done=len(swhids),
                todo=len(history),
            )


@dataclass
class RevisionHistoryShardByHash(FuseDirEntry):
    """ Revision virtual `history/by-hash` sharded directory """

    history_swhid: SWHID
    prefix: str = field(default="")

    SHARDING_LENGTH = 2

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        history = await self.fuse.get_history(self.history_swhid)

        if self.prefix:
            root_path = self.get_relative_root_path()
            for swhid in history:
                if swhid.object_id.startswith(self.prefix):
                    yield self.create_child(
                        FuseSymlinkEntry,
                        name=str(swhid),
                        target=Path(root_path, f"archive/{swhid}"),
                    )
        # Create sharded directories
        else:
            sharded_dirs = set()
            for swhid in history:
                next_prefix = swhid.object_id[: self.SHARDING_LENGTH]
                if next_prefix not in sharded_dirs:
                    sharded_dirs.add(next_prefix)
                    yield self.create_child(
                        RevisionHistoryShardByHash,
                        name=next_prefix,
                        mode=int(EntryMode.RDONLY_DIR),
                        prefix=next_prefix,
                        history_swhid=self.history_swhid,
                    )


@dataclass
class RevisionHistoryShardByPage(FuseDirEntry):
    """ Revision virtual `history/by-page` sharded directory """

    history_swhid: SWHID
    prefix: Optional[int] = field(default=None)

    PAGE_SIZE = 10_000
    PAGE_FMT = "{page_number:03d}"

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        history = await self.fuse.get_history(self.history_swhid)

        if self.prefix is not None:
            current_page = self.prefix
            root_path = self.get_relative_root_path()
            max_idx = min(len(history), (current_page + 1) * self.PAGE_SIZE)
            for i in range(current_page * self.PAGE_SIZE, max_idx):
                swhid = history[i]
                yield self.create_child(
                    FuseSymlinkEntry,
                    name=str(swhid),
                    target=Path(root_path, f"archive/{swhid}"),
                )
        # Create sharded directories
        else:
            for i in range(0, len(history), self.PAGE_SIZE):
                page_number = i // self.PAGE_SIZE
                yield self.create_child(
                    RevisionHistoryShardByPage,
                    name=self.PAGE_FMT.format(page_number=page_number),
                    mode=int(EntryMode.RDONLY_DIR),
                    history_swhid=self.history_swhid,
                    prefix=page_number,
                )


@dataclass
class Release(FuseDirEntry):
    """ Software Heritage release artifact.

    Attributes:
        swhid: Software Heritage persistent identifier

    Release nodes are represented on the file-system as directories with the
    following entries:

    - `target`: target node, as a symlink to `archive/<SWHID>`
    - `target_type`: regular file containing the type of the target SWHID
    - `root`: present if and only if the release points to something that
      (transitively) resolves to a directory.
      When present it is a symlink pointing into `archive/` to the SWHID of
      the given directory
    - `meta.json`: metadata for the current node, as a symlink pointing to the
      relevant `meta/<SWHID>.json` file
    """

    swhid: SWHID

    async def find_root_directory(self, swhid: SWHID) -> Optional[SWHID]:
        if swhid.object_type == RELEASE:
            metadata = await self.fuse.get_metadata(swhid)
            return await self.find_root_directory(metadata["target"])
        elif swhid.object_type == REVISION:
            metadata = await self.fuse.get_metadata(swhid)
            return metadata["directory"]
        elif swhid.object_type == DIRECTORY:
            return swhid
        else:
            return None

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        metadata = await self.fuse.get_metadata(self.swhid)
        root_path = self.get_relative_root_path()

        yield self.create_child(
            FuseSymlinkEntry,
            name="meta.json",
            target=Path(root_path, f"meta/{self.swhid}.json"),
        )

        target = metadata["target"]
        yield self.create_child(
            FuseSymlinkEntry, name="target", target=Path(root_path, f"archive/{target}")
        )
        yield self.create_child(
            ReleaseType,
            name="target_type",
            mode=int(EntryMode.RDONLY_FILE),
            target_type=target.object_type,
        )

        target_dir = await self.find_root_directory(target)
        if target_dir is not None:
            yield self.create_child(
                FuseSymlinkEntry,
                name="root",
                target=Path(root_path, f"archive/{target_dir}"),
            )


@dataclass
class ReleaseType(FuseFileEntry):
    """ Release type virtual file """

    target_type: str

    async def get_content(self) -> bytes:
        return str.encode(self.target_type + "\n")


@dataclass
class Snapshot(FuseDirEntry):
    """ Software Heritage snapshot artifact.

    Attributes:
        swhid: Software Heritage persistent identifier

-    Snapshot nodes are represented on the file-system as directories with one
-    entry for each branch in the snapshot. Each entry is a symlink pointing into
-    `archive/` to the branch target SWHID. Branch names are URL encoded (hence
-    '/' are replaced with '%2F'). """
+    Snapshot nodes are represented on the file-system as recursive directories
+    following the branch name structure. For example, a branch named
+    ``refs/tags/v1.0`` will be represented as a ``refs`` directory containing a
+    ``tags`` directory containing a ``v1.0`` symlink pointing to the branch
+    target SWHID. """

    swhid: SWHID
+    prefix: str = field(default="")

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        metadata = await self.fuse.get_metadata(self.swhid)
        root_path = self.get_relative_root_path()
+        subdirs = set()
        for branch_name, branch_meta in metadata.items():
-            # Mangle branch name to create a valid UNIX filename
-            name = urllib.parse.quote_plus(branch_name)
+            if not branch_name.startswith(self.prefix):
+                continue
+
+            next_subdirs = branch_name[len(self.prefix) :].split("/")
+            next_prefix = next_subdirs[0]
+
+            if len(next_subdirs) == 1:
+                yield self.create_child(
+                    FuseSymlinkEntry,
+                    name=next_prefix,
+                    target=Path(root_path, f"archive/{branch_meta['target']}"),
+                )
+            else:
+                subdirs.add(next_prefix)
+
+        for subdir in subdirs:
            yield self.create_child(
-                FuseSymlinkEntry,
-                name=name,
-                target=Path(root_path, f"archive/{branch_meta['target']}"),
+                Snapshot,
+                name=subdir,
+                mode=int(EntryMode.RDONLY_DIR),
+                swhid=self.swhid,
+                prefix=f"{self.prefix}{subdir}/",
            )


@dataclass
class Origin(FuseDirEntry):
    """ Software Heritage origin artifact.

    Origin nodes are represented on the file-system as directories with one
    entry for each origin visit.

    The visit directories are named after the visit date (`YYYY-MM-DD`; if
    multiple visits occur on the same day, only the first one is kept).
    Each visit directory contains a `meta.json` with associated metadata for
    the origin node, and potentially a `snapshot` symlink pointing to the
    visit's snapshot node.
    """

    DATE_FMT = "{year:04d}-{month:02d}-{day:02d}"

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        # The origin's name is always its URL (encoded to create a valid UNIX filename)
        visits = await self.fuse.get_visits(self.name)

        seen_date = set()
        for visit in visits:
            date = visit["date"]
            name = self.DATE_FMT.format(year=date.year, month=date.month, day=date.day)

            if name in seen_date:
                logging.debug(
                    "Conflict date on origin: %s, %s", visit["origin"], str(name)
                )
            else:
                seen_date.add(name)
                yield self.create_child(
                    OriginVisit,
                    name=name,
                    mode=int(EntryMode.RDONLY_DIR),
                    meta=visit,
                )


@dataclass
class OriginVisit(FuseDirEntry):
    """ Origin visit virtual directory """

    meta: Dict[str, Any]

    @dataclass
    class MetaFile(FuseFileEntry):
        content: str

        async def get_content(self) -> bytes:
            return str.encode(self.content + "\n")

    async def compute_entries(self) -> AsyncIterator[FuseEntry]:
        snapshot_swhid = self.meta["snapshot"]
        if snapshot_swhid:
            root_path = self.get_relative_root_path()
            yield self.create_child(
                FuseSymlinkEntry,
                name="snapshot",
                target=Path(root_path, f"archive/{snapshot_swhid}"),
            )
        yield self.create_child(
            OriginVisit.MetaFile,
            name="meta.json",
            mode=int(EntryMode.RDONLY_FILE),
            content=json.dumps(
                self.meta,
                indent=self.fuse.conf["json-indent"],
                default=lambda x: str(x),
            ),
        )


OBJTYPE_GETTERS = {
    CONTENT: Content,
    DIRECTORY: Directory,
    REVISION: Revision,
    RELEASE: Release,
    SNAPSHOT: Snapshot,
}
diff --git a/swh/fuse/tests/test_snapshot.py b/swh/fuse/tests/test_snapshot.py
index 411ed0e..039462e 100644
--- a/swh/fuse/tests/test_snapshot.py
+++ b/swh/fuse/tests/test_snapshot.py
@@ -1,22 +1,19 @@
import os
-import urllib.parse
+from pathlib import Path

from swh.fuse.tests.common import get_data_from_web_archive
from swh.fuse.tests.data.config import ROOT_SNP


def test_list_branches(fuse_mntdir):
+    snp_dir = Path(fuse_mntdir / "archive" / ROOT_SNP)
    snp_meta = get_data_from_web_archive(ROOT_SNP)
-    expected = snp_meta.keys()
-    # Mangle branch name to create a valid UNIX filename
-    expected = [urllib.parse.quote_plus(x) for x in expected]
-    actual = os.listdir(fuse_mntdir / "archive" / ROOT_SNP)
-    assert set(actual) == set(expected)
+    for branch_name in snp_meta.keys():
+        assert (snp_dir / branch_name).is_symlink()


def test_access_rev_target(fuse_mntdir):
-    branch_name = urllib.parse.quote_plus("refs/heads/master")
-    dir_path = fuse_mntdir / "archive" / ROOT_SNP / branch_name
+    dir_path = fuse_mntdir / "archive" / ROOT_SNP / "refs/heads/master"
    expected = set(["meta.json", "root", "parent", "parents", "history"])
    actual = set(os.listdir(dir_path))
    assert expected.issubset(actual)
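A further check for the new recursive layout could assert that intermediate
branch path components appear as directories. A sketch, reusing the
`fuse_mntdir` fixture and `ROOT_SNP` constant from the test module above, and
assuming the snapshot contains a `refs/heads/master` branch (as the existing
`test_access_rev_target` already does):

```python
def test_list_nested_branch_dirs(fuse_mntdir):
    # Intermediate branch path components should appear as directories,
    # with the final component being a symlink to the branch target
    snp_dir = Path(fuse_mntdir / "archive" / ROOT_SNP)
    assert (snp_dir / "refs").is_dir()
    assert (snp_dir / "refs" / "heads").is_dir()
    assert (snp_dir / "refs" / "heads" / "master").is_symlink()
```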