
diff --git a/PKG-INFO b/PKG-INFO
index 49e0d80d..20376a4c 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,12 +1,12 @@
Metadata-Version: 2.1
Name: swh.storage
-Version: 0.0.104
+Version: 0.0.105
Summary: Software Heritage storage manager
Home-page: https://forge.softwareheritage.org/diffusion/DSTO/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Description: UNKNOWN
Platform: UNKNOWN
Provides-Extra: listener
Provides-Extra: schemata
diff --git a/docs/archiver-blueprint.rst b/docs/archiver-blueprint.rst
deleted file mode 100644
index 6d76752f..00000000
--- a/docs/archiver-blueprint.rst
+++ /dev/null
@@ -1,250 +0,0 @@
-.. _swh-archiver-blueprint:
-
-Archiver blueprint
-==================
-
-The Software Heritage (SWH) Archiver is responsible for backing up SWH
-objects so as to reduce the risk of losing them.
-
-Currently, the archiver only deals with content objects (i.e., those
-referenced by the content table in the DB and stored in the SWH object
-storage). The database itself is replicated live by other means.
-
-Requirements
-------------
-
-Peer-to-peer topology
-~~~~~~~~~~~~~~~~~~~~~
-
-Every storage involved in the archival process can be used as a source
-or a destination for the archival, depending on the blobs it contains. A
-retention policy specifies the minimum number of copies that are
-required to be "safe".
-
-Although the servers are all equal peers, the coordination of which
-content should be copied, and from where to where, is centralized.
-
-Append-only archival
-~~~~~~~~~~~~~~~~~~~~
-
-The archiver treats involved storages as append-only storages. The
-archiver never deletes any object. If removals are needed, they will be
-dealt with by other means.
-
-Asynchronous archival
-~~~~~~~~~~~~~~~~~~~~~
-
-Periodically (e.g., via cron), the archiver runs, produces a list of
-objects that need to have more copies, and starts copying them over. The
-decision of which storages are chosen as sources and destinations
-is not up to the storages themselves.
-
-Very likely, during any given archival run, other new objects will be
-added to storages; it will be the responsibility of *future* archiver
-runs, and not the current one, to copy new objects over if needed.
-
-Integrity at archival time
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Before archiving objects, the archiver performs suitable integrity
-checks on them. For instance, content objects are verified to ensure
-that they can be decompressed and that their content matches their
-(checksum-based) unique identifiers. Corrupt objects will not be
-archived, and suitable errors reporting the corruption will be
-emitted.
-
-Note that archival-time integrity checks are *not meant to replace
-periodic integrity checks*.
-
-Parallel archival
-~~~~~~~~~~~~~~~~~
-
-Once the list of objects to be archived has been identified, it SHOULD
-be possible to archive objects in parallel w.r.t. one another.
-
-Persistent archival status
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The archiver maintains a mapping between objects and their storage
-locations. Locations are the set {master, slave\_1, ..., slave\_n}.
-
-Each (object, location) pair is also associated with the following information:
-
-- **status**: 4-state: *missing* (copy not present at destination),
- *ongoing* (copy to destination ongoing), *present* (copy present at
- destination), *corrupted* (content detected as corrupted during an
- archival).
-- **mtime**: timestamp of last status change. This is either the
- destination archival time (when status=present), or the timestamp of
- the last archival request (status=ongoing); the timestamp is
- undefined when status=missing.
-
-Architecture
-------------
-
-The archiver is comprised of the following software components:
-
-- archiver director
-- archiver worker
-- archiver copier
-
-Archiver director
-~~~~~~~~~~~~~~~~~
-
-The archiver director is run periodically, e.g., via cron.
-
-Runtime parameters:
-
-- execution periodicity (external)
-- retention policy
-- archival max age
-- archival batch size
-
-At each execution the director:
-
-1. for each object: retrieve its archival status
-2. for each object that has fewer copies than those requested by the
- retention policy:
-3. mark object as needing archival
-4. group objects in need of archival in batches of
- ``archival batch size``
-5. for each batch:
-6. spawn an archive worker on the whole batch (e.g., submitting the
- relevant celery task)
-
-Archiver worker
-~~~~~~~~~~~~~~~
-
-The archiver is executed on demand (e.g., by a celery worker) to archive
-a given set of objects.
-
-Runtime parameters:
-
-- objects to archive
-- archival policies (retention & archival max age)
-
-At each execution a worker:
-
-1. for each object in the batch
-2. check that the object still needs to be archived (#present copies <
- retention policy)
-3. if an object has status=ongoing but the elapsed time from task
- submission is less than the *archival max age*, it counts as present
- (as we assume that it will be copied in the near future). If the
- delay has elapsed (still with status ongoing), it counts as a missing
- copy.
-4. for each object to archive:
-5. retrieve current archive status for all destinations
-6. create a map noting where the object is present and where it can be
- copied
-7. Randomly choose (source, destination) pairs, where destinations
- are all different, to make enough copies
-8. for each (content, source, destination):
-9. Join the contents by key (source, destination) to have a map
- {(source, destination) -> [contents]}
-10. for each (source, destination) -> contents
-11. for each content in contents, check its integrity on the source
- storage
-
- - if the object is corrupted or missing
-
- - update its status in the database
- - remove it from the current contents list
-
-12. start the copy of the batches by launching for each transfer tuple a
- copier
-
- - if an error occurred on one of the contents that should have been
- valid, consider the whole batch a failure.
-
-13. set status=present and mtime=now for each successfully copied object
-
-Note that:
-
-- In case multiple jobs were tasked to archive the same overlapping
- objects, step (1) might decide that some/all objects of this batch no
- longer need to be archived.
-
-- Due to parallelism, it is possible that the same objects will be
- copied over at the same time by multiple workers. Also, the same
- object could end up having more copies than the minimal number
- required.
-
-Archiver copier
-~~~~~~~~~~~~~~~
-
-The copier is run on demand by archiver workers, to transfer file
-batches from a given source to a given destination.
-
-The copier transfers files one by one. The copying process is atomic
-with a file granularity (i.e., individual files might be visible on the
-destination before *all* files have been transferred) and ensures that
-*concurrent transfer of the same files by multiple copier instances do
-not result in corrupted files*. Note that, due to this and the fact that
-timestamps are updated by the worker, all files copied in the same batch
-will have the same mtime even though the actual file creation times on a
-given destination might differ.
-
-The copier is implemented using the ObjStorage API for the sources and
-destinations.
-
-DB structure
-------------
-
-Postgres SQL definitions for the archival status:
-
-::
-
- -- These names are samples of archive server names
- CREATE TYPE archive_id AS ENUM (
- 'uffizi',
- 'banco'
- );
-
- CREATE TABLE archive (
- id archive_id PRIMARY KEY,
- url TEXT
- );
-
- CREATE TYPE archive_status AS ENUM (
- 'missing',
- 'ongoing',
- 'present',
- 'corrupted'
- );
-
- CREATE TABLE content_archive (
- content_id sha1 unique,
- copies jsonb
- );
-
-The content\_archive.copies field is of type jsonb. It records a
-content's presence (or absence) in the storages, each content being
-represented by its signature (sha1):
-
-::
-
- {
- "$schema": "http://json-schema.org/schema#",
- "title": "Copies data",
- "description": "Data about the presence of a content into the storages",
- "type": "object",
- "Patternproperties": {
- "^[a-zA-Z1-9]+$": {
- "description": "archival status for the server named by key",
- "type": "object",
- "properties": {
- "status": {
- "description": "Archival status on this copy",
- "type": "string",
- "enum": ["missing", "ongoing", "present", "corrupted"]
- },
- "mtime": {
- "description": "Last time of the status update",
- "type": "float"
- }
- }
- }
- },
- "additionalProperties": false
- }
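As a minimal sketch (not part of the blueprint above), the copies field can be
queried with PostgreSQL's jsonb operators; assuming a retention policy of two
copies, the director could select under-replicated contents with:

    select content_id
    from content_archive
    where (select count(*)
           from jsonb_each(copies) as c(archive, info)
           where info->>'status' = 'present') < 2;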
diff --git a/docs/index.rst b/docs/index.rst
index 117679a9..8609c687 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -1,28 +1,34 @@
.. _swh-storage:
Software Heritage - Development Documentation
=============================================
.. toctree::
:maxdepth: 2
:caption: Contents:
-High-level storage
-------------------
+The Software Heritage storage consists of a high-level storage layer
+(:mod:`swh.storage`) that exposes a client/server API
+(:mod:`swh.storage.api`). The API is exposed by a server
+(:mod:`swh.storage.api.server`) and accessible via a client
+(:mod:`swh.storage.api.client`).
-* :ref:`sql-storage`
+The low-level implementation of the storage is split between an object storage
+(:ref:`swh.objstorage <swh-objstorage>`), which stores all "blobs" (i.e., the
+leaves of the :ref:`data-model`) and a SQL representation of the rest of the
+graph (:mod:`swh.storage.storage`).
-Low-level object storage
-------------------------
+Database schema
+---------------
-* :ref:`swh-archiver-blueprint`
+* :ref:`sql-storage`
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
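As a minimal sketch of the "SQL representation of the rest of the graph"
mentioned above (illustrative only; the revision id is a placeholder value):

    -- a revision, its root directory, and its parents in merge order
    select r.id, r.directory,
           array(select rh.parent_id
                 from revision_history rh
                 where rh.id = r.id
                 order by rh.parent_rank) as parents
    from revision r
    where r.id = '\x4b825dc642cb6eb9a060e54bf8d69288fbee4904';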
diff --git a/sql/swh-func.sql b/sql/swh-func.sql
index ce32afbc..bff53a5f 100644
--- a/sql/swh-func.sql
+++ b/sql/swh-func.sql
@@ -1,1435 +1,1435 @@
-
create or replace function hash_sha1(text)
returns text
as $$
select encode(digest($1, 'sha1'), 'hex')
$$ language sql strict immutable;
comment on function hash_sha1(text) is 'Compute SHA1 hash as text';
-- create a temporary table called tmp_TBLNAME, mimicking existing table
-- TBLNAME
--
-- Args:
-- tblname: name of the table to mimic
create or replace function swh_mktemp(tblname regclass)
returns void
language plpgsql
as $$
begin
execute format('
create temporary table tmp_%1$I
(like %1$I including defaults)
on commit drop;
alter table tmp_%1$I drop column if exists object_id;
', tblname);
return;
end
$$;
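-- Usage sketch (illustrative, not part of this function set): the bulk-loading
-- functions below expect such a tmp_* staging table. The temporary tables are
-- ON COMMIT DROP, so this call must share a transaction with the COPY and the
-- swh_*_add()/swh_*_missing() calls that use it.
select swh_mktemp('content');   -- creates tmp_content: like content, minus object_id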
-- create a temporary table for directory entries called tmp_TBLNAME,
-- mimicking existing table TBLNAME with an extra dir_id (sha1_git)
-- column, and dropping the id column.
--
-- This is used to create the tmp_directory_entry_<foo> tables.
--
-- Args:
-- tblname: name of the table to mimic
create or replace function swh_mktemp_dir_entry(tblname regclass)
returns void
language plpgsql
as $$
begin
execute format('
create temporary table tmp_%1$I
(like %1$I including defaults, dir_id sha1_git)
on commit drop;
alter table tmp_%1$I drop column id;
', tblname);
return;
end
$$;
-- create a temporary table for revisions called tmp_revisions,
-- mimicking existing table revision, replacing the foreign keys to
-- people with an email and name field
--
create or replace function swh_mktemp_revision()
returns void
language sql
as $$
create temporary table tmp_revision (
like revision including defaults,
author_fullname bytea,
author_name bytea,
author_email bytea,
committer_fullname bytea,
committer_name bytea,
committer_email bytea
) on commit drop;
alter table tmp_revision drop column author;
alter table tmp_revision drop column committer;
alter table tmp_revision drop column object_id;
$$;
-- create a temporary table for releases called tmp_release,
-- mimicking existing table release, replacing the foreign keys to
-- people with an email and name field
--
create or replace function swh_mktemp_release()
returns void
language sql
as $$
create temporary table tmp_release (
like release including defaults,
author_fullname bytea,
author_name bytea,
author_email bytea
) on commit drop;
alter table tmp_release drop column author;
alter table tmp_release drop column object_id;
$$;
-- create a temporary table for occurrence_history
create or replace function swh_mktemp_occurrence_history()
returns void
language sql
as $$
create temporary table tmp_occurrence_history(
like occurrence_history including defaults,
visit bigint not null
) on commit drop;
alter table tmp_occurrence_history
drop column visits,
drop column object_id;
$$;
-- create a temporary table for entity_history, sans id
create or replace function swh_mktemp_entity_history()
returns void
language sql
as $$
create temporary table tmp_entity_history (
like entity_history including defaults) on commit drop;
alter table tmp_entity_history drop column id;
$$;
-- create a temporary table for entities called tmp_entity_lister,
-- with only the columns necessary for retrieving the uuid of a listed
-- entity.
create or replace function swh_mktemp_entity_lister()
returns void
language sql
as $$
create temporary table tmp_entity_lister (
id bigint,
lister_metadata jsonb
) on commit drop;
$$;
-- create a temporary table for the branches of a snapshot
create or replace function swh_mktemp_snapshot_branch()
returns void
language sql
as $$
create temporary table tmp_snapshot_branch (
name bytea not null,
target bytea,
target_type snapshot_target
) on commit drop;
$$;
create or replace function swh_mktemp_tool()
returns void
language sql
as $$
create temporary table tmp_tool (
like tool including defaults
) on commit drop;
alter table tmp_tool drop column id;
$$;
-- a content signature is a set of cryptographic checksums that we use to
-- uniquely identify content, for the purpose of verifying if we already have
-- some content or not during content injection
create type content_signature as (
sha1 sha1,
sha1_git sha1_git,
sha256 sha256,
blake2s256 blake2s256
);
-- check which entries of tmp_skipped_content are missing from skipped_content
--
-- operates in bulk: 0. swh_mktemp(skipped_content), 1. COPY to tmp_skipped_content,
-- 2. call this function
create or replace function swh_skipped_content_missing()
returns setof content_signature
language plpgsql
as $$
begin
return query
select sha1, sha1_git, sha256, blake2s256 from tmp_skipped_content t
where not exists
(select 1 from skipped_content s where
s.sha1 is not distinct from t.sha1 and
s.sha1_git is not distinct from t.sha1_git and
s.sha256 is not distinct from t.sha256);
return;
end
$$;
-- Look up content based on one or several different checksums. Return all
-- content information if the content is found; a NULL row otherwise.
--
-- At least one checksum should be not NULL. If several are not NULL, they will
-- be AND-ed together in the lookup query.
--
-- Note: this function is meant to be used to look up individual contents
-- (e.g., for the web app), for batch lookup of missing content (e.g., to be
-- added) see swh_content_missing
create or replace function swh_content_find(
sha1 sha1 default NULL,
sha1_git sha1_git default NULL,
sha256 sha256 default NULL,
blake2s256 blake2s256 default NULL
)
returns content
language plpgsql
as $$
declare
con content;
filters text[] := array[] :: text[]; -- AND-clauses used to filter content
q text;
begin
if sha1 is not null then
filters := filters || format('sha1 = %L', sha1);
end if;
if sha1_git is not null then
filters := filters || format('sha1_git = %L', sha1_git);
end if;
if sha256 is not null then
filters := filters || format('sha256 = %L', sha256);
end if;
if blake2s256 is not null then
filters := filters || format('blake2s256 = %L', blake2s256);
end if;
if cardinality(filters) = 0 then
return null;
else
q = format('select * from content where %s',
array_to_string(filters, ' and '));
execute q into con;
return con;
end if;
end
$$;
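-- Usage sketch (illustrative; the hex digest below is a placeholder value):
-- look up a single content by any non-NULL combination of its checksums.
select *
from swh_content_find(sha1 := '\x34973274ccef6ab4dfaaf86599792fa9c3fe4689');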
-- add tmp_content entries to content, skipping duplicates
--
-- operates in bulk: 0. swh_mktemp(content), 1. COPY to tmp_content,
-- 2. call this function
create or replace function swh_content_add()
returns void
language plpgsql
as $$
begin
insert into content (sha1, sha1_git, sha256, blake2s256, length, status)
select distinct sha1, sha1_git, sha256, blake2s256, length, status from tmp_content;
return;
end
$$;
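-- Bulk-ingestion sketch (illustrative; the COPY data is streamed by the caller):
begin;
select swh_mktemp('content');
copy tmp_content (sha1, sha1_git, sha256, blake2s256, length, status) from stdin;
-- ... rows fed by the client, terminated by '\.' ...
select swh_content_add();   -- move the staged rows into content
commit;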
-- add tmp_skipped_content entries to skipped_content, skipping duplicates
--
-- operates in bulk: 0. swh_mktemp(skipped_content), 1. COPY to tmp_skipped_content,
-- 2. call this function
create or replace function swh_skipped_content_add()
returns void
language plpgsql
as $$
begin
insert into skipped_content (sha1, sha1_git, sha256, blake2s256, length, status, reason, origin)
select distinct sha1, sha1_git, sha256, blake2s256, length, status, reason, origin
from tmp_skipped_content
where (coalesce(sha1, ''), coalesce(sha1_git, ''), coalesce(sha256, '')) in (
select coalesce(sha1, ''), coalesce(sha1_git, ''), coalesce(sha256, '')
from swh_skipped_content_missing()
);
-- TODO XXX use postgres 9.5 "UPSERT" support here, when available.
-- Specifically, using "INSERT .. ON CONFLICT IGNORE" we can avoid
-- the extra swh_content_missing() query here.
return;
end
$$;
-- Update content entries from temporary table.
-- (columns are potential new columns added to the schema, this cannot be empty)
--
create or replace function swh_content_update(columns_update text[])
returns void
language plpgsql
as $$
declare
query text;
tmp_array text[];
begin
if array_length(columns_update, 1) = 0 then
raise exception 'Please, provide the list of column names to update.';
end if;
tmp_array := array(select format('%1$s=t.%1$s', unnest) from unnest(columns_update));
query = format('update content set %s
from tmp_content t where t.sha1 = content.sha1',
array_to_string(tmp_array, ', '));
execute query;
return;
end
$$;
comment on function swh_content_update(text[]) IS 'Update existing content''s columns';
-- check which entries of tmp_directory are missing from directory
--
-- operates in bulk: 0. swh_mktemp(directory), 1. COPY to tmp_directory,
-- 2. call this function
create or replace function swh_directory_missing()
returns setof sha1_git
language plpgsql
as $$
begin
return query
select id from tmp_directory t
where not exists (
select 1 from directory d
where d.id = t.id);
return;
end
$$;
create type directory_entry_type as enum('file', 'dir', 'rev');
-- Add tmp_directory_entry_* entries to directory_entry_* and directory,
-- skipping duplicates in directory_entry_*. This is a generic function that
-- works on all kind of directory entries.
--
-- operates in bulk: 0. swh_mktemp_dir_entry('directory_entry_*'), 1 COPY to
-- tmp_directory_entry_*, 2. call this function
--
-- Assumption: this function is used in the same transaction that inserts the
-- context directory in table "directory".
create or replace function swh_directory_entry_add(typ directory_entry_type)
returns void
language plpgsql
as $$
begin
execute format('
insert into directory_entry_%1$s (target, name, perms)
select distinct t.target, t.name, t.perms
from tmp_directory_entry_%1$s t
where not exists (
select 1
from directory_entry_%1$s i
where t.target = i.target and t.name = i.name and t.perms = i.perms)
', typ);
execute format('
with new_entries as (
select t.dir_id, array_agg(i.id) as entries
from tmp_directory_entry_%1$s t
inner join directory_entry_%1$s i
using (target, name, perms)
group by t.dir_id
)
update tmp_directory as d
set %1$s_entries = new_entries.entries
from new_entries
where d.id = new_entries.dir_id
', typ);
return;
end
$$;
-- Insert the data from tmp_directory, tmp_directory_entry_file,
-- tmp_directory_entry_dir, tmp_directory_entry_rev into their final
-- tables.
--
-- Prerequisites:
-- directory ids in tmp_directory
-- entries in tmp_directory_entry_{file,dir,rev}
--
create or replace function swh_directory_add()
returns void
language plpgsql
as $$
begin
perform swh_directory_entry_add('file');
perform swh_directory_entry_add('dir');
perform swh_directory_entry_add('rev');
insert into directory
select * from tmp_directory t
where not exists (
select 1 from directory d
where d.id = t.id);
return;
end
$$;
-- a directory listing entry with all the metadata
--
-- can be used to list a directory, and retrieve all the data in one go.
create type directory_entry as
(
dir_id sha1_git, -- id of the parent directory
type directory_entry_type, -- type of entry
target sha1_git, -- id of target
name unix_path, -- path name, relative to containing dir
perms file_perms, -- unix-like permissions
status content_status, -- visible or absent
sha1 sha1, -- content if sha1 if type is not dir
sha1_git sha1_git, -- content's sha1 git if type is not dir
sha256 sha256, -- content's sha256 if type is not dir
length bigint -- content length if type is not dir
);
-- List a single level of directory walked_dir_id
-- FIXME: order by name is not correct. For git, we need to order by
-- lexicographic order but as if a trailing / is present in directory
-- name
create or replace function swh_directory_walk_one(walked_dir_id sha1_git)
returns setof directory_entry
language sql
stable
as $$
with dir as (
select id as dir_id, dir_entries, file_entries, rev_entries
from directory
where id = walked_dir_id),
ls_d as (select dir_id, unnest(dir_entries) as entry_id from dir),
ls_f as (select dir_id, unnest(file_entries) as entry_id from dir),
ls_r as (select dir_id, unnest(rev_entries) as entry_id from dir)
(select dir_id, 'dir'::directory_entry_type as type,
e.target, e.name, e.perms, NULL::content_status,
NULL::sha1, NULL::sha1_git, NULL::sha256, NULL::bigint
from ls_d
left join directory_entry_dir e on ls_d.entry_id = e.id)
union
(select dir_id, 'file'::directory_entry_type as type,
e.target, e.name, e.perms, c.status,
c.sha1, c.sha1_git, c.sha256, c.length
from ls_f
left join directory_entry_file e on ls_f.entry_id = e.id
left join content c on e.target = c.sha1_git)
union
(select dir_id, 'rev'::directory_entry_type as type,
e.target, e.name, e.perms, NULL::content_status,
NULL::sha1, NULL::sha1_git, NULL::sha256, NULL::bigint
from ls_r
left join directory_entry_rev e on ls_r.entry_id = e.id)
order by name;
$$;
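-- Usage sketch (illustrative; the id is a placeholder sha1_git): list one
-- level of a directory, git ls-tree style.
select type, name, target, perms
from swh_directory_walk_one('\x4b825dc642cb6eb9a060e54bf8d69288fbee4904');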
-- List recursively the revision directory arborescence
create or replace function swh_directory_walk(walked_dir_id sha1_git)
returns setof directory_entry
language sql
stable
as $$
with recursive entries as (
select dir_id, type, target, name, perms, status, sha1, sha1_git,
sha256, length
from swh_directory_walk_one(walked_dir_id)
union all
select dir_id, type, target, (dirname || '/' || name)::unix_path as name,
perms, status, sha1, sha1_git, sha256, length
from (select (swh_directory_walk_one(dirs.target)).*, dirs.name as dirname
from (select target, name from entries where type = 'dir') as dirs) as with_parent
)
select dir_id, type, target, name, perms, status, sha1, sha1_git, sha256, length
from entries
$$;
create or replace function swh_revision_walk(revision_id sha1_git)
returns setof directory_entry
language sql
stable
as $$
select dir_id, type, target, name, perms, status, sha1, sha1_git, sha256, length
from swh_directory_walk((select directory from revision where id=revision_id))
$$;
COMMENT ON FUNCTION swh_revision_walk(sha1_git) IS 'Recursively list the revision targeted directory arborescence';
-- Find a directory entry by its path
create or replace function swh_find_directory_entry_by_path(
walked_dir_id sha1_git,
dir_or_content_path bytea[])
returns directory_entry
language plpgsql
as $$
declare
end_index integer;
paths bytea default '';
path bytea;
res bytea[];
r record;
begin
end_index := array_upper(dir_or_content_path, 1);
res[1] := walked_dir_id;
for i in 1..end_index
loop
path := dir_or_content_path[i];
-- concatenate path for patching the name in the result record (if we found it)
if i = 1 then
paths = path;
else
paths := paths || '/' || path; -- concatenate paths
end if;
if i <> end_index then
select *
from swh_directory_walk_one(res[i] :: sha1_git)
where name=path
and type = 'dir'
limit 1 into r;
else
select *
from swh_directory_walk_one(res[i] :: sha1_git)
where name=path
limit 1 into r;
end if;
-- find the path
if r is null then
return null;
else
-- store the next dir to lookup the next local path from
res[i+1] := r.target;
end if;
end loop;
-- at this moment, r is the result. Patch its 'name' with the full path before returning it.
r.name := paths;
return r;
end
$$;
-- List all revision IDs starting from a given revision, going back in time
--
-- TODO ordering: should be breadth-first right now (what do we want?)
-- TODO ordering: ORDER BY parent_rank somewhere?
create or replace function swh_revision_list(root_revisions bytea[], num_revs bigint default NULL)
returns table (id sha1_git, parents bytea[])
language sql
stable
as $$
with recursive full_rev_list(id) as (
(select id from revision where id = ANY(root_revisions))
union
(select h.parent_id
from revision_history as h
join full_rev_list on h.id = full_rev_list.id)
),
rev_list as (select id from full_rev_list limit num_revs)
select rev_list.id as id,
array(select rh.parent_id::bytea
from revision_history rh
where rh.id = rev_list.id
order by rh.parent_rank
) as parent
from rev_list;
$$;
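-- Usage sketch (illustrative; the root id is a placeholder): walk back at most
-- 100 revisions from a given root, with their ordered parent ids.
select id, parents
from swh_revision_list(array['\x4b825dc642cb6eb9a060e54bf8d69288fbee4904'::bytea], 100);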
-- List all the children of a given revision
create or replace function swh_revision_list_children(root_revisions bytea[], num_revs bigint default NULL)
returns table (id sha1_git, parents bytea[])
language sql
stable
as $$
with recursive full_rev_list(id) as (
(select id from revision where id = ANY(root_revisions))
union
(select h.id
from revision_history as h
join full_rev_list on h.parent_id = full_rev_list.id)
),
rev_list as (select id from full_rev_list limit num_revs)
select rev_list.id as id,
array(select rh.parent_id::bytea
from revision_history rh
where rh.id = rev_list.id
order by rh.parent_rank
) as parent
from rev_list;
$$;
-- Detailed entry for a revision
create type revision_entry as
(
id sha1_git,
date timestamptz,
date_offset smallint,
date_neg_utc_offset boolean,
committer_date timestamptz,
committer_date_offset smallint,
committer_date_neg_utc_offset boolean,
type revision_type,
directory sha1_git,
message bytea,
author_id bigint,
author_fullname bytea,
author_name bytea,
author_email bytea,
committer_id bigint,
committer_fullname bytea,
committer_name bytea,
committer_email bytea,
metadata jsonb,
synthetic boolean,
parents bytea[],
object_id bigint
);
-- "git style" revision log. Similar to swh_revision_list(), but returning all
-- information associated to each revision, and expanding authors/committers
create or replace function swh_revision_log(root_revisions bytea[], num_revs bigint default NULL)
returns setof revision_entry
language sql
stable
as $$
select t.id, r.date, r.date_offset, r.date_neg_utc_offset,
r.committer_date, r.committer_date_offset, r.committer_date_neg_utc_offset,
r.type, r.directory, r.message,
a.id, a.fullname, a.name, a.email,
c.id, c.fullname, c.name, c.email,
r.metadata, r.synthetic, t.parents, r.object_id
from swh_revision_list(root_revisions, num_revs) as t
left join revision r on t.id = r.id
left join person a on a.id = r.author
left join person c on c.id = r.committer;
$$;
-- Detailed entry for a release
create type release_entry as
(
id sha1_git,
target sha1_git,
target_type object_type,
date timestamptz,
date_offset smallint,
date_neg_utc_offset boolean,
name bytea,
comment bytea,
synthetic boolean,
author_id bigint,
author_fullname bytea,
author_name bytea,
author_email bytea,
object_id bigint
);
-- Create entries in person from tmp_revision
create or replace function swh_person_add_from_revision()
returns void
language plpgsql
as $$
begin
with t as (
select author_fullname as fullname, author_name as name, author_email as email from tmp_revision
union
select committer_fullname as fullname, committer_name as name, committer_email as email from tmp_revision
) insert into person (fullname, name, email)
select distinct fullname, name, email from t
where not exists (
select 1
from person p
where t.fullname = p.fullname
);
return;
end
$$;
-- Create entries in revision from tmp_revision
create or replace function swh_revision_add()
returns void
language plpgsql
as $$
begin
perform swh_person_add_from_revision();
insert into revision (id, date, date_offset, date_neg_utc_offset, committer_date, committer_date_offset, committer_date_neg_utc_offset, type, directory, message, author, committer, metadata, synthetic)
select t.id, t.date, t.date_offset, t.date_neg_utc_offset, t.committer_date, t.committer_date_offset, t.committer_date_neg_utc_offset, t.type, t.directory, t.message, a.id, c.id, t.metadata, t.synthetic
from tmp_revision t
left join person a on a.fullname = t.author_fullname
left join person c on c.fullname = t.committer_fullname;
return;
end
$$;
-- Create entries in person from tmp_release
create or replace function swh_person_add_from_release()
returns void
language plpgsql
as $$
begin
with t as (
select distinct author_fullname as fullname, author_name as name, author_email as email from tmp_release
) insert into person (fullname, name, email)
select fullname, name, email from t
where not exists (
select 1
from person p
where t.fullname = p.fullname
);
return;
end
$$;
-- Create entries in release from tmp_release
create or replace function swh_release_add()
returns void
language plpgsql
as $$
begin
perform swh_person_add_from_release();
insert into release (id, target, target_type, date, date_offset, date_neg_utc_offset, name, comment, author, synthetic)
select t.id, t.target, t.target_type, t.date, t.date_offset, t.date_neg_utc_offset, t.name, t.comment, a.id, t.synthetic
from tmp_release t
left join person a on a.fullname = t.author_fullname;
return;
end
$$;
create or replace function swh_occurrence_update_for_origin(origin_id bigint)
returns void
language sql
as $$
delete from occurrence where origin = origin_id;
insert into occurrence (origin, branch, target, target_type)
select origin, branch, target, target_type
from occurrence_history
where origin = origin_id and
(select visit from origin_visit
where origin = origin_id
order by date desc
limit 1) = any(visits);
$$;
create or replace function swh_occurrence_update_all()
returns void
language plpgsql
as $$
declare
origin_id origin.id%type;
begin
for origin_id in
select distinct id from origin
loop
perform swh_occurrence_update_for_origin(origin_id);
end loop;
return;
end;
$$;
-- add a new origin_visit for origin origin_id at date.
--
-- Returns the new visit id.
create or replace function swh_origin_visit_add(origin_id bigint, date timestamptz)
returns bigint
language sql
as $$
with last_known_visit as (
select coalesce(max(visit), 0) as visit
from origin_visit
where origin = origin_id
)
insert into origin_visit (origin, date, visit, status)
values (origin_id, date, (select visit from last_known_visit) + 1, 'ongoing')
returning visit;
$$;
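-- Usage sketch (illustrative; origin id 42 is a placeholder): open a new
-- 'ongoing' visit for an origin and get back its per-origin visit number.
select swh_origin_visit_add(42, now());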
-- add tmp_occurrence_history entries to occurrence_history
--
-- operates in bulk: 0. swh_mktemp(occurrence_history), 1. COPY to tmp_occurrence_history,
-- 2. call this function
create or replace function swh_occurrence_history_add()
returns void
language plpgsql
as $$
declare
origin_id origin.id%type;
begin
-- Create or update occurrence_history
with occurrence_history_id_visit as (
select tmp_occurrence_history.*, object_id, visits from tmp_occurrence_history
left join occurrence_history using(origin, branch, target, target_type)
),
occurrences_to_update as (
select object_id, visit from occurrence_history_id_visit where object_id is not null
),
update_occurrences as (
update occurrence_history
set visits = array(select unnest(occurrence_history.visits) as e
union
select occurrences_to_update.visit as e
order by e)
from occurrences_to_update
where occurrence_history.object_id = occurrences_to_update.object_id
)
insert into occurrence_history (origin, branch, target, target_type, visits)
select origin, branch, target, target_type, ARRAY[visit]
from occurrence_history_id_visit
where object_id is null;
-- update occurrence
for origin_id in
select distinct origin from tmp_occurrence_history
loop
perform swh_occurrence_update_for_origin(origin_id);
end loop;
return;
end
$$;
create or replace function swh_snapshot_add(origin bigint, visit bigint, snapshot_id snapshot.id%type)
returns void
language plpgsql
as $$
declare
snapshot_object_id snapshot.object_id%type;
begin
select object_id from snapshot where id = snapshot_id into snapshot_object_id;
if snapshot_object_id is null then
insert into snapshot (id) values (snapshot_id) returning object_id into snapshot_object_id;
insert into snapshot_branch (name, target_type, target)
select name, target_type, target from tmp_snapshot_branch tmp
where not exists (
select 1
from snapshot_branch sb
where sb.name = tmp.name
and sb.target = tmp.target
and sb.target_type = tmp.target_type
)
on conflict do nothing;
insert into snapshot_branches (snapshot_id, branch_id)
select snapshot_object_id, sb.object_id as branch_id
from tmp_snapshot_branch tmp
join snapshot_branch sb
using (name, target, target_type)
where tmp.target is not null and tmp.target_type is not null
union
select snapshot_object_id, sb.object_id as branch_id
from tmp_snapshot_branch tmp
join snapshot_branch sb
using (name)
where tmp.target is null and tmp.target_type is null
and sb.target is null and sb.target_type is null;
end if;
update origin_visit ov
set snapshot_id = snapshot_object_id
where ov.origin=swh_snapshot_add.origin and ov.visit=swh_snapshot_add.visit;
end;
$$;
create type snapshot_result as (
snapshot_id sha1_git,
name bytea,
target bytea,
target_type snapshot_target
);
create or replace function swh_snapshot_get_by_id(id snapshot.id%type)
returns setof snapshot_result
language sql
stable
as $$
select
swh_snapshot_get_by_id.id as snapshot_id, name, target, target_type
from snapshot_branches
inner join snapshot_branch on snapshot_branches.branch_id = snapshot_branch.object_id
where snapshot_id = (select object_id from snapshot where snapshot.id = swh_snapshot_get_by_id.id)
$$;
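-- Usage sketch (illustrative; the snapshot id is a placeholder): list the
-- branches recorded in a given snapshot.
select name, target, target_type
from swh_snapshot_get_by_id('\x4b825dc642cb6eb9a060e54bf8d69288fbee4904');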
create or replace function swh_snapshot_get_by_origin_visit(origin_id bigint, visit_id bigint)
returns snapshot.id%type
language sql
stable
as $$
select snapshot.id
from origin_visit
left join snapshot
on snapshot.object_id = origin_visit.snapshot_id
where origin_visit.origin=origin_id and origin_visit.visit=visit_id;
$$;
-- Absolute path: directory reference + complete path relative to it
create type content_dir as (
directory sha1_git,
path unix_path
);
-- Find the containing directory of a given content, specified by sha1
-- (note: *not* sha1_git).
--
-- Return a pair (dir_id, path) where path is a UNIX path that, from the
-- directory root, reaches down to a file with the desired content. Return NULL
-- if no match is found.
--
-- In case of multiple paths (i.e., pretty much always), an arbitrary one is
-- chosen.
create or replace function swh_content_find_directory(content_id sha1)
returns content_dir
language sql
stable
as $$
with recursive path as (
-- Recursively build a path from the requested content to a root
-- directory. Each iteration returns a pair (dir_id, filename) where
-- filename is relative to dir_id. Stops when no parent directory can
-- be found.
(select dir.id as dir_id, dir_entry_f.name as name, 0 as depth
from directory_entry_file as dir_entry_f
join content on content.sha1_git = dir_entry_f.target
join directory as dir on dir.file_entries @> array[dir_entry_f.id]
where content.sha1 = content_id
limit 1)
union all
(select dir.id as dir_id,
(dir_entry_d.name || '/' || path.name)::unix_path as name,
path.depth + 1
from path
join directory_entry_dir as dir_entry_d on dir_entry_d.target = path.dir_id
join directory as dir on dir.dir_entries @> array[dir_entry_d.id]
limit 1)
)
select dir_id, name from path order by depth desc limit 1;
$$;
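-- Usage sketch (illustrative; the digest is a placeholder sha1): one directory
-- containing the given content, together with a path down to it.
select (swh_content_find_directory('\x34973274ccef6ab4dfaaf86599792fa9c3fe4689')).*;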
-- Walk the revision history starting from a given revision, until a matching
-- occurrence is found. Return all occurrence information if one is found, NULL
-- otherwise.
create or replace function swh_revision_find_occurrence(revision_id sha1_git)
returns occurrence
language sql
stable
as $$
select origin, branch, target, target_type
from swh_revision_list_children(ARRAY[revision_id] :: bytea[]) as rev_list
left join occurrence_history occ_hist
on rev_list.id = occ_hist.target
where occ_hist.origin is not null and
occ_hist.target_type = 'revision'
limit 1;
$$;
-- Find the visit of origin id closest to date visit_date
create or replace function swh_visit_find_by_date(origin bigint, visit_date timestamptz default NOW())
returns origin_visit
language sql
stable
as $$
with closest_two_visits as ((
select ov, (date - visit_date) as interval
from origin_visit ov
where ov.origin = origin
and ov.date >= visit_date
order by ov.date asc
limit 1
) union (
select ov, (visit_date - date) as interval
from origin_visit ov
where ov.origin = origin
and ov.date < visit_date
order by ov.date desc
limit 1
)) select (ov).* from closest_two_visits order by interval limit 1
$$;
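-- Usage sketch (illustrative; origin id 42 and the date are placeholders):
-- find the recorded visit of an origin closest to a given date.
select *
from swh_visit_find_by_date(42, '2018-01-01'::timestamptz);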
-- Find the visit of origin id closest to date visit_date
create or replace function swh_visit_get(origin bigint)
returns origin_visit
language sql
stable
as $$
select *
from origin_visit
where origin=origin
order by date desc
$$;
-- Retrieve occurrence by filtering on origin_id and optionally on
-- branch_name and/or validity range
create or replace function swh_occurrence_get_by(
origin_id bigint,
branch_name bytea default NULL,
date timestamptz default NULL)
returns setof occurrence_history
language plpgsql
as $$
declare
filters text[] := array[] :: text[]; -- AND-clauses used to filter content
visit_id bigint;
q text;
begin
if origin_id is null then
raise exception 'Needs an origin_id to get an occurrence.';
end if;
filters := filters || format('origin = %L', origin_id);
if branch_name is not null then
filters := filters || format('branch = %L', branch_name);
end if;
if date is not null then
select visit from swh_visit_find_by_date(origin_id, date) into visit_id;
else
select visit from origin_visit where origin = origin_id order by origin_visit.date desc limit 1 into visit_id;
end if;
if visit_id is null then
return;
end if;
filters := filters || format('%L = any(visits)', visit_id);
q = format('select * from occurrence_history where %s',
array_to_string(filters, ' and '));
return query execute q;
end
$$;
-- Retrieve revisions by occurrence criterion filtering
create or replace function swh_revision_get_by(
origin_id bigint,
branch_name bytea default NULL,
date timestamptz default NULL)
returns setof revision_entry
language sql
stable
as $$
select r.id, r.date, r.date_offset, r.date_neg_utc_offset,
r.committer_date, r.committer_date_offset, r.committer_date_neg_utc_offset,
r.type, r.directory, r.message,
a.id, a.fullname, a.name, a.email, c.id, c.fullname, c.name, c.email, r.metadata, r.synthetic,
array(select rh.parent_id::bytea
from revision_history rh
where rh.id = r.id
order by rh.parent_rank
) as parents, r.object_id
from swh_occurrence_get_by(origin_id, branch_name, date) as occ
inner join revision r on occ.target = r.id
left join person a on a.id = r.author
left join person c on c.id = r.committer;
$$;
-- Retrieve a release by occurrence criterion
create or replace function swh_release_get_by(
origin_id bigint)
returns setof release_entry
language sql
stable
as $$
select r.id, r.target, r.target_type, r.date, r.date_offset, r.date_neg_utc_offset,
r.name, r.comment, r.synthetic, a.id as author_id, a.fullname as author_fullname,
a.name as author_name, a.email as author_email, r.object_id
from release r
inner join occurrence_history occ on occ.target = r.target
left join person a on a.id = r.author
where occ.origin = origin_id and occ.target_type = 'revision' and r.target_type = 'revision';
$$;
-- Create entries in entity_history from tmp_entity_history
--
-- TODO: do something smarter to compress the entries if the data
-- didn't change.
create or replace function swh_entity_history_add()
returns void
language plpgsql
as $$
begin
insert into entity_history (
uuid, parent, name, type, description, homepage, active, generated, lister_metadata, metadata, validity
) select * from tmp_entity_history;
return;
end
$$;
create or replace function swh_update_entity_from_entity_history()
returns trigger
language plpgsql
as $$
begin
insert into entity (uuid, parent, name, type, description, homepage, active, generated,
lister_metadata, metadata, last_seen, last_id)
select uuid, parent, name, type, description, homepage, active, generated,
lister_metadata, metadata, unnest(validity), id
from entity_history
where uuid = NEW.uuid
order by unnest(validity) desc limit 1
on conflict (uuid) do update set
parent = EXCLUDED.parent,
name = EXCLUDED.name,
type = EXCLUDED.type,
description = EXCLUDED.description,
homepage = EXCLUDED.homepage,
active = EXCLUDED.active,
generated = EXCLUDED.generated,
lister_metadata = EXCLUDED.lister_metadata,
metadata = EXCLUDED.metadata,
last_seen = EXCLUDED.last_seen,
last_id = EXCLUDED.last_id;
return null;
end
$$;
create trigger update_entity
after insert or update
on entity_history
for each row
execute procedure swh_update_entity_from_entity_history();
-- map an id of tmp_entity_lister to a full entity
create type entity_id as (
id bigint,
uuid uuid,
parent uuid,
name text,
type entity_type,
description text,
homepage text,
active boolean,
generated boolean,
lister_metadata jsonb,
metadata jsonb,
last_seen timestamptz,
last_id bigint
);
-- find out the uuid of the entries of entity with the metadata
-- contained in tmp_entity_lister
create or replace function swh_entity_from_tmp_entity_lister()
returns setof entity_id
language plpgsql
as $$
begin
return query
select t.id, e.*
from tmp_entity_lister t
left join entity e
on e.lister_metadata @> t.lister_metadata;
return;
end
$$;
create or replace function swh_entity_get(entity_uuid uuid)
returns setof entity
language sql
stable
as $$
with recursive entity_hierarchy as (
select e.*
from entity e where uuid = entity_uuid
union
select p.*
from entity_hierarchy e
join entity p on e.parent = p.uuid
)
select *
from entity_hierarchy;
$$;
-- Object listing by object_id
create or replace function swh_content_list_by_object_id(
min_excl bigint,
max_incl bigint
)
returns setof content
language sql
stable
as $$
select * from content
where object_id > min_excl and object_id <= max_incl
order by object_id;
$$;
create or replace function swh_revision_list_by_object_id(
min_excl bigint,
max_incl bigint
)
returns setof revision_entry
language sql
stable
as $$
with revs as (
select * from revision
where object_id > min_excl and object_id <= max_incl
)
select r.id, r.date, r.date_offset, r.date_neg_utc_offset,
r.committer_date, r.committer_date_offset, r.committer_date_neg_utc_offset,
r.type, r.directory, r.message,
a.id, a.fullname, a.name, a.email, c.id, c.fullname, c.name, c.email, r.metadata, r.synthetic,
array(select rh.parent_id::bytea from revision_history rh where rh.id = r.id order by rh.parent_rank)
as parents, r.object_id
from revs r
left join person a on a.id = r.author
left join person c on c.id = r.committer
order by r.object_id;
$$;
create or replace function swh_release_list_by_object_id(
min_excl bigint,
max_incl bigint
)
returns setof release_entry
language sql
stable
as $$
with rels as (
select * from release
where object_id > min_excl and object_id <= max_incl
)
select r.id, r.target, r.target_type, r.date, r.date_offset, r.date_neg_utc_offset, r.name, r.comment,
r.synthetic, p.id as author_id, p.fullname as author_fullname, p.name as author_name, p.email as author_email, r.object_id
from rels r
left join person p on p.id = r.author
order by r.object_id;
$$;
create or replace function swh_occurrence_by_origin_visit(origin_id bigint, visit_id bigint)
returns setof occurrence
language sql
stable
as $$
select origin, branch, target, target_type from occurrence_history
where origin = origin_id and visit_id = ANY(visits);
$$;
-- end revision_metadata functions
-- origin_metadata functions
create type origin_metadata_signature as (
id bigint,
origin_id bigint,
discovery_date timestamptz,
tool_id bigint,
metadata jsonb,
provider_id integer,
provider_name text,
provider_type text,
provider_url text
);
create or replace function swh_origin_metadata_get_by_origin(
origin integer)
returns setof origin_metadata_signature
language sql
stable
as $$
select om.id as id, origin_id, discovery_date, tool_id, om.metadata,
mp.id as provider_id, provider_name, provider_type, provider_url
from origin_metadata as om
inner join metadata_provider mp on om.provider_id = mp.id
where om.origin_id = origin
order by discovery_date desc;
$$;
create or replace function swh_origin_metadata_get_by_provider_type(
origin integer,
type text)
returns setof origin_metadata_signature
language sql
stable
as $$
select om.id as id, origin_id, discovery_date, tool_id, om.metadata,
mp.id as provider_id, provider_name, provider_type, provider_url
from origin_metadata as om
inner join metadata_provider mp on om.provider_id = mp.id
where om.origin_id = origin
and mp.provider_type = type
order by discovery_date desc;
$$;
-- end origin_metadata functions
-- add tmp_tool entries to tool,
-- skipping duplicates if any.
--
-- operates in bulk: 0. create temporary tmp_tool, 1. COPY to
-- it, 2. call this function to insert, filtering out duplicates
create or replace function swh_tool_add()
returns setof tool
language plpgsql
as $$
begin
insert into tool(name, version, configuration)
select name, version, configuration from tmp_tool tmp
on conflict(name, version, configuration) do nothing;
return query
select id, name, version, configuration
from tmp_tool join tool
using(name, version, configuration);
return;
end
$$;
-- simple counter mapping a textual label to an integer value
create type counter as (
label text,
value bigint
);
-- return statistics about the number of tuples in various SWH tables
--
-- Note: the returned values are based on postgres internal statistics
-- (pg_class table), which are only updated daily (by autovacuum) or so
create or replace function swh_stat_counters()
returns setof counter
language sql
stable
as $$
select object_type as label, value as value
from object_counts
where object_type in (
'content',
'directory',
'directory_entry_dir',
'directory_entry_file',
'directory_entry_rev',
'occurrence',
'occurrence_history',
'origin',
'origin_visit',
'person',
'entity',
'entity_history',
'release',
'revision',
'revision_history',
- 'skipped_content'
+ 'skipped_content',
+ 'snapshot'
);
$$;
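-- Usage sketch (illustrative): read back the cached per-object-type counters.
select label, value from swh_stat_counters() order by label;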
create or replace function swh_update_counter(object_type text)
returns void
language plpgsql
as $$
begin
execute format('
insert into object_counts
(value, last_update, object_type)
values
((select count(*) from %1$I), NOW(), %1$L)
on conflict (object_type) do update set
value = excluded.value,
last_update = excluded.last_update',
object_type);
return;
end;
$$;
create or replace function swh_update_counter_bucketed()
returns void
language plpgsql
as $$
declare
query text;
line_to_update int;
new_value bigint;
begin
select
object_counts_bucketed.line,
format(
'select count(%I) from %I where %s',
coalesce(identifier, '*'),
object_type,
coalesce(
concat_ws(
' and ',
case when bucket_start is not null then
format('%I >= %L', identifier, bucket_start) -- lower bound condition, inclusive
end,
case when bucket_end is not null then
format('%I < %L', identifier, bucket_end) -- upper bound condition, exclusive
end
),
'true'
)
)
from object_counts_bucketed
order by coalesce(last_update, now() - '1 month'::interval) asc
limit 1
into line_to_update, query;
execute query into new_value;
update object_counts_bucketed
set value = new_value,
last_update = now()
where object_counts_bucketed.line = line_to_update;
END
$$;
create or replace function swh_update_counters_from_buckets()
returns trigger
language plpgsql
as $$
begin
with to_update as (
select object_type, sum(value) as value, max(last_update) as last_update
from object_counts_bucketed ob1
where not exists (
select 1 from object_counts_bucketed ob2
where ob1.object_type = ob2.object_type
and value is null
)
group by object_type
) update object_counts
set
value = to_update.value,
last_update = to_update.last_update
from to_update
where
object_counts.object_type = to_update.object_type
and object_counts.value != to_update.value;
return null;
end
$$;
create trigger update_counts_from_bucketed
after insert or update
on object_counts_bucketed
for each row
when (NEW.line % 256 = 0)
execute procedure swh_update_counters_from_buckets();
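-- Usage sketch (illustrative): refresh the cached counter of one table, then
-- the stalest configured bucketed counter line, if any.
select swh_update_counter('content');
select swh_update_counter_bucketed();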
diff --git a/sql/swh-schema.sql b/sql/swh-schema.sql
index e6c3b5cf..ea4b149c 100644
--- a/sql/swh-schema.sql
+++ b/sql/swh-schema.sql
@@ -1,445 +1,445 @@
---
--- Software Heritage Data Model
---
-- drop schema if exists swh cascade;
-- create schema swh;
-- set search_path to swh;
create table dbversion
(
version int primary key,
release timestamptz,
description text
);
insert into dbversion(version, release, description)
- values(119, now(), 'Work In Progress');
+ values(120, now(), 'Work In Progress');
-- a SHA1 checksum (not necessarily originating from Git)
create domain sha1 as bytea check (length(value) = 20);
-- a Git object ID, i.e., a SHA1 checksum
create domain sha1_git as bytea check (length(value) = 20);
-- a SHA256 checksum
create domain sha256 as bytea check (length(value) = 32);
-- a blake2 checksum
create domain blake2s256 as bytea check (length(value) = 32);
-- UNIX path (absolute, relative, individual path component, etc.)
create domain unix_path as bytea;
-- a set of UNIX-like access permissions, as manipulated by, e.g., chmod
create domain file_perms as int;
-- Checksums about actual file content. Note that the content itself is not
-- stored in the DB, but on external (key-value) storage. A single checksum is
-- used as key there, but the others can be used to verify that we do not
-- unknowingly inject content collisions.
create table content
(
sha1 sha1 not null,
sha1_git sha1_git not null,
sha256 sha256 not null,
blake2s256 blake2s256,
length bigint not null,
ctime timestamptz not null default now(),
-- creation time, i.e. time of (first) injection into the storage
status content_status not null default 'visible',
object_id bigserial
);
-- Entities constitute a typed hierarchy of organization, hosting
-- facilities, groups, people and software projects.
--
-- Examples of entities: Software Heritage, Debian, GNU, GitHub,
-- Apache, The Linux Foundation, the Debian Python Modules Team, the
-- torvalds GitHub user, the torvalds/linux GitHub project.
--
-- The data model is hierarchical (via the parent attribute) and might
-- store sub-branches of existing entities. The key feature of an
-- entity is that it might be *listed* (if it is available in listable_entity)
-- to retrieve information about its content, i.e., sub-entities,
-- projects, origins.
-- The history of entities. Allows us to keep historical metadata
-- about entities. The temporal invariant is the uuid. Root
-- organization uuids are manually generated (and available in
-- swh-data.sql).
--
-- For generated entities (generated = true), we can provide
-- generation_metadata to allow listers to retrieve the uuids of previous
-- iterations of the entity.
--
-- Inactive entities that have been active in the past (active =
-- false) should register the timestamp at which we saw them
-- deactivate, in a new entry of entity_history.
create table entity_history
(
id bigserial not null,
uuid uuid,
parent uuid, -- should reference entity_history(uuid)
name text not null,
type entity_type not null,
description text,
homepage text,
active boolean not null, -- whether the entity was seen on the last listing
generated boolean not null, -- whether this entity has been generated by a lister
lister_metadata jsonb, -- lister-specific metadata, used for queries
metadata jsonb,
validity timestamptz[] -- timestamps at which we have seen this entity
);
-- The entity table provides a view of the latest information on a
-- given entity. It is updated via a trigger on entity_history.
create table entity
(
uuid uuid not null,
parent uuid,
name text not null,
type entity_type not null,
description text,
homepage text,
active boolean not null, -- whether the entity was seen on the last listing
generated boolean not null, -- whether this entity has been generated by a lister
lister_metadata jsonb, -- lister-specific metadata, used for queries
metadata jsonb,
last_seen timestamptz, -- last listing time or disappearance time for active=false
last_id bigint -- last listing id
);
-- Register the equivalence between two entities. Allows sideways
-- navigation in the entity table
create table entity_equivalence
(
entity1 uuid,
entity2 uuid
);
-- Register a lister for a specific entity.
create table listable_entity
(
uuid uuid,
enabled boolean not null default true, -- do we list this entity automatically?
list_engine text, -- crawler to be used to list entity's content
list_url text, -- root URL to start the listing
list_params jsonb, -- org-specific listing parameter
latest_list timestamptz -- last time the entity's content has been listed
);
-- Log of all entity listings (i.e., entity crawling) that have been
-- done in the past, or are still ongoing.
create table list_history
(
id bigserial not null,
date timestamptz not null,
status boolean, -- true if and only if the listing has been successful
result jsonb, -- more detailed return value, depending on status
stdout text,
stderr text,
duration interval, -- fetch duration, or NULL if still ongoing
entity uuid
);
-- An origin is a place, identified by a URL, where software can be found. We
-- support different kinds of origins, e.g., git and other VCS repositories,
-- web pages that list tarballs URLs (e.g., http://www.kernel.org), indirect
-- tarball URLs (e.g., http://www.example.org/latest.tar.gz), etc. The key
-- feature of an origin is that it can be *fetched* (wget, git clone, svn
-- checkout, etc.) to retrieve all the contained software.
create table origin
(
id bigserial not null,
type text, -- TODO use an enum here (?)
url text not null,
lister uuid,
project uuid
);
-- Content we have seen but skipped for some reason. This table is
-- separate from the content table as we might not have the sha1
-- checksum of that data (for instance when we inject git
-- repositories, objects that are too big will be skipped here, and we
-- will only know their sha1_git). 'reason' contains the reason the
-- content was skipped. origin is a nullable column allowing to find
-- out which origin contains that skipped content.
create table skipped_content
(
sha1 sha1,
sha1_git sha1_git,
sha256 sha256,
blake2s256 blake2s256,
length bigint not null,
ctime timestamptz not null default now(),
status content_status not null default 'absent',
reason text not null,
origin bigint,
object_id bigserial
);
-- Log of all origin fetches (i.e., origin crawling) that have been done in the
-- past, or are still ongoing. Similar to list_history, but for origins.
create table fetch_history
(
id bigserial,
origin bigint,
date timestamptz not null,
status boolean, -- true if and only if the fetch has been successful
result jsonb, -- more detailed returned values, times, etc...
stdout text,
stderr text, -- null when status is true, filled otherwise
duration interval -- fetch duration, or NULL if still ongoing
);
-- A file-system directory. A directory is a list of directory entries (see
-- tables: directory_entry_{dir,file}).
--
-- To list the contents of a directory:
-- 1. list the contained directory_entry_dir using array dir_entries
-- 2. list the contained directory_entry_file using array file_entries
-- 3. list the contained directory_entry_rev using array rev_entries
-- 4. UNION
--
-- Synonyms/mappings:
-- * git: tree
create table directory
(
id sha1_git,
dir_entries bigint[], -- sub-directories, reference directory_entry_dir
file_entries bigint[], -- contained files, reference directory_entry_file
rev_entries bigint[], -- mounted revisions, reference directory_entry_rev
object_id bigserial -- short object identifier
);
-- A directory entry pointing to a sub-directory.
create table directory_entry_dir
(
id bigserial,
target sha1_git, -- id of target directory
name unix_path, -- path name, relative to containing dir
perms file_perms -- unix-like permissions
);
-- A directory entry pointing to a file.
create table directory_entry_file
(
id bigserial,
target sha1_git, -- id of target file
name unix_path, -- path name, relative to containing dir
perms file_perms -- unix-like permissions
);
-- A directory entry pointing to a revision.
create table directory_entry_rev
(
id bigserial,
target sha1_git, -- id of target revision
name unix_path, -- path name, relative to containing dir
perms file_perms -- unix-like permissions
);
create table person
(
id bigserial,
name bytea, -- advisory: not null if we managed to parse a name
email bytea, -- advisory: not null if we managed to parse an email
fullname bytea not null -- freeform specification; what is actually used in the checksums
-- will usually be of the form 'name <email>'
);
-- A snapshot of a software project at a specific point in time.
--
-- Synonyms/mappings:
-- * git / subversion / etc: commit
-- * tarball: a specific tarball
--
-- Revisions are organized as DAGs. Each revision points to 0, 1, or more (in
-- case of merges) parent revisions. Each revision points to a directory, i.e.,
-- a file-system tree containing files and directories.
create table revision
(
id sha1_git,
date timestamptz,
date_offset smallint,
committer_date timestamptz,
committer_date_offset smallint,
type revision_type not null,
directory sha1_git, -- file-system tree
message bytea,
author bigint,
committer bigint,
synthetic boolean not null default false, -- true if synthetic (cf. swh-loader-tar)
metadata jsonb, -- extra metadata (tarball checksums, extra commit information, etc...)
object_id bigserial,
date_neg_utc_offset boolean,
committer_date_neg_utc_offset boolean
);
-- either this table or the sha1_git[] column on the revision table
create table revision_history
(
id sha1_git,
parent_id sha1_git,
parent_rank int not null default 0
-- parent position in merge commits, 0-based
);
-- The timestamps at which Software Heritage has made a visit of the given origin.
create table origin_visit
(
origin bigint not null,
visit bigint not null,
date timestamptz not null,
status origin_visit_status not null,
metadata jsonb,
snapshot_id bigint
);
comment on column origin_visit.origin is 'Visited origin';
comment on column origin_visit.visit is 'Visit number for that origin';
comment on column origin_visit.date is 'Visit date for that origin';
comment on column origin_visit.status is 'Visit status for that origin';
comment on column origin_visit.metadata is 'Metadata associated with the visit';
comment on column origin_visit.snapshot_id is 'id of the snapshot associated with the visit';
-- The content of software origins is indexed starting from top-level pointers
-- called "branches". Every time we fetch some origin we store in this table
-- where the branches pointed to at fetch time.
--
-- Synonyms/mappings:
-- * git: ref (in the "git update-ref" sense)
create table occurrence_history
(
origin bigint not null,
branch bytea not null, -- e.g., b"master" (for VCS), or b"sid" (for Debian)
target sha1_git not null, -- ref target, e.g., commit id
target_type object_type not null, -- ref target type
visits bigint[] not null, -- the visits where that occurrence was valid. References
-- origin_visit(visit), where o_h.origin = origin_visit.origin.
object_id bigserial not null, -- short object identifier
snapshot_branch_id bigint
);
-- Materialized view of occurrence_history, storing the *current* value of each
-- branch, as last seen by SWH.
create table occurrence
(
origin bigint,
branch bytea not null,
target sha1_git not null,
target_type object_type not null
);
create table snapshot (
object_id bigserial not null,
id sha1_git
);
create table snapshot_branch (
object_id bigserial not null,
name bytea not null,
target bytea,
target_type snapshot_target
);
create table snapshot_branches (
snapshot_id bigint not null,
branch_id bigint not null
);
-- A "memorable" point in the development history of a project.
--
-- Synonyms/mappings:
-- * git: tag (of the annotated kind, otherwise they are just references)
-- * tarball: the release version number
create table release
(
id sha1_git not null,
target sha1_git,
date timestamptz,
date_offset smallint,
name bytea,
comment bytea,
author bigint,
synthetic boolean not null default false, -- true if synthetic (cf. swh-loader-tar)
object_id bigserial,
target_type object_type not null,
date_neg_utc_offset boolean
);
-- Tools
create table tool (
id serial not null,
name text not null,
version text not null,
configuration jsonb
);
comment on table tool is 'Tool information';
comment on column tool.id is 'Tool identifier';
comment on column tool.name is 'Tool name';
comment on column tool.version is 'Tool version';
comment on column tool.configuration is 'Tool configuration: command line, flags, etc...';
create table metadata_provider (
id serial not null,
provider_name text not null,
provider_type text not null,
provider_url text,
metadata jsonb
);
comment on table metadata_provider is 'Metadata provider information';
comment on column metadata_provider.id is 'Provider''s identifier';
comment on column metadata_provider.provider_name is 'Provider''s name';
comment on column metadata_provider.provider_url is 'Provider''s url';
comment on column metadata_provider.metadata is 'Other metadata about provider';
-- Discovery of metadata during a listing, loading, deposit or external_catalog of an origin
-- also provides a translation to a defined json schema using a translation tool (tool_id)
create table origin_metadata(
id bigserial not null, -- PK object identifier
origin_id bigint not null, -- references origin(id)
discovery_date timestamptz not null, -- when it was extracted
provider_id bigint not null, -- ex: 'hal', 'lister-github', 'loader-github'
tool_id bigint not null,
metadata jsonb not null
);
comment on table origin_metadata is 'keeps all metadata found concerning an origin';
comment on column origin_metadata.id is 'the origin_metadata object''s id';
comment on column origin_metadata.origin_id is 'the origin id for which the metadata was found';
comment on column origin_metadata.discovery_date is 'the date of retrieval';
comment on column origin_metadata.provider_id is 'the metadata provider: github, openhub, deposit, etc.';
comment on column origin_metadata.tool_id is 'the tool used for extracting metadata: lister-github, etc.';
comment on column origin_metadata.metadata is 'metadata in json format but with original terms';
-- Keep a cache of object counts
create table object_counts (
object_type text, -- table for which we're counting objects (PK)
value bigint, -- count of objects in the table
last_update timestamptz, -- last update for the object count in this table
single_update boolean -- whether we update this table standalone (true) or through bucketed counts (false)
);
CREATE TABLE object_counts_bucketed (
line serial NOT NULL, -- PK
object_type text NOT NULL, -- table for which we're counting objects
identifier text NOT NULL, -- identifier across which we're bucketing objects
bucket_start bytea, -- lower bound (inclusive) for the bucket
bucket_end bytea, -- upper bound (exclusive) for the bucket
value bigint, -- count of objects in the bucket
last_update timestamptz -- last update for the object count in this bucket
);
diff --git a/sql/upgrades/120.sql b/sql/upgrades/120.sql
new file mode 100644
index 00000000..a547530a
--- /dev/null
+++ b/sql/upgrades/120.sql
@@ -0,0 +1,39 @@
+-- SWH DB schema upgrade
+-- from_version: 119
+-- to_version: 120
+-- description: Drop unused functions using temporary tables
+
+insert into dbversion(version, release, description)
+ values(120, now(), 'Work In Progress');
+
+-- return statistics about the number of tuples in various SWH tables
+--
+-- Note: the returned values are based on postgres internal statistics
+-- (pg_class table), which are only updated daily (by autovacuum) or so
+create or replace function swh_stat_counters()
+ returns setof counter
+ language sql
+ stable
+as $$
+ select object_type as label, value as value
+ from object_counts
+ where object_type in (
+ 'content',
+ 'directory',
+ 'directory_entry_dir',
+ 'directory_entry_file',
+ 'directory_entry_rev',
+ 'occurrence',
+ 'occurrence_history',
+ 'origin',
+ 'origin_visit',
+ 'person',
+ 'entity',
+ 'entity_history',
+ 'release',
+ 'revision',
+ 'revision_history',
+ 'skipped_content',
+ 'snapshot'
+ );
+$$;
diff --git a/swh.storage.egg-info/PKG-INFO b/swh.storage.egg-info/PKG-INFO
index 49e0d80d..20376a4c 100644
--- a/swh.storage.egg-info/PKG-INFO
+++ b/swh.storage.egg-info/PKG-INFO
@@ -1,12 +1,12 @@
Metadata-Version: 2.1
Name: swh.storage
-Version: 0.0.104
+Version: 0.0.105
Summary: Software Heritage storage manager
Home-page: https://forge.softwareheritage.org/diffusion/DSTO/
Author: Software Heritage developers
Author-email: swh-devel@inria.fr
License: UNKNOWN
Description: UNKNOWN
Platform: UNKNOWN
Provides-Extra: listener
Provides-Extra: schemata
diff --git a/swh.storage.egg-info/SOURCES.txt b/swh.storage.egg-info/SOURCES.txt
index 752237e6..4ed2f69d 100644
--- a/swh.storage.egg-info/SOURCES.txt
+++ b/swh.storage.egg-info/SOURCES.txt
@@ -1,201 +1,201 @@
.gitignore
AUTHORS
LICENSE
MANIFEST.in
Makefile
Makefile.local
README.db_testing
README.dev
requirements-swh.txt
requirements.txt
setup.py
version.txt
bin/swh-storage-add-dir
debian/changelog
debian/compat
debian/control
debian/copyright
debian/rules
debian/source/format
docs/.gitignore
docs/Makefile
docs/Makefile.local
-docs/archiver-blueprint.rst
docs/conf.py
docs/index.rst
docs/sql-storage.rst
docs/_static/.placeholder
docs/_templates/.placeholder
sql/.gitignore
sql/Makefile
sql/TODO
sql/clusters.dot
sql/swh-data.sql
sql/swh-enums.sql
sql/swh-func.sql
sql/swh-indexes.sql
sql/swh-init.sql
sql/swh-schema.sql
sql/swh-triggers.sql
sql/bin/db-init
sql/bin/db-upgrade
sql/bin/dot_add_content
sql/doc/json
sql/doc/json/.gitignore
sql/doc/json/Makefile
sql/doc/json/entity.lister_metadata.schema.json
sql/doc/json/entity.metadata.schema.json
sql/doc/json/entity_history.lister_metadata.schema.json
sql/doc/json/entity_history.metadata.schema.json
sql/doc/json/fetch_history.result.schema.json
sql/doc/json/list_history.result.schema.json
sql/doc/json/listable_entity.list_params.schema.json
sql/doc/json/origin_visit.metadata.json
sql/doc/json/tool.tool_configuration.schema.json
sql/json/.gitignore
sql/json/Makefile
sql/json/entity.lister_metadata.schema.json
sql/json/entity.metadata.schema.json
sql/json/entity_history.lister_metadata.schema.json
sql/json/entity_history.metadata.schema.json
sql/json/fetch_history.result.schema.json
sql/json/list_history.result.schema.json
sql/json/listable_entity.list_params.schema.json
sql/json/origin_visit.metadata.json
sql/json/tool.tool_configuration.schema.json
sql/upgrades/015.sql
sql/upgrades/016.sql
sql/upgrades/017.sql
sql/upgrades/018.sql
sql/upgrades/019.sql
sql/upgrades/020.sql
sql/upgrades/021.sql
sql/upgrades/022.sql
sql/upgrades/023.sql
sql/upgrades/024.sql
sql/upgrades/025.sql
sql/upgrades/026.sql
sql/upgrades/027.sql
sql/upgrades/028.sql
sql/upgrades/029.sql
sql/upgrades/030.sql
sql/upgrades/032.sql
sql/upgrades/033.sql
sql/upgrades/034.sql
sql/upgrades/035.sql
sql/upgrades/036.sql
sql/upgrades/037.sql
sql/upgrades/038.sql
sql/upgrades/039.sql
sql/upgrades/040.sql
sql/upgrades/041.sql
sql/upgrades/042.sql
sql/upgrades/043.sql
sql/upgrades/044.sql
sql/upgrades/045.sql
sql/upgrades/046.sql
sql/upgrades/047.sql
sql/upgrades/048.sql
sql/upgrades/049.sql
sql/upgrades/050.sql
sql/upgrades/051.sql
sql/upgrades/052.sql
sql/upgrades/053.sql
sql/upgrades/054.sql
sql/upgrades/055.sql
sql/upgrades/056.sql
sql/upgrades/057.sql
sql/upgrades/058.sql
sql/upgrades/059.sql
sql/upgrades/060.sql
sql/upgrades/061.sql
sql/upgrades/062.sql
sql/upgrades/063.sql
sql/upgrades/064.sql
sql/upgrades/065.sql
sql/upgrades/066.sql
sql/upgrades/067.sql
sql/upgrades/068.sql
sql/upgrades/069.sql
sql/upgrades/070.sql
sql/upgrades/071.sql
sql/upgrades/072.sql
sql/upgrades/073.sql
sql/upgrades/074.sql
sql/upgrades/075.sql
sql/upgrades/076.sql
sql/upgrades/077.sql
sql/upgrades/078.sql
sql/upgrades/079.sql
sql/upgrades/080.sql
sql/upgrades/081.sql
sql/upgrades/082.sql
sql/upgrades/083.sql
sql/upgrades/084.sql
sql/upgrades/085.sql
sql/upgrades/086.sql
sql/upgrades/087.sql
sql/upgrades/088.sql
sql/upgrades/089.sql
sql/upgrades/090.sql
sql/upgrades/091.sql
sql/upgrades/092.sql
sql/upgrades/093.sql
sql/upgrades/094.sql
sql/upgrades/095.sql
sql/upgrades/096.sql
sql/upgrades/097.sql
sql/upgrades/098.sql
sql/upgrades/099.sql
sql/upgrades/100.sql
sql/upgrades/101.sql
sql/upgrades/102.sql
sql/upgrades/103.sql
sql/upgrades/104.sql
sql/upgrades/105.sql
sql/upgrades/106.sql
sql/upgrades/107.sql
sql/upgrades/108.sql
sql/upgrades/109.sql
sql/upgrades/110.sql
sql/upgrades/111.sql
sql/upgrades/112.sql
sql/upgrades/113.sql
sql/upgrades/114.sql
sql/upgrades/115.sql
sql/upgrades/116.sql
sql/upgrades/117.sql
sql/upgrades/118.sql
sql/upgrades/119.sql
+sql/upgrades/120.sql
swh/__init__.py
swh.storage.egg-info/PKG-INFO
swh.storage.egg-info/SOURCES.txt
swh.storage.egg-info/dependency_links.txt
swh.storage.egg-info/requires.txt
swh.storage.egg-info/top_level.txt
swh/storage/__init__.py
swh/storage/common.py
swh/storage/converters.py
swh/storage/db.py
swh/storage/db_utils.py
swh/storage/exc.py
swh/storage/listener.py
swh/storage/storage.py
swh/storage/algos/__init__.py
swh/storage/algos/diff.py
swh/storage/algos/dir_iterators.py
swh/storage/api/__init__.py
swh/storage/api/client.py
swh/storage/api/server.py
swh/storage/schemata/__init__.py
swh/storage/schemata/distribution.py
swh/storage/tests/__init__.py
swh/storage/tests/storage_testing.py
swh/storage/tests/test_api_client.py
swh/storage/tests/test_converters.py
swh/storage/tests/test_db.py
swh/storage/tests/test_storage.py
swh/storage/tests/algos/__init__.py
swh/storage/tests/algos/test_diff.py
utils/dump_revisions.py
utils/fix_revisions_from_dump.py
\ No newline at end of file
diff --git a/swh/storage/algos/diff.py b/swh/storage/algos/diff.py
index d4204ace..1d75ffe2 100644
--- a/swh/storage/algos/diff.py
+++ b/swh/storage/algos/diff.py
@@ -1,402 +1,402 @@
# Copyright (C) 2018 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
# Utility module to efficiently compute the list of changed files
# between two directory trees.
# The implementation is inspired by the work of Alberto Cortés
# for the go-git project. For more details, you can refer to:
# - this blog post: https://blog.sourced.tech/post/difftree/
# - the reference implementation in go:
# https://github.com/src-d/go-git/tree/master/utils/merkletrie
import collections
from swh.model.identifiers import directory_identifier
from .dir_iterators import (
DirectoryIterator, DoubleDirectoryIterator, Remaining
)
# get the hash identifier for an empty directory
_empty_dir_hash = directory_identifier({'entries': []})
def _get_rev(storage, rev_id):
"""
Return revision data from swh storage.
"""
return list(storage.revision_get([rev_id]))[0]
class _RevisionChangesList(object):
"""
Helper class to track the changes between two
revision directories.
"""
def __init__(self, storage, track_renaming):
"""
Args:
storage: instance of swh storage
track_renaming (bool): whether or not to track file renaming
"""
self.storage = storage
self.track_renaming = track_renaming
self.result = []
# dicts used to track file renaming based on hash value
# we use a list instead of a single entry to handle the corner
# case when a repository contains multiple instances of
# the same file in different directories and a commit
# renames all of them
self.inserted_hash_idx = collections.defaultdict(list)
self.deleted_hash_idx = collections.defaultdict(list)
def add_insert(self, it_to):
"""
Add a file insertion in the to directory.
Args:
it_to (swh.storage.algos.dir_iterators.DirectoryIterator):
iterator on the to directory
"""
to_hash = it_to.current_hash()
# if the current file hash has been previously marked as deleted,
# the file has been renamed
if self.track_renaming and self.deleted_hash_idx[to_hash]:
# pop the delete change index in the same order it was inserted
change = self.result[self.deleted_hash_idx[to_hash].pop(0)]
# change the delete change as a rename one
change['type'] = 'rename'
change['to'] = it_to.current()
change['to_path'] = it_to.current_path()
else:
# add the insert change in the list
self.result.append({'type': 'insert',
'from': None,
'from_path': None,
'to': it_to.current(),
'to_path': it_to.current_path()})
# if rename tracking is activated, add the change index in
# the inserted_hash_idx dict
if self.track_renaming:
self.inserted_hash_idx[to_hash].append(len(self.result) - 1)
def add_delete(self, it_from):
"""
Add a file deletion in the from directory.
Args:
it_from (swh.storage.algos.dir_iterators.DirectoryIterator):
iterator on the from directory
"""
from_hash = it_from.current_hash()
# if the current file has been previously marked as inserted,
# the file has been renamed
if self.track_renaming and self.inserted_hash_idx[from_hash]:
# pop the insert change index in the same order it was inserted
change = self.result[self.inserted_hash_idx[from_hash].pop(0)]
# change the insert change as a rename one
change['type'] = 'rename'
change['from'] = it_from.current()
change['from_path'] = it_from.current_path()
else:
# add the delete change in the list
self.result.append({'type': 'delete',
'from': it_from.current(),
'from_path': it_from.current_path(),
'to': None,
'to_path': None})
# if rename tracking is activated, add the change index in
# the deleted_hash_idx dict
if self.track_renaming:
self.deleted_hash_idx[from_hash].append(len(self.result) - 1)
def add_modify(self, it_from, it_to):
"""
Add a file modification in the to directory.
Args:
it_from (swh.storage.algos.dir_iterators.DirectoryIterator):
iterator on the from directory
it_to (swh.storage.algos.dir_iterators.DirectoryIterator):
iterator on the to directory
"""
self.result.append({'type': 'modify',
'from': it_from.current(),
'from_path': it_from.current_path(),
'to': it_to.current(),
'to_path': it_to.current_path()})
def add_recursive(self, it, insert):
"""
Recursively add changes from a directory.
Args:
it (swh.storage.algos.dir_iterators.DirectoryIterator):
iterator on a directory
insert (bool): the type of changes to add (insertion
or deletion)
"""
# current iterated element is a regular file,
# simply add adequate change in the list
if not it.current_is_dir():
if insert:
self.add_insert(it)
else:
self.add_delete(it)
return
# current iterated element is a directory,
dir_id = it.current_hash()
# handle empty dir insertion/deletion, as the swh model allows
# such objects while git does not
if dir_id == _empty_dir_hash:
if insert:
self.add_insert(it)
else:
self.add_delete(it)
# iterate on files reachable from it and add
# adequate changes in the list
else:
sub_it = DirectoryIterator(self.storage, dir_id,
it.current_path() + b'/')
sub_it_current = sub_it.step()
while sub_it_current:
if not sub_it.current_is_dir():
if insert:
self.add_insert(sub_it)
else:
self.add_delete(sub_it)
sub_it_current = sub_it.step()
def add_recursive_insert(self, it_to):
"""
Recursively add file insertions from a to directory.
Args:
it_to (swh.storage.algos.dir_iterators.DirectoryIterator):
iterator on a to directory
"""
self.add_recursive(it_to, True)
def add_recursive_delete(self, it_from):
"""
Recursively add file deletions from a from directory.
Args:
it_from (swh.storage.algos.dir_iterators.DirectoryIterator):
iterator on a from directory
"""
self.add_recursive(it_from, False)
def _diff_elts_same_name(changes, it):
"""
Compare two directory entries with the same name and add adequate
changes if any.
Args:
changes (_RevisionChangesList): the list of changes between
two revisions
it (swh.storage.algos.dir_iterators.DoubleDirectoryIterator):
the iterator traversing two revision directories at the same time
"""
# compare the two current directory elements of the iterator
status = it.compare()
# elements have same hash and same permissions:
# no changes to add and call next on the two iterators
if status['same_hash'] and status['same_perms']:
it.next_both()
# elements are regular files and have been modified:
# insert the modification change in the list and
# call next on the two iterators
elif status['both_are_files']:
changes.add_modify(it.it_from, it.it_to)
it.next_both()
# one element is a regular file, the other a directory:
# recursively add delete/insert changes and call next
# on the two iterators
elif status['file_and_dir']:
changes.add_recursive_delete(it.it_from)
changes.add_recursive_insert(it.it_to)
it.next_both()
# both elements are directories:
elif status['both_are_dirs']:
# from directory is empty:
# recursively add insert changes in the to directory
# and call next on the two iterators
if status['from_is_empty_dir']:
changes.add_recursive_insert(it.it_to)
it.next_both()
# to directory is empty:
# recursively add delete changes in the from directory
# and call next on the two iterators
elif status['to_is_empty_dir']:
changes.add_recursive_delete(it.it_from)
it.next_both()
# both directories are not empty:
# call step on the two iterators to descend further in
# the directory trees.
elif not status['from_is_empty_dir'] and not status['to_is_empty_dir']:
it.step_both()
def _compare_paths(path1, path2):
"""
Compare paths in lexicographic depth-first order.
For instance, it returns:
- "a" < "b"
- "b/c/d" < "b"
- "c/foo.txt" < "c.txt"
"""
path1_parts = path1.split(b'/')
path2_parts = path2.split(b'/')
i = 0
while True:
if len(path1_parts) == len(path2_parts) and i == len(path1_parts):
return 0
elif len(path2_parts) == i:
return 1
elif len(path1_parts) == i:
return -1
else:
if path2_parts[i] > path1_parts[i]:
return -1
elif path2_parts[i] < path1_parts[i]:
return 1
i = i + 1
def _diff_elts(changes, it):
"""
Compare two directory entries.
Args:
changes (_RevisionChangesList): the list of changes between
two revisions
it (swh.storage.algos.dir_iterators.DoubleDirectoryIterator):
the iterator traversing two revision directories at the same time
"""
# compare current to and from path in depth-first lexicographic order
c = _compare_paths(it.it_from.current_path(), it.it_to.current_path())
# current from path is lower than the current to path:
# the from path has been deleted
if c < 0:
changes.add_recursive_delete(it.it_from)
it.next_from()
- # current from path is greather than the current to path:
+ # current from path is greater than the current to path:
# the to path has been inserted
elif c > 0:
changes.add_recursive_insert(it.it_to)
it.next_to()
# paths are the same and need more processing
else:
_diff_elts_same_name(changes, it)
def diff_directories(storage, from_dir, to_dir, track_renaming=False):
"""
Compute the differential between two directories, i.e. the list of
file changes (insertion / deletion / modification / renaming)
between them.
Args:
storage (swh.storage.storage.Storage): instance of a swh
storage (either local or remote, for optimal performance
the use of a local storage is recommended)
from_dir (bytes): the swh identifier of the directory to compare from
to_dir (bytes): the swh identifier of the directory to compare to
track_renaming (bool): whether or not to track file renaming
Returns:
list: A list of dict representing the changes between the two
revisions. Each dict contains the following entries:
- *type*: a string describing the type of change
('insert' / 'delete' / 'modify' / 'rename')
- *from*: a dict containing the directory entry metadata in the
from revision (None in case of an insertion)
- *from_path*: bytes string corresponding to the absolute path
of the from revision entry (None in case of an insertion)
- *to*: a dict containing the directory entry metadata in the
to revision (None in case of a deletion)
- *to_path*: bytes string corresponding to the absolute path
of the to revision entry (None in case of a deletion)
The returned list is sorted in lexicographic depth-first order
according to the value of the *to_path* field.
"""
changes = _RevisionChangesList(storage, track_renaming)
it = DoubleDirectoryIterator(storage, from_dir, to_dir)
while True:
r = it.remaining()
if r == Remaining.NoMoreFiles:
break
elif r == Remaining.OnlyFromFilesRemain:
changes.add_recursive_delete(it.it_from)
it.next_from()
elif r == Remaining.OnlyToFilesRemain:
changes.add_recursive_insert(it.it_to)
it.next_to()
else:
_diff_elts(changes, it)
return changes.result
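# Usage sketch (added for illustration, not part of the original module):
# print the changes between two directory trees. `storage` is assumed to be a
# swh.storage.storage.Storage instance; `from_dir` and `to_dir` are directory
# identifiers (bytes).
def _example_print_directory_diff(storage, from_dir, to_dir):
    # each change dict carries the 'type', 'from', 'from_path', 'to' and
    # 'to_path' keys documented in diff_directories above
    for change in diff_directories(storage, from_dir, to_dir,
                                   track_renaming=True):
        print(change['type'], change['from_path'], change['to_path'])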
def diff_revisions(storage, from_rev, to_rev, track_renaming=False):
"""
Compute the differential between two revisions,
i.e. the list of file changes between the two associated directories.
Args:
storage (swh.storage.storage.Storage): instance of a swh
storage (either local or remote, for optimal performance
the use of a local storage is recommended)
from_rev (bytes): the identifier of the revision to compare from
to_rev (bytes): the identifier of the revision to compare to
track_renaming (bool): whether or not to track file renaming
Returns:
list: A list of dict describing the introduced file changes
(see :func:`swh.storage.algos.diff.diff_directories`).
"""
from_dir = None
if from_rev:
from_dir = _get_rev(storage, from_rev)['directory']
to_dir = _get_rev(storage, to_rev)['directory']
return diff_directories(storage, from_dir, to_dir, track_renaming)
def diff_revision(storage, revision, track_renaming=False):
"""
Computes the differential between a revision and its first parent.
If the revision has no parents, the directory to compare from
is considered as empty.
In other words, it computes the file changes introduced in a
specific revision.
Args:
storage (swh.storage.storage.Storage): instance of a swh
storage (either local or remote, for optimal performance
the use of a local storage is recommended)
revision (bytes): the identifier of the revision from which to
compute the introduced changes.
track_renaming (bool): whether or not to track file renaming
Returns:
list: A list of dict describing the introduced file changes
(see :func:`swh.storage.algos.diff.diff_directories`).
"""
rev_data = _get_rev(storage, revision)
parent = None
if rev_data['parents']:
parent = rev_data['parents'][0]
return diff_revisions(storage, parent, revision, track_renaming)
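# Usage sketch (added for illustration, not part of the original module):
# summarize the file changes introduced by a single revision, as computed
# against its first parent (or an empty tree for a root revision). `storage`
# and `rev_id` are assumed to be provided by the caller.
def _example_count_changes(storage, rev_id):
    changes = diff_revision(storage, rev_id, track_renaming=True)
    # tally the changes per type: 'insert', 'delete', 'modify', 'rename'
    return collections.Counter(change['type'] for change in changes)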
diff --git a/swh/storage/storage.py b/swh/storage/storage.py
index b45f4748..2cc44016 100644
--- a/swh/storage/storage.py
+++ b/swh/storage/storage.py
@@ -1,1547 +1,1547 @@
# Copyright (C) 2015-2018 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
import datetime
import itertools
import json
import dateutil.parser
import psycopg2
import psycopg2.pool
from . import converters
from .common import db_transaction_generator, db_transaction
from .db import Db
from .exc import StorageDBError
from .algos import diff
from swh.model.hashutil import ALGORITHMS
from swh.objstorage import get_objstorage
from swh.objstorage.exc import ObjNotFoundError
# Max block size of contents to return
BULK_BLOCK_CONTENT_LEN_MAX = 10000
class Storage():
"""SWH storage proxy, encompassing DB and object storage
"""
def __init__(self, db, objstorage, min_pool_conns=1, max_pool_conns=10):
"""
Args:
db: either a libpq connection string, or a psycopg2 connection
objstorage: keyword arguments for swh.objstorage.get_objstorage (object storage configuration)
"""
try:
if isinstance(db, psycopg2.extensions.connection):
self._pool = None
self._db = Db(db)
else:
self._pool = psycopg2.pool.ThreadedConnectionPool(
min_pool_conns, max_pool_conns, db
)
self._db = None
except psycopg2.OperationalError as e:
raise StorageDBError(e)
self.objstorage = get_objstorage(**objstorage)
def get_db(self):
if self._db:
return self._db
else:
return Db.from_pool(self._pool)
def check_config(self, *, check_write):
"""Check that the storage is configured and ready to go."""
if not self.objstorage.check_config(check_write=check_write):
return False
# Check permissions on one of the tables
with self.get_db().transaction() as cur:
if check_write:
check = 'INSERT'
else:
check = 'SELECT'
cur.execute(
"select has_table_privilege(current_user, 'content', %s)",
(check,)
)
return cur.fetchone()[0]
return True
def content_add(self, content):
"""Add content blobs to the storage
Note: in case of DB errors, objects might have already been added to
the object storage and will not be removed. Since addition to the
object storage is idempotent, that should not be a problem.
Args:
content (iterable): iterable of dictionaries representing
individual pieces of content to add. Each dictionary has the
following keys:
- data (bytes): the actual content
- length (int): content length (default: -1)
- one key for each checksum algorithm in
:data:`swh.model.hashutil.ALGORITHMS`, mapped to the
corresponding checksum
- status (str): one of visible, hidden, absent
- reason (str): if status = absent, the reason why
- origin (int): if status = absent, the origin we saw the
content in
"""
db = self.get_db()
def _unique_key(hash, keys=db.content_hash_keys):
"""Given a hash (tuple or dict), return a unique key from the
aggregation of keys.
"""
if isinstance(hash, tuple):
return hash
return tuple([hash[k] for k in keys])
content_by_status = defaultdict(list)
for d in content:
if 'status' not in d:
d['status'] = 'visible'
if 'length' not in d:
d['length'] = -1
content_by_status[d['status']].append(d)
content_with_data = content_by_status['visible']
content_without_data = content_by_status['absent']
missing_content = set(self.content_missing(content_with_data))
missing_skipped = set(_unique_key(hashes) for hashes
in self.skipped_content_missing(
content_without_data))
def add_to_objstorage():
data = {
cont['sha1']: cont['data']
for cont in content_with_data
if cont['sha1'] in missing_content
}
self.objstorage.add_batch(data)
with db.transaction() as cur:
with ThreadPoolExecutor(max_workers=1) as executor:
added_to_objstorage = executor.submit(add_to_objstorage)
if missing_content:
# create temporary table for metadata injection
db.mktemp('content', cur)
content_filtered = (cont for cont in content_with_data
if cont['sha1'] in missing_content)
db.copy_to(content_filtered, 'tmp_content',
db.content_get_metadata_keys, cur)
# move metadata in place
db.content_add_from_temp(cur)
if missing_skipped:
missing_filtered = (
cont for cont in content_without_data
if _unique_key(cont) in missing_skipped
)
db.mktemp('skipped_content', cur)
db.copy_to(missing_filtered, 'tmp_skipped_content',
db.skipped_content_keys, cur)
# move metadata in place
db.skipped_content_add_from_temp(cur)
# Wait for objstorage addition before returning from the
# transaction, bubbling up any exception
added_to_objstorage.result()
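# Usage sketch (added for illustration, not part of the original module):
# build a content dict carrying the four supported checksums and add it to
# the archive. `storage` is assumed to be a configured Storage instance.
def _example_add_content(storage, data=b'hello world\n'):
    import hashlib
    # sha1_git is the git blob identifier: sha1 over a 'blob <length>\0'
    # header followed by the raw data
    header = b'blob ' + str(len(data)).encode() + b'\x00'
    storage.content_add([{
        'data': data,
        'length': len(data),
        'sha1': hashlib.sha1(data).digest(),
        'sha1_git': hashlib.sha1(header + data).digest(),
        'sha256': hashlib.sha256(data).digest(),
        'blake2s256': hashlib.blake2s(data, digest_size=32).digest(),
        'status': 'visible',
    }])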
@db_transaction()
def content_update(self, content, keys=[], db=None, cur=None):
"""Update content blobs to the storage. Does nothing for unknown
contents or skipped ones.
Args:
content (iterable): iterable of dictionaries representing
individual pieces of content to update. Each dictionary has the
following keys:
- data (bytes): the actual content
- length (int): content length (default: -1)
- one key for each checksum algorithm in
:data:`swh.model.hashutil.ALGORITHMS`, mapped to the
corresponding checksum
- status (str): one of visible, hidden, absent
keys (list): List of keys (str) whose values need an update, e.g.,
new hash column
"""
# TODO: Add a check on input keys. How to properly implement
# this? We don't know yet the new columns.
db.mktemp('content', cur)
select_keys = list(set(db.content_get_metadata_keys).union(set(keys)))
db.copy_to(content, 'tmp_content', select_keys, cur)
db.content_update_from_temp(keys_to_update=keys,
cur=cur)
def content_get(self, content):
"""Retrieve in bulk contents and their data.
Args:
content: iterables of sha1
Yields:
dict: Generates streams of contents as dict with their raw data:
- sha1: sha1's content
- data: bytes data of the content
Raises:
ValueError: if too many contents are requested
(cf. BULK_BLOCK_CONTENT_LEN_MAX).
"""
# FIXME: Improve on server module to slice the result
if len(content) > BULK_BLOCK_CONTENT_LEN_MAX:
raise ValueError(
"Send at maximum %s contents." % BULK_BLOCK_CONTENT_LEN_MAX)
for obj_id in content:
try:
data = self.objstorage.get(obj_id)
except ObjNotFoundError:
yield None
continue
yield {'sha1': obj_id, 'data': data}
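# Usage sketch (added for illustration, not part of the original module):
# fetch the raw data back for a list of sha1s, skipping unknown objects.
def _example_fetch_contents(storage, sha1s):
    for result in storage.content_get(sha1s):
        if result is None:  # unknown object id
            continue
        print(result['sha1'], len(result['data']))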
@db_transaction_generator(statement_timeout=500)
def content_get_metadata(self, content, db=None, cur=None):
"""Retrieve content metadata in bulk
Args:
content: iterable of content identifiers (sha1)
Returns:
an iterable with content metadata corresponding to the given ids
"""
for metadata in db.content_get_metadata_from_sha1s(content, cur):
yield dict(zip(db.content_get_metadata_keys, metadata))
@db_transaction_generator()
def content_missing(self, content, key_hash='sha1', db=None, cur=None):
"""List content missing from storage
Args:
content ([dict]): iterable of dictionaries containing one
key for each checksum algorithm in
:data:`swh.model.hashutil.ALGORITHMS`,
mapped to the corresponding checksum,
and a length key mapped to the content
length.
key_hash (str): name of the column to use as hash id
result (default: 'sha1')
Returns:
iterable ([bytes]): missing content ids (as per the
key_hash column)
Raises:
TODO: an exception when we get a hash collision.
"""
keys = db.content_hash_keys
if key_hash not in keys:
raise ValueError("key_hash should be one of %s" % keys)
key_hash_idx = keys.index(key_hash)
if not content:
return
for obj in db.content_missing_from_list(content, cur):
yield obj[key_hash_idx]
@db_transaction_generator()
def content_missing_per_sha1(self, contents, db=None, cur=None):
"""List content missing from storage based only on sha1.
Args:
contents: Iterable of sha1 to check for absence.
Returns:
iterable: missing ids
Raises:
TODO: an exception when we get a hash collision.
"""
for obj in db.content_missing_per_sha1(contents, cur):
yield obj[0]
@db_transaction_generator()
def skipped_content_missing(self, content, db=None, cur=None):
"""List skipped_content missing from storage
Args:
content: iterable of dictionaries containing the data for each
checksum algorithm.
Returns:
iterable: missing signatures
"""
keys = db.content_hash_keys
db.mktemp('skipped_content', cur)
db.copy_to(content, 'tmp_skipped_content',
keys + ['length', 'reason'], cur)
yield from db.skipped_content_missing_from_temp(cur)
@db_transaction()
def content_find(self, content, db=None, cur=None):
"""Find a content hash in db.
Args:
content: a dictionary representing one content hash, mapping
checksum algorithm names (see swh.model.hashutil.ALGORITHMS) to
checksum values
Returns:
a dict with the content's hashes and metadata if the content exists,
or None otherwise.
Raises:
ValueError: in case the dictionary contains none of sha1, sha1_git,
sha256 or blake2s256.
"""
if not set(content).intersection(ALGORITHMS):
raise ValueError('content keys must contain at least one of: '
'sha1, sha1_git, sha256, blake2s256')
c = db.content_find(sha1=content.get('sha1'),
sha1_git=content.get('sha1_git'),
sha256=content.get('sha256'),
blake2s256=content.get('blake2s256'),
cur=cur)
if c:
return dict(zip(db.content_find_cols, c))
return None
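# Usage sketch (added for illustration, not part of the original module):
# check whether a content is already archived, from a single checksum; any
# subset of sha1, sha1_git, sha256 and blake2s256 may be provided.
def _example_content_known(storage, sha1_git):
    return storage.content_find({'sha1_git': sha1_git}) is not None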
def directory_add(self, directories):
"""Add directories to the storage
Args:
directories (iterable): iterable of dictionaries representing the
individual directories to add. Each dict has the following
keys:
- id (sha1_git): the id of the directory to add
- entries (list): list of dicts for each entry in the
directory. Each dict has the following keys:
- name (bytes)
- type (one of 'file', 'dir', 'rev'): type of the
directory entry (file, directory, revision)
- target (sha1_git): id of the object pointed at by the
directory entry
- perms (int): entry permissions
"""
dirs = set()
dir_entries = {
'file': defaultdict(list),
'dir': defaultdict(list),
'rev': defaultdict(list),
}
for cur_dir in directories:
dir_id = cur_dir['id']
dirs.add(dir_id)
for src_entry in cur_dir['entries']:
entry = src_entry.copy()
entry['dir_id'] = dir_id
dir_entries[entry['type']][dir_id].append(entry)
dirs_missing = set(self.directory_missing(dirs))
if not dirs_missing:
return
db = self.get_db()
with db.transaction() as cur:
# Copy directory ids
dirs_missing_dict = ({'id': dir} for dir in dirs_missing)
db.mktemp('directory', cur)
db.copy_to(dirs_missing_dict, 'tmp_directory', ['id'], cur)
# Copy entries
for entry_type, entry_list in dir_entries.items():
entries = itertools.chain.from_iterable(
entries_for_dir
for dir_id, entries_for_dir
in entry_list.items()
if dir_id in dirs_missing)
db.mktemp_dir_entry(entry_type)
db.copy_to(
entries,
'tmp_directory_entry_%s' % entry_type,
['target', 'name', 'perms', 'dir_id'],
cur,
)
# Do the final copy
db.directory_add_from_temp(cur)
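# Usage sketch (added for illustration, not part of the original module):
# register a one-entry directory pointing at an already archived content.
# The permission value is the usual git mode for a regular file and is only
# an example; `dir_id` is assumed to be the directory's computed identifier.
def _example_add_directory(storage, dir_id, file_sha1_git):
    storage.directory_add([{
        'id': dir_id,
        'entries': [{
            'name': b'README',
            'type': 'file',
            'target': file_sha1_git,
            'perms': 0o100644,
        }],
    }])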
@db_transaction_generator()
def directory_missing(self, directories, db=None, cur=None):
"""List directories missing from storage
Args:
directories (iterable): an iterable of directory ids
Yields:
missing directory ids
"""
for obj in db.directory_missing_from_list(directories, cur):
yield obj[0]
- @db_transaction_generator(statement_timeout=2000)
+ @db_transaction_generator(statement_timeout=20000)
def directory_ls(self, directory, recursive=False, db=None, cur=None):
"""Get entries for one directory.
Args:
- directory: the directory to list entries from.
- recursive: if the flag is on, list entries recursively from this directory.
Returns:
List of entries for such directory.
"""
if recursive:
res_gen = db.directory_walk(directory, cur=cur)
else:
res_gen = db.directory_walk_one(directory, cur=cur)
for line in res_gen:
yield dict(zip(db.directory_ls_cols, line))
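# Usage sketch (added for illustration, not part of the original module):
# recursively walk a directory and print its entries, assuming the usual
# 'type', 'name' and 'target' columns are present in the yielded dicts.
def _example_walk_directory(storage, dir_id):
    for entry in storage.directory_ls(dir_id, recursive=True):
        print(entry['type'], entry['name'], entry['target'])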
@db_transaction(statement_timeout=2000)
def directory_entry_get_by_path(self, directory, paths, db=None, cur=None):
"""Get the directory entry (either file or dir) from directory with path.
Args:
- directory: sha1 of the top level directory
- paths: path to lookup from the top level directory. From left
(top) to right (bottom).
Returns:
The corresponding directory entry if found, None otherwise.
"""
res = db.directory_entry_get_by_path(directory, paths, cur)
if res:
return dict(zip(db.directory_ls_cols, res))
def revision_add(self, revisions):
"""Add revisions to the storage
Args:
revisions (iterable): iterable of dictionaries representing the
individual revisions to add. Each dict has the following keys:
- id (sha1_git): id of the revision to add
- date (datetime.DateTime): date the revision was written
- date_offset (int): offset from UTC in minutes the revision
was written
- date_neg_utc_offset (boolean): whether a null date_offset
represents a negative UTC offset
- committer_date (datetime.DateTime): date the revision got
added to the origin
- committer_date_offset (int): offset from UTC in minutes the
revision was added to the origin
- committer_date_neg_utc_offset (boolean): whether a null
committer_date_offset represents a negative UTC offset
- type (one of 'git', 'tar'): type of the revision added
- directory (sha1_git): the directory the revision points at
- message (bytes): the message associated with the revision
- author_name (bytes): the name of the revision author
- author_email (bytes): the email of the revision author
- committer_name (bytes): the name of the revision committer
- committer_email (bytes): the email of the revision committer
- metadata (jsonb): extra information as dictionary
- synthetic (bool): revision's nature (tarball, directory
creates synthetic revision)
- parents (list of sha1_git): the parents of this revision
"""
db = self.get_db()
revisions_missing = set(self.revision_missing(
set(revision['id'] for revision in revisions)))
if not revisions_missing:
return
with db.transaction() as cur:
db.mktemp_revision(cur)
revisions_filtered = (
converters.revision_to_db(revision) for revision in revisions
if revision['id'] in revisions_missing)
parents_filtered = []
db.copy_to(
revisions_filtered, 'tmp_revision', db.revision_add_cols,
cur,
lambda rev: parents_filtered.extend(rev['parents']))
db.revision_add_from_temp(cur)
db.copy_to(parents_filtered, 'revision_history',
['id', 'parent_id', 'parent_rank'], cur)
@db_transaction_generator()
def revision_missing(self, revisions, db=None, cur=None):
"""List revisions missing from storage
Args:
revisions (iterable): revision ids
Yields:
missing revision ids
"""
if not revisions:
return
for obj in db.revision_missing_from_list(revisions, cur):
yield obj[0]
@db_transaction_generator(statement_timeout=500)
def revision_get(self, revisions, db=None, cur=None):
"""Get all revisions from storage
Args:
revisions: an iterable of revision ids
Returns:
iterable: an iterable of revisions as dictionaries (or None if the
revision doesn't exist)
"""
for line in db.revision_get_from_list(revisions, cur):
data = converters.db_to_revision(
dict(zip(db.revision_get_cols, line))
)
if not data['type']:
yield None
continue
yield data
@db_transaction_generator(statement_timeout=2000)
def revision_log(self, revisions, limit=None, db=None, cur=None):
"""Fetch revision entry from the given root revisions.
Args:
revisions: array of root revision to lookup
limit: limitation on the output result. Default to None.
Yields:
List of revision log from such revisions root.
"""
for line in db.revision_log(revisions, limit, cur):
data = converters.db_to_revision(
dict(zip(db.revision_get_cols, line))
)
if not data['type']:
yield None
continue
yield data
@db_transaction_generator(statement_timeout=2000)
def revision_shortlog(self, revisions, limit=None, db=None, cur=None):
"""Fetch the shortlog for the given revisions
Args:
revisions: list of root revisions to lookup
limit: depth limitation for the output
Yields:
a list of (id, parents) tuples.
"""
yield from db.revision_shortlog(revisions, limit, cur)
@db_transaction_generator(statement_timeout=2000)
def revision_log_by(self, origin_id, branch_name=None, timestamp=None,
limit=None, db=None, cur=None):
"""Fetch the revision log starting from the given origin's latest revision.
Args:
origin_id: the origin id from which deriving the revision
branch_name: (optional) occurrence's branch name
timestamp: (optional) occurrence's time
limit: (optional) depth limitation for the
output. Default to None.
Yields:
The revision log starting from the revision derived from
the (origin, branch_name, timestamp) combination if any.
Returns:
None if no revision matching this combination is found.
"""
# Retrieve the revision by criterion
revisions = list(db.revision_get_by(
origin_id, branch_name, timestamp, limit=1, cur=cur))
if not revisions:
return None
revision_id = revisions[0][0]
# otherwise, retrieve the revision log from that revision
yield from self.revision_log([revision_id], limit, db=db, cur=cur)
def release_add(self, releases):
"""Add releases to the storage
Args:
releases (iterable): iterable of dictionaries representing the
individual releases to add. Each dict has the following keys:
- id (sha1_git): id of the release to add
- revision (sha1_git): id of the revision the release points to
- date (datetime.DateTime): the date the release was made
- date_offset (int): offset from UTC in minutes the release was
made
- date_neg_utc_offset (boolean): whether a null date_offset
represents a negative UTC offset
- name (bytes): the name of the release
- comment (bytes): the comment associated with the release
- author_name (bytes): the name of the release author
- author_email (bytes): the email of the release author
"""
db = self.get_db()
release_ids = set(release['id'] for release in releases)
releases_missing = set(self.release_missing(release_ids))
if not releases_missing:
return
with db.transaction() as cur:
db.mktemp_release(cur)
releases_filtered = (
converters.release_to_db(release) for release in releases
if release['id'] in releases_missing
)
db.copy_to(releases_filtered, 'tmp_release', db.release_add_cols,
cur)
db.release_add_from_temp(cur)
@db_transaction_generator()
def release_missing(self, releases, db=None, cur=None):
"""List releases missing from storage
Args:
releases: an iterable of release ids
Returns:
a list of missing release ids
"""
if not releases:
return
for obj in db.release_missing_from_list(releases, cur):
yield obj[0]
@db_transaction_generator(statement_timeout=500)
def release_get(self, releases, db=None, cur=None):
"""Given a list of sha1, return the releases' information
Args:
releases: list of sha1s
Yields:
releases: list of releases as dicts carrying the release data, notably:
- id: the release id
- target, target_type: the object the release points to
- date: the release date
- name: the release name
- author: the release author
- synthetic: whether the release is synthetic
"""
for release in db.release_get_from_list(releases, cur):
yield converters.db_to_release(
dict(zip(db.release_get_cols, release))
)
@db_transaction()
def snapshot_add(self, origin, visit, snapshot, back_compat=False,
db=None, cur=None):
"""Add a snapshot for the given origin/visit couple
Args:
origin (int): id of the origin
visit (int): id of the visit
snapshot (dict): the snapshot to add to the visit, containing the
following keys:
- **id** (:class:`bytes`): id of the snapshot
- **branches** (:class:`dict`): branches the snapshot contains,
mapping the branch name (:class:`bytes`) to the branch target,
itself a :class:`dict` (or ``None`` if the branch points to an
unknown object)
- **target_type** (:class:`str`): one of ``content``,
``directory``, ``revision``, ``release``,
``snapshot``, ``alias``
- **target** (:class:`bytes`): identifier of the target
(currently a ``sha1_git`` for all object kinds, or the name
of the target branch for aliases)
back_compat (bool): whether to add the occurrences for
backwards-compatibility
"""
if not db.snapshot_exists(snapshot['id'], cur):
db.mktemp_snapshot_branch(cur)
db.copy_to(
(
{
'name': name,
'target': info['target'] if info else None,
'target_type': info['target_type'] if info else None,
}
for name, info in snapshot['branches'].items()
),
'tmp_snapshot_branch',
['name', 'target', 'target_type'],
cur,
)
db.snapshot_add(origin, visit, snapshot['id'], cur)
if not back_compat:
return
# TODO: drop this compat feature
occurrences = []
for name, info in snapshot['branches'].items():
if not info:
target = b'\x00' * 20
target_type = 'revision'
elif info['target_type'] == 'alias':
continue
else:
target = info['target']
target_type = info['target_type']
occurrences.append({
'origin': origin,
'visit': visit,
'branch': name,
'target': target,
'target_type': target_type,
})
self.occurrence_add(occurrences, db=db, cur=cur)
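# Usage sketch (added for illustration, not part of the original module):
# record a two-branch snapshot for a given origin visit; HEAD is stored as an
# alias pointing to the master branch. The snapshot id is assumed to have
# been computed by the caller.
def _example_add_snapshot(storage, origin_id, visit_id, snapshot_id, rev_id):
    storage.snapshot_add(origin_id, visit_id, {
        'id': snapshot_id,
        'branches': {
            b'refs/heads/master': {'target': rev_id,
                                   'target_type': 'revision'},
            b'HEAD': {'target': b'refs/heads/master',
                      'target_type': 'alias'},
        },
    })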
@db_transaction(statement_timeout=2000)
def snapshot_get(self, snapshot_id, db=None, cur=None):
"""Get the snapshot with the given id
Args:
snapshot_id (bytes): id of the snapshot
Returns:
dict: a snapshot with two keys:
id:: identifier for the snapshot
branches:: a dictionary containing the snapshot branch information
"""
branches = {}
for branch in db.snapshot_get_by_id(snapshot_id, cur):
branch = dict(zip(db.snapshot_get_cols, branch))
del branch['snapshot_id']
name = branch.pop('name')
if branch == {'target': None, 'target_type': None}:
branch = None
branches[name] = branch
if branches:
return {'id': snapshot_id, 'branches': branches}
if db.snapshot_exists(snapshot_id, cur):
# empty snapshot
return {'id': snapshot_id, 'branches': {}}
return None
@db_transaction(statement_timeout=2000)
def snapshot_get_by_origin_visit(self, origin, visit, db=None, cur=None):
"""Get the snapshot for the given origin visit
Args:
origin (int): the origin identifier
visit (int): the visit identifier
Returns:
dict: a snapshot with two keys:
id:: identifier for the snapshot
branches:: a dictionary containing the snapshot branch information
"""
snapshot_id = db.snapshot_get_by_origin_visit(origin, visit, cur)
if snapshot_id:
return self.snapshot_get(snapshot_id, db=db, cur=cur)
else:
# compatibility code during the snapshot migration
origin_visit_info = self.origin_visit_get_by(origin, visit,
db=db, cur=cur)
if origin_visit_info is None:
return None
ret = {'id': None}
ret['branches'] = origin_visit_info['occurrences']
return ret
return None
@db_transaction(statement_timeout=2000)
def snapshot_get_latest(self, origin, allowed_statuses=None, db=None,
cur=None):
"""Get the latest snapshot for the given origin, optionally only from visits
that have one of the given allowed_statuses.
Args:
origin (int): the origin identifier
allowed_statuses (list of str): list of visit statuses considered
to find the latest snapshot for the visit. For instance,
``allowed_statuses=['full']`` will only consider visits that
have successfully run to completion.
Returns:
dict: a snapshot with two keys:
id:: identifier for the snapshot
branches:: a dictionary containing the snapshot branch information
"""
origin_visit = db.origin_visit_get_latest_snapshot(
origin, allowed_statuses=allowed_statuses, cur=cur)
if origin_visit:
origin_visit = dict(zip(db.origin_visit_get_cols, origin_visit))
return self.snapshot_get(origin_visit['snapshot'], db=db, cur=cur)
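# Usage sketch (added for illustration, not part of the original module):
# fetch the latest snapshot taken by a visit that ran to completion, as
# suggested by the docstring above.
def _example_latest_full_snapshot(storage, origin_id):
    return storage.snapshot_get_latest(origin_id, allowed_statuses=['full'])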
@db_transaction()
def occurrence_add(self, occurrences, db=None, cur=None):
"""Add occurrences to the storage
Args:
occurrences: iterable of dictionaries representing the individual
occurrences to add. Each dict has the following keys:
- origin (int): id of the origin corresponding to the
occurrence
- visit (int): id of the visit corresponding to the
occurrence
- branch (str): the reference name of the occurrence
- target (sha1_git): the id of the object pointed to by
the occurrence
- target_type (str): the type of object pointed to by the
occurrence
"""
db.mktemp_occurrence_history(cur)
db.copy_to(occurrences, 'tmp_occurrence_history',
['origin', 'branch', 'target', 'target_type', 'visit'], cur)
db.occurrence_history_add_from_temp(cur)
@db_transaction_generator(statement_timeout=2000)
def occurrence_get(self, origin_id, db=None, cur=None):
"""Retrieve occurrence information per origin_id.
Args:
origin_id: The occurrence's origin.
Yields:
List of occurrences matching criterion.
"""
for line in db.occurrence_get(origin_id, cur):
yield {
'origin': line[0],
'branch': line[1],
'target': line[2],
'target_type': line[3],
}
@db_transaction()
def origin_visit_add(self, origin, ts, db=None, cur=None):
"""Add an origin_visit for the origin at ts with status 'ongoing'.
Args:
origin: Visited Origin id
ts: timestamp of such visit
Returns:
dict: dictionary with keys origin and visit:
- origin: the origin identifier
- visit: the visit identifier for the new visit occurrence
"""
if isinstance(ts, str):
ts = dateutil.parser.parse(ts)
return {
'origin': origin,
'visit': db.origin_visit_add(origin, ts, cur)
}
@db_transaction()
def origin_visit_update(self, origin, visit_id, status, metadata=None,
db=None, cur=None):
"""Update an origin_visit's status.
Args:
origin: Visited Origin id
visit_id: Visit's id
status: Visit's new status
metadata: Data associated to the visit
Returns:
None
"""
return db.origin_visit_update(origin, visit_id, status, metadata, cur)
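# Usage sketch (added for illustration, not part of the original module):
# typical visit lifecycle: open a visit, load the origin, then close the
# visit with a final status ('full' is the value used elsewhere in this
# module for successfully completed visits).
def _example_visit_lifecycle(storage, origin_id):
    visit = storage.origin_visit_add(
        origin_id, datetime.datetime.now(tz=datetime.timezone.utc))
    # ... fetch and load the origin's content here ...
    storage.origin_visit_update(origin_id, visit['visit'], status='full',
                                metadata={'example': True})
    return visit['visit']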
@db_transaction_generator(statement_timeout=500)
def origin_visit_get(self, origin, last_visit=None, limit=None, db=None,
cur=None):
"""Retrieve all the origin's visit's information.
Args:
origin (int): The occurrence's origin (identifier).
last_visit (int): Starting point from which listing the next visits
Default to None
limit (int): Number of results to return from the last visit.
Default to None
Yields:
List of visits.
"""
for line in db.origin_visit_get_all(
origin, last_visit=last_visit, limit=limit, cur=cur):
data = dict(zip(db.origin_visit_get_cols, line))
yield data
@db_transaction(statement_timeout=2000)
def origin_visit_get_by(self, origin, visit, db=None, cur=None):
"""Retrieve origin visit's information.
Args:
origin: The occurrence's origin (identifier).
Returns:
The information on that particular (origin, visit)
"""
ori_visit = db.origin_visit_get(origin, visit, cur)
if not ori_visit:
return None
ori_visit = dict(zip(db.origin_visit_get_cols, ori_visit))
if ori_visit['snapshot']:
ori_visit['occurrences'] = self.snapshot_get(
ori_visit['snapshot'], db=db, cur=cur)['branches']
return ori_visit
# TODO: remove Backwards compatibility after snapshot migration
occs = {}
for occ in db.occurrence_by_origin_visit(origin, visit):
_, branch_name, target, target_type = occ
occs[branch_name] = {
'target': target,
'target_type': target_type
}
ori_visit['occurrences'] = occs
return ori_visit
@db_transaction_generator(statement_timeout=500)
def revision_get_by(self,
origin_id,
branch_name=None,
timestamp=None,
limit=None,
db=None,
cur=None):
"""Given an origin_id, retrieve the list of occurrences matching the
given criteria.
Args:
origin_id: The origin to filter on.
branch_name: (optional) branch name.
timestamp: (optional) time.
limit: (optional) limit
Yields:
List of occurrences matching the criteria, or None if nothing is
found.
"""
for line in db.revision_get_by(origin_id, branch_name, timestamp,
limit=limit, cur=cur):
data = converters.db_to_revision(
dict(zip(db.revision_get_cols, line))
)
if not data['type']:
yield None
continue
yield data
@db_transaction_generator(statement_timeout=500)
def release_get_by(self, origin_id, limit=None, db=None, cur=None):
"""Given an origin id, return all the tag objects pointing to heads of
origin_id.
Args:
origin_id: the origin to filter on.
limit: None by default
Yields:
List of releases matching the criteria, or None if nothing is
found.
"""
for line in db.release_get_by(origin_id, limit=limit, cur=cur):
data = converters.db_to_release(
dict(zip(db.release_get_cols, line))
)
yield data
@db_transaction(statement_timeout=2000)
def object_find_by_sha1_git(self, ids, db=None, cur=None):
"""Return the objects found with the given ids.
Args:
ids: a generator of sha1_gits
Returns:
dict: a mapping from id to the list of objects found. Each object
found is itself a dict with keys:
- sha1_git: the input id
- type: the type of object found
- id: the id of the object found
- object_id: the numeric id of the object found.
"""
ret = {id: [] for id in ids}
for retval in db.object_find_by_sha1_git(ids, cur=cur):
if retval[1]:
ret[retval[0]].append(dict(zip(db.object_find_by_sha1_git_cols,
retval)))
return ret
origin_keys = ['id', 'type', 'url', 'lister', 'project']
@db_transaction(statement_timeout=500)
def origin_get(self, origin, db=None, cur=None):
"""Return the origin either identified by its id or its tuple
(type, url).
Args:
origin: dictionary representing the individual origin to find.
This dict has either the keys type and url:
- type (FIXME: enum TBD): the origin type ('git', 'wget', ...)
- url (bytes): the url the origin points to
or the id:
- id: the origin id
Returns:
dict: the origin dictionary with the keys:
- id: origin's id
- type: origin's type
- url: origin's url
- lister: lister's uuid
- project: project's uuid (FIXME, retrieve this information)
Raises:
ValueError: if the keys match neither (type and url) nor id.
"""
origin_id = origin.get('id')
if origin_id: # check lookup per id first
ori = db.origin_get(origin_id, cur)
elif 'type' in origin and 'url' in origin: # or lookup per type, url
ori = db.origin_get_with(origin['type'], origin['url'], cur)
else: # unsupported lookup
raise ValueError('Origin must have either id or (type and url).')
if ori:
return dict(zip(self.origin_keys, ori))
return None
@db_transaction_generator()
def origin_search(self, url_pattern, offset=0, limit=50,
regexp=False, with_visit=False, db=None, cur=None):
"""Search for origins whose urls contain a provided string pattern
or match a provided regular expression.
The search is performed in a case insensitive way.
Args:
url_pattern (str): the string pattern to search for in origin urls
offset (int): number of found origins to skip before returning
results
limit (int): the maximum number of found origins to return
regexp (bool): if True, consider the provided pattern as a regular
expression and return origins whose urls match it
with_visit (bool): if True, filter out origins with no visit
Returns:
An iterable of dict containing origin information as returned
by :meth:`swh.storage.storage.Storage.origin_get`.
"""
for origin in db.origin_search(url_pattern, offset, limit,
regexp, with_visit, cur):
yield dict(zip(self.origin_keys, origin))
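# Usage sketch (added for illustration, not part of the original module):
# look for origins hosted under a given domain, first with a plain substring
# match, then with a regular expression restricted to visited origins.
def _example_search_origins(storage, domain='github.com'):
    plain = list(storage.origin_search(domain, limit=10))
    regex = list(storage.origin_search('^https?://%s/' % domain,
                                       regexp=True, with_visit=True))
    return plain, regex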
@db_transaction()
def _person_add(self, person, db=None, cur=None):
"""Add a person in storage.
Note: Internal function for now, do not use outside of this module.
Do not do anything fancy in case a person already exists.
Please adapt code if more checks are needed.
Args:
person: dictionary with keys name and email.
Returns:
Id of the new person.
"""
return db.person_add(person)
@db_transaction_generator(statement_timeout=500)
def person_get(self, person, db=None, cur=None):
"""Return the persons identified by their ids.
Args:
person: array of ids.
Returns:
The array of persons corresponding to the ids.
"""
for person in db.person_get(person):
yield dict(zip(db.person_get_cols, person))
@db_transaction()
def origin_add(self, origins, db=None, cur=None):
"""Add origins to the storage
Args:
origins: list of dictionaries representing the individual origins,
with the following keys:
- type: the origin type ('git', 'svn', 'deb', ...)
- url (bytes): the url the origin points to
Returns:
list: given origins as dict updated with their id
"""
for origin in origins:
origin['id'] = self.origin_add_one(origin, db=db, cur=cur)
return origins
@db_transaction()
def origin_add_one(self, origin, db=None, cur=None):
"""Add origin to the storage
Args:
origin: dictionary representing the individual origin to add. This
dict has the following keys:
- type (FIXME: enum TBD): the origin type ('git', 'wget', ...)
- url (bytes): the url the origin points to
Returns:
the id of the added origin, or of the identical one that already
exists.
"""
data = db.origin_get_with(origin['type'], origin['url'], cur)
if data:
return data[0]
return db.origin_add(origin['type'], origin['url'], cur)
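# Usage sketch (hypothetical origin): origin_add_one is idempotent, so
# adding the same (type, url) twice returns the same id; origin_add loops
# over origin_add_one and fills in the 'id' key of each dict.
#
#   origin = {'type': 'git', 'url': b'https://example.org/repo.git'}
#   first = storage.origin_add_one(origin)
#   second = storage.origin_add_one(origin)
#   assert first == second
#   [origin_with_id] = storage.origin_add([origin])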
@db_transaction()
def fetch_history_start(self, origin_id, db=None, cur=None):
"""Add an entry for origin origin_id in fetch_history. Returns the id
of the added fetch_history entry
"""
fetch_history = {
'origin': origin_id,
'date': datetime.datetime.now(tz=datetime.timezone.utc),
}
return db.create_fetch_history(fetch_history, cur)
@db_transaction()
def fetch_history_end(self, fetch_history_id, data, db=None, cur=None):
"""Close the fetch_history entry with id `fetch_history_id`, replacing
its data with `data`.
"""
now = datetime.datetime.now(tz=datetime.timezone.utc)
fetch_history = db.get_fetch_history(fetch_history_id, cur)
if not fetch_history:
raise ValueError('No fetch_history with id %d' % fetch_history_id)
fetch_history['duration'] = now - fetch_history['date']
fetch_history.update(data)
db.update_fetch_history(fetch_history, cur)
@db_transaction()
def fetch_history_get(self, fetch_history_id, db=None, cur=None):
"""Get the fetch_history entry with id `fetch_history_id`.
"""
return db.get_fetch_history(fetch_history_id, cur)
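# Usage sketch of the fetch_history lifecycle (the payload keys below are
# illustrative, not prescribed by this module): open an entry when a fetch
# starts, close it with its outcome, then read it back.
#
#   fh_id = storage.fetch_history_start(origin_id)
#   # ... fetch the origin ...
#   storage.fetch_history_end(fh_id, {'status': True, 'result': {}})
#   entry = storage.fetch_history_get(fh_id)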
@db_transaction()
def entity_add(self, entities, db=None, cur=None):
"""Add the given entitites to the database (in entity_history).
Args:
entities (iterable): iterable of dictionaries with the following
keys:
- uuid (uuid): id of the entity
- parent (uuid): id of the parent entity
- name (str): name of the entity
- type (str): type of entity (one of 'organization',
'group_of_entities', 'hosting', 'group_of_persons', 'person',
'project')
- description (str, optional): description of the entity
- homepage (str): url of the entity's homepage
- active (bool): whether the entity is active
- generated (bool): whether the entity was generated
- lister_metadata (dict): lister-specific entity metadata
- metadata (dict): other metadata for the entity
- validity (datetime.DateTime array): timestamps at which we
listed the entity.
"""
cols = list(db.entity_history_cols)
cols.remove('id')
db.mktemp_entity_history()
db.copy_to(entities, 'tmp_entity_history', cols, cur)
db.entity_history_add_from_temp()
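# Usage sketch (hypothetical entity; the uuid is assumed to be generated by
# the caller): a minimal record matching the keys documented above.
#
#   import uuid, datetime
#   storage.entity_add([{
#       'uuid': uuid.uuid4(), 'parent': None, 'name': 'example-org',
#       'type': 'organization', 'description': None,
#       'homepage': 'https://example.org', 'active': True,
#       'generated': False, 'lister_metadata': {}, 'metadata': {},
#       'validity': [datetime.datetime.now(tz=datetime.timezone.utc)],
#   }])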
@db_transaction_generator()
def entity_get_from_lister_metadata(self, entities, db=None, cur=None):
"""Fetch entities from the database, matching with the lister and
associated metadata.
Args:
entities (iterable): dictionaries containing the lister metadata to
look for. Useful keys are 'lister', 'type', 'id', ...
Yields:
fetched entities with all their attributes. If no match was found,
the yielded entity has its uuid set to None.
"""
db.mktemp_entity_lister(cur)
mapped_entities = []
for i, entity in enumerate(entities):
mapped_entity = {
'id': i,
'lister_metadata': entity,
}
mapped_entities.append(mapped_entity)
db.copy_to(mapped_entities, 'tmp_entity_lister',
['id', 'lister_metadata'], cur)
cur.execute('''select id, %s
from swh_entity_from_tmp_entity_lister()
order by id''' %
','.join(db.entity_cols))
for id, *entity_vals in cur:
fetched_entity = dict(zip(db.entity_cols, entity_vals))
if fetched_entity['uuid']:
yield fetched_entity
else:
yield {
'uuid': None,
'lister_metadata': entities[id],
}
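# Usage sketch (illustrative lister metadata): look up entities by the
# metadata a lister recorded about them; unmatched inputs come back with
# uuid set to None.
#
#   results = list(storage.entity_get_from_lister_metadata(
#       [{'lister': lister_uuid, 'type': 'repository', 'id': 42}]))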
@db_transaction_generator(statement_timeout=2000)
def entity_get(self, uuid, db=None, cur=None):
"""Returns the list of entity per its uuid identifier and also its
parent hierarchy.
Args:
uuid: entity's identifier
Returns:
List of entities, starting with the entity with the given uuid,
followed by its parent hierarchy.
"""
for entity in db.entity_get(uuid, cur):
yield dict(zip(db.entity_cols, entity))
@db_transaction(statement_timeout=500)
def entity_get_one(self, uuid, db=None, cur=None):
"""Returns one entity using its uuid identifier.
Args:
uuid: entity's identifier
Returns:
the object corresponding to the given entity
"""
entity = db.entity_get_one(uuid, cur)
if entity:
return dict(zip(db.entity_cols, entity))
else:
return None
@db_transaction(statement_timeout=500)
def stat_counters(self, db=None, cur=None):
"""compute statistics about the number of tuples in various tables
Returns:
dict: a dictionary mapping textual labels (e.g., content) to
integer values (e.g., the number of tuples in table content)
"""
return {k: v for (k, v) in db.stat_counters()}
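# Usage sketch: the returned mapping goes from table names to row counts
# and can be fed directly to monitoring.
#
#   counters = storage.stat_counters()
#   print(counters.get('content'), counters.get('origin'))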
@db_transaction()
def origin_metadata_add(self, origin_id, ts, provider, tool, metadata,
db=None, cur=None):
""" Add an origin_metadata for the origin at ts with provenance and
metadata.
Args:
origin_id (int): the origin's id for which the metadata is added
ts (datetime): timestamp of the found metadata
provider (int): id of the provider of the metadata (e.g., 'hal')
tool (int): tool used to extract metadata
metadata (jsonb): the metadata retrieved at the time and location
Returns:
id (int): the origin_metadata unique id
"""
if isinstance(ts, str):
ts = dateutil.parser.parse(ts)
return db.origin_metadata_add(origin_id, ts, provider, tool,
metadata, cur)
@db_transaction_generator(statement_timeout=500)
def origin_metadata_get_by(self, origin_id, provider_type=None, db=None,
cur=None):
"""Retrieve list of all origin_metadata entries for the origin_id
Args:
origin_id (int): the unique origin identifier
provider_type (str): (optional) type of provider
Returns:
list of dicts: the origin_metadata dictionary with the keys:
- id (int): origin_metadata's id
- origin_id (int): origin's id
- discovery_date (datetime): timestamp of discovery
- tool_id (int): metadata's extracting tool
- metadata (jsonb)
- provider_id (int): metadata's provider
- provider_name (str)
- provider_type (str)
- provider_url (str)
"""
for line in db.origin_metadata_get_by(origin_id, provider_type, cur):
yield dict(zip(db.origin_metadata_get_cols, line))
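# Usage sketch (hypothetical ids; provider_id and tool_id are assumed to
# have been created beforehand, see metadata_provider_add and tool_add
# below): attach a metadata record to an origin and read it back,
# optionally filtering on the provider type.
#
#   storage.origin_metadata_add(origin_id, '2018-01-01T00:00:00+00:00',
#                               provider_id, tool_id,
#                               {'description': 'example metadata'})
#   entries = list(storage.origin_metadata_get_by(origin_id,
#                                                 provider_type='deposit'))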
@db_transaction_generator()
def tool_add(self, tools, db=None, cur=None):
"""Add new tools to the storage.
Args:
tools (iterable of :class:`dict`): Tool information to add to
storage. Each tool is a :class:`dict` with the following keys:
- name (:class:`str`): name of the tool
- version (:class:`str`): version of the tool
- configuration (:class:`dict`): configuration of the tool,
must be json-encodable
Returns:
`iterable` of :class:`dict`: All the tools inserted in storage
(including the internal ``id``). The order of the list is not
guaranteed to match the order of the initial list.
"""
db.mktemp_tool(cur)
db.copy_to(tools, 'tmp_tool',
['name', 'version', 'configuration'],
cur)
tools = db.tool_add_from_temp(cur)
for line in tools:
yield dict(zip(db.tool_cols, line))
@db_transaction(statement_timeout=500)
def tool_get(self, tool, db=None, cur=None):
"""Retrieve tool information.
Args:
tool (dict): Tool information we want to retrieve from storage.
The dicts have the same keys as those used in :func:`tool_add`.
Returns:
dict: The full tool information if it exists (``id`` included),
None otherwise.
"""
tool_conf = tool['configuration']
if isinstance(tool_conf, dict):
tool_conf = json.dumps(tool_conf)
idx = db.tool_get(tool['name'],
tool['version'],
tool_conf)
if not idx:
return None
return dict(zip(db.tool_cols, idx))
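# Usage sketch (hypothetical tool): register a tool, then look it up with
# the same name, version and configuration to recover its id.
#
#   tool_dict = {'name': 'swh-metadata-translator', 'version': '0.0.1',
#                'configuration': {'type': 'local', 'context': 'npm'}}
#   [registered] = list(storage.tool_add([tool_dict]))
#   same = storage.tool_get(tool_dict)
#   assert same['id'] == registered['id']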
@db_transaction()
def metadata_provider_add(self, provider_name, provider_type, provider_url,
metadata, db=None, cur=None):
return db.metadata_provider_add(provider_name, provider_type,
provider_url, metadata, cur)
@db_transaction()
def metadata_provider_get(self, provider_id, db=None, cur=None):
result = db.metadata_provider_get(provider_id)
if not result:
return None
return dict(zip(db.metadata_provider_cols, result))
@db_transaction()
def metadata_provider_get_by(self, provider, db=None, cur=None):
result = db.metadata_provider_get_by(provider['provider_name'],
provider['provider_url'])
if not result:
return None
return dict(zip(db.metadata_provider_cols, result))
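# Usage sketch (hypothetical provider): register a metadata provider, then
# retrieve it either by its id or by its (name, url) pair.
#
#   provider_id = storage.metadata_provider_add(
#       'hal', 'deposit-client', 'https://hal.example.org/', {})
#   by_id = storage.metadata_provider_get(provider_id)
#   by_name = storage.metadata_provider_get_by(
#       {'provider_name': 'hal',
#        'provider_url': 'https://hal.example.org/'})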
def diff_directories(self, from_dir, to_dir, track_renaming=False):
"""Compute the list of file changes introduced between two arbitrary
directories (insertion / deletion / modification / renaming of files).
Args:
from_dir (bytes): identifier of the directory to compare from
to_dir (bytes): identifier of the directory to compare to
track_renaming (bool): whether or not to track file renaming
Returns:
A list of dict describing the introduced file changes
(see :func:`swh.storage.algos.diff.diff_directories`
for more details).
"""
return diff.diff_directories(self, from_dir, to_dir, track_renaming)
def diff_revisions(self, from_rev, to_rev, track_renaming=False):
"""Compute the list of file changes introduced between two arbitrary
revisions (insertion / deletion / modification / renaming of files).
Args:
from_rev (bytes): identifier of the revision to compare from
to_rev (bytes): identifier of the revision to compare to
track_renaming (bool): whether or not to track file renaming
Returns:
A list of dict describing the introduced file changes
(see :func:`swh.storage.algos.diff.diff_directories`
for more details).
"""
return diff.diff_revisions(self, from_rev, to_rev, track_renaming)
def diff_revision(self, revision, track_renaming=False):
"""Compute the list of file changes introduced by a specific revision
(insertion / deletion / modification / renaming of files) by comparing
it against its first parent.
Args:
revision (bytes): identifier of the revision from which to
compute the list of files changes
track_renaming (bool): whether or not to track file renaming
Returns:
A list of dict describing the introduced file changes
(see :func:`swh.storage.algos.diff.diff_directories`
for more details).
"""
return diff.diff_revision(self, revision, track_renaming)
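# Usage sketch (hypothetical identifier): compute the changes introduced by
# a revision against its first parent, tracking renames; each change is a
# dict as documented in swh.storage.algos.diff.
#
#   changes = storage.diff_revision(revision_id, track_renaming=True)
#   for change in changes:
#       print(change)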
diff --git a/version.txt b/version.txt
index 620d8566..01f01fe9 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-v0.0.104-0-g9c39449
\ No newline at end of file
+v0.0.105-0-g46822cb
\ No newline at end of file
