#+title: Analyze and try to reduce loader-git memory consumption and overall work
#+author: vsellier, ardumont
The current git loader consumes a lot of memory, depending on the size of the repository.
It fetches a single packfile covering all unknown references (refs, filtered against the
last snapshot's references), then parses that packfile multiple times to load, in order,
contents, directories, revisions and releases, and finally creates one snapshot for the
visit.
References in this context are the resolved tips of branches (e.g. refs/heads/master, ...)
or tags (e.g. refs/tags/...).
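To make that first step concrete, here is a minimal sketch of the ref filtering, assuming
a dulwich-style determine_wants callback (the loader relies on dulwich under the hood);
the last_snapshot_targets set is a hypothetical stand-in for the object ids targeted by
the previous visit's snapshot:
#+begin_src python
from typing import Dict, Set


def make_determine_wants(last_snapshot_targets: Set[bytes]):
    """Build the callback that receives the remote refs (ref name -> object id)
    and returns the ids we still want, i.e. the tips not already covered by the
    last snapshot."""

    def determine_wants(remote_refs: Dict[bytes, bytes]):
        return [
            target
            for ref, target in remote_refs.items()
            if target not in last_snapshot_targets
            and not ref.endswith(b"^{}")  # skip peeled tag entries
        ]

    return determine_wants
#+end_src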
While the memory consumption is not a problem for small (< 200 refs) to medium
repositories (<= 500 refs), it can become one on large repositories (> 500 refs):
1. The single packfile currently retrieved at the beginning of the loading is too big
   (> 4 GiB), which makes the ingestion fail immediately. Nothing gets done. The visit is
   marked as failed. If that happens too often (thrice consecutively, iirc), the origin
   ends up disabled, so it is no longer scheduled (until it gets listed again).
2. The ingestion starts but, due to concurrency with other loading processes, the
   ingestion process gets killed. That means a partial ingestion of objects happened, but
   no snapshot nor finalized visit. This is currently a major problem. Current deployment
   details also imply heavy disk i/o (which creates Ceph problems down the line, which
   can possibly cascade into an outage, see the events we faced during the holidays for
   example...).
3. Point 2. is also problematic for scheduling further visits of that origin.
   Nonetheless, if a further visit happens somehow, it will skip already ingested objects
   (although those will still have been retrieved again, since there is no partial
   snapshot between visits).
To solve these problems, some approaches have been investigated and tried.
A first naive attempt was made to iterate over the packfile only once, keeping a dict of
the objects it contains (so the reference to the packfile could be dropped immediately)
[1]. This failed, as the memory consumption spiked even further. It did have the
advantage of killing the loading very fast. The conclusion of this attempt is that
iterating over the packfile multiple times (one iteration for each type of object in our
model) is actually not the problem.
[1] https://forge.softwareheritage.org/D6377
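For illustration, the naive single-pass idea roughly amounts to the following
(hypothetical names, not the code of D6377): keeping every parsed object in a dict means
they are all resident in memory at once, which is consistent with the observed spike.
#+begin_src python
def load_all_objects_at_once(pack_objects):
    """Single pass over the packfile, buffering everything: the packfile
    reference can be dropped afterwards, but contents, trees, commits and
    tags now all live in the dict at the same time."""
    objects_by_id = {}
    for obj in pack_objects:
        objects_by_id[obj.id] = obj
    return objects_by_id
#+end_src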
Another attempt was to modify the git loader to make the ingestion fetch multiple
packfiles ([2] [3], with a slight loader-core change required [4]). This has the
advantage of naturally taking care of 1. (no more huge packfile). It is done by
requesting batches of unknown remote refs, starting with the tags (in their natural
order) and then the branches [5]; see the sketch after [2] and [3] below. The natural
order on tags sounds like a proper way to start loading the repository incrementally,
following its history [3]. If we don't follow the history (only [2]), we could fetch a
huge packfile first (with mostly everything in it) and be back to square one. This does
assume tags exist in the repository (which should mostly be the case). The only
limitation seen with that approach is that we now talk to the server regularly during the
loading to retrieve information (so a time trade-off).
FWIW, this continuous server discussion is what the mercurial loader currently does
without issues (it is stable now and not as greedy in memory as it used to be, hence one
more motivation to align the git loader's behavior).
[2] https://forge.softwareheritage.org/D6386 (ingest via multiple packfile fetches)
[3] https://forge.softwareheritage.org/D6392 (follow tag refs in order, then branches)
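As a sketch of the batching described above (the batch size and helper names are
assumptions, not the actual code of [2] or [3]): unknown refs are ordered tags first,
then branches, and each batch becomes the "wants" of one packfile fetch.
#+begin_src python
from typing import Dict, Iterator, List


def ref_batches(remote_refs: Dict[bytes, bytes],
                known_targets: set,
                batch_size: int = 200) -> Iterator[List[bytes]]:
    """Yield batches of object ids to fetch: tag refs first, in natural
    (sorted) order, then branch refs, so no single fetch covers the whole
    repository."""
    unknown = {ref: target for ref, target in remote_refs.items()
               if target not in known_targets}
    tags = sorted(ref for ref in unknown if ref.startswith(b"refs/tags/"))
    others = sorted(ref for ref in unknown if not ref.startswith(b"refs/tags/"))
    ordered = tags + others
    for i in range(0, len(ordered), batch_size):
        yield [unknown[ref] for ref in ordered[i:i + batch_size]]
#+end_src
Each batch would feed one fetch round, with the tips retrieved by the previous rounds
reported as "haves" during the negotiation, so the successive packfiles stay small.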
[4] https://forge.softwareheritage.org/D6380. This core loader adaptation actually
allows the git loader (well, the DVCS loaders) to create a partial visit targeting a
snapshot after each internal ingestion loop (so, for git, after each packfile
consumption). This makes sure that we create incremental snapshots prior to the final
one (if we reach the end of the ingestion). Such an adaptation takes care of point 3.
(and makes subsequent visits do less work even in case of failures). Thanks to @vlorentz
who made me realize this on that diff's review. After a couple of nights sleeping on it,
it clicked!
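A very rough sketch of that loop (every name passed in below is a hypothetical
placeholder, not the loader-core API): after each consumed packfile, a snapshot of what
has been loaded so far is recorded with a partial visit status, so a crash midway still
leaves something usable for the next visit.
#+begin_src python
def load(origin, fetch_next_packfile, store_objects,
         snapshot_of_loaded_refs, record_visit_status):
    """Drive the ingestion packfile by packfile; the four callables stand in
    for the loader's real steps."""
    snapshot = None
    while True:
        pack = fetch_next_packfile(origin)  # one batch of wanted refs, or None
        if pack is None:
            break
        store_objects(pack)  # contents, directories, revisions, releases
        snapshot = snapshot_of_loaded_refs(origin)
        record_visit_status(origin, snapshot, status="partial")
    # Only reached when every packfile was consumed without crashing.
    record_visit_status(origin, snapshot, status="full")
#+end_src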
[5] Another idea (not followed through) would be to delay the ingestion of some known
special references which are assumed highly connected within the graph (e.g. "HEAD",
"refs/heads/master", "refs/heads/main", "refs/heads/develop", ... others?) to the end of
the loading. The hypothesis is that these references cover the main connected part of
the repository, so starting with those would immediately yield a huge packfile (so, with
large repositories at least, we are back to the initial problem). On the contrary, if we
start with the other references first and deal with the special ones at the end, only a
bit more work would be needed to fill in the blanks. That could be yet another
optimization, which would maybe help if there are no tags in the repository for example.
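Order-wise, that idea would amount to something like the following (the list of special
refs is taken from the text above and is an assumption):
#+begin_src python
SPECIAL_REFS = (b"HEAD", b"refs/heads/master", b"refs/heads/main",
                b"refs/heads/develop")


def fetch_order(ref_names):
    """Order ref names as: tags first, then regular branches, and the special,
    highly connected refs last."""
    tags = sorted(r for r in ref_names if r.startswith(b"refs/tags/"))
    special = [r for r in ref_names if r in SPECIAL_REFS]
    regular = sorted(r for r in ref_names
                     if r not in SPECIAL_REFS and not r.startswith(b"refs/tags/"))
    return tags + regular + special
#+end_src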
Another consideration we did not follow completely through yet was to use a depth
parameter (in the internal lib currently used to communicate with the server). It's not
completely clear what actual depth value would be decent and satisfying enough for all
the repositories out there. It's been slightly tested but dismissed for now due to that
open question. It's not to be excluded though. It may simply be that this solution,
composed with the previous points, could be a further optimization to reduce the
loader's work (especially the part walking the git graph).
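As an illustration only (the depth value below is an arbitrary placeholder, and this
assumes the dulwich client's fetch accepts a depth argument, which recent versions do
afaiui):
#+begin_src python
from dulwich.client import get_transport_and_path
from dulwich.repo import Repo

DEPTH = 50  # arbitrary placeholder, not a recommended value


def shallow_fetch(url: str, local_repo_path: str):
    """Fetch only the most recent DEPTH commits per wanted ref into an
    existing local repository, instead of the full history."""
    client, path = get_transport_and_path(url)
    return client.fetch(path, Repo(local_repo_path), depth=DEPTH)
#+end_src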
Not necessarily related to memory consumption but still, as another optimization point
to further align the git loader with the mercurial loader, it would be interesting to
start using the extid table to map what's considered git ids (which will change afaiui)
to the revision/release ids. Using this mapping would then allow filtering even further
what we already ingested across origins (and not only against the last snapshot of the
same origin, as is done currently). That optimization reduced the mercurial loader's
work quite a lot, especially regarding mercurial forks. In the git loader's case,
filtering known refs (revision/release) early enough would further reduce the packfiles
(provided the extid table is filled enough, that is).
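A hypothetical sketch of what that filtering could look like; known_git_ids stands in
for a lookup on the extid table (its name and shape are assumptions):
#+begin_src python
from typing import Dict, Set


def filter_refs_with_extid(remote_refs: Dict[bytes, bytes],
                           known_git_ids: Set[bytes]) -> Dict[bytes, bytes]:
    """Keep only the refs whose target has no extid mapping yet, i.e. was
    never ingested before, from this origin or from any other one."""
    return {ref: target for ref, target in remote_refs.items()
            if target not in known_git_ids}
#+end_src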
Having described all that, I'm fairly convinced that some of the approaches described
here are a possible way forward (especially the ones already implemented in diffs),
which would be better than the current status quo.
Shall we proceed? Any thoughts, pros or cons arguments are welcome.
Cheers,