#+title: Analyze and try to reduce loader-git memory consumption
#+author: vsellier, ardumont
The current git loader consumes a lot of memory, depending on the size of the
repository. It fetches the full packfile of unknown references (filtered by the last
snapshot's references), then parses the packfile multiple times to load, in order,
contents, directories, revisions and releases, and finally creates a snapshot for the
visit (a minimal sketch of this flow follows the list below).
While the memory consumption is not a problem for small to medium repositories, this
can become one on large repositories, either:
1. The single packfile currently retrieved at the beginning of the loading is too big
(> 4 GiB), which makes the ingestion fail immediately. Nothing has been ingested and
the visit is marked as failed. If that happens too often (thrice consecutively, iirc),
the origin ends up disabled, so it is no longer scheduled (up until it's listed again).
2. The ingestion starts but, due to concurrency with other loading processes, the
ingestion process gets killed. That means a partial ingestion of objects got done, but
no snapshot nor finalized visit. The latter is problematic for scheduling further
visits of that origin. Nonetheless, if a further visit somehow happens, it will skip
the already ingested objects.
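A minimal sketch of that single-packfile flow, assuming dulwich (the lib used to
discuss with the git server); the function and parameter names are illustrative, not
the actual loader code:
#+begin_src python
from io import BytesIO

from dulwich.client import get_transport_and_path
from dulwich.object_store import ObjectStoreGraphWalker


def fetch_one_packfile(origin_url: str, known_refs: set) -> BytesIO:
    """Fetch, as a single (potentially huge) buffer, every object reachable from
    the remote refs whose targets were not seen in the last snapshot."""
    client, path = get_transport_and_path(origin_url)

    def determine_wants(remote_refs: dict) -> list:
        # only ask for the heads whose target we did not ingest last time
        return list({sha for sha in remote_refs.values() if sha not in known_refs})

    pack_buffer = BytesIO()
    client.fetch_pack(
        path,
        determine_wants,
        # advertise no local objects: the server sends the full needed history
        ObjectStoreGraphWalker([], lambda sha: []),
        pack_buffer.write,
    )
    pack_buffer.seek(0)
    return pack_buffer
#+end_src
The loader then walks that single buffer once per object type (contents, directories,
revisions, releases) before building the snapshot.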
A first naive attempt was made to iterate over the packfile only once and keep a dict
of the references, so the packfile reference could be dropped immediately [1]. This
failed, as the memory consumption spiked even further (it did have the advantage of
killing the loading very fast). So the conclusion of this attempt is that iterating
over the packfile multiple times (one iteration per type of object in our model) is
not actually the problem.
[1] https://forge.softwareheritage.org/D6377
Another attempt was to modify the git loader to make the ingestion fetch multiple
packfiles ([2] [3], with a slight loader-core change required [4]). This has the
advantage of naturally taking care of 1. It is done by asking for intervals of unknown
remote refs, starting with the tags (in natural order) and then the branches [5] (see
the sketch after the links below). The natural order on tags sounds like a proper way
to start, since it should then incrementally load the repository following its history
[3]. If we don't follow the history [2], we could fetch a huge packfile first (with
mostly everything in it), and we are back to square one. This assumes there are tags
in the repository (which should mostly be the case). The only limitation seen for that
approach is that we now continually discuss with the server to retrieve information.
FWIW, this is what's currently done with the mercurial loader without issues (it is
btw very stable now and not as greedy in memory as it used to be, hence one of the
motivations to align the git loader to do the same).
[2] https://forge.softwareheritage.org/D6386
[3] https://forge.softwareheritage.org/D6392
[4] https://forge.softwareheritage.org/D6380
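A hedged sketch of that batching, under the same assumptions as the sketch above; the
batch size and the ~natural_key~ / ~iter_want_batches~ names are illustrative, not the
code of [2]/[3]:
#+begin_src python
import re


def natural_key(ref: bytes):
    # crude "natural order": split digit runs so refs/tags/v1.10 sorts after v1.2
    return [int(part) if part.isdigit() else part for part in re.split(rb"(\d+)", ref)]


def iter_want_batches(remote_refs: dict, known_refs: set, batch_size: int = 200):
    """Yield lists of wanted shas, one list per packfile to request: the unknown
    tags first (in natural order), then the unknown branches."""
    unknown = {ref: sha for ref, sha in remote_refs.items() if sha not in known_refs}
    tags = sorted((r for r in unknown if r.startswith(b"refs/tags/")), key=natural_key)
    branches = sorted(r for r in unknown if r.startswith(b"refs/heads/"))
    for ordered in (tags, branches):
        for i in range(0, len(ordered), batch_size):
            yield [unknown[ref] for ref in ordered[i:i + batch_size]]
#+end_src
Each batch would then go through one fetch round trip as in the first sketch, with the
objects retrieved by earlier batches advertised as haves (e.g. through the graph
walker) so the server does not resend them.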
[5] Another idea (not followed through) would be to ingest some known special
references (which are assumed to be highly connected within the graph, e.g. "HEAD",
"refs/heads/master", "refs/heads/main", "refs/heads/develop", ... others?) as the last
references. The reasoning is that those are assumed to be the main, highly connected
part of the graph. So starting with those would end up with a huge packfile
immediately (with large repositories, back to square one again). If we start with the
other references first, then deal with those at the end, it sounds like a bit more
work would be needed to fill in the blanks (but hopefully not too much). That could
yet be another optimization, which could also help if there are no tags in the
repository.
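A tiny sketch of that (unimplemented) variant, using the reference names quoted above;
~defer_well_known~ is a hypothetical helper:
#+begin_src python
# refs assumed to be the most connected part of the graph, hence fetched last
DEFERRED_REFS = (b"HEAD", b"refs/heads/master", b"refs/heads/main", b"refs/heads/develop")


def defer_well_known(ordered_refs: list) -> list:
    """Push the presumably highly connected refs to the end of the fetch order."""
    return (
        [r for r in ordered_refs if r not in DEFERRED_REFS]
        + [r for r in ordered_refs if r in DEFERRED_REFS]
    )
#+end_src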
Another consideration we did not completely follow through yet would be to use a depth
parameter (in the current internal lib used to discuss with the server). It's not
completely clear what depth would be decent and satisfying enough for all repositories
out there. It's not to be excluded though. It may simply be that this solution,
composed with the previous points, is a deeper optimization on reducing the loader's
work.
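Assuming dulwich again, its ~fetch_pack~ already accepts a ~depth~ argument for
shallow fetches (provided the server supports them); a hedged sketch, where the value
50 and the helper name are purely illustrative:
#+begin_src python
from io import BytesIO

from dulwich.client import get_transport_and_path
from dulwich.object_store import ObjectStoreGraphWalker


def fetch_shallow_packfile(origin_url: str, wants: list, depth: int = 50) -> BytesIO:
    """Fetch a packfile truncated at ``depth`` commits below each wanted ref."""
    client, path = get_transport_and_path(origin_url)
    pack_buffer = BytesIO()
    client.fetch_pack(
        path,
        lambda remote_refs: wants,  # the wants were decided beforehand
        ObjectStoreGraphWalker([], lambda sha: []),
        pack_buffer.write,
        depth=depth,  # which cut-off fits most repositories is the open question
    )
    pack_buffer.seek(0)
    return pack_buffer
#+end_src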
As another point to further align the git loader with the mercurial loader, it would
be interesting to actually start using the extid table to map what's considered git
ids (which will change) to the revision/release ids, and to start using this mapping
to actually filter across origins (and not only against the last snapshot of the same
origin). That gave a good boost in doing less work when ingesting mercurial forks.
Filtering out already known revisions/releases early enough would actually further
reduce the packfile (when the extid table is filled enough, that is).
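A hedged sketch of that extid-based filtering; ~lookup_known_extids~ is a hypothetical
helper wrapping the extid lookup in storage (the exact storage call is left out on
purpose):
#+begin_src python
def determine_wants_filtering_extids(remote_refs: dict, lookup_known_extids) -> list:
    """Drop the remote heads whose git sha is already mapped, via the extid table,
    to an ingested revision/release, whatever origin it was first seen on."""
    candidates = set(remote_refs.values())
    already_ingested = lookup_known_extids(candidates)  # git shas known storage-wide
    return [sha for sha in candidates if sha not in already_ingested]
#+end_src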
Any thoughts?