
SourceForge lister
Open, HighPublic


We need a lister for SourceForge, in order to be able to archive what's there.

SourceForge uses the Apache Allura forge under the hood to host open source projects.
Unfortunately, the associated REST API does not offer the possibility to list all hosted projects. A ticket was created on the subject a couple of years ago, but no action has been taken so far.

It is nonetheless possible to do full and incremental listing, using sitemaps and the REST API to query project-by-project information. See the specification blueprint by @zack in T735#51468 below; it was designed in discussion with a SourceForge tech contact.

Event Timeline

@anlambert: if you found additional related work, can you post it to this task? TIA

Below is some information I managed to gather for this task.

Listing projects on sourceforge

Two solutions could be used.

The first is to do some web scraping from the SourceForge directory URL: This is the solution used by archiveteam; the source code of their scraper (in Ruby) can be found on GitHub: However, this does not seem reliable, as not all pages of the SourceForge directory can be browsed. Currently, there are 18,831 available pages about SourceForge projects, but trying to browse pages numbered 1000 or greater returns an HTTP 500 error (for instance,

The second, as pointed out by pombreda on IRC, is to use the rsync mirrors of files made available for download (typically release tarballs) in SourceForge projects: rsync://, rsync:// This solution seems better, as it will allow us to list all relevant project names on SourceForge (thus discarding empty projects and those without any releases). Please find below a sample of the output when using rsync to list projects whose names start with gl.

antoine@antoine-X550CC:~$ rsync --list-only rsync://
Welcome to the University of Kent's UK Mirror Service.

More information can be found at our web site:
Please send comments or questions to

drwxr-xr-x         20,480 2017/07/13 02:27:00 .
lrwxrwxrwx             19 2010/01/05 07:08:57 index-sf.html
drwxr-xr-x          4,096 2016/08/25 07:30:46 gl-117
drwxr-xr-x          4,096 2016/08/25 07:30:46 glabels
drwxr-xr-x          4,096 2016/08/25 07:30:46 gladewin32
drwxr-xr-x          4,096 2017/06/10 02:25:52 gladys
drwxr-xr-x          4,096 2016/08/25 07:30:55 glass-theme
drwxr-xr-x          4,096 2016/08/25 07:30:57 glattony
drwxr-xr-x          4,096 2016/08/25 07:30:59 glaunch
drwxr-xr-x          4,096 2016/08/25 07:31:35 glc-lib
drwxr-xr-x          4,096 2016/08/25 07:31:37 glc-player
drwxr-xr-x          4,096 2016/08/25 07:32:34 glcdtools
drwxr-xr-x          4,096 2016/08/25 07:32:38 glchess
drwxr-xr-x          4,096 2016/08/25 07:32:46 gldirect
drwxr-xr-x          4,096 2016/08/25 07:32:49 gle
drwxr-xr-x          4,096 2016/08/25 07:33:36 glesius
drwxr-xr-x          4,096 2017/06/11 02:28:24 glest
drwxr-xr-x          4,096 2016/08/25 07:33:53 glew
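Turning such a listing into project names is then a matter of parsing the rsync output. Here is a minimal sketch; the column layout is assumed from the sample above, and `parse_rsync_listing` is a hypothetical helper name:

```python
def parse_rsync_listing(output):
    """Extract project directory names from `rsync --list-only` output.

    Assumes the format shown above: permissions, size, date, time, name.
    Only plain directories are kept; the "." entry, symlinks (index
    pages, etc.), and the mirror's greeting banner are skipped.
    """
    projects = []
    for line in output.splitlines():
        fields = line.split(None, 4)
        if len(fields) != 5:
            continue  # blank lines or short banner lines
        perms, _size, _date, _time, name = fields
        if perms.startswith("d") and name != ".":
            projects.append(name)
    return projects
```

In practice the output would come from running rsync as a subprocess against each mirror and concatenating the results.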

Ingesting sourceforge projects into the SWH archive

Once a list of relevant projects is obtained, some preprocessing has to be done before being able to ingest a project into the SWH archive.
From a SourceForge project name, its associated metadata can easily be obtained using the public Allura REST API (Allura being the software forge used on SourceForge, see
For instance, to get the metadata about the glew project: The URL of the VCS repository (which can be CVS, SVN, hg, or git) used by the project can be reconstructed from the retrieved metadata.
I found a project on GitHub, released into the public domain, dedicated to retrieving metadata of open source projects hosted on SourceForge: In particular, the following Python script could be reused by us.
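As a sketch of what that metadata step could look like: assuming the Allura REST response is a JSON dict with a `tools` list whose entries carry `name` and `mount_point` fields (field names to be verified against real responses), the VCS tools of a project could be extracted like this:

```python
VCS_TOOLS = {"git", "svn", "hg", "cvs", "bzr"}

def vcs_tools(project_metadata):
    """Return (tool_name, mount_point) pairs for the VCS tools of a
    project, given the JSON dict returned by the Allura REST API.

    The "tools"/"name"/"mount_point" field names are assumptions based
    on the Allura API and should be checked against real responses.
    """
    return [
        (t["name"], t["mount_point"])
        for t in project_metadata.get("tools", [])
        if t["name"] in VCS_TOOLS
    ]
```

Non-VCS tools (wiki, tickets, etc.) are filtered out, since only the VCS mount points are relevant for building origin URLs.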

The scripts and data at look to be exactly what is required; that person (chpwssn) has already identified over 350,000 SVN, Mercurial, and Git repositories on SourceForge, with associated rsync commands for downloading them.

I started looking into this task myself with simple scripts that scraped the directory, but this looks like it's already super close to completion (or essentially already complete, but someone needs to create the Software Heritage bits).

Here's a blueprint for implementing a SourceForge lister, based on an exchange with a SourceForge tech contact:

  • start from the Allura sitemap index
  • recurse into sitemaps (e.g., sitemap-0.xml)
  • extract the list of all project URLs, matching (e.g., seedai)
  • for each project name, query its REST API endpoint (e.g., seedai)
  • from there, extract the list of project "tools"; they include tools that correspond to VCSs, with names like "git", "svn", "cvs"
  • associated with each VCS tool is a URL, from which we can build clone/checkout commands (or, equivalently, origin URLs for a full lister). The URL pattern (to be verified) should be {type}{project}/{mount_point} (e.g., svn, git)
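The sitemap-driven part of the blueprint above could be sketched as follows; the `https://sourceforge.net/p/{name}/...` URL shape is an assumption to verify against real sitemap entries:

```python
import re
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
# Project pages are assumed to look like https://sourceforge.net/p/{name}/...
PROJECT_URL_RE = re.compile(r"^https://sourceforge\.net/p/([^/]+)/")

def project_names(sitemap_xml):
    """Extract distinct project names from one sitemap document,
    matching <loc> entries against the assumed project URL pattern."""
    root = ET.fromstring(sitemap_xml)
    names = set()
    for loc in root.iter(SITEMAP_NS + "loc"):
        m = PROJECT_URL_RE.match(loc.text.strip())
        if m:
            names.add(m.group(1))
    return sorted(names)
```

A full lister would first fetch the sitemap index, iterate over each referenced sitemap (sitemap-0.xml, ...), and feed every document through a function like this before querying the REST endpoints.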

I've put a prototype implementation of this (up to the listing of all tool types and URLs included, but with no integration with the swh-lister API) in the snippet repo.

I've run it once, successfully listing all of SourceForge in ~4 hours with 8 parallel threads querying the REST endpoint.
In that run I listed 480'711 projects and 402'908 VCS "tools" (see P832 for details), with the following breakdown by VCS type:

  • 182'858 git
  • 145'225 svn
  • 44'493 cvs (read-only)
  • 29'148 hg
  • 1'184 bzr

Other improvements needed are:

  • incremental listing: this is possible by exploiting the <lastmod> value in sitemaps. We have been told by SourceForge that this last-modification timestamp is unique per project and that it is updated when the VCS is updated. It is therefore possible to be smart and do incremental listing that only lists repositories updated since the last lister run
  • there are some subprojects on SourceForge, although we have been told by SourceForge that they are very rare. We should consider including them too. An example is computerastherapy/ict-framework (note how the "project" here is computerastherapy/ict-framework)
  • in order to play nice with SourceForge while crawling we should:
    • set the crawler user-agent to something identifying it as coming from Software Heritage
    • make sure the crawler IP address(es) have a reverse DNS entry (ideally pointing to a Software Heritage hostname too)
    • keep parallelism at 8 concurrent workers maximum
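The incremental-listing idea above could be sketched like this, assuming `<lastmod>` values are W3C datetime strings (e.g. "2021-02-12T11:25:00Z") and that the timestamp of the previous run is persisted somewhere:

```python
from datetime import datetime

def updated_since(entries, last_run):
    """Keep only the projects modified after the previous lister run.

    `entries` is an iterable of (project_name, lastmod) pairs, where
    lastmod is the string from the sitemap's <lastmod> element;
    `last_run` is a timezone-aware datetime of the previous run.
    """
    updated = []
    for name, lastmod in entries:
        # Normalize the trailing "Z" so fromisoformat can parse it.
        when = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
        if when > last_run:
            updated.append(name)
    return updated
```

The resulting subset is then the only set of projects whose REST endpoints need to be re-queried, which should keep incremental runs well under the ~4 hours of a full listing.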
vlorentz raised the priority of this task from Normal to High. Feb 12 2021, 11:25 AM

It looks like there are projects outside of the /p/ namespace. Just looking at the very first sitemap, I found an /adobe/ namespace, which implies that we should also consider namespaces outside of /p/ when listing.

Note also that many entries are duplicated across the /projects/ and /p/ namespaces, both pointing to the same thing.
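A hypothetical normalization helper for collapsing those duplicates (the exact path shapes are assumptions based on the observations above):

```python
def canonical_project_path(url_path):
    """Normalize a SourceForge project path, mapping the /projects/{name}
    form onto /p/{name} so that duplicated entries collapse.

    Paths in other namespaces (e.g. /adobe/{name}) are kept as-is, since
    they denote distinct projects outside the /p/ namespace.
    """
    parts = url_path.strip("/").split("/")
    if parts and parts[0] == "projects":
        parts[0] = "p"
    return "/" + "/".join(parts)
```

Deduplicating on this canonical form before counting should make the project totals stable regardless of which namespace a sitemap entry happened to use.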

New stats:

  • 317973 distinct projects in the sitemaps (including subprojects)
  • 360 subprojects
  • 356 projects are outside of the normal /p/ namespace, including subprojects