an example of this which I quite like is Ubuntu's, see, e.g.: https://www.ubuntu.com/ , https://developer.ubuntu.com/ , https://blog.ubuntu.com/ , etc.
May 18 2018
So I think the best option here is to use named parameters as optional parts of the identifiers. This will give us some flexibility for adding new ones in the future. Regarding the separator, we could use either \ or |, as they should not interfere with the origin URLs to extract.
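As a rough sketch of how such identifiers could be parsed (the "|" separator and the parameter names here are hypothetical, not a settled format; "|" is a reasonable candidate because it must be percent-encoded in URLs, so it cannot appear unescaped inside an origin URL):

```python
# Hypothetical sketch: an identifier with optional named parameters appended
# after a "|" separator, e.g.
#   swh:1:cnt:abc|origin=https://example.org/repo|line=42
# Splitting on "|" is safe because "|" may not appear unencoded in a URL.

def parse_identifier(identifier: str):
    """Split an identifier into its core part and a dict of named parameters."""
    core, *params = identifier.split("|")
    named = {}
    for param in params:
        # partition on the first "=" only, so values (e.g. URLs
        # containing "=") are preserved intact
        key, _, value = param.partition("=")
        named[key] = value
    return core, named
```

For example, `parse_identifier("swh:1:cnt:abc|origin=https://example.org/repo")` yields the core identifier plus `{"origin": "https://example.org/repo"}`, and new named parameters can be added later without changing the core syntax.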
May 17 2018
the problems I see with optional URL parameters, instead of modifying the identifiers themselves, are the following:
May 4 2018
So, how about just redirecting systematically to the browsing of the most recent snapshot available, and have from there a link (under actions?) that points to the visit calendar? Isn't that what most users would want anyway? I.e., aren't we optimizing for the wrong use case currently?
May 3 2018
FWIW, the proposed policy looks good to me. Green light!
Apr 27 2018
This is now done as a generic goal. Bits and pieces of the doc are still in progress, but the generic doc infrastructure is now in place.
Apr 20 2018
can you consolidate the bits of docs somewhere on the intranet? they'll be easier to find than on a task in the future
Apr 18 2018
In T782#18978, @s wrote:
Apr 6 2018
the procedure for the registration is here: https://tools.ietf.org/html/rfc7595
Mar 27 2018
relevant highlights:
It should be the other way around: setting up the Elasticsearch cluster does not need the web app pushing its logs to the cluster in order to be considered complete.
Mar 25 2018
Update from joeyh: there is no need for any specific hack to maintain a local mirror; it is just an undocumented feature:
Mar 9 2018
Hi s, and thanks for your interest in helping us out!
Heya, thanks for the bug report!
Mar 2 2018
Thanks for the report. I haven't looked into this specific case, so it's indeed possible it's a bug, but in the general case this is potentially normal behavior.
Branches can point to either releases or revisions (or, in fact, anything at all).
In the Git case, which your case looks like it comes from, simply doing "git tag" creates a ref pointing to a revision; whereas "git tag -a" (annotated tag) creates a release object (pointing to a revision) and a ref pointing to that release object. So an author switching from "git tag" to "git tag -a" would explain what you have seen.
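To illustrate the difference, here is a toy model (not the actual swh.model API) of how the two kinds of tags resolve to a revision: a lightweight tag is a ref pointing directly at a revision, while an annotated tag introduces a release object in between:

```python
# Toy model (illustrative only, not the real swh.model API) of how the two
# kinds of git tags map onto the archive's data model.

def resolve_branch_target(target, releases):
    """Return the revision id a branch/ref ultimately points to.

    `target` is a ("revision", id) pair for a lightweight tag, or a
    ("release", id) pair for an annotated tag; `releases` maps release
    ids to the revision id each release points to.
    """
    kind, obj_id = target
    if kind == "revision":   # lightweight tag: ref -> revision
        return obj_id
    if kind == "release":    # annotated tag: ref -> release -> revision
        return releases[obj_id]
    raise ValueError(f"unsupported target type: {kind}")
```

Both tags end up at a revision, but only the annotated one leaves a release object behind, which is why the same repository can show a mix of both depending on how its authors tagged.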
Feb 23 2018
We now have a preliminary version of this at https://docs.softwareheritage.org/devel/swh-model/data-model.html#data-model .
We still lack prose description of the diagram though.
Feb 20 2018
In T975#18129, @olasd wrote:
I'm still not entirely convinced that this work should be done through raw SQL queries rather than through the higher-level Python APIs. This depends on the following considerations:
- how often do you need to update the data, and for how long?
- how many different parameters do you need to have? (pom.xml/build.gradle/...)
- how often will the parameters change?
Feb 9 2018
Regarding the placement of the code, the (ideal) data structure you need for this is a lazy version of the DAG, because you need to fetch only the nodes that are different, and the only module that can do that for you is indeed swh.storage.
So I'm fine having this in swh.storage. I'm not sure I like "utils" (which sounds low-level); maybe "operations" or "algo" (which sound higher-level, like this functionality actually is) would be more appropriate here? I have no strong opinion either way...
Either way, I'd like to make this uniform with what @seirl has done for toposort in swh.model. So whatever name we pick here, toposort in swh.model should go under swh.model.THATNAME.toposort .
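The lazy-DAG idea can be sketched roughly as follows (all names are illustrative; `fetch_children` stands in for whatever swh.storage call retrieves a node's successors, and `known` for the set of nodes already archived):

```python
# Minimal sketch of lazy DAG traversal: children are fetched on demand,
# and any node already known (e.g. already archived) is pruned, so
# unchanged subgraphs are never fetched at all.

def lazy_walk(roots, known, fetch_children):
    """Yield the nodes reachable from `roots` that are not in `known`.

    `fetch_children(node)` returns the successors of `node`; it is only
    called for nodes we actually visit, which is what makes the
    traversal lazy.
    """
    seen = set()
    stack = list(roots)
    while stack:
        node = stack.pop()
        if node in seen or node in known:
            continue
        seen.add(node)
        yield node
        stack.extend(fetch_children(node))
```

Since `fetch_children` is never invoked for pruned nodes, the cost is proportional to the *new* part of the DAG rather than its full size, which is the point of keeping this close to swh.storage.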
I don't see anything new here. Subversion offers no integrity guarantees; this applies to the ASF repos just as it applies to any other SVN repo out there. We need to decide a policy about when (if at all) to re-do full ingestions of Subversion repos (which would allow re-injecting modified objects, at the cost of forking the resulting history on Software Heritage), or just say *shrug* and never re-ingest in a non-incremental way any Subversion repo we have previously ingested.