Nov 2 2021
Copying my comment from a linked diff:
Hey! Thanks for this initial diff.
Oct 18 2021
Done
Jul 28 2021
LGTM too
May 26 2021
Super clunky: a lot of message types aren't handled, and some messages get filtered out. I don't recall all the specifics, but I tried it around a year ago and it was really bad.
May 5 2021
LGTM
Apr 16 2021
It looks to me like this would be simpler if max_edges were given as a parameter to Traversal, since it's common to most methods. Would that work?
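The suggestion above can be sketched as follows. This is an illustrative mock, not the real swh-graph API: the class and method names (Traversal, visit_nodes) and the adjacency-dict graph representation are assumptions made for the example.

```python
# Hedged sketch: max_edges moves from per-method arguments into the
# Traversal constructor, so every traversal method shares one edge budget.
# Names and data structures are illustrative, not the actual implementation.

class Traversal:
    def __init__(self, graph, edges="*", max_edges=0):
        self.graph = graph          # adjacency dict: node -> list of successors
        self.edges = edges          # edge-restriction spec (unused in this mock)
        self.max_edges = max_edges  # 0 means "no limit"
        self._traversed = 0

    def _check_budget(self):
        self._traversed += 1
        if self.max_edges and self._traversed > self.max_edges:
            raise RuntimeError("max_edges exceeded")

    def visit_nodes(self, src):
        # Plain DFS; the budget check is shared rather than each method
        # taking its own max_edges parameter.
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            yield node
            for dst in self.graph.get(node, ()):
                self._check_budget()
                stack.append(dst)
```

With this shape, callers configure the limit once at construction time and every method honors it.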
Apr 15 2021
rebase
Apr 14 2021
Add documentation and licensing info
Thanks for the review!
I just want to write something here that maybe isn't clear from the initial task description: this filtering must happen *after* the visit, not during it. We can already change *how* the graph is visited using the edges parameter; the goal of this task is to filter the result post-visit.
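The distinction above can be sketched like this. Everything here is a mock made up for illustration (the adjacency-dict graph, the 'type:id' node encoding, and the function names are all assumptions, not the real swh-graph code): the traversal itself is untouched, and the filter runs only on its output.

```python
# Hedged sketch: the `edges` parameter would restrict *how* the graph is
# traversed; the filter discussed here applies *after* the visit instead.

def visit_nodes(graph, src):
    """Plain DFS over an adjacency dict; unchanged by the new filter."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        yield node
        stack.extend(graph.get(node, ()))

def filter_results(nodes, return_types):
    """Post-visit filter: keep only visited nodes of the requested types.
    Node type is assumed to be encoded as a 'type:id' prefix here."""
    return [n for n in nodes if n.split(":", 1)[0] in return_types]
```

The visit still explores every reachable node; only the returned list is narrowed down.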
Right, I suppose we can close the task then?
Remove debug print
Rebase
Apr 13 2021
@zack We talked about this on IRC with @vlorentz, and I think this issue is invalid. We chose to make the source and destination nodes part of the URI in the API. Semantically, it makes sense that accessing the path without these path fragments returns a 404: it's not a missing argument but an invalid path. If we had ?src= and &dst= query parameters instead, then a 400 error would make sense, but in our case those semantics would be really weird.
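The status-code argument above can be made concrete with a small sketch. The route pattern and handler names below are hypothetical, chosen only to contrast the two designs; they are not the actual swh-graph API definition.

```python
import re

# Design actually chosen: src and dst are path fragments. A request that
# omits them simply does not match any route, so the correct answer is
# 404 (no such path), not 400 (bad request to an existing path).
ROUTE = re.compile(r"^/graph/randomwalk/(?P<src>[^/]+)/(?P<dst>[^/]+)$")

def handle_path_style(path):
    m = ROUTE.match(path)
    if m is None:
        return 404  # invalid path, not a missing argument
    return 200

# Hypothetical alternative: src and dst as query parameters. Here a
# missing parameter is a malformed request to a *valid* path, which is
# exactly what 400 expresses.
def handle_query_style(path, query):
    if path != "/graph/randomwalk":
        return 404
    if "src" not in query or "dst" not in query:
        return 400  # valid path, missing required argument
    return 200
```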
Apr 9 2021
- Fix reviews
- Add backward compatibility for loading MPH on strings
Apr 7 2021
Duplicate of T2431
In D5427#137970, @vlorentz wrote:
> The new way of doing things is a lot more natural since we already have the MPH and the .order file
Newcomers aren't aware of this. I had no idea we had those before reading this diff.
I'm not saying the current state of the docs is good enough; I'm saying this commit message doesn't explain the design but rather why we're moving away from the old binary-search solution. The new way of doing things is a lot more natural since we already have the MPH and the .order file, so there's no need to document why the old solution was bad in the main docs.
Where are the .order and MPH computed?
Thanks for the review. I don't think this needs to be documented elsewhere; it just describes why we're making the change. What should be documented instead is why we're using these data structures in the first place. Right now this is done sparsely across the different source files, and this commit updates the already existing documentation.
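The MPH-plus-.order lookup discussed in this thread can be sketched as a simple composition. The real MPH and .order files are binary artifacts produced by the graph compression pipeline; below they are mocked with a dict and a list purely for illustration, and the function names are assumptions.

```python
# Hedged sketch: with a minimal perfect hash (MPH) and the .order
# permutation already available, resolving a SWHID to its node id is a
# two-step composition, with no binary search involved.

def make_mock_mph(swhids):
    """Stand-in for the MPH: maps each known SWHID to a hash in [0, n)."""
    table = {s: i for i, s in enumerate(sorted(swhids))}
    return table.__getitem__

def node_id(swhid, mph, order):
    """Compose the MPH with the .order permutation to get the final
    (renumbered) node id."""
    return order[mph(swhid)]
```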
Apr 6 2021
There's a problem with this diff: it targets the old Java-only backend, which isn't the one used when we run swh graph rpc-serve. The one currently used is written in Python, at swh/graph/server/app.py
Mar 26 2021
Rebase + fix phabricator incorrect ID