Raise an exception that is more useful than P279 — for example:
ValueError("unexpected mime type $foo")
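A minimal sketch of what this could look like, assuming a hypothetical decoding helper (the function name and mime types are illustrative, not the actual codebase):

```python
import json


def decode(content_type, body):
    """Hypothetical sketch: fail loudly, naming the offending mime type,
    instead of raising a generic or cryptic error."""
    if content_type == "application/json":
        return json.loads(body)
    raise ValueError("unexpected mime type %s" % content_type)
```

The point is that the exception message carries the actual mime type encountered, so the failure is diagnosable from the traceback alone.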
Advanced Search
Jul 5 2018
Jul 2 2018
Jun 26 2018
In T1118#20738, @zack wrote:...
In the beginning, it could be something as simple as: "if it looks like a PID, then resolve it; else, do origin search". In the future it can be extended to other syntactical searches, such as file name search, commit ID search, etc. The UI would be much better that way, and it'll always be possible to have a separate "advanced search" page (and/or special search keywords, like search engines do) where the various kinds of searches are broken down by type.
I got this error when searching for 'github':
Jun 18 2018
- Update sparse-deposit and metadata-deposit specs
Jun 14 2018
I completely agree that 'filename' is not enough, and adding a new piece of context each time isn't a good solution.
Both path strategies (integers vs identifiers) are interesting.
Jun 13 2018
Jun 12 2018
Jun 5 2018
May 23 2018
May 18 2018
- docs: Update specs for the sparse-deposit and meta-deposit
- docs: Add swh xml schema for sparse and meta deposits
- docs: update example meta-deposit
May 16 2018
- docs: Add swh xml schema for sparse and meta deposits
- docs: update example meta-deposit
May 11 2018
May 7 2018
As Seirl and Anlambert suggested, it was a Sphinx version error coupled with a pip3 installation problem on the laptop.
With Ardumont's help I reinstalled Sphinx; on stretch the latest packaged version is 1.4.9 (while 1.7.x is the one to use).
May 4 2018
Apr 25 2018
The response of batch_progress looks like this:
In [51]: c.batch_progress(4)
Out[51]:
{'bundles': [{'id': 3,
   'obj_id': '7d4aecffc20478ea6807b9649b25b71e22ebbcb6',
   'obj_type': 'revision_gitfast',
   'progress_message': None,
   'status': 'done'},
  {'id': 4,
   'obj_id': '355873d2daf160b736409a359da9e9ca4d714570',
   'obj_type': 'revision_gitfast',
   'progress_message': None,
   'status': 'done'},
  {'id': 5,
   'obj_id': '612dd81b51a443a19e6a2c17f67ef46ea4c2c123',
   'obj_type': 'revision_gitfast',
   'progress_message': None,
   'status': 'done'}],
 'done': 3,
 'failed': 0,
 'new': 0,
 'pending': 0,
 'total': 3}
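Note that the top-level counters ('done', 'failed', 'new', 'pending', 'total') are just a summary of the per-bundle statuses; a quick cross-check on the response shown above:

```python
from collections import Counter

# The batch_progress response from above, reproduced as a plain dict.
response = {
    'bundles': [
        {'id': 3, 'obj_id': '7d4aecffc20478ea6807b9649b25b71e22ebbcb6',
         'obj_type': 'revision_gitfast', 'progress_message': None,
         'status': 'done'},
        {'id': 4, 'obj_id': '355873d2daf160b736409a359da9e9ca4d714570',
         'obj_type': 'revision_gitfast', 'progress_message': None,
         'status': 'done'},
        {'id': 5, 'obj_id': '612dd81b51a443a19e6a2c17f67ef46ea4c2c123',
         'obj_type': 'revision_gitfast', 'progress_message': None,
         'status': 'done'},
    ],
    'done': 3, 'failed': 0, 'new': 0, 'pending': 0, 'total': 3,
}

# Recompute the summary from the bundle list and check consistency.
counts = Counter(b['status'] for b in response['bundles'])
assert counts['done'] == response['done'] == 3
assert response['total'] == len(response['bundles'])
```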
Apr 23 2018
Goal: deposit a tarball for which part or all of the content is already in the SWH archive
- the paths to the missing directories/content must be provided as empty paths in the tarball
- the list linking each path to the object in the archive will be provided with the metadata
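The tarball-construction side of this could be sketched as follows, using Python's tarfile module (all file names and the identifier in the mapping are illustrative, not part of the actual spec):

```python
import io
import tarfile

# Sparse-deposit sketch: content already in the archive is shipped as an
# empty placeholder entry; a separate mapping, sent alongside the metadata,
# links each placeholder path to the archived object.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    # genuinely new content: included in full
    data = b"int main(void) { return 0; }\n"
    info = tarfile.TarInfo("project/new.c")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
    # content already archived: empty placeholder path
    placeholder = tarfile.TarInfo("project/lib/already_archived.c")
    placeholder.size = 0
    tar.addfile(placeholder)

# Hypothetical path -> archived-object mapping shipped with the metadata.
mapping = {"project/lib/already_archived.c": "swh:1:cnt:" + "0" * 40}
```

The archive side would then, for each mapped path, substitute the referenced object instead of the empty entry.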
After a discussion with Zack:
We need to separate the internal and external use cases.
For the external use cases, we separate "Write" (tied to its deposit technology) from "Read" (which must be easily accessible and user-friendly).
- "Write" deposit: review the URL schema to follow the 'with what tool' pattern
- /sword/2/
- /sword/3/: if we implement the SWORD v3 specs
- /truc/42/: for any other technology/version we implement
Apr 19 2018
From today's email on swh-devel:
Apr 17 2018
Apr 13 2018
Apr 10 2018
Note: the query launched was limited to 10 results, so the two result sets are not identical
08d5853d3b832e1820c3e32c60b2dc65cb2ebe6a https://github.com/rubencepeda/jmxconsole
19f0650345e0ff2cade69f6a105908ea47d0afe5 https://github.com/YxhWife/firstPoj
24874bb64105faf845836150d72ddc151dbf5a14 https://github.com/peergreen/paas-router-manager
5d4a82405b9c8a098d660d52773ec4b1085543c3 https://github.com/bayois/Test-RESTLET
6e08888b9fe67241fff41baa059d223ab7409fca https://github.com/schmidinator/ese2015_hello
7dd9c1c2a2b5f92fbd37c705bc3ccb9e3c777a48 https://github.com/tempest200903/20150903-urlrewrite
7e80df2ea73a6bb9a030c7cb6ea1cfaf6821f3a8 https://github.com/fkrtzr/metainf
807d468a8c47e1f00bb4324119108c89f2d59677 https://github.com/liuxianqiang/protocol-buffer-basic
84629484fb861232daceb71a13918ae337f1d0e1 https://github.com/redmond007/RampUp
9ea852f3a0c73cab0403509f67d4b4164f1b30b2 https://github.com/ansell/bio2rdf-helpers
WITH last_visited AS (
    SELECT o.url url,
           ov.snapshot_id snp,
           (SELECT MAX(date) FROM origin_visit WHERE origin = o.id) AS date
    FROM origin o
    INNER JOIN origin_visit ov ON o.id = ov.origin
    WHERE o.id < 1000
),
head_branch_revision AS (
    SELECT lv.url url,
           s.id snp_sha1,
           sb.target revision_sha1,
           lv.date date
    FROM last_visited lv
    INNER JOIN snapshot s ON lv.snp = s.object_id
    INNER JOIN snapshot_branches sbs ON s.object_id = sbs.snapshot_id
    INNER JOIN snapshot_branch sb ON sbs.branch_id = sb.object_id
    WHERE sb.name = 'HEAD' AND sb.target_type = 'revision'
)
SELECT hbr.url url,
       dir.id directory
FROM head_branch_revision hbr
INNER JOIN revision rev ON hbr.revision_sha1 = rev.id
INNER JOIN directory dir ON rev.directory = dir.id
INNER JOIN directory_entry_file def ON def.id = ANY(dir.file_entries)
WHERE def.name = 'pom.xml';
Mar 30 2018
@ardumont, I think you can resolve this one ;-)
Mar 27 2018
Mar 21 2018
Mar 16 2018
Mar 13 2018
Mar 12 2018
Feb 26 2018
Following @olasd's suggestion, using a CTE:
WITH last_visited AS (
    SELECT o.url url,
           ov.snapshot_id snp,
           (SELECT MAX(date) FROM origin_visit WHERE origin = o.id) AS date
    FROM origin o
    INNER JOIN origin_visit ov ON o.id = ov.origin
    LIMIT 1000
),
head_branch_revision AS (
    SELECT lv.url url,
           s.id snp_sha1,
           sb.target revision_sha1,
           lv.date date
    FROM last_visited lv
    INNER JOIN snapshot s ON lv.snp = s.object_id
    INNER JOIN snapshot_branches sbs ON s.object_id = sbs.snapshot_id
    INNER JOIN snapshot_branch sb ON sbs.branch_id = sb.object_id
    WHERE sb.name = 'HEAD' AND sb.target_type = 'revision'
    LIMIT 100
)
SELECT hbr.url url,
       hbr.snp_sha1 snp_sha1,
       hbr.revision_sha1 revision_sha1,
       hbr.date visit_date,
       dir.id directory
FROM head_branch_revision hbr
INNER JOIN revision rev ON hbr.revision_sha1 = rev.id
INNER JOIN directory dir ON rev.directory = dir.id
INNER JOIN directory_entry_file def ON def.id = ANY(dir.file_entries)
WHERE def.name = 'pom.xml'
LIMIT 1;
Feb 21 2018
Thank you both for your answers.
Feb 20 2018
Query with last visit, master branch and 'pom.xml' file name filtering: