Make it explicit that the current behavior does not compute another snapshot when nothing
changes (so everything gets filtered out).
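A minimal sketch of that behavior (the function names and the toy id computation are invented for illustration; this is not the actual swh loader code):

```python
import hashlib


def snapshot_id(branches):
    """Toy snapshot identifier: hash of the sorted branch list.

    Hypothetical sketch only; the real Software Heritage snapshot
    id is computed differently.
    """
    payload = "\n".join(sorted(branches)).encode()
    return hashlib.sha1(payload).hexdigest()


def maybe_create_snapshot(previous_id, branches):
    """Return a new snapshot id, or None when nothing changed."""
    new_id = snapshot_id(branches)
    if new_id == previous_id:
        # Nothing changed since the previous visit: do not compute
        # (or store) another snapshot.
        return None
    return new_id
```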
Sep 14 2021
In D6252#161761, @olasd wrote:
> Looks like the format you're expecting for the content-disposition header isn't quite standards-compliant.
> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition says the content-disposition filename entry is supposed to be a quoted string.
Awesome. Thanks a lot.
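For reference, the Python standard library can produce a header value with the filename as a quoted string, matching what the MDN page linked above describes (the filename here is just an example):

```python
from email.message import Message

# Build a Content-Disposition value; add_header() emits the filename
# parameter as a quoted string when needed.
msg = Message()
msg.add_header("Content-Disposition", "attachment",
               filename="swh export.tar.gz")

# The resulting header value, with the filename quoted:
value = msg["Content-Disposition"]

# Parsing it back recovers the original filename.
parsed = msg.get_filename()
```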
Build is green
Adapt the test so that it is now a fork that gets loaded
Build is green
closed by 94be817f869409c64415b181824071d2998e33d5
Revert 'drop the flush instruction'
closed by a3c1f39013bae1a6982140d51d8bb443dc1b5c9c
Keep port 5092 exposed on host
thx
Build is green
I'll update those tomorrow, as it's still ongoing.
Rebase
Build is green
- maven-lister: Fix tests (review D6133)
Sep 13 2021
In T3468#70305, @ardumont wrote:
> great, let's close this then.
In D6250#161647, @olasd wrote:
> requests.get already follows redirects by default. I believe that this boolean only applies to POST/PUT/DELETE requests.
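A quick standard-library illustration of that default (urllib, like `requests.get`, follows GET redirects automatically; the toy local server exists only for the demo):

```python
import http.server
import threading
import urllib.request


class RedirectHandler(http.server.BaseHTTPRequestHandler):
    """Toy server: /old redirects to /new, which returns a body."""

    def do_GET(self):
        if self.path == "/old":
            self.send_response(302)
            self.send_header("Location", "/new")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"arrived")

    def log_message(self, *args):
        pass  # keep the demo quiet


server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A plain GET on /old lands on /new: the redirect is followed by default.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/old") as resp:
    body = resp.read()

server.shutdown()
```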
The Cassandra implementation of extid_get_from_target needs to be changed to actually allow filtering on both extid_type and extid_version.
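The intended filtering behavior might look like this sketch (row shape and function signature are modeled on the comment above, not on the actual swh-storage code):

```python
from dataclasses import dataclass


@dataclass
class ExtIDRow:
    """Minimal stand-in for an ExtID row fetched by target."""
    extid_type: str
    extid_version: int
    extid: bytes


def filter_extids(rows, extid_type=None, extid_version=None):
    """Keep only rows matching BOTH filters, when they are given.

    Hypothetical helper: the point is that type and version must be
    applied together, not one or the other.
    """
    result = []
    for row in rows:
        if extid_type is not None and row.extid_type != extid_type:
            continue
        if extid_version is not None and row.extid_version != extid_version:
            continue
        result.append(row)
    return result
```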
Add a bit of documentation in the README file on how to consume kafka from the host
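Something along these lines could go in the README (the broker port matches the 5092 exposed on the host elsewhere in this thread; the topic name is a placeholder, not the real one):

```shell
# Consume journal messages from the dockerized Kafka broker, from the host.
# Topic name below is an example only.
kafka-console-consumer.sh \
    --bootstrap-server localhost:5092 \
    --topic swh.journal.objects.origin \
    --from-beginning
```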
It's not worth the trouble, and there is a better solution (server-side)
In D6234#161606, @vlorentz wrote:
> You could also add a command in swh-dataset's entrypoint.sh that calls whatever Kafka's script does
Build is green
Improve test assertion
In D6234#161506, @vlorentz wrote:
> In D6234#161491, @douardda wrote:
>> So either I kill this diff or it stays "intricate" with the setup of the consumer (so the whole journalprocessor.py)
>> Note: this feature is mainly useful for testing purposes IMHO, so I suppose it's not that critical to keep it; I just find it handy when "playing" with the swh dataset export
> Meh. How much easier does it make testing, compared to using Kafka's CLI (from the linked comment)?
Build is green
rebase
in favor of D6247, because phab/arcanist won't let me update this one any more (sorry)