Build is green
All Stories
Aug 26 2021
The backfill is also done for production.
It took less than 4h30:
... 2021-08-25T19:25:25 INFO swh.storage.backfill Processing extid range 700000 to 700001
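For reference, such a run is launched with the swh storage backfill CLI; a minimal sketch, with an illustrative config path and object range (neither is shown in the log above):
# hypothetical invocation; config path and range bounds are illustrative
% SWH_CONFIG_FILENAME=/etc/softwareheritage/backfiller.yml \
  swh storage backfill extid --start-object 0 --end-object 800000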
rebase
status:
- scheduler v0.17.1 deployed on production [1] (db migrated) and staging.
- then the swh-scheduler-journal-client service was restarted.
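For the record, the restart boils down to something like this on the scheduler host (unit name taken from the entry above; sudo and the follow-up check are assumptions):
% sudo systemctl restart swh-scheduler-journal-client
% sudo journalctl -u swh-scheduler-journal-client -f    # follow the logs to confirm the client consumes again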
D6139 should address the bottleneck in the flame graph
Build is green
Build is green
- Add SQL enum for relation get filter options
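For context, a minimal sketch of what such a Postgres enum migration can look like; the database, type name and values below are hypothetical, not the ones from the diff:
% psql -d provenance <<'EOF'
-- hypothetical type name and values
CREATE TYPE rel_flt AS ENUM ('filter-src', 'filter-dst', 'no-filter');
EOF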
rebase
Build is green
Update test according to @ardumont's suggestion
Build is green
Looks good to me.
Build is green
Build is green
Build is green
Fix another typo, in the SQL this time
LGTM
Fix docstring typo
oops, wasn't supposed to be in this diff
re-hide 'raw' endpoints
I bumped the priority since the next-gen scheduler runners depend on the results of the journal client (scheduler metrics as well).
Build is green
- Clarify code change
- Update docstring to mention the possible side when calculating the offset.
- Add a migration script to adapt existing values in db
lgtm, one suggestion inline.
I think so, thanks
@vlorentz should we close this one?
Aug 25 2021
For the rpm support, [1] may help.
Build is green
Rename count_visit_types to visit_types_count.
It was much faster than expected in staging. The backfill is already done:
- on production:
vsellier@kafka1 ~ % /opt/kafka/bin/kafka-topics.sh --bootstrap-server $SERVER --describe --topic swh.journal.objects.extid | grep "^Topic"
Topic: swh.journal.objects.extid  PartitionCount: 256  ReplicationFactor: 2  Configs: cleanup.policy=compact,max.message.bytes=104857600
vsellier@kafka1 ~ % /opt/kafka/bin/kafka-configs.sh --bootstrap-server ${SERVER} --alter --add-config 'cleanup.policy=[compact,delete],retention.ms=86400000' --entity-type=topics --entity-name swh.journal.objects.extid
Completed updating config for topic swh.journal.objects.extid.
In the kafka logs:
...
[2021-08-25 14:56:19,495] INFO [Log partition=swh.journal.objects.extid-162, dir=/srv/kafka/logdir] Found deletable segments with base offsets [0] due to retention time 86400000ms breach (kafka.log.Log)
[2021-08-25 14:56:19,495] INFO [Log partition=swh.journal.objects.extid-162, dir=/srv/kafka/logdir] Scheduling segments for deletion LogSegment(baseOffset=0, size=2720767, lastModifiedTime=1629815520833, largestTime=1629815520702) (kafka.log.Log)
[2021-08-25 14:56:19,495] INFO [Log partition=swh.journal.objects.extid-162, dir=/srv/kafka/logdir] Incremented log start offset to 20623 due to segment deletion (kafka.log.Log)
...
vsellier@kafka1 ~ % /opt/kafka/bin/kafka-configs.sh --bootstrap-server ${SERVER} --alter --delete-config 'cleanup.policy' --entity-type=topics --entity-name swh.journal.objects.extid
Completed updating config for topic swh.journal.objects.extid.
vsellier@kafka1 ~ % /opt/kafka/bin/kafka-configs.sh --bootstrap-server ${SERVER} --alter --delete-config 'retention.ms' --entity-type=topics --entity-name swh.journal.objects.extid
Completed updating config for topic swh.journal.objects.extid.
vsellier@kafka1 ~ % /opt/kafka/bin/kafka-configs.sh --bootstrap-server ${SERVER} --alter --add-config 'cleanup.policy=compact' --entity-type=topics --entity-name swh.journal.objects.extid
Completed updating config for topic swh.journal.objects.extid.
vsellier@kafka1 ~ % /opt/kafka/bin/kafka-topics.sh --bootstrap-server $SERVER --describe --topic swh.journal.objects.extid | grep "^Topic"
Topic: swh.journal.objects.extid  PartitionCount: 256  ReplicationFactor: 2  Configs: cleanup.policy=compact,max.message.bytes=104857600
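To double-check that the temporary delete policy actually reclaimed disk space before restoring compaction, the per-partition sizes can be inspected; a sketch, reusing the $SERVER variable from the sessions above:
% /opt/kafka/bin/kafka-log-dirs.sh --bootstrap-server $SERVER --describe --topic-list swh.journal.objects.extid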
- the cleanup policy was restored to compact on staging:
vsellier@journal0 ~ % /opt/kafka/bin/kafka-configs.sh --bootstrap-server journal0.internal.staging.swh.network:9092 --alter --delete-config 'cleanup.policy' --entity-type=topics --entity-name swh.journal.objects.extid
% /opt/kafka/bin/kafka-configs.sh --bootstrap-server journal0.internal.staging.swh.network:9092 --alter --add-config 'cleanup.policy=compact' --entity-type=topics --entity-name swh.journal.objects.extid
Completed updating config for topic swh.journal.objects.extid.
% /opt/kafka/bin/kafka-topics.sh --bootstrap-server $SERVER --describe --topic swh.journal.objects.extid | grep "^Topic"
Topic: swh.journal.objects.extid  PartitionCount: 64  ReplicationFactor: 1  Configs: cleanup.policy=compact,max.message.bytes=104857600,min.cleanable.dirty.ratio=0.01
Build is green
- set 3 origin types per row instead of 4 to improve readability
Build is green
MongoDB backend implementation for ProvenanceStorage.
Build is green
- Added the necessary directory for mongo pytest to work
Build has FAILED
MongoDB backend implementation for ProvenanceStorage.
Build has FAILED
- Made psql the default provenance storage
Build has FAILED
- Pytest is using mongomock
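For what it's worth, a sketch of how the mongomock-backed tests can be run locally; the -k selection pattern is an assumption:
% pip install mongomock pytest
% pytest -x -k mongo    # assumes the mongo-backed tests match the "mongo" keyword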
I've duplicated the credentials for the relevant forges, and updated the following instance names: