rebase
All Stories
Oct 19 2021
- Fix config file parsing for server initialization
- Send several items per message in the remote provenance storage
- Export batch size and prefetch count as parameters for remote storage
rebase
Build is green
Build is green
Rebase
Rebase and fix a couple of typos in cypress test comments
Hello, I wanted to have a look at your diff but then I saw old commits from me in there.
So, you need to rebase your work on the latest master and then properly diff with the right
range of commits. I gather something like this would do:
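(A sketch, assuming the remote is origin, the upstream branch is master, and the arcanist workflow is in use; the exact branch and remote names are assumptions on my side.)
- git fetch origin
- git rebase origin/master
- arc diff origin/master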
In D6502#168732, @ardumont wrote:
> lgtm

Great!
What I understood of this improvement is that only the detection logic for {tree, commit} was improved.
Whatever was already ingested keeps its "non-standard" mode.
- (1) add an entry to the list of "forges" being listed.
- (2) immediately do the full listing, with high scheduling priority
In D6503#168716, @ardumont wrote:
> That was quick ;)

Thanks.
This actually requires quite some refactoring in the seaweedfs backend,
mostly to support the (somewhat useless) list_content() method.
lgtm
Sent a summary of this discussion to the swh-devel list for input:
Thanks for the great work!
journal0 is stopped for the moment.
It's decommissioned in puppet:
root@pergamon:/usr/local/sbin# ./swh-puppet-master-decommission journal0.internal.staging.swh.network
+ puppet node deactivate journal0.internal.staging.swh.network
Submitted 'deactivate node' for journal0.internal.staging.swh.network with UUID 02929cbb-6b34-4fab-85c1-46c9397fa949
+ puppet node clean journal0.internal.staging.swh.network
Notice: Revoked certificate with serial 199
Notice: Removing file Puppet::SSL::Certificate journal0.internal.staging.swh.network at '/var/lib/puppet/ssl/ca/signed/journal0.internal.staging.swh.network.pem'
journal0.internal.staging.swh.network
+ puppet cert clean journal0.internal.staging.swh.network
Warning: `puppet cert` is deprecated and will be removed in a future release. (location: /usr/lib/ruby/vendor_ruby/puppet/application.rb:370:in `run')
Notice: Revoked certificate with serial 199
+ systemctl restart apache2
I've updated the description with one of your explanatory comments (feel free to adapt it if I'm wrong ;).
Build is green
The return value of mmap() on error is MAP_FAILED ((void *) -1), not NULL.
An error on mmap() was not being detected, so there was no information on why it failed. This is now fixed.
- time tox -e py3 -- --basetemp=/mnt/pytest -s --shard-size $((100 * 1024)) --object-max-size $((4 * 1024)) -k test_build_speed
  number of objects = 45973118
  baseline 163.73826217651367, write_duration 300.58917450904846, build_duration 26.01908826828003, total_duration 326.6082627773285
Build is green
Return an error if the write method exceeds the file capacity
Running benchmarks directly on grid5000
- oarsub -I -l "{cluster='dahu'}/host=1,walltime=1" -t deploy
- kadeploy3 -f $OAR_NODE_FILE -e debian11-x64-base -k
- ssh root@$(tail -1 $OAR_NODE_FILE)
- mkfs.ext4 /dev/sdb1
- mount /dev/sdb1 /mnt
- apt-get install -y python3-venv libcmph-dev gcc git
- git clone https://git.easter-eggs.org/biceps/swh-perfecthash/
- python3 -m venv bench
- source bench/bin/activate
- cd swh-perfecthash
- pip install -r requirements.txt -r requirements-test.txt
- tox -e py3
- time tox -e py3 -- --basetemp=/mnt/pytest -s --shard-size $((100 * 1024)) --object-max-size $((100 * 1024)) -k test_build_speed
- rm -fr /mnt/pytest
/opt/jFed/jFed-Experimenter works but I'll have to wait on the approval of the account before proceeding further.
Created a project in https://portal.fed4fire.eu/ with the intention of using grid5000. It is pending approval from an administrator (see T3670).
Oct 18 2021
After a fight with zookeeper, the new broker is now fully functional on storage1.
It seems the zookeeper content was not synchronized between the two nodes; for example, the credentials were not present on the second zookeeper. This was solved by stopping everything and copying the content of '/var/lib/zookeeper' to the second server.
After the battle, I realized I should have dug deeper to find the root cause, but I tried to react fast to reduce the risk of corruption.
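For the record, the resync boiled down to something like the following sketch, assuming Debian-style kafka/zookeeper services and using good-node as a placeholder for the host holding the correct zookeeper state (service names and hostnames are assumptions, not the exact commands run):
- systemctl stop kafka zookeeper                               # on both nodes
- rsync -a good-node:/var/lib/zookeeper/ /var/lib/zookeeper/   # on the out-of-sync node
- systemctl start zookeeper                                    # bring zookeeper back first
- systemctl start kafka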