@olasd these are the failed dependencies you told me to expect, right? The missing package is ... libcmph-dev.
Oct 6 2021
I'd like to create a new package (swh-objstorage-hash) and https://docs.softwareheritage.org/devel/tutorials/add-new-package.html is presumably the guide for doing that. However, I do not have the required permissions: would someone be so kind as to work with me on this?
Make it more readable, as suggested.
Trivial bugfix https://forge.softwareheritage.org/D6417
Oct 4 2021
It makes sense to create a dedicated swh-perfecthash package.
That I did not know, so indeed, if we need a specific wrapper for our needs, ...
In addition to being unmaintained, this could be addressed by asking the authors to be in charge of the package.
That was just a test, trash it.
07:38:12 py3 run-test: commands[0] | docker run debian:bullseye date
07:38:12 Unable to find image 'debian:bullseye' locally
07:38:12 bullseye: Pulling from library/debian
07:38:12 df5590a8898b: Already exists
07:38:12 Digest: sha256:86dddd82dddf445aea3d2ea26af46cebd727bf2f47ed810fa1450a0d79722d55
07:38:12 Status: Downloaded newer image for debian:bullseye
tox is called with an explicit -e, so adding a new environment is a no-op unless the matching Jenkins job is updated.
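To illustrate the point, here is a hypothetical tox.ini excerpt; the environment name py3-docker and its contents are made up for the example. Declaring such an environment changes nothing by itself, because Jenkins only runs the environments it selects explicitly with -e.

```
[tox]
envlist = py3

[testenv:py3-docker]
# Hypothetical environment exercising docker-based tests.
# Adding it here is not enough: the Jenkins job definition must also be
# updated to invoke `tox -e py3-docker` explicitly.
allowlist_externals = docker
deps = pytest
commands =
    docker run debian:bullseye date
    pytest {posargs}
```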
Oct 1 2021
SWH I guess: I don't see the difference whether it's embedded in swh-objstorage, winery or a dedicated package.
Sep 29 2021
Wouldn't it make sense to put the cffi-based cmph wrapper in a dedicated python module/project (not necessarily under the swh namespace)?
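For context, a minimal sketch of what a cffi-based wrapper around libcmph could look like (ABI mode). This is not the actual swh-perfecthash / swh-objstorage-hash code: the library name passed to dlopen, the CMPH_CHD constant value, and the helper function are assumptions to be checked against the installed cmph.h / cmph_types.h headers.

```
# Sketch of a cffi (ABI-mode) binding to libcmph's CHD minimal perfect hash.
# Assumes libcmph-dev / libcmph0 are installed; declarations follow cmph.h.
from cffi import FFI

ffi = FFI()
ffi.cdef("""
typedef unsigned int cmph_uint32;
typedef ... cmph_io_adapter_t;
typedef ... cmph_config_t;
typedef ... cmph_t;

cmph_io_adapter_t *cmph_io_vector_adapter(char **vector, cmph_uint32 nkeys);
void cmph_io_vector_adapter_destroy(cmph_io_adapter_t *key_source);
cmph_config_t *cmph_config_new(cmph_io_adapter_t *key_source);
void cmph_config_set_algo(cmph_config_t *mph, int algo);
void cmph_config_destroy(cmph_config_t *mph);
cmph_t *cmph_new(cmph_config_t *mph);
cmph_uint32 cmph_search(cmph_t *mphf, const char *key, cmph_uint32 keylen);
void cmph_destroy(cmph_t *mphf);
""")
# The library name may need adjusting, e.g. "libcmph.so.0".
C = ffi.dlopen("cmph")

# Value of CMPH_CHD in the cmph_algo enum; verify against cmph_types.h.
CMPH_CHD = 8


def build_mph(keys):
    """Build a minimal perfect hash over `keys` (bytes, no embedded NULs)
    and return a {key: index} mapping (illustrative helper, not an swh API)."""
    c_keys = [ffi.new("char[]", k) for k in keys]  # keep buffers alive
    vector = ffi.new("char *[]", c_keys)
    source = C.cmph_io_vector_adapter(vector, len(keys))
    config = C.cmph_config_new(source)
    C.cmph_config_set_algo(config, CMPH_CHD)
    mphf = C.cmph_new(config)
    C.cmph_config_destroy(config)
    try:
        return {k: C.cmph_search(mphf, k, len(k)) for k in keys}
    finally:
        C.cmph_destroy(mphf)
        C.cmph_io_vector_adapter_destroy(source)
```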
Sep 28 2021
@olasd @douardda @thomash05: the following passes tox -e py3, therefore it is not complete nonsense. However, it raises two questions:
Aug 30 2021
Aug 29 2021
Aug 23 2021
Aug 12 2021
Throttling writes to 120MB/s to reduce the pressure:
The proportion of slow random reads reaches ~3.5%, presumably because there is too much write pressure (the throttling of writes was removed).
The benchmarks were modified to (i) use a fixed number of random / sequential readers instead of a random choice, for better predictability, and (ii) introduce throttling to cap the sequential read speed at approximately 200MB/s. A read-only run was started (see below for a sketch of the throttling idea).
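As a rough illustration of the throttling mentioned above, here is a sketch of a blocking rate limiter that caps throughput at a target number of bytes per second (e.g. 120 MB/s for writes, 200 MB/s for sequential reads). The class and the read loop are hypothetical, not the actual benchmark code.

```
# Minimal sketch of a throughput throttler: sleep whenever the amount of
# data transferred runs ahead of the configured bytes-per-second budget.
import time


class Throttler:
    def __init__(self, bytes_per_second: float):
        self.bytes_per_second = bytes_per_second
        self.start = time.monotonic()
        self.sent = 0

    def add(self, nbytes: int) -> None:
        """Account for nbytes transferred; sleep if we are ahead of the cap."""
        self.sent += nbytes
        expected_elapsed = self.sent / self.bytes_per_second
        actual_elapsed = time.monotonic() - self.start
        if expected_elapsed > actual_elapsed:
            time.sleep(expected_elapsed - actual_elapsed)


# Example: cap sequential reads at ~200 MB/s.
throttle = Throttler(200 * 1024 * 1024)
# for chunk in read_chunks(...):   # hypothetical read loop
#     throttle.add(len(chunk))
```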
The run terminated on August 11th at 15:21 because of what appears to be a rare race condition, but it was mostly finished. The results show an unexpected degradation of read performance that keeps getting worse over time and deserves further investigation. The write performance, however, is stable, which suggests the benchmark code itself may be responsible for the degradation: if the Ceph cluster were globally slowing down, both reads and writes would degrade, since previous benchmark results showed a correlation between the two.
Aug 2 2021
Improve the readability of the graphs
Rehearse the run and make minor updates to make sure it runs right away this Friday.