Mar 16 2021
fix tear down method
lgtm
2 disks were installed in the 2 remaining free slots.
They are detected by the RAID card but need to be configured in JBOD mode.
This is postponed to Thursday morning, as granet is sensitive until a demonstration on Wednesday afternoon.
Change the flask app implementation method
Changing status to "pending changes" as I will rework the app initialization to use the same method as swh-objstorage.
Mar 15 2021
Review feedback:
- rename the strangely named test
- use a getter for the redis client property
- cleanup code
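The getter suggestion above can be sketched as a lazily-initialized property. This is only an illustration of the pattern, not swh-counters' actual code: the class and attribute names are hypothetical, and an injectable factory stands in for the real `redis.Redis` constructor so the sketch runs without a Redis server.

```python
class Counters:
    """Hypothetical sketch: expose the Redis client through a lazy getter
    instead of creating the connection eagerly in __init__."""

    def __init__(self, host, client_factory=None):
        self.host = host
        # Stand-in for redis.Redis; a real implementation would default to it.
        self._client_factory = client_factory or (lambda host: ("fake-redis", host))
        self._client = None

    @property
    def redis_client(self):
        # Connect on first access only, then reuse the same client.
        if self._client is None:
            self._client = self._client_factory(self.host)
        return self._client
```

The benefit is that importing or instantiating the class never opens a connection; tests can also inject a fake factory instead of patching a module-level client.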
The build of the Jenkins images is now OK: https://jenkins.softwareheritage.org/job/jenkins-tools/job/swh-jenkins-dockerfiles/
The build was failing with a cryptic error:
```
14:20:43 error committing wijin0be0ztxj3pd6l64aeahh: invalid mutable ref 0xc001a73760: invalid: error committing 8bvh4xv6le4xw14m4fr8zvwho: invalid mutable ref 0xc001a72040: invalid: executor failed running [/bin/sh -c export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y apt-transport-https curl ca-certificates gpg && echo deb [signed-by=/usr/share/keyrings/postgres-archive-keyring.gpg] http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main > /etc/apt/sources.list.d/postgres.list && curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor > /usr/share/keyrings/postgres-archive-keyring.gpg && echo deb [signed-by=/usr/share/keyrings/yarnpkg-archive-keyring.gpg] https://dl.yarnpkg.com/debian/ stable main > /etc/apt/sources.list.d/yarnpkg.list && curl -fsSL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor > /usr/share/keyrings/yarnpkg-archive-keyring.gpg && echo deb [signed-by=/usr/share/keyrings/elasticsearch-archive-keyring.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main > /etc/apt/sources.list.d/elastic-7.x.list && curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor > /usr/share/keyrings/elasticsearch-archive-keyring.gpg && echo deb [signed-by=/usr/share/keyrings/cassandra.gpg] http://www.apache.org/dist/cassandra/debian 40x main > /etc/apt/sources.list.d/cassandra.list && curl -fsSL https://downloads.apache.org/cassandra/KEYS | gpg --dearmor > /usr/share/keyrings/cassandra.gpg && apt-get update && apt-get upgrade -y && apt-get install -y arcanist build-essential cassandra elasticsearch fuse3 git-lfs jq libfuse3-dev libsvn-dev libsystemd-dev lzip maven mercurial pkg-config postgresql-11 postgresql-client-11 postgresql-server-dev-11 python3-dev python3-pip python3-venv subversion tini yarn zstd]: stat /var/lib/docker/overlay2/8bvh4xv6le4xw14m4fr8zvwho: no such file or directory
14:20:43 make: *** [Makefile:45: swh-jenkins/base-buster] Error 1
14:20:43 Build step 'Execute shell' marked build as failure
```
Used signed-by for all the 3rd-party repositories.
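For illustration, the signed-by scheme scopes trust of each 3rd-party repository to its own keyring file instead of the global apt-key store. The helper below is purely hypothetical (not part of the build scripts); it only shows the shape of the sources.list line that the Dockerfile generates:

```python
def sources_list_line(keyring, url, suite, component="main"):
    """Build an apt sources.list entry whose trust is limited to one
    keyring via signed-by, instead of the deprecated apt-key store."""
    return f"deb [signed-by={keyring}] {url} {suite} {component}"


# Example matching the postgres repository configured in the build log above.
line = sources_list_line(
    "/usr/share/keyrings/postgres-archive-keyring.gpg",
    "http://apt.postgresql.org/pub/repos/apt/",
    "buster-pgdg",
)
```

With this layout, a compromised 3rd-party key can only sign packages from its own repository, which is why signed-by is preferred over apt-key.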
Mar 12 2021
- adapt kafka's healthcheck
- fix the typo in the initial log of the swh-counters container
Add the swh-counters in the docker image
A first release of the Debian package is needed to be able to launch the counters without overrides.
use redis-server package instead of a metapackage
In D5235#133003, @anlambert wrote:...
Looks simpler to me but there might be a reason to not use apt-key.
update commit message
Add task link on the commit message
Update the commit message
Fix review feedback
- All workers and journal clients stopped before upgrading storage1 and db1
Mar 11 2021
swh-search0
- stopping writes
```
root@search0:~# systemctl stop swh-search-journal-client@objects
root@search0:~# systemctl stop swh-search-journal-client@indexed
root@search0:~# puppet agent --disable "zfs upgrade"
```
- package upgrades
- `swh-search0` rebooted
- all services are up and running
Add tests
Mar 10 2021
Mail sent to the DSI to request the installation of 2 of the new disks.
Overview of the system:
- 2 slots available (10 slots occupied out of a total of 12)
- system installed on 2 SSD disks (wwn-0x500a075122f366e4 and wwn-0x500a075122f357f1)
- 2 zfs pools
```
NAME  SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
hdd   29.1T  22.8T  6.29T  -        -         20%   78%  1.00x  ONLINE  -
ssd   10.3T  7.91T  2.44T  -        -         24%   76%  1.00x  ONLINE  -
```
```
root@granet:~# zpool status -v hdd
  pool: hdd
 state: ONLINE
  scan: scrub repaired 0B in 0 days 15:42:24 with 0 errors on Sun Feb 14 16:06:26 2021
config:
```
Mar 8 2021
Mar 5 2021
Thanks for the feedback
- Repository created : https://forge.softwareheritage.org/source/swh-counters/
- Jenkins jobs configured : https://jenkins.softwareheritage.org/job/DCNT/
Let's start the subject ;)
I forgot one step: removing the previous alias origin -> origin_production, which is not needed anymore:
```
vsellier@search-esnode1 ~ % curl -s http://$ES_SERVER/_cat/indices\?v && echo && curl -s http://$ES_SERVER/_cat/aliases\?v && echo && curl -s http://$ES_SERVER/_cat/health\?v
health status index             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   origin-production hZfuv0lVRImjOjO_rYgDzg 90  1   153130652  26701625     273.4gb    137.3gb
```
The new configuration is deployed; swh-search is now using the alias, which should help with future upgrades.
Deployment in production:
- puppet stopped
- configuration updated to declare the index; this needs to be done so swh-search initializes the aliases before the journal clients start (not guaranteed with a puppet apply)
- package updated
- gunicorn-swh-search service restarted:
```
Mar 05 09:08:46 search1 python3[1881743]: 2021-03-05 09:08:46 [1881743] gunicorn.error:INFO Starting gunicorn 19.9.0
Mar 05 09:08:46 search1 python3[1881743]: 2021-03-05 09:08:46 [1881743] gunicorn.error:INFO Listening at: unix:/run/gunicorn/swh-search/gunicorn.sock (1881743)
Mar 05 09:08:46 search1 python3[1881743]: 2021-03-05 09:08:46 [1881743] gunicorn.error:INFO Using worker: sync
Mar 05 09:08:46 search1 python3[1881748]: 2021-03-05 09:08:46 [1881748] gunicorn.error:INFO Booting worker with pid: 1881748
Mar 05 09:08:46 search1 python3[1881749]: 2021-03-05 09:08:46 [1881749] gunicorn.error:INFO Booting worker with pid: 1881749
Mar 05 09:08:46 search1 python3[1881750]: 2021-03-05 09:08:46 [1881750] gunicorn.error:INFO Booting worker with pid: 1881750
Mar 05 09:08:46 search1 python3[1881751]: 2021-03-05 09:08:46 [1881751] gunicorn.error:INFO Booting worker with pid: 1881751
Mar 05 09:08:53 search1 python3[1881750]: 2021-03-05 09:08:53 [1881750] swh.search.api.server:INFO Initializing indexes with configuration:
Mar 05 09:08:53 search1 python3[1881750]: 2021-03-05 09:08:53 [1881750] elasticsearch:INFO HEAD http://search-esnode2.internal.softwareheritage.org:9200/origin-production [status:200 request:0.023s]
Mar 05 09:08:54 search1 python3[1881750]: 2021-03-05 09:08:54 [1881750] elasticsearch:INFO PUT http://search-esnode1.internal.softwareheritage.org:9200/origin-production/_alias/origin-read [status:200 request:0.487s]
Mar 05 09:08:54 search1 python3[1881750]: 2021-03-05 09:08:54 [1881750] elasticsearch:INFO PUT http://search-esnode3.internal.softwareheritage.org:9200/origin-production/_alias/origin-write [status:200 request:0.152s]
Mar 05 09:08:54 search1 python3[1881750]: 2021-03-05 09:08:54 [1881750] elasticsearch:INFO PUT http://search-esnode1.internal.softwareheritage.org:9200/origin-production/_mapping [status:200 request:0.009s]
```
```
vsellier@search-esnode1 ~ % curl -s http://$ES_SERVER/_cat/indices\?v && echo && curl -s http://$ES_SERVER/_cat/aliases\?v && echo && curl -s http://$ES_SERVER/_cat/health\?v
health status index             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   origin-production hZfuv0lVRImjOjO_rYgDzg 90  1   153097672  144224208    288.1gb    149gb
```
Mar 4 2021
swh-search:v0.7.1 deployed in staging according to the defined plan.
The aliases are correctly created and used by the services:
```
vsellier@search-esnode0 ~ % curl -XGET -H "Content-Type: application/json" http://192.168.130.80:9200/_cat/indices
green open  origin                      HthJj42xT5uO7w3Aoxzppw 80 0 929692 137147 4gb 4gb
green close origin-backup-20210209-1736 P1CKjXW0QiWM5zlzX46-fg 80 0
green close origin-v0.5.0               SGplSaqPR_O9cPYU4ZsmdQ 80 0
vsellier@search-esnode0 ~ % curl -XGET -H "Content-Type: application/json" http://192.168.130.80:9200/_cat/aliases
origin-read  origin - - - -
origin-write origin - - - -
```
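The read/write alias indirection shown above is what makes future index upgrades cheap: clients always talk to `origin-read` and `origin-write`, and an upgrade only repoints the aliases to a new concrete index. A toy in-memory sketch of the idea (this is an illustration of the pattern, not Elasticsearch's actual API):

```python
class AliasedIndices:
    """Toy model of Elasticsearch-style alias indirection."""

    def __init__(self):
        self.indices = {}  # concrete index name -> list of documents
        self.aliases = {}  # alias name -> concrete index name

    def create_index(self, name):
        self.indices[name] = []

    def put_alias(self, index, alias):
        self.aliases[alias] = index

    def resolve(self, name):
        # An alias is transparently resolved to its concrete index.
        return self.aliases.get(name, name)

    def write(self, name, doc):
        self.indices[self.resolve(name)].append(doc)

    def read(self, name):
        return list(self.indices[self.resolve(name)])


store = AliasedIndices()
store.create_index("origin-v0.5.0")
store.put_alias("origin-v0.5.0", "origin-read")
store.put_alias("origin-v0.5.0", "origin-write")
store.write("origin-write", {"url": "https://example.org/repo"})

# Upgrading repoints the write alias to a new index; clients keep the
# same configuration and never see the concrete index names.
store.create_index("origin-v0.7.1")
store.put_alias("origin-v0.7.1", "origin-write")
```

In real Elasticsearch the repointing would be a single atomic `_aliases` update, so readers never observe a state without a backing index.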
Journal clients:
```
Mar 04 16:22:40 search0 swh[3598137]: INFO:elasticsearch:POST http://search-esnode0.internal.staging.swh.network:9200/origin-write/_bulk [status:200 request:0.013s]
Mar 04 16:22:41 search0 swh[3598137]: INFO:elasticsearch:POST http://search-esnode0.internal.staging.swh.network:9200/origin-write/_bulk [status:200 request:0.012s]
```
Search:
```
Mar 04 15:40:20 search0 python3[3598040]: 2021-03-04 15:40:20 [3598040] swh.search.api.server:INFO Initializing indexes with configuration:
Mar 04 15:40:20 search0 python3[3598040]: 2021-03-04 15:40:20 [3598040] elasticsearch:INFO HEAD http://search-esnode0.internal.staging.swh.network:9200/origin [status:200 request:0.005s]
Mar 04 15:40:20 search0 python3[3598040]: 2021-03-04 15:40:20 [3598040] elasticsearch:INFO HEAD http://search-esnode0.internal.staging.swh.network:9200/origin-read/_alias [status:200 request:0.001s]
Mar 04 15:40:20 search0 python3[3598040]: 2021-03-04 15:40:20 [3598040] elasticsearch:INFO HEAD http://search-esnode0.internal.staging.swh.network:9200/origin-write/_alias [status:200 request:0.001s]
Mar 04 15:40:20 search0 python3[3598040]: 2021-03-04 15:40:20 [3598040] elasticsearch:INFO PUT http://search-esnode0.internal.staging.swh.network:9200/origin/_mapping [status:200 request:0.006s]
Mar 04 16:19:27 search0 python3[3598042]: 2021-03-04 16:19:27 [3598042] elasticsearch:INFO GET http://search-esnode0.internal.staging.swh.network:9200/origin-read/_search?size=100 [status:200 request:0.076s]
```