Mar 29 2022
Remove useless debug logs leaking the azure credentials
Mar 24 2022
The test instance can be reached at https://gitlab-staging.swh.network
The diff is now ready to be reviewed.
We will start with a manual deployment as a first step
split gitlab installation from the aks configuration
Mar 23 2022
same
fix output typos
a couple of non-blocking remarks inlined
Mar 22 2022
Other more general points, not directly related to this diff:
- the image should be built from adoptopenjdk/openjdk11:debian-jre or any Debian-based image
- The user in the container seems to be root; doesn't that cause permission issues in the temporary directory?
- Review feedback
- Add HTML report generation plus an index page
- Add an nginx container to serve the HTML files
- Use a docker volume to avoid permission issues in the output directory (see the sketch after this list)
- Ensure the CSV is downloaded each time a docker-compose up is launched
- Use the requirements.txt file to install the dependencies in the image
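A rough sketch of the volume + nginx part of this feedback, using plain docker commands (the image and volume names are placeholders, not the ones from the diff):

# create a named volume shared between the report generator and nginx
docker volume create report-output
# run the report generator with the volume mounted on its output directory
docker run --rm -v report-output:/output report-generator:latest
# serve the generated HTML files (and the index page) with nginx
docker run -d --name report-web -p 8080:80 \
  -v report-output:/usr/share/nginx/html:ro nginx:stable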
Mar 21 2022
The following installation methods were tested (a helm sketch follows the list):
- debian packages
- docker image
- helm charts
- gitlab operator
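For the helm charts method, the installation roughly looks like this (release name, namespace and domain below are placeholders, not the values actually used):

helm repo add gitlab https://charts.gitlab.io/
helm repo update
# install the chart in a dedicated namespace; most settings live under "global.*"
helm upgrade --install gitlab gitlab/gitlab \
  --namespace gitlab --create-namespace \
  --set global.hosts.domain=staging.example.com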
Mar 17 2022
Mar 16 2022
A working PoC is implemented in the snippet repository for now.
btw: these are some examples of the output of the script: P1312
Remove useless commented code
Mar 14 2022
Mar 11 2022
@zack Do you know when the internship will end? (to schedule a reminder for the account removal)
Mar 10 2022
Mar 9 2022
Mar 8 2022
Feb 24 2022
The replication is in place:
root@backup01:~# zfs list -t all
NAME                                                                 USED  AVAIL  REFER  MOUNTPOINT
data                                                                 120G  72.5G    96K  none
data/sync                                                            120G  72.5G    96K  none
data/sync/dali                                                       120G  72.5G    96K  none
data/sync/dali/postgresql                                            120G  72.5G  73.2G  none
data/sync/dali/postgresql@autosnap_2022-02-08_19:04:44_monthly      22.9G      -  72.3G  -
data/sync/dali/postgresql@autosnap_2022-02-18_00:00:01_daily        3.11G      -  73.2G  -
data/sync/dali/postgresql@autosnap_2022-02-19_00:00:01_daily        2.43G      -  73.2G  -
data/sync/dali/postgresql@autosnap_2022-02-20_00:00:01_daily        2.39G      -  73.2G  -
data/sync/dali/postgresql@autosnap_2022-02-21_00:00:01_daily        2.44G      -  73.2G  -
data/sync/dali/postgresql@autosnap_2022-02-22_00:00:00_daily        2.47G      -  73.2G  -
data/sync/dali/postgresql@autosnap_2022-02-23_00:00:02_daily        2.56G      -  73.2G  -
data/sync/dali/postgresql@autosnap_2022-02-24_00:00:00_daily           0B      -  73.2G  -
data/sync/dali/postgresql/wal                                        600M  72.5G  88.4M  none
data/sync/dali/postgresql/wal@autosnap_2022-02-08_19:04:44_monthly  61.9M      -  61.9M  -
data/sync/dali/postgresql/wal@autosnap_2022-02-18_00:00:01_daily    90.9M      -   107M  -
data/sync/dali/postgresql/wal@autosnap_2022-02-19_00:00:01_daily    94.7M      -   111M  -
data/sync/dali/postgresql/wal@autosnap_2022-02-20_00:00:01_daily    55.4M      -  87.5M  -
data/sync/dali/postgresql/wal@autosnap_2022-02-21_00:00:01_daily    50.1M      -  98.2M  -
data/sync/dali/postgresql/wal@autosnap_2022-02-22_00:00:00_daily    57.7M      -   106M  -
data/sync/dali/postgresql/wal@autosnap_2022-02-23_00:00:02_daily    52.8M      -  68.8M  -
data/sync/dali/postgresql/wal@autosnap_2022-02-24_00:00:00_daily       0B      -  88.4M  -
The retention will be 2 monthly snapshots and 30 daily snapshots.
The space usage should be around 200GB, so we will probably have to extend the data disk a little.
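The autosnap_* snapshot names in the listing above are sanoid's naming scheme, so the retention could be expressed with a pruning policy along these lines (a minimal sketch; the template name is an assumption):

# /etc/sanoid/sanoid.conf on backup01 (sketch)
cat > /etc/sanoid/sanoid.conf <<'EOF'
[data/sync/dali/postgresql]
  use_template = backup
  recursive = yes

[template_backup]
  daily = 30
  monthly = 2
  hourly = 0
  yearly = 0
  autosnap = no      # snapshots are received from dali, not created locally
  autoprune = yes
EOF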
update commit message
avoid unnecessary update if the no-sync-snap is not specified
Feb 23 2022
- backup01 vm created on azure
- zfs installed (will be reported in puppet):
- add contrib repository
- install zfs
# apt install linux-headers-cloud-amd64 zfs-dkms
- configure zfs pool (a rough sketch of the commands follows the disk listing)
root@backup01:~# fdisk /dev/sdc -l
Disk /dev/sdc: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk model: Virtual Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D0FB08C6-F046-F340-AC8B-D6C9372015D5
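A rough sketch of the corresponding commands (the Debian release and the repository line are assumptions; the pool/dataset names match the zfs list output above):

# enable the contrib repository (assuming a bullseye-based image)
echo "deb http://deb.debian.org/debian bullseye contrib" > /etc/apt/sources.list.d/contrib.list
apt update
apt install -y linux-headers-cloud-amd64 zfs-dkms zfsutils-linux
# create the pool on the dedicated data disk, unmounted by default
zpool create -O mountpoint=none data /dev/sdc
zfs create data/sync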
- Assign a static IP so the VM does not use an address in the middle of the workers' range
- Ensure the data disk is not deleted in case of accidental removal of the VM
- Use a supported rsa key
- fix the ssh-key provisioning
Feb 22 2022
Update facts (a sketch of the change is below):
- Remove the location entry
- add the deployment variable
- add the subnet variable
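Assuming these are external facts managed under facter's facts.d directory, the change could look like this (file name and values are placeholders):

# drop the obsolete location fact and add the new ones (sketch)
rm -f /etc/facter/facts.d/location.txt
cat > /etc/facter/facts.d/deployment.txt <<'EOF'
deployment=admin
subnet=azure-backup
EOF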
After the Elasticsearch restart, there are no more messages related to GC overhead in the logs, but there were a couple of timeouts during the night.
Further investigation is needed.
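To dig into the timeouts, a first check could use the standard cluster health and thread pool endpoints (the ES_SERVER value reuses the address from the alias cleanup session further down):

export ES_SERVER=192.168.130.80:9200
# overall cluster status, pending tasks and unassigned shards
curl -s "http://$ES_SERVER/_cluster/health?pretty"
# per-node thread pool queues and rejections, to correlate with the timeouts
curl -s "http://$ES_SERVER/_cat/thread_pool?v"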
A workaround is deployed to restart the sync if it was interrupted by a race condition scenario
Feb 21 2022
Elasticsearch was restarted and the Sentry issues closed.
Let's monitor whether the GCs come back (a quick check is sketched below).
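A quick way to watch for the GC overhead coming back, using the standard node stats endpoint (the old-generation collectors are usually the interesting ones):

export ES_SERVER=192.168.130.80:9200
# old-generation collection count and time per node; a steadily growing
# collection_time_in_millis would indicate the GC overhead is back
curl -s "http://$ES_SERVER/_nodes/stats/jvm?pretty" | grep -A 4 '"old"'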
First, clean the unused resources, even if this will not free up much:
- aliases cleanup
vsellier@search-esnode0 ~ % export ES_SERVER=192.168.130.80:9200
vsellier@search-esnode0 ~ % curl -XGET http://$ES_SERVER/_cat/aliases
origin-read origin-v0.11 - - - -
origin-write origin-v0.11 - - - -
origin-v0.9.0-read origin-v0.9.0 - - - -
origin-v0.9.0-write origin-v0.9.0 - - - -
vsellier@search-esnode0 ~ % curl -XDELETE http://$ES_SERVER/origin-v0.9.0/_alias/origin-v0.9.0-read
{"acknowledged":true}%
vsellier@search-esnode0 ~ % curl -XDELETE -H "Content-Type: application/json" http://$ES_SERVER/origin-v0.9.0/_alias/origin-v0.9.0-write
{"acknowledged":true}%
The replication of object storage is now running correctly:
-- Journal begins at Thu 2022-02-17 04:52:45 UTC, ends at Mon 2022-02-21 07:44:15 UTC. --
Feb 17 15:41:22 db1 systemd[1]: Starting ZFS dataset synchronization of...
Feb 17 15:41:23 db1 syncoid[283583]: INFO: Sending oldest full snapshot data/objects@syncoid_db1_2022-02-17:15:41:23 (~ 11811.3 GB) to new target filesystem:
Feb 19 13:41:09 db1 systemd[1]: syncoid-storage1-objects.service: Succeeded.
Feb 19 13:41:09 db1 systemd[1]: Finished ZFS dataset synchronization of.
Feb 19 13:41:09 db1 systemd[1]: syncoid-storage1-objects.service: Consumed 1d 10h 59min 6.865s CPU time.
Feb 19 13:41:09 db1 systemd[1]: Starting ZFS dataset synchronization of...
Feb 19 13:41:11 db1 syncoid[3716482]: Sending incremental data/objects@syncoid_db1_2022-02-17:15:41:23 ... syncoid_db1_2022-02-19:13:41:09 (~ 130.3 GB):
Feb 19 14:29:18 db1 systemd[1]: syncoid-storage1-objects.service: Succeeded.
Feb 19 14:29:18 db1 systemd[1]: Finished ZFS dataset synchronization of.
Feb 19 14:29:18 db1 systemd[1]: syncoid-storage1-objects.service: Consumed 25min 43.311s CPU time.
Feb 19 14:29:18 db1 systemd[1]: Starting ZFS dataset synchronization of...
Feb 19 14:29:25 db1 syncoid[1084137]: Sending incremental data/objects@syncoid_db1_2022-02-19:13:41:09 ... syncoid_db1_2022-02-19:14:29:18 (~ 5.3 GB):
Feb 19 14:31:12 db1 systemd[1]: syncoid-storage1-objects.service: Succeeded.
Feb 19 14:31:12 db1 systemd[1]: Finished ZFS dataset synchronization of.
Feb 19 14:31:12 db1 systemd[1]: syncoid-storage1-objects.service: Consumed 1min 7.439s CPU time.
Feb 19 14:35:03 db1 systemd[1]: Starting ZFS dataset synchronization of...
Feb 19 14:35:07 db1 syncoid[1174209]: Sending incremental data/objects@syncoid_db1_2022-02-19:14:29:18 ... syncoid_db1_2022-02-19:14:35:04 (~ 710.1 MB):
Feb 19 14:35:35 db1 systemd[1]: syncoid-storage1-objects.service: Succeeded.
Feb 19 14:35:35 db1 systemd[1]: Finished ZFS dataset synchronization of.
Feb 19 14:35:35 db1 systemd[1]: syncoid-storage1-objects.service: Consumed 10.015s CPU time.
Feb 19 14:40:48 db1 systemd[1]: Starting ZFS dataset synchronization of...
Feb 19 14:40:52 db1 syncoid[1223955]: Sending incremental data/objects@syncoid_db1_2022-02-19:14:35:04 ... syncoid_db1_2022-02-19:14:40:49 (~ 271.6 MB):
Feb 19 14:41:14 db1 systemd[1]: syncoid-storage1-objects.service: Succeeded.
Feb 19 14:41:14 db1 systemd[1]: Finished ZFS dataset synchronization of.
Feb 19 14:41:14 db1 systemd[1]: syncoid-storage1-objects.service: Consumed 5.701s CPU time.
Feb 19 14:46:32 db1 systemd[1]: Starting ZFS dataset synchronization of...
Feb 19 14:46:37 db1 syncoid[1267267]: Sending incremental data/objects@syncoid_db1_2022-02-19:14:40:49 ... syncoid_db1_2022-02-19:14:46:33 (~ 461.8 MB):
Feb 19 14:47:05 db1 systemd[1]: syncoid-storage1-objects.service: Succeeded.
Feb 19 14:47:05 db1 systemd[1]: Finished ZFS dataset synchronization of.
Feb 19 14:47:05 db1 systemd[1]: syncoid-storage1-objects.service: Consumed 8.945s CPU time.
Feb 19 14:52:18 db1 systemd[1]: Starting ZFS dataset synchronization of...
Feb 19 14:52:22 db1 syncoid[1312265]: Sending incremental data/objects@syncoid_db1_2022-02-19:14:46:33 ... syncoid_db1_2022-02-19:14:52:19 (~ 263.2 MB):
Feb 19 14:52:42 db1 systemd[1]: syncoid-storage1-objects.service: Succeeded.
Feb 19 14:52:42 db1 systemd[1]: Finished ZFS dataset synchronization of.
Feb 19 14:52:42 db1 systemd[1]: syncoid-storage1-objects.service: Consumed 6.021s CPU time.
Feb 19 14:58:04 db1 systemd[1]: Starting ZFS dataset synchronization of...
...
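To keep an eye on it, the unit from the journal above can be followed directly (assuming the periodic runs are driven by a systemd timer):

# next scheduled runs of the synchronization units
systemctl list-timers 'syncoid-*'
# follow the synchronization logs in real time
journalctl -u syncoid-storage1-objects.service -f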