Fix previous too enthusiastic commit
Nov 9 2020
Use an alias for the sentry entry to clarify the internal IP usage
remove wrong plural
Nov 6 2020
LGTM as a coauthor 😂
LGTM
LGTM
Nov 4 2020
The only remaining task is the monitoring / metrics gathering; it will be detailed in another dedicated task.
In T2721#52000, @vsellier wrote: after digging into why the git configuration is not pushed, I have found in the git backup configuration [1] that the plugin needs a `configuration-changed` event to detect the update.
Now an upgrade can be performed without interruption:
- On glyptotek (SLAVE), upgrade to version 20.7.4, launched via the web ui
- Switch the master from pushkin to glyptotek via the web ui (Interfaces / Virtual IPs / Status => Enter Persistent CARP Maintenance Mode) on pushkin
- Everything seems to work well on glyptotek in 20.7.4, so the operation can be repeated on pushkin
- Don't forget to disable the Maintenance Mode on both firewalls
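As a quick sanity check (a minimal sketch; it assumes shell access on the firewalls, `opnsense-version` being the stock command shipped with OPNsense):

# confirm the running version on each firewall after the switch
opnsense-version   # expected to report 20.7.4 once the upgrade is done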
Nov 3 2020
- glyptotek hostname reserved on the host naming page [1]
- the pushkin vm was cloned on proxmox and deployed on beaubourg for the HA (pushkin is running on branly)
- to be able to start the new instance without IP conflicts, the network devices had to be disconnected in the proxmox configuration
- the IPs were reconfigured in the text console via the menu available when the root user connects. This is the assignment:
Interface | IP |
---|---|
VLAN440 | 192.168.100.128 |
VLAN442 | 192.168.50.3 |
VLAN443 | 192.168.130.3 |
VLAN1300 | 128.93.166.4 |
- the HA settings were configured on both firewalls to activate the synchronization of the states and of the configuration (menu System / High Availability / Settings); the peer IP was configured to reach fw2 from fw1 and vice versa
- the master/slave switches via the interface (Interfaces > Virtual IPs / Status -> Enter/Leave Persistent CARP Maintenance Mode) work fine; no packets are lost between 2 servers (one in VLAN440 and the other in VLAN443)
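A minimal sketch of the kind of check used to confirm this (the target is a placeholder; any pair of servers routed through the firewall between VLAN440 and VLAN443 works):

# from a server in VLAN440, ping a server in VLAN443 through the firewall
# while the CARP master/slave switch is performed; expect 0% packet loss
ping -i 0.2 -c 300 <server-in-VLAN443>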
After digging into why the git configuration is not pushed, I have found in the git backup configuration [1] that the plugin needs a `configuration-changed` event to detect the update.
This event [2] was added in version 20.7.4. The firewall is on 20.7.3, which can explain why the full process is not working.
Netbox has been up and in use for several weeks now.
The backup is correctly configured:
root@bojimans:/etc/borgmatic# borgmatic info --archive latest
borg@banco.internal.softwareheritage.org:/srv/borg/repositories/bojimans.internal.softwareheritage.org: Displaying summary info for archives
Archive name: bojimans.internal.softwareheritage.org-2020-11-03T12:41:02.069548
Archive fingerprint: f8d0932e85043e61f59b21856a2cd871336d2b7e7a3e7d6e681cd4333f091581
Comment:
Hostname: bojimans
Username: root
Time (start): Tue, 2020-11-03 12:41:03
Time (end): Tue, 2020-11-03 12:41:10
Duration: 7.19 seconds
Number of files: 62391
Command line: /usr/bin/borg create --exclude-from /tmp/tmpo2f1n9xq --exclude-caches --exclude-if-present .nobackup 'borg@banco.internal.softwareheritage.org:/srv/borg/repositories/bojimans.internal.softwareheritage.org::bojimans.internal.softwareheritage.org-{now:%Y-%m-%dT%H:%M:%S.%f}' /
Utilization of maximum supported archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:                1.84 GB            938.96 MB              2.12 MB
All archives:               64.97 GB             32.95 GB              1.06 GB

                       Unique chunks         Total chunks
Chunk index:                   61324              2163683
root@bojimans:~# borgmatic mount --archive latest --mount-point /tmp/bck
root@bojimans:/tmp/bck/opt# du --apparent-size -schP {/tmp/bck,}/opt/netbox* {/tmp/bck,}/var/lib/netbox {/tmp/bck,}/var/lib/postgresql/
17      /tmp/bck/opt/netbox
141M    /tmp/bck/opt/netbox-2.9.3
17      /opt/netbox
156M    /opt/netbox-2.9.3
0       /tmp/bck/var/lib/netbox
16K     /var/lib/netbox
75M     /tmp/bck/var/lib/postgresql/
75M     /var/lib/postgresql/
446M    total
The difference in the sizes returned by `du` on the netbox directory seems due to how the size is computed on the fuse fs:
root@bojimans:~# mount | grep /tmp/bck
borgfs on /tmp/bck type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions)
There are no visible differences between the 2 directories:
root@bojimans:~# diff -r {/tmp/bck,}/opt/netbox-2.9.3/
root@bojimans:~#
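Once the check is done, the archive can be unmounted again (a minimal sketch; the mount point is the one used above and the `borgmatic umount` action is assumed to be available in the installed version):

# detach the FUSE-mounted archive once the verification is finished
borgmatic umount --mount-point /tmp/bck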
Thanks for validating,
I haven't changed the other docker-compose files because I didn't manage to start them, and I'm not sure the storage part is still used.
As they are independent, we can do it in another diff without impacting the main docker-compose.
With @ardumont, we performed several tests on the webapp, the vault, the deposit, the loaders and the listers, and everything seems to be working well.
Nov 2 2020
The puppet agent had been stopped for some time.
It was restarted and the webapp is now up to date.
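A minimal sketch of the commands used to get the agent going again (re-enabling is an assumption; the agent may simply have been stopped as a service):

# re-enable the agent if it had been disabled, then trigger an immediate run
puppet agent --enable
puppet agent --test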
Following the diff D4391, the zfs datasets were reconfigured to be mounted on /srv/softwareheritage/postgres/* :
systemctl stop postgresql@12-main
zfs set mountpoint=none data/postgres-indexer-12
zfs set mountpoint=none data/postgres-secondary-12
zfs set mountpoint=none data/postgres-main-12
zfs set mountpoint=none data/postgres-misc
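The datasets were then pointed at the new locations; a hypothetical example for one of them (the target path follows the /srv/softwareheritage/postgres/<version>/<cluster> layout used elsewhere in this task and is an assumption here):

# remount one dataset under the new hierarchy, then restart the cluster
zfs set mountpoint=/srv/softwareheritage/postgres/12/main data/postgres-main-12
systemctl start postgresql@12-main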
Staging
Staging is already up to date with the last tag. Only the indexer packages need an update.
factorize the base directory declaration to avoid duplication in the puppet code
Oct 30 2020
rebase
I have landed this one as it's accepted. I will prepare other ones for the other databases.
Oct 29 2020
Check the right database availability
Using this, we can execute the "init-admin" command at each start, which can be useful when new super-user migrations are added.
This is a PoC for the scheduler; the initialization of all the databases could be changed this way once the diff on swh-core lands.
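A minimal sketch of the pattern (the entrypoint wrapper and the exact `swh db init-admin` invocation are assumptions; the point is only that an idempotent init command is re-run on every container start before the service itself):

#!/bin/bash
# hypothetical container entrypoint: re-run the idempotent admin
# initialization on every start, then hand over to the real command
set -e
swh db init-admin scheduler --db-name "${POSTGRES_DB}"  # assumed invocation
exec "$@"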
The configuration backup in git is configured [3].
The configuration should be committed to the iFWCFG [1] repository by the user swhfirewall (the credentials are in the credentials repository).
Oct 28 2020
- refactor the postgresql declaration to configure the main cluster instance
Oct 27 2020
The puppetlabs-postgresql module doesn't allow managing several postgresql clusters. We made the tradeoff of using only one cluster on db1 at the beginning, to be able to deploy db1 via puppet as it's the priority. The module will be extended or replaced by something else later.
Oct 26 2020
For the puppet part, the current staging configuration needs some adaptations, as it installs postgresql in versions 11 and 13. Another point is that the different clusters are not managed by puppet, but it's the same for production.
- Create the postgresql:5434 dataset
zfs create data/postgres-secondary-12 -o mountpoint=/srv/softwareheritage/postgres/12/secondary
- Create the postgresql:5435 dataset
zfs create data/postgres-indexer-12 -o mountpoint=/srv/softwareheritage/postgres/12/indexer
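A minimal sketch of how the matching clusters could then be created on top of these datasets (the cluster names mirror the dataset names and the ports are the ones mentioned above; the `pg_createcluster` options are the standard Debian ones):

# create the secondary and indexer clusters on their dedicated ports and datasets
pg_createcluster 12 secondary -d /srv/softwareheritage/postgres/12/secondary -p 5434
pg_createcluster 12 indexer -d /srv/softwareheritage/postgres/12/indexer -p 5435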
Oct 23 2020
- All the servers are migrated to the new network 192.168.130.0/24.
- Netbox is up to date.
- The provisioning code was changed accordingly and applied.
Update the state file after the terraform apply
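A hypothetical sequence matching this commit, assuming the terraform state is versioned in the repository as a plain terraform.tfstate file rather than kept in a remote backend:

# apply the provisioning change, then record the refreshed state file
terraform apply
git add terraform.tfstate
git commit -m "Update the state file after the terraform apply"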