
[production] Provision enough space for the search ES cluster to ingest all intrinsic metadata
Closed, Migrated

Description

Since it looks like there will be a lot of newly indexed objects, we must be sure the chosen solution will be able to scale up in the future.

Possible identified options:

  • Deploy a new elasticsearch instance on esnode* [1]
  • Build a new cluster with new bare metal servers [2]
  • Extend the current cluster's storage [3]

[1] Some drawbacks:

  • There is 32GB of memory on the servers, of which 16GB is allocated to the current elasticsearch instances; in the short term we can use 8GB per instance, but we must make sure the log cluster remains functional with only 8GB
    • Possible solution: increase the memory (4 memory slots are still available on the servers)
  • The current cluster uses 3TB per node for a total of 7TB, which could be too small to keep the replication factor in case one node fails. There is no slot available to add new disks; the possible solutions are:
    • reducing the retention delay of the logs (estimated gain: 2TB)
    • replacing the disks with bigger ones

[2] 3 new 1U servers, each with 4 × 2.4TB disks (4.8TB effective) and 32GB of memory; cost: ~€8500 (possible to have up to 8 disks)
[3] The storage is on the Ceph cluster and can be increased, as there are several free disk slots on beaubourg/hypervisor3/branly
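
As a data point for the sizing, the current per-index and per-node usage of the production search cluster can be dumped with elasticsearch's _cat APIs; a quick sketch (assuming the existing nodes answer on port 9200, as they do later in this task):

curl -s 'http://search-esnode1:9200/_cat/indices?v&h=index,pri,rep,store.size,pri.store.size'   # per-index store size
curl -s 'http://search-esnode1:9200/_cat/allocation?v&s=host'                                   # per-node disk usage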

Initial discussion on irc:

11:25 <+vsellier> we will have to check the elasticsearch behavior with production volume
11:26 <+vsellier> the size increases have been important on staging
11:26 <+vsellier> around x5
11:26 <+vsellier> (the index size)
11:27 <+vsellier> if the ratio is the same for production, the bump could be from 250g to 1.2T
11:28 <+vsellier> the search-esnode are not yet sized for this
11:31 <vlorentz> shouldn't we stop the metadata ingestion until they are?
11:31 <+vsellier> (they are vms on proxmox, we will have to think where we can find so much space)
11:32 <+vsellier> the search journal client is not deployed so no problem for the moment, nothing is sent to ES
11:33 <+vsellier> (I mean not deployed in production, it is only on staging)

Event Timeline

vlorentz triaged this task as Normal priority.Feb 11 2021, 1:17 PM
vlorentz created this task.
vsellier renamed this task from Provision enough space for the search ES cluster to ingest all intrinsic metadata to [production] Provision enough space for the search ES cluster to ingest all intrinsic metadata.Feb 15 2021, 10:02 AM

Sharing resources with the existing esnodes is a non-starter IMO; we probably want SSD storage for this. Plus they're at (physical) capacity.

Adding more storage to the proxmox ceph means eating a 3x replication cost on top of the elasticsearch replication, which doesn't give me very warm feelings. I felt the current deployment was more of a PoC than something we would expand longer term.

So I'm leaning towards the dedicated hw proposal. But I'm guessing your pricing is using rotational storage; is this 10k SAS? Do you think that will be fast enough? I believe the current esnodes are 7.2k SATA, so pretty slow spinners.

What would pricing look like with SSDs?

Thanks for the feedback.

The initial quotation was for a PowerEdge R6515 with 2 system drives (240GB) + 4 additional 10k SAS 2.4TB drives.
With 4 × 1.92TB SSDs (3.8TB effective), the quotation is ~€11500.

Final quotation sent for approval.
The details are:
3 PowerEdge R6515 (1U), each with:

  • 10-disk enclosure
  • BOSS controller with 2 × 240GB cards (for the system)
  • 4 × 2.5" 10k SAS 2.4TB disks
  • SFP+ network card
  • 2 SFP cables
  • 2 power supplies with their cables
  • iDRAC Enterprise
  • Rack mount rails with cable management

(I don't attach the quotation to respect the math-info EULA)

After talking with @rdicosmo, we finally chose to replace, on each server, the 4 × 2.4TB HDDs with 6 × 1.9TB SSDs, to be sure we will have good performance and enough space for the future.
The quote will now be sent to the purchasing service according to the usual procedure [1]

[1] https://intranet.softwareheritage.org/wiki/Team_charter#Procurement

Apparently the order was lost somewhere after it was sent to Dell on April 6th 🤔
It was reissued yesterday...

The order was received and confirmed by Dell; ETA: May 28th.
The details were sent to the sysadm mailing list.

According to the tracking page, the order left the factory on Apr 22, 2021. The ETA is May 28, 2021.

  • The DSI is notified of the arrival of the package.

The order seems to have been delivered; I will check with the DSI how we can proceed with the installation.

The servers should be installed in the rack on May 26th. The network configuration will follow the same day or the next day.
They will be installed as-is by the "DSI", so we will have to install the system via the iDRAC once they are reachable.

Inventory links (provisioning):
rack: https://inventory.internal.softwareheritage.org/dcim/racks/2/
search-esnode4.internal.softwareheritage.org: https://inventory.internal.softwareheritage.org/dcim/devices/54/
search-esnode5.internal.softwareheritage.org: https://inventory.internal.softwareheritage.org/dcim/devices/55/
search-esnode6.internal.softwareheritage.org: https://inventory.internal.softwareheritage.org/dcim/devices/56/

We did the following for the 3 servers, one at a time (the first one was used for papercuts ;):

  • (offline) install with the Debian DVD1 iso [1]
  • rebooted into the main OS (yes, it worked, using UEFI mode)
  • installed the vlan and ifenslave packages from the DVD2 iso
  • then configured /etc/network/interfaces to use the network cards (10G) as bonding + VLAN [2]

[1] The network install did not work, as the main interface did not come up and we could
not install the necessary tools from there. The network ports were probably not configured
as untagged vlan440 (as requested early on).

[2] The following configuration was used (mounted as an iso volume to ease copy/paste):

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto ens1f0np0
iface ens1f0np0 inet manual

auto ens1f1np1
iface ens1f1np1 inet manual

auto bond0
iface bond0 inet manual
        bond-mode 802.3ad
        bond-slaves ens1f0np0 ens1f1np1

auto bond0.440
iface bond0.440 inet static
        vlan-raw-device bond0
        address 192.168.100.86/24  # respectively 87, 88
        gateway 192.168.100.1

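With the vlan and ifenslave packages installed, the bond and the tagged VLAN interface can then be brought up without a reboot; a minimal sketch (the gateway address is the one from the configuration above):

ifup bond0 bond0.440
ping -c 3 192.168.100.1   # check the gateway answers over vlan 440
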
The machines are now pingable on our networks. It remains to actually configure them through
puppet (for tomorrow).

ardumont changed the task status from Open to Work in Progress.Jun 8 2021, 4:48 PM
ardumont moved this task from Backlog to in-progress on the System administration board.
  • To manage the disks via zfs, the RAID card needed to be configured in enhanced HBA mode in the iDRAC
  • After a reboot, the disks are properly detected by the system:
root@search-esnode4:~# ls -al /dev/sd*
brw-rw---- 1 root disk 8,  0 Jun  9 04:54 /dev/sda
brw-rw---- 1 root disk 8, 16 Jun  9 04:54 /dev/sdb
brw-rw---- 1 root disk 8, 32 Jun  9 04:54 /dev/sdc
brw-rw---- 1 root disk 8, 48 Jun  9 04:54 /dev/sdd
brw-rw---- 1 root disk 8, 64 Jun  9 04:54 /dev/sde
brw-rw---- 1 root disk 8, 80 Jun  9 04:54 /dev/sdf
brw-rw---- 1 root disk 8, 96 Jun  9 04:54 /dev/sdg
brw-rw---- 1 root disk 8, 97 Jun  9 04:54 /dev/sdg1
brw-rw---- 1 root disk 8, 98 Jun  9 04:54 /dev/sdg2
brw-rw---- 1 root disk 8, 99 Jun  9 04:54 /dev/sdg3
root@search-esnode4:~# smartctl -a /dev/sda
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-16-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     SSDSC2KB019T8R
Serial Number:    PHYF101400PS1P9DGN
LU WWN Device Id: 5 5cd2e4 15325f09e
Add. Product Id:  DELL(tm)
Firmware Version: XCV1DL69
User Capacity:    1,920,383,410,176 bytes [1.92 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Jun  9 04:58:14 2021 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x02)	Offline data collection activity
					was completed without error.
					Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(  288) seconds.
Offline data collection
capabilities: 			 (0x79) SMART execute Offline immediate.
					No Auto Offline data collection support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   1) minutes.
Extended self-test routine
recommended polling time: 	 (  60) minutes.
Conveyance self-test routine
recommended polling time: 	 (  60) minutes.
SCT capabilities: 	       (0x003d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000e   130   130   039    Old_age   Always       -       5753
  5 Reallocated_Sector_Ct   0x0033   100   100   001    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       124
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       16
 13 Read_Soft_Error_Rate    0x001e   130   130   000    Old_age   Always       -       5753
170 Unknown_Attribute       0x0033   100   100   010    Pre-fail  Always       -       0
173 Unknown_Attribute       0x0012   100   100   000    Old_age   Always       -       1
174 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       16
175 Program_Fail_Count_Chip 0x0012   100   100   010    Old_age   Always       -       0
179 Used_Rsvd_Blk_Cnt_Tot   0x0033   100   100   010    Pre-fail  Always       -       0
180 Unused_Rsvd_Blk_Cnt_Tot 0x0032   100   100   000    Old_age   Always       -       14744
181 Program_Fail_Cnt_Total  0x003a   100   100   000    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x003a   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       23
195 Hardware_ECC_Recovered  0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       0
201 Unknown_SSD_Attribute   0x0033   100   100   010    Pre-fail  Always       -       86380317382
202 Unknown_SSD_Attribute   0x0027   100   100   000    Pre-fail  Always       -       0
233 Media_Wearout_Indicator 0x0032   100   100   000    Old_age   Always       -       0
234 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
235 Unknown_Attribute       0x000b   100   100   000    Pre-fail  Always       -       0
241 Total_LBAs_Written      0x0032   100   100   000    Old_age   Always       -       0
242 Total_LBAs_Read         0x0032   100   100   000    Old_age   Always       -       57410
245 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       100

SMART Error Log not supported

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%         1         -
# 2  Short offline       Completed without error       00%         1         -

Read SMART Selective Self-test Log failed: scsi error unsupported field in scsi command
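
The short/extended self-tests visible in the log above can be (re)run on each data disk with smartctl; a sketch, assuming sda to sdf are the six SSDs as listed earlier:

for dev in /dev/sd{a..f}; do smartctl -t short "$dev"; done   # ~1 minute per disk
smartctl -l selftest /dev/sda                                 # check the result once finished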
  • zfs installation:
root@search-esnode4:~# apt update && apt install linux-image-amd64 linux-headers-amd64
root@search-esnode4:~# shutdown -r now  # to apply the kernel
root@search-esnode4:~# apt install libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-dkms zfsutils-linux zfs-zed
  • upgrade to the latest packages installed from backports:
root@search-esnode4:~# apt dist-upgrade # trigger a udev upgrade which leads to a network interface renaming
root@search-esnode4:~# sed -i 's/ens1/enp2s0/g' /etc/network/interfaces
  • pre-zfs configuration actions:
root@search-esnode4:~# puppet agent --disable
root@search-esnode4:~# systemctl disable elasticsearch
root@search-esnode4:~# systemctl stop elasticsearch
root@search-esnode4:~# rm -rf /srv/elasticsearch/nodes
  • create the zfs pool and dataset with sda, sdb, sdd, sde in the pool and sdc and sdf as spares:
root@search-esnode4:~# zpool create -f data -m none wwn-0x55cd2e4153265a69 wwn-0x55cd2e415325f09e wwn-0x55cd2e415325efe4 wwn-0x55cd2e415325efe7 spare wwn-0x55cd2e41532644ea wwn-0x55cd2e41532659ff
root@search-esnode4:~# zfs create -o mountpoint=/srv/elasticsearch/nodes data/elasticsearch
root@search-esnode4:~# zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
data                 624K  6.72T       96K  none
data/elasticsearch    96K  6.72T       96K  /srv/elasticsearch/nodes

root@search-esnode4:~# chown elasticsearch: /srv/elasticsearch/nodes/

root@search-esnode4:~# zfs set atime=off relatime=on data
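
To double-check the pool layout (4 active disks + 2 spares) and the properties set above, something like:

root@search-esnode4:~# zpool status data
root@search-esnode4:~# zfs get atime,relatime,mountpoint data data/elasticsearch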
  • add the first new node, search-esnode4, to the production cluster (it's enough to do this on search-esnode4 only, as the other new nodes will retrieve the cluster topology from search-esnode4)
# Edit /etc/elasticsearch/elasticsearch.yml to add the following line to the discovery.seed_hosts section:
- search-esnode1.internal.softwareheritage.org
root@search-esnode4:~# systemctl start elasticsearch
  • after some time, the shards are evenly distributed across the 4 nodes of the cluster:
curl -s http://search\-esnode4:9200/_cat/allocation\?s\=host\&v
shards disk.indices disk.used disk.avail disk.total disk.percent host           ip             node
    45       64.2gb    64.4gb    128.2gb    192.6gb           33 192.168.100.81 192.168.100.81 search-esnode1
    45       61.9gb    62.1gb    130.5gb    192.6gb           32 192.168.100.82 192.168.100.82 search-esnode2
    45       62.5gb    62.6gb      130gb    192.6gb           32 192.168.100.83 192.168.100.83 search-esnode3
    45       62.4gb    62.6gb      6.6tb      6.7tb            0 192.168.100.86 192.168.100.86 search-esnode4

The other nodes will be configured the same way.
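
Before repeating the procedure on the other nodes, cluster membership and overall health can be quickly double-checked, for example:

curl -s 'http://search-esnode4:9200/_cat/nodes?v&h=name,ip,node.role'
curl -s 'http://search-esnode4:9200/_cluster/health?pretty'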

And all the new nodes are now in the production cluster:

curl -s http://search\-esnode4:9200/_cat/allocation\?s\=host\&v
shards disk.indices disk.used disk.avail disk.total disk.percent host           ip             node
    30       42.7gb    42.9gb    149.7gb    192.6gb           22 192.168.100.81 192.168.100.81 search-esnode1
    30       41.4gb    41.6gb      151gb    192.6gb           21 192.168.100.82 192.168.100.82 search-esnode2
    30       41.7gb    41.8gb    150.8gb    192.6gb           21 192.168.100.83 192.168.100.83 search-esnode3
    30       41.9gb      42gb      6.6tb      6.7tb            0 192.168.100.86 192.168.100.86 search-esnode4
    30       41.8gb    41.9gb      6.6tb      6.7tb            0 192.168.100.87 192.168.100.87 search-esnode5
    30       41.2gb    41.3gb      6.6tb      6.7tb            0 192.168.100.88 192.168.100.88 search-esnode6

The next step will be to switch the swh-search configurations to use the new nodes and progressively remove the old nodes from the cluster.

  • swh-search and journal client service configurations deployed
  • Old node decommissioning on the cluster:
export ES_NODE=192.168.100.86:9200
curl -H "Content-Type: application/json" -XPUT http://${ES_NODE}/_cluster/settings\?pretty -d '{ 
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : "192.168.100.81,192.168.100.82,192.168.100.83"
    }
}'
{
  "acknowledged" : true,
  "persistent" : { },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "exclude" : {
            "_ip" : "192.168.100.81,192.168.100.82,192.168.100.83"
          }
        }
      }
    }
  }
}

The shards are starting to be gently moved off the old servers:

curl -s http://search-esnode4:9200/_cat/allocation\?s\=host\&v
shards disk.indices disk.used disk.avail disk.total disk.percent host           ip             node
    27       38.7gb    38.8gb    153.7gb    192.6gb           20 192.168.100.81 192.168.100.81 search-esnode1
    27       37.7gb    37.8gb    154.8gb    192.6gb           19 192.168.100.82 192.168.100.82 search-esnode2
    22       30.5gb    30.6gb      162gb    192.6gb           15 192.168.100.83 192.168.100.83 search-esnode3
    35         50gb    50.1gb      6.6tb      6.7tb            0 192.168.100.86 192.168.100.86 search-esnode4
    35         50gb    50.2gb      6.6tb      6.7tb            0 192.168.100.87 192.168.100.87 search-esnode5
    34       49.4gb    49.5gb      6.6tb      6.7tb            0 192.168.100.88 192.168.100.88 search-esnode6

Once there are no shards left on the old servers, we will be able to stop them and remove them from the Proxmox server.
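
Once the allocation output shows no shards left on the old nodes, the transient exclusion set above can be cleared by putting the setting back to null, e.g.:

curl -H "Content-Type: application/json" -XPUT http://${ES_NODE}/_cluster/settings\?pretty -d '{
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : null
    }
}'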

  • old nodes removed from proxmox; this freed up some space on ceph.

Result:

  • The volume of the index with the metadata was overestimated; its size is around 350GB. This is still good news, as we will have space for future indexed metadata such as content
  • The metadata search on ES is activated on webapp1, but it seems it's not working due to a mapping issue on one of the fields. Another issue will be created to diagnose the problem
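
As a starting point for that diagnosis, the field mappings of all indices can be dumped from any node of the cluster, for example:

curl -s http://search-esnode4:9200/_mapping\?pretty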