
Prepare disk replacement on granet
Closed, Migrated

Description

4 16TB disks are ready to be installed on granet.

As they will replace existing disks, some data needs to be moved.

TODO: Identify the disks to be replaced

Current zfs status:

root@granet:~# zpool list -v
NAME                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd                             29.1T  22.6T  6.45T        -         -    20%    77%  1.00x    ONLINE  -
  mirror                        14.5T  10.4T  4.16T        -         -    18%  71.4%      -  ONLINE  
    scsi-35000c500ca119fcf          -      -      -        -         -      -      -      -  ONLINE  
    scsi-35000c500ca13ab7f          -      -      -        -         -      -      -      -  ONLINE  
  mirror                        7.27T  6.09T  1.18T        -         -    23%  83.8%      -  ONLINE  
    scsi-35000c500ae751df3          -      -      -        -         -      -      -      -  ONLINE  
    scsi-35000c500ae758333          -      -      -        -         -      -      -      -  ONLINE  
  mirror                        7.27T  6.15T  1.12T        -         -    25%  84.6%      -  ONLINE  
    scsi-35000c500ae750b2f          -      -      -        -         -      -      -      -  ONLINE  
    scsi-35000c500ae759873          -      -      -        -         -      -      -      -  ONLINE  
ssd                             10.3T  7.91T  2.44T        -         -    24%    76%  1.00x    ONLINE  -
  wwn-0x55cd2e41524cd5f1        3.48T  2.27T  1.22T        -         -     8%  65.0%      -  ONLINE  
  wwn-0x55cd2e41524cdbcf        3.48T  2.30T  1.19T        -         -     8%  66.0%      -  ONLINE  
  wwn-0x500a075122f366e4-part3  1.69T  1.67T  17.2G        -         -    50%  99.0%      -  ONLINE  
  wwn-0x500a075122f357f1-part3  1.69T  1.67T  17.2G        -         -    65%  99.0%      -  ONLINE

Event Timeline

vsellier triaged this task as Normal priority. Mar 4 2021, 6:18 PM
vsellier created this task.
vsellier renamed this task from Prepare disk replacement of granet to Prepare disk replacement on granet. Mar 5 2021, 10:59 AM

AFAICT, there's not enough space to move the data within the current hdd pool; we'll need to ask DSI to install the first two disks before shuffling the data around.

We'll want to replace one of the 8TB pairs. They're pretty much balanced and have the same purchase date, so which one doesn't matter much.

Overview of the system:

  • 2 slots available (10 slots occupied out of a total of 12)
  • system installed on 2 SSD disks (wwn-0x500a075122f366e4 and wwn-0x500a075122f357f1)
  • 2 zfs pools
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd   29.1T  22.8T  6.29T        -         -    20%    78%  1.00x    ONLINE  -
ssd   10.3T  7.91T  2.44T        -         -    24%    76%  1.00x    ONLINE  -
root@granet:~# zpool status -v hdd
  pool: hdd
 state: ONLINE
  scan: scrub repaired 0B in 0 days 15:42:24 with 0 errors on Sun Feb 14 16:06:26 2021
config:

	NAME                        STATE     READ WRITE CKSUM
	hdd                         ONLINE       0     0     0
	  mirror-0                  ONLINE       0     0     0
	    scsi-35000c500ca119fcf  ONLINE       0     0     0
	    scsi-35000c500ca13ab7f  ONLINE       0     0     0
	  mirror-1                  ONLINE       0     0     0
	    scsi-35000c500ae751df3  ONLINE       0     0     0
	    scsi-35000c500ae758333  ONLINE       0     0     0
	  mirror-2                  ONLINE       0     0     0
	    scsi-35000c500ae750b2f  ONLINE       0     0     0
	    scsi-35000c500ae759873  ONLINE       0     0     0
  • disks:
root@granet:/dev/disk/by-id# lsscsi   -s       
[0:0:0:0]    disk    ATA      MTFDDAK1T9TDD    F003  /dev/sda   1.92TB  <-- SSD system
[0:0:1:0]    disk    ATA      MTFDDAK1T9TDD    F003  /dev/sdb   1.92TB  <-- SSD system
[0:0:2:0]    disk    SEAGATE  ST8000NM0185     PT54  /dev/sdc   8.00TB  <-- 7200rpm
[0:0:3:0]    disk    SEAGATE  ST8000NM0185     PT54  /dev/sdd   8.00TB  <-- 7200rpm
[0:0:4:0]    disk    SEAGATE  ST8000NM0185     PT54  /dev/sde   8.00TB  <-- 7200rpm
[0:0:5:0]    disk    SEAGATE  ST8000NM0185     PT54  /dev/sdf   8.00TB  <-- 7200rpm
[0:0:6:0]    disk    ATA      SSDSC2KB038T8R   DL67  /dev/sdg   3.84TB  <-- SSD
[0:0:7:0]    disk    ATA      SSDSC2KB038T8R   DL67  /dev/sdh   3.84TB  <-- SSD
[0:0:8:0]    disk    SEAGATE  ST16000NM010G    ESL3  /dev/sdi   16.0TB  <-- 7200rpm
[0:0:9:0]    disk    SEAGATE  ST16000NM010G    ESL3  /dev/sdj   16.0TB  <-- 7200rpm


root@granet:/dev/disk/by-id# lsscsi --long-unit
[0:0:0:0]    disk    500a075122f366e4  /dev/sda 
[0:0:1:0]    disk    500a075122f357f1  /dev/sdb 
[0:0:2:0]    disk    5000c500ae751df3  /dev/sdc 
[0:0:3:0]    disk    5000c500ae759873  /dev/sdd 
[0:0:4:0]    disk    5000c500ae758333  /dev/sde 
[0:0:5:0]    disk    5000c500ae750b2f  /dev/sdf 
[0:0:6:0]    disk    55cd2e41524cd5f1  /dev/sdg 
[0:0:7:0]    disk    55cd2e41524cdbcf  /dev/sdh 
[0:0:8:0]    disk    5000c500ca119fcf  /dev/sdi 
[0:0:9:0]    disk    5000c500ca13ab7f  /dev/sdj

So according to this, and as you have suggested, the plan can be (a consolidated sketch follows the list):

  • add 2 new disks in the last 2 available slots
  • Declare the new mirror:
zpool add hdd mirror <disk1> <disk2>
  • remove mirror-2 from the zfs pool
zpool remove hdd mirror-2
  • wait for the end of the removal
  • Ask for the replacement of the disks in slot 4 (5000c500ae759873 / Serial ZA1G3R1S) and slot 6 (5000c500ae750b2f / Serial ZA1G3H81) with the last 2 new disks
  • Add the 2 new disks to the zfs pool:
zpool add hdd mirror <new disk1> <new disk2>
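For reference, a consolidated sketch of the sequence (device names are placeholders to be filled in from /dev/disk/by-id, and the vdev name mirror-2 should be confirmed with zpool status before the removal):

# 1. add the first pair of new 16TB disks as a new mirror vdev
zpool add hdd mirror <new-disk-1> <new-disk-2>
# 2. evacuate the old 8TB mirror; its data is migrated to the remaining vdevs
zpool remove hdd mirror-2
# 3. follow the evacuation until it completes
zpool status hdd
# 4. after the old disks are physically swapped for the last 2 new ones, add the second mirror
zpool add hdd mirror <new-disk-3> <new-disk-4>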

Mail sent to the DSI to request the installation of 2 of the new disks.

2 disks were installed in the 2 remaining free slots.
They are detected by the RAID card but need to be configured in JBOD mode.
This is postponed to Thursday morning, as granet is sensitive until a demonstration on Wednesday afternoon.
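To spot the new drives and their slot numbers before switching them to JBOD, the PDList output can be filtered, for instance (a sketch; the relevant excerpts are pasted below):

megacli -PDList -a0 | grep -E "Slot Number|Firmware state|Inquiry Data"

New, not-yet-configured drives show up with the firmware state "Unconfigured(good), Spun Up".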

root@granet:/dev# megacli -EncInfo -aAll
                                     
    Number of enclosures on adapter 0 -- 1

    Enclosure 0:
    Device ID                     : 32
    Number of Slots               : 12
    Number of Power Supplies      : 0
    Number of Fans                : 0
    Number of Temperature Sensors : 0
    Number of Alarms              : 0
    Number of SIM Modules         : 1
    Number of Physical Drives     : 12   <-- it was 10 before
    Status                        : Normal
    Position                      : 1
    Connector Name                : Unavailable
    Enclosure type                : SES
    FRU Part Number               : N/A
    Enclosure Serial Number       : N/A 
    ESM Serial Number             : N/A 
    Enclosure Zoning Mode         : N/A 
    Partner Device Id             : 65535

    Inquiry data                  :
        Vendor Identification     : DP      
        Product Identification    : BP14G+EXP       
        Product Revision Level    : 2.41
        Vendor Specific           :                     


Exit Code: 0x00
# megacli -PDList -a0
...

Enclosure Device ID: 32
Slot Number: 10
Enclosure position: 1
Device Id: 10
WWN: 5000C500CB46E418
Sequence Number: 1
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS

Raw Size: 14.552 TB [0x746c00000 Sectors]
Non Coerced Size: 14.551 TB [0x746b00000 Sectors]
Coerced Size: 14.551 TB [0x746b00000 Sectors]
Sector Size:  512
Logical Sector Size:  512
Physical Sector Size:  4096
Firmware state: Unconfigured(good), Spun Up
Device Firmware Level: ESL3
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000c500cb46e419
SAS Address(1): 0x0
Connected Port Number: 0(path0) 
Inquiry Data: SEAGATE ST16000NM010G   ESL3ZL2CFQZB            
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 12.0Gb/s 
Link Speed: 12.0Gb/s 
Media Type: Hard Disk Device
Drive Temperature :29C (84.20 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 12.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: 12.0Gb/s 
Drive has flagged a S.M.A.R.T alert : No



Enclosure Device ID: 32
Slot Number: 11
Enclosure position: 1
Device Id: 11
WWN: 5000C500CB46DA48
Sequence Number: 1
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS

Raw Size: 14.552 TB [0x746c00000 Sectors]
Non Coerced Size: 14.551 TB [0x746b00000 Sectors]
Coerced Size: 14.551 TB [0x746b00000 Sectors]
Sector Size:  512
Logical Sector Size:  512
Physical Sector Size:  4096
Firmware state: Unconfigured(good), Spun Up
Device Firmware Level: ESL3
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000c500cb46da49
SAS Address(1): 0x0
Connected Port Number: 0(path0) 
Inquiry Data: SEAGATE ST16000NM010G   ESL3ZL2CFGEG            
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 12.0Gb/s 
Link Speed: 12.0Gb/s 
Media Type: Hard Disk Device
Drive Temperature :29C (84.20 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 12.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: 12.0Gb/s 
Drive has flagged a S.M.A.R.T alert : No

The command to configure the drives should be:

megacli -PDMakeJBOD -physdrv [32:10,32:11]
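In practice the adapter index also has to be given; with both new drives in enclosure 32, this would be something like (a sketch, assuming adapter 0 as in the outputs above):

megacli -PDMakeJBOD -PhysDrv[32:10,32:11] -a0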
vsellier changed the task status from Open to Work in Progress. Mar 18 2021, 3:19 PM
vsellier moved this task from Backlog to in-progress on the System administration board.

Disks configured in JBOD mode:

root@granet:~# megacli -PDMakeJBOD  -physdrv[32:10] -a0
                                     
Adapter: 0: EnclId-32 SlotId-10 state changed to JBOD.

Exit Code: 0x00
root@granet:~# megacli -PDMakeJBOD  -physdrv[32:11] -a0
                                     
Adapter: 0: EnclId-32 SlotId-11 state changed to JBOD.

Exit Code: 0x00

They are detected by the system:

[8168904.389863] megaraid_sas 0000:18:00.0: scanning for scsi0...
[8168904.391527] scsi 0:0:10:0: Direct-Access     SEAGATE  ST16000NM010G    ESL3 PQ: 0 ANSI: 7
[8168904.407497] sd 0:0:10:0: Attached scsi generic sg10 type 0
[8168904.412843] sd 0:0:10:0: [sdk] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
[8168904.412847] sd 0:0:10:0: [sdk] 4096-byte physical blocks
[8168904.413807] sd 0:0:10:0: [sdk] Write Protect is off
[8168904.413810] sd 0:0:10:0: [sdk] Mode Sense: df 00 10 08
[8168904.415651] sd 0:0:10:0: [sdk] Write cache: disabled, read cache: enabled, supports DPO and FUA
[8168904.452627] sd 0:0:10:0: [sdk] Attached SCSI disk
[8168910.391899] megaraid_sas 0000:18:00.0: scanning for scsi0...
[8168910.393507] scsi 0:0:11:0: Direct-Access     SEAGATE  ST16000NM010G    ESL3 PQ: 0 ANSI: 7
[8168910.401234] sd 0:0:11:0: Attached scsi generic sg11 type 0
[8168910.406486] sd 0:0:11:0: [sdl] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
[8168910.406490] sd 0:0:11:0: [sdl] 4096-byte physical blocks
[8168910.407456] sd 0:0:11:0: [sdl] Write Protect is off
[8168910.407459] sd 0:0:11:0: [sdl] Mode Sense: df 00 10 08
[8168910.409226] sd 0:0:11:0: [sdl] Write cache: disabled, read cache: enabled, supports DPO and FUA
[8168910.444798] sd 0:0:11:0: [sdl] Attached SCSI disk

And added to the hdd zfs pool:

  • before
root@granet:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd   29.1T  22.8T  6.29T        -         -    20%    78%  1.00x    ONLINE  -
ssd   10.3T  7.91T  2.44T        -         -    24%    76%  1.00x    ONLINE  -
  • configuration
root@granet:~# ls -l /dev/disk/by-id | grep -e "wwn.*sdk" -e "wwn.*sdl"
lrwxrwxrwx 1 root root  9 Mar 19 10:10 wwn-0x5000c500cb46da4b -> ../../sdl
lrwxrwxrwx 1 root root  9 Mar 19 10:09 wwn-0x5000c500cb46e41b -> ../../sdk
root@granet:~# zpool add hdd mirror wwn-0x5000c500cb46da4b wwn-0x5000c500cb46e41b
root@granet:~# zpool status hdd
  pool: hdd
 state: ONLINE
  scan: scrub repaired 0B in 0 days 17:39:22 with 0 errors on Sun Mar 14 18:03:23 2021
config:

	NAME                        STATE     READ WRITE CKSUM
	hdd                         ONLINE       0     0     0
	  mirror-0                  ONLINE       0     0     0
	    scsi-35000c500ca119fcf  ONLINE       0     0     0
	    scsi-35000c500ca13ab7f  ONLINE       0     0     0
	  mirror-1                  ONLINE       0     0     0
	    scsi-35000c500ae751df3  ONLINE       0     0     0
	    scsi-35000c500ae758333  ONLINE       0     0     0
	  mirror-2                  ONLINE       0     0     0
	    scsi-35000c500ae750b2f  ONLINE       0     0     0
	    scsi-35000c500ae759873  ONLINE       0     0     0
	  mirror-3                  ONLINE       0     0     0
	    wwn-0x5000c500cb46da4b  ONLINE       0     0     0
	    wwn-0x5000c500cb46e41b  ONLINE       0     0     0

errors: No known data errors
root@granet:~# zpool list -v
NAME                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd                             43.6T  22.8T  20.8T        -         -    13%    52%  1.00x    ONLINE  -
  mirror                        14.5T  10.5T  4.08T        -         -    18%  72.0%      -  ONLINE  
    scsi-35000c500ca119fcf          -      -      -        -         -      -      -      -  ONLINE  
    scsi-35000c500ca13ab7f          -      -      -        -         -      -      -      -  ONLINE  
  mirror                        7.27T  6.13T  1.14T        -         -    23%  84.4%      -  ONLINE  
    scsi-35000c500ae751df3          -      -      -        -         -      -      -      -  ONLINE  
    scsi-35000c500ae758333          -      -      -        -         -      -      -      -  ONLINE  
  mirror                        7.27T  6.19T  1.08T        -         -    25%  85.2%      -  ONLINE  
    scsi-35000c500ae750b2f          -      -      -        -         -      -      -      -  ONLINE  
    scsi-35000c500ae759873          -      -      -        -         -      -      -      -  ONLINE  
  mirror                        14.5T   284K  14.5T        -         -     0%  0.00%      -  ONLINE  
    wwn-0x5000c500cb46da4b          -      -      -        -         -      -      -      -  ONLINE  
    wwn-0x5000c500cb46e41b          -      -      -        -         -      -      -      -  ONLINE  
ssd                             10.3T  7.91T  2.44T        -         -    24%    76%  1.00x    ONLINE  -
  wwn-0x55cd2e41524cd5f1        3.48T  2.27T  1.22T        -         -     8%  65.0%      -  ONLINE  
  wwn-0x55cd2e41524cdbcf        3.48T  2.30T  1.19T        -         -     8%  66.0%      -  ONLINE  
  wwn-0x500a075122f366e4-part3  1.69T  1.67T  17.2G        -         -    50%  99.0%      -  ONLINE  
  wwn-0x500a075122f357f1-part3  1.69T  1.67T  17.2G        -         -    65%  99.0%      -  ONLINE
  • pool extended:
root@granet:~# zpool list      
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd   43.6T  22.8T  20.8T        -         -    13%    52%  1.00x    ONLINE  -
ssd   10.3T  7.91T  2.44T        -         -    24%    76%  1.00x    ONLINE  -

Removing mirror-2, which contains the disks scsi-35000c500ae750b2f and scsi-35000c500ae759873:

root@granet:~# zpool remove hdd mirror-2
root@granet:~# zpool status hdd
  pool: hdd
 state: ONLINE
  scan: scrub repaired 0B in 0 days 17:39:22 with 0 errors on Sun Mar 14 18:03:23 2021
remove: Evacuation of mirror in progress since Fri Mar 19 10:45:22 2021
    1.03G copied out of 6.19T at 118M/s, 0.02% done, 15h18m to go
config:

	NAME                        STATE     READ WRITE CKSUM
	hdd                         ONLINE       0     0     0
	  mirror-0                  ONLINE       0     0     0
	    scsi-35000c500ca119fcf  ONLINE       0     0     0
	    scsi-35000c500ca13ab7f  ONLINE       0     0     0
	  mirror-1                  ONLINE       0     0     0
	    scsi-35000c500ae751df3  ONLINE       0     0     0
	    scsi-35000c500ae758333  ONLINE       0     0     0
	  mirror-2                  ONLINE       0     0     0
	    scsi-35000c500ae750b2f  ONLINE       0     0     0
	    scsi-35000c500ae759873  ONLINE       0     0     0
	  mirror-3                  ONLINE       0     0     0
	    wwn-0x5000c500cb46da4b  ONLINE       0     0     0
	    wwn-0x5000c500cb46e41b  ONLINE       0     0     0

errors: No known data errors
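While the evacuation runs, progress can be followed by polling the status, or by blocking on the remove activity where zpool wait is available (a sketch; the latter needs a recent OpenZFS):

# re-check the evacuation progress every minute
watch -n 60 zpool status hdd
# or block until the device removal has finished (OpenZFS >= 2.0)
zpool wait -t remove hdd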

The disks to remove will be:

root@granet:~# ls -l /dev/disk/by-id/ | grep -e 'scsi-35000c500ae750b2f ' -e 'scsi-35000c500ae759873 '
lrwxrwxrwx 1 root root  9 Dec 14 20:57 scsi-35000c500ae750b2f -> ../../sdf
lrwxrwxrwx 1 root root  9 Dec 14 20:57 scsi-35000c500ae759873 -> ../../sdd

To identify the disks to replace, the front LED can be activated via the iDRAC interface.
The disks are:
scsi-35000c500ae759873 | /dev/sdd: Serial ZA1G3R1S -> Physical Disk 0:1:3
scsi-35000c500ae750b2f | /dev/sdf: Serial ZA1G3H81 -> Physical Disk 0:1:5
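As an extra check before pulling the drives, the serial numbers can also be read from the OS, assuming smartmontools is installed (a sketch):

smartctl -i /dev/sdd | grep -i serial
smartctl -i /dev/sdf | grep -i serial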

The 2 remaining disks were inserted in place of the old sdd and sdf disks.
They needed to be configured in JBOD mode:

root@granet:~# megacli -PDMakeJBOD  -physdrv[32:3] -a0
                                     
Adapter: 0: EnclId-32 SlotId-3 state changed to JBOD.

Exit Code: 0x00
root@granet:~# megacli -PDMakeJBOD  -physdrv[32:5] -a0
                                     
Adapter: 0: EnclId-32 SlotId-5 state changed to JBOD.

Exit Code: 0x00

After that they were correctly detected by the system:

[8525429.881267] megaraid_sas 0000:18:00.0: scanning for scsi0...
[8525429.882904] scsi 0:0:3:0: Direct-Access     SEAGATE  ST16000NM010G    ESL3 PQ: 0 ANSI: 7
[8525429.895344] sd 0:0:3:0: Attached scsi generic sg3 type 0
[8525429.900699] sd 0:0:3:0: [sdd] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
[8525429.900703] sd 0:0:3:0: [sdd] 4096-byte physical blocks
[8525429.901718] sd 0:0:3:0: [sdd] Write Protect is off
[8525429.901722] sd 0:0:3:0: [sdd] Mode Sense: df 00 10 08
[8525429.903510] sd 0:0:3:0: [sdd] Write cache: disabled, read cache: enabled, supports DPO and FUA
[8525429.944179] sd 0:0:3:0: [sdd] Attached SCSI disk
[8525436.724068] megaraid_sas 0000:18:00.0: scanning for scsi0...
[8525436.725886] scsi 0:0:5:0: Direct-Access     SEAGATE  ST16000NM010G    ESL3 PQ: 0 ANSI: 7
[8525436.737679] sd 0:0:5:0: Attached scsi generic sg5 type 0
[8525436.743116] sd 0:0:5:0: [sdf] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
[8525436.743121] sd 0:0:5:0: [sdf] 4096-byte physical blocks
[8525436.744082] sd 0:0:5:0: [sdf] Write Protect is off
[8525436.744085] sd 0:0:5:0: [sdf] Mode Sense: df 00 10 08
[8525436.745861] sd 0:0:5:0: [sdf] Write cache: disabled, read cache: enabled, supports DPO and FUA
[8525436.786992] sd 0:0:5:0: [sdf] Attached SCSI disk

Their WWNs and serial numbers are:

wwn-0x5000c500cb4677e7 / ZL2CFHNC
wwn-0x5000c500cb469787 / ZL2CFRTS
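Before adding them, the stable by-id names can be double-checked against the new sdd/sdf devices, as was done for the first pair (a sketch):

ls -l /dev/disk/by-id/ | grep -e "wwn.*sdd" -e "wwn.*sdf"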

And finally they can be added to the zfs pool:

  • before:
root@granet:/dev/disk/by-id# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd   36.4T  22.8T  13.6T        -         -    18%    62%  1.00x    ONLINE  -
ssd   10.3T  6.70T  3.65T        -         -    15%    64%  1.00x    ONLINE  -
  • Adding them to the pool:
root@granet:/dev/disk/by-id# zpool add hdd mirror wwn-0x5000c500cb4677e7 wwn-0x5000c500cb469787
  • Pool status after:
root@granet:/dev/disk/by-id# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd   50.9T  22.8T  28.1T        -         -    13%    44%  1.00x    ONLINE  -
ssd   10.3T  6.70T  3.65T        -         -    15%    64%  1.00x    ONLINE  -

root@granet:/dev/disk/by-id# zpool status hdd
  pool: hdd
 state: ONLINE
  scan: scrub repaired 0B in 0 days 17:39:22 with 0 errors on Sun Mar 14 18:03:23 2021
remove: Removal of vdev 2 copied 6.19T in 11h24m, completed on Fri Mar 19 22:10:16 2021
    48.1M memory used for removed device mappings
config:

	NAME                        STATE     READ WRITE CKSUM
	hdd                         ONLINE       0     0     0
	  mirror-0                  ONLINE       0     0     0
	    scsi-35000c500ca119fcf  ONLINE       0     0     0
	    scsi-35000c500ca13ab7f  ONLINE       0     0     0
	  mirror-1                  ONLINE       0     0     0
	    scsi-35000c500ae751df3  ONLINE       0     0     0
	    scsi-35000c500ae758333  ONLINE       0     0     0
	  mirror-3                  ONLINE       0     0     0
	    wwn-0x5000c500cb46da4b  ONLINE       0     0     0
	    wwn-0x5000c500cb46e41b  ONLINE       0     0     0
	  mirror-4                  ONLINE       0     0     0
	    wwn-0x5000c500cb4677e7  ONLINE       0     0     0
	    wwn-0x5000c500cb469787  ONLINE       0     0     0