Add a new RAID array to m2 using storcli64

This tutorial is designed to walk you through adding a new bank of drives as a RAID array, and then using those to create a new DRBD-backed volume group for hosting servers.

In our example, we will be adding 3x SATA SSDs in a simple RAID level 5 array.

Note: Physically install the disks into the nodes before proceeding.

Finding the Disks

Note: This assumes the controller ID is '0'.

The newly added disks will be in the UGood (unconfigured, good) state. We can use this to list their enclosure and slot numbers;

storcli64 /c0 /eall /sall show all | grep UGood | grep -v '|'
8:6      21 UGood -  446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
8:7      22 UGood -  446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
8:8      19 UGood -  446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U

Here we see the three drives in positions 8:6, 8:7, and 8:8.
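If you prefer a one-step filter, the same enclosure:slot list can be pulled out with awk. This is just a convenience, equivalent to the grep pipeline above;

# Print only the EID:Slt column for drives in the UGood state
storcli64 /c0 /eall /sall show all | awk '/UGood/ && !/\|/ {print $1}'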

We want to create a RAID level 5 array, which storcli64 takes as r5 or RAID5. Knowing this, the command to create the array is:

storcli64 /c0 add vd type=r5 drives=8:6-8
Controller = 0
Status = Success
Description = Add VD Succeeded

Find the new VD Number

To find out what the new virtual disk's number is, we can look at the output of /c0 /vall show all. You can pipe this through less if you like, then search for the <enclosure>:<slot> number(s) used in the step above.

storcli64 /c0 /vall show all | grep -B7 -A1 -e '8:6' -e '8:7' -e '8:8'
PDs for VD 0 :
============
 
-----------------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model                      Sp 
-----------------------------------------------------------------------------------
8:6      21 Onln   1 446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
8:7      22 Onln   1 446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
8:8      19 Onln   1 446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
-----------------------------------------------------------------------------------

In our case, the new VD is 0 (note that the pre-existing VD kept its number, 1).

Tuning

Once the virtual disk is created, you can tune its caching policies and specify which virtual disk the controller boots from.
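Before changing anything, it can help to review the new virtual disk's current settings. As a quick check (using the VD number found above), the following dumps its properties, including the cache-related ones;

storcli64 /c0 /v0 show all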

Physical Disk Caching

Warning: Most disks have no protection for their cached data in the event of power loss. Enable PD caching with caution.

Enable the cache on the physical disks backing the virtual disk;

storcli64 /c0 /v0 set pdcache=on
Controller = 0
Status = Success
Description = None
 
Detailed Status :
===============
 
---------------------------------------
VD Property Value Status  ErrCd ErrMsg 
---------------------------------------
 0 PdCac    On    Success     0 -      
---------------------------------------

IO Policy Caching

Next, set the virtual disk's IO policy to cached;

storcli64 /c0 /v0 set iopolicy=cached
Controller = 0
Status = Success
Description = None
 
Detailed Status :
===============
 
----------------------------------------
VD Property Value  Status  ErrCd ErrMsg 
----------------------------------------
 0 IoPolicy Cached Success     0 -      
----------------------------------------

Set the Boot Drive

We want to make sure the original virtual disk, VD1, remains the boot device. To ensure this, we'll explicitly set bootdrive=off on VD0 and bootdrive=on on VD1.

storcli64 /c0 /v0 set bootdrive=off
Controller = 0
Status = Success
Description = None
 
Detailed Status :
===============
 
-----------------------------------------
VD Property   Value Status  ErrCd ErrMsg 
-----------------------------------------
 0 Boot Drive Off   Success     0 -      
-----------------------------------------
storcli64 /c0 /v1 set bootdrive=on
Controller = 0
Status = Success
Description = None
 
Detailed Status :
===============
 
-----------------------------------------
VD Property   Value Status  ErrCd ErrMsg 
-----------------------------------------
 1 Boot Drive On    Success     0 -      
-----------------------------------------
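Depending on your storcli64 version, you should also be able to ask the controller directly which virtual disk it will boot from; treat this as an optional sanity check;

storcli64 /c0 show bootdrive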

Partition the new VD

In our case, the new VD0 disk came up on the host OS as /dev/sdb. We'll use parted to create a GPT disk label and then a single partition covering the whole disk.

parted -a opt /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt                                                      
(parted) mkpart primary 0% 100%                                           
(parted) print free
Model: FTS PRAID EP420i (scsi)
Disk /dev/sdb: 959GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
 
Number  Start   End     Size    File system  Name     Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  959GB   959GB                primary
        959GB   959GB   1032kB  Free Space
(parted) q
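If you would rather script this step, the same label and partition can be created non-interactively. This is a sketch of the equivalent one-liner; double-check the target device before running it;

parted --script -a opt /dev/sdb mklabel gpt mkpart primary 0% 100%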

Configure DRBD Resource

On one node, copy /etc/drbd.d/r0.res to /etc/drbd.d/r1.res, then edit the copy to update the resource name, DRBD device and minor number, backing disk, and TCP port. The changes should look like this;

diff -U0 /etc/drbd.d/r0.res /etc/drbd.d/r1.res
--- /etc/drbd.d/r0.res	2020-04-15 14:23:23.623428249 -0400
+++ /etc/drbd.d/r1.res	2020-07-10 18:56:44.537891604 -0400
@@ -4 +4 @@
-resource r0 {
+resource r1 {
@@ -8 +8 @@
-			device       /dev/drbd0 minor 0;
+			device       /dev/drbd1 minor 1;
@@ -12 +12 @@
-			disk         /dev/sda5;
+			disk         /dev/sdb1;
@@ -26 +26 @@
-		address          ipv4 10.10.40.1:7788;
+		address          ipv4 10.10.40.1:7789;
@@ -30,2 +30,2 @@
-			device       /dev/drbd0 minor 0;
-			disk         /dev/sda5;
+			device       /dev/drbd1 minor 1;
+			disk         /dev/sdb1;
@@ -34 +34 @@
-		address          ipv4 10.10.40.2:7788;
+		address          ipv4 10.10.40.2:7789;
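Because the edits are mechanical, the new resource file can also be generated from the old one with sed. This is only a rough sketch; the substitutions are plain text replacements, so review the resulting r1.res before using it;

sed -e 's/resource r0/resource r1/' \
    -e 's|/dev/drbd0 minor 0|/dev/drbd1 minor 1|' \
    -e 's|/dev/sda5|/dev/sdb1|' \
    -e 's/:7788/:7789/' \
    /etc/drbd.d/r0.res > /etc/drbd.d/r1.res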

Verify that there is no problem by running drbdadm dump. Assuming it parses cleanly, rsync the new file to the peer;

rsync -av /etc/drbd.d/r1.res root@an-a05n02:/etc/drbd.d/
sending incremental file list
r1.res
 
sent 1708 bytes  received 31 bytes  3478.00 bytes/sec
total size is 1634  speedup is 0.94

Create the metadata on both nodes;

drbdadm create-md r1
initializing activity log
initializing bitmap (28584 KB) to all zero
ioctl(/dev/sdb1, BLKZEROOUT, [959088480256, 29270016]) failed: Inappropriate ioctl for device
Using slow(er) fallback.
100%
Writing meta data...
New drbd meta data block successfully created.
success

Start the resource on both nodes;

drbdadm up r1
Warning: The next command must be run carefully! If it is run on the wrong resource or on the wrong node, you could lose data.

On one node only, force the resource to Primary to begin the initial synchronization;

drbdadm primary r1 --force

On the other node, set the resource to Primary as well;

drbdadm primary r1

You can now watch cat /proc/drbd to see the initial synchronization run.

cat /proc/drbd
GIT-hash: d232dfd1701f911288729c9144056bd21856e6f6 README.md build by root@rhel6-builder-production.alteeve.ca, 2020-01-28 00:30:13
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:40679264 nr:1332 dw:40680564 dr:9570892 al:2445 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:15346440 nr:0 dw:0 dr:15353752 al:8 bm:0 lo:0 pe:15 ua:26 ap:0 ep:1 wo:d oos:921266996
        [>....................] sync'ed:  1.7% (899672/914656)M
        finish: 2:10:23 speed: 117,744 (111,180) K/sec
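If you would like to follow the progress live instead of re-running cat, you can wrap it in watch;

watch -n 2 cat /proc/drbd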

Configure Clustered LVM

Note: From here forward, commands are run on one node only.

Create the LVM physical volume.

pvcreate /dev/drbd1
  Physical volume "/dev/drbd1" successfully created

Create the volume group;

vgcreate nr-a04n01_vg1 /dev/drbd1
  Clustered volume group "nr-a04n01_vg1" successfully created
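You can confirm that the new volume group is visible with;

vgs nr-a04n01_vg1

Being a clustered volume group, it should show up on the peer node as well.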

Done!

 
