= Add a new RAID array to m2 using storcli64 =

This tutorial walks you through adding a new bank of drives as a RAID array, and then using that array to create a new DRBD-backed volume group for hosting servers.

In our example, we will be adding 3x SATA SSDs in a simple RAID level 5 array.
{{note|1=Physically install the disks into the nodes before proceeding.}}

= Finding the Disks =

{{note|1=This assumes the controller ID is '0'.}}

The newly added disks will be in the state <span class="code">UGood</span> (unconfigured good). We can use this to list their enclosure and slot numbers;

<syntaxhighlight lang="bash">
storcli64 /c0 /eall /sall show all | grep UGood | grep -v '|'
</syntaxhighlight>
<syntaxhighlight lang="text">
8:6      21 UGood -  446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U
8:7      22 UGood -  446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U
8:8      19 UGood -  446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U
</syntaxhighlight>

Here we see the three drives in positions <span class="code">8:6</span>, <span class="code">8:7</span>, and <span class="code">8:8</span>.

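If you want a closer look at a given drive before using it, you can query it directly. For example, for enclosure <span class="code">8</span>, slot <span class="code">6</span>;

<syntaxhighlight lang="bash">
# Show the full details (state, capacity, firmware, etc) for one physical drive.
storcli64 /c0 /e8 /s6 show all
</syntaxhighlight>
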
We want to create a RAID level 5 array, which <span class="code">storcli64</span> takes as <span class="code">r5</span> or <span class="code">RAID5</span>. Knowing this, the command to create the array is:

<syntaxhighlight lang="bash">
storcli64 /c0 add vd type=r5 drives=8:6-8
</syntaxhighlight>
<syntaxhighlight lang="text">
Controller = 0
Status = Success
Description = Add VD Succeeded
</syntaxhighlight>

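The <span class="code">add vd</span> command also accepts tuning options at creation time, so the cache policies set later in this tutorial could be applied here instead. A sketch, assuming the same drives (the <span class="code">strip=256</span> value is purely illustrative; check your controller's documentation for supported sizes);

<syntaxhighlight lang="bash">
# Create the same RAID 5 VD with write-back cache, read-ahead and a 256KB strip size.
storcli64 /c0 add vd type=r5 drives=8:6-8 wb ra cached strip=256
</syntaxhighlight>
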
== Find the new VD Number ==

To find the number assigned to the new virtual disk, we can look using <span class="code">/c0 /vall show all</span>. You can pipe this through <span class="code">less</span> if you like, then search for the <span class="code"><enclosure>:<slot></span> number(s) used in the step above.

<syntaxhighlight lang="bash">
storcli64 /c0 /vall show all | grep -B7 -A1 -e '8:6' -e '8:7' -e '8:8'
</syntaxhighlight>
<syntaxhighlight lang="text">
PDs for VD 0 :
============

-----------------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model                      Sp 
-----------------------------------------------------------------------------------
8:6      21 Onln   1 446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
8:7      22 Onln   1 446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
8:8      19 Onln   1 446.625 GB SATA SSD N   N  512B SAMSUNG MZ7KH480HAHQ-00005 U  
-----------------------------------------------------------------------------------
</syntaxhighlight>

In our case, the new VD is <span class="code">0</span>, even though a VD already existed as <span class="code">1</span>. Don't assume the new array simply gets the next number up.

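Knowing the number, you can confirm you're looking at the right virtual disk before tuning it;

<syntaxhighlight lang="bash">
# Summarize virtual disk 0 on controller 0; the RAID level and size should match.
storcli64 /c0 /v0 show
</syntaxhighlight>
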
= Tuning =

Once the virtual disk is created, you can tune its caching policies and specify which virtual disk is used for booting.

== Physical Disk Caching ==

{{warning|1=Most disks have no protection for their cached data in the event of power loss. Enable PD caching with caution.}}

Enable the write cache on the physical disks backing the virtual disk;

<syntaxhighlight lang="bash">
storcli64 /c0 /v0 set pdcache=on
</syntaxhighlight>
<syntaxhighlight lang="text">
Controller = 0
Status = Success
Description = None

Detailed Status :
===============

---------------------------------------
VD Property Value Status ErrCd ErrMsg 
---------------------------------------
 0 PdCac    On    Success     0 -     
---------------------------------------
</syntaxhighlight>

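You can verify the change by checking the virtual disk's properties;

<syntaxhighlight lang="bash">
# Filter the VD's full property list down to its cache-related settings.
storcli64 /c0 /v0 show all | grep -i cache
</syntaxhighlight>
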
== IO Policy Caching ==

Next, set the IO policy so that reads are buffered in the controller's cache;

<syntaxhighlight lang="bash">
storcli64 /c0 /v0 set iopolicy=cached
</syntaxhighlight>
<syntaxhighlight lang="text">
Controller = 0
Status = Success
Description = None

Detailed Status :
===============

----------------------------------------
VD Property Value  Status ErrCd ErrMsg 
----------------------------------------
 0 IoPolicy Cached Success     0 -     
----------------------------------------
</syntaxhighlight>

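The read-ahead policy can be adjusted the same way. This wasn't done in this build, but as an example;

<syntaxhighlight lang="bash">
# Enable controller read-ahead on virtual disk 0 (use 'rdcache=nora' to disable).
storcli64 /c0 /v0 set rdcache=ra
</syntaxhighlight>
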
= Set the Boot Drive =

We want to make sure the original virtual drive, <span class="code">VD1</span>, is the boot device. To ensure this, we'll explicitly set <span class="code">VD0</span> to <span class="code">off</span> and <span class="code">VD1</span> to <span class="code">on</span>.

<syntaxhighlight lang="bash">
storcli64 /c0 /v0 set bootdrive=off
</syntaxhighlight>
<syntaxhighlight lang="text">
Controller = 0
Status = Success
Description = None

Detailed Status :
===============

-----------------------------------------
VD Property   Value Status ErrCd ErrMsg 
-----------------------------------------
 0 Boot Drive Off   Success     0 -     
-----------------------------------------
</syntaxhighlight>

<syntaxhighlight lang="bash">
storcli64 /c0 /v1 set bootdrive=on
</syntaxhighlight>
<syntaxhighlight lang="text">
Controller = 0
Status = Success
Description = None

Detailed Status :
===============

-----------------------------------------
VD Property   Value Status ErrCd ErrMsg 
-----------------------------------------
 1 Boot Drive On    Success     0 -     
-----------------------------------------
</syntaxhighlight>

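To double-check which virtual disk the controller will now boot from;

<syntaxhighlight lang="bash">
# Report the controller's currently configured boot virtual disk.
storcli64 /c0 show bootdrive
</syntaxhighlight>
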
= Partition the new VD =

In our case, the new <span class="code">VD0</span> disk came up on the host OS as <span class="code">/dev/sdb</span>. We'll need to create a disk label and then a single partition using <span class="code">parted</span>.

<syntaxhighlight lang="bash">
parted -a opt /dev/sdb
</syntaxhighlight>
<syntaxhighlight lang="text">
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
</syntaxhighlight>

<syntaxhighlight lang="text">
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print free
</syntaxhighlight>
<syntaxhighlight lang="text">
Model: FTS PRAID EP420i (scsi)
Disk /dev/sdb: 959GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  959GB   959GB                primary
        959GB   959GB   1032kB  Free Space
</syntaxhighlight>

<syntaxhighlight lang="text">
(parted) q
</syntaxhighlight>

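The same partitioning can be done non-interactively, which is handy given the partition needs to exist on both nodes (DRBD's backing disk is <span class="code">/dev/sdb1</span> on each). A sketch of the equivalent one-liner;

<syntaxhighlight lang="bash">
# Create the GPT label and one partition spanning the disk, without prompting.
parted --script -a opt /dev/sdb mklabel gpt mkpart primary 0% 100%
</syntaxhighlight>
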
= Configure DRBD Resource =

On one node, copy <span class="code">/etc/drbd.d/r0.res</span> to <span class="code">/etc/drbd.d/r1.res</span>, then make the following changes, shown here as a unified diff;

<syntaxhighlight lang="bash">
diff -U0 /etc/drbd.d/r0.res /etc/drbd.d/r1.res
</syntaxhighlight>
<syntaxhighlight lang="diff">
--- /etc/drbd.d/r0.res	2020-04-15 14:23:23.623428249 -0400
+++ /etc/drbd.d/r1.res	2020-07-10 18:56:44.537891604 -0400
@@ -4 +4 @@
-resource r0 {
+resource r1 {
@@ -8 +8 @@
-	device		/dev/drbd0 minor 0;
+	device		/dev/drbd1 minor 1;
@@ -12 +12 @@
-	disk		/dev/sda5;
+	disk		/dev/sdb1;
@@ -26 +26 @@
-	address		ipv4 10.10.40.1:7788;
+	address		ipv4 10.10.40.1:7789;
@@ -30,2 +30,2 @@
-	device		/dev/drbd0 minor 0;
-	disk		/dev/sda5;
+	device		/dev/drbd1 minor 1;
+	disk		/dev/sdb1;
@@ -34 +34 @@
-	address		ipv4 10.10.40.2:7788;
+	address		ipv4 10.10.40.2:7789;
</syntaxhighlight>

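Equivalently, the new file can be generated with a few substitutions; a minimal sketch, assuming your <span class="code">r0.res</span> matches the layout shown above:

<syntaxhighlight lang="bash">
# Derive r1.res from r0.res; bump the resource name, device minor number,
# backing disk and TCP port. Review the result before using it!
sed -e 's/resource r0/resource r1/' \
    -e 's|/dev/drbd0 minor 0|/dev/drbd1 minor 1|' \
    -e 's|/dev/sda5|/dev/sdb1|' \
    -e 's/:7788/:7789/' \
    /etc/drbd.d/r0.res > /etc/drbd.d/r1.res
</syntaxhighlight>
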
Verify that the new file parses cleanly by running <span class="code">drbdadm dump</span>. Assuming there is no problem, <span class="code">rsync</span> the file to the peer;

<syntaxhighlight lang="bash">
rsync -av /etc/drbd.d/r1.res root@an-a05n02:/etc/drbd.d/
</syntaxhighlight>
<syntaxhighlight lang="text">
sending incremental file list
r1.res

sent 1708 bytes  received 31 bytes  3478.00 bytes/sec
total size is 1634  speedup is 0.94
</syntaxhighlight>

Create the metadata on both nodes;
<syntaxhighlight lang="bash">
drbdadm create-md r1
</syntaxhighlight>
<syntaxhighlight lang="text">
initializing activity log
initializing bitmap (28584 KB) to all zero
ioctl(/dev/sdb1, BLKZEROOUT, [959088480256, 29270016]) failed: Inappropriate ioctl for device
Using slow(er) fallback.
100%
Writing meta data...
New drbd meta data block successfully created.
success
</syntaxhighlight>

The <span class="code">BLKZEROOUT</span> ioctl failure is harmless; as the output says, DRBD simply falls back to a slower way of zeroing out the bitmap.

Start the resource on both nodes;
<syntaxhighlight lang="bash">
drbdadm up r1
</syntaxhighlight>

{{warning|1=The next command must be run carefully! If run on the wrong resource or on the wrong node, you could lose data.}}

'''On one node only'''; force the resource to <span class="code">Primary</span> to begin synchronization;

<syntaxhighlight lang="bash">
drbdadm primary r1 --force
</syntaxhighlight>

'''On the other node'''; set the resource to <span class="code">Primary</span> as well.

<syntaxhighlight lang="bash">
drbdadm primary r1
</syntaxhighlight>

You can now watch <span class="code">/proc/drbd</span> to see the initial synchronization run.

<syntaxhighlight lang="bash">
cat /proc/drbd
</syntaxhighlight>
<syntaxhighlight lang="text">
GIT-hash: d232dfd1701f911288729c9144056bd21856e6f6 README.md build by root@rhel6-builder-production.alteeve.ca, 2020-01-28 00:30:13

 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:40679264 nr:1332 dw:40680564 dr:9570892 al:2445 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:15346440 nr:0 dw:0 dr:15353752 al:8 bm:0 lo:0 pe:15 ua:26 ap:0 ep:1 wo:d oos:921266996
        [>....................] sync'ed:  1.7% (899672/914656)M
        finish: 2:10:23 speed: 117,744 (111,180) K/sec
</syntaxhighlight>

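To monitor the progress continuously, rather than re-running the command by hand;

<syntaxhighlight lang="bash">
# Re-display the DRBD status every two seconds; exit with ctrl+c.
watch -n 2 cat /proc/drbd
</syntaxhighlight>
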
= Configure Clustered LVM =

{{note|1=From here forward, commands are run on one node only.}}

Create the LVM physical volume on the new DRBD device;

<syntaxhighlight lang="bash">
pvcreate /dev/drbd1
</syntaxhighlight>
Physical volume "/dev/drbd1" successfully created
Create the volume group;
<syntaxhighlight lang="bash">
vgcreate nr-a04n01_vg1 /dev/drbd1
</syntaxhighlight>
<syntaxhighlight lang="text">
  Clustered volume group "nr-a04n01_vg1" successfully created
</syntaxhighlight>

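From here, logical volumes for new servers can be carved out of the volume group as usual. A hypothetical example (the LV name and size are placeholders only);

<syntaxhighlight lang="bash">
# Create a 100 GiB logical volume on the new DRBD-backed volume group.
lvcreate -L 100G -n an_example_srv nr-a04n01_vg1
</syntaxhighlight>
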
Done!