2x5 Scalable Cluster Tutorial

From Alteeve Wiki


Warning: This tutorial is not even close to complete or accurate. It will be updated later, but so long as this warning is here, consider it defective and unusable. The only up to date clustering tutorial is: Red Hat Cluster Service 2 Tutorial.

The Design

All nodes have IPs as follows:
 * eth0 == Internet Facing Network == 192.168.1.x
 * eth1 == Storage Network         == 192.168.2.x
 * eth2 == Back Channel Network    == 192.168.3.x
   * Where 'x' = the node ID (i.e. an-node01 -> x=1)

 * If a node has an IPMI (or similar) interface piggy-backed on a network
   interface, it will be shared with eth2. If it has a dedicated interface, it
   will be connected to the BCN.
 * Node management interfaces will be on 192.168.3.(x+100)
 * All subnets are /24 (255.255.255.0)
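The addressing scheme above can be expressed as a quick shell computation. This is purely illustrative; the node_id variable is not part of the cluster's configuration:

```shell
# Derive a node's three IPs from its node ID, per the scheme above:
# eth0 -> 192.168.1.x, eth1 -> 192.168.2.x, eth2 -> 192.168.3.x
node_id=1   # an-node01 -> x=1
for net in 1 2 3; do
    echo "eth$((net - 1)) -> 192.168.${net}.${node_id}"
done
# Management (IPMI or similar) interfaces land on the BCN at x+100:
echo "ipmi -> 192.168.3.$((node_id + 100))"
```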

 Storage nodes use 2x SATA drives ('sda' and 'sdb') plus 2x SSD drives ('sdc' 
 and 'sdd').

 Logical map:
  ___________________________________________                     ___________________________________________ 
 | [ an-node01 ]                       ______|                   |______                       [ an-node02 ] |
 |  ______    _____    _______        | eth0 =------\     /------= eth0 |        _______    _____    ______  |
 | [_sda1_]--[_md0_]--[_/boot_]       |_____||      |     |      ||_____|       [_/boot_]--[_md0_]--[_sda1_] |
 | [_sdb1_]                                  |      |     |      |                                  [_sdb1_] |
 |  ______    _____    ______          ______|      |     |      |______          ______    _____    ______  |
 | [_sda2_]--[_md1_]--[_swap_]   /----| eth1 =----\ |     | /----= eth1 |----\   [_swap_]--[_md1_]--[_sda2_] |
 | [_sdb2_]                      | /--|_____||    | |     | |    ||_____|--\ |                      [_sdb2_] |
 |  ______    _____    ___       | |         |    | |     | |    |         | |       ___    _____    ______  |
 | [_sda3_]--[_md2_]--[_/_]      | |   ______|    | |     | |    |______   | |      [_/_]--[_md2_]--[_sda3_] |
 | [_sdb3_]                      | |  | eth2 =--\ | |     | | /--= eth2 |  | |                      [_sdb3_] |
 |  ______    _____    _______   | |  |_____||  | | |     | | |  ||_____|  | |   _______    _____    ______  |
 | [_sda5_]--[_md3_]--[_drbd0_]--/ |         |  | | |     | | |  |         | \--[_drbd0_]--[_md3_]--[_sda5_] |
 | [_sdb5_]                        |         |  | | |     | | |  |         |                        [_sdb5_] |
 |  ______    _____    _______     |         |  | | |     | | |  |         |     _______    _____    ______  |
 | [_sdc1_]--[_md4_]--[_drbd1_]----/         |  | | |     | | |  |         \----[_drbd1_]--[_md4_]--[_sdc1_] |
 | [_sdd1_]                                  |  | | |     | | |  |                                  [_sdd1_] |
 |___________________________________________|  | | |     | | |  |___________________________________________|
                                                | | |     | | |
                        /---------------------------/     | | |
                        |                       | |       | | |
                        | /-------------------------------/ | |
                        | |                     | |         | \-----------------------\
                        | |                     | |         |                         |
                        | |                     \-----------------------------------\ |
                        | |                       |         |                       | |
                        | |   ____________________|_________|____________________   | |
                        | |  [ iqn.2011-08.com.alteeve:an-clusterA.target01.hdd  ]  | |
                        | |  [ iqn.2011-08.com.alteeve:an-clusterA.target02.sdd  ]  | |
   _________________    | |  [    drbd0 == hdd == vg01           Floating IP     ]  | |      ___________________
  [ Internet Facing ]   | |  [____drbd1_==_sdd_==_vg02__________192.168.2.100____]  | | /---[ Internal Managed  ]
  [_____Routers_____]   | |                             | |                         | | |   [  Private Network  ]
                  |     | \-----------\                 | |                   /-----/ | |   [_and_fence_devices_]
                  |     \-----------\ |                 | |                   | /-----/ |
                  \---------------\ | |                 | |                   | | /-----/
                                 _|_|_|___________     _|_|_____________     _|_|_|___________
 [ Storage Cluster ]            [ Internet Facing ]   [ Storage Network ]   [  Back-Channel   ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[_____Network_____]~~~[_________________]~~~[_____Network_____]~~~~~~~~~~~~~~~
 [    VM Cluster   ]              | | | | |             | | | | |             | | | | |  
                                  | | | | |             | | | | |             | | | | |
  __________________________      | | | | |             | | | | |             | | | | |
 | [ an-node03 ]      ______|     | | | | |             | | | | |             | | | | |
 |                   | eth0 =-----/ | | | |             | | | | |             | | | | |
 |                   |_____||       | | | |             | | | | |             | | | | |
 |                          |       | | | |             | | | | |             | | | | |
 |                    ______|       | | | |             | | | | |             | | | | |
 |                   | eth1 =---------------------------/ | | | |             | | | | |
 |                   |_____||       | | | |               | | | |             | | | | |
 |                          |       | | | |               | | | |             | | | | |
 |                    ______|       | | | |               | | | |             | | | | |
 |                   | eth2 =-------------------------------------------------/ | | | |
 |                   |_____||       | | | |               | | | |               | | | |
 |__________________________|       | | | |               | | | |               | | | |
                                    | | | |               | | | |               | | | |
  __________________________        | | | |               | | | |               | | | |
 | [ an-node04 ]      ______|       | | | |               | | | |               | | | |
 |                   | eth0 =-------/ | | |               | | | |               | | | |
 |                   |_____||         | | |               | | | |               | | | |
 |                          |         | | |               | | | |               | | | |
 |                    ______|         | | |               | | | |               | | | |
 |                   | eth1 =-----------------------------/ | | |               | | | |
 |                   |_____||         | | |                 | | |               | | | |
 |                          |         | | |                 | | |               | | | |
 |                    ______|         | | |                 | | |               | | | |
 |                   | eth2 =---------------------------------------------------/ | | |
 |                   |_____||         | | |                 | | |                 | | |
 |__________________________|         | | |                 | | |                 | | |
                                      | | |                 | | |                 | | |
  __________________________          | | |                 | | |                 | | |
 | [ an-node05 ]      ______|         | | |                 | | |                 | | |
 |                   | eth0 =---------/ | |                 | | |                 | | |
 |                   |_____||           | |                 | | |                 | | |
 |                          |           | |                 | | |                 | | |
 |                    ______|           | |                 | | |                 | | |
 |                   | eth1 =-------------------------------/ | |                 | | |
 |                   |_____||           | |                   | |                 | | |
 |                          |           | |                   | |                 | | |
 |                    ______|           | |                   | |                 | | |
 |                   | eth2 =-----------------------------------------------------/ | |
 |                   |_____||           | |                   | |                   | |
 |__________________________|           | |                   | |                   | |
                                        | |                   | |                   | |
  __________________________            | |                   | |                   | |
 | [ an-node06 ]      ______|           | |                   | |                   | |
 |                   | eth0 =-----------/ |                   | |                   | |
 |                   |_____||             |                   | |                   | |
 |                          |             |                   | |                   | |
 |                    ______|             |                   | |                   | |
 |                   | eth1 =---------------------------------/ |                   | |
 |                   |_____||             |                     |                   | |
 |                          |             |                     |                   | |
 |                    ______|             |                     |                   | |
 |                   | eth2 =-------------------------------------------------------/ |
 |                   |_____||             |                     |                     |
 |__________________________|             |                     |                     |
                                          |                     |                     |
  __________________________              |                     |                     |
 | [ an-node07 ]      ______|             |                     |                     |
 |                   | eth0 =-------------/                     |                     |
 |                   |_____||                                   |                     |
 |                          |                                   |                     |
 |                    ______|                                   |                     |
 |                   | eth1 =-----------------------------------/                     |
 |                   |_____||                                                         |
 |                          |                                                         |
 |                    ______|                                                         |
 |                   | eth2 =---------------------------------------------------------/
 |                   |_____||
 |__________________________|

Install The Cluster Software

If you are using Red Hat Enterprise Linux, you will need to add the "RHEL Server Optional (v. 6 64-bit x86_64)" channel for each node in your cluster. You can do this in RHN by going to your subscription management page, clicking on each server, clicking on "Alter Channel Subscriptions", enabling the "RHEL Server Optional (v. 6 64-bit x86_64)" channel and then clicking on "Change Subscription".

The actual installation is simple: just use yum to install cman and the fence agents.

yum install cman fence-agents

Initial Config

With these decisions made and the information gathered, here is what our first /etc/cluster/cluster.conf file will look like.

touch /etc/cluster/cluster.conf
vim /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="1">
        <cman two_node="1" expected_votes="1" />
        <totem secauth="off" rrp_mode="none" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.com" nodeid="1">
                        <fence>
                                <method name="PDU">
                                        <device name="pdu2" action="reboot" port="1" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.com" nodeid="2">
                        <fence>
                                <method name="PDU">
                                        <device name="pdu2" action="reboot" port="2" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="pdu2" agent="fence_apc" ipaddr="192.168.1.6" login="apc" passwd="secret" />
        </fencedevices>
</cluster>

Save the file, then validate it. If it fails, address the errors and try again.

ccs_config_validate
Configuration validates

Push it to the other node:

rsync -av /etc/cluster/cluster.conf root@an-node02:/etc/cluster/
sending incremental file list
cluster.conf

sent 781 bytes  received 31 bytes  541.33 bytes/sec
total size is 701  speedup is 0.86

Warning: DO NOT PROCEED UNTIL YOUR cluster.conf FILE VALIDATES! Unless you have it perfect, your cluster will fail. Once it validates, proceed.

Starting The Cluster For The First Time

By default, if you start one node only and you've enabled the <cman two_node="1" expected_votes="1"/> option as we have done, the lone server will effectively gain quorum. It will try to connect to the cluster, but there won't be a cluster to connect to, so it will fence the other node after a timeout period. This timeout is 6 seconds by default.

For now, we will leave the default as it is. If you're interested in changing it though, the argument you are looking for is post_join_delay.

This behaviour means that we'll want to start both nodes well within six seconds of one another, lest the slower one be needlessly fenced.
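For reference, post_join_delay lives on the fence_daemon element in cluster.conf. This fragment is only a sketch (the value 30 is arbitrary), and remember to increment config_version whenever you edit the file:

```xml
<cluster name="an-clusterA" config_version="2">
        <cman two_node="1" expected_votes="1" />
        <fence_daemon post_join_delay="30" />
        ...
</cluster>
```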

Left off here

Note to help minimize dual-fences:

  • you could add FENCED_OPTS="-f 5" to /etc/sysconfig/cman on *one* node (iLO fence devices may need this)
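The resulting line in that file would look like this (a sketch; which node carries the delay, and thus loses a fence race, is your choice):

```
# /etc/sysconfig/cman, on *one* node only:
FENCED_OPTS="-f 5"
```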

DRBD Config

Install from source:

Both:

# Obliterate peer - fence via cman
wget -c https://alteeve.com/files/an-cluster/sbin/obliterate-peer.sh -O /sbin/obliterate-peer.sh
chmod a+x /sbin/obliterate-peer.sh
ls -lah /sbin/obliterate-peer.sh

# Download, compile and install DRBD
wget -c http://oss.linbit.com/drbd/8.3/drbd-8.3.11.tar.gz
tar -xvzf drbd-8.3.11.tar.gz
cd drbd-8.3.11
./configure \
   --prefix=/usr \
   --localstatedir=/var \
   --sysconfdir=/etc \
   --with-utils \
   --with-km \
   --with-udev \
   --with-pacemaker \
   --with-rgmanager \
   --with-bashcompletion
make
make install

Configure

an-node01:

# Configure DRBD's global value.
cp /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
vim /etc/drbd.d/global_common.conf
diff -u /etc/drbd.d/global_common.conf
--- /etc/drbd.d/global_common.conf.orig	2011-08-01 21:58:46.000000000 -0400
+++ /etc/drbd.d/global_common.conf	2011-08-01 23:18:27.000000000 -0400
@@ -15,24 +15,35 @@
 		# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
 		# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
 		# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
+		fence-peer		"/sbin/obliterate-peer.sh";
 	}
 
 	startup {
 		# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
+		become-primary-on	both;
+		wfc-timeout		300;
+		degr-wfc-timeout	120;
 	}
 
 	disk {
 		# on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
 		# no-disk-drain no-md-flushes max-bio-bvecs
+		fencing			resource-and-stonith;
 	}
 
 	net {
 		# sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
 		# max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
 		# after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
+		allow-two-primaries;
+		after-sb-0pri		discard-zero-changes;
+		after-sb-1pri		discard-secondary;
+		after-sb-2pri		disconnect;
 	}
 
 	syncer {
 		# rate after al-extents use-rle cpu-mask verify-alg csums-alg
+		# This should be no more than 30% of the maximum sustainable write speed.
+		rate			20M;
 	}
 }
vim /etc/drbd.d/r0.res
resource r0 {
        device          /dev/drbd0;
        meta-disk       internal;
        on an-node01.alteeve.com {
                address         192.168.2.71:7789;
                disk            /dev/sda5;
        }
        on an-node02.alteeve.com {
                address         192.168.2.72:7789;
                disk            /dev/sda5;
        }
}
cp /etc/drbd.d/r0.res /etc/drbd.d/r1.res 
vim /etc/drbd.d/r1.res
resource r1 {
        device          /dev/drbd1;
        meta-disk       internal;
        on an-node01.alteeve.com {
                address         192.168.2.71:7790;
                disk            /dev/sdb1;
        }
        on an-node02.alteeve.com {
                address         192.168.2.72:7790;
                disk            /dev/sdb1;
        }
}
Note: If you have multiple DRBD resources on one (set of) backing disks, consider adding syncer { after <minor-1>; }. For example, tell /dev/drbd1 to wait for /dev/drbd0 by adding syncer { after 0; }. This will prevent simultaneous resyncs, which could seriously impact performance. Resources will wait until the defined resource has finished syncing.
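Applied to the r1.res file above, that note would look like this; only the syncer stanza is new, the rest is unchanged:

```
resource r1 {
        # Wait for minor 0 (r0) to finish resyncing before starting our own.
        syncer { after 0; }
        device          /dev/drbd1;
        meta-disk       internal;
        on an-node01.alteeve.com {
                address         192.168.2.71:7790;
                disk            /dev/sdb1;
        }
        on an-node02.alteeve.com {
                address         192.168.2.72:7790;
                disk            /dev/sdb1;
        }
}
```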

Validate:

drbdadm dump
  --==  Thank you for participating in the global usage survey  ==--
The server's response is:

you are the 369th user to install this version
# /usr/etc/drbd.conf
common {
    protocol               C;
    net {
        allow-two-primaries;
        after-sb-0pri    discard-zero-changes;
        after-sb-1pri    discard-secondary;
        after-sb-2pri    disconnect;
    }
    disk {
        fencing          resource-and-stonith;
    }
    syncer {
        rate             20M;
    }
    startup {
        wfc-timeout      300;
        degr-wfc-timeout 120;
        become-primary-on both;
    }
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error   "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer       /sbin/obliterate-peer.sh;
    }
}

# resource r0 on an-node01.alteeve.com: not ignored, not stacked
resource r0 {
    on an-node01.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/sda5;
        address          ipv4 192.168.2.71:7789;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/sda5;
        address          ipv4 192.168.2.72:7789;
        meta-disk        internal;
    }
}

# resource r1 on an-node01.alteeve.com: not ignored, not stacked
resource r1 {
    on an-node01.alteeve.com {
        device           /dev/drbd1 minor 1;
        disk             /dev/sdb1;
        address          ipv4 192.168.2.71:7790;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd1 minor 1;
        disk             /dev/sdb1;
        address          ipv4 192.168.2.72:7790;
        meta-disk        internal;
    }
}
rsync -av /etc/drbd.d root@an-node02:/etc/
drbd.d/
drbd.d/global_common.conf
drbd.d/global_common.conf.orig
drbd.d/r0.res
drbd.d/r1.res

sent 3523 bytes  received 110 bytes  7266.00 bytes/sec
total size is 3926  speedup is 1.08

Initialize And First Start

Both:

Create the meta-data.

drbdadm create-md r{0,1}
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

Attach, connect and confirm (after both have attached and connected):

drbdadm attach r{0,1}
drbdadm connect r{0,1}
cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by root@an-node01.alteeve.com, 2011-08-01 22:04:32
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:441969960
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:29309628

There is no data, so force both devices to be instantly UpToDate:

drbdadm -- --clear-bitmap new-current-uuid r{0,1}
cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by root@an-node01.alteeve.com, 2011-08-01 22:04:32
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Set both to primary and run a final check.

drbdadm primary r{0,1}
cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by root@an-node01.alteeve.com, 2011-08-01 22:04:32
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:672 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:672 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
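The healthy end state above (Connected, Primary/Primary, UpToDate/UpToDate) can be checked with a small helper. This is not one of the tutorial's tools, just an illustrative sketch:

```shell
# Sketch: succeed only when every resource line in a /proc/drbd-style
# file reports the fully-healthy dual-primary state.
check_drbd() {
    # Select the per-resource lines (" 0: cs:..."), then fail if any of
    # them lacks Connected ro:Primary/Primary ds:UpToDate/UpToDate.
    ! grep -E '^ *[0-9]+: ' "$1" \
        | grep -vq 'cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate'
}
```

For example, `check_drbd /proc/drbd` would return success on the output shown above, and failure while the resources were still Secondary/Inconsistent.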


iSCSI notes

IET vs tgt pros and cons needed.

default iSCSI port: 3260

 * initiator: The client side.
 * target: The server side.
 * SID: Session ID; found with iscsiadm -m session -P 1. The SID and sysfs path are not persistent; they are partially start-order based.
 * iQN: iSCSI Qualified Name; a string that uniquely identifies targets and initiators.
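Once targets exist, a VM-cluster node would attach to them along these lines. This is a hedged sketch: the portal address is the storage cluster's floating IP (192.168.2.100) from the logical map, and no targets have actually been defined yet at this point in the tutorial:

```shell
iscsiadm -m discovery -t sendtargets -p 192.168.2.100:3260   # find exported iQNs
iscsiadm -m node -L all                                      # log in to all discovered targets
iscsiadm -m session -P 1                                     # list sessions (shows the SID)
```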


Note: Linbit now offers 8.3.11 packages to RHEL6 clients; use those in this tutorial.


yum install iscsi-initiator-utils scsi-target-utils
/etc/init.d/iscsid start
iscsiadm -m iface -o new --interface=eth1
cp /etc/tgt/targets.conf /etc/tgt/targets.conf.orig
vim /etc/tgt/targets.conf
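To give a sense of where this is going, here is a sketch of /etc/tgt/targets.conf exporting the two DRBD devices under the iQNs shown in the logical map. The initiator-address restriction to the storage network is an assumption, not something configured earlier in this tutorial:

```
<target iqn.2011-08.com.alteeve:an-clusterA.target01.hdd>
    # Export drbd0 (the platter-backed array).
    backing-store /dev/drbd0
    # Only allow initiators on the Storage Network (an assumption).
    initiator-address 192.168.2.0/24
</target>
<target iqn.2011-08.com.alteeve:an-clusterA.target02.sdd>
    # Export drbd1 (the SSD-backed array).
    backing-store /dev/drbd1
    initiator-address 192.168.2.0/24
</target>
```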


Any questions, feedback, advice, complaints or meanderings are welcome.
© Alteeve's Niche! Inc. 1997-2024   Anvil! "Intelligent Availability®" Platform
legal stuff: All info is provided "As-Is". Do not use anything here unless you are willing and able to take responsibility for your own actions.