2-Node Red Hat KVM Cluster Tutorial - Quick guide
Revision as of 20:32, 5 September 2012
This is a "cookbook" version of the complete 2-Node Red Hat KVM Cluster Tutorial. It walks through all the steps needed to build a cluster, with no explanation at all. It is only useful to people who have already read the full tutorial and want a "cluster build checklist".
Note: This cookbook installs DRBD from ELRepo.
OS Install
This section is based on a minimal install.
Install Apps
yum -y update
yum -y install cman corosync rgmanager ricci gfs2-utils ntp libvirt lvm2-cluster qemu-kvm qemu-kvm-tools virt-install virt-viewer syslinux wget gpm rsync
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm
yum -y install drbd83-utils kmod-drbd83
yum -y remove NetworkManager
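An optional sanity check before moving on, not part of the original guide: confirm the key packages from the yum lines above actually installed. The package names are taken directly from the commands above.

```shell
# Verify the cluster and DRBD packages landed; report any that are missing.
PKGS="cman corosync rgmanager ricci gfs2-utils lvm2-cluster drbd83-utils kmod-drbd83"
MISSING=""
for p in $PKGS; do
    # rpm may not exist on non-RPM systems; skip the check in that case
    if command -v rpm >/dev/null 2>&1; then
        rpm -q "$p" >/dev/null 2>&1 || MISSING="$MISSING $p"
    fi
done
echo "missing packages:${MISSING:- none}"
```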
Set ricci's Password
passwd ricci
Setup ntpd
cat >> /etc/ntp.conf <<EOF
server tick.redhat.com
restrict tick.redhat.com mask 255.255.255.255 nomodify notrap noquery
EOF
Restore Detailed Boot Screen
Note: This can take a minute or three to finish; be patient.
plymouth-set-default-theme details -R
Enable and Disable Daemons
chkconfig iptables off
chkconfig --list iptables
chkconfig ip6tables off
chkconfig --list ip6tables
chkconfig network on
chkconfig --list network
chkconfig ntpd on
chkconfig --list ntpd
chkconfig ricci on
chkconfig --list ricci
chkconfig modclusterd on
chkconfig --list modclusterd
chkconfig drbd off
chkconfig --list drbd
chkconfig clvmd off
chkconfig --list clvmd
chkconfig gfs2 off
chkconfig --list gfs2
chkconfig cman off
chkconfig --list cman
chkconfig rgmanager off
chkconfig --list rgmanager
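The chkconfig pairs above can also be written as two loops over the service lists; a minimal sketch using the same service names (the --list verification calls are left out):

```shell
# Services that must NOT start at boot; the storage and cluster stack
# is brought up by hand once the cluster is ready.
OFF_SVCS="iptables ip6tables drbd clvmd gfs2 cman rgmanager"
# Services that must start at boot.
ON_SVCS="network ntpd ricci modclusterd"
for svc in $OFF_SVCS; do
    command -v chkconfig >/dev/null 2>&1 && chkconfig "$svc" off
done
for svc in $ON_SVCS; do
    command -v chkconfig >/dev/null 2>&1 && chkconfig "$svc" on
done
true
```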
Start Daemons
/etc/init.d/iptables stop
/etc/init.d/ip6tables stop
/etc/init.d/ntpd start
/etc/init.d/ricci start
/etc/init.d/modclusterd start
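Optionally confirm the daemons came up; ntpq lists peers once ntpd starts syncing, and ricci listens on TCP port 11111 by default. A hedged check, not part of the original guide:

```shell
RICCI_PORT=11111
# Show NTP peer status, if ntpq is available.
command -v ntpq >/dev/null 2>&1 && ntpq -p || true
# Confirm ricci is listening on its default port.
command -v netstat >/dev/null 2>&1 && netstat -tln | grep ":$RICCI_PORT " || true
```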
Configure Networking
Note: This assumes you've already renamed your ifcfg-ethX files.
Destroy the libvirtd bridge if needed.
If libvirtd is not yet running:
cat /dev/null >/etc/libvirt/qemu/networks/default.xml
Otherwise, if libvirtd has started:
virsh net-destroy default
virsh net-autostart default --disable
virsh net-undefine default
/etc/init.d/iptables stop
Back up the existing network files, then create the bond and bridge files.
mkdir /root/backups/
rsync -av /etc/sysconfig/network-scripts/ifcfg-eth* /root/backups/
touch /etc/sysconfig/network-scripts/ifcfg-bond{0..2}
touch /etc/sysconfig/network-scripts/ifcfg-vbr2
Warning: Be sure to use your own MAC addresses in the HWADDR="..." lines below.
Bridge:
vim /etc/sysconfig/network-scripts/ifcfg-vbr2
# Internet-Facing Network - Bridge
DEVICE="vbr2"
TYPE="Bridge"
BOOTPROTO="static"
IPADDR="10.255.0.1"
NETMASK="255.255.0.0"
GATEWAY="10.255.255.254"
DNS1="8.8.8.8"
DNS2="8.8.4.4"
DEFROUTE="yes"
Bonds:
vim /etc/sysconfig/network-scripts/ifcfg-bond0
# Back-Channel Network - Bond
DEVICE="bond0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth0"
IPADDR="10.20.0.1"
NETMASK="255.255.0.0"
vim /etc/sysconfig/network-scripts/ifcfg-bond1
# Storage Network - Bond
DEVICE="bond1"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth1"
IPADDR="10.10.0.1"
NETMASK="255.255.0.0"
vim /etc/sysconfig/network-scripts/ifcfg-bond2
# Internet-Facing Network - Bond
DEVICE="bond2"
BRIDGE="vbr2"
BOOTPROTO="none"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth2"
Ethernet devices:
vim /etc/sysconfig/network-scripts/ifcfg-eth0
# Back-Channel Network - Link 1
HWADDR="00:E0:81:C7:EC:49"
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
vim /etc/sysconfig/network-scripts/ifcfg-eth1
# Storage Network - Link 1
HWADDR="00:E0:81:C7:EC:48"
DEVICE="eth1"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond1"
SLAVE="yes"
vim /etc/sysconfig/network-scripts/ifcfg-eth2
# Internet-Facing Network - Link 1
HWADDR="00:E0:81:C7:EC:47"
DEVICE="eth2"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond2"
SLAVE="yes"
vim /etc/sysconfig/network-scripts/ifcfg-eth3
# Back-Channel Network - Link 2
HWADDR="00:1B:21:9D:59:FC"
DEVICE="eth3"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
vim /etc/sysconfig/network-scripts/ifcfg-eth4
# Storage Network - Link 2
HWADDR="00:1B:21:BF:70:02"
DEVICE="eth4"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond1"
SLAVE="yes"
vim /etc/sysconfig/network-scripts/ifcfg-eth5
# Internet-Facing Network - Link 2
HWADDR="00:1B:21:BF:6F:FE"
DEVICE="eth5"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond2"
SLAVE="yes"
Populate the hosts file:
vim /etc/hosts
# an-node01
10.20.0.1 an-node01 an-node01.bcn an-node01.alteeve.com
10.20.1.1 an-node01.ipmi
10.10.0.1 an-node01.sn
10.255.0.1 an-node01.ifn
# an-node02
10.20.0.2 an-node02 an-node02.bcn an-node02.alteeve.com
10.20.1.2 an-node02.ipmi
10.10.0.2 an-node02.sn
10.255.0.2 an-node02.ifn
# Fence devices
10.20.2.1 pdu1 pdu1.alteeve.com
10.20.2.2 pdu2 pdu2.alteeve.com
10.20.2.3 switch1 switch1.alteeve.com
Restart networking:
/etc/init.d/network restart
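After the restart, it is worth confirming that each bond picked up its primary slave and that bond2 joined the vbr2 bridge. A small check sketch, not part of the original guide:

```shell
BONDS="bond0 bond1 bond2"
for bond in $BONDS; do
    # The bonding driver reports per-bond state under /proc/net/bonding/.
    if [ -r "/proc/net/bonding/$bond" ]; then
        grep "Currently Active Slave" "/proc/net/bonding/$bond" || true
    fi
done
# Show the bridge and its attached interfaces, if brctl is available.
command -v brctl >/dev/null 2>&1 && brctl show vbr2 || true
```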
SSH Configuration
Note: Files are populated here on an-node01 and rsync'ed to an-node02.
ssh-keygen -t rsa -N "" -b 4095 -f ~/.ssh/id_rsa
Add the contents of ~/.ssh/id_rsa.pub from both nodes to:
vim ~/.ssh/authorized_keys
rsync -av ~/.ssh/authorized_keys root@an-node02:/root/.ssh/
SSH into both nodes using all of the host names to populate ~/.ssh/known_hosts.
# After ssh'ing into all host names:
rsync -av ~/.ssh/known_hosts root@an-node02:/root/.ssh/
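Alternatively, ssh-keyscan can pre-populate known_hosts without logging in to each name by hand; a sketch using the short and per-network names from /etc/hosts above:

```shell
HOSTS="an-node01 an-node01.bcn an-node01.sn an-node01.ifn \
       an-node02 an-node02.bcn an-node02.sn an-node02.ifn"
for h in $HOSTS; do
    # Only scan names that actually resolve, and ignore unreachable hosts.
    if getent hosts "$h" >/dev/null 2>&1; then
        ssh-keyscan -t rsa "$h" >> ~/.ssh/known_hosts 2>/dev/null || true
    fi
done
```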
Cluster Communications
Note: This assumes a pair of nodes with IPMI and redundant PSUs split across two switched PDUs.
Build the cluster communication section of /etc/cluster/cluster.conf.
vim /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="an-cluster-01" config_version="1">
    <cman expected_votes="1" two_node="1" />
    <clusternodes>
        <clusternode name="an-node01.alteeve.com" nodeid="1">
            <fence>
                <method name="ipmi">
                    <device name="ipmi_an01" action="reboot" />
                </method>
                <method name="pdu">
                    <device name="pdu1" port="1" action="reboot" />
                    <device name="pdu2" port="1" action="reboot" />
                </method>
            </fence>
        </clusternode>
        <clusternode name="an-node02.alteeve.com" nodeid="2">
            <fence>
                <method name="ipmi">
                    <device name="ipmi_an02" action="reboot" />
                </method>
                <method name="pdu">
                    <device name="pdu1" port="2" action="reboot" />
                    <device name="pdu2" port="2" action="reboot" />
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
        <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
        <fencedevice agent="fence_apc_snmp" ipaddr="pdu1.alteeve.com" name="pdu1" />
        <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2" />
    </fencedevices>
    <fence_daemon post_join_delay="30" />
    <totem rrp_mode="none" secauth="off" />
</cluster>
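Before starting cman for the first time, the configuration can be checked against the schema with ccs_config_validate, which ships with the RHEL 6 cluster packages. A hedged sketch:

```shell
if command -v ccs_config_validate >/dev/null 2>&1; then
    # Validates /etc/cluster/cluster.conf against the cluster schema.
    ccs_config_validate && RESULT="valid" || RESULT="INVALID"
else
    RESULT="skipped (ccs_config_validate not installed)"
fi
echo "cluster.conf check: $RESULT"
```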
Setting Up DRBD
Partitioning The Drives
Note: This assumes a hardware RAID array at /dev/sda.
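The quick guide elides the actual partitioning commands; see the full tutorial for the real steps. Purely as an illustrative sketch, review the existing layout and free space before touching the disk (the disk name matches the note above; everything else here is a placeholder):

```shell
DISK="/dev/sda"
# Review the current partition table and free space before creating the
# partition that will back the DRBD resource. Run the real partitioning
# commands only after checking them against the full tutorial.
if command -v parted >/dev/null 2>&1 && [ -b "$DISK" ]; then
    parted -s "$DISK" unit GiB print free || true
fi
```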
Installing the Fence Agent Hook
wget -c https://alteeve.com/files/an-cluster/sbin/obliterate-peer.sh -O /sbin/obliterate-peer.sh
chmod a+x /sbin/obliterate-peer.sh
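A quick check that the hook actually landed and is executable (wget -c can leave a partial file behind on a flaky connection):

```shell
HOOK="/sbin/obliterate-peer.sh"
if [ -x "$HOOK" ]; then
    echo "$HOOK is installed and executable"
else
    echo "WARNING: $HOOK is missing or not executable"
fi
```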
© Alteeve's Niche! Inc. 1997-2024