| {{howto_header}}
| {{warning|1=This tutorial is incomplete, flawed and generally sucks at this time. Do not follow this and expect anything to work. In large part, it's a dumping ground for notes and little else. This warning will be removed when the tutorial is completed.}}
| |
| | |
| {{warning|1=This tutorial is built on [[Red Hat]]'s Enterprise Linux 7 beta. [[Red Hat]] never confirms what a future release will contain until it is actually released, so there is a real chance that what is in the beta will '''not''' be in the final release.}}
| |
| | |
| This is the third '''AN!Cluster''' tutorial built on [[Red Hat]]'s Enterprise Linux 7. It improves on the [[Red Hat Cluster Service 2 Tutorial|RHEL 5, RHCS stable 2]] and [[AN!Cluster Tutorial 2]] tutorials.
| |
| | |
| As with the previous tutorials, the end goal of this tutorial is a 2-node cluster providing a platform for high-availability virtual servers. Its design attempts to remove all single points of failure from the system. Power and networking are made fully redundant in this version, and the node failures that would lead to a service interruption are minimized. This tutorial also covers the [[AN!Utilities]]: [[AN!Cluster Dashboard]], [[AN!Cluster Monitor]] and [[AN!Safe Cluster Shutdown]].
| |
| | |
| As in the previous tutorial, [[KVM]] will be the hypervisor used for facilitating virtual machines. The old <span class="code">[[cman]]</span> and <span class="code">[[rgmanager]]</span> tools are replaced in favour of <span class="code">[[pacemaker]]</span> for resource management.
| |
| | |
| = Before We Begin =
| |
| | |
| This tutorial '''does not''' require prior cluster experience, but it does expect familiarity with Linux and a low-intermediate understanding of networking. Where possible, steps are explained in detail and rationale is provided for why certain decisions are made.
| |
| | |
| '''For those with cluster experience''';
| |
| | |
| Please be careful not to skip too much. There are some major and some subtle changes from previous tutorials.
| |
| | |
| = OS Setup =
| |
| | |
| {{warning|1=We are using the [[RHEL]] 7 Release Candidate OS.}}
| |
| | |
| == Post OS Install ==
| |
| | |
| {{note|1=With RHEL7, <span class="code">[[biosdevname]]</span> tries to give network devices predictable names. It's very likely that your initial device names will differ from those in this tutorial.}}
| |
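| | |
| If you want to see what names were assigned on your hardware, a quick check like the one below lists the current interfaces and their MAC addresses. This is just a sanity check; the names shown will be whatever <span class="code">biosdevname</span>/udev chose on your machine.
| |
| <syntaxhighlight lang="bash">
| # Show each network device and its MAC address; the names here are whatever
| # the OS assigned and will likely differ from the names used in this tutorial.
| ip -o link show
| </syntaxhighlight>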
| | |
| === Enabling the RC Repos ===
| |
| | |
| While using the RHEL 7 RC, the public repos are disabled by default. This enables them.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vi /etc/yum.repos.d/rhel.repo
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| [rhel-rc]
| name=Red Hat Enterprise Linux 7 RC - $basearch
| |
| #baseurl=ftp://ftp.redhat.com/pub/redhat/rhel/rc/7/$basearch/os/
| |
| mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=rhel-7&arch=$basearch
| |
| enabled=1
| |
| gpgcheck=1
| |
| gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
| |
| | |
| [rhel-rc-source]
| |
| name=Red Hat Enterprise Linux 7 RC - $basearch - Source
| |
| #baseurl=ftp://ftp.redhat.com/pub/redhat/rhel/rc/7/source/
| |
| mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=rhel-source-7&arch=$basearch
| |
| enabled=0
| |
| gpgcheck=1
| |
| gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
| |
| | |
| [rhel-rc-debuginfo]
| |
| name=Red Hat Enterprise Linux 7 RC - $basearch - Debuginfo
| |
| #baseurl=ftp://ftp.redhat.com/pub/redhat/rhel/rc/7/$basearch/debug/
| |
| mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=rhel-debug-7&arch=$basearch
| |
| enabled=0
| |
| gpgcheck=1
| |
| gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| yum clean all
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Loaded plugins: product-id, subscription-manager
| |
| This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
| |
| Cleaning repos: rhel-beta rhel-beta-debuginfo rhel-beta-source
| |
| Cleaning up everything
| |
| </syntaxhighlight>
| |
| | |
| Done.
| |
| | |
| == Install ==
| | |
| Not all of these are required, but most are used at one point or another in this tutorial.
| |
| | |
| {{note|1=The <span class="code">fence-agents-virsh</span> package is not available in RHEL 7 beta. Further, it's only needed if you're building your cluster using VMs.}}
| |
| | |
| <syntaxhighlight lang="bash">
| |
| yum install bridge-utils corosync ntp pacemaker pcs rsync syslinux \
| |
| wget fence-agents-all fence-agents-virsh gpm man vim screen mlocate \
| |
| dlm dlm-lib lvm2-cluster gfs2-utils
| |
| </syntaxhighlight>
| |
| | |
| During the install, you will be asked to OK the keys:
| |
| | |
| <syntaxhighlight lang="text">
| |
| Importing GPG key 0xF21541EB:
| |
| Userid : "Red Hat, Inc. (beta key 2) <security@redhat.com>"
| |
| Fingerprint: b08b 659e e86a f623 bc90 e8db 938a 80ca f215 41eb
| |
| Package : redhat-release-everything-7.0-0.6.el7.x86_64 (@anaconda/7.0)
| |
| From : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
| |
| Is this ok [y/N]: y
| |
| Importing GPG key 0x897DA07A:
| |
| Userid : "Red Hat, Inc. (Beta Test Software) <rawhide@redhat.com>"
| |
| Fingerprint: 17e8 543d 1d4a a5fa a96a 7e9f fd37 2689 897d a07a
| |
| Package : redhat-release-everything-7.0-0.6.el7.x86_64 (@anaconda/7.0)
| |
| From : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
| |
| Is this ok [y/N]: y
| |
| </syntaxhighlight>
| |
| | |
| If you want to use your mouse at the node's terminal, run the following;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| systemctl enable gpm.service
| |
| systemctl start gpm.service
| |
| </syntaxhighlight>
| |
| | |
| Disable dlm and clvmd from starting on boot.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| systemctl disable clvmd.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| clvmd.service is not a native service, redirecting to /sbin/chkconfig.
| |
| Executing /sbin/chkconfig clvmd off
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| systemctl disable dlm.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| rm '/etc/systemd/system/multi-user.target.wants/dlm.service'
| |
| </syntaxhighlight>
| |
| | |
| === Making ssh faster when the net is down ===
| |
| | |
| By default, the nodes will try to resolve the host name of an incoming ssh connection. When the internet connection is down, DNS lookups have to time out, which can make login times quite slow. When something goes wrong, seconds count and waiting for up to a minute for an SSH password prompt can be maddening.
| |
| | |
| For this reason, we will make two changes to <span class="code">/etc/ssh/sshd_config</span> that disable this login delay.
| |
| | |
| Please be aware that this can reduce security. If this is a concern, skip this step.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| sed -i.anvil 's/#GSSAPIAuthentication no/GSSAPIAuthentication no/' /etc/ssh/sshd_config
| |
| sed -i 's/GSSAPIAuthentication yes/#GSSAPIAuthentication yes/' /etc/ssh/sshd_config
| |
| sed -i 's/#UseDNS yes/UseDNS no/' /etc/ssh/sshd_config
| |
| systemctl restart sshd.service
| |
| diff -u /etc/ssh/sshd_config.anvil /etc/ssh/sshd_config
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="diff">
| |
| --- /etc/ssh/sshd_config.anvil 2013-11-08 09:17:23.000000000 -0500
| |
| +++ /etc/ssh/sshd_config 2014-04-03 00:01:40.980951975 -0400
| |
| @@ -89,8 +89,8 @@
| |
| #KerberosUseKuserok yes
| |
|
| |
| # GSSAPI options
| |
| -#GSSAPIAuthentication no
| |
| -GSSAPIAuthentication yes
| |
| +GSSAPIAuthentication no
| |
| +#GSSAPIAuthentication yes
| |
| #GSSAPICleanupCredentials yes
| |
| GSSAPICleanupCredentials yes
| |
| #GSSAPIStrictAcceptorCheck yes
| |
| @@ -127,7 +127,7 @@
| |
| #ClientAliveInterval 0
| |
| #ClientAliveCountMax 3
| |
| #ShowPatchLevel no
| |
| -#UseDNS yes
| |
| +UseDNS no
| |
| #PidFile /var/run/sshd.pid
| |
| #MaxStartups 10:30:100
| |
| #PermitTunnel no
| |
| </syntaxhighlight>
| |
| | |
| Subsequent logins when the net is down should be quick.
| |
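| | |
| If you want to verify the change, a rough before-and-after comparison is simply to time a login from the peer node while the uplink is disconnected. Most of the old delay happened before the password prompt appeared, so even an informal test is telling.
| |
| <syntaxhighlight lang="bash">
| # Run from the other node; compare the time with the uplink up and down.
| time ssh root@an-c03n01 'exit'
| </syntaxhighlight>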
| | |
| === Configuring the network ===
| |
| | |
| Enable the <span class="code">eth0</span> interface on boot.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| sed -i.bak 's/ONBOOT=.*/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth0
| |
| diff -U0 /etc/sysconfig/network-scripts/ifcfg-eth0.bak /etc/sysconfig/network-scripts/ifcfg-eth0
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="diff">
| |
| --- /etc/sysconfig/network-scripts/ifcfg-eth0.bak 2014-01-23 16:15:45.008085032 -0500
| |
| +++ /etc/sysconfig/network-scripts/ifcfg-eth0 2014-01-23 16:15:25.573009623 -0500
| |
| @@ -11 +11 @@
| |
| -ONBOOT=no
| |
| +ONBOOT="yes"
| |
| </syntaxhighlight>
| |
| | |
| If you want to make any other changes, like configuring the interface to have a static IP, do so now. Once you're done editing;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| nmcli connection reload
| |
| systemctl restart NetworkManager.service
| |
| ip addr show
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
| |
| link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
| |
| inet 127.0.0.1/8 scope host lo
| |
| valid_lft forever preferred_lft forever
| |
| inet6 ::1/128 scope host
| |
| valid_lft forever preferred_lft forever
| |
| 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
| |
| link/ether 52:54:00:a7:9d:17 brd ff:ff:ff:ff:ff:ff
| |
| inet 192.168.122.201/24 scope global eth0
| |
| valid_lft forever preferred_lft forever
| |
| inet6 fe80::5054:ff:fea7:9d17/64 scope link
| |
| valid_lft forever preferred_lft forever
| |
| </syntaxhighlight>
| |
| | |
| The interface should now start on boot properly.
| |
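| | |
| If you would rather give <span class="code">eth0</span> a static IP instead of using DHCP, a minimal <span class="code">ifcfg-eth0</span> sketch is shown below. The address, netmask and gateway are examples only; use values appropriate for your network.
| |
| <syntaxhighlight lang="bash">
| # Example static configuration for eth0; adjust the values to suit.
| DEVICE="eth0"
| ONBOOT="yes"
| BOOTPROTO="none"
| IPADDR="192.168.122.201"
| NETMASK="255.255.255.0"
| GATEWAY="192.168.122.1"
| DNS1="8.8.8.8"
| </syntaxhighlight>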
| | |
| == Setting the Hostname ==
| |
| | |
| RHEL 7 is '''very''' different from [[EL6]] when it comes to setting the host name.
| |
| | |
| {{note|1=The '<span class="code">--pretty</span>' line currently doesn't work as there is [https://bugzilla.redhat.com/show_bug.cgi?id=895299 a bug (rhbz#895299)] with single-quotes.}}
| |
| {{note|1=The '<span class="code">--static</span>' option is currently needed to prevent the '<span class="code">.</span>' from being removed. See [https://bugzilla.redhat.com/show_bug.cgi?id=896756 this bug (rhbz#896756)].}}
| |
| | |
| Use a format that works for you. For the tutorial, node names are based on the following;
| |
| * A two-letter prefix identifying the company/user (<span class="code">an</span>, for "Alteeve's Niche!")
| |
| * A sequential cluster ID number in the form of <span class="code">cXX</span> (<span class="code">c01</span> for "Cluster 01", <span class="code">c02</span> for Cluster 02, etc)
| |
| * A sequential node ID number in the form of <span class="code">nYY</span>
| |
| | |
| In my case, this is my third cluster and I use the company prefix <span class="code">an</span>, so my two nodes will be;
| |
| * <span class="code">an-c03n01</span> - node 1
| |
| * <span class="code">an-c03n02</span> - node 2
| |
| | |
| Folks who've read my earlier tutorials will note that this is a departure in naming. I find this method spans and scales much better. Further, it is simply required in order to use the [[AN!CDB|AN! Cluster Dashboard]].
| |
| | |
| <syntaxhighlight lang="bash">
| |
| hostnamectl set-hostname an-c03n01.alteeve.ca --static
| |
| hostnamectl set-hostname --pretty "Alteeve's Niche! - Cluster 03, Node 01"
| |
| </syntaxhighlight>
| |
| | |
| If you want the new host name to take effect immediately, you can use the traditional <span class="code">hostname</span> command:
| |
| | |
| <syntaxhighlight lang="bash">
| |
| hostname an-c03n01.alteeve.ca
| |
| </syntaxhighlight>
| |
| | |
| The "pretty" host name is stored in <span class="code">/etc/machine-info</span> as the unquoted value for the <span class="code">PRETTY_HOSTNAME</span> value.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/machine-info
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| PRETTY_HOSTNAME=Alteeves Niche! - Cluster 03, Node 01
| |
| </syntaxhighlight>
| |
| | |
| If you can't get the <span class="code">hostname</span> command to work for some reason, you can reboot to have the system read the new values.
| |
| | |
| == What Security? ==
| |
| | |
| {{note|1=The final version of this tutorial '''will''' use the firewall and selinux. It's disabled to simplify debugging during the development stage of the tutorial only.}}
| |
| | |
| This section will be re-added at the end. For now;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| setenforce 0
| |
| sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
| |
| systemctl disable firewalld.service
| |
| systemctl stop firewalld.service
| |
| </syntaxhighlight>
| |
| | |
| == Network ==
| |
| | |
| We want static, named network devices. Follow this (a rough sketch of the renaming approach is shown after the link);
| |
| | |
| * [[Changing Ethernet Device Names in EL7 and Fedora 15+]]
| |
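| | |
| As a rough sketch of the approach that article uses; a NIC can be pinned to a name by matching its MAC address in a udev rule. The MAC and name below are examples only; the article covers the details, including matching <span class="code">HWADDR</span> in the <span class="code">ifcfg</span> files.
| |
| <syntaxhighlight lang="bash">
| # /etc/udev/rules.d/70-persistent-net.rules (example entry)
| # Match the NIC by its MAC address and force the name we want.
| SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="52:54:00:a7:9d:17", NAME="bcn1"
| </syntaxhighlight>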
| | |
| Then, use these configuration files;
| |
| | |
| Build the bridge;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-ifn-vbr1
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Internet-Facing Network - Bridge
| |
| DEVICE="ifn-vbr1"
| |
| TYPE="Bridge"
| |
| BOOTPROTO="none"
| |
| IPADDR="10.255.10.1"
| |
| NETMASK="255.255.0.0"
| |
| GATEWAY="10.255.255.254"
| |
| DNS1="8.8.8.8"
| |
| DNS2="8.8.4.4"
| |
| DEFROUTE="yes"
| |
| </syntaxhighlight>
| |
| | |
| Now build the bonds;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-ifn-bond1
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Internet-Facing Network - Bond
| |
| DEVICE="ifn-bond1"
| |
| BRIDGE="ifn-vbr1"
| |
| BOOTPROTO="none"
| |
| NM_CONTROLLED="no"
| |
| ONBOOT="yes"
| |
| BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=ifn1"
| |
| </syntaxhighlight>
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-sn-bond1
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Storage Network - Bond
| |
| DEVICE="sn-bond1"
| |
| BOOTPROTO="none"
| |
| NM_CONTROLLED="no"
| |
| ONBOOT="yes"
| |
| BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=sn1"
| |
| IPADDR="10.10.10.1"
| |
| NETMASK="255.255.0.0"
| |
| </syntaxhighlight>
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-bcn-bond1
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Back-Channel Network - Bond
| |
| DEVICE="bcn-bond1"
| |
| BOOTPROTO="none"
| |
| NM_CONTROLLED="no"
| |
| ONBOOT="yes"
| |
| BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=bcn1"
| |
| IPADDR="10.20.10.1"
| |
| NETMASK="255.255.0.0"
| |
| </syntaxhighlight>
| |
| | |
| Now tell the interfaces to be slaves to their bonds;
| |
| | |
| Internet-Facing Network;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-ifn1
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Internet-Facing Network - Link 1
| |
| DEVICE="ifn1"
| |
| NM_CONTROLLED="no"
| |
| BOOTPROTO="none"
| |
| ONBOOT="yes"
| |
| SLAVE="yes"
| |
| MASTER="ifn-bond1"
| |
| </syntaxhighlight>
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-ifn2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Internet-Facing Network - Link 2
| |
| DEVICE="ifn2"
| |
| NM_CONTROLLED="no"
| |
| BOOTPROTO="none"
| |
| ONBOOT="yes"
| |
| SLAVE="yes"
| |
| MASTER="ifn-bond1"
| |
| </syntaxhighlight>
| |
| | |
| Storage Network;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-sn1
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Storage Network - Link 1
| |
| DEVICE="sn1"
| |
| NM_CONTROLLED="no"
| |
| BOOTPROTO="none"
| |
| ONBOOT="yes"
| |
| SLAVE="yes"
| |
| MASTER="sn-bond1"
| |
| </syntaxhighlight>
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-sn2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Storage Network - Link 2
| |
| DEVICE="sn2"
| |
| NM_CONTROLLED="no"
| |
| BOOTPROTO="none"
| |
| ONBOOT="yes"
| |
| SLAVE="yes"
| |
| MASTER="sn-bond1"
| |
| </syntaxhighlight>
| |
| | |
| Back-Channel Network
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-bcn1
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Back-Channel Network - Link 1
| |
| DEVICE="bcn1"
| |
| NM_CONTROLLED="no"
| |
| BOOTPROTO="none"
| |
| ONBOOT="yes"
| |
| SLAVE="yes"
| |
| MASTER="bcn-bond1"
| |
| </syntaxhighlight>
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/sysconfig/network-scripts/ifcfg-bcn2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # Back-Channel Network - Link 2
| |
| DEVICE="bcn2"
| |
| NM_CONTROLLED="no"
| |
| BOOTPROTO="none"
| |
| ONBOOT="yes"
| |
| SLAVE="yes"
| |
| MASTER="bcn-bond1"
| |
| </syntaxhighlight>
| |
| | |
| Now restart the network, confirm that the bonds and bridge are up and you are ready to proceed.
| |
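| | |
| One way to do that check is sketched below, assuming the legacy <span class="code">network</span> service and the device names used above.
| |
| <syntaxhighlight lang="bash">
| # Restart networking, then confirm each bond has an active link and the
| # bridge exists with ifn-bond1 attached to it.
| systemctl restart network.service
| grep -i 'mii status' /proc/net/bonding/ifn-bond1 /proc/net/bonding/sn-bond1 /proc/net/bonding/bcn-bond1
| brctl show ifn-vbr1
| ip addr show
| </syntaxhighlight>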
| | |
| == Setup The hosts File ==
| |
| | |
| You can use [[DNS]] if you prefer. For now, let's use <span class="code">/etc/hosts</span> for node name resolution.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/hosts
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
| |
| ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
| |
| | |
| # AN!Cluster 03, Node 01
| |
| 10.255.30.1 an-c03n01.ifn
| |
| 10.10.30.1 an-c03n01.sn
| |
| 10.20.30.1 an-c03n01.bcn an-c03n01 an-c03n01.alteeve.ca
| |
| 10.20.31.1 an-c03n01.ipmi
| |
| | |
| # AN!Cluster 03, Node 02
| |
| 10.255.30.2 an-c03n02.ifn
| |
| 10.10.30.2 an-c03n02.sn
| |
| 10.20.30.2 an-c03n02.bcn an-c03n02 an-c03n02.alteeve.ca
| |
| 10.20.31.2 an-c03n02.ipmi
| |
| | |
| # Foundation Pack
| |
| 10.20.2.7 an-p03 an-p03.alteeve.ca
| |
| </syntaxhighlight>
| |
| | |
| == Setup SSH ==
| |
| | |
| Same as [[AN!Cluster_Tutorial_2#Setting_up_SSH|before]].
| |
| | |
| == Populating And Pushing ~/.ssh/known_hosts ==
| |
| | |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| ssh-keygen -t rsa -N "" -b 8191 -f ~/.ssh/id_rsa
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Generating public/private rsa key pair.
| |
| | |
| Your identification has been saved in /root/.ssh/id_rsa.
| |
| Your public key has been saved in /root/.ssh/id_rsa.pub.
| |
| The key fingerprint is:
| |
| be:17:cc:23:8e:b1:b4:76:a1:e4:2a:91:cb:cd:d8:3a root@an-c03n01.alteeve.ca
| |
| The key's randomart image is:
| |
| +--[ RSA 8191]----+
| |
| | |
| |
| | |
| |
| | |
| |
| | |
| |
| | . So |
| |
| | o +.o = |
| |
| | . B + B.o o |
| |
| | E + B o.. |
| |
| | .+.o ... |
| |
| +-----------------+
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| ssh-keygen -t rsa -N "" -b 8191 -f ~/.ssh/id_rsa
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Generating public/private rsa key pair.
| |
| Created directory '/root/.ssh'.
| |
| Your identification has been saved in /root/.ssh/id_rsa.
| |
| Your public key has been saved in /root/.ssh/id_rsa.pub.
| |
| The key fingerprint is:
| |
| 71:b1:9d:31:9f:7a:c9:10:74:e0:4c:69:53:8f:e4:70 root@an-c03n02.alteeve.ca
| |
| The key's randomart image is:
| |
| +--[ RSA 8191]----+
| |
| | ..O+E |
| |
| | B+% + |
| |
| | . o.*.= .|
| |
| | o + . |
| |
| | S . + |
| |
| | . |
| |
| | |
| |
| | |
| |
| | |
| |
| +-----------------+
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Setup authorized_keys:
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
| |
| ssh root@an-c03n02 "cat /root/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
| |
| rsync -av ~/.ssh/authorized_keys root@an-c03n02:/root/.ssh/
| |
| ssh root@an-c03n01
| |
| ssh root@an-c03n01.alteeve.ca
| |
| ssh root@an-c03n02
| |
| ssh root@an-c03n02.alteeve.ca
| |
| rsync -av ~/.ssh/known_hosts root@an-c03n02:/root/.ssh/
| |
| rsync -av /etc/hosts root@an-c03n02:/etc/
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| ssh root@an-c03n01
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| == Keeping Time in Sync ==
| |
| | |
| It's not as critical as it used to be to keep the clocks on the nodes in sync, but it's still a good idea.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| ln -sf /usr/share/zoneinfo/America/Toronto /etc/localtime
| |
| systemctl start ntpd.service
| |
| systemctl enable ntpd.service
| |
| </syntaxhighlight>
| |
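| | |
| Once <span class="code">ntpd</span> has been running for a few minutes, a quick way to confirm it is actually syncing is to list its peers; an asterisk beside a peer means we are synchronized to it.
| |
| <syntaxhighlight lang="bash">
| # List the NTP peers and their sync state.
| ntpq -p
| </syntaxhighlight>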
| | |
| == Configuring IPMI ==
| |
| | |
| RHEL 7 specifics based on the [[IPMI]] tutorial.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| yum -y install ipmitool OpenIPMI
| |
| systemctl start ipmi.service
| |
| systemctl enable ipmi.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| ln -s '/usr/lib/systemd/system/ipmi.service' '/etc/systemd/system/multi-user.target.wants/ipmi.service'
| |
| </syntaxhighlight>
| |
| | |
| Our servers use LAN channel 2; yours might be 1 or something else. Experiment.
| |
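| | |
| A simple way to experiment is to loop over the first few channel numbers and see which one returns data. This is just a probe; adjust the range for your BMC, and expect errors on channels that don't exist.
| |
| <syntaxhighlight lang="bash">
| # Probe LAN channels 1 through 4; the valid channel prints its configuration.
| for i in 1 2 3 4; do echo "== channel $i =="; ipmitool lan print $i; done
| </syntaxhighlight>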
| | |
| <syntaxhighlight lang="bash">
| |
| ipmitool lan print 2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Set in Progress : Set Complete
| |
| Auth Type Support : NONE MD5 PASSWORD
| |
| Auth Type Enable : Callback : NONE MD5 PASSWORD
| |
| : User : NONE MD5 PASSWORD
| |
| : Operator : NONE MD5 PASSWORD
| |
| : Admin : NONE MD5 PASSWORD
| |
| : OEM : NONE MD5 PASSWORD
| |
| IP Address Source : BIOS Assigned Address
| |
| IP Address : 10.20.51.1
| |
| Subnet Mask : 255.255.0.0
| |
| MAC Address : 00:19:99:9a:d8:e8
| |
| SNMP Community String : public
| |
| IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
| |
| Default Gateway IP : 10.20.255.254
| |
| 802.1q VLAN ID : Disabled
| |
| 802.1q VLAN Priority : 0
| |
| RMCP+ Cipher Suites : 0,1,2,3,6,7,8,17
| |
| Cipher Suite Priv Max : OOOOOOOOXXXXXXX
| |
| : X=Cipher Suite Unused
| |
| : c=CALLBACK
| |
| : u=USER
| |
| : o=OPERATOR
| |
| : a=ADMIN
| |
| : O=OEM
| |
| </syntaxhighlight>
| |
| | |
| I need to set the IPs to <span class="code">10.20.31.1/16</span> and <span class="code">10.20.31.2/16</span> for nodes 1 and 2, respectively. I also want to set the password to <span class="code">secret</span> for the <span class="code">admin</span> user.
| |
| | |
| '''Node 01''' IP;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| ipmitool lan set 2 ipsrc static
| |
| ipmitool lan set 2 ipaddr 10.20.31.1
| |
| ipmitool lan set 2 netmask 255.255.0.0
| |
| ipmitool lan set 2 defgw ipaddr 10.20.255.254
| |
| ipmitool lan print 2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Set in Progress : Set Complete
| |
| Auth Type Support : NONE MD5 PASSWORD
| |
| Auth Type Enable : Callback : NONE MD5 PASSWORD
| |
| : User : NONE MD5 PASSWORD
| |
| : Operator : NONE MD5 PASSWORD
| |
| : Admin : NONE MD5 PASSWORD
| |
| : OEM : NONE MD5 PASSWORD
| |
| IP Address Source : Static Address
| |
| IP Address : 10.20.31.1
| |
| Subnet Mask : 255.255.0.0
| |
| MAC Address : 00:19:99:9a:d8:e8
| |
| SNMP Community String : public
| |
| IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
| |
| Default Gateway IP : 10.20.255.254
| |
| 802.1q VLAN ID : Disabled
| |
| 802.1q VLAN Priority : 0
| |
| RMCP+ Cipher Suites : 0,1,2,3,6,7,8,17
| |
| Cipher Suite Priv Max : OOOOOOOOXXXXXXX
| |
| : X=Cipher Suite Unused
| |
| : c=CALLBACK
| |
| : u=USER
| |
| : o=OPERATOR
| |
| : a=ADMIN
| |
| : O=OEM
| |
| </syntaxhighlight>
| |
| | |
| '''Node 02''' IP;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| ipmitool lan set 2 ipsrc static
| |
| ipmitool lan set 2 ipaddr 10.20.31.2
| |
| ipmitool lan set 2 netmask 255.255.0.0
| |
| ipmitool lan set 2 defgw ipaddr 10.20.255.254
| |
| ipmitool lan print 2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Set in Progress : Set Complete
| |
| Auth Type Support : NONE MD5 PASSWORD
| |
| Auth Type Enable : Callback : NONE MD5 PASSWORD
| |
| : User : NONE MD5 PASSWORD
| |
| : Operator : NONE MD5 PASSWORD
| |
| : Admin : NONE MD5 PASSWORD
| |
| : OEM : NONE MD5 PASSWORD
| |
| IP Address Source : Static Address
| |
| IP Address : 10.20.31.2
| |
| Subnet Mask : 255.255.0.0
| |
| MAC Address : 00:19:99:9a:b1:78
| |
| SNMP Community String : public
| |
| IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
| |
| Default Gateway IP : 10.20.255.254
| |
| 802.1q VLAN ID : Disabled
| |
| 802.1q VLAN Priority : 0
| |
| RMCP+ Cipher Suites : 0,1,2,3,6,7,8,17
| |
| Cipher Suite Priv Max : OOOOOOOOXXXXXXX
| |
| : X=Cipher Suite Unused
| |
| : c=CALLBACK
| |
| : u=USER
| |
| : o=OPERATOR
| |
| : a=ADMIN
| |
| : O=OEM
| |
| </syntaxhighlight>
| |
| | |
| Set the password.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| ipmitool user list 2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| ID Name Callin Link Auth IPMI Msg Channel Priv Limit
| |
| 1 true true true Unknown (0x00)
| |
| 2 admin true true true OEM
| |
| Get User Access command failed (channel 2, user 3): Unknown (0x32)
| |
| </syntaxhighlight>
| |
| | |
| (ignore the error, it's harmless... *BOOM*)
| |
| | |
| We want to set <span class="code">admin</span>'s password, so we do:
| |
| | |
| {{note|1=The <span class="code">2</span> below is the ID number, not the LAN channel.}}
| |
| | |
| <syntaxhighlight lang="bash">
| |
| ipmitool user set password 2 secret
| |
| </syntaxhighlight>
| |
| | |
| Done!
| |
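| | |
| As a sanity check before moving on (assuming the IPs and the <span class="code">admin</span> / <span class="code">secret</span> credentials set above), each node should now be able to query its peer's IPMI BMC over the network. If your BMC does not support lanplus, drop the <span class="code">-I lanplus</span> switch.
| |
| <syntaxhighlight lang="bash">
| # From node 1, query node 2's IPMI interface (run the reverse from node 2).
| ipmitool -I lanplus -H an-c03n02.ipmi -U admin -P secret chassis power status
| </syntaxhighlight>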
| | |
| = Configuring the Cluster =
| |
| | |
| Now we're getting down to business!
| |
| | |
| For this section, we will be working on <span class="code">an-c03n01</span> and using [[ssh]] to perform tasks on <span class="code">an-c03n02</span>.
| |
| | |
| {{note|1=TODO: explain what this is and how it works.}}
| |
| | |
| == Enable the pcs Daemon ==
| |
| | |
| {{note|1=Most of this section comes more or less verbatim from the main [http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/index.html Clusters from Scratch] tutorial.}}
| |
| | |
| We will use [[pcs]], the Pacemaker Configuration System, to configure our cluster.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| systemctl start pcsd.service
| |
| systemctl enable pcsd.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service'
| |
| </syntaxhighlight>
| |
| | |
| Now we need to set a password for the <span class="code">hacluster</span> user. This is the account used by <span class="code">pcs</span> on one node to talk to the <span class="code">pcs</span> [[daemon]] on the other node. For this tutorial, we will use the password <span class="code">secret</span>. You will want to use [https://xkcd.com/936/ a stronger password], of course.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| echo secret | passwd --stdin hacluster
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Changing password for user hacluster.
| |
| passwd: all authentication tokens updated successfully.
| |
| </syntaxhighlight>
| |
| | |
| == Initializing the Cluster ==
| |
| | |
| One of the biggest reasons we're using the [[pcs]] tool, over something like [[crm]], is that it has been written to simplify the setup of clusters on [[Red Hat]] style operating systems. It will configure [[corosync]] automatically.
| |
| | |
| First, we need to know what host names we will need to use for <span class="code">[[pcs]]</span>.
| |
| | |
| '''Node 01''':
| |
| | |
| <syntaxhighlight lang="bash">
| |
| hostname
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| an-c03n01.alteeve.ca
| |
| </syntaxhighlight>
| |
| | |
| '''Node 02''':
| |
| | |
| <syntaxhighlight lang="bash">
| |
| hostname
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| an-c03n02.alteeve.ca
| |
| </syntaxhighlight>
| |
| | |
| Next, authenticate against the cluster nodes.
| |
| | |
| '''Both nodes''':
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs cluster auth an-c03n01.alteeve.ca an-c03n02.alteeve.ca -u hacluster
| |
| </syntaxhighlight>
| |
| | |
| This will ask you for the user name and password. The default user name is <span class="code">hacluster</span> and we set the password to <span class="code">secret</span>.
| |
| | |
| <syntaxhighlight lang="text">
| |
| Password:
| |
| an-c03n01.alteeve.ca: 6e9f7e98-dfb7-4305-b8e0-d84bf4f93ce3
| |
| an-c03n01.alteeve.ca: Authorized
| |
| an-c03n02.alteeve.ca: ffee6a85-ddac-4d03-9b97-f136d532b478
| |
| an-c03n02.alteeve.ca: Authorized
| |
| </syntaxhighlight>
| |
| | |
| '''Do this on one node only''':
| |
| | |
| Now to initialize the cluster's communication and membership layer.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs cluster setup --name an-cluster-03 an-c03n01.alteeve.ca an-c03n02.alteeve.ca
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| an-c03n01.alteeve.ca: Succeeded
| |
| an-c03n02.alteeve.ca: Succeeded
| |
| </syntaxhighlight>
| |
| | |
| This will create the corosync configuration file <span class="code">/etc/corosync/corosync.conf</span>;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| cat /etc/corosync/corosync.conf
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| totem {
| |
| version: 2
| |
| secauth: off
| |
| cluster_name: an-cluster-03
| |
| transport: udpu
| |
| }
| |
| | |
| nodelist {
| |
| node {
| |
| ring0_addr: an-c03n01.alteeve.ca
| |
| nodeid: 1
| |
| }
| |
| node {
| |
| ring0_addr: an-c03n02.alteeve.ca
| |
| nodeid: 2
| |
| }
| |
| }
| |
| | |
| quorum {
| |
| provider: corosync_votequorum
| |
| two_node: 1
| |
| }
| |
| | |
| logging {
| |
| to_syslog: yes
| |
| }
| |
| </syntaxhighlight>
| |
| | |
| == Start the Cluster For the First Time ==
| |
| | |
| This starts the cluster communication and membership layer for the first time.
| |
| | |
| '''On one node only''';
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs cluster start --all
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| an-c03n01.alteeve.ca: Starting Cluster...
| |
| an-c03n02.alteeve.ca: Starting Cluster...
| |
| </syntaxhighlight>
| |
| | |
| After a few moments, you should be able to check the status;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster name: an-cluster-03
| |
| WARNING: no stonith devices and stonith-enabled is not false
| |
| Last updated: Mon Jun 24 23:28:29 2013
| |
| Last change: Mon Jun 24 23:28:10 2013 via crmd on an-c03n01.alteeve.ca
| |
| Current DC: NONE
| |
| 2 Nodes configured, unknown expected votes
| |
| 0 Resources configured.
| |
| | |
| | |
| Node an-c03n01.alteeve.ca (1): UNCLEAN (offline)
| |
| Node an-c03n02.alteeve.ca (2): UNCLEAN (offline)
| |
| | |
| Full list of resources:
| |
| </syntaxhighlight>
| |
| | |
| The other node should show almost identical output.
| |
| | |
| == Disabling Quorum ==
| |
| | |
| {{note|1=Show the math.}}
| |
| | |
| With quorum enabled, a two-node cluster will lose quorum as soon as either node fails. So we have to disable quorum.
| |
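| | |
| The math, briefly; with corosync's vote quorum, each node contributes one vote and quorum requires a strict majority, that is <span class="code">floor(total_votes / 2) + 1</span>. With two nodes that works out to <span class="code">floor(2 / 2) + 1 = 2</span> votes, so the one surviving node can never hold quorum on its own after its peer fails.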
| | |
| By default, pacemaker uses quorum. You don't see this initially though;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs property
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster Properties:
| |
| dc-version: 1.1.9-0.1318.a7966fb.git.fc18-a7966fb
| |
| cluster-infrastructure: corosync
| |
| </syntaxhighlight>
| |
| | |
| To disable it, we set <span class="code">no-quorum-policy=ignore</span>.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs property set no-quorum-policy=ignore
| |
| pcs property
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster Properties:
| |
| dc-version: 1.1.9-0.1318.a7966fb.git.fc18-a7966fb
| |
| cluster-infrastructure: corosync
| |
| no-quorum-policy: ignore
| |
| </syntaxhighlight>
| |
| | |
| == Enabling and Configuring Fencing ==
| |
| | |
| We will use IPMI and PDU based fence devices for redundancy.
| |
| | |
| You can see the list of available fence agents here. You will need to find the one for your hardware fence devices.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs stonith list
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| fence_alom - Fence agent for Sun ALOM
| |
| fence_apc - Fence agent for APC over telnet/ssh
| |
| fence_apc_snmp - Fence agent for APC over SNMP
| |
| fence_baytech - I/O Fencing agent for Baytech RPC switches in combination with a Cyclades Terminal
| |
| Server
| |
| fence_bladecenter - Fence agent for IBM BladeCenter
| |
| fence_brocade - Fence agent for Brocade over telnet
| |
| fence_bullpap - I/O Fencing agent for Bull FAME architecture controlled by a PAP management console.
| |
| fence_cisco_mds - Fence agent for Cisco MDS
| |
| fence_cisco_ucs - Fence agent for Cisco UCS
| |
| fence_cpint - I/O Fencing agent for GFS on s390 and zSeries VM clusters
| |
| fence_drac - fencing agent for Dell Remote Access Card
| |
| fence_drac5 - Fence agent for Dell DRAC CMC/5
| |
| fence_eaton_snmp - Fence agent for Eaton over SNMP
| |
| fence_egenera - I/O Fencing agent for the Egenera BladeFrame
| |
| fence_eps - Fence agent for ePowerSwitch
| |
| fence_hpblade - Fence agent for HP BladeSystem
| |
| fence_ibmblade - Fence agent for IBM BladeCenter over SNMP
| |
| fence_idrac - Fence agent for IPMI over LAN
| |
| fence_ifmib - Fence agent for IF MIB
| |
| fence_ilo - Fence agent for HP iLO
| |
| fence_ilo2 - Fence agent for HP iLO
| |
| fence_ilo3 - Fence agent for IPMI over LAN
| |
| fence_ilo_mp - Fence agent for HP iLO MP
| |
| fence_imm - Fence agent for IPMI over LAN
| |
| fence_intelmodular - Fence agent for Intel Modular
| |
| fence_ipdu - Fence agent for iPDU over SNMP
| |
| fence_ipmilan - Fence agent for IPMI over LAN
| |
| fence_kdump - Fence agent for use with kdump
| |
| fence_ldom - Fence agent for Sun LDOM
| |
| fence_lpar - Fence agent for IBM LPAR
| |
| fence_mcdata - I/O Fencing agent for McData FC switches
| |
| fence_rackswitch - fence_rackswitch - I/O Fencing agent for RackSaver RackSwitch
| |
| fence_rhevm - Fence agent for RHEV-M REST API
| |
| fence_rsa - Fence agent for IBM RSA
| |
| fence_rsb - I/O Fencing agent for Fujitsu-Siemens RSB
| |
| fence_sanbox2 - Fence agent for QLogic SANBox2 FC switches
| |
| fence_scsi - fence agent for SCSI-3 persistent reservations
| |
| fence_virsh - Fence agent for virsh
| |
| fence_vixel - I/O Fencing agent for Vixel FC switches
| |
| fence_vmware - Fence agent for VMWare
| |
| fence_vmware_soap - Fence agent for VMWare over SOAP API
| |
| fence_wti - Fence agent for WTI
| |
| fence_xcat - I/O Fencing agent for xcat environments
| |
| fence_xenapi - XenAPI based fencing for the Citrix XenServer virtual machines.
| |
| fence_zvm - I/O Fencing agent for GFS on s390 and zSeries VM clusters
| |
| </syntaxhighlight>
| |
| | |
| We will use <span class="code">fence_ipmilan</span> and <span class="code">fence_apc_snmp</span>.
| |
| | |
| === Configuring IPMI Fencing ===
| |
| | |
| Every fence agent has a possibly unique subset of options that can be used. You can see a brief description of these options with the <span class="code">pcs stonith describe fence_X</span> command. Let's look at the options available for <span class="code">fence_ipmilan</span>.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs stonith describe fence_ipmilan
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Stonith options for: fence_ipmilan
| |
| auth: IPMI Lan Auth type (md5, password, or none)
| |
| ipaddr: IPMI Lan IP to talk to
| |
| passwd: Password (if required) to control power on IPMI device
| |
| passwd_script: Script to retrieve password (if required)
| |
| lanplus: Use Lanplus
| |
| login: Username/Login (if required) to control power on IPMI device
| |
| action: Operation to perform. Valid operations: on, off, reboot, status, list, diag, monitor or metadata
| |
| timeout: Timeout (sec) for IPMI operation
| |
| cipher: Ciphersuite to use (same as ipmitool -C parameter)
| |
| method: Method to fence (onoff or cycle)
| |
| power_wait: Wait X seconds after on/off operation
| |
| delay: Wait X seconds before fencing is started
| |
| privlvl: Privilege level on IPMI device
| |
| verbose: Verbose mode
| |
| </syntaxhighlight>
| |
| | |
| One of the nice things about pcs is that it allows us to create a test file to prepare all our changes in. Then, when we're happy with the changes, merge them into the running cluster. So let's make a copy called <span class="code">stonith_cfg</span>
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs cluster cib stonith_cfg
| |
| </syntaxhighlight>
| |
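| | |
| If you do stage changes into that file (by adding <span class="code">-f stonith_cfg</span> to the <span class="code">pcs</span> commands), the file is later pushed into the live cluster; depending on your pcs version the command is <span class="code">pcs cluster cib-push stonith_cfg</span> or the older <span class="code">pcs cluster push cib stonith_cfg</span>. The commands below are shown running directly against the live cluster.
| |
| <syntaxhighlight lang="bash">
| # Push the staged file into the running cluster once the changes are ready.
| pcs cluster cib-push stonith_cfg
| </syntaxhighlight>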
| | |
| Now add [[IPMI]] fencing.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| # unique name fence agent target node device addr options
| |
| pcs stonith create fence_n01_ipmi fence_ipmilan pcmk_host_list="an-c03n01.alteeve.ca" ipaddr="an-c03n01.ipmi" action="reboot" login="admin" passwd="secret" delay=15 op monitor interval=60s
| |
| pcs stonith create fence_n02_ipmi fence_ipmilan pcmk_host_list="an-c03n02.alteeve.ca" ipaddr="an-c03n02.ipmi" action="reboot" login="admin" passwd="secret" op monitor interval=60s
| |
| </syntaxhighlight>
| |
| | |
| Note that <span class="code">fence_n01_ipmi</span> has a <span class="code">delay=15</span> set but <span class="code">fence_n02_ipmi</span> does not. If the network connection breaks between the two nodes, they will both try to fence each other at the same time. If <span class="code">acpid</span> is running, the slower node will not die right away. It will continue to run for up to four more seconds, ample time for it to also initiate a fence against the faster node. The end result is that both nodes get fenced. The fifteen-second delay protects against this by causing <span class="code">an-c03n02</span> to pause for <span class="code">15</span> seconds before initiating a fence against <span class="code">an-c03n01</span>. If both nodes are alive, <span class="code">an-c03n02</span> will power off before the 15 seconds pass, so it will never fence <span class="code">an-c03n01</span>. However, if <span class="code">an-c03n01</span> really is dead, fencing will proceed as normal once the fifteen seconds have elapsed.
| |
| | |
| {{note|1=At the time of writing, <span class="code">pcmk_reboot_action</span> is needed to override pacemaker's global fence action and <span class="code">pcmk_reboot_action</span> is not recognized by pcs. Both of these issues will be resolved shortly; Pacemaker will honour <span class="code">action="..."</span> in v1.1.10 and pcs will recognize <span class="code">pcmk_*</span> special attributes "real soon now". Until then, the <span class="code">--force</span> switch is needed.}}
| |
| | |
| Next, add the [[PDU]] fencing. This requires distinct "off" and "on" actions for each outlet on each PDU. With two nodes, each with two [[PSU]]s, this translates to eight commands. The "off" commands will be monitored to alert us if the PDU fails for some reason. There is no reason to monitor the "on" actions (it would be redundant). Note also that we don't bother using a "delay". The IPMI fence method will go first, before the PDU actions, so the PDU is already delayed.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| # Node 1 - off
| |
| pcs stonith create fence_n01_pdu1_off fence_apc_snmp pcmk_host_list="an-c03n01.alteeve.ca" ipaddr="an-p01" action="off" port="1" op monitor interval="60s"
| |
| pcs stonith create fence_n01_pdu2_off fence_apc_snmp pcmk_host_list="an-c03n01.alteeve.ca" ipaddr="an-p02" action="off" port="1" power_wait="5" op monitor interval="60s"
| |
| | |
| # Node 1 - on
| |
| pcs stonith create fence_n01_pdu1_on fence_apc_snmp pcmk_host_list="an-c03n01.alteeve.ca" ipaddr="an-p01" action="on" port="1"
| |
| pcs stonith create fence_n01_pdu2_on fence_apc_snmp pcmk_host_list="an-c03n01.alteeve.ca" ipaddr="an-p02" action="on" port="1"
| |
| | |
| # Node 2 - off
| |
| pcs stonith create fence_n02_pdu1_off fence_apc_snmp pcmk_host_list="an-c03n02.alteeve.ca" ipaddr="an-p01" action="off" port="2" op monitor interval="60s"
| |
| pcs stonith create fence_n02_pdu2_off fence_apc_snmp pcmk_host_list="an-c03n02.alteeve.ca" ipaddr="an-p02" action="off" port="2" power_wait="5" op monitor interval="60s"
| |
| | |
| # Node 2 - on
| |
| pcs stonith create fence_n02_pdu1_on fence_apc_snmp pcmk_host_list="an-c03n02.alteeve.ca" ipaddr="an-p01" action="on" port="2"
| |
| pcs stonith create fence_n02_pdu2_on fence_apc_snmp pcmk_host_list="an-c03n02.alteeve.ca" ipaddr="an-p02" action="on" port="2"
| |
| </syntaxhighlight>
| |
| | |
| We can check the new configuration now;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster name: an-cluster-03
| |
| Last updated: Tue Jul 2 16:41:55 2013
| |
| Last change: Tue Jul 2 16:41:44 2013 via cibadmin on an-c03n01.alteeve.ca
| |
| Stack: corosync
| |
| Current DC: an-c03n01.alteeve.ca (1) - partition with quorum
| |
| Version: 1.1.9-3.fc19-781a388
| |
| 2 Nodes configured, unknown expected votes
| |
| 10 Resources configured.
| |
| | |
| | |
| Online: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| Full list of resources:
| |
| | |
| fence_n01_ipmi (stonith:fence_ipmilan): Started an-c03n01.alteeve.ca
| |
| fence_n02_ipmi (stonith:fence_ipmilan): Started an-c03n02.alteeve.ca
| |
| fence_n01_pdu1_off (stonith:fence_apc_snmp): Started an-c03n01.alteeve.ca
| |
| fence_n01_pdu2_off (stonith:fence_apc_snmp): Started an-c03n02.alteeve.ca
| |
| fence_n02_pdu1_off (stonith:fence_apc_snmp): Started an-c03n01.alteeve.ca
| |
| fence_n02_pdu2_off (stonith:fence_apc_snmp): Started an-c03n02.alteeve.ca
| |
| fence_n01_pdu1_on (stonith:fence_apc_snmp): Started an-c03n01.alteeve.ca
| |
| fence_n01_pdu2_on (stonith:fence_apc_snmp): Started an-c03n02.alteeve.ca
| |
| fence_n02_pdu1_on (stonith:fence_apc_snmp): Started an-c03n01.alteeve.ca
| |
| fence_n02_pdu2_on (stonith:fence_apc_snmp): Started an-c03n02.alteeve.ca
| |
| </syntaxhighlight>
| |
| | |
| Before we proceed, we need to tell pacemaker to use fencing;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs property set stonith-enabled=true
| |
| pcs property
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster Properties:
| |
| Cluster Properties:
| |
| cluster-infrastructure: corosync
| |
| dc-version: 1.1.9-3.fc19-781a388
| |
| no-quorum-policy: ignore
| |
| stonith-enabled: true
| |
| </syntaxhighlight>
| |
| | |
| Excellent!
| |
| | |
| == Configuring Fence Levels ==
| |
| | |
| The goal of fence levels is to tell pacemaker that there are "fence methods" to try and to impose an order on those methods. Each method is composed of one or more fence primitives and, when two or more primitives are tied together, all of them must succeed for the overall method to succeed.
| |
| | |
| So in our case, the order we want is;
| |
| | |
| * IPMI -> PDUs
| |
| | |
| The reason is that when IPMI fencing succeeds, we can be very certain the node is truly fenced. When PDU fencing succeeds, it only confirms that the power outlets were cycled. If someone moved a node's power cables to another outlet, we'll get a false positive. On that topic, tie down the node's PSU cables to the PDU's cable tray when possible, clearly label the power cables and rap the knuckles of anyone who might move them around.
| |
| | |
| The PDU fencing needs to be implemented using four steps;
| |
| | |
| * PDU 1, outlet X -> off
| |
| * PDU 2, outlet X -> off
| |
| ** The <span class="code">power_wait="5"</span> setting for the <span class="code">fence_n0X_pdu2_off</span> primitives will cause a 5 second delay here, giving ample time to ensure the nodes lose power
| |
| * PDU 1, outlet X -> on
| |
| * PDU 2, outlet X -> on
| |
| | |
| This ordering ensures that both outlets are off at the same time, guaranteeing that the node loses power. This works because <span class="code">fencing_topology</span> acts serially.
| |
| | |
| Putting all this together, we issue this command;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs stonith level add 1 an-c03n01.alteeve.ca fence_n01_ipmi
| |
| pcs stonith level add 1 an-c03n02.alteeve.ca fence_n02_ipmi
| |
| </syntaxhighlight>
| |
| | |
| The <span class="code">1</span> tells pacemaker that this is our highest priority fence method. We can see that this was set using pcs;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs stonith level
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| Node: an-c03n01.alteeve.ca
| |
| Level 1 - fence_n01_ipmi
| |
| Node: an-c03n02.alteeve.ca
| |
| Level 1 - fence_n02_ipmi
| |
| </syntaxhighlight>
| |
| | |
| Now we'll tell pacemaker to use the PDUs as the second fence method. Here we tie together the two <span class="code">off</span> calls and the two <span class="code">on</span> calls into a single method.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs stonith level add 2 an-c03n01.alteeve.ca fence_n01_pdu1_off,fence_n01_pdu2_off,fence_n01_pdu1_on,fence_n01_pdu2_on
| |
| pcs stonith level add 2 an-c03n02.alteeve.ca fence_n02_pdu1_off,fence_n02_pdu2_off,fence_n02_pdu1_on,fence_n02_pdu2_on
| |
| </syntaxhighlight>
| |
| | |
| Check again and we'll see that the new methods were added.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| pcs stonith level
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| Node: an-c03n01.alteeve.ca
| |
| Level 1 - fence_n01_ipmi
| |
| Level 2 - fence_n01_pdu1_off,fence_n01_pdu2_off,fence_n01_pdu1_on,fence_n01_pdu2_on
| |
| Node: an-c03n02.alteeve.ca
| |
| Level 1 - fence_n02_ipmi
| |
| Level 2 - fence_n02_pdu1_off,fence_n02_pdu2_off,fence_n02_pdu1_on,fence_n02_pdu2_on
| |
| </syntaxhighlight>
| |
| | |
| For those of us who are [[XML]] fans, this is what the [[cib]] looks like now:
| |
| | |
| <syntaxhighlight lang="bash">
| |
| cat /var/lib/pacemaker/cib/cib.xml
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="xml">
| |
| <cib epoch="18" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.2" cib-last-written="Thu Jul 18 13:15:53 2013" update-origin="an-c03n01.alteeve.ca" update-client="cibadmin" crm_feature_set="3.0.7" have-quorum="1" dc-uuid="1">
| |
| <configuration>
| |
| <crm_config>
| |
| <cluster_property_set id="cib-bootstrap-options">
| |
| <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.9-dde1c52"/>
| |
| <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
| |
| <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
| |
| </cluster_property_set>
| |
| </crm_config>
| |
| <nodes>
| |
| <node id="1" uname="an-c03n01.alteeve.ca"/>
| |
| <node id="2" uname="an-c03n02.alteeve.ca"/>
| |
| </nodes>
| |
| <resources>
| |
| <primitive class="stonith" id="fence_n01_ipmi" type="fence_ipmilan">
| |
| <instance_attributes id="fence_n01_ipmi-instance_attributes">
| |
| <nvpair id="fence_n01_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n01.alteeve.ca"/>
| |
| <nvpair id="fence_n01_ipmi-instance_attributes-ipaddr" name="ipaddr" value="an-c03n01.ipmi"/>
| |
| <nvpair id="fence_n01_ipmi-instance_attributes-action" name="action" value="reboot"/>
| |
| <nvpair id="fence_n01_ipmi-instance_attributes-login" name="login" value="admin"/>
| |
| <nvpair id="fence_n01_ipmi-instance_attributes-passwd" name="passwd" value="secret"/>
| |
| <nvpair id="fence_n01_ipmi-instance_attributes-delay" name="delay" value="15"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n01_ipmi-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n02_ipmi" type="fence_ipmilan">
| |
| <instance_attributes id="fence_n02_ipmi-instance_attributes">
| |
| <nvpair id="fence_n02_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n02.alteeve.ca"/>
| |
| <nvpair id="fence_n02_ipmi-instance_attributes-ipaddr" name="ipaddr" value="an-c03n02.ipmi"/>
| |
| <nvpair id="fence_n02_ipmi-instance_attributes-action" name="action" value="reboot"/>
| |
| <nvpair id="fence_n02_ipmi-instance_attributes-login" name="login" value="admin"/>
| |
| <nvpair id="fence_n02_ipmi-instance_attributes-passwd" name="passwd" value="secret"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n02_ipmi-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n01_pdu1_off" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n01_pdu1_off-instance_attributes">
| |
| <nvpair id="fence_n01_pdu1_off-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n01.alteeve.ca"/>
| |
| <nvpair id="fence_n01_pdu1_off-instance_attributes-ipaddr" name="ipaddr" value="an-p01"/>
| |
| <nvpair id="fence_n01_pdu1_off-instance_attributes-action" name="action" value="off"/>
| |
| <nvpair id="fence_n01_pdu1_off-instance_attributes-port" name="port" value="1"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n01_pdu1_off-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n01_pdu2_off" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n01_pdu2_off-instance_attributes">
| |
| <nvpair id="fence_n01_pdu2_off-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n01.alteeve.ca"/>
| |
| <nvpair id="fence_n01_pdu2_off-instance_attributes-ipaddr" name="ipaddr" value="an-p02"/>
| |
| <nvpair id="fence_n01_pdu2_off-instance_attributes-action" name="action" value="off"/>
| |
| <nvpair id="fence_n01_pdu2_off-instance_attributes-port" name="port" value="1"/>
| |
| <nvpair id="fence_n01_pdu2_off-instance_attributes-power_wait" name="power_wait" value="5"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n01_pdu2_off-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n01_pdu1_on" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n01_pdu1_on-instance_attributes">
| |
| <nvpair id="fence_n01_pdu1_on-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n01.alteeve.ca"/>
| |
| <nvpair id="fence_n01_pdu1_on-instance_attributes-ipaddr" name="ipaddr" value="an-p01"/>
| |
| <nvpair id="fence_n01_pdu1_on-instance_attributes-action" name="action" value="on"/>
| |
| <nvpair id="fence_n01_pdu1_on-instance_attributes-port" name="port" value="1"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n01_pdu1_on-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n01_pdu2_on" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n01_pdu2_on-instance_attributes">
| |
| <nvpair id="fence_n01_pdu2_on-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n01.alteeve.ca"/>
| |
| <nvpair id="fence_n01_pdu2_on-instance_attributes-ipaddr" name="ipaddr" value="an-p02"/>
| |
| <nvpair id="fence_n01_pdu2_on-instance_attributes-action" name="action" value="on"/>
| |
| <nvpair id="fence_n01_pdu2_on-instance_attributes-port" name="port" value="1"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n01_pdu2_on-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n02_pdu1_off" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n02_pdu1_off-instance_attributes">
| |
| <nvpair id="fence_n02_pdu1_off-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n02.alteeve.ca"/>
| |
| <nvpair id="fence_n02_pdu1_off-instance_attributes-ipaddr" name="ipaddr" value="an-p01"/>
| |
| <nvpair id="fence_n02_pdu1_off-instance_attributes-action" name="action" value="off"/>
| |
| <nvpair id="fence_n02_pdu1_off-instance_attributes-port" name="port" value="2"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n02_pdu1_off-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n02_pdu2_off" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n02_pdu2_off-instance_attributes">
| |
| <nvpair id="fence_n02_pdu2_off-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n02.alteeve.ca"/>
| |
| <nvpair id="fence_n02_pdu2_off-instance_attributes-ipaddr" name="ipaddr" value="an-p02"/>
| |
| <nvpair id="fence_n02_pdu2_off-instance_attributes-action" name="action" value="off"/>
| |
| <nvpair id="fence_n02_pdu2_off-instance_attributes-port" name="port" value="2"/>
| |
| <nvpair id="fence_n02_pdu2_off-instance_attributes-power_wait" name="power_wait" value="5"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n02_pdu2_off-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n02_pdu1_on" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n02_pdu1_on-instance_attributes">
| |
| <nvpair id="fence_n02_pdu1_on-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n02.alteeve.ca"/>
| |
| <nvpair id="fence_n02_pdu1_on-instance_attributes-ipaddr" name="ipaddr" value="an-p01"/>
| |
| <nvpair id="fence_n02_pdu1_on-instance_attributes-action" name="action" value="on"/>
| |
| <nvpair id="fence_n02_pdu1_on-instance_attributes-port" name="port" value="2"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n02_pdu1_on-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| <primitive class="stonith" id="fence_n02_pdu2_on" type="fence_apc_snmp">
| |
| <instance_attributes id="fence_n02_pdu2_on-instance_attributes">
| |
| <nvpair id="fence_n02_pdu2_on-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="an-c03n02.alteeve.ca"/>
| |
| <nvpair id="fence_n02_pdu2_on-instance_attributes-ipaddr" name="ipaddr" value="an-p02"/>
| |
| <nvpair id="fence_n02_pdu2_on-instance_attributes-action" name="action" value="on"/>
| |
| <nvpair id="fence_n02_pdu2_on-instance_attributes-port" name="port" value="2"/>
| |
| </instance_attributes>
| |
| <operations>
| |
| <op id="fence_n02_pdu2_on-monitor-interval-60s" interval="60s" name="monitor"/>
| |
| </operations>
| |
| </primitive>
| |
| </resources>
| |
| <constraints/>
| |
| <fencing-topology>
| |
| <fencing-level devices="fence_n01_ipmi" id="fl-an-c03n01.alteeve.ca-1" index="1" target="an-c03n01.alteeve.ca"/>
| |
| <fencing-level devices="fence_n02_ipmi" id="fl-an-c03n02.alteeve.ca-1" index="1" target="an-c03n02.alteeve.ca"/>
| |
| <fencing-level devices="fence_n01_pdu1_off,fence_n01_pdu2_off,fence_n01_pdu1_on,fence_n01_pdu2_on" id="fl-an-c03n01.alteeve.ca-2" index="2" target="an-c03n01.alteeve.ca"/>
| |
| <fencing-level devices="fence_n02_pdu1_off,fence_n02_pdu2_off,fence_n02_pdu1_on,fence_n02_pdu2_on" id="fl-an-c03n02.alteeve.ca-2" index="2" target="an-c03n02.alteeve.ca"/>
| |
| </fencing-topology>
| |
| </configuration>
| |
| </cib>
| |
| </syntaxhighlight>
| |
| | |
| == Fencing using fence_virsh ==
| |
| | |
| {{note|1=To write this section, I used two virtual machines called <span class="code">pcmk1</span> and <span class="code">pcmk2</span>.}}
| |
| | |
| If you are trying to learn fencing using KVM or Xen virtual machines, you can use the <span class="code">fence_virsh</span> fence agent. You can also use <span class="code">[[Fencing KVM Virtual Servers|fence_virtd]]</span>, which is actually recommended by many, but I have found it to be rather unreliable.
| |
| | |
| To use <span class="code">fence_virsh</span>, first install it.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| yum -y install fence-agents-virsh
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| <lots of yum output>
| |
| </syntaxhighlight>
| |
| | |
| Now test it from the command line. To do this, we need to know a few things;
| |
| * The VM host is at IP <span class="code">192.168.122.1</span>
| |
| * The username and password (<span class="code">-l</span> and <span class="code">-p</span> respectively) are the credentials used to log into the VM host over [[SSH]].
| |
| ** If you don't want your password to be shown, create a little shell script that simply prints your password and then use <span class="code">-S /path/to/script</span> instead of <span class="code">-p "secret"</span>.
| |
| * The name of the target VM, as shown by <span class="code">virsh list --all</span> on the host, is the node (<span class="code">-n</span>) value. For me, the nodes are called <span class="code">an-c03n01</span> and <span class="code">an-c03n02</span>.
| |
| | |
| === Create the Password Script ===
| |
| | |
| In my case, the host is called '<span class="code">lemass</span>', so I want to create a password script called '<span class="code">/root/lemass.pw</span>'. The name of the script is entirely up to you.
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| vim /root/lemass.pw
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| echo "my secret password"
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| chmod 755 /root/lemass.pw
| |
| /root/lemass.pw
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| my secret password
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| rsync -av /root/lemass.pw root@an-c03n02:/root/
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| sending incremental file list
| |
| lemass.pw
| |
| | |
| sent 102 bytes received 31 bytes 266.00 bytes/sec
| |
| total size is 25 speedup is 0.19
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| /root/lemass.pw
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| my secret password
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Done.
| |
| | |
| === Test fence_virsh Status from the Command Line ===
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| fence_virsh -a 192.168.122.1 -l root -S /root/lemass.pw -n an-c03n02 -o status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Status: ON
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| fence_virsh -a 192.168.122.1 -l root -S /root/lemass.pw -n an-c03n01 -o status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Status: ON
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Excellent! Now to configure it in pacemaker. Note that <span class="code">delay=15</span> is set on node 1's fence primitive only; if both nodes try to fence each other at the same moment, this delay gives node 1 a head start so that the two nodes don't power each other off simultaneously;
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs stonith create fence_n01_virsh fence_virsh pcmk_host_list="an-c03n01.alteeve.ca" ipaddr="192.168.122.1" action="reboot" login="root" passwd_script="/root/lemass.pw" port="an-c03n01" delay=15 op monitor interval=60s
| |
| pcs stonith create fence_n02_virsh fence_virsh pcmk_host_list="an-c03n02.alteeve.ca" ipaddr="192.168.122.1" action="reboot" login="root" passwd_script="/root/lemass.pw" port="an-c03n02" op monitor interval=60s
| |
| pcs cluster status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster Status:
| |
| Last updated: Sun Jan 26 15:45:31 2014
| |
| Last change: Sun Jan 26 15:06:14 2014 via crmd on an-c03n01.alteeve.ca
| |
| Stack: corosync
| |
| Current DC: an-c03n02.alteeve.ca (2) - partition with quorum
| |
| Version: 1.1.10-19.el7-368c726
| |
| 2 Nodes configured
| |
| 2 Resources configured
| |
| | |
| PCSD Status:
| |
| an-c03n01.alteeve.ca:
| |
| an-c03n01.alteeve.ca: Online
| |
| an-c03n02.alteeve.ca:
| |
| an-c03n02.alteeve.ca: Online
| |
| </syntaxhighlight>
| |
| |}
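| As an optional sanity check (not part of the original steps), you can review the stonith resources and their attributes;
| <syntaxhighlight lang="bash">
| # Show all stonith resources with their full attribute lists.
| pcs stonith show --full
| </syntaxhighlight>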
| |
| | |
| === Test Fencing ===
| |
| | |
| ToDo: Kill each node with <span class="code">echo c > /proc/sysrq-trigger</span> and make sure the other node fences it.
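| A minimal sketch of how this test might be run, assuming the node names used in this tutorial; crash one node and watch the survivor fence it.
| <syntaxhighlight lang="bash">
| # On the node to be killed (here an-c03n02). WARNING: this crashes the kernel immediately!
| echo c > /proc/sysrq-trigger
| </syntaxhighlight>
| <syntaxhighlight lang="bash">
| # On the surviving node, watch the fence action complete and the peer get rebooted.
| crm_mon -1
| tail -f /var/log/messages
| </syntaxhighlight>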
| |
| | |
| = Shared Storage =
| |
| | |
| == DRBD ==
| |
| | |
| We will use DRBD 8.4.
| |
| | |
| === Install DRBD 8.4.4 from AN! ===
| |
| | |
| {{warning|1=This doesn't work at this time. Install from source instead, as described in the next section.}}
| |
| | |
| ToDo: Make a proper repo
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| rpm -Uvh https://alteeve.ca/files/AN-Cluster_Tutorial_3/drbd84/drbd-8.4.4-4.el7.x86_64.rpm \
| |
| https://alteeve.ca/files/AN-Cluster_Tutorial_3/drbd84/drbd-bash-completion-8.4.4-4.el7.x86_64.rpm \
| |
| https://alteeve.ca/files/AN-Cluster_Tutorial_3/drbd84/drbd-pacemaker-8.4.4-4.el7.x86_64.rpm \
| |
| https://alteeve.ca/files/AN-Cluster_Tutorial_3/drbd84/drbd-udev-8.4.4-4.el7.x86_64.rpm \
| |
| https://alteeve.ca/files/AN-Cluster_Tutorial_3/drbd84/drbd-utils-8.4.4-4.el7.x86_64.rpm \
| |
| https://alteeve.ca/files/AN-Cluster_Tutorial_3/drbd84/drbd-heartbeat-8.4.4-4.el7.x86_64.rpm \
| |
| https://alteeve.ca/files/AN-Cluster_Tutorial_3/drbd84/drbd-xen-8.4.4-4.el7.x86_64.rpm
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| | |
| === Install DRBD 8.4.4 From Source ===
| |
| | |
| At this time, no EPEL repo exists for RHEL7, and the Fedora RPMs don't work, so we will install DRBD 8.4.4 from source.
| |
| | |
| Install dependencies:
| |
| | |
| <syntaxhighlight lang="bash">
| |
| yum -y install gcc flex rpm-build wget kernel-devel
| |
| wget -c http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
| |
| tar -xvzf drbd-8.4.4.tar.gz
| |
| cd drbd-8.4.4
| |
| ./configure \
| |
| --prefix=/usr \
| |
| --localstatedir=/var \
| |
| --sysconfdir=/etc \
| |
| --with-km \
| |
| --with-udev \
| |
| --with-pacemaker \
| |
| --with-bashcompletion \
| |
| --with-utils \
| |
| --without-xen \
| |
| --without-rgmanager \
| |
| --without-heartbeat
| |
| make
| |
| make install
| |
| </syntaxhighlight>
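| As an optional check (not in the original steps), confirm that the kernel module and userland tools installed against the running kernel;
| <syntaxhighlight lang="bash">
| # Show where the module was installed and its version.
| modinfo drbd | grep -E '^(filename|version)'
| # Load the module and confirm the version reported in /proc/drbd.
| modprobe drbd
| cat /proc/drbd
| </syntaxhighlight>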
| |
| | |
| Don't let DRBD start on boot (pacemaker will handle it for us).
| |
| | |
| <syntaxhighlight lang="bash">
| |
| systemctl disable drbd.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| drbd.service is not a native service, redirecting to /sbin/chkconfig.
| |
| Executing /sbin/chkconfig drbd off
| |
| </syntaxhighlight>
| |
| | |
| Done.
| |
| | |
| === Optional; Make RPMs ===
| |
| | |
| {{warning|1=I've not been able to get the RPMs generated here to install yet. I'd recommend skipping this, unless you want to help sort out the problems. :) }}
| |
| | |
| After <span class="code">./configure</span> above, you can make RPMs instead of installing directly.
| |
| | |
| Dependencies:
| |
| | |
| <syntaxhighlight lang="bash">
| |
| yum install rpmdevtools redhat-rpm-config kernel-devel
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| <install text>
| |
| </syntaxhighlight>
| |
| | |
| Setup RPM dev tree:
| |
| | |
| <syntaxhighlight lang="bash">
| |
| cd ~
| |
| rpmdev-setuptree
| |
| ls -lah ~/rpmbuild/
| |
| wget -c http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
| |
| tar -xvzf drbd-8.4.4.tar.gz
| |
| cd drbd-8.4.4
| |
| ./configure \
| |
| --prefix=/usr \
| |
| --localstatedir=/var \
| |
| --sysconfdir=/etc \
| |
| --with-km \
| |
| --with-udev \
| |
| --with-pacemaker \
| |
| --with-bashcompletion \
| |
| --with-utils \
| |
| --without-xen \
| |
| --without-heartbeat
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| total 4.0K
| |
| drwxr-xr-x. 7 root root 67 Dec 23 20:06 .
| |
| dr-xr-x---. 6 root root 4.0K Dec 23 20:06 ..
| |
| drwxr-xr-x. 2 root root 6 Dec 23 20:06 BUILD
| |
| drwxr-xr-x. 2 root root 6 Dec 23 20:06 RPMS
| |
| drwxr-xr-x. 2 root root 6 Dec 23 20:06 SOURCES
| |
| drwxr-xr-x. 2 root root 6 Dec 23 20:06 SPECS
| |
| drwxr-xr-x. 2 root root 6 Dec 23 20:06 SRPMS
| |
| </syntaxhighlight>
| |
| | |
| Userland tools:
| |
| | |
| <syntaxhighlight lang="bash">
| |
| make rpm
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| checking for presence of 8\.4\.4 in various changelog files
| |
| <snip>
| |
| + exit 0
| |
| You have now:
| |
| /root/rpmbuild/RPMS/x86_64/drbd-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-utils-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-xen-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-udev-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-pacemaker-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-heartbeat-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-bash-completion-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-debuginfo-8.4.4-4.el7.x86_64.rpm
| |
| </syntaxhighlight>
| |
| | |
| Kernel module:
| |
| | |
| <syntaxhighlight lang="bash">
| |
| make kmp-rpm
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| checking for presence of 8\.4\.4 in various changelog files
| |
| <snip>
| |
| + exit 0
| |
| You have now:
| |
| /root/rpmbuild/RPMS/x86_64/drbd-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-utils-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-xen-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-udev-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-pacemaker-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-heartbeat-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-bash-completion-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-debuginfo-8.4.4-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/kmod-drbd-8.4.4_3.10.0_54.0.1-4.el7.x86_64.rpm
| |
| /root/rpmbuild/RPMS/x86_64/drbd-kernel-debuginfo-8.4.4-4.el7.x86_64.rpm
| |
| </syntaxhighlight>
| |
| | |
| === Configure DRBD ===
| |
| | |
| Configure <span class="code">global_common.conf</span>;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/drbd.d/global_common.conf
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # These are options to set for the DRBD daemon; they set the default values for
| |
| # resources.
| |
| global {
| |
| # This tells DRBD that you allow it to report this installation to
| |
| # LINBIT for statistical purposes. If you have privacy concerns, set
| |
| # this to 'no'. The default is 'ask' which will prompt you each time
| |
| # DRBD is updated. Set to 'yes' to allow it without being prompted.
| |
| usage-count no;
| |
| | |
| # minor-count dialog-refresh disable-ip-verification
| |
| }
| |
| | |
| common {
| |
| handlers {
| |
| pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
| |
| pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
| |
| local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
| |
| # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
| |
| # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
| |
| # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
| |
| # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
| |
|
| |
| # Hook into Pacemaker's fencing.
| |
| fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
| |
| }
| |
| | |
| startup {
| |
| # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
| |
| }
| |
| | |
| options {
| |
| # cpu-mask on-no-data-accessible
| |
| }
| |
| | |
| disk {
| |
| # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
| |
| # disk-drain md-flushes resync-rate resync-after al-extents
| |
| # c-plan-ahead c-delay-target c-fill-target c-max-rate
| |
| # c-min-rate disk-timeout
| |
| fencing resource-and-stonith;
| |
| }
| |
| | |
| net {
| |
| # protocol timeout max-epoch-size max-buffers unplug-watermark
| |
| # connect-int ping-int sndbuf-size rcvbuf-size ko-count
| |
| # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
| |
| # after-sb-1pri after-sb-2pri always-asbp rr-conflict
| |
| # ping-timeout data-integrity-alg tcp-cork on-congestion
| |
| # congestion-fill congestion-extents csums-alg verify-alg
| |
| # use-rle
| |
| | |
| # Protocol "C" tells DRBD not to tell the operating system that
| |
| # the write is complete until the data has reached persistent
| |
| # storage on both nodes. This is the slowest option, but it is
| |
| # also the only one that guarantees consistency between the
| |
| # nodes. It is also required for dual-primary, which we will
| |
| # be using.
| |
| protocol C;
| |
| | |
| # Tell DRBD to allow dual-primary. This is needed to enable
| |
| # live-migration of our servers.
| |
| allow-two-primaries yes;
| |
| | |
| # This tells DRBD what to do in the case of a split-brain when
| |
| # neither node was primary, when one node was primary and when
| |
| # both nodes are primary. In our case, we'll be running
| |
| # dual-primary, so we can not safely recover automatically. The
| |
| # only safe option is for the nodes to disconnect from one
| |
| # another and let a human decide which node to invalidate.
| |
| after-sb-0pri discard-zero-changes;
| |
| after-sb-1pri discard-secondary;
| |
| after-sb-2pri disconnect;
| |
| }
| |
| }
| |
| </syntaxhighlight>
| |
| | |
| And now configure the first resource;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| vim /etc/drbd.d/r0.res
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| # This is the first DRBD resource. It will store the shared file systems and
| |
| # the servers designed to run on node 01.
| |
| resource r0 {
| |
| # These options here are common to both nodes. If for some reason you
| |
| # need to set unique values per node, you can move these to the
| |
| # 'on <name> { ... }' section.
| |
|
| |
| # This sets the device name of this DRBD resource.
| |
| device /dev/drbd0;
| |
| | |
| # This tells DRBD what the backing device is for this resource.
| |
| disk /dev/sda5;
| |
| | |
| # This controls the location of the metadata. When "internal" is used,
| |
| # as we use here, a little space at the end of the backing devices is
| |
| # set aside (roughly 32 MB per 1 TB of raw storage). External metadata
| |
| # can be used to put the metadata on another partition when converting
| |
| # existing file systems to be DRBD backed, when there is no extra space
| |
| # available for the metadata.
| |
| meta-disk internal;
| |
| | |
| # NOTE: This is not required or even recommended with pacemaker. Remove
| |
| # this option as soon as pacemaker is set up.
| |
| startup {
| |
| # This tells DRBD to promote both nodes to 'primary' when this
| |
| # resource starts. However, we will let pacemaker control this
| |
| # so we comment it out, which tells DRBD to leave both nodes
| |
| # as secondary when drbd starts.
| |
| #become-primary-on both;
| |
| }
| |
| | |
| # NOTE: Later, make it an option in the dashboard to trigger a manual
| |
| # verify and/or schedule periodic automatic runs
| |
| net {
| |
| # TODO: Test performance differences between sha1 and md5
| |
| # This tells DRBD how to do a block-by-block verification of
| |
| # the data stored on the backing devices. Any verification
| |
| # failures will result in the affected block being marked
| |
| # out-of-sync.
| |
| verify-alg md5;
| |
| | |
| # TODO: Test the performance hit of this being enabled.
| |
| # This tells DRBD to generate a checksum for each transmitted
| |
| # packet. If the received data doesn't generate the same
| |
| # sum, a retransmit request is generated. This protects against
| |
| # otherwise-undetected errors in transmission, like
| |
| # bit-flipping. See:
| |
| # http://www.drbd.org/users-guide/s-integrity-check.html
| |
| data-integrity-alg md5;
| |
| }
| |
| | |
| # WARNING: Confirm that these are safe when the controller's BBU is
| |
| # depleted/failed and the controller enters write-through
| |
| # mode.
| |
| disk {
| |
| # TODO: Test the real-world performance differences gained with
| |
| # these options.
| |
| # This tells DRBD not to bypass the write-back caching on the
| |
| # RAID controller. Normally, DRBD forces the data to be flushed
| |
| # to disk, rather than allowing the write-back caching to
| |
| # handle it. Normally this is dangerous, but with BBU-backed
| |
| # caching, it is safe. The first option disables disk flushing
| |
| # and the second disables metadata flushes.
| |
| disk-flushes no;
| |
| md-flushes no;
| |
| }
| |
| | |
| # This sets up the resource on node 01. The name used below must be the
| |
| # name returned by "uname -n".
| |
| on an-c03n01.alteeve.ca {
| |
| # This is the address and port to use for DRBD traffic on this
| |
| # node. Multiple resources can use the same IP but the ports
| |
| # must differ. By convention, the first resource uses 7788, the
| |
| # second uses 7789 and so on, incrementing by one for each
| |
| # additional resource.
| |
| address 10.10.30.1:7788;
| |
| }
| |
| on an-c03n02.alteeve.ca {
| |
| address 10.10.30.2:7788;
| |
| }
| |
| }
| |
| </syntaxhighlight>
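| For reference, the manual verify pass mentioned in the comments above can be started later, once the resource is up and connected, with something like this;
| <syntaxhighlight lang="bash">
| # Start an online verify of resource r0 (uses the verify-alg set above).
| drbdadm verify r0
| # Progress is reported in /proc/drbd while the verify runs.
| cat /proc/drbd
| </syntaxhighlight>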
| |
| | |
| Disable <span class="code">drbd</span> from starting on boot.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| systemctl disable drbd.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| drbd.service is not a native service, redirecting to /sbin/chkconfig.
| |
| Executing /sbin/chkconfig drbd off
| |
| </syntaxhighlight>
| |
| | |
| Load the config;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| modprobe drbd
| |
| </syntaxhighlight>
| |
| | |
| Now check the config;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| drbdadm dump
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| --== Thank you for participating in the global usage survey ==--
| |
| The server's response is:
| |
| | |
| you are the 69th user to install this version
| |
| /etc/drbd.d/r0.res:3: in resource r0:
| |
| become-primary-on is set to both, but allow-two-primaries is not set.
| |
| </syntaxhighlight>
| |
| | |
| Ignore that error. It has been reported and does not affect operation.
| |
| | |
| Create the metadisk;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| drbdadm create-md r0
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Writing meta data...
| |
| initializing activity log
| |
| NOT initializing bitmap
| |
| New drbd meta data block successfully created.
| |
| success
| |
| </syntaxhighlight>
| |
| | |
| Start the DRBD resource on both nodes;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| drbdadm up r0
| |
| </syntaxhighlight>
| |
| | |
| Once <span class="code">/proc/drbd</span> shows both nodes connected, force one node to primary and it will sync its data over to the second.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| drbdadm primary --force r0
| |
| </syntaxhighlight>
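| If you want to watch the initial synchronization progress (optional), <span class="code">/proc/drbd</span> updates as the sync proceeds;
| <syntaxhighlight lang="bash">
| watch -n2 cat /proc/drbd
| </syntaxhighlight>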
| |
| | |
| You should see the resource syncing now. Promote the other node to primary as well, so that both nodes are primary;
| |
| | |
| <syntaxhighlight lang="bash">
| |
| drbdadm primary r0
| |
| </syntaxhighlight>
| |
| | |
| == DLM, Clustered LVM and GFS2 ==
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| sed -i.anvil 's^filter = \[ "a/\.\*/" \]^filter = \[ "a|/dev/drbd*|", "r/.*/" \]^' /etc/lvm/lvm.conf
| |
| sed -i 's/locking_type = 1$/locking_type = 3/' /etc/lvm/lvm.conf
| |
| sed -i 's/fallback_to_local_locking = 1$/fallback_to_local_locking = 0/' /etc/lvm/lvm.conf
| |
| sed -i 's/use_lvmetad = 1$/use_lvmetad = 0/' /etc/lvm/lvm.conf
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="diff">
| |
| --- /etc/lvm/lvm.conf.anvil 2013-11-27 03:28:08.000000000 -0500
| |
| +++ /etc/lvm/lvm.conf 2014-01-26 18:57:41.026928464 -0500
| |
| @@ -84,7 +84,7 @@
| |
| # lvmetad is used" comment that is attached to global/use_lvmetad setting.
| |
|
| |
| # By default we accept every block device:
| |
| - filter = [ "a/.*/" ]
| |
| + filter = [ "a|/dev/drbd*|", "r/.*/" ]
| |
|
| |
| # Exclude the cdrom drive
| |
| # filter = [ "r|/dev/cdrom|" ]
| |
| @@ -451,7 +451,7 @@
| |
| # supported in clustered environment. If use_lvmetad=1 and locking_type=3
| |
| # is set at the same time, LVM always issues a warning message about this
| |
| # and then it automatically disables lvmetad use.
| |
| - locking_type = 1
| |
| + locking_type = 3
| |
|
| |
| # Set to 0 to fail when a lock request cannot be satisfied immediately.
| |
| wait_for_locks = 1
| |
| @@ -467,7 +467,7 @@
| |
| # to 1 an attempt will be made to use local file-based locking (type 1).
| |
| # If this succeeds, only commands against local volume groups will proceed.
| |
| # Volume Groups marked as clustered will be ignored.
| |
| - fallback_to_local_locking = 1
| |
| + fallback_to_local_locking = 0
| |
|
| |
| # Local non-LV directory that holds file-based locks while commands are
| |
| # in progress. A directory like /tmp that may get wiped on reboot is OK.
| |
| @@ -594,7 +594,7 @@
| |
| # supported in clustered environment. If use_lvmetad=1 and locking_type=3
| |
| # is set at the same time, LVM always issues a warning message about this
| |
| # and then it automatically disables lvmetad use.
| |
| - use_lvmetad = 1
| |
| + use_lvmetad = 0
| |
|
| |
| # Full path of the utility called to check that a thin metadata device
| |
| # is in a state that allows it to be used.
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| rsync -av /etc/lvm/lvm.conf* root@an-c03n02:/etc/lvm/
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| sending incremental file list
| |
| lvm.conf
| |
| lvm.conf.anvil
| |
| | |
| sent 48536 bytes received 440 bytes 97952.00 bytes/sec
| |
| total size is 90673 speedup is 1.85
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| diff -u /etc/lvm/lvm.conf.anvil /etc/lvm/lvm.conf
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="diff">
| |
| --- /etc/lvm/lvm.conf.anvil 2013-11-27 03:28:08.000000000 -0500
| |
| +++ /etc/lvm/lvm.conf 2014-01-26 18:57:41.000000000 -0500
| |
| @@ -84,7 +84,7 @@
| |
| # lvmetad is used" comment that is attached to global/use_lvmetad setting.
| |
|
| |
| # By default we accept every block device:
| |
| - filter = [ "a/.*/" ]
| |
| + filter = [ "a|/dev/drbd*|", "r/.*/" ]
| |
|
| |
| # Exclude the cdrom drive
| |
| # filter = [ "r|/dev/cdrom|" ]
| |
| @@ -451,7 +451,7 @@
| |
| # supported in clustered environment. If use_lvmetad=1 and locking_type=3
| |
| # is set at the same time, LVM always issues a warning message about this
| |
| # and then it automatically disables lvmetad use.
| |
| - locking_type = 1
| |
| + locking_type = 3
| |
|
| |
| # Set to 0 to fail when a lock request cannot be satisfied immediately.
| |
| wait_for_locks = 1
| |
| @@ -467,7 +467,7 @@
| |
| # to 1 an attempt will be made to use local file-based locking (type 1).
| |
| # If this succeeds, only commands against local volume groups will proceed.
| |
| # Volume Groups marked as clustered will be ignored.
| |
| - fallback_to_local_locking = 1
| |
| + fallback_to_local_locking = 0
| |
|
| |
| # Local non-LV directory that holds file-based locks while commands are
| |
| # in progress. A directory like /tmp that may get wiped on reboot is OK.
| |
| @@ -594,7 +594,7 @@
| |
| # supported in clustered environment. If use_lvmetad=1 and locking_type=3
| |
| # is set at the same time, LVM always issues a warning message about this
| |
| # and then it automatically disables lvmetad use.
| |
| - use_lvmetad = 1
| |
| + use_lvmetad = 0
| |
|
| |
| # Full path of the utility called to check that a thin metadata device
| |
| # is in a state that allows it to be used.
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Disable <span class="code">lvmetad</span> as it's not cluster-aware.
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| systemctl disable lvm2-lvmetad.service
| |
| systemctl disable lvm2-lvmetad.socket
| |
| systemctl stop lvm2-lvmetad.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| rm '/etc/systemd/system/sockets.target.wants/lvm2-lvmetad.socket'
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| systemctl disable lvm2-lvmetad.service
| |
| systemctl disable lvm2-lvmetad.socket
| |
| systemctl stop lvm2-lvmetad.service
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| rm '/etc/systemd/system/sockets.target.wants/lvm2-lvmetad.socket'
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| {{note|1=This will be moved to pacemaker shortly. We're starting these services manually here just long enough to do the initial storage setup.}}
| |
| | |
| Start DLM and clvmd;
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| systemctl start dlm.service
| |
| systemctl start clvmd.service
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| systemctl start dlm.service
| |
| systemctl start clvmd.service
| |
| </syntaxhighlight>
| |
| |}
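| Optionally, confirm that both daemons actually started before proceeding;
| <syntaxhighlight lang="bash">
| systemctl status dlm.service clvmd.service
| </syntaxhighlight>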
| |
| | |
| Create the [[PV]], [[VG]] and the <span class="code">/shared</span> [[LV]];
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pvcreate /dev/drbd0
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Physical volume "/dev/drbd0" successfully created
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| vgcreate an-c03n01_vg0 /dev/drbd0
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| /proc/devices: No entry for device-mapper found
| |
| Clustered volume group "an-c03n01_vg0" successfully created
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| lvcreate -L 10G -n shared an-c03n01_vg0
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Logical volume "shared" created
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pvscan
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| PV /dev/drbd0 VG an-c03n01_vg0 lvm2 [20.00 GiB / 20.00 GiB free]
| |
| Total: 1 [20.00 GiB] / in use: 1 [20.00 GiB] / in no VG: 0 [0 ]
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| vgscan
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Reading all physical volumes. This may take a while...
| |
| Found volume group "an-c03n01_vg0" using metadata type lvm2
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| lvscan
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| ACTIVE '/dev/an-c03n01_vg0/shared' [10.00 GiB] inherit
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Format the <span class="code">/dev/an-c03n01_vg0/shared</span> logical volume as GFS2;
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| mkfs.gfs2 -j 2 -p lock_dlm -t an-cluster-03:shared /dev/an-c03n01_vg0/shared
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| /dev/an-c03n01_vg0/shared is a symbolic link to /dev/dm-0
| |
| This will destroy any data on /dev/dm-0
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Are you sure you want to proceed? [y/n]y
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Device: /dev/an-c03n01_vg0/shared
| |
| Block size: 4096
| |
| Device size: 10.00 GB (2621440 blocks)
| |
| Filesystem size: 10.00 GB (2621438 blocks)
| |
| Journals: 2
| |
| Resource groups: 40
| |
| Locking protocol: "lock_dlm"
| |
| Lock table: "an-cluster-03:shared"
| |
| UUID: 20bafdb0-1f86-f424-405b-9bf608c0c486
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| mkdir /shared
| |
| mount /dev/an-c03n01_vg0/shared /shared
| |
| df -h
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Filesystem Size Used Avail Use% Mounted on
| |
| /dev/vda3 18G 5.6G 12G 32% /
| |
| devtmpfs 932M 0 932M 0% /dev
| |
| tmpfs 937M 61M 877M 7% /dev/shm
| |
| tmpfs 937M 1.9M 935M 1% /run
| |
| tmpfs 937M 0 937M 0% /sys/fs/cgroup
| |
| /dev/loop0 4.4G 4.4G 0 100% /mnt/dvd
| |
| /dev/vda1 484M 83M 401M 18% /boot
| |
| /dev/mapper/an--c03n01_vg0-shared 10G 259M 9.8G 3% /shared
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Filesystem Size Used Avail Use% Mounted on
| |
| /dev/vda3 18G 5.6G 12G 32% /
| |
| devtmpfs 932M 0 932M 0% /dev
| |
| tmpfs 937M 76M 862M 9% /dev/shm
| |
| tmpfs 937M 2.0M 935M 1% /run
| |
| tmpfs 937M 0 937M 0% /sys/fs/cgroup
| |
| /dev/loop0 4.4G 4.4G 0 100% /mnt/dvd
| |
| /dev/vda1 484M 83M 401M 18% /boot
| |
| /dev/mapper/an--c03n01_vg0-shared 10G 259M 9.8G 3% /shared
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Shut down <span class="code">gfs2</span>, <span class="code">clvmd</span> and <span class="code">drbd</span> now.
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| umount /shared/
| |
| systemctl stop clvmd.service
| |
| drbdadm down r0
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| umount /shared/
| |
| systemctl stop clvmd.service
| |
| drbdadm down r0
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Done.
| |
| | |
| = Add Storage to Pacemaker =
| |
| | |
| == Configure Dual-Primary DRBD ==
| |
| | |
| Set up DRBD as a dual-primary resource.
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs cluster cib drbd_cfg
| |
| pcs -f drbd_cfg resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s
| |
| pcs -f drbd_cfg resource master drbd_r0_Clone drbd_r0 master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
| |
| pcs cluster cib-push drbd_cfg
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| CIB updated
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| Give it a couple of minutes to promote both nodes to <span class="code">Master</span>. Initially, it will appear as <span class="code">Master</span> on one node only.
| |
| | |
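| If you want to watch the promotion happen (optional), something like this works;
| <syntaxhighlight lang="bash">
| watch -n2 pcs status
| </syntaxhighlight>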
| Once updated, you should see this:
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster name: an-cluster-03
| |
| Last updated: Sun Jan 26 20:26:33 2014
| |
| Last change: Sun Jan 26 20:23:23 2014 via cibadmin on an-c03n01.alteeve.ca
| |
| Stack: corosync
| |
| Current DC: an-c03n02.alteeve.ca (2) - partition with quorum
| |
| Version: 1.1.10-19.el7-368c726
| |
| 2 Nodes configured
| |
| 4 Resources configured
| |
| | |
| | |
| Online: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| Full list of resources:
| |
| | |
| fence_n01_virsh (stonith:fence_virsh): Started an-c03n01.alteeve.ca
| |
| fence_n02_virsh (stonith:fence_virsh): Started an-c03n02.alteeve.ca
| |
| Master/Slave Set: drbd_r0_Clone [drbd_r0]
| |
| Masters: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| PCSD Status:
| |
| an-c03n01.alteeve.ca:
| |
| an-c03n01.alteeve.ca: Online
| |
| an-c03n02.alteeve.ca:
| |
| an-c03n02.alteeve.ca: Online
| |
| | |
| Daemon Status:
| |
| corosync: active/disabled
| |
| pacemaker: active/disabled
| |
| pcsd: active/enabled
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster name: an-cluster-03
| |
| Last updated: Sun Jan 26 20:26:58 2014
| |
| Last change: Sun Jan 26 20:23:23 2014 via cibadmin on an-c03n01.alteeve.ca
| |
| Stack: corosync
| |
| Current DC: an-c03n02.alteeve.ca (2) - partition with quorum
| |
| Version: 1.1.10-19.el7-368c726
| |
| 2 Nodes configured
| |
| 4 Resources configured
| |
| | |
| | |
| Online: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| Full list of resources:
| |
| | |
| fence_n01_virsh (stonith:fence_virsh): Started an-c03n01.alteeve.ca
| |
| fence_n02_virsh (stonith:fence_virsh): Started an-c03n02.alteeve.ca
| |
| Master/Slave Set: drbd_r0_Clone [drbd_r0]
| |
| Masters: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| PCSD Status:
| |
| an-c03n01.alteeve.ca:
| |
| an-c03n01.alteeve.ca: Online
| |
| an-c03n02.alteeve.ca:
| |
| an-c03n02.alteeve.ca: Online
| |
| | |
| Daemon Status:
| |
| corosync: active/disabled
| |
| pacemaker: active/disabled
| |
| pcsd: active/enabled
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| == Configure DLM ==
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs cluster cib dlm_cfg
| |
| pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s
| |
| pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1
| |
| pcs cluster cib-push dlm_cfg
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| CIB updated
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster name: an-cluster-03
| |
| Last updated: Sun Jan 26 20:34:36 2014
| |
| Last change: Sun Jan 26 20:33:31 2014 via cibadmin on an-c03n01.alteeve.ca
| |
| Stack: corosync
| |
| Current DC: an-c03n02.alteeve.ca (2) - partition with quorum
| |
| Version: 1.1.10-19.el7-368c726
| |
| 2 Nodes configured
| |
| 6 Resources configured
| |
| | |
| | |
| Online: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| Full list of resources:
| |
| | |
| fence_n01_virsh (stonith:fence_virsh): Started an-c03n01.alteeve.ca
| |
| fence_n02_virsh (stonith:fence_virsh): Started an-c03n02.alteeve.ca
| |
| Master/Slave Set: drbd_r0_Clone [drbd_r0]
| |
| Masters: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| Clone Set: dlm-clone [dlm]
| |
| Started: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| PCSD Status:
| |
| an-c03n01.alteeve.ca:
| |
| an-c03n01.alteeve.ca: Online
| |
| an-c03n02.alteeve.ca:
| |
| an-c03n02.alteeve.ca: Online
| |
| | |
| Daemon Status:
| |
| corosync: active/disabled
| |
| pacemaker: active/disabled
| |
| pcsd: active/enabled
| |
| </syntaxhighlight>
| |
| |}
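| As an optional check, not part of the original steps, <span class="code">dlm_tool</span> can confirm that <span class="code">dlm_controld</span> is running and has joined the cluster;
| <syntaxhighlight lang="bash">
| dlm_tool status
| </syntaxhighlight>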
| |
| | |
| == Configure Cluster LVM ==
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs cluster cib clvmd_cfg
| |
| pcs -f clvmd_cfg resource create clvmd lsb:clvmd params daemon_timeout=30s op monitor interval=60s
| |
| pcs -f clvmd_cfg resource clone clvmd clone-max=2 clone-node-max=1
| |
| pcs -f clvmd_cfg constraint colocation add dlm-clone clvmd-clone INFINITY
| |
| pcs -f clvmd_cfg constraint order start dlm then start clvmd-clone
| |
| pcs cluster cib-push clvmd_cfg</syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| CIB updated
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs status
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Cluster name: an-cluster-03
| |
| Last updated: Mon Jan 27 19:00:33 2014
| |
| Last change: Mon Jan 27 19:00:19 2014 via crm_resource on an-c03n01.alteeve.ca
| |
| Stack: corosync
| |
| Current DC: an-c03n01.alteeve.ca (1) - partition with quorum
| |
| Version: 1.1.10-19.el7-368c726
| |
| 2 Nodes configured
| |
| 8 Resources configured
| |
| | |
| | |
| Online: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| Full list of resources:
| |
| | |
| fence_n01_virsh (stonith:fence_virsh): Started an-c03n01.alteeve.ca
| |
| fence_n02_virsh (stonith:fence_virsh): Started an-c03n02.alteeve.ca
| |
| Master/Slave Set: drbd_r0_Clone [drbd_r0]
| |
| Masters: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| Clone Set: dlm-clone [dlm]
| |
| Started: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| Clone Set: clvmd-clone [clvmd]
| |
| Started: [ an-c03n01.alteeve.ca an-c03n02.alteeve.ca ]
| |
| | |
| PCSD Status:
| |
| an-c03n01.alteeve.ca:
| |
| an-c03n01.alteeve.ca: Online
| |
| an-c03n02.alteeve.ca:
| |
| an-c03n02.alteeve.ca: Online
| |
| | |
| Daemon Status:
| |
| corosync: active/disabled
| |
| pacemaker: active/disabled
| |
| pcsd: active/enabled
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| == Configure the /shared GFS2 Partition ==
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs cluster cib fs_cfg
| |
| pcs -f fs_cfg resource create sharedFS Filesystem device="/dev/an-c03n01_vg0/shared" directory="/shared" fstype="gfs2"
| |
| pcs -f fs_cfg resource clone sharedFS
| |
| pcs cluster cib-push fs_cfg
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| CIB updated
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| df -h
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Filesystem Size Used Avail Use% Mounted on
| |
| /dev/vda3 18G 5.6G 12G 32% /
| |
| devtmpfs 932M 0 932M 0% /dev
| |
| tmpfs 937M 61M 877M 7% /dev/shm
| |
| tmpfs 937M 2.2M 935M 1% /run
| |
| tmpfs 937M 0 937M 0% /sys/fs/cgroup
| |
| /dev/loop0 4.4G 4.4G 0 100% /mnt/dvd
| |
| /dev/vda1 484M 83M 401M 18% /boot
| |
| /dev/mapper/an--c03n01_vg0-shared 10G 259M 9.8G 3% /shared
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| df -h
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Filesystem Size Used Avail Use% Mounted on
| |
| /dev/vda3 18G 5.6G 12G 32% /
| |
| devtmpfs 932M 0 932M 0% /dev
| |
| tmpfs 937M 76M 862M 9% /dev/shm
| |
| tmpfs 937M 2.6M 935M 1% /run
| |
| tmpfs 937M 0 937M 0% /sys/fs/cgroup
| |
| /dev/loop0 4.4G 4.4G 0 100% /mnt/dvd
| |
| /dev/vda1 484M 83M 401M 18% /boot
| |
| /dev/mapper/an--c03n01_vg0-shared 10G 259M 9.8G 3% /shared
| |
| </syntaxhighlight>
| |
| |}
| |
| | |
| == Configuring Constraints ==
| |
| | |
| {|class="wikitable"
| |
| !<span class="code">an-c03n01</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs cluster cib cst_cfg
| |
| pcs -f cst_cfg constraint order start dlm then promote drbd_r0_Clone
| |
| pcs -f cst_cfg constraint order promote drbd_r0_Clone then start clvmd-clone
| |
| pcs -f cst_cfg constraint order start clvmd-clone then start sharedFS-clone
| |
| pcs cluster cib-push cst_cfg
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| CIB updated
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="bash">
| |
| pcs constraint show
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Location Constraints:
| |
| Ordering Constraints:
| |
| start dlm then promote drbd_r0_Clone
| |
| promote drbd_r0_Clone then start clvmd-clone
| |
| start clvmd-clone then start sharedFS-clone
| |
| Colocation Constraints:
| |
| </syntaxhighlight>
| |
| |-
| |
| !<span class="code">an-c03n02</span>
| |
| |style="white-space: nowrap;"|<syntaxhighlight lang="bash">
| |
| pcs constraint show
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| Location Constraints:
| |
| Ordering Constraints:
| |
| start dlm then promote drbd_r0_Clone
| |
| promote drbd_r0_Clone then start clvmd-clone
| |
| start clvmd-clone then start sharedFS-clone
| |
| Colocation Constraints:
| |
| </syntaxhighlight>
| |
| |}
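| With the ordering constraints in place, a reasonable optional end-to-end check is to stop and restart the whole cluster and confirm that DRBD, DLM, clvmd and the GFS2 mount all come back in order. This is a sketch, not part of the original steps;
| <syntaxhighlight lang="bash">
| pcs cluster stop --all
| pcs cluster start --all
| # Give the resources a minute to start, then review.
| pcs status
| </syntaxhighlight>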
| |
| | |
| = Odds and Sods =
| |
| | |
| This is a section for random notes. The stuff here will be integrated into the finished tutorial or removed.
| |
| | |
| == Determine Multicast Address ==
| |
| | |
| Useful if you need to ensure that your switch has persistent multicast addresses set.
| |
| | |
| <syntaxhighlight lang="bash">
| |
| corosync-cmapctl | grep mcastaddr
| |
| </syntaxhighlight>
| |
| <syntaxhighlight lang="text">
| |
| totem.interface.0.mcastaddr (str) = 239.192.122.199
| |
| </syntaxhighlight>
| |
| | |
| | |
| |
| | |
| = Notes =
| |
| | |
| * [http://blog.clusterlabs.org/blog/2013/pacemaker-logging/ Pacemaker Logging]
| |
| | |
| = Thanks =
| |
| | |
| This list will certainly grow as this tutorial progresses;
| |
| | |
| * [mailto:olivier.allart@eyecon.com.au Olivier Allart, RCHE] for doing a lot of the heavy lifting on the <span class="code">[[fencing_topology]]</span> configuration.
| |
| | |
| {{footer}}
| |