2-Node Red Hat KVM Cluster Tutorial - Archive

{{howto_header}}


{{note|1=This is the second edition of the original [[Red Hat Cluster Service 2 Tutorial]]. This version is updated to use the Red Hat Cluster Suite, Stable version 3. It replaces [[Xen]] in favour of [[KVM]] to stay in-line with [[Red Hat]]'s supported configuration. It also uses [[corosync]], replacing [[openais]], as the core cluster communication stack.}}
{{warning|1=This tutorial is officially deprecated. It has been replaced with [[AN!Cluster Tutorial 2]]. Please do not follow this tutorial any more.}}


This paper has one goal:
= The Task Ahead =


Before we start, let's take a few minutes to discuss clustering and its complexities.


== Technologies We Will Use ==
* ''Distributed Replicated Block Device'' ([[DRBD]]); Keeps shared data synchronized across cluster nodes.
* ''KVM''; [[Hypervisor]] that controls and supports virtual machines.
== A Note on Hardware ==
In this tutorial, I will make reference to specific hardware components and devices. I do this to share what devices and equipment I use, but I do not endorse any of the products named in this tutorial. I am in no way affiliated with any hardware vendor, nor do I receive any compensation or gifts from any company.


== A Note on Patience ==


When someone wants to become a pilot, they can't jump into a plane and try to take off. It's not that flying is inherently hard, but it requires a foundation of understanding. Clustering is the same in this regard; there are many different pieces that have to work together just to get off the ground.
 
You '''must''' have patience.
 
Like a pilot on their first flight, seeing a cluster come to life is a fantastic experience. Don't rush it! Do your homework and you'll be on your way before you know it.


Coming back to earth:


Many technologies can be learned by creating a very simple base and then building on it. The classic "Hello, World!" script created when first learning a programming language is an example of this. Unfortunately, there is no real analogue to this in clustering. Even the most basic cluster requires several pieces to be in place and working together. If you try to rush by ignoring pieces you think are not important, you will almost certainly waste time. A good example is setting aside [[fencing]], thinking that your test cluster's data isn't important. The cluster software has no concept of "test". It treats everything as critical all the time and ''will'' shut down if anything goes wrong.
== Prerequisites ==


It is assumed that you are familiar with Linux systems administration, specifically [[Red Hat]] [[Enterprise Linux]] and its derivatives. You will need to have somewhat advanced networking experience as well. You should be comfortable working in a terminal (directly or over <span class="code">[[ssh]]</span>). Familiarity with [[XML]] will help, but is not strictly required as its use here is pretty self-evident.


If you feel a little out of depth at times, don't hesitate to set this tutorial aside. Browse over to the components you feel the need to study more, then return and continue on. Finally, and perhaps most importantly, you '''must''' have patience! If you have a manager asking you to "go live" with a cluster in a month, tell him or her that it simply '''won't happen'''. If you rush, you will skip important points and '''you will fail'''.


Patience is vastly more important than any pre-existing skill.
This tutorial will focus on High Availability clustering, often shortened to simply '''HA''' and not to be confused with the [[Linux-HA]] "heartbeat" cluster suite, which we will not be using here. The cluster will provide a shared file system and will provide for the high availability of [[KVM]]-based virtual servers. The goal will be to have the virtual servers live-migrate during planned node outages and automatically restart on a surviving node when the original host node fails.


Below is a ''very'' brief overview:


High Availability clusters like ours have two main parts: cluster management and resource management.


The cluster itself is responsible for maintaining the cluster nodes in a group. This group is part of a "Closed Process Group", or [[CPG]]. When a node fails, the cluster manager must detect the failure, reliably eject the node from the cluster using fencing and then reform the CPG. Each time the cluster changes, or "re-forms", the resource manager is called. The resource manager checks to see how the cluster changed, consults its configuration and determines what to do, if anything.


All of this will be discussed in detail a little later on. For now, it's sufficient to have in mind these two major roles and understand that they are somewhat independent entities.
== Platform ==


This tutorial was written using [[RHEL]] version 6.2, [[x86_64]] architecture. The KVM hypervisor will not run on [[i686]]. No testing was done on other [[EL6]] derivatives. That said, there is no reason to believe that this tutorial will not apply to any variant of EL6. As much as possible, the language will be distro-agnostic.


== A Word On Complexity ==
Introducing the <span class="code">Fabimer Principle</span>:


Clustering is not inherently hard, but it is inherently complex. Consider:


* Any given program has <span class="code">N</span> bugs.
* When you look at the configuration file, it is quite short.


Clustering isn't like most applications or technologies. Most of us learn by taking something, such as a configuration file, and tweaking it to see what happens. I tried that with clustering and learned only what it was like to bang my head against the wall.


* Understanding the parts and how they work together is critical.


You will find that the discussion on the components of clustering, and how those components and concepts interact, will be much longer than the initial configuration. It is true that we could talk very briefly about the actual syntax, but it would be a disservice. Please don't rush through the next section, or worse, skip it and go right to the configuration. You will waste far more time than you will save.


* Clustering is easy, but it has a complex web of inter-connectivity. You must grasp this network if you want to be an effective cluster administrator!
== Component; cman ==


The <span class="code">cman</span> portion of the cluster is the '''c'''luster '''man'''ager. In the 3.0 series used in [[EL6]], <span class="code">cman</span> acts mainly as a [[quorum]] provider. That is, it adds up the votes from the cluster members and decides if there is a simple majority. If there is, the cluster is "quorate" and is allowed to provide cluster services. Newer versions of the Red Hat Cluster Suite found in [[Fedora]] will use a new quorum provider and <span class="code">cman</span> will be removed entirely.


Until it is removed, the <span class="code">cman</span> service will be used to start and stop all of the daemons needed to make the cluster operate.
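
As a quick illustration, once a node is configured you can start the cluster stack and check membership and quorum from any node. This is only a sketch of the common commands; the output will of course depend on your own cluster:

<source lang="bash">
/etc/init.d/cman start
cman_tool status    # shows vote counts and whether the cluster is quorate
cman_tool nodes     # lists the members cman currently sees
</source>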


== Component; corosync ==
=== A Little History ===


There were significant changes between the old [[RHCS]] version 2 and version 3, available on [[EL6]], which we are using.


In the RHCS version 2, there was a component called <span class="code">openais</span> which provided <span class="code">totem</span>. The OpenAIS project was designed to be the heart of the cluster and was based around the [http://www.saforum.org/ Service Availability Forum]'s [http://www.saforum.org/Application-Interface-Specification~217404~16627.htm Application Interface Specification]. AIS is an open [[API]] designed to provide inter-operable high availability services.
In 2008, it was decided that the AIS specification was overkill for most clustered applications being developed in the open source community. At that point, OpenAIS was split into two projects: Corosync and OpenAIS. The former, Corosync, provides totem, cluster membership, messaging, and basic APIs for use by clustered applications, while the OpenAIS project became an optional add-on to corosync for users who want the full AIS API.


You will see a lot of references to OpenAIS while searching the web for information on clustering. Understanding its evolution will hopefully help you avoid confusion.


== Concept; quorum ==
[[Quorum]] is defined as the minimum set of hosts required in order to provide clustered services and is used to prevent [[split-brain]] situations.


The quorum algorithm used by the RHCS cluster is called "simple majority quorum", which means that more than half of the hosts must be online and communicating in order to provide service. While simple majority quorum is a very common quorum algorithm, other quorum algorithms exist ([[grid quorum]], [[YKD Dynamic Linear Voting]], etc.).


The idea behind quorum is that, when a cluster splits into two or more partitions, whichever group of machines has quorum can safely start clustered services knowing that no other lost nodes will try to do the same.
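
As a minimal sketch of the arithmetic, the simple majority threshold is the integer floor of half the total votes, plus one:

<source lang="bash">
# With, say, 3 nodes at 1 vote each:
total_votes=3
echo $(( total_votes / 2 + 1 ))    # prints 2; a partition needs 2 or more votes to be quorate
</source>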
In the case of a two node cluster, as we will be building here, any failure results in a 50/50 split. If we enforced quorum in a two-node cluster, there would never be high availability because any failure would cause both nodes to withdraw. The risk with this exception is that we now place the entire safety of the cluster on [[fencing]], a concept we will cover in a second. Fencing is a second line of defense and something we are loath to rely on alone.


Even in a two-node cluster though, proper quorum can be maintained by using a quorum disk, called a [[qdisk]]. Unfortunately, <span class="code">qdisk</span> on a [[DRBD]] resource comes with its own problems, so we will not be able to use it here.


== Concept; Virtual Synchrony ==


Many cluster operations, like distributed locking and so on, have to occur in the same order across all nodes. This concept is called "virtual synchrony".


This is provided by <span class="code">corosync</span> using "closed process groups", <span class="code">[[CPG]]</span>. A closed process group is simply a private group of processes in a cluster. Within this closed group, all messages between members are ordered. Delivery, however, is not guaranteed. If a member misses messages, it is up to the member's application to decide what action to take.
* Both members are able to start <span class="code">service:foo</span>.
* Both want to start it, but need a lock from [[DLM]] to do so.
** The <span class="code">an-node01</span> member has its totem token, and sends its request for the lock.
** DLM issues a lock for that service to <span class="code">an-node01</span>.
** The <span class="code">an-node02</span> member requests a lock for the same service.
** The <span class="code">an-node01</span> sends a request for the same lock, but DLM sees that a lock is pending and rejects the request.
** The <span class="code">an-node02</span> member finishes altering the file system, announces the change over CPG and releases the lock.
** The <span class="code">an-node01</span> member updates its view of the filesystem, requests a lock, receives it and proceeds to update the filesystem.
** It completes the changes, announces the changes over CPG and releases the lock.


Messages can only be sent to the members of the CPG while the node has a totem token from corosync.


== Concept; Fencing ==


{{warning|1=DO NOT BUILD A CLUSTER WITHOUT PROPER, WORKING AND TESTED FENCING.}}


[[Image:fence_meme.jpg|right|300px|thumb|Laugh, but this is a weekly conversation.]]


Fencing is an '''absolutely critical''' part of clustering. Without '''fully''' working fence devices, '''''your cluster will fail'''''.


Sorry, I promise that this will be the only time that I speak so strongly. Fencing really is critical, and explaining the need for fencing is nearly a weekly event.
When a node stops responding, an internal timeout and counter start ticking away. During this time, no [[DLM]] locks are allowed to be issued. Anything using DLM, including <span class="code">rgmanager</span>, <span class="code">clvmd</span> and <span class="code">gfs2</span>, is effectively hung. The hung node is detected using a totem token timeout. That is, if a token is not received from a node within a period of time, it is considered lost and a new token is sent. After a certain number of lost tokens, the cluster declares the node dead. The remaining nodes reconfigure into a new cluster and, if they have quorum (or if quorum is ignored), a fence call against the silent node is made.


The fence daemon will look at the cluster configuration and get the fence devices configured for the dead node. Then, one at a time and in the order that they appear in the configuration, the fence daemon will call those fence devices, via their fence agents, passing to the fence agent any configured arguments like username, password, port number and so on. If the first fence agent returns a failure, the next fence agent will be called. If the second fails, the third will be called, then the fourth and so on. Once the last (or perhaps only) fence device fails, the fence daemon will retry again, starting back at the start of the list. It will do this indefinitely until one of the fence devices succeeds.
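
It is well worth testing your fence agents by hand before trusting the cluster to call them. The addresses and credentials below are examples only; substitute your own:

<source lang="bash">
# Query a node's IPMI interface directly with its fence agent.
fence_ipmilan -a 10.20.1.2 -l admin -p secret -o status

# Ask the cluster to fence a node using whatever is configured for it,
# exactly as the fence daemon would do after a failure.
fence_node an-node02
</source>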


Here's the flow, in point form:
== Component; totem ==


The <span class="code">[[totem]]</span> protocol defines message passing within the cluster and it is used by <span class="code">corosync</span>. A token is passed around all the nodes in the cluster, and nodes can only send messages while they have the token. A node will keep its messages in memory until it gets the token back with no "not ack" messages. This way, if a node missed a message, it can request it be resent when it gets its token. If a node isn't up, it will simply miss the messages.


The <span class="code">totem</span> protocol supports something called '<span class="code">rrp</span>', '''R'''edundant '''R'''ing '''P'''rotocol. Through <span class="code">rrp</span>, you can add a second backup ring on a separate network to take over in the event of a failure in the first ring. In RHCS, these rings are known as "<span class="code">ring 0</span>" and "<span class="code">ring 1</span>". The RRP is being re-introduced in RHCS version 3. It is experimental and should only be used with plenty of testing.


== Component; rgmanager ==


When the cluster membership changes, <span class="code">corosync</span> tells the <span class="code">rgmanager</span> that it needs to recheck its services. It will examine what changed and then will start, stop, migrate or recover cluster resources as needed.


Within <span class="code">rgmanager</span>, one or more ''resources'' are brought together as a ''service''. This service is then optionally assigned to a ''failover domain'', a subset of nodes that can have preferential ordering.
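
To make this a little more concrete, here is a sketch of the day-to-day commands used to watch and control <span class="code">rgmanager</span> services; the service name is just an example:

<source lang="bash">
clustat                                  # show cluster members and service states
clusvcadm -e service:foo                 # enable (start) a service
clusvcadm -r service:foo -m an-node02    # relocate it to another member
clusvcadm -d service:foo                 # disable (stop) it
</source>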
{{note|1=<span class="code">qdisk</span> does not work reliably on a DRBD resource, so we will not be using it in this tutorial.}}


A Quorum disk, known as a <span class="code">qdisk</span>, is a small partition on [[SAN]] storage used to enhance quorum. It generally carries enough votes to allow even a single node to take quorum during a cluster partition. It does this by using configured heuristics, that is custom tests, to decide which node or partition is best suited for providing clustered services during a cluster reconfiguration. These heuristics can be simple, like testing which partition has access to a given router, or they can be as complex as the administrator wishes using custom scripts.


Though we won't be using it here, it is well worth knowing about when you move to a cluster with [[SAN]] storage.
* All three DRBD resources are managed by clustered LVM.
* The GFS2-formatted [[LV]] is mounted on <span class="code">/shared</span> on both nodes.
* Each [[VM]] gets its own [[LV]].
* All three DRBD resources sync over the [[Storage Network]], which uses the bonded <span class="code">bond1</span> (backed by <span class="code">eth1</span> and <span class="code">eth4</span>).


Don't worry if this seems illogical at this stage. The main thing to look at is the <span class="code">drbdX</span> devices and how they each tie back to a corresponding <span class="code">sdaY</span> device on either node.
|    [_an01-vg0_]                              |  |              |  |                              [_an01-vg0_]    |
|      |  ________________________    _____  |  |              |  | .......    ________________________  |      |
|      +--[_/dev/an01-vg0/vm0001_1_]---[_vm1_] |  |              |  | :.vm1.:---[_/dev/an01-vg0/vm0001_1_]--+      |
|      |  ________________________    _____  |  |              |  | .......    ________________________  |      |
|      \--[_/dev/an01-vg0/vm0002_1_]---[_vm2_] |  |              |  | :.vm2.:---[_/dev/an01-vg0/vm0002_1_]--/      |
|            _______________    ____________  |  |              |  |  ____________    _______________            |
|        /--[_Clustered_LVM_]--[_/dev/drbd0_]--/  |              |  \--[_/dev/drbd0_]--[_Clustered_LVM_]--\        |
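
To give a rough idea of how the storage stack in the diagram above is built, here is a sketch of turning a DRBD resource into a clustered volume group. The device and volume group names follow the diagram, but the logical volume name and size are arbitrary examples:

<source lang="bash">
# Run once, on one node, after /dev/drbd0 is Primary on both nodes.
pvcreate /dev/drbd0
vgcreate -c y an01-vg0 /dev/drbd0    # '-c y' marks the volume group as clustered
lvcreate -L 20G -n shared an01-vg0   # example LV that could later hold a GFS2 filesystem
</source>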
It works much like a standard filesystem, with user-land tools like <span class="code">mkfs.gfs2</span>, <span class="code">fsck.gfs2</span> and so on. The major difference is that it and <span class="code">clvmd</span> use the cluster's [[DLM|distributed locking mechanism]] provided by the <span class="code">dlm_controld</span> daemon. Once formatted, the GFS2-formatted partition can be mounted and used by any node in the cluster's [[CPG|closed process group]]. All nodes can then safely read from and write to the data on the partition simultaneously.
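
A minimal sketch of creating and mounting a GFS2 filesystem looks like this; the cluster name, LV path and journal count are assumptions that must match your own configuration:

<source lang="bash">
# '-t' is <cluster_name>:<fs_name>; '-j 2' creates one journal per node.
mkfs.gfs2 -p lock_dlm -t an-cluster:shared -j 2 /dev/an01-vg0/shared
mkdir -p /shared
mount /dev/an01-vg0/shared /shared
</source>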


{{note|1=GFS2 is '''only''' supported when run on top of Clustered LVM [[LV]]s. This is because, in certain failure states, <span class="code">gfs2_controld</span> will call <span class="code">dmsetup</span> to disconnect the GFS2 partition from its storage.}}


== Component; DLM ==
Two of the most popular open-source virtualization platforms available in the Linux world today are [[Xen]] and [[KVM]]. The former is maintained by [http://www.citrix.com/xenserver Citrix] and the latter by [http://www.redhat.com/solutions/virtualization/ Red Hat]. It would be difficult to say which is "better", as they're both very good. Xen can be argued to be more mature where KVM is the "official" solution supported by Red Hat in [[EL6]].


We will be using the KVM [[hypervisor]] within which our highly-available virtual machine guests will reside. With KVM, the host operating system runs directly on the bare hardware and the hypervisor runs as a module within the host's kernel. Contrast this against Xen, a bare-metal hypervisor where even the installed operating system is itself just another virtual machine.
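
As a small taste of what this enables, a running guest can be listed and pushed live to the peer node with <span class="code">virsh</span>; the guest and host names here are examples only:

<source lang="bash">
virsh list --all
virsh migrate --live vm0001 qemu+ssh://an-node02/system
</source>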


= Node Installation =
 
This section is going to be intentionally vague, as I don't want to influence too heavily what hardware you buy or how you install your operating systems. However, we need a baseline, a minimum system requirement of sorts. Also, I will refer fairly frequently to my setup, so I will share with you the details of what I bought. Please don't take this as an endorsement though... Every cluster will have its own needs, and you should plan and purchase for your particular needs.
 
In my case, my goal was to have a low-power consumption setup and I knew that I would never put my cluster into production as it's strictly a research and design cluster. As such, I can afford to be quite modest.
 
== Minimum Requirements ==


This will cover two sections:


* Node minimum requirements
* Infrastructure requirements


The '''nodes''' are the two separate servers that will, together, form the base of our cluster. The infrastructure covers the networking and the switched power bars, called '''[[PDU]]s'''.


=== Node Requirements ===


''General'';


As these nodes will host virtual machines, they will need sufficient [[RAM]] and must provide [http://en.wikipedia.org/wiki/AMD-V#AMD_virtualization_.28AMD-V.29 virtualization-enabled] [[CPU]]s. Most, though not all, modern processors support hardware virtualization extensions. Finally, you need to have sufficient network bandwidth across two independent links to support the maximum burst storage traffic plus enough headroom to ensure that cluster traffic is never interrupted.
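
You can confirm that a CPU exposes these extensions before committing to hardware; any output at all means the flags are present:

<source lang="bash">
# 'vmx' is Intel VT-x, 'svm' is AMD-V.
grep -E 'vmx|svm' /proc/cpuinfo
</source>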


''Network'';


This tutorial will use three independent networks, each using two physical interfaces in a bonded configuration. These will route through two separate managed switches for high-availability networking. Each network will be dedicated to a given traffic type. This requires six interfaces and, with a separate [[IPMI]] interface, consumes a staggering seven ports per node.  


Understanding that this may not be feasible, you can drop this to just two connections in a single bonded interface. If you decide to do this, you will need to configure [[QoS]] to ensure that [[totem]] [[multicast]] traffic gets highest priority as a delay of less than one second can cause the cluster to break. You also need to test sustained, heavy disk traffic to ensure that it doesn't cause problems. In particular, run storage tests from a virtual machine and then live-migrate that machine to create a "worst case" network load. If that succeeds, you are probably safe. All of this is outside of this tutorial's scope though.
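
For reference, a bonded interface of the kind described here is defined with a small <span class="code">ifcfg</span> file. This is only a sketch, and the device name, IP address and options shown are assumptions to adapt to your own network:

<source lang="bash">
cat /etc/sysconfig/network-scripts/ifcfg-bond0
</source>
<source lang="text">
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.20.0.1
NETMASK=255.255.0.0
# mode=1 is Active/Passive; miimon polls link state every 100ms.
BONDING_OPTS="mode=1 miimon=100"
</source>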


''Power'';


In production, you will want to use servers which have redundant power supplies and ensure that either side of the power connects to two separate power sources.


''Out-of-Band Management'';


As we will discuss later, the ideal method of fencing a node is to use [[IPMI]] or one of the vendor-specific variants like HP's [[iLO]], Dell's [[DRAC]] or IBM's [[RSA]]. This allows another node in the cluster to force the host node to power off, regardless of the state of the operating system. Critically, it can confirm to the caller once the node has been shut down, which allows for the cluster to safely and confidently recover lost services.


The two nodes used to create this tutorial have the following hardware (again, these will never see production use, so I could afford to go low);
* 1x Tyan [http://www.tyan.com/product_SKU_spec.aspx?ProductType=MB&pid=698&SKU=600000217 S5510GM3NR] Mainboard (note that the '-LE' has no IPMI)
* 1x Intel [http://ark.intel.com/products/52269?wapkw=%28E3-1220%29 Xeon E3-1220] CPU
* 2x Kingston [http://www.ec.kingston.com/ecom/configurator_new/partsinfo.asp?root=&LinkBack=&ktcpartno=KVR1333D3E9S/4GHB KVR1333D3E9S/4GHB] DDR3 ECC DIMMs
* 3x Intel [http://www.intel.com/products/desktop/adapters/gigabit-ct/gigabit-ct-overview.htm Gigabit CT] PCIe Ethernet adapters


=== Infrastructure Requirements ===


''Network'';


You will need two separate switches in order to provide High Availability. These do not need to be stacked or even managed, but you do need to consider their actual capabilities and disregard the stated capacity. What I mean by this, in essence, is that not all gigabit equipment is equal. You will need to calculate how much bandwidth you need (in raw data throughput and as packets-per-second) and confirm that the switch can sustain that load. Most switches will rate these two values as their switching fabric capacity, so be sure to look closely at the specifications.


Another thing to consider is whether you wish to run at an [[MTU]] higher than 1500 [[bytes]] per packet. This is generally referred to in specification sheets as "jumbo frame" support. However, many lesser companies will advertise support for jumbo frames, but they only support up to 4 [[KiB]]. Most professional networks looking to implement large MTU sizes aim for 9 [[KiB]] frame sizes, so be sure to look at the actual size of the largest supported jumbo frame before purchasing network equipment.
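
One simple way to verify jumbo frame support end-to-end is to raise the MTU and send an unfragmentable ping of nearly 9000 bytes; the interface and target address below are examples:

<source lang="bash">
ip link set dev eth1 mtu 9000
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000 bytes on the wire.
ping -M do -s 8972 -c 3 10.10.0.2
</source>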


''Power'';


As we will discuss later, we need a backup fence device. This will be implemented using a specific brand and model of switched power distribution unit, called a [[PDU]], which is effectively a power bar whose outlets can be independently turned on and off over the network. This tutorial uses an [http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7900 APC AP7900] PDU, but many others are available. Should you choose to use another make or model, you '''must''' first ensure that it has a supported [http://git.fedorahosted.org/git/?p=fence-agents.git;a=tree;f=fence/agents;hb=HEAD fence agent]. Ensuring this is an exercise for the reader.
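
If you do use a PDU with a supported agent, such as the <span class="code">fence_apc_snmp</span> agent used later in this tutorial, it is worth confirming that the agent can reach it before relying on it; the address and outlet number below are hypothetical:

<source lang="bash">
# Query the state of outlet 1 on the PDU.
fence_apc_snmp -a 10.20.2.1 -n 1 -o status
</source>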
 


In production environments, it is ideal to have each PDU backed by its own [[UPS]], and each UPS connected to a separate mains electrical circuit. This way, the failure of a given PDU, UPS or mains circuit will not cause an interruption to the cluster. Do be sure to plan your power infrastructure to supply enough power to drive the entire cluster at full load in a failed state. That is, more plainly, don't divide the total load in two when planning your infrastructure. You must always plan for a failed state!


The hardware used in this tutorial is:
* 2x D-Link [http://dlink.ca/products/?pid=DGS-3100-24 DGS-3100-24] 24-port Gbit switches supporting 10 [[KiB]] jumbo frames.
* 1x APC [http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7900 AP7900] switched PDU (supported by the <span class="code">[http://git.fedorahosted.org/git/?p=fence-agents.git;a=tree;f=fence/agents/apc_snmp;hb=HEAD fence_apc_snmp]</span> fence agent).


''Two Notes'';


# The D-Link switch I use is being phased out and is being replaced by the [http://dlink.ca/products/?pid=DGS-3120-24TC DGS-3120-24TC] models. The DGS-3120 models are much improved over the DGS-3100 series and can be safely used in stacked configuration (thus enabling the use of [[VLAN]] [[LAG]]s). The DGS-3100 would interrupt traffic when a switch in the stack recovered, which would partition the cluster. This forced me to unstack the switches in this tutorial.
# Given my budget, I could not afford to purchase redundant power supplies for use in this tutorial. As such, my test cluster has the power as a single point of failure. For learning, this is fine, but it is strongly ill-advised in production. I do show an example configuration of redundant [[PSU]] use spread across separate PDUs from a production cluster.


== Pre-Installation Planning ==


Before you assemble your servers, it is highly advised to first record the [[MAC]] addresses of the NICs. I always write a little file called <span class="code"><node>-nics.txt</span> with each MAC address matched to the device name I plan to assign to it.


<source lang="bash">
vim ~/an-node01-nics.txt
</source>
<source lang="text">
eth0 00:E0:81:C7:EC:49 # Back-Channel Network - Link 1
eth1 00:E0:81:C7:EC:48 # Storage Network - Link 1
eth2 00:E0:81:C7:EC:47 # Internet-Facing Network - Link 1
eth3 00:1B:21:9D:59:FC # Back-Channel Network - Link 2
eth4 00:1B:21:BF:70:02 # Storage Network - Link 2
eth5 00:1B:21:BF:6F:FE # Internet-Facing Network - Link 2
</source>
 
How, or even if, you record this is entirely up to you.
 
== OS Installation ==
 
{{warning|1=[[EL6]].1 shipped with a version of <span class="code">[[corosync]]</span> that had a token retransmit bug. On slower systems, there would be a form of race condition which would cause <span class="code">[[totem]]</span> tokens to be retransmitted and cause significant performance problems. This has been resolved in [[EL6]].2, so please be sure to upgrade.}}
 
Beyond being based on [[RHEL]] 6, there are no requirements for how the operating system is installed. This tutorial is written using "minimal" installs, and as such, installation instructions will be provided that will install all needed packages if they aren't already installed on your nodes.
 
A few notes about the installation used for this tutorial;
* [[RHCS]] stable 3 supports <span class="code">[[selinux]]</span>, but it is disabled in this tutorial.
* Both <span class="code">[[iptables]]</span> and <span class="code">[[ip6tables]]</span> firewalls are disabled.
 
Obviously, this significantly reduces the security of your nodes. For learning, which is the goal here, this helps keep the focus on clustering and simplifies debugging when things go wrong. In production clusters though, these steps are ill advised. It is strongly suggested that you first enable the firewall and then, when that is working, enable <span class="code">selinux</span>. Leaving <span class="code">selinux</span> for last is intentional, as it generally takes the most work to get right.
 
=== Network Security ===
 
When building production clusters, you will want to consider two options with regard to network security.
 
First, the interfaces connected to an untrusted network, like the Internet, should not have an IP address, though the interfaces themselves will need to be up so that virtual machines can route through them to the outside world. Alternatively, anything inbound from the virtual machines or inbound from the untrusted network should be <span class="code">DROP</span>ed by the firewall.
 
Second, if you can not run the cluster communications or storage traffic on dedicated network connections over isolated subnets, you will need to configure the firewall to block everything except the ports needed by storage and cluster traffic. The default ports are below.


* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/Cluster_Administration/index.html#s1-iptables_firewall-CA RHEL 6 Cluster Configuration, Firewall Setup]
* [http://www.drbd.org/users-guide-8.3/s-prepare-network.html Linbit's DRBD, Firewall Configuration]


{|class="wikitable sortable"
!Component
!Protocol
!Port
!Note
|-
|<span class="code">[[dlm]]</span>
|[[TCP]]
|<span class="code">21064</span>
|
|-
|<span class="code">[[drbd]]</span>
|[[TCP]]
|<span class="code">7788</span>+
|Each [[DRBD]] resource will use an additional port, generally counting up (ie: <span class="code">r0</span> will use <span class="code">7788</span>, <span class="code">r1</span> will use <span class="code">7789</span>, <span class="code">r2</span> will use <span class="code">7790</span> and so on).
|-
|<span class="code">[[luci]]</span>
|[[TCP]]
|<span class="code">8084</span>
|Optional web-based configuration tool, not used in this tutorial.
|-
|<span class="code">[[modclusterd]]</span>
|[[TCP]]
|<span class="code">16851</span>
|
|-
|-
|<span class="code">an-node01</span>
|<span class="code">[[ricci]]</span>
|<span class="code">10.20.0.1</span> on <span class="code">vbr0</span> (<span class="code">bond0</span> slaved)
|[[TCP]]
|<span class="code">10.10.0.1</span> on <span class="code">bond1</span>
|<span class="code">11111</span>
|<span class="code">10.255.0.1</span> on <span class="code">vbr2</span> (<span class="code">bond2</span> slaved)
|Each [[DRBD]] resource will use an additional port, generally counting up (ie: <span class="code">r1</span> will use <span class="code">7790</span>, <span class="code">r2</span> will use <span class="code">7791</span> and so on).
|-
|-
|<span class="code">an-node02</span>
|<span class="code">[[totem]]</span>
|<span class="code">10.20.0.2</span> on <span class="code">vbr0</span> (<span class="code">bond0</span> slaved)
|[[UDP]]/[[multicast]]
|<span class="code">10.10.0.2</span> on <span class="code">bond1</span>
|<span class="code">5404</span>, <span class="code">5405</span>
|<span class="code">10.255.0.2</span> on <span class="code">vbr2</span> (<span class="code">bond2</span> slaved)
|Uses a multicast group for cluster communications
|}
|}


=== Creating Some Network Configuration Files ===
{{note|1=As of [[EL6]].2, you can now use [[unicast]] for totem communication instead of multicast. This is '''not''' advised, and should only be used for clusters of two or three nodes on networks where unresolvable [[multicast]] issues exist. If using [[gfs2]], as we do here, using unicast for totem is strongly discouraged.}}
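When you do keep the firewall running on a production cluster, the rules needed are short. The following is only a rough sketch of what opening these ports with <span class="code">iptables</span> might look like; the subnets (<span class="code">10.20.0.0/16</span> for the BCN, <span class="code">10.10.0.0/16</span> for the SN) and the range of DRBD ports are assumptions taken from this tutorial's layout, so adjust them to match your own networks and resources.

<source lang="bash">
# Cluster communication (totem); multicast UDP from the BCN.
iptables -A INPUT -s 10.20.0.0/16 -p udp --dport 5404:5405 -j ACCEPT
# Distributed lock manager (dlm).
iptables -A INPUT -s 10.20.0.0/16 -p tcp --dport 21064 -j ACCEPT
# ricci and modclusterd.
iptables -A INPUT -s 10.20.0.0/16 -p tcp --dport 11111 -j ACCEPT
iptables -A INPUT -s 10.20.0.0/16 -p tcp --dport 16851 -j ACCEPT
# DRBD; one port per resource, starting at 7788. This example opens three.
iptables -A INPUT -s 10.10.0.0/16 -p tcp --dport 7788:7790 -j ACCEPT

# Make the rules persistent across reboots.
service iptables save
</source>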


As mentioned above, we will disable <span class="code">selinux</span> and <span class="code">iptables</span>. This is to simplify the learning process; both should be re-enabled before the cluster goes into production.

To disable the firewall (note that I disable both <span class="code">iptables</span> and <span class="code">ip6tables</span>):


<source lang="bash">
chkconfig iptables off
chkconfig ip6tables off
/etc/init.d/iptables stop
/etc/init.d/ip6tables stop
</source>


To disable <span class="code">selinux</span>:


<source lang="bash">
cp /etc/selinux/config /etc/selinux/config.orig
vim /etc/selinux/config
diff -u /etc/selinux/config.orig /etc/selinux/config
</source>
<source lang="diff">
--- /etc/selinux/config.orig	2012-06-15 18:13:12.416646749 -0400
+++ /etc/selinux/config	2012-06-15 18:09:46.920938956 -0400
@@ -4,7 +4,7 @@
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
-SELINUX=enforcing
+SELINUX=disabled
 # SELINUXTYPE= can take one of these two values:
 #     targeted - Targeted processes are protected,
 #     mls - Multi Level Security protection.
</source>
You '''must''' reboot for the <span class="code">selinux</span> changes to take effect.
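If you want to confirm the change after rebooting, <span class="code">getenforce</span> reports the current state; this is just a quick sanity check.

<source lang="bash">
getenforce
</source>
<source lang="text">
Disabled
</source>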
= Network =
Before we begin, let's take a look at a block diagram of what we're going to build. This will help when trying to see what we'll be talking about.
<source lang="text">
                                                              ______________                                                       
                                                            [___Internet___]                                                       
  _____________________________________________________            |            _____________________________________________________
| [ an-node01 ]                                      |            |            |                                      [ an-node02 ] |
|                      ____________    ______________|        ____|____        |______________    ____________                      |
|                      |    vbr2    |--| bond2        |      | [ IFN ] |      |        bond2 |--|  vbr2    |                      |
|  _________________  | 10.255.0.1 |  | ______      |      _|_________|_      |      ______ |  | 10.255.0.2 |  ................... |
| | [ vm0001-dev ]  |  |____________|  || eth2 =--\  |    |  Switch 1  |    |  /--= eth2 ||  |____________|  :  [ vm0001-dev ] : |
| | [ Dev Server ]  |    | | : :      ||_____|    \--=-----|_____________|-----=--/    |_____||      | | : :    :  [ Dev Server ] : |
| |          ______|    | | : :      | ______    /--=-----|  Switch 2  |-----=--\    ______ |      | | : :    :.......          : |
| |          | eth0 =----/ | : :      || eth5 =--/  |    |_____________|    |  \--= eth5 ||      | | : :----= eth0 :          : |
| |          |_____||      | : :      ||_____|      |                        |      |_____||      | | :      ::.....:          : |
| |      10.254.0.1 |      | : :      |______________|                        |______________|      | | :      :                : |
| |_________________|      | : :        ______________|                        |______________        | | :      :.................: |
|                          | : :      | bond1        |        _________        |        bond1 |      | | :                          |
|  _________________      | : :      |  10.10.0.1  |      | [ SN  ] |      | 10.10.0.2    |      | | :      ................... |
| | [ vm0002-web ]  |      | : :      | ______      |      _|_________|_      |      ______ |      | | :      :  [ vm0002-web ] : |
| | [ Web Server ]  |      | : :      || eth1 =--\  |    |  Switch 1  |    |  /--= eth1 ||      | | :      :  [ Web Server ] : |
| |          ______|      | : :      ||_____|    \--=-----|_____________|-----=--/    |_____||      | | :      :.......          : |
| |          | eth0 =------/ : :      | ______    /--=-----|  Switch 2  |-----=--\    ______ |      | | :------= eth0 :          : |
| |          |_____||        : :      || eth4 =--/  |    |_____________|    |  \--= eth4 ||      | |        ::.....:          : |
| |      10.254.0.2 |        : :      ||_____|      |                        |      |_____||      | |        :                : |
| |_________________|        : :      |______________|                        |______________|      | |        :.................: |
|                            : :        ______________|                        |______________        | |                            |
| ...................        : :      | bond0        |        _________        |        bond0 |      | |        _________________  |
| : [ vm0003-db  ]  :        : :      |  10.20.0.1  |      | [ BCN ] |      | 10.20.0.2    |      | |        |  [ vm0003-db  ] | |
| : [ DB Server  ]  :        : :      | ______      |      _|_________|_      |      ______ |      | |        |  [ DB Server  ] | |
| :          .......:        : :      || eth0 =--\  |  /--|  Switch 1  |--\  |  /--= eth0 ||      | |        |______          | |
| :          : eth0 =--------: :      ||_____|    \--=--+--|_____________|--+--=--/    |_____||      | \--------= eth0 |          | |
| :          :.....::          :      | ______    /--=--+--|  Switch 2  |--+--=--\    ______ |      |          ||_____|          | |
| :                :          :      || eth3 =--/  |  |  |_____________|  |  |  \--= eth3 ||      |          | 10.254.0.3      | |
| :.................:          :      ||_____|      |  |    |      |    |  |      |_____||      |          |_________________| |
|                              :      |______________|  |    |      |    |  |______________|      |                              |
| ...................          :                      |  |    |      |    |  |                      |          _________________  |
| : [ vm0004-win ]  :          :                      |  |    |      |    |  |                      |          |  [ vm0004-win ] | |
| : [ MS Server  ]  :          :                      |  |    |      |    |  |                      |          |  [ MS Server  ] | |
| :          .......:          :                      |  |    |      |    |  |                      |          |______          | |
| :          : NIC0 =----------:                      |  |    |      |    |  |                      \----------= NIC0 |          | |
| :          :.....::                          ______|  |    |      |    |  |______                          ||_____|          | |
| :                :                  _____  | IPMI =--/    |      |    \--= IPMI |  _____                  | 10.254.0.4      | |
| :.................:                [_BMC_]--|_____||        |      |        ||_____|--[_BMC_]                |_________________| |
|                                                    |        |      |        |                                                    |
|                                ______ ______      |        |      |        |      ______ ______                                |
|                                | PSU1 | PSU2 |      |        |      |        |      | PSU2 | PSU1 |                                |
|________________________________|______|______|______|        |      |        |______|______|______|________________________________|
                                      || ||                ____|_    _|____                || ||                                     
                                      || ||              | PDU1 |  | PDU2 |              || ||                                     
                                      || ||              |______|  |______|              || ||                                     
                                      || ||                || ||    || ||                || ||                                     
                                      || \\===[ Power 1 ]===// ||    || \\===[ Power 1 ]===// ||                                     
                                      \\======[ Power 2 ]======||=====//                      ||                                     
                                                                \\=============[ Power 2 ]======//                                     
</source>
 
The cluster will use three separate <span class="code">/16</span> (<span class="code">255.255.0.0</span>) networks;


{{note|1=There are situations where it is not possible to add additional network cards, blades being a prime example. In these cases it will be up to the admin to decide how to proceed. If there is sufficient bandwidth, you can merge all networks, but it is advised in such cases to isolate IFN traffic from the SN/BCN traffic using [[VLAN]]s.}}
{|class="wikitable"
!Purpose
!Subnet
!Notes
|-
|Internet-Facing Network ([[IFN]])
|<span class="code">10.255.0.0/16</span>
|
* Each node will use <span class="code">10.255.0.x</span> where <span class="code">x</span> matches the node ID.<br />
* Virtual Machines in the cluster that need to be connected to the Internet will use <span class="code">192.168.1.0/24</span>. These IPs are intentionally separate from the two nodes' IFN bridge IPs. If you are particularly concerned about security, you can drop the bridges' IPs once the cluster is built and add a firewall rule to reject all traffic from the VMs.
|-
|Storage Network ([[SN]])
|<span class="code">10.10.0.0/16</span>
|
* Each node will use <span class="code">10.10.0.x</span> where <span class="code">x</span> matches the node ID.
|-
|Back-Channel Network ([[BCN]])
|<span class="code">10.20.0.0/16</span>
|
* Each node will use <span class="code">10.20.0.x</span> where <span class="code">x</span> matches the node ID.<br />
* Node-specific [[IPMI]] or other out-of-band management devices will use <span class="code">10.20.1.x</span> where <span class="code">x</span> matches the node ID.<br />
* Multi-port fence devices, switches and similar will use <span class="code">10.20.2.z</span> where <span class="code">z</span> is a simple sequence.<br />
* Miscellaneous equipment in the cluster, like managed switches, will use <span class="code">10.20.3.z</span> where <span class="code">z</span> is a simple sequence.
|-
|''Optional'' OpenVPN Network
|<span class="code">10.30.0.0/16</span>
|* For clients behind firewalls, I like to create a [[OpenVPN Server on EL6|VPN]] server for the cluster nodes to log into when support is needed. This way, the client retains control over when remote access is available simply by starting and stopping the <span class="code">openvpn</span> daemon. This will not be discussed any further in this tutorial.
|}


We will be using six interfaces, bonded into three pairs of two NICs in Active/Passive (mode 1) configuration. Each link of each bond will be on alternate, unstacked switches. This configuration is the only one supported by [[Red Hat]] in clusters. We will also configure affinity by specifying interfaces <span class="code">eth0</span>, <span class="code">eth1</span> and <span class="code">eth2</span> as primary for the <span class="code">bond0</span>, <span class="code">bond1</span> and <span class="code">bond2</span> interfaces, respectively. This way, when everything is working fine, all traffic is routed through the same switch for maximum performance.

{{note|1=Only the bonded interface used by corosync must be in Active/Passive configuration (<span class="code">bond0</span> in this tutorial). If you want to experiment with other bonding modes for <span class="code">bond1</span> or <span class="code">bond2</span>, please feel free to do so. That is outside the scope of this tutorial, however.}}

If you cannot install six interfaces in your server, then four interfaces will do, with the [[SN]] and [[BCN]] networks merged.

{{warning|1=If you wish to merge the [[SN]] and [[BCN]] onto one interface, test to ensure that the storage traffic will not block cluster communication. Test by forming your cluster and then pushing your storage to maximum read and write performance for an extended period of time (minimum of several seconds). If the cluster partitions, you will need to do some advanced quality-of-service or other network configuration to ensure reliable delivery of cluster network traffic.}}


In this tutorial, we will use two [http://dlink.ca/products/?pid=DGS-3120-24TC D-Link DGS-3120-24TC/SI] switches, stacked, using three [[VLAN]]s to isolate the three networks.
* [[BCN]] will have VLAN ID <span class="code">1</span>, which is the default VLAN.
* [[SN]] will have VLAN ID <span class="code">100</span>.
* [[IFN]] will have VLAN ID <span class="code">101</span>.

{{note|Switch configuration [[D-Link_Notes|details]].}}

The actual mapping of interfaces to bonds to networks will be:

{|class="wikitable"
!Subnet
!Cable Colour
![[VLAN]] ID
!Link 1
!Link 2
!Bond
!IP
|-
|[[BCN]]
|Blue
|<span class="code">1</span>
|<span class="code">eth0</span>
|<span class="code">eth3</span>
|<span class="code">bond0</span>
|<span class="code">10.20.0.x</span>
|-
|[[SN]]
|Green
|<span class="code">100</span>
|<span class="code">eth1</span>
|<span class="code">eth4</span>
|<span class="code">bond1</span>
|<span class="code">10.10.0.x</span>
|-
|[[IFN]]
|Black
|<span class="code">101</span>
|<span class="code">eth2</span>
|<span class="code">eth5</span>
|<span class="code">bond2</span>
|<span class="code">10.255.0.x</span>
|}
 
== Setting Up the Network ==
 
{{warning|1=The following steps can easily get confusing, given how many files we need to edit. Losing access to your server's network is a very real possibility! '''Do not continue without direct access to your servers!''' If you have out-of-band access via [[iKVM]], console redirection or similar, be sure to test that it is working before proceeding.}}
 
=== Planning The Use of Physical Interfaces ===
 
In production clusters, I generally use three separate dual-port controllers; the two on-board interfaces plus two separate dual-port PCIe cards. I then ensure that no bond uses two interfaces on the same physical controller. Thus, should a card or its bus interface fail, none of the bonds will fail completely.

Let's take a look at an example layout;
 
<source lang="text">
____________________                           
| [ an-node01 ]      |                         
|        ___________|            _______             
|        |    ______|          | bond0 |           
|        | O  | eth0 =-----------=---.---=------{
|        | n  |_____||  /--------=--/    |           
|        | b        |  |        |_______|           
|        | o  ______|  |        _______       
|        | a  | eth1 =--|--\    | bond1 |     
|        | r  |_____||  |  \----=--.----=------{
|        | d        |  |  /-----=--/    |     
|        |___________|  |  |    |_______|     
|        ___________|  |  |      _______       
|        |    ______|  |  |    | bond2 |     
|        | P  | eth2 =--|--|-----=---.---=------{
|        | C  |_____||  |  |  /--=--/    |     
|        | I        |  |  |  |  |_______|     
|        | e  ______|  |  |  |                 
|        |    | eth3 =--/  |  |                 
|        | 1  |_____||    |  |                 
|        |___________|    |  |                 
|        ___________|    |  |                 
|        |    ______|    |  |                 
|        | P  | eth4 =-----/  |                 
|        | C  |_____||        |                 
|        | I        |        |                 
|        | e  ______|        |                 
|        |    | eth5 =--------/                 
|        | 2  |_____||                         
|        |___________|                         
|____________________|                         
</source>


Consider the possible failure scenarios;
* The on-board controllers fail;
** <span class="code">bond0</span> falls back onto <span class="code">eth3</span> on the <span class="code">PCIe 1</span> controller.
** <span class="code">bond1</span> falls back onto <span class="code">eth4</span> on the <span class="code">PCIe 2</span> controller.
** <span class="code">bond2</span> is unaffected.
* The PCIe #1 controller fails;
** <span class="code">bond0</span> remains on the <span class="code">eth0</span> interface, but loses its redundancy as <span class="code">eth3</span> is down.
** <span class="code">bond1</span> is unaffected.
** <span class="code">bond2</span> falls back onto <span class="code">eth5</span> on the <span class="code">PCIe 2</span> controller.
* The PCIe #2 controller fails;
** <span class="code">bond0</span> is unaffected.
** <span class="code">bond1</span> remains on the <span class="code">eth1</span> interface, but loses its redundancy as <span class="code">eth4</span> is down.
** <span class="code">bond2</span> remains on the <span class="code">eth2</span> interface, but loses its redundancy as <span class="code">eth5</span> is down.
 
In all three failure scenarios, no network interruption occurs, making this the most robust configuration possible.
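When you test these failure modes later, the bonding driver's status file shows which slave is currently carrying traffic. A quick way to watch a fail-over happen, assuming the bond names used in this tutorial:

<source lang="bash">
# Show bond0's full status, including the currently active slave and each link's state.
cat /proc/net/bonding/bond0

# Or check just the line we care about while pulling cables.
grep "Currently Active Slave" /proc/net/bonding/bond0
</source>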


=== Managed and Stacking Switch Notes ===

{{note|1=If you have two stacked switches, be extra careful to test them to ensure that traffic will not block when a switch is lost or is recovering!}}

There are two things you need to be wary of with managed switches.
* Don't stack them unless you can confirm that there will be no interruption in traffic flow on the surviving switch when the lost switch disappears or recovers. It may seem like it makes sense to stack them and create Link Aggregation Groups, but this can cause problems. When in doubt, don't stack the switches.
* Disable Spanning Tree Protocol on all ports used by the cluster. Otherwise, when a lost switch is recovered, STP negotiation will cause traffic to stop on the ports for upwards of thirty seconds. This is more than enough time to partition a cluster.


If you use three [[VLAN]]s across two unstacked switches, be sure to use a dedicated uplink for each VLAN. You may need to enable [[STP]] on these uplinks to avoid switch loops if the VLANs themselves are not enough. The reason for doing this is to ensure that cluster communications always have a clear path for traffic. If you had only one uplink between the two switches, and you found yourself in a situation where a node's [[BCN]] and [[SN]] faulted through the backup switch, the storage traffic could saturate the uplink and cause intolerable latency for the BCN traffic, leading to cluster partitioning.

=== Connecting Fence Devices ===
As we will see soon, each node can be fenced either by calling its [[IPMI]] interface or by calling the [[PDU]] and cutting the node's power. Each of these methods is inherently a single point of failure, as each has only one network connection. To work around this concern, we will connect all IPMI interfaces to one switch and the PDUs to the secondary switch. This way, should a switch fail, only one of the two fence devices will fail and fencing in general will still be possible via the alternate fence device.


Generally speaking, I like to connect the IPMI interfaces to the primary switch and the PDUs to the backup switch.

=== Making Sure We Know Our Interfaces ===
When you installed the operating system, the network interface names were somewhat randomly assigned to the physical network interfaces. It is more than likely that you will want to re-order them.
Before you start moving interface names around, you will want to consider which physical interfaces you will want to use on which networks. At the end of the day, the names themselves have no meaning. At the very least though, make them consistent across nodes.
Some things to consider, in order of importance:


* If you have a shared interface for your out-of-band management interface, like [[IPMI]] or [[iLO]], you will want that interface to be on the [[Back-Channel Network]].
* For redundancy, you want to spread out which interfaces are paired up. In my case, I have three interfaces on my mainboard and three additional add-in cards. I will pair each onboard interface with an add-in interface. In my case, my IPMI interface physically piggy-backs on one of the onboard interfaces so this interface will need to be part of the [[BCN]] bond.
* Your interfaces with the lowest latency should be used for the back-channel network.
* Your two fastest interfaces should be used for your storage network.
* The remaining two slowest interfaces should be used for the [[Internet-Facing Network]] bond.


In my case, all six interfaces are identical, so there is little to consider. The left-most interface on my system has IPMI, so its paired network interface will be <span class="code">eth0</span>. I simply work my way left, incrementing as I go. What you do will be whatever makes most sense to you.

There is a separate, short tutorial on re-ordering network interfaces;

* '''[[Changing the ethX to Ethernet Device Mapping in EL6 and Fedora 12+]]'''

Once you have the physical interfaces named the way you like, proceed to the next step.
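Before or after re-ordering, a quick way to see which name is tied to which physical port is to list each interface's MAC address and, if needed, blink a port's identification LED. This is only a convenience sketch; the MAC addresses shown will of course be your own.

<source lang="bash">
# List every interface along with its MAC address (the "HWaddr" column).
ifconfig -a | grep HWaddr

# Blink the LED on the port currently named eth0 for ten seconds,
# to match the logical name to a physical port.
ethtool -p eth0 10
</source>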
=== Planning Our Network ===

To setup our network, we will need to edit the <span class="code">ifcfg-ethX</span>, <span class="code">ifcfg-bondX</span> and <span class="code">ifcfg-vbr2</span> scripts. The last one will create a bridge, like a virtual network switch, which will be used to route network connections between the virtual machines and the outside world, via the [[IFN]]. You will note that the bridge will have the [[IP]] address, not the bonded interface <span class="code">bond2</span>. It will instead be slaved to the <span class="code">vbr2</span> bridge.

We're going to be editing a lot of files. It's best to lay out what we'll be doing in a chart. So our setup will be:
{|class="wikitable sortable"
!Node
!BCN IP and Device
!SN IP and Device
!IFN IP and Device
|-
|<span class="code">an-node01</span>
|<span class="code">10.20.0.1</span> on <span class="code">bond0</span>
|<span class="code">10.10.0.1</span> on <span class="code">bond1</span>
|<span class="code">10.255.0.1</span> on <span class="code">vbr2</span> (<span class="code">bond2</span> slaved)
|-
|<span class="code">an-node02</span>
|<span class="code">10.20.0.2</span> on <span class="code">bond0</span>
|<span class="code">10.10.0.2</span> on <span class="code">bond1</span>
|<span class="code">10.255.0.2</span> on <span class="code">vbr2</span> (<span class="code">bond2</span> slaved)
|}
 
=== Switch Network Daemons ===


The new <span class="code">NetworkManager</span> daemon is much more flexible and is perfect for machines like laptops which move around networks a lot. However, it does this by making a lot of decisions for you and changing the network as it sees fit. As good as this is for laptops and the like, it's not appropriate for servers. We will want to use the traditional <span class="code">network</span> service.

<source lang="bash">
yum remove NetworkManager
</source>
Now enable <span class="code">network</span> to start with the system.


<source lang="bash">
chkconfig network on
chkconfig --list network
</source>
<source lang="bash">
network        0:off 1:off 2:on 3:on 4:on 5:on 6:off
</source>


=== Creating Some Network Configuration Files ===

{{warning|1=Bridge configuration files '''must''' have a file name which will sort '''after''' the interface and bridge files. The actual device name can be whatever you want though. If the system tries to start a bridge before its slaved interface is up, it will fail. I personally like to use the name <span class="code">vbrX</span> for "'''v'''irtual machine '''br'''idge". You can use whatever makes sense to you, with the above concern in mind.}}

Start by <span class="code">touch</span>ing the configuration files we will need.


<source lang="bash">
touch /etc/sysconfig/network-scripts/ifcfg-bond{0,1,2}
touch /etc/sysconfig/network-scripts/ifcfg-vbr2
</source>


Now make a backup of your configuration files, in case something goes wrong and you want to start over.


<source lang="bash">
mkdir /root/backups/
rsync -av /etc/sysconfig/network-scripts/ifcfg-eth* /root/backups/
</source>
<source lang="text">
sending incremental file list
ifcfg-eth0
ifcfg-eth1
ifcfg-eth2
ifcfg-eth3
ifcfg-eth4
ifcfg-eth5

sent 1467 bytes  received 126 bytes  3186.00 bytes/sec
total size is 1119  speedup is 0.70
</source>


=== Configuring The Bridge ===

We'll start in reverse order, crafting the bridge's script first.

'''<span class="code">an-node01</span>''' IFN Bridge:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-vbr2
</source>
<source lang="bash">
# Internet-Facing Network - Bridge
DEVICE="vbr2"
TYPE="Bridge"
BOOTPROTO="static"
IPADDR="10.255.0.1"
NETMASK="255.255.0.0"
GATEWAY="10.255.255.254"
DNS1="8.8.8.8"
DNS2="8.8.4.4"
DEFROUTE="yes"
</source>


=== Creating the Bonded Interfaces ===

Next up, we can create the three bonding configuration files. This is where two physical network interfaces are tied together to work like a single, highly available network interface. You can think of a bonded interface as being akin to [[TLUG_Talk:_Storage_Technologies_and_Theory#Level_1|RAID level 1]]; a new virtual device is created out of two real devices.

We're going to see a long line called "<span class="code">[http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Using_Channel_Bonding.html BONDING_OPTS]</span>". Let's look at the meaning of these options before we look at the configuration;
* <span class="code">mode=1</span> sets the bonding mode to <span class="code">active-backup</span>.
* The <span class="code">miimon=100</span> tells the bonding driver to check if the network cable has been unplugged or plugged in every 100 milliseconds.
* The <span class="code">use_carrier=1</span> tells the driver to use the driver to maintain the link state. Some drivers don't support that. If you run into trouble, try changing this to <span class="code">0</span>.
* The <span class="code">updelay=120000</span> tells the driver to delay switching back to the primary interface for 120,000 milliseconds (2 minutes). This is designed to give the switch connected to the primary interface time to finish booting. Setting this too low may cause the bonding driver to switch back before the network switch is ready to actually move data. Some switches will not provide a link until it is fully booted, so please experiment.
* The <span class="code">downdelay=0</span> tells the driver not to wait before changing the state of an interface when the link goes down. That is, when the driver detects a fault, it will switch to the backup interface immediately.


'''<span class="code">an-node01</span>''' BCN Bond:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-bond0
</source>
<source lang="bash">
# Back-Channel Network - Bond
DEVICE="bond0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth0"
IPADDR="10.20.0.1"
NETMASK="255.255.0.0"
</source>


'''<span class="code">an-node01</span>''' SN Bond:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-bond1
</source>
<source lang="bash">
# Storage Network - Bond
DEVICE="bond1"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth1"
IPADDR="10.10.0.1"
NETMASK="255.255.0.0"
</source>


'''<span class="code">an-node01</span>''' IFN Bond:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-bond2
</source>
<source lang="bash">
# Internet-Facing Network - Bond
DEVICE="bond2"
BRIDGE="vbr2"
BOOTPROTO="none"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth2"
</source>


=== Alter The Interface Configurations ===

With the bridge and bonds in place, we can now alter the interface configurations.

Which two interfaces you use in a given bond is entirely up to you. I've found it easiest to keep things straight when I match the <span class="code">bondX</span> number to the primary interface's <span class="code">ethX</span> number.


'''<span class="code">an-node01</span>''''s <span class="code">eth0</span>, the BCN <span class="code">bond0</span>, Link 1:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-eth0
</source>
<source lang="bash">
# Back-Channel Network - Link 1
HWADDR="00:E0:81:C7:EC:49"
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
</source>


'''<span class="code">an-node01</span>''''s <span class="code">eth1</span>, the SN <span class="code">bond1</span>, Link 1:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-eth1
</source>
<source lang="bash">
# Storage Network - Link 1
HWADDR="00:E0:81:C7:EC:48"
DEVICE="eth1"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond1"
SLAVE="yes"
</source>


'''<span class="code">an-node01</span>''''s <span class="code">eth2</span>, the IFN <span class="code">bond2</span>, Link 1:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-eth2
</source>
<source lang="bash">
# Internet-Facing Network - Link 1
HWADDR="00:E0:81:C7:EC:47"
DEVICE="eth2"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond2"
SLAVE="yes"
</source>


'''<span class="code">an-node01</span>''''s <span class="code">eth3</span>, the BCN <span class="code">bond0</span>, Link 2:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-eth3
</source>
<source lang="bash">
# Back-Channel Network - Link 2
HWADDR="00:1B:21:9D:59:FC"
DEVICE="eth3"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
</source>


'''<span class="code">an-node01</span>''''s <span class="code">eth4</span>, the SN <span class="code">bond1</span>, Link 2:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-eth4
</source>
<source lang="bash">
# Storage Network - Link 2
HWADDR="00:1B:21:BF:70:02"
DEVICE="eth4"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond1"
SLAVE="yes"
</source>


'''<span class="code">an-node01</span>''''s <span class="code">eth5</span>, the IFN <span class="code">bond2</span>, Link 2:
<source lang="bash">
vim /etc/sysconfig/network-scripts/ifcfg-eth5
</source>
<source lang="bash">
# Internet-Facing Network - Link 2
HWADDR="00:1B:21:BF:6F:FE"
DEVICE="eth5"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond2"
SLAVE="yes"
</source>
== Loading The New Network Configuration ==
Simply restart the <span class="code">network</span> service.


<source lang="bash">
/etc/init.d/network restart
</source>
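After the restart, it is worth a quick check that the addresses landed where we expect, that each bond picked up its slaves, and that <span class="code">bond2</span> is attached to the bridge. A simple spot-check, using the device names from this tutorial:

<source lang="bash">
# Confirm the IP addresses on the bonds and the bridge.
ip addr show

# Confirm the bond mode, primary interface and slave states (repeat for bond1 and bond2).
cat /proc/net/bonding/bond0

# Confirm that bond2 is enslaved to the vbr2 bridge.
brctl show
</source>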
== Updating /etc/hosts ==
On both nodes, update the <span class="code">/etc/hosts</span> file to reflect your network configuration. Remember to add entries for your [[IPMI]], switched PDUs and other devices.


<source lang="bash">
vim /etc/hosts
</source>
<source lang="text">
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# an-node01
10.20.0.1	an-node01 an-node01.bcn an-node01.alteeve.ca
10.20.1.1	an-node01.ipmi
10.10.0.1	an-node01.sn
10.255.0.1	an-node01.ifn

# an-node02
10.20.0.2	an-node02 an-node02.bcn an-node02.alteeve.ca
10.20.1.2	an-node02.ipmi
10.10.0.2	an-node02.sn
10.255.0.2	an-node02.ifn

# Fence devices
10.20.2.1	pdu1 pdu1.alteeve.ca
10.20.2.2	pdu2 pdu2.alteeve.ca

# VPN interfaces, if used.
10.30.0.1	an-node01.vpn
10.30.0.2	an-node02.vpn
</source>
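Before anything starts to depend on these names, it is worth confirming that they resolve as expected on both nodes. A quick check:

<source lang="bash">
# Both lookups should answer with the addresses set above.
getent hosts an-node01.bcn an-node02.sn
ping -c 1 an-node02
</source>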
{{warning|1=Remember, whichever switch you have the IPMI interfaces connected to, be sure to connect the PDU into the '''opposite''' switch! If both fence types are on one switch, then that switch becomes a single point of failure!}}


{{note|1=I like to run an [[OpenVPN Server on EL6|OpenVPN]] server and set up my remote clusters and customers as clients on this VPN to enable rapid, secure remote access when the client's firewall blocks inbound connections. This offers the client the option of disabling the <span class="code">openvpn</span> client daemon until they wish to enable access. This tends to be easier for the client to manage as opposed to manipulating the firewall on demand. This will be the only mention of the VPN in this tutorial, but explains the last entries in the file above.}}


== Setting up SSH ==

Setting up [[SSH]] shared keys will allow your nodes to pass files between one another and execute commands remotely without needing to enter a password. This will be needed later when we want to enable applications like <span class="code">libvirtd</span> and its tools, like <span class="code">virt-manager</span>.

SSH is, on its own, a very big topic. If you are not familiar with SSH, please take some time to learn about it before proceeding. A great first step is the [http://en.wikipedia.org/wiki/Secure_Shell Wikipedia] entry on SSH, as well as the SSH [[man]] page; <span class="code">man ssh</span>.

[[SSH]] can be a bit confusing when it comes to keeping the connections straight in your head. When you connect to a remote machine, you start the connection on your machine as the user you are logged in as. This is the source user. When you call the remote machine, you tell it what user you want to log in as. This is the remote user.

You will need to create an SSH key for each source user on each node, and then you will need to copy the newly generated public key to each remote machine's user directory that you want to connect to. In this example, we want to connect to either node, from either node, as the <span class="code">root</span> user. So we will create a key for each node's <span class="code">root</span> user and then copy the generated public key to the ''other'' node's <span class="code">root</span> user's directory.

For each user, on each machine you want to connect '''from''', run:


<source lang="bash">
<source lang="bash">
yum install cman corosync rgmanager ricci gfs2-utils
# The '2047' is just to screw with brute-forces a bit. :)
ssh-keygen -t rsa -N "" -b 2047 -f ~/.ssh/id_rsa
</source>
<source lang="text">
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4a:52:a1:c7:60:d5:e8:6d:c4:75:20:dd:62:2b:86:c5 root@an-node01.alteeve.ca
The key's randomart image is:
+--[ RSA 2047]----+
|    o.o=.ooo.    |
|  . +..E.+..    |
|    ..+= . o    |
|    oo = .      |
|    . .oS.      |
|    o .        |
|      .          |
|                |
|                |
+-----------------+
</source>
</source>
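Once <span class="code">ntpd</span> has been running for a few minutes, you can optionally confirm that it has picked a time source to synchronize against. This is just a sanity check, not a required step;

<source lang="bash">
# An asterisk in the first column marks the peer ntpd has synchronized to.
ntpq -p
</source>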


== Configuration Methods ==

In [[Red Hat]] Cluster Services, the heart of the cluster is found in the <span class="code">[[RHCS v3 cluster.conf|/etc/cluster/cluster.conf]]</span> [[XML]] configuration file.

There are three main ways of editing this file. Two are already well documented, so I won't bother discussing them beyond introducing them. The third way is by directly hand-crafting the <span class="code">cluster.conf</span> file. This method is not very well documented, and directly manipulating configuration files is my preferred method. As my boss loves to say; "''The more computers do for you, the more they do to you''".

The first two, well documented, graphical tools are:

* <span class="code">[http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Administration/ch-config-scc-CA.html system-config-cluster]</span>, an older GUI tool run directly from one of the cluster nodes.
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Administration/ch-config-conga-CA.html Conga], comprised of the <span class="code">ricci</span> node-side client and the <span class="code">luci</span> web-based server (can be run on machines outside the cluster).

I do like the tools above, but I often find issues that send me back to the command line. I'd recommend setting them aside for now. Once you feel comfortable with <span class="code">cluster.conf</span> syntax, then by all means, go back and use them. I'd just recommend not growing dependent on them, which can happen if you start using them too early in your studies.

== The First cluster.conf Foundation Configuration ==

The very first stage of building the cluster is to create a configuration file that is as minimal as possible. To do that, we need to define a few things;

* The name of the cluster and the cluster file version.
** Define <span class="code">cman</span> options
** The nodes in the cluster
*** The fence method for each node
** Define fence devices
** Define <span class="code">fenced</span> options

That's it. Once we've defined this minimal amount, we will be able to start the cluster for the first time! So let's get to it, finally.

=== Name the Cluster and Set The Configuration Version ===

The <span class="code">[[RHCS_v3_cluster.conf#cluster.3B_The_Parent_Tag|cluster]]</span> tag is the parent tag for the entire cluster configuration file.

<source lang="bash">
vim /etc/cluster/cluster.conf
</source>
<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="1">
</cluster>
</source>

This tag has two attributes that we need to set; <span class="code">name=""</span> and <span class="code">config_version=""</span>.

The <span class="code">[[RHCS v3 cluster.conf#name|name]]=""</span> attribute defines the name of the cluster. It must be unique amongst the clusters on your network. It should be descriptive, but you will not want to make it too long, either. You will see this name in the various cluster tools and you will enter it, for example, when creating a [[GFS2]] partition later on. This tutorial uses the cluster name <span class="code">an-clusterA</span>. The reason for the <span class="code">A</span> is to help differentiate it from the nodes, which use sequence numbers.

The <span class="code">[[RHCS v3 cluster.conf#config_version|config_version]]=""</span> attribute is an integer marking the version of the configuration file. Whenever you make a change to the <span class="code">cluster.conf</span> file, you will need to increment this version number by 1. If you don't increment this number, then the cluster tools will not know that the file needs to be reloaded. As this is the first version of this configuration file, it will start with <span class="code">1</span>. Note that this tutorial will increment the version after every change, regardless of whether it is explicitly pushed out to the other nodes and reloaded. The reason is to help get into the habit of always increasing this value.

=== Configuring cman Options ===

We are going to set up a special case for our cluster; a 2-node cluster.

This is a special case because traditional quorum will not be useful. With only two nodes, each having a vote of <span class="code">1</span>, the total votes is <span class="code">2</span>. Quorum needs <span class="code">50% + 1</span>, which means that a single node failure would shut down the cluster, as the remaining node's vote is <span class="code">50%</span> exactly. That kind of defeats the purpose of having a cluster at all.

So to account for this special case, there is a special attribute called <span class="code">[[RHCS_v3_cluster.conf#two_node|two_node]]="1"</span>. This tells the cluster manager to continue operating with only one vote. This option requires that the <span class="code">[[RHCS_v3_cluster.conf#expected_votes|expected_votes]]=""</span> attribute be set to <span class="code">1</span>. Normally, <span class="code">expected_votes</span> is set automatically to the total sum of the defined cluster nodes' votes (which itself is a default of <span class="code">1</span>). This is the other half of the "trick", as a single node's vote of <span class="code">1</span> now always provides quorum (that is, <span class="code">1</span> meets the <span class="code">50% + 1</span> requirement).

<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="2">
    <cman expected_votes="1" two_node="1"/>
</cluster>
</source>

Take note of the self-closing <span class="code"><... /></span> tag. This is an [[XML]] syntax that tells the parser not to look for any child tags or a closing tag.
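Hand-editing XML makes it easy to leave a tag unclosed or mis-nest elements. As a quick, optional sanity check (the cluster-aware validation tool is covered later, in the ''Validating and Pushing'' section), you can ask <span class="code">xmllint</span> to confirm that the file is at least well-formed XML;

<source lang="bash">
# Prints nothing if the XML is well-formed; reports the offending line
# if it is not. This checks syntax only, not the cluster schema.
xmllint --noout /etc/cluster/cluster.conf
</source>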


=== Defining Cluster Nodes ===

This example is a little artificial; please don't load it into your cluster, as we will need to add a few child tags, but one thing at a time.

This actually introduces two tags.

The first is the parent <span class="code">[[RHCS_v3_cluster.conf#clusternodes.3B_Defining_Cluster_Nodes|clusternodes]]</span> tag, which takes no attributes of its own. Its sole purpose is to contain the <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_clusternode|clusternode]]</span> child tags.

<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="3">
    <cman expected_votes="1" two_node="1"/>
    <clusternodes>
        <clusternode name="an-node01.alteeve.com" nodeid="1" />
        <clusternode name="an-node02.alteeve.com" nodeid="2" />
    </clusternodes>
</cluster>
</source>

The <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_clusternode|clusternode]]</span> tag defines each cluster node. There are many attributes available, but we will look at just the two required ones.

The first is the <span class="code">[[RHCS_v3_cluster.conf#clusternode.27s_name_attribute|name]]=""</span> attribute. This '''should''' match the name given by <span class="code">uname -n</span> (<span class="code">$HOSTNAME</span>) when run on each node. The [[IP]] address that the <span class="code">name</span> resolves to also sets the interface and subnet that the [[totem]] ring will run on. That is, the main cluster communications, which we are calling the '''Back-Channel Network'''. This is why it is so important to set up our <span class="code">[[2-Node_Red_Hat_KVM_Cluster_Tutorial#Setup_.2Fetc.2Fhosts|/etc/hosts]]</span> file correctly. Please see the [[RHCS_v3_cluster.conf#clusternode.27s_name_attribute|clusternode's name]] attribute document for details on how name-to-interface mapping is resolved.

The second attribute is <span class="code">[[RHCS_v3_cluster.conf#clusternode.27s_nodeid_attribute|nodeid]]=""</span>. This must be a unique integer amongst the <span class="code"><clusternode ...></span> tags. It is used by the cluster to identify the node.
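If you want to double-check which address, and therefore which network, the cluster will use for the [[totem]] ring, you can resolve the node's own name. This is only a sketch, assuming the names resolve via the <span class="code">/etc/hosts</span> entries set up earlier;

<source lang="bash">
# The name returned by 'uname -n' is what should appear in cluster.conf,
# and the address it resolves to should be on the Back-Channel Network.
uname -n
getent hosts $(uname -n)
</source>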


=== Defining Fence Devices ===

[[2-Node_Red_Hat_KVM_Cluster_Tutorial#Concept.3B_Fencing|Fencing]] devices are designed to forcibly eject a node from a cluster. This is generally done by forcing it to power off or reboot. Some [[SAN]] switches can logically disconnect a node from the shared storage device, which has the same effect of guaranteeing that the defective node can not alter the shared storage. A common, third type of fence device is one that cuts the mains power to the server.

In this tutorial, our nodes support [[IPMI]], which we will use as the primary fence device. We also have an [http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7900 APC] brand switched PDU which will act as a backup in case a fault in the node disables the IPMI [[BMC]].

{{note|1=Not all brands of switched PDUs are supported as fence devices. Before you purchase a fence device, confirm that it is supported.}}

All fence devices are contained within the parent <span class="code">[[RHCS_v3_cluster.conf#fencedevices.3B_Defining_Fence_Devices|fencedevices]]</span> tag. This parent tag has no attributes. Within this parent tag are one or more <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fencedevice|fencedevice]]</span> child tags.

<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="4">
    <cman expected_votes="1" two_node="1"/>
    <clusternodes>
        <clusternode name="an-node01.alteeve.com" nodeid="1" />
        <clusternode name="an-node02.alteeve.com" nodeid="2" />
    </clusternodes>
    <fencedevices>
        <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
        <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
        <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2"/>
    </fencedevices>
</cluster>
</source>

Every fence device used in your cluster will have its own <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fencedevice|fencedevice]]</span> tag. If you are using [[IPMI]], this means you will have a <span class="code">fencedevice</span> entry for each node, as each physical IPMI [[BMC]] is a unique fence device. On the other hand, fence devices that support multiple nodes, like switched PDUs, will have just one entry. In our case, we're using both types, so we have three fence devices; the two IPMI BMCs plus the switched PDU.

All <span class="code">fencedevice</span> tags share two basic attributes; <span class="code">[[RHCS_v3_cluster.conf#fencedevice.27s_name_attribute|name]]=""</span> and <span class="code">[[RHCS_v3_cluster.conf#fencedevice.27s_agent_attribute|agent]]=""</span>.

* The <span class="code">name</span> attribute must be unique among all the fence devices in your cluster. As we will see in the next step, this name will be used within the <span class="code"><clusternode...></span> tag.
* The <span class="code">agent</span> attribute tells the cluster which [[fence agent]] to use when the <span class="code">[[fenced]]</span> daemon needs to communicate with the physical fence device. A fence agent is simply a shell script that acts as a glue layer between the <span class="code">fenced</span> daemon and the fence hardware. This agent takes the arguments from the daemon, like what port to act on and what action to take, and executes the requested action against the node. The agent is responsible for ensuring that the execution succeeded and for returning an appropriate success or failure exit code. For those curious, the full details are described in the <span class="code">[https://fedorahosted.org/cluster/wiki/FenceAgentAPI FenceAgentAPI]</span>. If you have two or more of the same fence device, like IPMI, then you will use the same fence <span class="code">agent</span> value a corresponding number of times.

Beyond these two attributes, each fence agent will have its own subset of attributes. Their scope is outside this tutorial, though we will see examples for IPMI and a switched PDU. Most, if not all, fence agents have a corresponding man page that will show you what attributes they accept and how they are used. The two fence agents we will see here have their attributes defined in the following <span class="code">[[man]]</span> pages.

* <span class="code">man fence_ipmilan</span> - IPMI fence agent.
* <span class="code">man fence_apc</span> - APC-brand switched PDU.

The example above is what this tutorial will use.

==== Example <fencedevice...> Tag For IPMI ====

Here we will show what [[IPMI]] <span class="code"><fencedevice...></span> tags look like.

<source lang="xml">
...
    <clusternode name="an-node01.alteeve.com" nodeid="1">
        <fence>
            <method name="ipmi">
                <device name="ipmi_an01" action="reboot"/>
            </method>
        </fence>
    </clusternode>
    <clusternode name="an-node02.alteeve.com" nodeid="2">
        <fence>
            <method name="ipmi">
                <device name="ipmi_an02" action="reboot"/>
            </method>
        </fence>
    </clusternode>
...
    <fencedevices>
        <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
        <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
    </fencedevices>
</source>

* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
* <span class="code">login</span>; This is the login name to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">passwd</span>; This is the login password to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.

{{note|1=We will see shortly that, unlike switched PDUs or other network fence devices, [[IPMI]] does not have ports. This is because each [[IPMI]] BMC supports just its host system. More on that later.}}
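Before relying on a fence device, it is a good idea to test its agent by hand. The sketch below assumes the BMC host names and the example <span class="code">root</span>/<span class="code">secret</span> credentials used above; a <span class="code">status</span> call is non-destructive;

<source lang="bash">
# Query the power state of an-node02's BMC without changing it.
# Swap in your real BMC address and credentials.
fence_ipmilan -a an-node02.ipmi -l root -p secret -o status
</source>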
<source lang="text">
The authenticity of host 'an-node02.ifn (10.255.0.2)' can't be established.
RSA key fingerprint is 04:08:37:43:6b:5c:a0:b0:f5:27:a7:46:d4:77:a3:34.
Are you sure you want to continue connecting (yes/no)? yes
</source>
<source lang="text">
Warning: Permanently added 'an-node02.ifn,10.255.0.2' (RSA) to the list of known hosts.
Last login: Sun Dec 11 06:08:11 2011 from an-node01.sn
[root@an-node02 ~]#
</source>
<source lang="bash">
exit
</source>
<source lang="text">
logout
Connection to an-node02.ifn closed.
</source>


==== Example <fencedevice...> Tag For HP iLO ====
Finally done!


Here we will show how to use [http://h18013.www1.hp.com/products/servers/management/remotemgmt.html iLO] (integraterd Lights-Out) management devices as <span class="code"><fencedevice...></span> entries. We won't be using it ourselves, but it is quite popular as a fence device so I wanted to show an example of it's use.
Now we can simply copy the <span class="code">~/.ssh/known_hosts</span> file to the other node.


<source lang="xml">
<source lang="bash">
...
rsync -av root@an-node01:/root/.ssh/known_hosts ~/.ssh/
<clusternode name="an-node01.alteeve.com" nodeid="1">
</source>
<fence>
<source lang="text">
<method name="ilo">
receiving incremental file list
<device action="reboot" name="ilo_an01"/>
 
</method>
sent 11 bytes  received 41 bytes  104.00 bytes/sec
</fence>
total size is 4413  speedup is 84.87
</clusternode>
<clusternode name="an-node02.alteeve.com" nodeid="2">
<fence>
<method name="ilo">
<device action="reboot" name="ilo_an02"/>
</method>
</fence>
</clusternode>
...
<fencedevices>
<fencedevice agent="fence_ilo" ipaddr="an-node01.ilo" login="root" name="ilo_an01" passwd="secret"/>
<fencedevice agent="fence_ilo" ipaddr="an-node02.ilo" login="root" name="ilo_an02" passwd="secret"/>
</fencedevices>
</source>
</source>


* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
Now we can connect via SSH to either node, from either node, using any of the networks and we will not be prompted to enter a password or to verify SSH fingerprints any more.
* <span class="code">login</span>; This is the login name to use when the <span class="code">fenced</span> daemon connects to the device.
 
* <span class="code">passwd</span>; This is the login password to use when the <span class="code">fenced</span> daemon connects to the device.
= Configuring The Cluster Foundation =
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.
 
We need to configure the cluster in two stages. This is because we have something of a chicken-and-egg problem.
 
* We need clustered storage for our virtual machines.
* Our clustered storage needs the cluster for fencing.
 
Conveniently, clustering has two logical parts;
* Cluster communication and membership.
* Cluster resource management.


{{note|1=Like [[IPMI]], [[iLO]] does not have ports. This is because each [[iLO]] BMC supports just it's host system.}}
The first, communication and membership, covers which nodes are part of the cluster and ejecting faulty nodes from the cluster, among other tasks. The second part, resource management, is provided by a second tool called <span class="code">rgmanager</span>. It's this second part that we will set aside for later.


==== Example <fencedevice...> Tag For APC Switched PDUs ====

Here we will show how to configure APC switched [[PDU]] <span class="code"><fencedevice...></span> tags. There are two agents for these devices; one that uses the telnet or ssh login and one that uses [[SNMP]]. This tutorial uses the latter, and it is recommended that you do the same.

<source lang="xml">
...
    <clusternode name="an-node01.alteeve.com" nodeid="1">
        <fence>
            <method name="pdu2">
                <device name="pdu2" port="1" action="reboot"/>
            </method>
        </fence>
    </clusternode>
    <clusternode name="an-node02.alteeve.com" nodeid="2">
        <fence>
            <method name="pdu2">
                <device name="pdu2" port="2" action="reboot"/>
            </method>
        </fence>
    </clusternode>
...
    <fencedevices>
        <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2"/>
    </fencedevices>
</source>

* <span class="code">agent</span>; This is the name of the script under <span class="code">/usr/sbin/</span> to use when calling the physical PDU.
* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.

==== Example <fencedevice...> Tag For TrippLite Switched PDUs ====

{{note|1=Support for TrippLite PDUs as fence devices is in the process of being added to RHCS. In the meantime, you will need to modify the <span class="code">cluster.rng</span> validation file and save the <span class="code">[[fence_tripplite_snmp]]</span> fence agent to each node's <span class="code">/usr/sbin/</span> directory.}}

We won't be using TrippLite PDUs ourselves, but they are quite inexpensive PDUs and there are fairly frequent requests for this device to be used for fencing.

{{warning|1=With this new agent, TrippLite PDUs can be used in fencing, but beware; they can take up to 40 seconds to complete and verify a full <span class="code">reboot</span> fence action. This will cause a fairly heavy delay in recovery. Be sure that you can tolerate a delay this long before deciding to use this fence device!}}

Please see:
* [[Adding TrippLite PDU Support To EL6]]

<source lang="xml">
...
    <clusternode name="an-node01.alteeve.com" nodeid="1">
        <fence>
            <method name="pdu1">
                <device name="pdu1" port="1" action="reboot"/>
            </method>
        </fence>
    </clusternode>
    <clusternode name="an-node02.alteeve.com" nodeid="2">
        <fence>
            <method name="pdu1">
                <device name="pdu1" port="2" action="reboot"/>
            </method>
        </fence>
    </clusternode>
...
    <fencedevices>
        <fencedevice agent="fence_tripplite_snmp" ipaddr="pdu1.alteeve.com" name="pdu1"/>
    </fencedevices>
</source>

* <span class="code">agent</span>; This is the name of the script under <span class="code">/usr/sbin/</span> to use when calling the physical PDU.
* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.
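As with IPMI, you can exercise the PDU agent by hand before trusting it. This is a sketch only; it assumes the <span class="code">pdu2.alteeve.com</span> address used above and the PDU's default [[SNMP]] settings, so adjust to suit. Querying a port's status is non-destructive;

<source lang="bash">
# Ask the switched PDU, over SNMP, for the state of port 2 (an-node02's feed).
fence_apc_snmp -a pdu2.alteeve.com -n 2 -o status
</source>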


=== Using the Fence Devices ===

Now that we have nodes and fence devices defined, we will go back and tie them together. This is done by:

* Defining a <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fence|fence]]</span> tag containing all fence methods and devices.
** Defining one or more <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_method|method]]</span> tag(s) containing the device call(s) needed for each fence attempt.
*** Defining one or more <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_device|device]]</span> tag(s) containing attributes describing how to call the fence device to kill this node.

Here is how we implement [[IPMI]] as the primary fence device with the APC switched PDU as the backup method.

<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="5">
    <cman expected_votes="1" two_node="1"/>
    <clusternodes>
        <clusternode name="an-node01.alteeve.com" nodeid="1">
            <fence>
                <method name="ipmi">
                    <device name="ipmi_an01" action="reboot"/>
                </method>
                <method name="pdu2">
                    <device name="pdu2" port="1" action="reboot"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="an-node02.alteeve.com" nodeid="2">
            <fence>
                <method name="ipmi">
                    <device name="ipmi_an02" action="reboot"/>
                </method>
                <method name="pdu2">
                    <device name="pdu2" port="2" action="reboot"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
        <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
        <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2"/>
    </fencedevices>
</cluster>
</source>

First, notice that the <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fence|fence]]</span> tag has no attributes. It's merely a container for the <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_method|method]](s)</span>.

The next level is the <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_method|method]]</span> named <span class="code">ipmi</span>. This name is merely a description and can be whatever you feel is most appropriate. Its purpose is simply to help you distinguish this method from other methods. The reason for <span class="code">method</span> tags is that some fence device calls will have two or more steps. A classic example would be a node with redundant power supplies fed by a switched PDU acting as the fence device. In such a case, you will need to define multiple <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_device|device]]</span> tags, one for each power cable feeding the node, and the cluster will not consider the fence a success unless and until all contained <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_device|device]]</span> calls execute successfully.

The same pair of <span class="code">method</span> and <span class="code">device</span> tags are supplied a second time. The first pair defines the IPMI interfaces, and the second pair defines the switched PDU. Note that the PDU definition needs a <span class="code">port=""</span> attribute where the IPMI fence device does not. When a fence call is needed, the fence devices will be called in the order they are found here. If both devices fail, the cluster will go back to the start and try again, looping indefinitely until one device succeeds.

{{note|1=It's important to understand why we use IPMI as the primary fence device. It is suggested, but not required, that the fence device confirm that the node is off. IPMI can do this, while the switched PDU can not. Thus, IPMI won't return a success unless the node is truly off. The PDU, though, will return a success once the power is cut to the requested port. However, a misconfigured node with redundant power supplies may in fact still be running, leading to disastrous consequences.}}

The actual fence <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_device|device]]</span> configuration is the final piece of the puzzle. It is here that you specify per-node configuration options and link these attributes to a given <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fencedevice|fencedevice]]</span>. Here, we see the link to the <span class="code">fencedevice</span> via the <span class="code">[[RHCS_v3_cluster.conf#device.27s_name_attribute|name]]</span>, <span class="code">ipmi_an01</span> and <span class="code">ipmi_an02</span> in this example.

Let's step through an example fence call to help show how the per-cluster and fence device attributes are combined during a fence call.

* The cluster manager decides that a node needs to be fenced. Let's say that the victim is <span class="code">an-node02</span>.
* The <span class="code">fence</span> section under <span class="code">an-node02</span> is consulted. Within it there are two <span class="code">method</span> entries, named <span class="code">ipmi</span> and <span class="code">pdu2</span>. The IPMI method's <span class="code">device</span> has one attribute while the PDU's <span class="code">device</span> has two attributes;
** <span class="code">port</span>; only found in the PDU <span class="code">method</span>, this tells the cluster that <span class="code">an-node02</span> is connected to the switched PDU's port number <span class="code">2</span>.
** <span class="code">action</span>; found on both devices, this tells the cluster that the fence action to take is <span class="code">reboot</span>. How this action is actually interpreted depends on the fence device in use, though the name certainly implies that the node will be forced off and then restarted.
* The cluster searches in <span class="code">fencedevices</span> for a <span class="code">fencedevice</span> matching the name <span class="code">ipmi_an02</span>. This fence device has four attributes;
** <span class="code">agent</span>; This tells the cluster to call the <span class="code">fence_ipmilan</span> fence agent script, as we discussed earlier.
** <span class="code">ipaddr</span>; This tells the fence agent where on the network to find this particular IPMI BMC. This is how multiple fence devices of the same type can be used in the cluster.
** <span class="code">login</span>; This is the login user name to use when authenticating against the fence device.
** <span class="code">passwd</span>; This is the password to supply along with the <span class="code">login</span> name when authenticating against the fence device.
* Should the IPMI fence call fail for some reason, the cluster will move on to the second method, <span class="code">pdu2</span>, repeating the steps above but using the PDU values.

When the cluster calls the fence agent, it does so by initially calling the fence agent script with no arguments.

<source lang="bash">
/usr/sbin/fence_ipmilan
</source>

Then it will pass to that agent the following arguments:

<source lang="text">
ipaddr=an-node02.ipmi
login=root
passwd=secret
action=reboot
</source>

As you can see then, the first three arguments are from the <span class="code">fencedevice</span> attributes and the last one is from the <span class="code">device</span> attributes under <span class="code">an-node02</span>'s <span class="code">clusternode</span>'s <span class="code">fence</span> tag.

If this method fails, then the PDU will be called in a very similar way, but with an extra argument from the <span class="code">device</span> attributes.

<source lang="bash">
/usr/sbin/fence_apc
</source>

Then it will pass to that agent the following arguments:

<source lang="text">
ipaddr=pdu2.alteeve.com
login=root
passwd=secret
port=2
action=reboot
</source>

Should this fail, the cluster will go back and try the IPMI interface again. It will loop through the fence device methods forever until one of the methods succeeds.
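Once the cluster is up and running, which we will get to shortly, the simplest end-to-end test of this fencing configuration is to ask the cluster to fence a node and watch it power cycle. Treat the sketch below as destructive and only run it against a test cluster;

<source lang="bash">
# Run from an-node01; this will forcibly reboot an-node02 using the
# methods defined in cluster.conf (IPMI first, the PDU as backup).
fence_node an-node02.alteeve.com
</source>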


=== Give Nodes More Time To Start ===
<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="2">
<cman expected_votes="1" two_node="1" />
</cluster>
</source>
 
Take note of the self-closing <span class="code"><... /></span> tag. This is an [[XML]] syntax that tells the parser not to look for any child or a closing tags.
 
=== Defining Cluster Nodes ===
 
This example is a little artificial, please don't load it into your cluster as we will need to add a few child tags, but one thing at a time.
 
This introduces two tags, the later a child tag of the former;


Clusters with more than three nodes will have to gain quorum before they can fence other nodes. As we saw earlier though, this is not really the case when using the <span class="code">[[RHCS_v3_cluster.conf#two_node|two_node]]="1"</span> attribute in the <span class="code">[[RHCS_v3_cluster.conf#cman.3B_The_Cluster_Manager|cman]]</span> tag. What this means in practice is that if you start the cluster on one node and then wait too long to start the cluster on the second node, the first will fence the second.
* <span class="code">clusternodes</span>
** <span class="code">clusternode</span>


The logic behind this is; When the cluster starts, it will try to talk to it's fellow node and then fail. With the special <span class="code">two_node="1"</span> attribute set, the cluster knows that it is allowed to start clustered services, but it has no way to say for sure what state the other node is in. It could well be online and hosting services for all it knows. So it has to proceed on the assumption that the other node is alive and using shared resources. Given that, and given that it can not talk to the other node, it's only safe option is to fence the other node. Only then can it be confident that it is safe to start providing clustered services.
The first is the parent <span class="code">[[RHCS_v3_cluster.conf#clusternodes.3B_Defining_Cluster_Nodes|clusternodes]]</span> tag, which takes no attributes of its own. Its sole purpose is to contain the <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_clusternode|clusternode]]</span> child tags, of which there will be one per node.  


<source lang="xml">
<source lang="xml">
<?xml version="1.0"?>
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="7">
<cluster name="an-cluster-A" config_version="3">
        <cman expected_votes="1" two_node="1"/>
<cman expected_votes="1" two_node="1" />
        <clusternodes>
<clusternodes>
                <clusternode name="an-node01.alteeve.com" nodeid="1">
<clusternode name="an-node01.alteeve.ca" nodeid="1" />
                        <fence>
<clusternode name="an-node02.alteeve.ca" nodeid="2" />
                                <method name="ipmi">
</clusternodes>
                                        <device name="ipmi_an01" action="reboot"/>
                                </method>
                                <method name="pdu2">
                                        <device name="pdu2" port="1" action="reboot"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.com" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot"/>
                                </method>
                                <method name="pdu2">
                                        <device name="pdu2" port="2" action="reboot"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2"/>
        </fencedevices>
        <fence_daemon post_join_delay="30"/>
</cluster>
</cluster>
</source>
</source>


The new tag is <span class="code">[[RHCS_v3_cluster.conf#fence_daemon.3B_Fencing|fence_daemon]]</span>, seen near the bottom if the file above. The change is made using the <span class="code">[[RHCS_v3_cluster.conf#post_join_delay|post_join_delay]]="60"</span> attribute. By default, the cluster will declare the other node dead after just <span class="code">6</span> seconds. The reason is that the larger this value, the slower the start-up of the cluster services will be. During testing and development though, I find this value to be far too short and frequently led to unnecessary fencing. Once your cluster is setup and working, it's not a bad idea to reduce this value to the lowest value that you are comfortable with.
The <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_clusternode|clusternode]]</span> tag defines each cluster node. There are many attributes available, but we will look at just the two required ones.  


The first is the <span class="code">[[RHCS_v3_cluster.conf#clusternode.27s_name_attribute|name]]=""</span> attribute. The value '''should''' match the fully qualified domain name, which you can check by running <span class="code">uname -n</span> on each node. This isn't strictly required, mind you, but for simplicity's sake, this is the name we will use.

The cluster decides which network to use for cluster communication by resolving the <span class="code">name="..."</span> value. It will take the returned [[IP]] address and try to match it to one of the IPs on the system. Once it finds a match, that becomes the network the cluster will use. In our case, <span class="code">an-node01.alteeve.ca</span> resolves to <span class="code">10.20.0.1</span>, which is used by <span class="code">bond0</span>.

If you have <span class="code">syslinux</span> installed, you can check this out yourself using the following command;

<source lang="bash">
ifconfig |grep -B 1 $(gethostip -d $(uname -n)) | grep HWaddr | awk '{ print $1 }'
</source>
<source lang="text">
bond0
</source>


Please see the <span class="code">clusternode</span>'s <span class="code">[[RHCS_v3_cluster.conf#name_3|name]]</span> attribute document for details on how name to interface mapping is resolved.

The second attribute is <span class="code">[[RHCS_v3_cluster.conf#clusternode.27s_nodeid_attribute|nodeid]]=""</span>. This must be a unique integer amongst the <span class="code"><clusternode ...></span> elements in the cluster. It is what the cluster itself uses to identify the node.

=== Defining Fence Devices ===

[[2-Node_Red_Hat_KVM_Cluster_Tutorial#Concept.3B_Fencing|Fencing]] devices are used to forcibly eject a node from a cluster if it stops responding.


This is generally done by forcing it to power off or reboot. Some [[SAN]] switches can logically disconnect a node from the shared storage device, a process called fabric fencing, which has the same effect of guaranteeing that the defective node can not alter the shared storage. A common, third type of fence device is one that cuts the mains power to the server. These are called [[PDU]]s and are effectively power bars where each outlet can be independently switched off over the network.

In this tutorial, our nodes support [[IPMI]], which we will use as the primary fence device. We also have an [http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7900 APC] brand switched PDU which will act as a backup fence device.

{{note|1=Not all brands of switched PDUs are supported as fence devices. Before you purchase a fence device, confirm that it is supported.}}

All fence devices are contained within the parent <span class="code">[[RHCS_v3_cluster.conf#fencedevices.3B_Defining_Fence_Devices|fencedevices]]</span> tag, which has no attributes of its own. Within this parent tag are one or more <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fencedevice|fencedevice]]</span> child tags.


<source lang="bash">
<source lang="xml">
passwd ricci
<?xml version="1.0"?>
</source>
<cluster name="an-cluster-A" config_version="4">
<source lang="text">
        <cman expected_votes="1" two_node="1" />
Changing password for user ricci.
        <clusternodes>
New password:
                <clusternode name="an-node01.alteeve.ca" nodeid="1" />
Retype new password:
                <clusternode name="an-node02.alteeve.ca" nodeid="2" />
passwd: all authentication tokens updated successfully.
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
</cluster>
</source>
</source>


In our cluster, each fence device used will have its own <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fencedevice|fencedevice]]</span> tag. If you are using [[IPMI]], this means you will have a <span class="code">fencedevice</span> entry for each node, as each physical IPMI [[BMC]] is a unique fence device. On the other hand, fence devices that support multiple nodes, like switched PDUs, will have just one entry. In our case, we're using both types, so we have three fence devices; The two IPMI BMCs plus the switched PDU.

All <span class="code">fencedevice</span> tags share two basic attributes; <span class="code">[[RHCS_v3_cluster.conf#fencedevice.27s_name_attribute|name]]=""</span> and <span class="code">[[RHCS_v3_cluster.conf#fencedevice.27s_agent_attribute|agent]]=""</span>.


<source lang="bash">
* The <span class="code">name</span> attribute must be unique among all the fence devices in your cluster. As we will see in the next step, this name will be used within the <span class="code"><clusternode...></span> tag.  
chkconfig ricci on
* The <span class="code">agent</span> tag tells the cluster which [[fence agent]] to use when the <span class="code">[[fenced]]</span> daemon needs to communicate with the physical fence device. A fence agent is simple a shell script that acts as a go-between layer between the <span class="code">fenced</span> daemon and the fence hardware. This agent takes the arguments from the daemon, like what port to act on and what action to take, and performs the requested action against the target node. The agent is responsible for ensuring that the execution succeeded and returning an appropriate success or failure exit code.  
/etc/init.d/ricci start
</source>
<source lang="text">
Starting ricci:                                            [ OK  ]
</source>
 
{{note|1=If you don't see <span class="code">[  OK  ]</span>, don't worry, it is probably because it was already running.}}


=== Starting the Cluster for the First Time ===
For those curious, the full details are described in the <span class="code">[https://fedorahosted.org/cluster/wiki/FenceAgentAPI FenceAgentAPI]</span>. If you have two or more of the same fence device, like IPMI, then you will use the same fence <span class="code">agent</span> value a corresponding number of times.


Beyond these two attributes, each fence agent will have its own subset of attributes. The scope of these is outside this tutorial, though we will see examples for IPMI and a switched PDU. All fence agents have a corresponding man page that will show you what attributes they accept and how they are used. The two fence agents we will see here have their attributes defined in the following <span class="code">[[man]]</span> pages.

* <span class="code">man fence_ipmilan</span> - IPMI fence agent.
* <span class="code">man fence_apc_snmp</span> - APC-brand switched PDU using [[SNMP]].

The example above is what this tutorial will use.
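
Although it is normally <span class="code">fenced</span> that calls these agents, you can also run them by hand, which is a useful sanity check before trusting them with a real fence. The call below is only a sketch; it assumes the IPMI address and credentials used in this tutorial's <span class="code">fencedevice</span> entries, and it uses the <span class="code">status</span> action so that nothing actually gets powered off or rebooted.

<source lang="bash">
fence_ipmilan -a an-node01.ipmi -l root -p secret -o status
</source>

If the agent can reach and log into the BMC, it will report the node's power state and exit successfully.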


=== Using the Fence Devices ===

Now that we have nodes and fence devices defined, we will go back and tie them together. This is done by:
* Defining a <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fence|fence]]</span> tag containing all fence methods and devices.
** Defining one or more <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_method|method]]</span> tag(s) containing the device call(s) needed for each fence attempt.
*** Defining one or more <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_device|device]]</span> tag(s) containing attributes describing how to call the fence device to kill this node.
 
Here is how we implement [[IPMI]] as the primary fence device with the APC switched PDU as the backup method.
 
<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="5">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
</cluster>
</source>
 
First, notice that the <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fence|fence]]</span> tag has no attributes. It's merely a parent for the <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_method|method]](s)</span> child elements.
 
There are two <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_method|method]]</span> elements, one for each fence device, named <span class="code">ipmi</span> and <span class="code">pdu</span>. These names are merely descriptive and can be whatever you feel is most appropriate.
 
Within each <span class="code">method</span> element is one or more <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_device|device]]</span> tags. For a given method to succeed, all defined <span class="code">device</span> elements must themselves succeed. This is very useful for grouping calls to separate PDUs when dealing with nodes having redundant power supplies, as shown in the [[2-Node_Red_Hat_KVM_Cluster_Tutorial#Example_.3Cfencedevice....3E_Tag_For_APC_Switched_PDUs|PDU example]] below.
 
The actual fence <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_device|device]]</span> configuration is the final piece of the puzzle. It is here that you specify per-node configuration options and link these attributes to a given <span class="code">[[RHCS_v3_cluster.conf#Tag.3B_fencedevice|fencedevice]]</span>. Here, we see the link to the <span class="code">fencedevice</span> via the <span class="code">[[RHCS_v3_cluster.conf#device.27s_name_attribute|name]]</span>, <span class="code">ipmi_an01</span> in this example.


Note that the PDU definition needs a <span class="code">port=""</span> attribute where the IPMI fence devices do not. These are the sorts of differences you will find, varying depending on how the fence device agent works.

When a fence call is needed, the fence devices will be called in the order they are found here. If both devices fail, the cluster will go back to the start and try again, looping indefinitely until one device succeeds.

{{note|1=It's important to understand why we use IPMI as the primary fence device. The FenceAgentAPI specification suggests, but does not require, that a fence device confirm that the node is off. IPMI can do this, the switched PDU can not. Thus, IPMI won't return a success unless the node is truly off. The PDU, however, will return a success once the power is cut to the requested port. The risk is that a misconfigured node with redundant power supplies may in fact still be running, leading to disastrous consequences.}}
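
If you want to see for yourself that [[IPMI]] can report the true power state of a node, you can query the [[BMC]] directly. This is only an illustrative check; it assumes the <span class="code">ipmitool</span> package is installed and reuses the BMC hostname and credentials from the <span class="code">fencedevice</span> entries above (depending on your BMC, you may need <span class="code">-I lan</span> instead of <span class="code">-I lanplus</span>).

<source lang="bash">
ipmitool -I lanplus -H an-node02.ipmi -U root -P secret chassis power status
</source>

The reported chassis power state is exactly what <span class="code">fence_ipmilan</span> relies on when confirming that a fence succeeded.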


<source lang="bash">
Let's step through an example fence call to help show how the per-cluster and fence device attributes are combined during a fence call.
/etc/init.d/cman start
 
</source>
* The cluster manager decides that a node needs to be fenced. Let's say that the victim is <span class="code">an-node02</span>.
<source lang="text">
* The first <span class="code">method</span> in the <span class="code">fence</span> section under <span class="code">an-node02</span> is consulted. Within it there are two <span class="code">method</span> entries, named <span class="code">ipmi</span> and <span class="code">pdu</span>. The IPMI method's <span class="code">device</span> has one attribute while the PDU's <span class="code">device</span> has two attributes;
Starting cluster:
** <span class="code">port</span>; only found in the PDU <span class="code">method</span>, this tells the cluster that <span class="code">an-node02</span> is connected to switched PDU's outlet number <span class="code">2</span>.
  Checking if cluster has been disabled at boot...       [  OK  ]
** <span class="code">action</span>; Found on both devices, this tells the cluster that the fence action to take is <span class="code">reboot</span>. How this action is actually interpreted depends on the fence device in use, though the name certainly implies that the node will be forced off and then restarted.
  Checking Network Manager...                            [  OK  ]
* The cluster searches in <span class="code">fencedevices</span> for a <span class="code">fencedevice</span> matching the name <span class="code">ipmi_an02</span>. This fence device has four attributes;
  Global setup...                                         [  OK  ]
** <span class="code">agent</span>; This tells the cluster to call the <span class="code">fence_ipmilan</span> fence agent script, as we discussed earlier.
  Loading kernel modules...                               [  OK  ]
** <span class="code">ipaddr</span>; This tells the fence agent where on the network to find this particular IPMI BMC. This is how multiple fence devices of the same type can be used in the cluster.
  Mounting configfs...                                    [  OK  ]
** <span class="code">login</span>; This is the login user name to use when authenticating against the fence device.
  Starting cman...                                       [  OK  ]
** <span class="code">passwd</span>; This is the password to supply along with the <span class="code">login</span> name when authenticating against the fence device.
  Waiting for quorum...                                  [  OK  ]
* Should the IPMI fence call fail for some reason, the cluster will move on to the second <span class="code">pdu</span> method, repeating the steps above but using the PDU values.
  Starting fenced...                                      [  OK  ]
 
  Starting dlm_controld...                               [  OK  ]
When the cluster calls the fence agent, it does so by initially calling the fence agent script with no arguments.
  Starting gfs_controld...                                [  OK  ]
 
  Unfencing self...                                      [  OK  ]
<source lang="bash">
  Joining fence domain...                                [  OK  ]
/usr/sbin/fence_ipmilan
</source>
</source>


Then it will pass to that agent the following arguments:


<source lang="text">
<source lang="text">
Oct 23 11:15:33 an-node01 kernel: DLM (built Oct  6 2011 19:25:48) installed
ipaddr=an-node02.ipmi
Oct 23 11:15:33 an-node01 corosync[10450]:  [MAIN  ] Corosync Cluster Engine ('1.2.3'): started and ready to provide service.
login=root
Oct 23 11:15:33 an-node01 corosync[10450]:  [MAIN  ] Corosync built-in features: nss dbus rdma snmp
passwd=secret
Oct 23 11:15:33 an-node01 corosync[10450]:  [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
action=reboot
Oct 23 11:15:33 an-node01 corosync[10450]:  [MAIN  ] Successfully parsed cman config
Oct 23 11:15:33 an-node01 corosync[10450]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 23 11:15:33 an-node01 corosync[10450]:  [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 23 11:15:33 an-node01 corosync[10450]:  [TOTEM ] The network interface [10.20.0.1] is now up.
Oct 23 11:15:33 an-node01 corosync[10450]:  [QUORUM] Using quorum provider quorum_cman
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Oct 23 11:15:33 an-node01 corosync[10450]:  [CMAN  ] CMAN 3.0.12 (built Sep 23 2011 20:31:00) started
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync configuration service
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync profile loading service
Oct 23 11:15:33 an-node01 corosync[10450]:  [QUORUM] Using quorum provider quorum_cman
Oct 23 11:15:33 an-node01 corosync[10450]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Oct 23 11:15:33 an-node01 corosync[10450]:  [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Oct 23 11:15:33 an-node01 corosync[10450]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 23 11:15:33 an-node01 corosync[10450]:  [CMAN  ] quorum regained, resuming activity
Oct 23 11:15:33 an-node01 corosync[10450]:  [QUORUM] This node is within the primary component and will provide service.
Oct 23 11:15:33 an-node01 corosync[10450]:  [QUORUM] Members[1]: 1
Oct 23 11:15:33 an-node01 corosync[10450]:  [QUORUM] Members[1]: 1
Oct 23 11:15:33 an-node01 corosync[10450]:  [CPG  ] downlist received left_list: 0
Oct 23 11:15:33 an-node01 corosync[10450]:  [CPG  ] chosen downlist from node r(0) ip(10.20.0.1)
Oct 23 11:15:33 an-node01 corosync[10450]:  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 23 11:15:33 an-node01 corosync[10450]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 23 11:15:33 an-node01 corosync[10450]:  [QUORUM] Members[2]: 1 2
Oct 23 11:15:33 an-node01 corosync[10450]:  [QUORUM] Members[2]: 1 2
Oct 23 11:15:33 an-node01 corosync[10450]:  [CPG  ] downlist received left_list: 0
Oct 23 11:15:33 an-node01 corosync[10450]:  [CPG  ] downlist received left_list: 0
Oct 23 11:15:33 an-node01 corosync[10450]:  [CPG  ] chosen downlist from node r(0) ip(10.20.0.1)
Oct 23 11:15:33 an-node01 corosync[10450]:  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 23 11:15:37 an-node01 fenced[10513]: fenced 3.0.12 started
Oct 23 11:15:37 an-node01 dlm_controld[10539]: dlm_controld 3.0.12 started
Oct 23 11:15:38 an-node01 gfs_controld[10590]: gfs_controld 3.0.12 started
</source>
</source>


As you can see then, the first three arguments are from the <span class="code">fencedevice</span> attributes and the last one is from the <span class="code">device</span> attributes under <span class="code">an-node02</span>'s <span class="code">clusternode</span>'s <span class="code">fence</span> tag.
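
You can mimic this hand-off yourself by feeding the same key=value pairs to the agent on its standard input. This is only a sketch for testing; the <span class="code">action</span> has been swapped to <span class="code">status</span> so the target node is left alone.

<source lang="bash">
# Simulate the arguments fenced would pass, but only query the power state.
echo -e "ipaddr=an-node02.ipmi\nlogin=root\npasswd=secret\naction=status" | /usr/sbin/fence_ipmilan
</source>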
 
If this method fails, then the PDU will be called in a very similar way, but with an extra argument from the <span class="code">device</span> attributes.


<source lang="bash">
<source lang="bash">
cman_tool status
/usr/sbin/fence_apc_snmp
</source>
</source>
<source lang="bash">
 
Version: 6.2.0
Then it will pass to that agent the following arguments:
Config Version: 8
 
Cluster Name: an-clusterA
<source lang="text">
Cluster Id: 29382
ipaddr=pdu2.alteeve.ca
Cluster Member: Yes
port=2
Cluster Generation: 132
action=reboot
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1 
Active subsystems: 7
Flags: 2node
Ports Bound: 0 
Node name: an-node01.alteeve.com
Node ID: 1
Multicast addresses: 239.192.114.57
Node addresses: 10.20.0.1
</source>
</source>


Should this fail, the cluster will go back and try the IPMI interface again. It will loop through the fence device methods forever until one of the methods succeeds.

Below are snippets from other clusters using different fence device configurations which might help you build your cluster.
 
==== Example <fencedevice...> Tag For IPMI ====
 
{{warning|1=When using [[IPMI]] for fencing, it is very important that you disable [[ACPI]]. If <span class="code">acpid</span> is running when an IPMI-based fence is called against it, it will begin a graceful shutdown. This means that it will stay running for another four seconds. This is more than enough time for it to initiate a shutdown of the peer, resulting in both nodes powering down if the network is interrupted.}}


As stated above, it is critical to stop the <span class="code">acpid</span> daemon and to prevent it from starting with the server.


<source lang="bash">
<source lang="bash">
corosync-objctl
chkconfig acpid off
/etc/init.d/acpid stop
</source>
</source>
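
You can confirm that <span class="code">acpid</span> is now disabled in all run-levels, the same way we will check other daemons later in this tutorial. The output below is simply what you should expect to see after the two commands above.

<source lang="bash">
chkconfig --list acpid
</source>
<source lang="text">
acpid          0:off 1:off 2:off 3:off 4:off 5:off 6:off
</source>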
<source lang="text">
 
cluster.name=an-clusterA
{{warning|1=After this tutorial was completed, a new <span class="code"><device ... /></span> attribute called <span class="code">delay="..."</span> was added. This is a very useful attribute that allows you to tell <span class="code">fenced</span> "hey, if you need to fence node X, pause for Y seconds before doing so". By setting this on only one node, you can effectively ensure that when both nodes try to fence each other at the same time, the one with the <span class="code">delay="Y"</span> set will always win.}}
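
As a sketch of how that looks, the <span class="code">delay</span> attribute simply gets added to the <span class="code">device</span> entry of the node you want to survive a fence race; the 15 second value here is only an example.

<source lang="xml">
<device name="ipmi_an01" action="reboot" delay="15" />
</source>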

Here we will show what [[IPMI]] <span class="code"><fencedevice...></span> tags look like.

<source lang="xml">
...
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot"/>
                                </method>
                        </fence>
                </clusternode>
...
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
        </fencedevices>
</source>

* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
* <span class="code">login</span>; This is the login name to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">passwd</span>; This is the login password to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.

{{note|1=We will see shortly that, unlike switched PDUs or other network fence devices, [[IPMI]] does not have ports. This is because each [[IPMI]] BMC supports just its host system. More on that later.}}
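
For example, if the IPMI interfaces are not already in <span class="code">/etc/hosts</span>, entries like the following would do. The addresses here are purely illustrative; use whatever addresses you assigned to your own IPMI interfaces.

<source lang="text">
# Example only; substitute the real addresses of your IPMI BMCs.
10.20.2.1	an-node01.ipmi
10.20.2.2	an-node02.ipmi
</source>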

==== Example <fencedevice...> Tag For HP iLO ====

Here we will show how to use [http://h18013.www1.hp.com/products/servers/management/remotemgmt.html iLO] (Integrated Lights-Out) management devices as <span class="code"><fencedevice...></span> entries. We won't be using it ourselves, but it is quite popular as a fence device so I wanted to show an example of its use.

<source lang="xml">
...
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ilo">
                                        <device action="reboot" name="ilo_an01"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ilo">
                                        <device action="reboot" name="ilo_an02"/>
                                </method>
                        </fence>
                </clusternode>
...
        <fencedevices>
                <fencedevice agent="fence_ilo" ipaddr="an-node01.ilo" login="root" name="ilo_an01" passwd="secret"/>
                <fencedevice agent="fence_ilo" ipaddr="an-node02.ilo" login="root" name="ilo_an02" passwd="secret"/>
        </fencedevices>
</source>

* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
* <span class="code">login</span>; This is the login name to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">passwd</span>; This is the login password to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.

{{note|1=Like [[IPMI]], [[iLO]] does not have ports. This is because each [[iLO]] BMC supports just its host system.}}

{{note|1=A reader kindly reported that iLO3 does not work with the <span class="code">fence_ilo</span> agent. The recommendation is to now use <span class="code">fence_ipmilan</span> with the following options; <span class="code"><fencedevice agent="fence_ipmilan" ipaddr="an-node01.ilo" lanplus="1" login="Administrator" name="ilo_an01" passwd="secret" power_wait="4"/></span>.}}
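
If you do have iLO3, you can test that <span class="code">fence_ipmilan</span> based configuration by hand before committing it to <span class="code">cluster.conf</span>. This is only a sketch reusing the attribute values from the note above, with the action swapped to <span class="code">status</span> so nothing is rebooted.

<source lang="bash">
# Pass the same key=value pairs the cluster would, but only query the state.
echo -e "ipaddr=an-node01.ilo\nlanplus=1\nlogin=Administrator\npasswd=secret\naction=status" | fence_ipmilan
</source>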

==== Example <fencedevice...> Tag For Dell's DRAC ====

{{note|1=I have not tested fencing on Dell, but am using a reference working configuration from another user.}}

Here we will show how to use [http://support.dell.com/support/edocs/software/smdrac3/ DRAC] (Dell Remote Access Controller) management devices as <span class="code"><fencedevice...></span> entries. We won't be using it ourselves, but it is another popular fence device so I wanted to show an example of its use.

<source lang="xml">
...
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="drac">
                                        <device action="reboot" name="drac_an01"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="drac">
                                        <device action="reboot" name="drac_an02"/>
                                </method>
                        </fence>
                </clusternode>
...
        <fencedevices>
                <fencedevice agent="fence_drac5" cmd_prompt="admin1-&gt;" ipaddr="an-node01.drac" login="root" name="drac_an01" passwd="secret" secure="1"/>
                <fencedevice agent="fence_drac5" cmd_prompt="admin1-&gt;" ipaddr="an-node02.drac" login="root" name="drac_an02" passwd="secret" secure="1"/>
        </fencedevices>
</source>

* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
* <span class="code">login</span>; This is the login name to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">passwd</span>; This is the login password to use when the <span class="code">fenced</span> daemon connects to the device.
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.
* <span class="code">cmd_prompt</span>; This is the string that the fence agent looks for when talking to the DRAC device.
* <span class="code">secure</span>; This tells the agent to use [[SSH]].

{{note|1=Like [[IPMI]] and [[iLO]], [[DRAC]] does not have ports. This is because each [[DRAC]] BMC supports just its host system.}}

==== Example <fencedevice...> Tag For APC Switched PDUs ====

Here we will show how to configure APC switched [[PDU]] <span class="code"><fencedevice...></span> tags. There are two agents for these devices; One that uses the telnet or ssh login and one that uses [[SNMP]]. This tutorial uses the latter, and it is recommended that you do the same.

The example below is from a production cluster that uses redundant power supplies and two separate PDUs. This is how you will want to configure any production clusters you build.
runtime.services.evs.0.rx=0
 
runtime.services.cfg.service_id=7
<source lang="xml">
runtime.services.cfg.0.tx=0
...
runtime.services.cfg.0.rx=0
<clusternode name="an-node01.alteeve.ca" nodeid="1">
runtime.services.cfg.1.tx=0
<fence>
runtime.services.cfg.1.rx=0
<method name="pdu2">
runtime.services.cfg.2.tx=0
<device action="reboot" name="pdu1" port="1"/>
runtime.services.cfg.2.rx=0
<device action="reboot" name="pdu2" port="1"/>
runtime.services.cfg.3.tx=0
</method>
runtime.services.cfg.3.rx=0
</fence>
runtime.services.cpg.service_id=8
</clusternode>
runtime.services.cpg.0.tx=4
<clusternode name="an-node02.alteeve.ca" nodeid="2">
runtime.services.cpg.0.rx=8
<fence>
runtime.services.cpg.1.tx=0
<method name="pdu2">
runtime.services.cpg.1.rx=0
<device action="reboot" name="pdu1" port="2"/>
runtime.services.cpg.2.tx=0
<device action="reboot" name="pdu2" port="2"/>
runtime.services.cpg.2.rx=0
</method>
runtime.services.cpg.3.tx=16
</fence>
runtime.services.cpg.3.rx=23
</clusternode>
runtime.services.cpg.4.tx=0
...
runtime.services.cpg.4.rx=0
<fencedevices>
runtime.services.cpg.5.tx=2
<fencedevice agent="fence_apc_snmp" ipaddr="pdu1.alteeve.ca" name="pdu1" />
runtime.services.cpg.5.rx=3
<fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
runtime.services.confdb.service_id=11
</fencedevices>
runtime.services.pload.service_id=13
</source>

* <span class="code">agent</span>; This is the name of the script under <span class="code">/usr/sbin/</span> to use when calling the physical PDU.
* <span class="code">ipaddr</span>; This is the resolvable name or [[IP]] address of the device. If you use a resolvable name, it is strongly advised that you put the name in <span class="code">/etc/hosts</span> as [[DNS]] is another layer of abstraction which could fail.
* <span class="code">name</span>; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <span class="code"><clusternode...></span> element where appropriate.
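
As with IPMI, you can check a switched PDU outlet by hand before relying on it. This is only a sketch; it reuses the PDU name from the example above and the <span class="code">status</span> action, so no outlet is actually cycled.

<source lang="bash">
fence_apc_snmp -a pdu1.alteeve.ca -n 1 -o status
</source>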
 
=== Give Nodes More Time To Start ===

Clusters with three or more nodes will have to gain quorum before they can fence other nodes. As we discussed earlier though, this is not the case when using the <span class="code">[[RHCS_v3_cluster.conf#two_node|two_node]]="1"</span> attribute in the <span class="code">[[RHCS_v3_cluster.conf#cman.3B_The_Cluster_Manager|cman]]</span> element. What this means in practice is that if you start the cluster on one node and then wait too long to start the cluster on the second node, the first will fence the second.

The logic behind this is; When the cluster starts, it will try to talk to its fellow node and then fail. With the special <span class="code">two_node="1"</span> attribute set, the cluster knows that it is allowed to start clustered services, but it has no way to say for sure what state the other node is in. It could well be online and hosting services for all it knows. So it has to proceed on the assumption that the other node is alive and using shared resources. Given that, and given that it can not talk to the other node, its only safe option is to fence the other node. Only then can it be confident that it is safe to start providing clustered services.

<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="6">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
</cluster>
</source>
runtime.connections.gfs_controld:CPG:10590:26.sem_retry_count=0
 
runtime.connections.gfs_controld:CPG:10590:26.send_retry_count=0
The new tag is <span class="code">[[RHCS_v3_cluster.conf#fence_daemon.3B_Fencing|fence_daemon]]</span>, seen near the bottom if the file above. The change is made using the <span class="code">[[RHCS_v3_cluster.conf#post_join_delay|post_join_delay]]="30"</span> attribute. By default, the cluster will declare the other node dead after just <span class="code">6</span> seconds. The reason is that the larger this value, the slower the start-up of the cluster services will be. During testing and development though, I find this value to be far too short and frequently led to unnecessary fencing. Once your cluster is setup and working, it's not a bad idea to reduce this value to the lowest value with which you are comfortable.
runtime.connections.gfs_controld:CPG:10590:26.recv_retry_count=0
 
runtime.connections.gfs_controld:CPG:10590:26.flow_control=0
=== Configuring Totem ===
runtime.connections.gfs_controld:CPG:10590:26.flow_control_count=0
 
runtime.connections.gfs_controld:CPG:10590:26.queue_size=0
There are many attributes for the [[totem]] element. For now though, we're only going to set two of them. We know that cluster communication will be travelling over our private, secured [[BCN]] network, so for the sake of simplicity, we're going to disable encryption. We are also offering network redundancy using the bonding drivers, so we're also going to disable totem's [[redundant ring protocol]].

<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="7">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
        <totem rrp_mode="none" secauth="off"/>
</cluster>
</source>


{{note|1=At this time, [[redundant ring protocol]] is not supported ([[RHEL6]].1 and lower). It is in technology preview mode in [[RHEL6]].2 and above. This is another reason why we will not be using it in this tutorial.}}


[[RRP]] is an optional second ring that can be used for cluster communication in the case of a break down in the first ring. However, if you wish to explore it further, please take a look at the <span class="code">clusternode</span> element tag called <span class="code"><[[RHCS_v3_cluster.conf#Tag.3B_altname|altname]]...></span>. When <span class="code">altname</span> is used though, then the <span class="code">[[RHCS_v3_cluster.conf#rrp_mode|rrp_mode]]</span> attribute will need to be changed to either <span class="code">active</span> or <span class="code">passive</span> (the details of which are outside the scope of this tutorial).


The second option we're looking at here is the <span class="code">[[RHCS_v3_cluster.conf#secauth|secauth]]="off"</span> attribute. This controls whether the cluster communications are encrypted or not. We can safely disable this because we're working on a known-private network, which yields two benefits; It's simpler to setup and it's a lot faster. If you must encrypt the cluster communications, then you can do so here. The details of which are also outside the scope of this tutorial though.


=== Validating and Pushing the /etc/cluster/cluster.conf File ===

One of the most noticeable changes in [[RHCS]] cluster stable 3 is that we no longer have to make a long, cryptic <span class="code">xmllint</span> call to validate our cluster configuration. Now we can simply call <span class="code">ccs_config_validate</span>.

<source lang="bash">
ccs_config_validate
</source>
<source lang="text">
Configuration validates
</source>

If there was a problem, you need to go back and fix it. '''DO NOT''' proceed until your configuration validates. Once it does, we're ready to move on!


With it validated, we need to push it to the other node. As the cluster is not running yet, we will push it out using <span class="code">rsync</span>.

<source lang="bash">
rsync -av /etc/cluster/cluster.conf root@an-node02:/etc/cluster/
</source>
<source lang="text">
sending incremental file list
cluster.conf

sent 1198 bytes  received 31 bytes  2458.00 bytes/sec
total size is 1118  speedup is 0.91
</source>


Perfect!
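If you want to confirm that the pushed copy matches the original exactly, a checksum on each node will tell you at a glance. This is just a convenience sketch; it assumes the same root <span class="code">ssh</span> access that the <span class="code">rsync</span> call above already relies on.

<source lang="bash">
# Run from an-node01; the two sums should be identical.
md5sum /etc/cluster/cluster.conf
ssh root@an-node02 "md5sum /etc/cluster/cluster.conf"
</source>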
=== Setting Up ricci ===


If you are watching <span class="code">an-node01</span>'s display, you should now see it starting to boot back up. Once it finishes booting, log into it and restart <span class="code">cman</span> before moving on to the next test.
Another change from [[RHCS]] stable 2 is how configuration changes are propagated. Before, after a change, we'd push out the updated cluster configuration by calling <span class="code">ccs_tool update /etc/cluster/cluster.conf</span>. Now this is done with <span class="code">cman_tool version -r</span>. More fundamentally though, the cluster needs to authenticate against each node and does this using the local <span class="code">ricci</span> system user. The user has no password initially, so we need to set one.


=== Cutting the Power to an-node01 ===
On '''both''' nodes:


As was discussed earlier, IPMI and other out-of-band management interfaces have a fatal flaw as a fence device. Their [[BMC]] draws its power from the same power supply as the node itself. Thus, when the power supply itself fails (or the mains connection is pulled/tripped over), fencing via IPMI will fail. This makes the power supply a single point of failure, which is what the PDU protects us against.
<source lang="bash">
passwd ricci
</source>
<source lang="text">
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
</source>


So to simulate a failed power supply, we're going to use <span class="code">an-node02</span>'s <span class="code">fence_apc</span> fence agent to turn off the power to <span class="code">an-node01</span>.  
You will need to enter this password once from each node against the other node. We will see this later.
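If you are scripting your node setup and would rather set the password without being prompted, the RHEL-family <span class="code">--stdin</span> option to <span class="code">passwd</span> can be used. This is only a sketch and the password shown is a placeholder, so substitute your own;

<source lang="bash">
# Set the ricci password non-interactively (placeholder password).
echo "secret" | passwd --stdin ricci
</source>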


Alternatively, you could also just unplug the power and the fence would still succeed. The fence call only needs to confirm that the node is off to succeed. Whether the node restarts after or not is not important so far as the cluster is concerned.
Now make sure that the <span class="code">ricci</span> daemon is set to start on boot and is running now.
 
From '''<span class="code">an-node02</span>''', pull the power on <span class="code">an-node01</span> with the following call;


<source lang="bash">
fence_apc_snmp -a pdu2.alteeve.com -n 1 -o off
</source>
<source lang="text">
Success: Powered OFF
</source>

<source lang="bash">
chkconfig ricci on
chkconfig --list ricci
</source>
<source lang="text">
ricci          0:off 1:off 2:on 3:on 4:on 5:on 6:off
</source>


Back on <span class="code">an-node02</span>'s syslog, we should see the following entries;
Now start it up.


<source lang="text">
/etc/init.d/ricci start
</source>
<source lang="text">
Starting ricci:                                           [ OK ]
</source>

<source lang="text">
Oct 23 12:25:41 an-node02 corosync[10194]:  [TOTEM ] A processor failed, forming new configuration.
Oct 23 12:25:43 an-node02 corosync[10194]:  [QUORUM] Members[1]: 2
Oct 23 12:25:43 an-node02 corosync[10194]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 23 12:25:43 an-node02 corosync[10194]:  [CPG  ] downlist received left_list: 1
Oct 23 12:25:43 an-node02 corosync[10194]:  [CPG  ] chosen downlist from node r(0) ip(10.20.0.2)
Oct 23 12:25:43 an-node02 corosync[10194]:  [MAIN ] Completed service synchronization, ready to provide service.
Oct 23 12:25:43 an-node02 kernel: dlm: closing connection to node 1
Oct 23 12:25:43 an-node02 fenced[10259]: fencing node an-node01.alteeve.com
Oct 23 12:26:03 an-node02 fenced[10259]: fence an-node01.alteeve.com dev 0.0 agent fence_ipmilan result: error from agent
Oct 23 12:26:03 an-node02 fenced[10259]: fence an-node01.alteeve.com success
</source>


Hoozah!
{{note|1=If you don't see <span class="code">[  OK  ]</span>, don't worry, it is probably because it was already running.}}


Notice that there is an error from the <span class="code">fence_ipmilan</span>. This is exactly what we expected because of the IPMI BMC having lost power.
We also need to have a daemon called <span class="code">modclusterd</span> set to start on boot.


So now we know that <span class="code">an-node01</span> can be fenced successfully from both fence devices. Now we need to run the same tests against <span class="code">an-node02</span>.
<source lang="bash">
chkconfig modclusterd on
chkconfig --list modclusterd
</source>
<source lang="text">
modclusterd    0:off 1:off 2:on 3:on 4:on 5:on 6:off
</source>


=== Hanging an-node02 ===
Now start it up.


{{warning|1='''DO NOT ASSUME THAT <span class="code">an-node02</span> WILL FENCE PROPERLY JUST BECAUSE <span class="code">an-node01</span> PASSED!'''. There are many ways that a fence could fail; Bad password, misconfigured device, plugged into the wrong port on the PDU and so on. Always test all nodes using all methods!}}
<source lang="text">
/etc/init.d/modclusterd start
</source>
<source lang="text">
Starting Cluster Module - cluster monitor: Setting verbosity level to LogBasic
                                                          [  OK  ]
</source>
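If you find yourself repeating these steps on both nodes often, the same enable-and-start sequence for both daemons can be rolled into a small loop. This is just a convenience sketch of the commands already shown above;

<source lang="bash">
# Enable and start ricci and modclusterd in one pass.
for daemon in ricci modclusterd; do
    chkconfig ${daemon} on
    /etc/init.d/${daemon} start
done
</source>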


Be sure to be <span class="code">tail</span>ing the <span class="code">/var/log/messages</span> on <span class="code">an-node01</span>. Go to <span class="code">an-node02</span>'s first terminal and run the following command.
=== Starting the Cluster for the First Time ===


{{note|1=This command will not return and you will lose all ability to talk to this node until it is rebooted.}}
It's a good idea to open a second terminal on either node and <span class="code">tail</span> the <span class="code">/var/log/messages</span> [[syslog]] file. All cluster messages will be recorded here and it will help to debug problems if you can watch the logs. To do this, in the new terminal windows run;
 
On '''<span class="code">an-node02</span>''' run:


<source lang="bash">
echo c > /proc/sysrq-trigger
</source>

<source lang="bash">
clear; tail -f -n 0 /var/log/messages
</source>


On '''<span class="code">an-node01</span>''''s syslog terminal, you should see the following entries in the log.
This will clear the screen and start watching for new lines to be written to syslog. When you are done watching syslog, press the <span class="code"><ctrl></span> + <span class="code">c</span> key combination.
 
How you lay out your terminal windows is, obviously, up to your own preferences. Below is a configuration I have found very useful.


[[Image:2-node-rhcs3_terminal-window-layout_01.png|thumb|center|700px|Terminal window layout for watching 2 nodes. Left windows are used for entering commands and the right windows are used for tailing syslog.]]

<source lang="text">
Oct 23 11:32:34 an-node01 corosync[2377]:  [TOTEM ] A processor failed, forming new configuration.
Oct 23 11:32:36 an-node01 corosync[2377]:  [QUORUM] Members[1]: 1
Oct 23 11:32:36 an-node01 corosync[2377]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 23 11:32:36 an-node01 kernel: dlm: closing connection to node 2
Oct 23 11:32:36 an-node01 corosync[2377]:  [CPG  ] downlist received left_list: 1
Oct 23 11:32:36 an-node01 corosync[2377]:  [CPG  ] chosen downlist from node r(0) ip(10.20.0.1)
Oct 23 11:32:36 an-node01 corosync[2377]:  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 23 11:32:36 an-node01 fenced[2433]: fencing node an-node02.alteeve.com
Oct 23 11:32:50 an-node01 fenced[2433]: fence an-node02.alteeve.com success
</source>


Again, perfect!
With the terminals set up, let's start the cluster!


=== Cutting the Power to an-node02 ===
{{warning|1=If you don't start <span class="code">cman</span> on both nodes within 30 seconds, the slower node will be fenced.}}


From '''<span class="code">an-node01</span>''', pull the power on <span class="code">an-node02</span> with the following call;

<source lang="bash">
fence_apc_snmp -a pdu2.alteeve.com -n 2 -o off
</source>
<source lang="text">
Success: Powered OFF
</source>

On '''both''' nodes, run:

<source lang="bash">
/etc/init.d/cman start
</source>
<source lang="text">
Starting cluster:  
  Checking if cluster has been disabled at boot...        [  OK  ]
  Checking Network Manager...                            [  OK  ]
  Global setup...                                        [  OK  ]
  Loading kernel modules...                              [  OK  ]
  Mounting configfs...                                    [  OK  ]
  Starting cman...                                        [  OK  ]
  Waiting for quorum...                                  [  OK  ]
  Starting fenced...                                      [  OK  ]
  Starting dlm_controld...                                [  OK  ]
  Starting gfs_controld...                                [  OK  ]
  Unfencing self...                                      [  OK  ]
  Joining fence domain...                                [  OK  ]
</source>
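Because of the 30 second window mentioned in the warning above, it can be handy to start <span class="code">cman</span> on both nodes from a single terminal so the two starts land close together. This is only a sketch and assumes you have root <span class="code">ssh</span> access to both nodes from wherever you run it;

<source lang="bash">
# Start cman on both nodes at (nearly) the same time.
for node in an-node01 an-node02; do
    ssh root@${node} "/etc/init.d/cman start" &
done
wait
</source>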


Back on <span class="code">an-node01</span>'s syslog, we should see the following entries;
Here is what you should see in syslog:


<source lang="text">
<source lang="text">
Oct 23 11:34:52 an-node01 corosync[2377]:  [TOTEM ] A processor failed, forming new configuration.
Dec 13 12:08:44 an-node01 kernel: DLM (built Nov  9 2011 08:04:11) installed
Oct 23 11:34:54 an-node01 corosync[2377]:  [QUORUM] Members[1]: 1
Dec 13 12:08:45 an-node01 corosync[3434]:  [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Oct 23 11:34:54 an-node01 corosync[2377]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:08:45 an-node01 corosync[3434]:  [MAIN  ] Corosync built-in features: nss dbus rdma snmp
Oct 23 11:34:54 an-node01 kernel: dlm: closing connection to node 2
Dec 13 12:08:45 an-node01 corosync[3434]:  [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Oct 23 11:34:54 an-node01 corosync[2377]:  [CPG   ] downlist received left_list: 1
Dec 13 12:08:45 an-node01 corosync[3434]:  [MAIN  ] Successfully parsed cman config
Oct 23 11:34:54 an-node01 corosync[2377]:  [CPG  ] chosen downlist from node r(0) ip(10.20.0.1)  
Dec 13 12:08:45 an-node01 corosync[3434]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 23 11:34:54 an-node01 corosync[2377]:  [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:08:45 an-node01 corosync[3434]:  [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 23 11:34:54 an-node01 fenced[2433]: fencing node an-node02.alteeve.com
Dec 13 12:08:46 an-node01 corosync[3434]:  [TOTEM ] The network interface [10.20.0.1] is now up.
Oct 23 11:35:14 an-node01 fenced[2433]: fence an-node02.alteeve.com dev 0.0 agent fence_ipmilan result: error from agent
Dec 13 12:08:46 an-node01 corosync[3434]:  [QUORUM] Using quorum provider quorum_cman
Oct 23 11:35:14 an-node01 fenced[2433]: fence an-node02.alteeve.com success
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Dec 13 12:08:46 an-node01 corosync[3434]:  [CMAN  ] CMAN 3.0.12.1 (built Sep 30 2011 03:17:43) started
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync configuration service
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync profile loading service
Dec 13 12:08:46 an-node01 corosync[3434]:  [QUORUM] Using quorum provider quorum_cman
Dec 13 12:08:46 an-node01 corosync[3434]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Dec 13 12:08:46 an-node01 corosync[3434]:   [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Dec 13 12:08:46 an-node01 corosync[3434]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:08:46 an-node01 corosync[3434]:  [CMAN  ] quorum regained, resuming activity
Dec 13 12:08:46 an-node01 corosync[3434]:   [QUORUM] This node is within the primary component and will provide service.
Dec 13 12:08:46 an-node01 corosync[3434]:  [QUORUM] Members[1]: 1
Dec 13 12:08:46 an-node01 corosync[3434]:   [QUORUM] Members[1]: 1
Dec 13 12:08:46 an-node01 corosync[3434]:  [CPG  ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:0 left:0)
Dec 13 12:08:46 an-node01 corosync[3434]:  [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:08:47 an-node01 corosync[3434]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:08:47 an-node01 corosync[3434]:  [QUORUM] Members[2]: 1 2
Dec 13 12:08:47 an-node01 corosync[3434]:  [QUORUM] Members[2]: 1 2
Dec 13 12:08:47 an-node01 corosync[3434]:  [CPG  ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:1 left:0)
Dec 13 12:08:47 an-node01 corosync[3434]:  [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:08:49 an-node01 fenced[3490]: fenced 3.0.12.1 started
Dec 13 12:08:49 an-node01 dlm_controld[3515]: dlm_controld 3.0.12.1 started
Dec 13 12:08:51 an-node01 gfs_controld[3565]: gfs_controld 3.0.12.1 started
</source>
</source>


Woot!
{{note|1=If you see messages like <span class="code">rsyslogd-2177: imuxsock begins to drop messages from pid 29288 due to rate-limiting</span>, this is caused by new default configuration in <span class="code">[[rsyslogd]]</span>. To disable rate limiting, please follow the instructions in [[#Disabling rsyslog Rate Limiting|Disabling rsyslog Rate Limiting]] below.}}


Only now can we safely say that our fencing is set up and working properly.
Now to confirm that the cluster is operating properly, run <span class="code">cman_tool status</span>;


== Testing Network Redundancy ==
Next up for testing is our network configuration. Seeing as we've built our bonds, we now need to test that they are working properly.

<source lang="bash">
cman_tool status
</source>
<source lang="text">
Version: 6.2.0
Config Version: 7
Cluster Name: an-cluster-A
Cluster Id: 24561
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1 
Active subsystems: 7
Flags: 2node
Ports Bound: 0 
Node name: an-node01.alteeve.ca
Node ID: 1
Multicast addresses: 239.192.95.81
Node addresses: 10.20.0.1
</source>


* Make sure that <span class="code">cman</span> has started on both nodes.
We can see that both nodes are talking because of the <span class="code">Nodes: 2</span> entry.
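If you would rather check membership with something you can grep in a script, the same information is available from <span class="code">cman_tool</span> directly. A small sketch;

<source lang="bash">
# List each node and its membership status; both should show 'M' for member.
cman_tool nodes

# Or pull just the member count out of the status output.
cman_tool status | grep "^Nodes:"
</source>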


First, we'll test all network cables individually, one node and one bonded interface at a time.
If you ever want to see the nitty-gritty configuration, you can run <span class="code">corosync-objctl</span>.


* For each network; IFN, SN and BCN;
<source lang="bash">
** On both nodes, start a ping flood against the opposing node, specifying the appropriate network name suffix, in the first window and start <span class="code">tail</span>ing syslog in the second window.
corosync-objctl
** <span class="code">watch</span> each bond's <span class="code">/proc/net/bonding/bondX</span> file to see which interfaces are active.
</source>
** Pull the currently-active network cable from the bond (either at the switch or at the node).
<source lang="text">
** Check the state of the bonds again and see that they've switched to their backup interface. If a node gets fenced, you know something went wrong. You should see a handful of lost packets in the ping flood.
cluster.name=an-cluster-A
** Restore the network cable and wait 2 minutes, then verify that the old primary interface was restored. You will see another handful of lost packets in the flood during the recovery.
cluster.config_version=7
** Pull the cable again, then restore it. This time, do not wait 2 minutes. After just a few seconds, pull the backup link and ensure that the bond immediately resumed use of the primary interface.
cluster.cman.expected_votes=1
** Repeat the above steps for all bonds on both nodes. This will take a while, but you need to ensure configuration errors are found now.
cluster.cman.two_node=1
 
cluster.cman.nodename=an-node01.alteeve.ca
{{warning|1=Testing the complete primary switch failure and subsequent recovery is very, very important. Please do NOT skip this step!}}
cluster.cman.cluster_id=24561
 
cluster.clusternodes.clusternode.name=an-node01.alteeve.ca
Once all bonds have been tested, we'll do a final test by failing the primary switch.
cluster.clusternodes.clusternode.nodeid=1
* Cut the power to the switch.
cluster.clusternodes.clusternode.fence.method.name=ipmi
* Check all bond status files. Confirm that all have switched to their backup links.
cluster.clusternodes.clusternode.fence.method.device.name=ipmi_an01
* Restore power to the switch and wait 2 minutes.
cluster.clusternodes.clusternode.fence.method.device.action=reboot
* Confirm that the bonds did not switch to the primary interfaces before the switch was ready to move data.
cluster.clusternodes.clusternode.fence.method.name=pdu
 
cluster.clusternodes.clusternode.fence.method.device.name=pdu2
If all of these steps pass and the cluster doesn't partition, then you can be confident that your network is configured properly for full redundancy.
cluster.clusternodes.clusternode.fence.method.device.port=1
 
cluster.clusternodes.clusternode.fence.method.device.action=reboot
=== Network Testing Terminal Layout ===
cluster.clusternodes.clusternode.name=an-node02.alteeve.ca
 
cluster.clusternodes.clusternode.nodeid=2
If you have a couple of monitors, particularly one with portrait mode, you might be able to open 16 terminals at once. This is how many are needed to run ping floods, watch the bond status files, tail syslog and watch cman_tool all at the same time. This configuration makes it very easy to keep a near real-time, complete view of all network components.
cluster.clusternodes.clusternode.fence.method.name=ipmi
 
cluster.clusternodes.clusternode.fence.method.device.name=ipmi_an02
On the left window, the top-left terminal shows <span class="code">watch cman_tool status</span> and the top-right terminal shows <span class="code">tail -f -n 0 /var/log/messages</span> for <span class="code">an-node01</span>. The bottom two terminals show the same for <span class="code">an-node02</span>.
cluster.clusternodes.clusternode.fence.method.device.action=reboot
 
cluster.clusternodes.clusternode.fence.method.name=pdu
On the right, portrait-mode window, the terminal layout used for monitoring the bonded link status and ping floods is shown. There are two columns; <span class="code">an-node01</span> on the left and <span class="code">an-node02</span> on the right. Each column is stacked into six rows, <span class="code">bond0</span> on the top followed by <span class="code">ping -f an-node02.bcn</span>, <span class="code">bond1</span> in the middle followed by <span class="code">ping -f an-node02.sn</span> and <span class="code">bond2</span> at the bottom followed by <span class="code">ping -f an-node02.ifn</span>. The left window shows the standard <span class="code">tail</span> on syslog plus <span class="code">watch cman_tool status</span>.
cluster.clusternodes.clusternode.fence.method.device.name=pdu2
 
cluster.clusternodes.clusternode.fence.method.device.port=2
[[Image:2-node_el6-tutorial_network-test_terminal-layout_01.png|thumb|center|700px|Terminal layout used for HA network testing; Calls shown.]]
cluster.clusternodes.clusternode.fence.method.device.action=reboot
 
cluster.fencedevices.fencedevice.name=ipmi_an01
[[Image:2-node_el6-tutorial_network-test_terminal-layout_02.png|thumb|center|700px|Terminal layout used for HA network testing; Calls running.]]
cluster.fencedevices.fencedevice.agent=fence_ipmilan
 
cluster.fencedevices.fencedevice.ipaddr=an-node01.ipmi
=== How to Know if the Tests Passed ===
cluster.fencedevices.fencedevice.login=root
 
cluster.fencedevices.fencedevice.passwd=secret
Well, the most obvious answer to this question is whether the cluster is still working after a switch is powered off.
cluster.fencedevices.fencedevice.name=ipmi_an02
 
cluster.fencedevices.fencedevice.agent=fence_ipmilan
We can be a little more subtle than that though.
cluster.fencedevices.fencedevice.ipaddr=an-node02.ipmi
 
cluster.fencedevices.fencedevice.login=root
The state of each bond is viewable by looking in the special <span class="code">/proc/net/bonding/bondX</span> files, where <span class="code">X</span> is the bond number. Let's take a look at <span class="code">bond0</span> on <span class="code">an-node01</span>.
cluster.fencedevices.fencedevice.passwd=secret
 
cluster.fencedevices.fencedevice.agent=fence_apc_snmp
<source lang="bash">
cluster.fencedevices.fencedevice.ipaddr=pdu2.alteeve.ca
cat /proc/net/bonding/bond0
cluster.fencedevices.fencedevice.name=pdu2
</source>
cluster.fence_daemon.post_join_delay=30
<source lang="text">
cluster.totem.rrp_mode=none
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
cluster.totem.secauth=off
 
totem.rrp_mode=none
Bonding Mode: fault-tolerance (active-backup)
totem.secauth=off
Primary Slave: eth0 (primary_reselect always)
totem.transport=udp
Currently Active Slave: eth0
totem.version=2
MII Status: up
totem.nodeid=1
MII Polling Interval (ms): 100
totem.vsftype=none
Up Delay (ms): 120000
totem.token=10000
Down Delay (ms): 0
totem.join=60
 
totem.fail_recv_const=2500
Slave Interface: eth0
totem.consensus=2000
MII Status: up
totem.key=an-cluster-A
Link Failure Count: 0
totem.interface.ringnumber=0
Permanent HW addr: 00:e0:81:c7:ec:49
totem.interface.bindnetaddr=10.20.0.1
Slave queue ID: 0
totem.interface.mcastaddr=239.192.95.81
 
totem.interface.mcastport=5405
Slave Interface: eth3
libccs.next_handle=7
MII Status: up
libccs.connection.ccs_handle=3
Link Failure Count: 0
libccs.connection.config_version=7
Permanent HW addr: 00:1b:21:9d:59:fc
libccs.connection.fullxpath=0
Slave queue ID: 0
libccs.connection.ccs_handle=4
</source>
libccs.connection.config_version=7
 
libccs.connection.fullxpath=0
We can see that the currently active interface is <span class="code">eth0</span>. This is the key bit we're going to be watching for these tests. I know that <span class="code">eth0</span> on <span class="code">an-node01</span> is connected to the first switch. So when I pull the cable to that switch, or when I fail that switch entirely, I should see <span class="code">eth3</span> take over.
libccs.connection.ccs_handle=5
 
libccs.connection.config_version=7
We'll also be watching syslog. If things work right, we should not see any messages from the cluster during failure and recovery.
libccs.connection.fullxpath=0
 
logging.timestamp=on
=== Failing The First Interface ===
logging.to_logfile=yes
 
logging.logfile=/var/log/cluster/corosync.log
Let's look at the first test. We'll fail <span class="code">an-node01</span>'s <span class="code">eth0</span> interface by pulling its cable.
logging.logfile_priority=info
 
logging.to_syslog=yes
Let's look again at <span class="code">bond0</span>'s status file;
logging.syslog_facility=local4
 
logging.syslog_priority=info
<source lang="bash">
aisexec.user=ais
cat /proc/net/bonding/bond0
aisexec.group=ais
</source>
service.name=corosync_quorum
<source lang="text">
service.ver=0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
service.name=corosync_cman
 
service.ver=0
Bonding Mode: fault-tolerance (active-backup)
quorum.provider=quorum_cman
Primary Slave: eth0 (primary_reselect always)
service.name=openais_ckpt
Currently Active Slave: eth3
service.ver=0
MII Status: up
runtime.services.quorum.service_id=12
MII Polling Interval (ms): 100
runtime.services.cman.service_id=9
Up Delay (ms): 120000
runtime.services.ckpt.service_id=3
Down Delay (ms): 0
runtime.services.ckpt.0.tx=0
 
runtime.services.ckpt.0.rx=0
Slave Interface: eth0
runtime.services.ckpt.1.tx=0
MII Status: down
runtime.services.ckpt.1.rx=0
Link Failure Count: 1
runtime.services.ckpt.2.tx=0
Permanent HW addr: 00:e0:81:c7:ec:49
runtime.services.ckpt.2.rx=0
Slave queue ID: 0
runtime.services.ckpt.3.tx=0
 
runtime.services.ckpt.3.rx=0
Slave Interface: eth3
runtime.services.ckpt.4.tx=0
MII Status: up
runtime.services.ckpt.4.rx=0
Link Failure Count: 0
runtime.services.ckpt.5.tx=0
Permanent HW addr: 00:1b:21:9d:59:fc
runtime.services.ckpt.5.rx=0
Slave queue ID: 0
runtime.services.ckpt.6.tx=0
</source>
runtime.services.ckpt.6.rx=0
 
runtime.services.ckpt.7.tx=0
We can see now that <span class="code">eth0</span> is <span class="code">down</span> and that <span class="code">eth3</span> has taken over.
runtime.services.ckpt.7.rx=0
 
runtime.services.ckpt.8.tx=0
If you look at the windows running the ping flood, both <span class="code">an-node01</span> and <span class="code">an-node02</span> should show nearly the same number of lost packets;
runtime.services.ckpt.8.rx=0
 
runtime.services.ckpt.9.tx=0
<source lang="text">
runtime.services.ckpt.9.rx=0
PING an-node02 (10.20.0.2) 56(84) bytes of data.
runtime.services.ckpt.10.tx=0
...........................................................
runtime.services.ckpt.10.rx=0
</source>
runtime.services.ckpt.11.tx=2
 
runtime.services.ckpt.11.rx=3
In <span class="code">an-node01</span>'s syslog, you should see errors about the network, but nothing from the cluster.
runtime.services.ckpt.12.tx=0
 
runtime.services.ckpt.12.rx=0
<source lang="text">
runtime.services.ckpt.13.tx=0
Oct 23 12:16:15 an-node01 kernel: e1000e: eth0 NIC Link is Down
runtime.services.ckpt.13.rx=0
Oct 23 12:16:15 an-node01 lldpad[1902]: vdp_ifdown:eth0 vdp data remove failed
runtime.services.evs.service_id=0
Oct 23 12:16:15 an-node01 kernel: bonding: bond0: link status definitely down for interface eth0, disabling it
runtime.services.evs.0.tx=0
Oct 23 12:16:15 an-node01 kernel: bonding: bond0: making interface eth3 the new active one.
runtime.services.evs.0.rx=0
Oct 23 12:16:15 an-node01 kernel: device eth0 left promiscuous mode
runtime.services.cfg.service_id=7
Oct 23 12:16:15 an-node01 kernel: device eth3 entered promiscuous mode
runtime.services.cfg.0.tx=0
Oct 23 12:16:15 an-node01 lldpad[1902]: vdp_ifup(1358): could not find port for bond0!
runtime.services.cfg.0.rx=0
Oct 23 12:16:15 an-node01 lldpad[1902]: vdp_ifup:bond0 vdp adding failed
runtime.services.cfg.1.tx=0
</source>
runtime.services.cfg.1.rx=0
 
runtime.services.cfg.2.tx=0
The link failure was handled successfully! Now, recovery.
runtime.services.cfg.2.rx=0
 
runtime.services.cfg.3.tx=0
=== Recovering The First Interface ===
runtime.services.cfg.3.rx=0
 
runtime.services.cpg.service_id=8
Surviving failure is only half the test. We also need to test the recovery of the interface. When ready, reconnect <span class="code">an-node01</span>'s <span class="code">eth0</span>.
runtime.services.cpg.0.tx=4
 
runtime.services.cpg.0.rx=8
The first thing you should notice is in <span class="code">an-node01</span>'s syslog;
runtime.services.cpg.1.tx=0
 
runtime.services.cpg.1.rx=0
<source lang="text">
runtime.services.cpg.2.tx=0
Oct 23 12:22:36 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
runtime.services.cpg.2.rx=0
Oct 23 12:22:36 an-node01 lldpad[1902]: vdp_ifup(1364): port eth0 not enabled for RxTx (0) !
runtime.services.cpg.3.tx=16
Oct 23 12:22:36 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
runtime.services.cpg.3.rx=23
</source>
runtime.services.cpg.4.tx=0
runtime.services.cpg.4.rx=0
runtime.services.cpg.5.tx=2
runtime.services.cpg.5.rx=3
runtime.services.confdb.service_id=11
runtime.services.pload.service_id=13
runtime.services.pload.0.tx=0
runtime.services.pload.0.rx=0
runtime.services.pload.1.tx=0
runtime.services.pload.1.rx=0
runtime.services.quorum.service_id=12
runtime.connections.active=6
runtime.connections.closed=110
runtime.connections.fenced:CPG:3490:19.service_id=8
runtime.connections.fenced:CPG:3490:19.client_pid=3490
runtime.connections.fenced:CPG:3490:19.responses=5
runtime.connections.fenced:CPG:3490:19.dispatched=9
runtime.connections.fenced:CPG:3490:19.requests=5
runtime.connections.fenced:CPG:3490:19.sem_retry_count=0
runtime.connections.fenced:CPG:3490:19.send_retry_count=0
runtime.connections.fenced:CPG:3490:19.recv_retry_count=0
runtime.connections.fenced:CPG:3490:19.flow_control=0
runtime.connections.fenced:CPG:3490:19.flow_control_count=0
runtime.connections.fenced:CPG:3490:19.queue_size=0
runtime.connections.fenced:CPG:3490:19.invalid_request=0
runtime.connections.fenced:CPG:3490:19.overload=0
runtime.connections.dlm_controld:CPG:3515:22.service_id=8
runtime.connections.dlm_controld:CPG:3515:22.client_pid=3515
runtime.connections.dlm_controld:CPG:3515:22.responses=5
runtime.connections.dlm_controld:CPG:3515:22.dispatched=8
runtime.connections.dlm_controld:CPG:3515:22.requests=5
runtime.connections.dlm_controld:CPG:3515:22.sem_retry_count=0
runtime.connections.dlm_controld:CPG:3515:22.send_retry_count=0
runtime.connections.dlm_controld:CPG:3515:22.recv_retry_count=0
runtime.connections.dlm_controld:CPG:3515:22.flow_control=0
runtime.connections.dlm_controld:CPG:3515:22.flow_control_count=0
runtime.connections.dlm_controld:CPG:3515:22.queue_size=0
runtime.connections.dlm_controld:CPG:3515:22.invalid_request=0
runtime.connections.dlm_controld:CPG:3515:22.overload=0
runtime.connections.dlm_controld:CKPT:3515:23.service_id=3
runtime.connections.dlm_controld:CKPT:3515:23.client_pid=3515
runtime.connections.dlm_controld:CKPT:3515:23.responses=0
runtime.connections.dlm_controld:CKPT:3515:23.dispatched=0
runtime.connections.dlm_controld:CKPT:3515:23.requests=0
runtime.connections.dlm_controld:CKPT:3515:23.sem_retry_count=0
runtime.connections.dlm_controld:CKPT:3515:23.send_retry_count=0
runtime.connections.dlm_controld:CKPT:3515:23.recv_retry_count=0
runtime.connections.dlm_controld:CKPT:3515:23.flow_control=0
runtime.connections.dlm_controld:CKPT:3515:23.flow_control_count=0
runtime.connections.dlm_controld:CKPT:3515:23.queue_size=0
runtime.connections.dlm_controld:CKPT:3515:23.invalid_request=0
runtime.connections.dlm_controld:CKPT:3515:23.overload=0
runtime.connections.gfs_controld:CPG:3565:26.service_id=8
runtime.connections.gfs_controld:CPG:3565:26.client_pid=3565
runtime.connections.gfs_controld:CPG:3565:26.responses=5
runtime.connections.gfs_controld:CPG:3565:26.dispatched=8
runtime.connections.gfs_controld:CPG:3565:26.requests=5
runtime.connections.gfs_controld:CPG:3565:26.sem_retry_count=0
runtime.connections.gfs_controld:CPG:3565:26.send_retry_count=0
runtime.connections.gfs_controld:CPG:3565:26.recv_retry_count=0
runtime.connections.gfs_controld:CPG:3565:26.flow_control=0
runtime.connections.gfs_controld:CPG:3565:26.flow_control_count=0
runtime.connections.gfs_controld:CPG:3565:26.queue_size=0
runtime.connections.gfs_controld:CPG:3565:26.invalid_request=0
runtime.connections.gfs_controld:CPG:3565:26.overload=0
runtime.connections.fenced:CPG:3490:28.service_id=8
runtime.connections.fenced:CPG:3490:28.client_pid=3490
runtime.connections.fenced:CPG:3490:28.responses=5
runtime.connections.fenced:CPG:3490:28.dispatched=8
runtime.connections.fenced:CPG:3490:28.requests=5
runtime.connections.fenced:CPG:3490:28.sem_retry_count=0
runtime.connections.fenced:CPG:3490:28.send_retry_count=0
runtime.connections.fenced:CPG:3490:28.recv_retry_count=0
runtime.connections.fenced:CPG:3490:28.flow_control=0
runtime.connections.fenced:CPG:3490:28.flow_control_count=0
runtime.connections.fenced:CPG:3490:28.queue_size=0
runtime.connections.fenced:CPG:3490:28.invalid_request=0
runtime.connections.fenced:CPG:3490:28.overload=0
runtime.connections.corosync-objctl:CONFDB:3698:27.service_id=11
runtime.connections.corosync-objctl:CONFDB:3698:27.client_pid=3698
runtime.connections.corosync-objctl:CONFDB:3698:27.responses=444
runtime.connections.corosync-objctl:CONFDB:3698:27.dispatched=0
runtime.connections.corosync-objctl:CONFDB:3698:27.requests=447
runtime.connections.corosync-objctl:CONFDB:3698:27.sem_retry_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.send_retry_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.recv_retry_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.flow_control=0
runtime.connections.corosync-objctl:CONFDB:3698:27.flow_control_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.queue_size=0
runtime.connections.corosync-objctl:CONFDB:3698:27.invalid_request=0
runtime.connections.corosync-objctl:CONFDB:3698:27.overload=0
runtime.totem.pg.msg_reserved=1
runtime.totem.pg.msg_queue_avail=761
runtime.totem.pg.mrp.srp.orf_token_tx=2
runtime.totem.pg.mrp.srp.orf_token_rx=405
runtime.totem.pg.mrp.srp.memb_merge_detect_tx=53
runtime.totem.pg.mrp.srp.memb_merge_detect_rx=53
runtime.totem.pg.mrp.srp.memb_join_tx=3
runtime.totem.pg.mrp.srp.memb_join_rx=5
runtime.totem.pg.mrp.srp.mcast_tx=45
runtime.totem.pg.mrp.srp.mcast_retx=0
runtime.totem.pg.mrp.srp.mcast_rx=56
runtime.totem.pg.mrp.srp.memb_commit_token_tx=4
runtime.totem.pg.mrp.srp.memb_commit_token_rx=4
runtime.totem.pg.mrp.srp.token_hold_cancel_tx=4
runtime.totem.pg.mrp.srp.token_hold_cancel_rx=7
runtime.totem.pg.mrp.srp.operational_entered=2
runtime.totem.pg.mrp.srp.operational_token_lost=0
runtime.totem.pg.mrp.srp.gather_entered=2
runtime.totem.pg.mrp.srp.gather_token_lost=0
runtime.totem.pg.mrp.srp.commit_entered=2
runtime.totem.pg.mrp.srp.commit_token_lost=0
runtime.totem.pg.mrp.srp.recovery_entered=2
runtime.totem.pg.mrp.srp.recovery_token_lost=0
runtime.totem.pg.mrp.srp.consensus_timeouts=0
runtime.totem.pg.mrp.srp.mtt_rx_token=913
runtime.totem.pg.mrp.srp.avg_token_workload=0
runtime.totem.pg.mrp.srp.avg_backlog_calc=0
runtime.totem.pg.mrp.srp.rx_msg_dropped=0
runtime.totem.pg.mrp.srp.continuous_gather=0
runtime.totem.pg.mrp.srp.firewall_enabled_or_nic_failure=0
runtime.totem.pg.mrp.srp.members.1.ip=r(0) ip(10.20.0.1)
runtime.totem.pg.mrp.srp.members.1.join_count=1
runtime.totem.pg.mrp.srp.members.1.status=joined
runtime.totem.pg.mrp.srp.members.2.ip=r(0) ip(10.20.0.2)
runtime.totem.pg.mrp.srp.members.2.join_count=1
runtime.totem.pg.mrp.srp.members.2.status=joined
runtime.blackbox.dump_flight_data=no
runtime.blackbox.dump_state=no
cman_private.COROSYNC_DEFAULT_CONFIG_IFACE=xmlconfig:cmanpreconfig
</source>


The bond will still be using <span class="code">eth3</span>, so let's wait two minutes.
If you want to check what [[DLM]] lockspaces exist, you can use <span class="code">dlm_tool ls</span> to list them. Given that we're not running any resources or clustered filesystems yet, there won't be any at this time. We'll look at this again later.


After the two minutes, you should see the following additional syslog entries.

<source lang="text">
Oct 23 12:24:36 an-node01 kernel: bonding: bond0: link status definitely up for interface eth0.
Oct 23 12:24:36 an-node01 kernel: bonding: bond0: making interface eth0 the new active one.
Oct 23 12:24:36 an-node01 kernel: device eth3 left promiscuous mode
Oct 23 12:24:36 an-node01 kernel: device eth0 entered promiscuous mode
Oct 23 12:24:36 an-node01 lldpad[1902]: vdp_ifup(1358): could not find port for bond0!
Oct 23 12:24:36 an-node01 lldpad[1902]: vdp_ifup:bond0 vdp adding failed
</source>

== Testing Fencing ==

We need to thoroughly test our fence configuration and devices before we proceed. Should the cluster call a fence and that call fail, the cluster will hang until a fence finally succeeds. There is no way to abort a fence, so this could effectively hang the cluster. If we have problems, we need to find them now.

We need to run two tests from each node against the other node for a total of four tests.
* The first test will use <span class="code">fence_ipmilan</span>. To do this, we will hang the victim node by running <span class="code">echo c > /proc/sysrq-trigger</span> on it. This will immediately and completely hang the kernel. The other node should detect the failure and reboot the victim. You can confirm that IPMI was used by watching the fence PDU and '''not''' seeing it power-cycle the port.
* Secondly, we will pull the power on the victim node. This is done to ensure that the IPMI BMC is also dead and will simulate a failure in the power supply. You should see the other node try to fence the victim, fail initially, then try again using the second, switched PDU. If you watch the PDU, you should see the power indicator LED go off and then come back on.

{{note|1=To "pull the power", we can actually just log into the PDU and turn off the victim's power. In this case, we'll see the power restored when the PDU is used to fence the node. We can actually use the <span class="code">fence_apc</span> fence agent to pull the power, as we'll see.}}
 
{|class="wikitable"
!Test
!Victim
!Pass?
|-
|<span class="code">echo c > /proc/sysrq-trigger</span>
|<span class="code">an-node01</span>
|<span style="color: green;">Yes</span> / <span style="color: red;">No</span>
|-
|<span class="code">fence_apc_snmp -a pdu2.alteeve.ca -n 1 -o off</span>
|<span class="code">an-node01</span>
|<span style="color: green;">Yes</span> / <span style="color: red;">No</span>
|-
|<span class="code">echo c > /proc/sysrq-trigger</span>
|<span class="code">an-node02</span>
|<span style="color: green;">Yes</span> / <span style="color: red;">No</span>
|-
|<span class="code">fence_apc_snmp -a pdu2.alteeve.ca -n 2 -o off</span>
|<span class="code">an-node02</span>
|<span style="color: green;">Yes</span> / <span style="color: red;">No</span>
|}
 
After the lost node is recovered, remember to restart <span class="code">cman</span> before starting the next test.
 
=== Hanging an-node01 ===
 
Be sure to be <span class="code">tail</span>ing the <span class="code">/var/log/messages</span> on <span class="code">an-node02</span>. Go to <span class="code">an-node01</span>'s first terminal and run the following command.
 
{{warning|1=This command will not return and you will lose all ability to talk to this node until it is rebooted.}}


If we go back to the bond status file, we'll see that the <span class="code">eth0</span> interface has been restored.
On '''<span class="code">an-node01</span>''' run:


<source lang="bash">
<source lang="bash">
cat /proc/net/bonding/bond0
echo c > /proc/sysrq-trigger
</source>
</source>
On '''<span class="code">an-node02</span>''''s syslog terminal, you should see the following entries in the log.
<source lang="text">
<source lang="text">
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Dec 13 12:42:39 an-node02 corosync[2758]:  [TOTEM ] A processor failed, forming new configuration.
Dec 13 12:42:41 an-node02 corosync[2758]:  [QUORUM] Members[1]: 2
Dec 13 12:42:41 an-node02 corosync[2758]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:42:41 an-node02 corosync[2758]:  [CPG  ] chosen downlist: sender r(0) ip(10.20.0.2) ; members(old:2 left:1)
Dec 13 12:42:41 an-node02 corosync[2758]:  [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:42:41 an-node02 kernel: dlm: closing connection to node 1
Dec 13 12:42:41 an-node02 fenced[2817]: fencing node an-node01.alteeve.ca
Dec 13 12:42:56 an-node02 fenced[2817]: fence an-node01.alteeve.ca success
</source>
 
Perfect!


Bonding Mode: fault-tolerance (active-backup)
If you are watching <span class="code">an-node01</span>'s display, you should now see it starting to boot back up.
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0


Slave Interface: eth0
{{note|1=Remember to start <span class="code">cman</span> once the node boots back up before trying the next test.}}
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0


Slave Interface: eth3
=== Cutting the Power to an-node01 ===
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0
</source>


Note that the only difference from before is that <span class="code">eth0</span>'s <span class="code">Link Failure Count</span> has been incremented to <span class="code">1</span>.
As was discussed earlier, IPMI and other out-of-band management interfaces have a fatal flaw as a fence device. Their [[BMC]] draws its power from the same power supply as the node itself. Thus, when the power supply itself fails (or the mains connection is pulled/tripped over), fencing via IPMI will fail. This makes the power supply a single point of failure, which is what the PDU protects us against.


The test has passed!
So to simulate a failed power supply, we're going to use <span class="code">an-node02</span>'s <span class="code">fence_apc</span> fence agent to turn off the power to <span class="code">an-node01</span>.


Now repeat the test for the other two bonds, then for all three bonds on <span class="code">an-node02</span>. Remember to also repeat each test, but pull the backup interface before the 2 minute delay has completed. The primary interface should immediately take over again. This will confirm that failover for the backup link is also working properly.
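A quick way to keep an eye on all three bonds on a node while you work through these tests is to watch just the active-slave lines of the bond status files. This is only a convenience sketch using the bond and network names from this tutorial;

<source lang="bash">
# Refresh the active slave of each bond every second.
watch -n 1 "grep -H 'Currently Active Slave' /proc/net/bonding/bond[0-2]"

# In other terminals, keep the ping floods running against the other node.
ping -f an-node02.bcn
ping -f an-node02.sn
ping -f an-node02.ifn
</source>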
Alternatively, you could also just unplug the power and the fence would still succeed. The fence call only needs to confirm that the node is off to succeed. Whether the node restarts after or not is not important so far as the cluster is concerned.


=== Failing The First Switch ===
From '''<span class="code">an-node02</span>''', pull the power on <span class="code">an-node01</span> with the following call;


{{note|1=Make sure that <span class="code">cman</span> is running before beginning the test!}}
<source lang="bash">
fence_apc_snmp -a pdu2.alteeve.ca -n 1 -o off
</source>
<source lang="text">
Success: Powered OFF
</source>


Check that all bonds on both nodes are using their primary interfaces. Confirm your cabling to ensure that these are all routed to the primary switch and that all backup links are cabled into the backup switch. Once done, pull the power to the primary switch. Both nodes should show similar output in their syslog windows;
Back on <span class="code">an-node02</span>'s syslog, we should see the following entries;


<source lang="text">
<source lang="text">
Oct 23 12:29:39 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 12:45:46 an-node02 corosync[2758]:   [TOTEM ] A processor failed, forming new configuration.
Oct 23 12:29:39 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 12:45:48 an-node02 corosync[2758]:   [QUORUM] Members[1]: 2
Oct 23 12:29:39 an-node01 lldpad[1902]: vdp_ifdown:eth1 vdp data remove failed
Dec 13 12:45:48 an-node02 corosync[2758]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 23 12:29:39 an-node01 lldpad[1902]: vdp_ifdown:eth2 vdp data remove failed
Dec 13 12:45:48 an-node02 corosync[2758]:   [CPG  ] chosen downlist: sender r(0) ip(10.20.0.2) ; members(old:2 left:1)
Oct 23 12:29:39 an-node01 kernel: bonding: bond1: link status definitely down for interface eth1, disabling it
Dec 13 12:45:48 an-node02 corosync[2758]:   [MAIN  ] Completed service synchronization, ready to provide service.
Oct 23 12:29:39 an-node01 kernel: bonding: bond1: making interface eth4 the new active one.
Dec 13 12:45:48 an-node02 kernel: dlm: closing connection to node 1
Oct 23 12:29:39 an-node01 lldpad[1902]: vdp_ifup(1358): could not find port for bond1!
Dec 13 12:45:48 an-node02 fenced[2817]: fencing node an-node01.alteeve.ca
Oct 23 12:29:39 an-node01 lldpad[1902]: vdp_ifup:bond1 vdp adding failed
Dec 13 12:46:08 an-node02 fenced[2817]: fence an-node01.alteeve.ca dev 0.0 agent fence_ipmilan result: error from agent
Oct 23 12:29:39 an-node01 kernel: bonding: bond2: link status definitely down for interface eth2, disabling it
Dec 13 12:46:08 an-node02 fenced[2817]: fence an-node01.alteeve.ca success
Oct 23 12:29:39 an-node01 kernel: bonding: bond2: making interface eth5 the new active one.
Oct 23 12:29:39 an-node01 kernel: device eth2 left promiscuous mode
Oct 23 12:29:39 an-node01 kernel: device eth5 entered promiscuous mode
Oct 23 12:29:39 an-node01 lldpad[1902]: vdp_ifup(1358): could not find port for bond2!
Oct 23 12:29:39 an-node01 lldpad[1902]: vdp_ifup:bond2 vdp adding failed
Oct 23 12:29:40 an-node01 kernel: e1000e: eth0 NIC Link is Down
Oct 23 12:29:40 an-node01 kernel: bonding: bond0: link status definitely down for interface eth0, disabling it
Oct 23 12:29:40 an-node01 kernel: bonding: bond0: making interface eth3 the new active one.
Oct 23 12:29:40 an-node01 kernel: device eth0 left promiscuous mode
Oct 23 12:29:40 an-node01 kernel: device eth3 entered promiscuous mode
Oct 23 12:29:40 an-node01 lldpad[1902]: vdp_ifup(1358): could not find port for bond0!
Oct 23 12:29:40 an-node01 lldpad[1902]: vdp_ifup:bond0 vdp adding failed
Oct 23 12:29:40 an-node01 lldpad[1902]: vdp_ifdown:eth0 vdp data remove failed
</source>
</source>


I can look at <span class="code">an-node01</span>'s <span class="code">/proc/net/bonding/bond0</span> file and see:
Hoozah!
 
Notice that there is an error from the <span class="code">fence_ipmilan</span>. This is exactly what we expected, because the IPMI BMC lost power and couldn't respond.


<source lang="bash">
So now we know that <span class="code">an-node01</span> can be fenced successfully from both fence devices. Now we need to run the same tests against <span class="code">an-node02</span>.
cat /proc/net/bonding/bond0
</source>
<source lang="text">
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)


Bonding Mode: fault-tolerance (active-backup)
=== Hanging an-node02 ===
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0


Slave Interface: eth0
{{warning|1='''DO NOT ASSUME THAT <span class="code">an-node02</span> WILL FENCE PROPERLY JUST BECAUSE <span class="code">an-node01</span> PASSED!'''. There are many ways that a fence could fail; Bad password, misconfigured device, plugged into the wrong port on the PDU and so on. Always test all nodes using all methods!}}
MII Status: down
Link Failure Count: 3
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0


Slave Interface: eth3
Be sure to be <span class="code">tail</span>ing the <span class="code">/var/log/messages</span> on <span class="code">an-node01</span>. Go to <span class="code">an-node02</span>'s first terminal and run the following command.
MII Status: up
Link Failure Count: 2
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0
</source>


Notice <span class="code">Currently Active Slave</span> is now <span class="code">eth3</span>? You can also see now that <span class="code">eth0</span>'s link is down (<span class="code">MII Status: down</span>).
{{note|1=This command will not return and you will lose all ability to talk to this node until it is rebooted.}}


It should be the same story for all the other bonds on both nodes.
On '''<span class="code">an-node02</span>''' run:
 
If we check the status of the cluster, we'll see that all is good.


<source lang="bash">
<source lang="bash">
cman_tool status
echo c > /proc/sysrq-trigger
</source>
</source>
On '''<span class="code">an-node01</span>''''s syslog terminal, you should see the following entries in the log.
<source lang="text">
<source lang="text">
Version: 6.2.0
Dec 13 12:52:34 an-node01 corosync[3445]:  [TOTEM ] A processor failed, forming new configuration.
Config Version: 8
Dec 13 12:52:36 an-node01 corosync[3445]:  [QUORUM] Members[1]: 1
Cluster Name: an-clusterA
Dec 13 12:52:36 an-node01 corosync[3445]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Cluster Id: 29382
Dec 13 12:52:36 an-node01 corosync[3445]:   [CPG  ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:2 left:1)
Cluster Member: Yes
Dec 13 12:52:36 an-node01 corosync[3445]:  [MAIN  ] Completed service synchronization, ready to provide service.
Cluster Generation: 164
Dec 13 12:52:36 an-node01 kernel: dlm: closing connection to node 2
Membership state: Cluster-Member
Dec 13 12:52:36 an-node01 fenced[3501]: fencing node an-node02.alteeve.ca
Nodes: 2
Dec 13 12:52:51 an-node01 fenced[3501]: fence an-node02.alteeve.ca success
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 7
Flags: 2node
Ports Bound: 0
Node name: an-node01.alteeve.com
Node ID: 1
Multicast addresses: 239.192.114.57
Node addresses: 10.20.0.1
</source>
</source>


Success! We just failed the primary switch without any interruption of clustered services.
Again, perfect!


We're not out of the woods yet, though...
=== Cutting the Power to an-node02 ===


=== Restoring The First Switch ===
From '''<span class="code">an-node01</span>''', pull the power on <span class="code">an-node02</span> with the following call;
 
Now that we've confirmed all of the bonds are working on the backup switch, let's restore power to the first switch.
 
{{warning|1=Be sure to wait five minutes after restoring power before declaring the recovery a success! Some configuration faults will take a few minutes to appear.}}
 
It is very important to wait for a while after restoring power to the switch. Some of the common problems that can break your cluster will not show up immediately. A good example is a misconfiguration of [[STP]]. In this case, the switch will come up, a short time will pass and then the switch will trigger an STP reconfiguration. Once this happens, both switches will block traffic for many seconds. This will partition your cluster.

So then, let's power it back up.
 
Within a few moments, you should see this in your syslog;


<source lang="bash">
fence_apc_snmp -a pdu2.alteeve.ca -n 2 -o off
</source>
<source lang="text">
<source lang="text">
Oct 23 12:34:25 an-node01 kernel: e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Success: Powered OFF
Oct 23 12:34:25 an-node01 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Oct 23 12:34:25 an-node01 kernel: bonding: bond1: link status up for interface eth1, enabling it in 120000 ms.
Oct 23 12:34:25 an-node01 kernel: bonding: bond2: link status up for interface eth2, enabling it in 120000 ms.
Oct 23 12:34:25 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Oct 23 12:34:25 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
Oct 23 12:34:25 an-node01 lldpad[1902]: vdp_ifup(1364): port eth2 not enabled for RxTx (0) !
Oct 23 12:34:25 an-node01 lldpad[1902]: vdp_ifup(1364): port eth1 not enabled for RxTx (0) !
Oct 23 12:34:25 an-node01 lldpad[1902]: vdp_ifup(1364): port eth0 not enabled for RxTx (0) !
</source>
</source>


As with the individual link test, the backup interfaces will remain in use for two minutes. This is critical because <span class="code">miimon</span> has detected the connection to the switches, but the switches are still a long way from being able to route traffic. After the two minutes, we'll see the primary interfaces return to active state.
Back on <span class="code">an-node01</span>'s syslog, we should see the following entries;


<source lang="text">
<source lang="text">
Oct 23 12:35:19 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 12:55:58 an-node01 corosync[3445]:   [TOTEM ] A processor failed, forming new configuration.
Oct 23 12:35:19 an-node01 lldpad[1902]: vdp_ifdown:eth0 vdp data remove failed
Dec 13 12:56:00 an-node01 corosync[3445]:   [QUORUM] Members[1]: 1
Oct 23 12:35:20 an-node01 kernel: bonding: bond0: link status down again after 54300 ms for interface eth0.
Dec 13 12:56:00 an-node01 corosync[3445]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 23 12:35:20 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 12:56:00 an-node01 corosync[3445]:   [CPG  ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:2 left:1)
Oct 23 12:35:20 an-node01 kernel: bonding: bond1: link status down again after 55300 ms for interface eth1.
Dec 13 12:56:00 an-node01 kernel: dlm: closing connection to node 2
Oct 23 12:35:20 an-node01 lldpad[1902]: vdp_ifdown:eth1 vdp data remove failed
Dec 13 12:56:00 an-node01 corosync[3445]:   [MAIN  ] Completed service synchronization, ready to provide service.
Oct 23 12:35:21 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 12:56:00 an-node01 fenced[3501]: fencing node an-node02.alteeve.ca
Oct 23 12:35:21 an-node01 kernel: bonding: bond2: link status down again after 56300 ms for interface eth2.
Dec 13 12:56:20 an-node01 fenced[3501]: fence an-node02.alteeve.ca dev 0.0 agent fence_ipmilan result: error from agent
Oct 23 12:35:21 an-node01 lldpad[1902]: vdp_ifdown:eth2 vdp data remove failed
Dec 13 12:56:20 an-node01 fenced[3501]: fence an-node02.alteeve.ca success
</source>


Woot!


Only now can we safely say that our fencing is set up and working properly.
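If you want one more data point before moving on, a simple manual spot-check (not a step from the original test plan) is to ask the cluster to fence the peer directly and watch it power-cycle;

<source lang="bash">
# Run on an-node01 only; this WILL forcibly reboot an-node02.
fence_node an-node02.alteeve.ca
</source>

If the command reports success and <span class="code">an-node02</span> restarts, the fence configuration is being honoured by the cluster tools as well.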


== Testing Network Redundancy ==
Next on the testing block is our network configuration. Seeing as we've built our bonds, we now need to test that they are working properly.


* Make sure that <span class="code">cman</span> has started on both nodes.


First, we'll test all network cables individually, one node and one bonded interface at a time.


* For each network; IFN, SN and BCN;
** On both nodes, start a ping flood against the opposing node, specifying the appropriate network name suffix, in the first window and start <span class="code">tail</span>ing syslog in the second window (example commands are sketched below this list).
** <span class="code">watch</span> each bond's <span class="code">/proc/net/bonding/bondX</span> file to see which interfaces are active.
** Pull the currently-active network cable from the bond (either at the switch or at the node).
** Check the state of the bonds again and see that they've switched to their backup interface. If a node gets fenced, you know something went wrong. You should see a handful of lost packets in the ping flood.
** Restore the network cable and wait 2 minutes, then verify that the old primary interface was restored. You will see another handful of lost packets in the flood during the recovery.
** Pull the cable again, then restore it. This time, do not wait 2 minutes. After just a few seconds, pull the backup link and ensure that the bond immediately resumes use of the primary interface.
** Repeat the above steps for all bonds on both nodes. This will take a while, but you need to ensure configuration errors are found now.
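For reference, here is one way to lay out the three windows described above. Treat it as a sketch rather than a fixed recipe; the host names and network suffixes are simply the ones used in this tutorial.

<source lang="bash">
# Window 1 (on an-node01); flood-ping the peer over the network being tested, here the BCN.
ping -f an-node02.bcn

# Window 2; watch syslog for bonding and cluster messages.
tail -f -n 0 /var/log/messages

# Window 3; watch the bond backing that network to see which slave is active.
watch cat /proc/net/bonding/bond0
</source>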


{{warning|1=Testing the complete primary switch failure and subsequent recovery is very, very important. Please do NOT skip this step!}}


Once all bonds have been tested, we'll do a final test by failing the primary switch.
* Cut the power to the switch.
* Check all bond status files. Confirm that all have switched to their backup links.
* Restore power to the switch and wait 2 minutes.
* Confirm that the bonds did not switch to the primary interfaces before the switch was ready to move data.


If all of these steps pass and the cluster doesn't partition, then you can be confident that your network is configured properly for full redundancy.
 
=== Network Testing Terminal Layout ===
 
If you have a couple of monitors, particularly one with portrait mode, you might be able to open 16 terminals at once. This is how many are needed to run ping floods, watch the bond status files, tail syslog and watch cman_tool all at the same time. This configuration makes it very easy to keep a near real-time, complete view of all network components.
 
On the left window, the top-left terminal shows <span class="code">watch cman_tool status</span> and the top-right terminal shows <span class="code">tail -f -n 0 /var/log/messages</span> for <span class="code">an-node01</span>. The bottom two terminals show the same for <span class="code">an-node02</span>.
 
On the right, portrait-mode window, the terminal layout used for monitoring the bonded link status and ping floods are shown. There are two columns; <span class="code">an-node01</span> on the left and <span class="code">an-node02</span> on the right. Each column is stacked into six rows, <span class="code">bond0</span> on the top followed by <span class="code">ping -f an-node02.bcn</span>, <span class="code">bond1</span> in the middle followed by <span class="code">ping -f an-node02.sn</span> and <span class="code">bond2</span> at the bottom followed by <span class="code">ping -f an-node02.ifn</span>. The left window shows the standard <span class="code">tail</span> on syslog plus <span class="code">watch cman_tool status</span>.
 
[[Image:2-node_el6-tutorial_network-test_terminal-layout_01.png|thumb|center|700px|Terminal layout used for HA network testing; Calls shown.]]
 
[[Image:2-node_el6-tutorial_network-test_terminal-layout_02.png|thumb|center|700px|Terminal layout used for HA network testing; Calls running.]]
 
=== How to Know if the Tests Passed ===
 
Well, the most obvious answer to this question is if the cluster is still working after a switch is powered off.
 
We can be a little more subtle than that though.
 
The state of each bond is viewable by looking in the special <span class="code">/proc/net/bonding/bondX</span> files, where <span class="code">X</span> is the bond number. Let's take a look at <span class="code">bond0</span> on <span class="code">an-node01</span>.


<source lang="bash">
cat /proc/net/bonding/bond0
</source>
<source lang="text">
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0
</source>


We can see that the currently active interface is <span class="code">eth0</span>. This is the key bit we're going to be watching for these tests. I know that <span class="code">eth0</span> on <span class="code">an-node01</span> is connected to the first switch. So when I pull the cable to that switch, or when I fail that switch entirely, I should see <span class="code">eth3</span> take over.
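If you would rather not open a separate <span class="code">watch</span> window per bond, a quick way to see every bond's active interface at once is a simple loop (illustrative only, not part of the original procedure);

<source lang="bash">
# Print the currently active slave for each bonded interface on this node.
for bond in /proc/net/bonding/bond0 /proc/net/bonding/bond1 /proc/net/bonding/bond2
do
    echo -n "${bond##*/}: "
    grep "Currently Active Slave" $bond
done
</source>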


We'll also be watching syslog. If things work right, we should not see any messages from the cluster during failure and recovery.


=== Failing The First Interface ===


Let's look at the first test. We'll fail <span class="code">an-node01</span>'s <span class="code">eth0</span> interface by pulling its cable.
On <span class="code">an-node01</span>'s syslog, you will see;


<source lang="text">
Dec 13 14:03:19 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:03:19 an-node01 kernel: bonding: bond0: link status definitely down for interface eth0, disabling it
Dec 13 14:03:19 an-node01 kernel: bonding: bond0: making interface eth3 the new active one.
</source>

Looking again at <span class="code">an-node01</span>'s <span class="code">bond0</span>'s status;


<source lang="bash">
cat /proc/net/bonding/bond0
</source>
<source lang="text">
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)


Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0
 
Slave Interface: eth0
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0
 
Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0
</source>
 
We can see now that <span class="code">eth0</span> is <span class="code">down</span> and that <span class="code">eth3</span> has taken over.
 
If you look at the windows running the ping flood, both <span class="code">an-node01</span> and <span class="code">an-node02</span> should show nearly the same number of lost packets;


<source lang="text">
PING an-node02 (10.20.0.2) 56(84) bytes of data.
........................
</source>


The failure of the link was successful!


=== Recovering The First Interface ===


Surviving failure is only half the test. We also need to test the recovery of the interface. When ready, reconnect <span class="code">an-node01</span>'s <span class="code">eth0</span>.


The first thing you should notice is in <span class="code">an-node01</span>'s syslog;


<source lang="text">
Dec 13 14:06:40 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:06:40 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
</source>


The bond will still be using <span class="code">eth3</span>, so let's wait two minutes.
After the two minutes, you should see the following additional syslog entries.
<source lang="text">
Dec 13 14:08:40 an-node01 kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex.
Dec 13 14:08:40 an-node01 kernel: bonding: bond0: making interface eth0 the new active one.
</source>


If we go back to the bond status file, we'll see that the <span class="code">eth0</span> interface has been restored.
 


<source lang="bash">
cat /proc/net/bonding/bond0
</source>
<source lang="text">
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0
</source>

Note that the only difference from before is that <span class="code">eth0</span>'s <span class="code">Link Failure Count</span> has been incremented to <span class="code">1</span>.

The test has passed!

Now repeat the test for the other two bonds, then for all three bonds on <span class="code">an-node02</span>. Remember to also repeat each test, but pull the backup interface before the 2 minute delay has completed. The primary interface should immediately take over again. This will confirm that failover for the backup link is also working properly.

=== The "Why" of Our Layout ===

We will be creating three separate DRBD resources. The reason for this is to minimize the chance of data loss in a [[split-brain]] event.

A split-brain occurs when a [[DRBD]] resource loses its network link while in <span class="code">Primary/Primary</span> mode. The problem is that, after the split, any write to either node is not replicated to the other node. Thus, after even one [[byte]] is written, the DRBD resource is out of sync. Once this happens, there is no real way to automate recovery. You will need to go in and manually flag one side of the resource to discard its changes and then manually re-connect the two sides before the resource will be usable again.

We will take steps to prevent split-brains, but we can never make the risk go away entirely.

Given that there is no sure way to avoid split-brains, we're going to mitigate the risk by breaking up our DRBD resources so that we can be more selective in choosing what parts to invalidate after a split-brain event.

* The small GFS2 partition will be the hardest to manage. For this reason, it is on its own. For the same reason, we will be using it as little as we can, and copies of files we care about will be stored on each node as backups. The main things here are the VM configuration files. These should be written to rarely, so with luck, in a split-brain condition, simply nothing will be written to either side and recovery should be arbitrary and simple.
* The VMs that will primarily run on <span class="code">an-node01</span> will get their own resource. This way we can simply invalidate the DRBD device on the node that was '''not''' running the VMs during the split-brain.
* Likewise, the VMs primarily running on <span class="code">an-node02</span> will get their own resource. This way, if a split-brain happens and VMs are running on both nodes, it should be easy to invalidate opposing nodes for the respective DRBD resource.

== Creating The Partitions For DRBD ==

It is possible to use [[LVM]] on the hosts, and simply create [[LV]]s to back our DRBD resources. However, this causes confusion as LVM will see the [[PV]] signatures on both the DRBD backing devices and the DRBD device itself. Getting around this requires editing LVM's <span class="code">filter</span> option, which is somewhat complicated. Not overly so, mind you, but enough to be outside the scope of this document.


Also, by working with <span class="code">fdisk</span> directly, it will give us a chance to make sure that the DRBD partitions start on an even 64 [[KiB]] boundary. This is important for decent performance on Windows VMs, as we will see later. This is true for both traditional platter and modern solid-state drives.
=== Failing The First Switch ===


{{note|1=Make sure that <span class="code">cman</span> is running before beginning the test! The real test is less about the failure and recovery of the network itself and more about whether it fails and recovers in such a way that the cluster stays up and no partitioning occurs.}}

Check that all bonds on both nodes are using their primary interfaces. Confirm your cabling to ensure that these are all routed to the primary switch and that all backup links are cabled into the backup switch. Once done, pull the power to the primary switch. Both nodes should show similar output in their syslog windows;

<source lang="text">
Dec 13 14:16:17 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 14:16:17 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:16:17 an-node01 kernel: bonding: bond0: link status definitely down for interface eth0, disabling it
Dec 13 14:16:17 an-node01 kernel: bonding: bond0: making interface eth3 the new active one.
Dec 13 14:16:17 an-node01 kernel: bonding: bond2: link status definitely down for interface eth2, disabling it
Dec 13 14:16:17 an-node01 kernel: bonding: bond2: making interface eth5 the new active one.
Dec 13 14:16:17 an-node01 kernel: device eth2 left promiscuous mode
Dec 13 14:16:17 an-node01 kernel: device eth5 entered promiscuous mode
Dec 13 14:16:17 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 14:16:18 an-node01 kernel: bonding: bond1: link status definitely down for interface eth1, disabling it
Dec 13 14:16:18 an-node01 kernel: bonding: bond1: making interface eth4 the new active one.
</source>

On our nodes, we created three primary disk partitions;
* <span class="code">/dev/sda1</span>; The <span class="code">/boot</span> partition.
* <span class="code">/dev/sda2</span>; The root <span class="code">/</span> partition.
* <span class="code">/dev/sda3</span>; The swap partition.

We will create a new extended partition. Then within it we will create three new partitions;
* <span class="code">/dev/sda5</span>; a small partition we will later use for our shared [[GFS2]] partition.
* <span class="code">/dev/sda6</span>; a partition big enough to host the VMs that will normally run on <span class="code">an-node01</span>.
* <span class="code">/dev/sda7</span>; a partition big enough to host the VMs that will normally run on <span class="code">an-node02</span>.

As we create each partition, we will do a little math to ensure that the start sector is on an even 64 [[KiB]] boundary.


=== Block Alignment ===

For performance reasons, we want to ensure that the file systems created within a VM match the block alignment of the underlying storage stack, clear down to the base partitions on <span class="code">/dev/sda</span> (or whatever your lowest-level block device is).

Imagine this mis-aligned scenario;

<source lang="text">
Note: Not to scale
                ________________________________________________________________
VM Filesystem  |~~~~~|_______|_______|_______|_______|_______|_______|_______|__
                |~~~~~|==========================================================
DRBD Partition  |~~~~~|_______|_______|_______|_______|_______|_______|_______|__
64 KiB block    |_______|_______|_______|_______|_______|_______|_______|_______|
512byte sectors |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|
</source>

Now, when the guest wants to write one block worth of data, it actually causes two blocks to be written, causing avoidable disk I/O.

<source lang="text">
Note: Not to scale
                ________________________________________________________________
VM Filesystem  |~~~~~~~|_______|_______|_______|_______|_______|_______|_______|
                |~~~~~~~|========================================================
DRBD Partition  |~~~~~~~|_______|_______|_______|_______|_______|_______|_______|
64 KiB block    |_______|_______|_______|_______|_______|_______|_______|_______|
512byte sectors |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|
</source>

By changing the start cylinder of our partitions to always start on 64 [[KiB]] boundaries, we're sure to keep the guest OS's filesystem in-line with the DRBD backing device's blocks. Thus, all reads and writes in the guest OS map to a matching number of real blocks, maximizing disk I/O efficiency.

{{note|1=You will want to do this with SSD drives, too. It's true that the performance will remain about the same, but SSD drives have a limited number of write cycles, and aligning the blocks will minimize block writes.}}

Special thanks to [http://xen.org/community/spotlight/pasi.html Pasi Kärkkäinen] for his patience in explaining to me the importance of disk alignment. He created two images which I used as templates for the [[ASCII]] art images above;
* [http://pasik.reaktio.net/virtual-disk-partitions-not-aligned.jpg Virtual Disk Partitions, Not aligned.]
* [http://pasik.reaktio.net/virtual-disk-partitions-aligned.jpg Virtual Disk Partitions, aligned.]

Back on the network test, I can look at <span class="code">an-node01</span>'s <span class="code">/proc/net/bonding/bond0</span> file and see:

<source lang="bash">
cat /proc/net/bonding/bond0
</source>
<source lang="text">
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: down
Link Failure Count: 3
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 2
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0
</source>

Notice <span class="code">Currently Active Slave</span> is now <span class="code">eth3</span>? You can also see now that <span class="code">eth0</span>'s link is down (<span class="code">MII Status: down</span>).

It should be the same story for all the other bonds on both nodes.

=== DOS Compatibility Mode ===


It is common for disks to have "DOS-compatible mode" enabled by default. This causes a 63 [[sector]] offset to be used for the first cylinder of the first partition, leading all subsequent partitions to carry the same 63 sector offset.
If we check the status of the cluster, we'll see that all is good.
 
If you call <span class="code">fdisk</span> using the <span class="code">-u</span> switch, <span class="code">fdisk</span> will show partition start and end positions using [[sector]]s instead of [[cylinder]]s. We're more interested in [[sector]]s as we know that the sectors are 512 bytes long, and that we want 64 [[KiB]] blocks, making for easy math; <span class="code">((64 * 1024) / 512) == 128</span> - each block will have 128 sectors.
 
Thus, we'll want to ensure that the starting sector of all partitions sits at a position evenly divisible by 128. Using this, we can ensure that all levels of the storage stack are aligned properly.
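A quick way to sanity-check this arithmetic from a shell (illustrative only, not a step in the procedure);

<source lang="bash">
echo $(( (64 * 1024) / 512 ))   # 128 sectors per 64 KiB block
echo $(( 2048 % 128 ))          # 0; sector 2048 is aligned
echo $(( 63 % 128 ))            # 63; the DOS-compatible default is not
</source>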
 
We can see the effect of "DOS-compatible mode" by creating a partition on a new drive;


<source lang="bash">
fdisk -u /dev/sdb
</source>
<source lang="text">
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c').
</source>

<source lang="text">
Command (m for help): p
</source>
<source lang="text">
Disk /dev/sdb: 30.0 GB, 30016659456 bytes
255 heads, 63 sectors/track, 3649 cylinders, total 58626288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcfeb5447

   Device Boot      Start        End      Blocks   Id  System
</source>

<source lang="bash">
cman_tool status
</source>
<source lang="text">
Version: 6.2.0
Config Version: 7
Cluster Name: an-cluster-A
Cluster Id: 24561
Cluster Member: Yes
Cluster Generation: 40
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 7
Flags: 2node
Ports Bound: 0
Node name: an-node01.alteeve.ca
Node ID: 1
Multicast addresses: 239.192.95.81
Node addresses: 10.20.0.1
</source>

Success! We just failed the primary switch without any interruption of clustered services.

We're not out of the woods yet, though...

=== Restoring The First Switch ===


With "DOS-compatible mode" enabled, let's create a new partition. Notice the default initial sector number;
Now that we've confirmed all of the bonds are working on the backup switch, let's restore power to the first switch.


{{warning|1=Be sure to wait five minutes after restoring power before declaring the recovery a success! Some configuration faults will take a few minutes to appear.}}

<source lang="text">
Command (m for help): n
</source>


It is very important to wait for a while after restoring power to the switch. Some of the common problems that can break your cluster will not show up immediately. A good example is a misconfiguration of [[STP]]. In this case, the switch will come up, a short time will pass and then the switch will trigger an STP reconfiguration. Once this happens, both switches will block traffic for many seconds. This will partition your cluster.

<source lang="text">
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (63-58626287, default 63):
Using default value 63
Last sector, +sectors or +size{K,M,G} (63-58626287, default 58626287):
Using default value 58626287
</source>


So then, let's power it back up.

<source lang="text">
Command (m for help): p
</source>


<source lang="text">
Disk /dev/sdb: 30.0 GB, 30016659456 bytes
255 heads, 63 sectors/track, 3649 cylinders, total 58626288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcfeb5447

   Device Boot      Start        End      Blocks   Id  System
/dev/sdb1              63    58626287    29313112+  83  Linux
</source>

Within a few moments, you should see this in your syslog;
 
Now the first partition has a starting sector of <span class="code">63</span>, which is <span class="code">32,256</span> bytes. This will make it very difficult to create aligned LVM LVs, and thus very difficult to create aligned virtual machine storage.
 
Let's delete the partition, disable "DOS-compatible mode" and re-create the partition. Note the new default start sector number.


<source lang="text">
Command (m for help): d
</source>

<source lang="text">
Dec 13 14:19:30 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:19:30 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
Dec 13 14:19:30 an-node01 kernel: e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:19:30 an-node01 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:19:30 an-node01 kernel: bonding: bond2: link status up for interface eth2, enabling it in 120000 ms.
Dec 13 14:19:30 an-node01 kernel: bonding: bond1: link status up for interface eth1, enabling it in 120000 ms.
</source>


As with the individual link test, the backup interfaces will remain in use for two minutes. This is critical because <span class="code">miimon</span> has detected the connection to the switches, but the switches are still a long way from being able to route traffic. After the two minutes, we'll see the primary interfaces return to active state.

<source lang="text">
Selected partition 1
</source>


<source lang="text">
Command (m for help): c
</source>
<source lang="text">
DOS Compatibility flag is not set
</source>

<source lang="text">
Dec 13 14:20:25 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:20:25 an-node01 kernel: bonding: bond0: link status down again after 55000 ms for interface eth0.
Dec 13 14:20:26 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 14:20:26 an-node01 kernel: bonding: bond1: link status down again after 55800 ms for interface eth1.
Dec 13 14:20:27 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 14:20:27 an-node01 kernel: bonding: bond2: link status down again after 56800 ms for interface eth2.
Dec 13 14:20:27 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:27 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
Dec 13 14:20:28 an-node01 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:28 an-node01 kernel: bonding: bond1: link status up for interface eth1, enabling it in 120000 ms.
Dec 13 14:20:29 an-node01 kernel: e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:29 an-node01 kernel: bonding: bond2: link status up for interface eth2, enabling it in 120000 ms.
Dec 13 14:20:31 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:20:31 an-node01 kernel: bonding: bond0: link status down again after 3500 ms for interface eth0.
Dec 13 14:20:32 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 14:20:32 an-node01 kernel: bonding: bond1: link status down again after 4100 ms for interface eth1.
Dec 13 14:20:32 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 14:20:32 an-node01 kernel: bonding: bond2: link status down again after 3500 ms for interface eth2.
Dec 13 14:20:33 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:33 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
Dec 13 14:20:34 an-node01 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:34 an-node01 kernel: bonding: bond1: link status up for interface eth1, enabling it in 120000 ms.
Dec 13 14:20:35 an-node01 kernel: e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:35 an-node01 kernel: bonding: bond2: link status up for interface eth2, enabling it in 120000 ms.
</source>

See all that bouncing? That is caused by many switches showing a link (that is the [[MII]] status) without actually being able to push traffic. As part of the switch's boot sequence, the links will go down and come back up a couple of times. The 2 minute counter will reset with each bounce, so the recovery time is actually quite a bit longer than two minutes. This is fine, no need to rush back to the first switch.


Note that you will not see this bouncing on switches that hold back on [[MII]] status until finished booting.

<source lang="text">
Command (m for help): n
</source>


After a few minutes, the old interfaces will actually be restored.

<source lang="text">
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-58626287, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-58626287, default 58626287):
Using default value 58626287
</source>


<source lang="text">
Command (m for help): p
</source>

<source lang="text">
Dec 13 14:22:33 an-node01 kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex.
Dec 13 14:22:33 an-node01 kernel: bonding: bond0: making interface eth0 the new active one.
Dec 13 14:22:34 an-node01 kernel: bond1: link status definitely up for interface eth1, 1000 Mbps full duplex.
Dec 13 14:22:34 an-node01 kernel: bonding: bond1: making interface eth1 the new active one.
Dec 13 14:22:35 an-node01 kernel: bond2: link status definitely up for interface eth2, 1000 Mbps full duplex.
Dec 13 14:22:35 an-node01 kernel: bonding: bond2: making interface eth2 the new active one.
Dec 13 14:22:35 an-node01 kernel: device eth5 left promiscuous mode
Dec 13 14:22:35 an-node01 kernel: device eth2 entered promiscuous mode
</source>


Complete success!

{{warning|1=It is worth restating the importance of spreading your two fence methods across two switches. If both your PDU(s) and your IPMI (or iLO, etc) interfaces all run through one switch, that switch becomes a single point of failure. Generally, I run the IPMI/iLO/etc fence devices on the primary switch and the PDU(s) on the secondary switch.}}

<source lang="text">
Disk /dev/sdb: 30.0 GB, 30016659456 bytes
255 heads, 63 sectors/track, 3649 cylinders, total 58626288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcfeb5447

   Device Boot      Start        End      Blocks   Id  System
/dev/sdb1            2048    58626287    29312120   83  Linux
</source>


This now places the starting sector as <span class="code">2048</span>, which neatly divides by <span class="code">128</span> with no remainder. Saying it another way, the starting sector is at <span class="code">((2048 * 512) / 1024) == 1,024</span> [[KiB]], perfectly divisible by 64 [[KiB]]. Now everything above it will align properly.
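Expressed in bytes, that is easy to verify from a shell (illustrative only);

<source lang="bash">
echo $(( 2048 * 512 ))            # 1048576 bytes
echo $(( (2048 * 512) / 1024 ))   # 1024 KiB, which divides evenly by 64
echo $(( 1024 % 64 ))             # 0
</source>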
=== Failing The Secondary Switch ===

Before we can say that everything is perfect, we need to test failing and recovering the secondary switch. The main purpose of this test is to ensure that there are no problems caused when the secondary switch restarts.

To fail the switch, as we did with the primary switch, simply cut its power. We should see the following in both nodes' syslog;

=== Alignment Math ===

Assuming your partitions are already created, and that they were created with "DOS-compatible mode" enabled, let's look at how we can manually change the starting sector of each partition to sit on even 64 [[KiB]] boundaries.

Before we can start the alignment math, we need to know how big each sector is on our hard drive. This is almost always 512 [[bytes]], but it's still best to be sure. To check, run;
<source lang="bash">
fdisk -l /dev/sda | grep Sector
</source>
<source lang="text">
Sector size (logical/physical): 512 bytes / 512 bytes
</source>

<source lang="text">
Dec 13 14:30:57 an-node01 kernel: e1000e: eth3 NIC Link is Down
Dec 13 14:30:57 an-node01 kernel: bonding: bond0: link status definitely down for interface eth3, disabling it
Dec 13 14:30:58 an-node01 kernel: e1000e: eth4 NIC Link is Down
Dec 13 14:30:58 an-node01 kernel: e1000e: eth5 NIC Link is Down
Dec 13 14:30:58 an-node01 kernel: bonding: bond1: link status definitely down for interface eth4, disabling it
Dec 13 14:30:58 an-node01 kernel: bonding: bond2: link status definitely down for interface eth5, disabling it
</source>


So now that we have confirmed our sector size, we can look at the math.
Let's take a look at <span class="code">an-node01</span>'s <span class="code">bond0</span> status file.


<source lang="bash">
cat /proc/net/bonding/bond0
</source>
<source lang="text">
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 3
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: down
Link Failure Count: 3
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0
</source>

* Each 64 [[KiB]] block will use 128 sectors; <span class="code">((64 * 1024) / 512) == 128</span>.
* As we create each partition, we will be asked to enter the starting sector (using <span class="code">fdisk -u</span>). Take the first free sector and divide it by <span class="code">128</span>. If it does not divide evenly, then;
** Add <span class="code">127</span> (one sector shy of another block, to guarantee we've gone past the start sector we want).
** Divide the new number by <span class="code">128</span>. This will give you a fractional number. Remove (do not round!) any number after the decimal place.
** Multiply by <span class="code">128</span> to get the sector number we want.

Let's look at an example using real numbers. Let's say we create a new partition and the first free sector is <span class="code">92807568</span>;

<source lang="text">
92807568 ÷ 128 = 725059.125
</source>


We have a remainder, so it's not on an even 64 KiB block boundary. Now we need to figure out what sector above <span class="code">92807568</span> is evenly divisible by 128. To do that, let's add 127 (one sector shy of the next 64 KiB block), divide by 128 to get the number of 64 KiB blocks (with a remainder), remove the remainder to get an even number (do not round, you just want the bare integer), then finally multiply by 128 to get the sector number. This will give us the sector number we want our partition to start on.
Note that the <span class="code">eth3</span> interface is shown as <span class="code">down</span>. There should have been no dropped packets in the ping-flood window at all.
 
=== Restoring The Second Switch ===
 
When the power is restored to the switch, we'll see the same "bouncing" as the switch goes through its startup process. Notice that the backup link also remains listed as <span class="code">down</span> for 2 minutes, despite the interface not being used by the bonded interface.


<source lang="text">
Dec 13 14:33:36 an-node01 kernel: e1000e: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:33:36 an-node01 kernel: e1000e: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:33:36 an-node01 kernel: bonding: bond1: link status up for interface eth4, enabling it in 120000 ms.
Dec 13 14:33:36 an-node01 kernel: bonding: bond2: link status up for interface eth5, enabling it in 120000 ms.
Dec 13 14:33:37 an-node01 kernel: e1000e: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:33:37 an-node01 kernel: bonding: bond0: link status up for interface eth3, enabling it in 120000 ms.
Dec 13 14:34:34 an-node01 kernel: e1000e: eth5 NIC Link is Down
Dec 13 14:34:34 an-node01 kernel: bonding: bond2: link status down again after 58000 ms for interface eth5.
Dec 13 14:34:36 an-node01 kernel: e1000e: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:34:36 an-node01 kernel: bonding: bond2: link status up for interface eth5, enabling it in 120000 ms.
Dec 13 14:34:38 an-node01 kernel: e1000e: eth5 NIC Link is Down
Dec 13 14:34:38 an-node01 kernel: bonding: bond2: link status down again after 2000 ms for interface eth5.
Dec 13 14:34:40 an-node01 kernel: e1000e: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec 13 14:34:40 an-node01 kernel: bonding: bond2: link status up for interface eth5, enabling it in 120000 ms.
</source>

Two minutes after the last bounce, we'll see the backup interfaces return to the <span class="code">up</span> state in the bond's status file.

<source lang="text">
Dec 13 14:35:36 an-node01 kernel: bond1: link status definitely up for interface eth4, 1000 Mbps full duplex.
Dec 13 14:35:37 an-node01 kernel: bond0: link status definitely up for interface eth3, 1000 Mbps full duplex.
Dec 13 14:36:40 an-node01 kernel: bond2: link status definitely up for interface eth5, 1000 Mbps full duplex.
</source>

<source lang="text">
92807568 + 127 = 92807695
92807695 ÷ 128 = 725060.1171875
int(725060.1171875) = 725060
725060 x 128 = 92807680
</source>


So now we know that sector number <span class="code">92807680</span> is the first sector above <span class="code">92807568</span> that falls on an even 64 KiB block. Now we need to alter our partition's starting sector. To do this, we will need to go into <span class="code">fdisk</span>'s extra functions.
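The same "add 127, divide, truncate, multiply" dance can be done in one line of shell arithmetic, which is handy when you have several partitions to plan (a hypothetical helper, not part of the original tutorial);

<source lang="bash">
# Round a starting sector up to the next 64 KiB boundary (128 x 512 byte sectors).
sector=92807568
aligned=$(( ((sector + 127) / 128) * 128 ))
echo $aligned    # prints 92807680, matching the manual math above
</source>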
After a full five minutes, the cluster and the network remain stable. We can officially declare our network to be fully highly available!
 
= Installing DRBD =
 
DRBD is an open-source application for real-time, block-level disk replication created and maintained by [http://linbit.com Linbit]. We will use this to keep the data on our cluster consistent between the two nodes.
 
To install it, we have three choices;
# Purchase a Red Hat blessed, fully supported copy from [http://linbit.com Linbit].
# Install from the freely available, community maintained [http://elrepo.org/tiki/tiki-index.php ELRepo] repository.
# Install from source files.
 
We will be using the 8.3.x version of DRBD. This tracks the Red Hat and Linbit supported versions, providing the most tested combination and a painless path to move to a fully supported version, should you decide to do so down the road.


{{note|1=Pay attention to the last sector number of each partition you create. As you create partitions, <span class="code">fdisk</span> will see free space, as tiny as it is, and it will default to that as the first sector for the next partition. This is annoying. By noting the last sector of each partition you create, you can add 1 sector and do the math to find the first sector above that which sits on a 64 KiB boundary.}}
=== Creating the Three Partitions ===

Here I will show you the values I entered to create the three partitions I needed on my nodes.

'''DO NOT COPY THIS!'''

The values you enter will almost certainly be different.

Start <span class="code">fdisk</span> in sector mode on <span class="code">/dev/sda</span>.

{{note|1=If you are using software [[RAID]], you will need to do the following steps on all disks, then you can proceed to create the RAID partitions normally and they will be aligned.}}

<source lang="bash">
fdisk -u /dev/sda
</source>
<source lang="text">
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c').
</source>

Disable DOS compatibility because hey, it's not the 80s any more.

<source lang="text">
Command (m for help): c
</source>
<source lang="text">
DOS Compatibility flag is not set
</source>

Let's take a look at the current partition layout.

== Option 1 - Fully Supported by Red Hat and Linbit ==

Red Hat decided to no longer directly support [[DRBD]] in [[EL6]] to narrow down what applications they shipped and focus on improving those components. Given the popularity of DRBD, however, Red Hat struck a deal with [[Linbit]], the authors and maintainers of DRBD. You have the option of purchasing a fully supported version of DRBD that is blessed by Red Hat for use under Red Hat Enterprise Linux 6.

If you are building a fully supported cluster, please [http://www.linbit.com/en/products-services/drbd/drbd-for-high-availability/ contact Linbit] to purchase DRBD. Once done, you will get an email with your login information and, most importantly here, the [[URL]] hash needed to access the official repositories.

First you will need to add an entry in <span class="code">/etc/yum.repos.d/</span> for DRBD, but this needs to be hand-crafted as you must specify the URL hash given to you in the email as part of the repo configuration.

* Log into the [https://my.linbit.com Linbit portal].
* Click on ''Account''.
* Under ''Your account details'', click on the hash string to the right of ''URL hash:''.
* Click on ''RHEL 6'' (even if you are using CentOS or another [[EL6]] distro).

This will take you to a new page called ''Instructions for using the DRBD package repository''. The detailed installation instructions are found there.

Let's use the imaginary URL hash of <span class="code">abcdefghijklmnopqrstuvwxyz0123456789ABCD</span> and assume we are in fact using the <span class="code">x86_64</span> architecture. Given this, we would create the following repository configuration file.

<source lang="bash">
vim /etc/yum.repos.d/linbit.repo
</source>
<source lang="text">
[drbd-8]
name=DRBD 8
baseurl=http://packages.linbit.com/abcdefghijklmnopqrstuvwxyz0123456789ABCD/rhel6/x86_64
gpgcheck=0
</source>

Once this is saved, you can install DRBD using <span class="code">yum</span>;

<source lang="bash">
yum install drbd kmod-drbd
</source>

Done!
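Before moving on, an optional sanity check (not a step from the Linbit instructions) is to confirm that <span class="code">yum</span> can actually see the new repository and the DRBD packages it offers;

<source lang="bash">
yum clean all
yum --disablerepo='*' --enablerepo='drbd-8' list available
</source>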
 
== Option 2 - Install From ELRepo ==


[http://elrepo.org ELRepo] is a community-maintained repository of packages for '''E'''nterprise '''L'''inux; Red Hat Enterprise Linux and its derivatives like CentOS. This is the easiest option for a freely available DRBD package.

The main concern with this option is that you are ceding control of DRBD to a community-controlled project. This is a trusted repo, but there are still undeniable security concerns.

Check for the latest installation RPM and information;
* [http://elrepo.org ELRepo Installation Page]

<source lang="bash">
# Install the ELRepo GPG key, add the repo and install DRBD.
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm
</source>
<source lang="text">
Retrieving http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm
Preparing...                ########################################### [100%]
   1:elrepo-release        ########################################### [100%]
</source>
<source lang="bash">
yum install drbd83-utils kmod-drbd83
</source>

<source lang="text">
Command (m for help): n
</source>
<source lang="text">
Command action
   e   extended
   p   primary partition (1-4)
e
Selected partition 4
First sector (92801024-976773167, default 92801024):
</source>


Just press <span class="code"><enter></span>.
This is the method used for this tutorial.
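If you want to confirm exactly what ELRepo installed before moving on, the package and kernel module versions can be checked like so (optional, not part of the original steps);

<source lang="bash">
rpm -q drbd83-utils kmod-drbd83
modinfo -F version drbd
</source>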
 
== Option 3 - Install From Source ==


<source lang="text">
Using default value 92801024
Last sector, +sectors or +size{K,M,G} (92801024-976773167, default 976773167):
</source>

Just press <span class="code"><enter></span> again.

If you do not wish to pay for access to the official DRBD repository and do not feel comfortable adding a public repository, your last option is to install from Linbit's source code. The benefit of this is that you can vet the source before installing it, making it a more secure option. The downside is that you will need to manually install updates and security fixes as they are made available.

On '''Both''' nodes run:


<source lang="bash">
# Download, compile and install DRBD
yum install flex gcc make kernel-devel
wget -c http://oss.linbit.com/drbd/8.3/drbd-8.3.15.tar.gz
tar -xvzf drbd-8.3.15.tar.gz
cd drbd-8.3.15
./configure \
   --prefix=/usr \
   --localstatedir=/var \
   --sysconfdir=/etc \
   --with-utils \
   --with-km \
   --with-udev \
   --with-pacemaker \
   --with-rgmanager \
   --with-bashcompletion
make
make install
chkconfig --add drbd
chkconfig drbd off
</source>

<source lang="text">
Using default value 976773167
</source>


Now we'll create the first partition. This will be a 20GB partition used by the shared [[GFS2]] partition. As it will never host a VM, I don't care if it is aligned.

<source lang="text">
Command (m for help): n
</source>
<source lang="text">
First sector (92803072-976773167, default 92803072):
</source>

Just press <span class="code"><enter></span>.

<source lang="text">
Using default value 92803072
</source>
<source lang="text">
Last sector, +sectors or +size{K,M,G} (92803072-976773167, default 976773167): +20G
</source>

=== Hooking DRBD Into The Cluster's Fencing ===

{{warning|1=This script has no delay built into it. In many cases, if the link between the DRBD resources fails, both nodes may fence simultaneously, causing both nodes to shut down. If you add <span class="code">sleep 10;</span> to '''one''' of the nodes, then you can ensure that dual-fencing won't occur.}}

We will use a script, written by [http://lon.fedorapeople.org/ Lon Hohberger] of Red Hat. This script will capture fence calls from DRBD and in turn call the cluster's <span class="code">fence_node</span> against the opposing node. In this way, DRBD will avoid split-brain without the need to maintain two separate fence configurations.

On '''Both''' nodes run:


Now we will create the last two partitions that will host our VMs. I want to split the remaining space in half, so I need to do a little bit more math before I can proceed. I will need to see how many sectors are still free, divide by two to get the number of sectors in half the remaining free space, then add the number of already-used sectors so that I know where the first partition should end. We'll do this math in just a moment.
<source lang="bash">
# Obliterate peer - fence via cman
wget -c https://alteeve.ca/files/an-cluster/sbin/obliterate-peer.sh -O /sbin/obliterate-peer.sh
chmod a+x /sbin/obliterate-peer.sh
ls -lah /sbin/obliterate-peer.sh
</source>
<source lang="text">
-rwxr-xr-x 1 root root 2.1K May 4 2011 /sbin/obliterate-peer.sh
</source>

So let's print the current partition layout:

<source lang="text">
Command (m for help): p
</source>
<source lang="text">
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00056856

   Device Boot      Start        End      Blocks   Id  System
/dev/sda1   *        2048      526335      262144   83  Linux
/dev/sda2          526336    84412415    41943040   83  Linux
/dev/sda3        84412416    92801023     4194304   82  Linux swap / Solaris
/dev/sda4        92801024   976773167   441986072    5  Extended
/dev/sda5        92803072   134746111    20971520   83  Linux
</source>


Start to create the new partition. Before we can sort out the last sector, we first need to find the first sector.
We'll configure DRBD to use this script shortly.


<source lang="text">
==== Alternate Fence Handler; rhcs_fence ====
Command (m for help): n
</source>
<source lang="text">
First sector (134748160-976773167, default 134748160):
</source>


Now I see that it the first free sector is <span class="code">134748160</span>. I divide this by <span class="code">128</span> and I get <span class="code">1052720</span>. It is an even number, so I don't need to do anything more as it is already on a 64 [[KiB]] boundry! So I can just press <span class="code"><enter></span> to accept it.
{{note|1=Caveat: The author of this tutorial is also the author of this script.}}


<source lang="text">
A new fence handler which ties DRBD into RHCS is now available called <span class="code">rhcs_fence</span> with the goal of replacing <span class="code">obliterate-peer.sh</span>. It aims to extend Lon's script, which hasn't been actively developed in some time.
Using default value 134748160
Last sector, +sectors or +size{K,M,G} (134748160-976773167, default 976773167):
</source>


Now we need to do the math to find what sector marks half of the remaining free space. Let's gather some numbers;
This agent has had minimal testing, so please test thoroughly when using it.  


* This partition started at sector <span class="code">134748160</span>
This agent addresses the simultaneous fencing issue by automatically adding a delay to the fence call based on the host node's ID number, with the node having ID of <span class="code">1</span> having no delay at all. It is also a little more elegant about how it handles the actual fence call with the goal of being more reliable when a fence action takes longer than usual to complete.
* The default end sector is <span class="code">976773167</span>
* That means that there are currently <span class="code">(976773167 - 134748160) == 842025007</span> sectors free.
* Half of that is <span class="code">(842025007 / 2) == int(421012503.5) == 421012503</span> sectors free (<span class="code">int()</span> simply means to take the remainder off the number).
* So if we want a partition that is <span class="code">421012503</span> long, we need to add the start sector to get our offset. That is, <span class="code">(421012503 + 134748160) == 555760663</span>. This is what we will enter now.


<source lang="text">
To install it, run the following on both nodes.
Last sector, +sectors or +size{K,M,G} (134748160-976773167, default 976773167): 555760663
</source>
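If you would rather let the shell do this arithmetic, the same numbers can be checked with a little <span class="code">bash</span>. This is only a sanity check of the math above; the sector values are, of course, specific to my disk.

<source lang="bash">
# Half of the remaining free space, expressed as an end sector for fdisk.
start=134748160
end=976773167
free=$(( end - start ))        # 842025007 sectors still free
half=$(( free / 2 ))           # 421012503 (integer division drops the .5)
echo $(( start + half ))       # 555760663, the end sector we give to fdisk
</source>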


==== Alternate Fence Handler; rhcs_fence ====

{{note|1=Caveat: The author of this tutorial is also the author of this script.}}

A new fence handler which ties DRBD into RHCS is now available, called <span class="code">rhcs_fence</span>, with the goal of replacing <span class="code">obliterate-peer.sh</span>. It aims to extend Lon's script, which hasn't been actively developed in some time.

This agent has had minimal testing, so please test thoroughly when using it.

This agent addresses the simultaneous fencing issue by automatically adding a delay to the fence call based on the host node's ID number, with the node having an ID of <span class="code">1</span> having no delay at all. It is also a little more elegant about how it handles the actual fence call, with the goal of being more reliable when a fence action takes longer than usual to complete.

To install it, run the following on both nodes.

<source lang="bash">
wget -c https://raw.github.com/digimer/rhcs_fence/master/rhcs_fence
chmod 755 rhcs_fence
mv rhcs_fence /sbin/
ls -lah /sbin/rhcs_fence
</source>
<source lang="text">
-rwxr-xr-x 1 root root 15K Jan 24 22:04 /usr/sbin/rhcs_fence
</source>
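The delay idea itself is simple enough to sketch. The fragment below is a hypothetical illustration of the approach described above, not code taken from <span class="code">rhcs_fence</span>; it just shows how a per-node delay derived from the cluster node ID keeps both nodes from firing their fence calls at the same instant.

<source lang="bash">
# Hypothetical illustration of an ID-based fence delay (not the real rhcs_fence).
# Node ID 1 fences immediately; higher IDs wait, so only one fence call "wins".
my_id=$(cman_tool status | awk '/^Node ID:/ { print $3 }')
delay=$(( (my_id - 1) * 10 ))
if [ "$delay" -gt 0 ]; then
	sleep "$delay"
fi
</source>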
Now to create the last partition, we will repeat the steps above.

<source lang="text">
Command (m for help): n
</source>
<source lang="text">
First sector (555762712-976773167, default 555762712): 
</source>


Let's make sure that <span class="code">555762712</span> is on a 64 KiB boundary;
* <span class="code">(555762712 / 128) == 4341896.1875</span> is not an even number, so we need to find the next sector on an even boundary.
* Add <span class="code">127</span> sectors and divide by 128 again;
** <span class="code">(555762712 + 127) == 555762839</span>
** <span class="code">(555762839 / 128) == int(4341897.1796875) == 4341897</span>
** <span class="code">(4341897 * 128) == 555762816</span>
* Now we know that we want our start sector to be <span class="code">555762816</span>.

<source lang="text">
First sector (555762712-976773167, default 555762712): 555762816
</source>
<source lang="text">
Last sector, +sectors or +size{K,M,G} (555762816-976773167, default 976773167): 
</source>

This is the last partition, so we can just press <span class="code"><enter></span> to get the last sector on the disk.

<source lang="text">
Using default value 976773167
</source>

=== The "Why" of Our Layout ===

We will be creating three separate DRBD resources. The reason for this is to minimize the chance of data loss in a [[split-brain]] event.

We're going to take steps to ensure that a [[split-brain]] is exceedingly unlikely, but we always have to plan for the worst case scenario. The biggest concern with recovering from a split-brain is that, by necessity, one of the nodes will lose data. Further, there is no way to automate the recovery, as there is no clear way for DRBD to tell which node has the more valuable data.


<source lang="text">
Consider this scenario;
Using default value 976773167
* You have a two-node cluster running two VMs. One is a mirror for a project and the other is an accounting application. Node 1 hosts the mirror, Node 2 hosts the accounting application.
</source>
* A partition occurs and both nodes try to fence the other.
* Network access is lost, so both nodes fall back to fencing using PDUs.
* Both nodes have redundant power supplies, and at some point in time, the power cables on the second PDU got reversed.
* The <span class="code">fence_apc_snmp</span> agent succeeds, because the requested outlets were shut off. However, do to the cabling mistake, neither node actually shut down.
* Both nodes proceed to run independently, thinking they are the only node left.
* During this split-brain, the mirror VM downloads over a [[gigabyte]] of updates. Meanwhile, an hour earlier, the accountant updates the books, totalling less than one [[megabyte]] of changes.


Lets take a final look at the new partition before committing the changes to disk.
At this point, you will need to discard the changed on one of the nodes. So now you have to choose;
* Is the node with the most changes more valid?
* Is the node with the most recent changes more valid?


<source lang="text">
Neither of these are true, as the node with the older data and smallest amount of changed data is the accounting data which is significantly more valuable.
Now imagine that both VMs have equally valuable data. What then? Which side do you discard?

The approach we will use is to create two separate DRBD resources. Then we will assign the VMs into two groups; the VMs designed to normally run on one node will go on one resource, while the VMs designed to normally run on the other node will share the second resource.

With all of the VMs on a given node sharing one DRBD resource, we can fairly easily decide which node to discard changes on, on a per-resource level.

To summarize, we're going to create the following three resources;
* <span class="code">r0</span>; A small resource for the shared files formatted with [[GFS2]].
* <span class="code">r1</span>; This resource will back the VMs designed to primarily run on <span class="code">an-node01</span>.
* <span class="code">r2</span>; This resource will back the VMs designed to primarily run on <span class="code">an-node02</span>.

Let's take a final look at the new partitions before committing the changes to disk.

<source lang="text">
Command (m for help): p
</source>
<source lang="text">
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00056856

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      526335      262144   83  Linux
/dev/sda2          526336    84412415    41943040   83  Linux
/dev/sda3        84412416    92801023     4194304   82  Linux swap / Solaris
/dev/sda4        92801024   976773167   441986072    5  Extended
/dev/sda5        92803072   134746111    20971520   83  Linux
/dev/sda6       134748160   555760663   210506252   83  Linux
/dev/sda7       555762816   976773167   210505176   83  Linux
</source>

Perfect. If you divide partition six or seven's start sector by <span class="code">128</span>, you will see that both have no remainder, which means that they are, in fact, aligned. This is the last time we need to worry about alignment because LVM uses an even multiple of 64 [[KiB]] in its [[extent]] sizes, so all normal extent sizes will always produce [[LV]]s on even 64 KiB boundaries.
So now write out the changes, re-probe the disk (or reboot) and then repeat all these steps on the other node.

<source lang="text">
Command (m for help): w
</source>
<source lang="text">
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
</source>


Now reprobe using <span class="code">partprobe</span>.

<source lang="bash">
partprobe /dev/sda
</source>
<source lang="text">
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
(Device or resource busy).  As a result, it may not reflect all of your changes
until after reboot.
</source>

In my case, the probe failed so I will reboot. To do this most safely, stop the cluster before calling <span class="code">reboot</span>.

<source lang="bash">
/etc/init.d/cman stop
</source>
<source lang="text">
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]
</source>

Now reboot.

<source lang="bash">
reboot
</source>

== Creating The Partitions For DRBD ==

It is possible to use [[LVM]] on the hosts, and simply create [[LV]]s to back our DRBD resources. However, this causes confusion as LVM will see the [[PV]] signatures on both the DRBD backing devices and the DRBD device itself. Getting around this requires editing LVM's <span class="code">filter</span> option, which is somewhat complicated. Not overly so, mind you, but enough to be outside the scope of this document.

Also, by working with <span class="code">fdisk</span> directly, it will give us a chance to make sure that the DRBD partitions start on an even 64 [[KiB]] boundary. This is important for decent performance on Windows VMs, as we will see later. This is true for both traditional platter and modern solid-state drives.

On our nodes, we created three primary disk partitions;
* <span class="code">/dev/sda1</span>; The <span class="code">/boot</span> partition.
* <span class="code">/dev/sda2</span>; The root <span class="code">/</span> partition.
* <span class="code">/dev/sda3</span>; The swap partition.

We will create a new extended partition. Then within it we will create three new partitions;
* <span class="code">/dev/sda5</span>; a small partition we will later use for our shared [[GFS2]] partition.
* <span class="code">/dev/sda6</span>; a partition big enough to host the VMs that will normally run on <span class="code">an-node01</span>.
* <span class="code">/dev/sda7</span>; a partition big enough to host the VMs that will normally run on <span class="code">an-node02</span>.

As we create each partition, we will do a little math to ensure that the start sector is on a 64 [[KiB]] boundary.
=== Block Alignment ===

For performance reasons, we want to ensure that the file systems created within a VM match the block alignment of the underlying storage stack, clear down to the base partitions on <span class="code">/dev/sda</span> (or whatever your lowest-level block device is).

Imagine this misaligned scenario;
<source lang="text">
Note: Not to scale
                ________________________________________________________________
VM File system  |~~~~~|_______|_______|_______|_______|_______|_______|_______|__
                |~~~~~|==========================================================
DRBD Partition  |~~~~~|_______|_______|_______|_______|_______|_______|_______|__
64 KiB block    |_______|_______|_______|_______|_______|_______|_______|_______|
512byte sectors |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|
</source>


Now, when the guest wants to write one block worth of data, it actually causes two blocks to be written, causing avoidable disk I/O.
<source lang="text">
Note: Not to scale
                ________________________________________________________________
VM File system  |~~~~~~~|_______|_______|_______|_______|_______|_______|_______|
                |~~~~~~~|========================================================
DRBD Partition  |~~~~~~~|_______|_______|_______|_______|_______|_______|_______|
64 KiB block    |_______|_______|_______|_______|_______|_______|_______|_______|
512byte sectors |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|
</source>
 
By changing the start cylinder of our partitions to always start on 64 [[KiB]] boundaries, we're sure to keep the guest OS's file system in-line with the DRBD backing device's blocks. Thus, all reads and writes in the guest OS effect a matching number of real blocks, maximizing disk I/O efficiency.
 
Thankfully, as we'll see in a moment, the <span class="code">parted</span> program has a mode that will tell it to always optimally align partitions, so we won't need to do any crazy math.
 
{{note|1=You will want to do this with [[SSD]] drives, too. It's true that the performance will remain about the same, but SSD drives have a limited number of write cycles, and aligning the blocks will minimize block writes.}}
 
Special thanks to [http://xen.org/community/spotlight/pasi.html Pasi Kärkkäinen] for his patience in explaining to me the importance of disk alignment. He created two images which I used as templates for the [[ASCII]] art images above;
* [http://pasik.reaktio.net/virtual-disk-partitions-not-aligned.jpg Virtual Disk Partitions, Not aligned.]
* [http://pasik.reaktio.net/virtual-disk-partitions-aligned.jpg Virtual Disk Partitions, aligned.]
 
=== Creating the DRBD Partitions ===
 
Here I will show you the values I entered to create the three partitions I needed on my nodes.


'''DO NOT DIRECTLY COPY THIS!'''

The values you enter will almost certainly be different.

We're going to use a program called <span class="code">parted</span> to configure the disk <span class="code">/dev/sda</span>. Pay close attention to the <span class="code">-a optimal</span> switch. This tells <span class="code">parted</span> to create new partitions with optimal block alignment, which is crucial for virtual machine performance.

== Configuring DRBD ==

DRBD is configured in two parts;

* Global and common configuration options
* Resource configurations

We will be creating three separate DRBD resources, so we will create three separate resource configuration files. More on that in a moment.

=== Configuring DRBD Global and Common Options ===

The first file to edit is <span class="code">/etc/drbd.d/global_common.conf</span>. In this file, we will set global configuration options and set default resource configuration options. These default resource options can be overwritten in the actual resource files which we'll create once we're done here.


<source lang="bash">
<source lang="bash">
cp /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
parted -a optimal /dev/sda
vim /etc/drbd.d/global_common.conf
</source>
diff -u /etc/drbd.d/global_common.conf.orig /etc/drbd.d/global_common.conf
<source lang="text">
GNU Parted 2.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)                                                                 
</source>
 
We're now in the <span class="code">parted</span> console. Before we start, let's take a look at the current disk configuration along with the amount of free space available.
 
<source lang="text">
print free
</source>
<source lang="text">
Model: ATA ST9500420ASG (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
 
Number  Start  End    Size    Type    File system    Flags
        32.3kB  1049kB  1016kB          Free Space
1      1049kB  269MB  268MB  primary  ext4            boot
  2      269MB  43.2GB  42.9GB  primary  ext4
3      43.2GB  47.5GB  4295MB  primary  linux-swap(v1)
        47.5GB  500GB  453GB            Free Space
</source>
</source>
<source lang="diff">
 
--- /etc/drbd.d/global_common.conf.orig 2011-09-14 14:03:56.364566109 -0400
Before we can create the three DRBD partitions, we first need to create an [[extended partition|extended]] partition within which we will create the three [[logical partition|logical]] partitions. From the output above, we can see that the free space starts at <span class="code">47.5GB</span>, and that the drive ends at <span class="code">500GB</span>. Knowing this, we can now create the extended partition.
+++ /etc/drbd.d/global_common.conf 2011-09-14 14:23:37.287566400 -0400
 
@@ -15,24 +15,81 @@
<source lang="text">
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
mkpart extended 47.5GB 500GB
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
</source>
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
<source lang="text">
+
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
+ # This script is a wrapper for RHCS's 'fence_node' command line
(Device or resource busy). As a result, it may not reflect all of your changes
+ # tool. It will call a fence against the other node and return
until after reboot.
+ # the appropriate exit code to DRBD.
+ fence-peer "/sbin/obliterate-peer.sh";
}
startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
+
+ # This tells DRBD to promote both nodes to Primary on start.
+ become-primary-on both;
+
+ # This tells DRBD to wait five minutes for the other node to
+ # connect. This should be longer than it takes for cman to
+ # timeout and fence the other node *plus* the amount of time it
+ # takes the other node to reboot. If you set this too short,
+ # you could corrupt your data. If you want to be extra safe, do
+ # not use this at all and DRBD will wait for the other node
+ # forever.
+ wfc-timeout 300;
+
+ # This tells DRBD to wait for the other node for three minutes
+ # if the other node was degraded the last time it was seen by
+ # this node. This is a way to speed up the boot process when
+ # the other node is out of commission for an extended duration.
+ degr-wfc-timeout 120;
}
disk {
# on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
# no-disk-drain no-md-flushes max-bio-bvecs
+
+ # This tells DRBD to block IO and fence the remote node (using
+ # the 'fence-peer' helper) when connection with the other node
+ # is unexpectedly lost. This is what helps prevent split-brain
+ # condition and it is incredible important in dual-primary
+ # setups!
+ fencing resource-and-stonith;
}
net {
# sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
# max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
# after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
+
+ # This tells DRBD to allow two nodes to be Primary at the same
+ # time. It is needed when 'become-primary-on both' is set.
+ allow-two-primaries;
+
+ # The following three commands tell DRBD how to react should
+ # our best efforts fail and a split brain occurs. You can learn
+ # more about these options by reading the drbd.conf man page.
+ # NOTE! It is not possible to safely recover from a split brain
+ # where both nodes were primary. This care requires human
+ # intervention, so 'disconnect' is the only safe policy.
+ after-sb-0pri discard-zero-changes;
+ after-sb-1pri discard-secondary;
+ after-sb-2pri disconnect;
}
syncer {
# rate after al-extents use-rle cpu-mask verify-alg csums-alg
+
+ # This alters DRBD's default syncer rate. Note that is it
+ # *very* important that you do *not* configure the syncer rate
+ # to be too fast. If it is too fast, it can significantly
+ # impact applications using the DRBD resource. If it's set to a
+ # rate higher than the underlying network and storage can
+ # handle, the sync can stall completely.
+ # This should be set to ~30% of the *tested* sustainable read  
+ # or write speed of the raw /dev/drbdX device (whichever is
+ # slower). In this example, the underlying resource was tested
+ # as being able to sustain roughly 60 MB/sec, so this is set to
+ # one third of that rate, 20M.
+ rate 20M;
}
}
</source>
</source>


=== Configuring the DRBD Resources ===
Don't worry about that message, we will reboot when we finish.


As mentioned earlier, we are going to create three DRBD resources.
So now we can confirm that the new extended partition was created by printing the partition table and the free space again.


* Resource <span class="code">r0</span>, which will be device <span class="code">/dev/drbd0</span>, will be the shared GFS2 partition.
<source lang="text">
* Resource <span class="code">r1</span>, which will be device <span class="code">/dev/drbd1</span>, will provide disk space for VMs that will normally run on <span class="code">an-node01</span>.
print free
* Resource <span class="code">r2</span>, which will be device <span class="code">/dev/drbd2</span>, will provide disk space for VMs that will normally run on <span class="code">an-node02</span>.
</source>
<source lang="text">
Model: ATA ST9500420ASG (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos


{{note|1=The reason for the two separate VM resources is to help protect against data loss on the off chance that a [[split-brain]] occurs, despite our counter-measures. As we will see later, recovering from a split brain requires discarding the changes on one side of the resource. If VMs are running on the same resource but on different nodes, this would lead to data loss. Using two resources helps prevent that scenario.}}
Number  Start  End    Size    Type      File system    Flags
        32.3kB  1049kB  1016kB            Free Space
1     1049kB  269MB  268MB  primary  ext4            boot
2      269MB  43.2GB  42.9GB  primary  ext4
3      43.2GB  47.5GB  4295MB  primary  linux-swap(v1)
4      47.5GB  500GB  453GB  extended                  lba
        47.5GB  500GB  453GB            Free Space
        500GB  500GB  24.6kB            Free Space
</source>


Each resource configuration will be in its own file saved as <span class="code">/etc/drbd.d/rX.res</span>. The three of them will be pretty much the same. So let's take a look at the first GFS2 resource <span class="code">r0.res</span>, then we'll just look at the changes for <span class="code">r1.res</span> and <span class="code">r2.res</span>. These files won't exist initially.
Perfect. So now we're going to create our three logical partitions. We're going to use the same start position as last time, but the end position will be 20 [[GiB]] further in.


<source lang="bash">
<source lang="text">
vim /etc/drbd.d/r0.res
mkpart logical 47.5GB 67.5GB
</source>
</source>
<source lang="text">
<source lang="text">
# This is the resource used for the shared GFS2 partition.
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
resource r0 {
(Device or resource busy).  As a result, it may not reflect all of your changes
# This is the block device path.
until after reboot.
        device          /dev/drbd0;
</source>


# We'll use the normal internal metadisk (takes about 32MB/TB)
We'll check again to see the new partition layout.
        meta-disk      internal;


# This is the `uname -n` of the first node
<source lang="text">
        on an-node01.alteeve.com {
print free
# The 'address' has to be the IP, not a hostname. This is the
</source>
# node's SN (bond1) IP. The port number must be unique amoung
<source lang="text">
# resources.
Model: ATA ST9500420ASG (scsi)
                address        10.10.0.1:7789;
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos


# This is the block device backing this resource on this node.
Number  Start  End    Size    Type      File system    Flags
                disk           /dev/sda5;
        32.3kB  1049kB  1016kB            Free Space
        }
1      1049kB  269MB  268MB  primary  ext4           boot
# Now the same information again for the second node.
2      269MB  43.2GB  42.9GB  primary  ext4
        on an-node02.alteeve.com {
3      43.2GB  47.5GB  4295MB  primary  linux-swap(v1)
                address        10.10.0.2:7789;
4      47.5GB  500GB  453GB  extended                  lba
                disk            /dev/sda5;
5      47.5GB  67.5GB  20.0GB  logical
         }
        67.5GB  500GB  433GB            Free Space
}
         500GB  500GB  24.6kB            Free Space
</source>
</source>


Now copy this to <span class="code">r1.res</span> and edit for the <span class="code">an-node01</span> VM resource. The main differences are the resource name, <span class="code">r1</span>, the block device, <span class="code">/dev/drbd1</span>, the port, <span class="code">7790</span> and the backing block devices, <span class="code">/dev/sda6</span>.
Again, perfect. Now I have a total of <span class="code">433[[GB]]</span> left free. How you carve this up for your VMs will depend entirely on what kind of VMs you plan to install and what their needs are. For me, I will divide the space evenly into two logical partitions of <span class="code">216.5GB</span> (<span class="code">433 / 2 = 216.5</span>).

The first partition will start at <span class="code">67.5GB</span> and end at <span class="code">284GB</span> (<span class="code">67.5 + 216.5 = 284</span>).
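If you want to let the shell double-check this arithmetic (purely optional), <span class="code">bc</span> handles the decimal math:

<source lang="bash">
echo "433 / 2" | bc -l      # 216.5GB of space for each of the two VM partitions
echo "67.5 + 216.5" | bc    # 284, the end point (in GB) of the first VM partition
</source>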


<source lang="bash">
<source lang="text">
cp /etc/drbd.d/r0.res /etc/drbd.d/r1.res
mkpart logical 67.5GB 284GB
vim /etc/drbd.d/r1.res
</source>
</source>
<source lang="text">
<source lang="text">
# This is the resource used for VMs that will normally run on an-node01.
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
resource r1 {
(Device or resource busy).  As a result, it may not reflect all of your changes
# This is the block device path.
until after reboot.
        device          /dev/drbd1;
</source>


# We'll use the normal internal metadisk (takes about 32MB/TB)
Once again, lets look at the new partition table.
        meta-disk      internal;


# This is the `uname -n` of the first node
<source lang="text">
        on an-node01.alteeve.com {
print free
# The 'address' has to be the IP, not a hostname. This is the
</source>
# node's SN (bond1) IP. The port number must be unique amoung
<source lang="text">
# resources.
Model: ATA ST9500420ASG (scsi)
                address        10.10.0.1:7790;
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos


# This is the block device backing this resource on this node.
Number  Start  End    Size    Type      File system    Flags
                disk           /dev/sda6;
        32.3kB  1049kB  1016kB            Free Space
        }
1      1049kB  269MB  268MB  primary  ext4           boot
# Now the same information again for the second node.
2      269MB  43.2GB  42.9GB  primary  ext4
        on an-node02.alteeve.com {
3      43.2GB  47.5GB  4295MB  primary  linux-swap(v1)
                address        10.10.0.2:7790;
4      47.5GB  500GB  453GB  extended                  lba
                disk            /dev/sda6;
5      47.5GB  67.5GB  20.0GB  logical
         }
6      67.5GB  284GB  216GB  logical
}
         284GB  500GB  216GB            Free Space
        500GB  500GB  24.6kB            Free Space
</source>
</source>


The last resource is again the same, with the same set of changes.
Finally, our last partition will start at <span class="code">284GB</span> and use the rest of the free space, ending at <span class="code">500GB</span>.


<source lang="bash">
<source lang="text">
cp /etc/drbd.d/r1.res /etc/drbd.d/r2.res
mkpart logical 284GB 500GB
vim /etc/drbd.d/r2.res
</source>
</source>
<source lang="text">
<source lang="text">
# This is the resource used for VMs that will normally run on an-node02.
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
resource r2 {
(Device or resource busy).  As a result, it may not reflect all of your changes
# This is the block device path.
until after reboot.
        device          /dev/drbd2;
</source>


# We'll use the normal internal metadisk (takes about 32MB/TB)
One last time, let's look at the partition table.
        meta-disk      internal;


# This is the `uname -n` of the first node
<source lang="text">
         on an-node01.alteeve.com {
print free
# The 'address' has to be the IP, not a hostname. This is the
</source>
# node's SN (bond1) IP. The port number must be unique amoung
<source lang="text">
# resources.
Model: ATA ST9500420ASG (scsi)
                address        10.10.0.1:7791;
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
 
Number  Start  End    Size    Type      File system    Flags
         32.3kB  1049kB  1016kB            Free Space
1      1049kB  269MB  268MB  primary  ext4            boot
2      269MB  43.2GB  42.9GB  primary  ext4
3      43.2GB  47.5GB  4295MB  primary  linux-swap(v1)
4      47.5GB  500GB  453GB  extended                  lba
5      47.5GB  67.5GB  20.0GB  logical
6      67.5GB  284GB  216GB  logical
7      284GB  500GB  216GB  logical
        500GB  500GB  24.6kB            Free Space
</source>
 
Just as we asked for. Before we finish though, let's be extra careful and do a manual check of our three partitions to ensure that they are, in fact, aligned optimally. There will be no output from the following commands if the partitions are aligned.
 
<source lang="text">
(parted) align-check opt 5
(parted) align-check opt 6
(parted) align-check opt 7
(parted)                                                                
</source>
 
Excellent! We can now exit.
 
<source lang="text">
quit
</source>
<source lang="text">
Information: You may need to update /etc/fstab.                          
</source>
 
Now we need to reboot to make the kernel see the new partition table.


# This is the block device backing this resource on this node.
<source lang="bash">
                disk            /dev/sda7;
reboot
        }
# Now the same information again for the second node.
        on an-node02.alteeve.com {
                address        10.10.0.2:7791;
                disk            /dev/sda7;
        }
}
</source>
</source>


The final step is to validate the configuration. This is done by running the following command;
Done! Do this for both nodes, then proceed.
 
== Configuring DRBD ==
 
DRBD is configured in two parts;
 
* Global and common configuration options
* Resource configurations
 
We will be creating three separate DRBD resources, so we will create three separate resource configuration files. More on that in a moment.
 
=== Configuring DRBD Global and Common Options ===
 
The first file to edit is <span class="code">/etc/drbd.d/global_common.conf</span>. In this file, we will set global configuration options and set default resource configuration options. These default resource options can be overwritten in the actual resource files which we'll create once we're done here.
 
I'll explain the values we're setting here, and we'll put the explanation of each option in the file itself, as it will be useful to have them should you need to alter the files sometime in the future.
 
The first addition is in the <span class="code">handlers { }</span> directive. We're going to add the <span class="code">fence-peer</span> option and configure it to use the <span class="code">obliterate-peer.sh</span> script we spoke about earlier in the DRBD section.


<source lang="bash">
<source lang="bash">
drbdadm dump
vim /etc/drbd.d/global_common.conf
</source>
</source>
<source lang="text">
<source lang="text">
# /etc/drbd.conf
handlers {
common {
# This script is a wrapper for RHCS's 'fence_node' command line
    protocol              C;
# tool. It will call a fence against the other node and return
    net {
# the appropriate exit code to DRBD.
        allow-two-primaries;
fence-peer "/sbin/obliterate-peer.sh";
        after-sb-0pri    discard-zero-changes;
}
        after-sb-1pri    discard-secondary;
</source>
        after-sb-2pri    disconnect;
 
    }
{{note|1=If you used the <span class="code">rhcs_fence</span> handler, use '<span class="code">fence-peer "/usr/sbin/rhcs_fence";</span>'.}}
    disk {
 
        fencing          resource-and-stonith;
We're going to add three options to the <span class="code">startup { }</span> directive; we're going to tell DRBD to promote both nodes to "primary" on start, to wait five minutes on start for its peer to connect and, if the peer was degraded the last time it was seen, to wait up to two minutes instead.
    }
 
    syncer {
<source lang="text">
        rate            20M;
startup {
    }
# This tells DRBD to promote both nodes to Primary on start.
    startup {
become-primary-on both;
        wfc-timeout      300;
        degr-wfc-timeout 120;
        become-primary-on both;
    }
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error  "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer      /sbin/obliterate-peer.sh;
    }
}


# resource r0 on an-node01.alteeve.com: not ignored, not stacked
# This tells DRBD to wait five minutes for the other node to
resource r0 {
# connect. This should be longer than it takes for cman to
    on an-node01.alteeve.com {
# timeout and fence the other node *plus* the amount of time it
        device          /dev/drbd0 minor 0;
# takes the other node to reboot. If you set this too short,
        disk            /dev/sda5;
# you could corrupt your data. If you want to be extra safe, do
        address          ipv4 10.10.0.1:7789;
# not use this at all and DRBD will wait for the other node
        meta-disk        internal;
# forever.
    }
wfc-timeout 300;
    on an-node02.alteeve.com {
        device          /dev/drbd0 minor 0;
        disk            /dev/sda5;
        address          ipv4 10.10.0.2:7789;
        meta-disk        internal;
    }
}


# resource r1 on an-node01.alteeve.com: not ignored, not stacked
# This tells DRBD to wait for the other node for three minutes
resource r1 {
# if the other node was degraded the last time it was seen by
    on an-node01.alteeve.com {
# this node. This is a way to speed up the boot process when
        device          /dev/drbd1 minor 1;
# the other node is out of commission for an extended duration.
        disk            /dev/sda6;
degr-wfc-timeout 120;
        address          ipv4 10.10.0.1:7790;
}
        meta-disk        internal;
</source>
    }
    on an-node02.alteeve.com {
        device          /dev/drbd1 minor 1;
        disk            /dev/sda6;
        address          ipv4 10.10.0.2:7790;
        meta-disk        internal;
    }
}
 
# resource r2 on an-node01.alteeve.com: not ignored, not stacked
resource r2 {
    on an-node01.alteeve.com {
        device          /dev/drbd2 minor 2;
        disk            /dev/sda7;
        address          ipv4 10.10.0.1:7791;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device          /dev/drbd2 minor 2;
        disk            /dev/sda7;
        address          ipv4 10.10.0.2:7791;
        meta-disk        internal;
    }
}
</source>


You'll note that the output is formatted differently, but the values themselves are the same. If there had been errors, you would have seen them printed. Fix any problems before proceeding. Once you get a clean dump, copy the configuration over to the other node.
For the <span class="code">disk { }</span> directive, we're going to configure DRBD's behaviour when a [[split-brain]] is detected. By setting <span class="code">fencing</span> to <span class="code">resource-and-stonith</span>, we're telling DRBD to stop all disk access and call a fence against its peer node rather than proceeding.


<source lang="bash">
rsync -av /etc/drbd.d root@an-node02:/etc/
</source>
<source lang="text">
<source lang="text">
sending incremental file list
disk {
drbd.d/
# This tells DRBD to block IO and fence the remote node (using
drbd.d/global_common.conf
# the 'fence-peer' helper) when connection with the other node
drbd.d/global_common.conf.orig
# is unexpectedly lost. This is what helps prevent split-brain
drbd.d/r0.res
# condition and it is incredible important in dual-primary
drbd.d/r1.res
# setups!
drbd.d/r2.res
fencing resource-and-stonith;
 
}
sent 7619 bytes  received 129 bytes  15496.00 bytes/sec
total size is 7946  speedup is 1.03
</source>
</source>


== Initializing The DRBD Resources ==
In the <span class="code">net { }</span> directive, we're going to tell DRBD that it is allowed to run in dual-primary mode and we're going to configure how it behaves if a split-brain has occurred, despite our best efforts. The recovery (or lack there of) requires three options; What to do when neither node had been primary (<span class="code">after-sb-0pri</span>), what to do if only one node had been primary (<span class="code">after-sb-1pri</span>) and finally, what to do if both nodes had been primary (<span class="code">after-sb-2pri</span>), as will most likely be the case for us. This last instance will be configured to tell DRBD just to drop the connection, which will require human intervention to correct.
 
Now that we have DRBD configured, we need to initialize the DRBD backing devices and then bring up the resources for the first time.


{{note|1=To save a bit of time and typing, the following sections will use a little <span class="code">bash</span> magic. When commands need to be run on all three resources, rather than running the same command three times with the different resource names, we will use the short-hand form <span class="code">r{0,1,2}</span> or <span class="code">r{0..2}</span>.}}
At this point, you might be wondering why we won't simply run Primary/Secondary. The reason is live-migration. When we push a VM across to the backup node, there is a short period of time where both nodes need to be writeable.


On '''both''' nodes, create the new metadata on the backing devices. You may need to type <span class="code">yes</span> to confirm the action if any data is seen. If DRBD sees an actual file system, it will error and insist that you clear the partition. You can do this by running; <span class="code">dd if=/dev/zero of=/dev/sdaX bs=4M count=1000</span>, where <span class="code">X</span> is the partition you want to clear.
<source lang="text">
net {
# This tells DRBD to allow two nodes to be Primary at the same
# time. It is needed when 'become-primary-on both' is set.
allow-two-primaries;


<source lang="bash">
# The following three commands tell DRBD how to react should
drbdadm create-md r{0..2}
# our best efforts fail and a split brain occurs. You can learn
# more about these options by reading the drbd.conf man page.
# NOTE! It is not possible to safely recover from a split brain
# where both nodes were primary. This care requires human
# intervention, so 'disconnect' is the only safe policy.
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
}
</source>
</source>
<source lang="text">
md_offset 21474832384
al_offset 21474799616
bm_offset 21474144256


Found some data
We'll make our usual backup of the configuration file, add the new sections and then create a diff to see exactly how things have changed.


==> This might destroy existing data! <==
<source lang="bash">
 
cp /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
Do you want to proceed?
vim /etc/drbd.d/global_common.conf
diff -u  /etc/drbd.d/global_common.conf.orig /etc/drbd.d/global_common.conf
</source>
</source>
<source lang="text">
<source lang="diff">
[need to type 'yes' to confirm] yes
--- /etc/drbd.d/global_common.conf.orig 2011-12-13 22:22:30.916128360 -0500
+++ /etc/drbd.d/global_common.conf 2011-12-13 22:26:30.733379609 -0500
@@ -14,22 +14,67 @@
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
+
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
+                # This script is a wrapper for RHCS's 'fence_node' command line
+                # tool. It will call a fence against the other node and return
+                # the appropriate exit code to DRBD.
+                fence-peer              "/sbin/obliterate-peer.sh";
}
startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
+
+                # This tells DRBD to promote both nodes to Primary on start.
+                become-primary-on      both;
+
+                # This tells DRBD to wait five minutes for the other node to
+                # connect. This should be longer than it takes for cman to
+                # timeout and fence the other node *plus* the amount of time it
+                # takes the other node to reboot. If you set this too short,
+                # you could corrupt your data. If you want to be extra safe, do
+                # not use this at all and DRBD will wait for the other node
+                # forever.
+                wfc-timeout            300;
+
+                # This tells DRBD to wait for the other node for three minutes
+                # if the other node was degraded the last time it was seen by
+                # this node. This is a way to speed up the boot process when
+                # the other node is out of commission for an extended duration.
+                degr-wfc-timeout        120;
}
disk {
# on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
# no-disk-drain no-md-flushes max-bio-bvecs
+
+                # This tells DRBD to block IO and fence the remote node (using
+                # the 'fence-peer' helper) when connection with the other node
+                # is unexpectedly lost. This is what helps prevent split-brain
+                # condition and it is incredible important in dual-primary
+                # setups!
+                fencing                resource-and-stonith;
}
net {
# sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
# max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
# after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
+
+
+                # This tells DRBD to allow two nodes to be Primary at the same
+                # time. It is needed when 'become-primary-on both' is set.
+                allow-two-primaries;
+
+                # The following three commands tell DRBD how to react should
+                # our best efforts fail and a split brain occurs. You can learn
+                # more about these options by reading the drbd.conf man page.
+                # NOTE! It is not possible to safely recover from a split brain
+                # where both nodes were primary. This care requires human
+                # intervention, so 'disconnect' is the only safe policy.
+                after-sb-0pri          discard-zero-changes;
+                after-sb-1pri          discard-secondary;
+                after-sb-2pri          disconnect;
}
syncer {
</source>
</source>
<source lang="text">


Writing meta data...
=== Configuring the DRBD Resources ===
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
md_offset 215558397952
al_offset 215558365184
bm_offset 215551782912


Found some data
As mentioned earlier, we are going to create three DRBD resources.


==> This might destroy existing data! <==
* Resource <span class="code">r0</span>, which will be device <span class="code">/dev/drbd0</span>, will be the shared GFS2 partition.
* Resource <span class="code">r1</span>, which will be device <span class="code">/dev/drbd1</span>, will provide disk space for VMs that will normally run on <span class="code">an-node01</span>.
* Resource <span class="code">r2</span>, which will be device <span class="code">/dev/drbd2</span>, will provide disk space for VMs that will normally run on <span class="code">an-node02</span>.


Do you want to proceed?
{{note|1=The reason for the two separate VM resources is to help protect against data loss on the off chance that a [[split-brain]] occurs, despite our counter-measures. As we will see later, recovering from a split brain requires discarding the changes on one side of the resource. If VMs are running on the same resource but on different nodes, this would lead to data loss. Using two resources helps prevent that scenario.}}
</source>
<source lang="text">
[need to type 'yes' to confirm] yes
</source>
<source lang="text">


Writing meta data...
Each resource configuration will be in its own file saved as <span class="code">/etc/drbd.d/rX.res</span>. The three of them will be pretty much the same. So let's take a look at the first GFS2 resource <span class="code">r0.res</span>, then we'll just look at the changes for <span class="code">r1.res</span> and <span class="code">r2.res</span>. These files won't exist initially.
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
md_offset 215557296128
al_offset 215557263360
bm_offset 215550681088


Found some data
<source lang="bash">
 
vim /etc/drbd.d/r0.res
==> This might destroy existing data! <==
 
Do you want to proceed?
</source>
<source lang="text">
[need to type 'yes' to confirm] yes
</source>
</source>
<source lang="text">
<source lang="text">
# This is the resource used for the shared GFS2 partition.
resource r0 {
# This is the block device path.
device /dev/drbd0;


Writing meta data...
# We'll use the normal internal metadisk (takes about 32MB/TB)
initializing activity log
meta-disk internal;
NOT initialized bitmap
New drbd meta data block successfully created.
success
</source>


Before you go any further, we'll need to load the <span class="code">drbd</span> kernel module. Note that you won't normally need to do this. Later, after we get everything running the first time, we'll be able to start and stop the DRBD resources using the <span class="code">/etc/init.d/drbd</span> script, which loads and unloads the <span class="code">drbd</span> kernel module as needed.
# This is the `uname -n` of the first node
on an-node01.alteeve.ca {
# The 'address' has to be the IP, not a hostname. This is the
# node's SN (bond1) IP. The port number must be unique amoung
# resources.
address 10.10.0.1:7788;


<source lang="bash">
# This is the block device backing this resource on this node.
modprobe drbd
disk /dev/sda5;
}
# Now the same information again for the second node.
on an-node02.alteeve.ca {
address 10.10.0.2:7788;
disk /dev/sda5;
}
}
</source>
</source>


Now go back to the terminal windows we had used to watch the cluster start. We now want to watch the output of <span class="code">cat /proc/drbd</span> so we can keep tabs on the current state of the DRBD resources. We'll do this by using the <span class="code">watch</span> program, which will refresh the output of the <span class="code">cat</span> call every couple of seconds.
Now copy this to <span class="code">r1.res</span> and edit for the <span class="code">an-node01</span> VM resource. The main differences are the resource name, <span class="code">r1</span>, the block device, <span class="code">/dev/drbd1</span>, the port, <span class="code">7790</span> and the backing block devices, <span class="code">/dev/sda6</span>.


<source lang="bash">
<source lang="bash">
watch cat /proc/drbd
cp /etc/drbd.d/r0.res /etc/drbd.d/r1.res
vim /etc/drbd.d/r1.res
</source>
</source>
<source lang="text">
<source lang="text">
version: 8.3.11 (api:88/proto:86-96)
# This is the resource used for VMs that will normally run on an-node01.
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by dag@Build64R6, 2011-08-08 08:54:05
resource r1 {
</source>
# This is the block device path.
device /dev/drbd1;


Back in the first terminal, we need to <span class="code">attach</span> the backing devices, <span class="code">/dev/sda{5..7}</span>, to their respective DRBD resources, <span class="code">r{0..2}</span>. After running the following command, you will see no output on the first terminal, but the second terminal's <span class="code">/proc/drbd</span> should update.
# We'll use the normal internal metadisk (takes about 32MB/TB)
meta-disk internal;
 
# This is the `uname -n` of the first node
on an-node01.alteeve.ca {
# The 'address' has to be the IP, not a hostname. This is the
# node's SN (bond1) IP. The port number must be unique amoung
# resources.
address 10.10.0.1:7789;
 
# This is the block device backing this resource on this node.
disk /dev/sda6;
}
# Now the same information again for the second node.
on an-node02.alteeve.ca {
address 10.10.0.2:7789;
disk /dev/sda6;
}
}
</source>
 
The last resource is again the same, with the same set of changes.


<source lang="bash">
<source lang="bash">
drbdadm attach r{0..2}
cp /etc/drbd.d/r1.res /etc/drbd.d/r2.res
vim /etc/drbd.d/r2.res
</source>
</source>
<source lang="text">
<source lang="text">
version: 8.3.11 (api:88/proto:86-96)
# This is the resource used for VMs that will normally run on an-node02.
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by dag@Build64R6, 2011-08-08 08:54:05
resource r2 {
0: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown  r----s
# This is the block device path.
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:20970844
device /dev/drbd2;
1: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown  r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:210499788
2: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown  r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:210498712
</source>


Take note of the connection state, <span class="code">cs:StandAlone</span>, the current role, <span class="code">ro:Secondary/Unknown</span> and the disk state, <span class="code">ds:Inconsistent/DUnknown</span>. This tells us that our resources are not talking to one another, are not usable because they are in the <span class="code">Secondary</span> state (you can't even read the <span class="code">/dev/drbdX</span> device) and that the backing device does not have an up to date view of the data.
# We'll use the normal internal metadisk (takes about 32MB/TB)
meta-disk internal;


This all makes sense of course, as the resources are brand new.
# This is the `uname -n` of the first node
on an-node01.alteeve.ca {
# The 'address' has to be the IP, not a hostname. This is the
# node's SN (bond1) IP. The port number must be unique amoung
# resources.
address 10.10.0.1:7790;


So the next step is to <span class="code">connect</span> the two nodes together. As before, we won't see any output from the first terminal, but the second terminal will change.
# This is the block device backing this resource on this node.
disk /dev/sda7;
}
# Now the same information again for the second node.
on an-node02.alteeve.ca {
address 10.10.0.2:7790;
disk /dev/sda7;
}
}
</source>


{{note|1=After running the following command on the first node, its connection state will become <span class="code">cs:WFConnection</span>, which means that it is '''w'''aiting '''f'''or a '''connection''' from the other node.}}
The final step is to validate the configuration. This is done by running the following command;


<source lang="bash">
<source lang="bash">
drbdadm connect r{0..2}
drbdadm dump
</source>
</source>
<source lang="text">
<source lang="text">
version: 8.3.11 (api:88/proto:86-96)
# /etc/drbd.conf
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by dag@Build64R6, 2011-08-08 08:54:05
common {
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    protocol              C;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:20970844
    net {
1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
        allow-two-primaries;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:210499788
        after-sb-0pri    discard-zero-changes;
2: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
        after-sb-1pri    discard-secondary;
     ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:210498712
        after-sb-2pri    disconnect;
</source>
    }
    disk {
        fencing          resource-and-stonith;
    }
    startup {
        wfc-timeout      300;
        degr-wfc-timeout 120;
        become-primary-on both;
    }
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error  "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer      /sbin/obliterate-peer.sh;
     }
}


We can now see that the two nodes are talking to one another properly as the connection state has changed to <span class="code">cs:Connected</span>. They can see that their peer node is in the same state as they are; <span class="code">Secondary</span>/<span class="code">Inconsistent</span>.
# resource r0 on an-node01.alteeve.ca: not ignored, not stacked
resource r0 {
    on an-node01.alteeve.ca {
        device          /dev/drbd0 minor 0;
        disk            /dev/sda5;
        address          ipv4 10.10.0.1:7788;
        meta-disk        internal;
    }
    on an-node02.alteeve.ca {
        device          /dev/drbd0 minor 0;
        disk            /dev/sda5;
        address          ipv4 10.10.0.2:7788;
        meta-disk        internal;
    }
}


Seeing as the resources are brand new, there is no data to synchronize or save. So we're going to issue a special command that will only ever be used this one time. It will tell DRBD to immediately consider the DRBD resources to be up to date.
# resource r1 on an-node01.alteeve.ca: not ignored, not stacked
 
resource r1 {
On '''one''' node only, run;
    on an-node01.alteeve.ca {
 
        device          /dev/drbd1 minor 1;
<source lang="bash">
        disk            /dev/sda6;
drbdadm -- --clear-bitmap new-current-uuid r{0..2}
        address          ipv4 10.10.0.1:7789;
        meta-disk        internal;
    }
    on an-node02.alteeve.ca {
        device          /dev/drbd1 minor 1;
        disk            /dev/sda6;
        address          ipv4 10.10.0.2:7789;
        meta-disk        internal;
    }
}
 
# resource r2 on an-node01.alteeve.ca: not ignored, not stacked
resource r2 {
    on an-node01.alteeve.ca {
        device          /dev/drbd2 minor 2;
        disk            /dev/sda7;
        address          ipv4 10.10.0.1:7790;
        meta-disk        internal;
    }
    on an-node02.alteeve.ca {
        device          /dev/drbd2 minor 2;
        disk            /dev/sda7;
        address          ipv4 10.10.0.2:7790;
        meta-disk        internal;
    }
}
</source>
</source>


As before, look to the second terminal to see the new state of affairs.
You'll note that the output is formatted differently from the configuration files we created, but the values themselves are the same. If there had been errors, you would have seen them printed. Fix any problems before proceeding. Once you get a clean dump, copy the configuration over to the other node.


<source lang="bash">
rsync -av /etc/drbd.d root@an-node02:/etc/
</source>
<source lang="text">
<source lang="text">
version: 8.3.11 (api:88/proto:86-96)
sending incremental file list
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by dag@Build64R6, 2011-08-08 08:54:05
drbd.d/
0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
drbd.d/global_common.conf
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
drbd.d/global_common.conf.orig
1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
drbd.d/r0.res
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
drbd.d/r1.res
  2: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
drbd.d/r2.res
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 
sent 7534 bytes received 129 bytes  5108.67 bytes/sec
total size is 7874  speedup is 1.03
</source>
</source>


Voila!
== Initializing The DRBD Resources ==
 
Now that we have DRBD configured, we need to initialize the DRBD backing devices and then bring up the resources for the first time.


{{note|1=To save a bit of time and typing, the following sections will use a little <span class="code">bash</span> magic. When commands need to be run on all three resources, rather than running the same command three times with the different resource names, we will use the short-hand form <span class="code">r{0,1,2}</span> or <span class="code">r{0..2}</span>.}}

On '''both''' nodes, create the new [[DRBD metadata|metadata]] on the backing devices. You may need to type <span class="code">yes</span> to confirm the action if any data is seen. If DRBD sees an actual file system, it will error and insist that you clear the partition. You can do this by running; <span class="code">dd if=/dev/zero of=/dev/sdaX bs=4M</span>, where <span class="code">X</span> is the partition you want to clear. This is called "zeroing out" a partition. The <span class="code">dd</span> program does not print its progress, and can take a long time. To check the progress, open a new session to the server and run '<span class="code">kill -USR1 $(pgrep -l '^dd$' | awk '{ print $1 }')</span>'.

If DRBD sees old metadata, it will prompt you to type <span class="code">yes</span> before it will proceed. In my case, I had recently zeroed-out my drive so DRBD had no concerns and just created the metadata for the three resources.

<source lang="bash">
drbdadm create-md r{0..2}
</source>
<source lang="text">
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
</source>

We could promote both sides to <span class="code">Primary</span> by running <span class="code">drbdadm primary r{0..2}</span> on both nodes, but there is no purpose in doing that at this stage as we can safely say our DRBD is ready to go. So instead, let's just stop DRBD entirely. We'll also prevent it from starting on boot as <span class="code">drbd</span> will be managed by the cluster in a later step.

On '''both''' nodes run;

<source lang="bash">
chkconfig drbd off
/etc/init.d/drbd stop
</source>
<source lang="text">
Stopping all DRBD resources: .
</source>


The second terminal will start complaining that <span class="code">/proc/drbd</span> no longer exists. This is because the <span class="code">drbd</span> init script unloaded the <span class="code">drbd</span> kernel module. It is expected and not a problem.

= Configuring Clustered Storage =

Before we can provision the first virtual machine, we must first create the storage that will back them. This will take a few steps;

* Configuring [[LVM]]'s clustered locking and creating the [[PV]]s, [[VG]]s and [[LV]]s
* Formatting and configuring the shared [[GFS2]] partition.
* Adding storage to the cluster's resource management.

== Clustered Logical Volume Management ==

We will assign all three DRBD resources to be managed by clustered LVM. This isn't strictly needed for the [[GFS2]] partition, as it uses DLM directly. However, the flexibility of LVM is very appealing, and will make later growth of the GFS2 partition quite trivial, should the need arise.

The real reason for clustered LVM in our cluster is to provide DLM-backed locking to the partitions, or logical volumes in LVM, that will be used to back our VMs. Of course, the flexibility of LVM managed storage is enough of a win to justify using LVM for our VMs in itself, and shouldn't be ignored here.

=== Configuring Clustered LVM Locking ===

Before we create the clustered LVM, we need to first make three changes to the LVM configuration.
* We need to filter out the DRBD backing devices so that LVM doesn't see the same signature twice.
* Switch from local locking to clustered locking.
* Prevent fall-back to local locking when the cluster is not available.

The configuration option to filter out the DRBD backing device is, surprisingly, <span class="code">filter = [ ... ]</span>. By default, it is set to allow everything via the <span class="code">"a/.*/"</span> regular expression. We're only using DRBD in our LVM, so we're going to flip that to reject everything ''except'' DRBD by changing the regex to <span class="code">"a|/dev/drbd*|", "r/.*/"</span>. If we didn't do this, LVM would see the same signature on the DRBD device and again on the backing devices, at which time it would ignore the DRBD device. This filter allows LVM to only inspect the DRBD devices for LVM signatures.

Before you go any further, we'll need to load the <span class="code">drbd</span> kernel module. Note that you won't normally need to do this. Later, after we get everything running the first time, we'll be able to start and stop the DRBD resources using the <span class="code">/etc/init.d/drbd</span> script, which loads and unloads the <span class="code">drbd</span> kernel module as needed.

<source lang="bash">
modprobe drbd
</source>

Now go back to the terminal windows we had used to watch the cluster start. We now want to watch the output of <span class="code">cat /proc/drbd</span> so we can keep tabs on the current state of the DRBD resources. We'll do this by using the <span class="code">watch</span> program, which will refresh the output of the <span class="code">cat</span> call every couple of seconds.

<source lang="bash">
watch cat /proc/drbd
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
</source>

Back in the first terminal, we need to <span class="code">attach</span> the backing device, <span class="code">/dev/sda{5..7}</span> to their respective DRBD resources, <span class="code">r{0..2}</span>. After running the following command, you will see no output on the first terminal, but the second terminal's <span class="code">/proc/drbd</span> should update.

<source lang="bash">
drbdadm attach r{0..2}
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown   r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19515784
1: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown   r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211418788
2: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown   r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211034800
</source>

Take note of the connection state, <span class="code">cs:StandAlone</span>, the current role, <span class="code">ro:Secondary/Unknown</span> and the disk state, <span class="code">ds:Inconsistent/DUnknown</span>. This tells us that our resources are not talking to one another, are not usable because they are in the <span class="code">Secondary</span> state (you can't even read the <span class="code">/dev/drbdX</span> device) and that the backing device does not have an up to date view of the data.

This all makes sense of course, as the resources are brand new.

So the next step is to <span class="code">connect</span> the two nodes together. As before, we won't see any output from the first terminal, but the second terminal will change.

{{note|1=After running the following command on the first node, its connection state will become <span class="code">cs:WFConnection</span> which means that it is '''w'''aiting '''f'''or a '''connection''' from the other node.}}

<source lang="bash">
drbdadm connect r{0..2}
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19515784
1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211418788
2: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211034800
</source>
We can now see that the two nodes are talking to one another properly as the connection state has changed to <span class="code">cs:Connected</span>. They can see that their peer node is in the same state as they are; <span class="code">Secondary</span>/<span class="code">Inconsistent</span>.


For the locking, we're going to change the <span class="code">locking_type</span> from <span class="code">1</span> (local locking) to <span class="code">3</span> (clustered locking).
Seeing as the resources are brand new, there is no data to synchronize between the two nodes. We're going to issue a special command that will only ever be used this one time. It will tell DRBD to immediately consider the DRBD resources to be up to date.


We're also going to disallow fall-back to local locking. Normally, LVM would try to access a clustered LVM [[VG]] using local locking if DLM is not available. We want to prevent any access to the clustered LVM volumes except when the cluster is itself running. This is done by changing <span class="code">fallback_to_local_locking</span> to <span class="code">0</span>.
On '''one''' node only, run;


<source lang="bash">
<source lang="bash">
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig
drbdadm -- --clear-bitmap new-current-uuid r{0..2}
vim /etc/lvm/lvm.conf
diff -u /etc/lvm/lvm.conf.orig /etc/lvm/lvm.conf
</source>
</source>
As before, look to the second terminal to see the new state of affairs.
<source lang="text">
<source lang="text">
--- /etc/lvm/lvm.conf.orig 2011-09-16 14:07:08.500691102 -0400
version: 8.3.12 (api:88/proto:86-96)
+++ /etc/lvm/lvm.conf 2011-10-10 16:28:24.530564214 -0400
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
@@ -50,7 +50,8 @@
  0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
   
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
  1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    # By default we accept every block device:
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
-    filter = [ "a/.*/" ]
  2: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
+    #filter = [ "a/.*/" ]
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
+    filter = [ "a|/dev/drbd*|", "r/.*/" ]
    # Exclude the cdrom drive
    # filter = [ "r|/dev/cdrom|" ]
@@ -278,7 +279,8 @@
    # Type 3 uses built-in clustered locking.
    # Type 4 uses read-only locking which forbids any operations that might
    # change metadata.
-   locking_type = 1
+    #locking_type = 1
+    locking_type = 3
   
    # Set to 0 to fail when a lock request cannot be satisfied immediately.
    wait_for_locks = 1
@@ -294,7 +296,7 @@
    # to 1 an attempt will be made to use local file-based locking (type 1).
    # If this succeeds, only commands against local volume groups will proceed.
    # Volume Groups marked as clustered will be ignored.
-    fallback_to_local_locking = 1
+    fallback_to_local_locking = 0
   
    # Local non-LV directory that holds file-based locks while commands are
    # in progress.  A directory like /tmp that may get wiped on reboot is OK.
</source>
</source>


Copy the modified <span class="code">lvm.conf</span> file to the other node.
Voila!
 
We could promote both sides to <span class="code">Primary</span> by running <span class="code">drbdadm primary r{0..2}</span> on both nodes, but there is no purpose in doing that at this stage as we can safely say our DRBD is ready to go. So instead, let's just stop DRBD entirely. We'll also prevent it from starting on boot as <span class="code">drbd</span> will be managed by the cluster in a later step.
 
On '''both''' nodes run;


<source lang="bash">
<source lang="bash">
rsync -av /etc/lvm/lvm.conf root@an-node02:/etc/lvm/
/etc/init.d/drbd stop
</source>
</source>
<source lang="text">
<source lang="text">
sending incremental file list
Stopping all DRBD resources: .
lvm.conf
</source>
 
Now disable it from starting on boot.


sent 200 bytes  received 223 bytes  282.00 bytes/sec
<source lang="bash">
total size is 21809  speedup is 51.56
chkconfig drbd off
chkconfig --list drbd
</source>
<source lang="text">
drbd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
</source>
</source>


=== Starting the clvmd Daemon ===
The second terminal will start complaining that <span class="code">/proc/drbd</span> no longer exists. This is because the <span class="code">drbd</span> init script unloaded the <span class="code">drbd</span> kernel module. It is expected and not a problem.
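If you want to confirm this for yourself, a quick and entirely optional check is to look for the module directly; with <span class="code">drbd</span> stopped, neither command will return anything useful.

<source lang="bash">
# With the drbd init script stopped, the kernel module should no longer be listed.
lsmod | grep drbd

# Likewise, /proc/drbd only exists while the module is loaded, so this will
# report 'No such file or directory'.
cat /proc/drbd
</source>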


A little later on, we're going to put clustered LVM under the control of <span class="code">rgmanager</span>. Before we can do that though, we need to start it manually so that we can use it to create the LV that will back the GFS2 <span class="code">/shared</span> partition, which we will also be adding to <span class="code">rgmanager</span> when we build our storage services.
= Configuring Clustered Storage =


Before we start the <span class="code">clvmd</span> daemon, we'll want to ensure that the cluster is running.
Before we can provision the first virtual machine, we must first create the storage that will back them. This will take a few steps;


<source lang="bash">
* Configuring [[LVM]]'s clustered locking and creating the [[PV]]s, [[VG]]s and [[LV]]s
cman_tool status
* Formatting and configuring the shared [[GFS2]] partition.
* Adding storage to the cluster's resource management.
 
== Clustered Logical Volume Management ==
 
We will assign all three DRBD resources to be managed by clustered LVM. This isn't strictly needed for the [[GFS2]] partition, as it uses DLM directly. However, the flexibility of LVM is very appealing, and will make later growth of the GFS2 partition quite trivial, should the need arise.
 
The real reason for clustered LVM in our cluster is to provide DLM-backed locking to the partitions, or logical volumes in LVM, that will be used to back our VMs. Of course, the flexibility of LVM managed storage is enough of a win to justify using LVM for our VMs in itself, and shouldn't be ignored here.
 
=== Configuring Clustered LVM Locking ===
 
Before we create the clustered LVM, we need to first make three changes to the LVM configuration.
* We need to filter out the DRBD backing devices so that LVM doesn't see the same signature twice.
* Switch from local locking to clustered locking.
* Prevent fall-back to local locking when the cluster is not available.
 
Start by making a backup of <span class="code">lvm.conf</span> and then begin editing it.
 
<source lang="bash">
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig
vim /etc/lvm/lvm.conf
</source>
</source>
The configuration option to filter out the DRBD backing device is, surprisingly, <span class="code">filter = [ ... ]</span>. By default, it is set to allow everything via the <span class="code">"a/.*/"</span> regular expression. We're only using DRBD in our LVM, so we're going to flip that to reject everything ''except'' DRBD by changing the regex to <span class="code">"a|/dev/drbd*|", "r/.*/"</span>. If we didn't do this, LVM would see the same signature on the DRBD device and again on the backing devices, at which time it would ignore the DRBD device. This filter allows LVM to only inspect the DRBD devices for LVM signatures.
Change;
<source lang="bash">
<source lang="bash">
Version: 6.2.0
    # By default we accept every block device:
Config Version: 8
    filter = [ "a/.*/" ]
Cluster Name: an-clusterA
Cluster Id: 29382
Cluster Member: Yes
Cluster Generation: 164
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1 
Active subsystems: 7
Flags: 2node
Ports Bound: 0 
Node name: an-node01.alteeve.com
Node ID: 1
Multicast addresses: 239.192.114.57
Node addresses: 10.20.0.1
</source>
</source>


It is, and both nodes are members. We can start the <span class="code">clvmd</span> daemon now.
To;


<source lang="bash">
<source lang="bash">
/etc/init.d/clvmd start
    # We're only using LVM on DRBD resource.
</source>
    filter = [ "a|/dev/drbd*|", "r/.*/" ]
<source lang="text">
Starting clvmd:
Activating VG(s):  No volume groups found
                                                          [  OK  ]
</source>
</source>


We've not created any clustered volume groups yet, so that is expected.
For the locking, we're going to change the <span class="code">locking_type</span> from <span class="code">1</span> (local locking) to <span class="code">3</span> (clustered locking). This is what tells LVM to use DLM.


{{note|1=At this stage, the cluster does not start at boot, so we can't start <span class="code">clvmd</span> at boot yet, either. We'll do this at the end of the tutorial, so for now, disable <span class="code">clvmd</span> and start it manually after starting <span class="code">cman</span> when you first start your cluster.}}
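Until the cluster takes over these daemons, the manual start order after a reboot looks something like the following. This is only an illustration of the order used in this tutorial, not a new step to run right now;

<source lang="bash">
# Cluster communication first, then the replicated storage, then clustered LVM.
/etc/init.d/cman start
/etc/init.d/drbd start
/etc/init.d/clvmd start
</source>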
Change;


<source lang="bash">
<source lang="bash">
chkconfig clvmd off
    locking_type = 1
chkconfig --list clvmd
</source>
<source lang="text">
clvmd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
</source>
</source>


=== Initialize Our DRBD Resource For Use As LVM PVs ===
To;
 
Before we can use LVM, clustered or otherwise, we need to initialize one or more raw storage devices. This is done using the <span class="code">pvcreate</span> command.
 
First though, we need to make sure that they are up and <span class="code">Primary/Primary</span> on both nodes.


<source lang="bash">
<source lang="bash">
/etc/init.d/drbd status
    locking_type = 3
</source>
<source lang="text">
drbd driver loaded OK; device status:
version: 8.3.11 (api:88/proto:86-96)
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by dag@Build64R6, 2011-08-08 08:54:05
m:res  cs        ro              ds                p  mounted  fstype
0:r0  Connected  Primary/Primary  UpToDate/UpToDate  C
1:r1  Connected  Primary/Primary  UpToDate/UpToDate  C
2:r2  Connected  Primary/Primary  UpToDate/UpToDate  C
</source>
</source>


All three resources are up on both nodes, so we can initialize our resources. We're going to do this on <span class="code">an-node01</span>, then run <span class="code">pvscan</span> on <span class="code">an-node02</span>. We should see the newly initialized DRBD resources appear.
Lastly, we're also going to disallow fall-back to local locking. Normally, LVM would try to access a clustered LVM [[VG]] using local locking if DLM is not available. We want to prevent any access to the clustered LVM volumes ''except'' when the DLM is itself running. This is done by changing <span class="code">fallback_to_local_locking</span> to <span class="code">0</span>.


Running <span class="code">pvscan</span> first, we'll see that no [[PV]]s have been created.
Change;


<source lang="bash">
<source lang="bash">
pvscan
    fallback_to_local_locking = 1
</source>
<source lang="text">
  No matching physical volumes found
</source>
</source>


On '''<span class="code">an-node01</span>''', initialize the PVs;
To;


<source lang="bash">
<source lang="bash">
pvcreate /dev/drbd{0..2}
    fallback_to_local_locking = 0
</source>
<source lang="text">
  Physical volume "/dev/drbd0" successfully created
  Physical volume "/dev/drbd1" successfully created
  Physical volume "/dev/drbd2" successfully created
</source>
</source>


On both nodes, re-run <span class="code">pvscan</span> and the new PVs should show. This works because DRBD is keeping the data in sync, including the new LVM signatures.
Save the changes, then let's run a <span class="code">diff</span> against our backup to see a summary of the changes.


<source lang="bash">
<source lang="bash">
pvscan
diff -u /etc/lvm/lvm.conf.orig /etc/lvm/lvm.conf
</source>
<source lang="diff">
--- /etc/lvm/lvm.conf.orig 2011-12-14 17:42:16.416094972 -0500
+++ /etc/lvm/lvm.conf 2011-12-14 17:49:15.747097684 -0500
@@ -62,8 +62,8 @@
    # If it doesn't do what you expect, check the output of 'vgscan -vvvv'.
-    # By default we accept every block device:
-    filter = [ "a/.*/" ]
+    # We're only using LVM on DRBD resource.
+    filter = [ "a|/dev/drbd*|", "r/.*/" ]
    # Exclude the cdrom drive
    # filter = [ "r|/dev/cdrom|" ]
@@ -356,7 +356,7 @@
    # Type 3 uses built-in clustered locking.
    # Type 4 uses read-only locking which forbids any operations that might
    # change metadata.
-    locking_type = 1
+    locking_type = 3
    # Set to 0 to fail when a lock request cannot be satisfied immediately.
    wait_for_locks = 1
@@ -372,7 +372,7 @@
    # to 1 an attempt will be made to use local file-based locking (type 1).
    # If this succeeds, only commands against local volume groups will proceed.
    # Volume Groups marked as clustered will be ignored.
-    fallback_to_local_locking = 1
+    fallback_to_local_locking = 0
    # Local non-LV directory that holds file-based locks while commands are
    # in progress.  A directory like /tmp that may get wiped on reboot is OK.
</source>
</source>
<source lang="text">
  PV /dev/drbd0                      lvm2 [20.00 GiB]
  PV /dev/drbd1                      lvm2 [200.75 GiB]
  PV /dev/drbd2                      lvm2 [200.75 GiB]
  Total: 3 [421.49 GiB] / in use: 0 [0  ] / in no VG: 3 [421.49 GiB]
</source>
Done.
=== Creating Cluster Volume Groups ===


As with initializing the DRBD resource above, we will create our volume groups, [[VG]]s, on <span class="code">an-node01</span> only, but we will then see them on both nodes.
Perfect! Now copy the modified <span class="code">lvm.conf</span> file to the other node.
 
Check to confirm that no VGs exist;


<source lang="bash">
<source lang="bash">
vgdisplay
rsync -av /etc/lvm/lvm.conf root@an-node02:/etc/lvm/
</source>
</source>
<source lang="text">
<source lang="text">
  No volume groups found
sending incremental file list
lvm.conf
 
sent 2351 bytes  received 283 bytes  5268.00 bytes/sec
total size is 28718  speedup is 10.90
</source>
</source>


Now to create the VGs, we'll use the <span class="code">vgcreate</span> command with the <span class="code">-c y</span> switch, which tells LVM to make the VG a clustered VG. Note that when the <span class="code">clvmd</span> daemon is running, <span class="code">-c y</span> is implied. However, I like to get into the habit of using it because it will trigger an error if, for some reason, <span class="code">clvmd</span> wasn't actually running.
=== Testing the clvmd Daemon ===
 
A little later on, we're going to put clustered LVM under the control of <span class="code">rgmanager</span>. Before we can do that though, we need to start it manually so that we can use it to create the LV that will back the GFS2 <span class="code">/shared</span> partition, which we will also be adding to <span class="code">rgmanager</span> when we build our storage services.


On '''<span class="code">an-node01</span>''', create the three VGs.
Before we start the <span class="code">clvmd</span> daemon, we'll want to ensure that the cluster is running.


* VG for the GFS2 <span class="code">/shared</span> partition;
<source lang="bash">
<source lang="bash">
vgcreate -c y shared-vg0 /dev/drbd0
cman_tool status
</source>
<source lang="text">
  Clustered volume group "shared-vg0" successfully created
</source>
</source>
* VG for the VMs that will primarily run on <span class="code">an-node01</span>;
<source lang="bash">
<source lang="bash">
vgcreate -c y an01-vg0 /dev/drbd1
Version: 6.2.0
</source>
Config Version: 7
Cluster Name: an-cluster-A
Cluster Id: 24561
Cluster Member: Yes
Cluster Generation: 68
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1 
Active subsystems: 7
Flags: 2node
Ports Bound: 0 
Node name: an-node01.alteeve.ca
Node ID: 1
Multicast addresses: 239.192.95.81
Node addresses: 10.20.0.1
</source>
 
It is, and both nodes are members. We can start the <span class="code">clvmd</span> daemon now.
 
<source lang="bash">
/etc/init.d/clvmd start
</source>
<source lang="text">
<source lang="text">
   Clustered volume group "an01-vg0" successfully created
Starting clvmd:
Activating VG(s):   No volume groups found
                                                          [  OK  ]
</source>
</source>


* VG for the VMs that will primarily run on <span class="code">an-node02</span>;
We've not created any clustered volume groups yet, so that complaint about not finding volume groups is expected.
 
We don't want <span class="code">clvmd</span> to start at boot, as we will be putting it under the cluster's control. So we need to make sure that <span class="code">clvmd</span> is disabled at boot, and then we'll stop <span class="code">clvmd</span> for now.
 
<source lang="bash">
<source lang="bash">
vgcreate -c y an02-vg0 /dev/drbd2
chkconfig clvmd off
chkconfig --list clvmd
</source>
</source>
<source lang="text">
<source lang="text">
  Clustered volume group "an02-vg0" successfully created
clvmd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
</source>
</source>


Now on both nodes, we should see the three new volume groups.
Now stop it entirely.


<source lang="bash">
<source lang="bash">
vgscan
/etc/init.d/clvmd stop
</source>
</source>
<source lang="text">
<source lang="text">
  Reading all physical volumes. This may take a while...
Signaling clvmd to exit                                    [ OK  ]
  Found volume group "an02-vg0" using metadata type lvm2
clvmd terminated                                          [  OK  ]
  Found volume group "an01-vg0" using metadata type lvm2
  Found volume group "shared-vg0" using metadata type lvm2
</source>
</source>


=== Creating a Logical Volume ===
=== Initialize our DRBD Resource for use as LVM PVs ===


At this stage, we're going to create only one [[LV]] for the GFS2 partition. We'll create the rest later when we're ready to provision the VMs. This will be the <span class="code">/shared</span> partition, which we will discuss further in the next section.
This is the first time we're actually going to use DRBD and clustered LVM, so we need to make sure that both are started. Earlier we stopped them, so if they're not running now, we need to restart them.


As before, we'll create the LV on <span class="code">an-node01</span> and then verify it exists on both nodes.
First, check (and start if needed) <span class="code">drbd</span>.
 
Before we create our first LV, check <span class="code">lvscan</span>.


<source lang="bash">
<source lang="bash">
lvscan
/etc/init.d/drbd status
</source>
<source lang="text">
drbd not loaded
</source>
</source>
''Nothing is returned''.


On '''<span class="code">an-node01</span>''', create the LV on the <span class="code">shared-vg0</span> VG, using all of the available space.
It's stopped, so we'll start it on '''both''' nodes now.


<source lang="bash">
<source lang="bash">
lvcreate -l 100%FREE -n shared shared-vg0
/etc/init.d/drbd start
</source>
</source>
<source lang="text">
<source lang="text">
  Logical volume "shared" created
Starting DRBD resources: [ d(r0) d(r1) d(r2) n(r0) n(r1) n(r2) ].
</source>
</source>


Now on both nodes, check that the new LV exists.
It looks like it started, but let's confirm that the resources are all <span class="code">Connected</span>, <span class="code">Primary</span> and <span class="code">UpToDate</span>.


<source lang="bash">
<source lang="bash">
lvscan
/etc/init.d/drbd status
</source>
</source>
<source lang="text">
<source lang="text">
   ACTIVE            '/dev/shared-vg0/shared' [20.00 GiB] inherit
drbd driver loaded OK; device status:
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
m:res  cs        ro              ds                p  mounted  fstype
0:r0  Connected  Primary/Primary  UpToDate/UpToDate  C
1:r1   Connected  Primary/Primary  UpToDate/UpToDate  C
2:r2  Connected  Primary/Primary  UpToDate/UpToDate  C
</source>
</source>


Perfect. We can now create our GFS2 partition.
Excellent, now to check on <span class="code">clvmd</span>.


== Creating The Shared GFS2 Partition ==
<source lang="bash">
/etc/init.d/clvmd status
</source>
<source lang="text">
clvmd is stopped
</source>


The GFS2-formatted <span class="code">/shared</span> partition will be used for four main purposes;
It's also stopped, so let's start it now.
* <span class="code">/shared/files</span>; Storing files like [[ISO]] images needed when provisioning VMs.
* <span class="code">/shared/provision</span>; Storing short scripts used to call <span class="code">virt-install</span> which handles the creation of our VMs.
* <span class="code">/shared/definitions</span>; This is where the [[XML]] definition files which define the emulated hardware backing our VMs are kept. This is the most critical directory as the cluster will look here when starting and recovering VMs.
* <span class="code">/shared/archive</span>; This is used to store old copies of the [[XML]] definition files. I like to make a time-stamped copy of definition files prior to altering and redefining a VM. This way, I can quickly and easily revert to an old configuration should I run into trouble.


Make sure that both <span class="code">drbd</span> and <span class="code">clvmd</span> are running.
<source lang="bash">
/etc/init.d/clvmd start
</source>
<source lang="text">
Starting clvmd:
Activating VG(s):  No volume groups found
                                                          [  OK  ]
</source>


The <span class="code">mkfs.gfs2</span> call uses a few switches that are worth explaining;
Now we're ready to start!
* <span class="code">-p lock_dlm</span>; This tells GFS2 to use [[DLM]] for it's clustered locking. Currently, this is the only supported locking type.
* <span class="code">-j 2</span>; This tells GFS2 to create two journals. This must match the number of nodes that will try to mount this partition at any one time.
* <span class="code">-t an-clusterA:shared</span>; This is the lockspace name, which must be in the format <span class="code"><clustename>:<fsname></span>. The <span class="code">clustername</span> must match the one in <span class="code">cluster.conf</span>, and any node that belongs to a cluster of another name will not be allowed to access the file system.


{{note|1=Depending on the size of the new partition, this call could take a while to complete. Please be patient.}}
Before we can use LVM, clustered or otherwise, we need to initialize one or more raw storage devices. This is done using the <span class="code">pvcreate</span> command. We're going to do this on <span class="code">an-node01</span>, then run <span class="code">pvscan</span> on <span class="code">an-node02</span>. We should see the newly initialized DRBD resources appear.


Then, on '''<span class="code">an-node01</span>''', run;
Running <span class="code">pvscan</span> first, we'll see that no [[PV]]s have been created.


<source lang="bash">
<source lang="bash">
mkfs.gfs2 -p lock_dlm -j 2 -t an-clusterA:shared /dev/shared-vg0/shared
pvscan
</source>
</source>
<source lang="text">
<source lang="text">
This will destroy any data on /dev/shared-vg0/shared.
  No matching physical volumes found
It appears to contain: symbolic link to `../dm-0'
</source>


Are you sure you want to proceed? [y/n] y
On '''<span class="code">an-node01</span>''', initialize the PVs;
 
Device:                    /dev/shared-vg0/shared
Blocksize:                4096
Device Size                20.00 GB (5241856 blocks)
Filesystem Size:          20.00 GB (5241855 blocks)
Journals:                  2
Resource Groups:          80
Locking Protocol:          "lock_dlm"
Lock Table:                "an-clusterA:shared"
UUID:                      2A96F8AA-664F-84CA-9147-85B53215D911
</source>
 
On '''both''' nodes, run all of the following commands.


<source lang="bash">
<source lang="bash">
mkdir /shared
pvcreate /dev/drbd{0..2}
mount /dev/shared-vg0/shared /shared/
</source>
 
Confirm that <span class="code">/shared</span> is now mounted.
 
<source lang="bash">
df -hP /shared
</source>
</source>
<source lang="text">
<source lang="text">
Filesystem            Size  Used Avail Use% Mounted on
  Writing physical volume data to disk "/dev/drbd0"
/dev/mapper/shared--vg0-shared   20G  259M   20G   2% /shared
  Physical volume "/dev/drbd0" successfully created
  Writing physical volume data to disk "/dev/drbd1"
   Physical volume "/dev/drbd1" successfully created
   Writing physical volume data to disk "/dev/drbd2"
   Physical volume "/dev/drbd2" successfully created
</source>
</source>


Note that the path under <span class="code">Filesystem</span> is different from what we used when creating the GFS2 partition. This is an effect of [[Device Mapper]], which is used by LVM to create symlinks to actual block device paths. If we look at our <span class="code">/dev/shared-vg0/shared</span> device and the device from <span class="code">df</span>, <span class="code">/dev/mapper/shared--vg0-shared</span>, we'll see that they both point to the same actual block device.
On both nodes, re-run <span class="code">pvscan</span> and the new PVs should show. This works because DRBD is keeping the data in sync, including the new LVM signatures.


<source lang="bash">
<source lang="bash">
ls -lah /dev/shared-vg0/shared /dev/mapper/shared--vg0-shared
pvscan
</source>
</source>
<source lang="text">
<source lang="text">
lrwxrwxrwx 1 root root 7 Oct 23 16:35 /dev/mapper/shared--vg0-shared -> ../dm-0
  PV /dev/drbd0                      lvm2 [18.61 GiB]
lrwxrwxrwx 1 root root 7 Oct 23 16:35 /dev/shared-vg0/shared -> ../dm-0
  PV /dev/drbd1                      lvm2 [201.62 GiB]
</source>
  PV /dev/drbd2                      lvm2 [201.26 GiB]
<source lang="bash">
  Total: 3 [421.49 GiB] / in use: 0 [0   ] / in no VG: 3 [421.49 GiB]
ls -lah /dev/dm-0
</source>
<source lang="text">
brw-rw---- 1 root disk 253, 0 Oct 23 16:35 /dev/dm-0
</source>
</source>


This next command is a bit of command-line voodoo. It takes the output from <span class="code">gfs2_edit -p sb /dev/shared-vg0/shared</span>, <span class="code">grep</span>'s out the [[UUID]] line for the new GFS2 partition, parses out of that the UUID itself, converts it to lower-case and, finally, spits out a string that can be used in <span class="code">/etc/fstab</span>. We'll run it twice; The first time to confirm that the output is what we expect and the second time to append it to <span class="code">/etc/fstab</span>.
Done.
 
=== Creating Cluster Volume Groups ===


The <span class="code">gfs2</span> daemon can only work on GFS2 partitions that have been defined in <span class="code">fstab</span>, so this is a required step on both nodes.
As with initializing the DRBD resource above, we will create our volume groups, [[VG]]s, on <span class="code">an-node01</span> only, but we will then see them on both nodes.


We use <span class="code">rw,suid,dev,exec,nouser,async</span> instead of <span class="code">default</span> because we do not want the <span class="code">auto</span> option, which is implied by <span class="code">default</span>. With <span class="code">auto</span>, the operating system would try to mount the GFS2 filesystem at boot, which would fail as the cluster isn't up. This failure would drop the operating system to single-user mode. There are other ways of avoiding this problem, like using <span class="code">noauto</span> or <span class="code">_netdev</span>. Feel free to use which ever option you prefer, so long as the OS doesn't attempt to mount this partition on boot.
Check to confirm that no VGs exist;


<source lang="bash">
<source lang="bash">
echo `gfs2_edit -p sb /dev/shared-vg0/shared | grep sb_uuid | sed -e "s/.*sb_uuid  *\(.*\)/UUID=\L\1\E \/shared\t\tgfs2\trw,suid,dev,exec,nouser,async\t0 0/"`
vgdisplay
</source>
</source>
<source lang="text">
<source lang="text">
UUID=2a96f8aa-664f-84ca-9147-85b53215d911 /shared gfs2 rw,suid,dev,exec,nouser,async 0 0
  No volume groups found
</source>
</source>


This looks good, so now re-run it but redirect the output to append to <span class="code">/etc/fstab</span>. We'll confirm it worked by checking the status of the <span class="code">gfs2</span> daemon.
Now to create the VGs, we'll use the <span class="code">vgcreate</span> command with the <span class="code">-c y</span> switch, which tells LVM to make the VG a clustered VG. Note that when the <span class="code">clvmd</span> daemon is running, <span class="code">-c y</span> is implied. However, I like to get into the habit of using it because it will trigger an error if, for some reason, <span class="code">clvmd</span> wasn't actually running.
 
On '''<span class="code">an-node01</span>''', create the three VGs.


* VG for the GFS2 <span class="code">/shared</span> partition;
<source lang="bash">
<source lang="bash">
echo `gfs2_edit -p sb /dev/shared-vg0/shared | grep sb_uuid | sed -e "s/.*sb_uuid  *\(.*\)/UUID=\L\1\E \/shared\t\tgfs2\trw,suid,dev,exec,nouser,async\t0 0/"` >> /etc/fstab
vgcreate -c y shared-vg0 /dev/drbd0
/etc/init.d/gfs2 status
</source>
</source>
<source lang="text">
<source lang="text">
Configured GFS2 mountpoints:
  Clustered volume group "shared-vg0" successfully created
/shared
Active GFS2 mountpoints:
/shared
</source>
</source>


On '''<span class="code">an-node01</span>'''
* VG for the VMs that will primarily run on <span class="code">an-node01</span>;
<source lang="bash">
vgcreate -c y an01-vg0 /dev/drbd1
</source>
<source lang="text">
  Clustered volume group "an01-vg0" successfully created
</source>


* VG for the VMs that will primarily run on <span class="code">an-node02</span>;
<source lang="bash">
<source lang="bash">
mkdir /shared/{definitions,provision,archive,files}
vgcreate -c y an02-vg0 /dev/drbd2
</source>
<source lang="text">
  Clustered volume group "an02-vg0" successfully created
</source>
</source>
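If you ever want to double-check that a volume group really was created as a clustered VG, the volume group attribute string shows it; the sixth character is a <span class="code">c</span> on clustered VGs. This is an optional sanity check only;

<source lang="bash">
# The attribute string of a clustered VG ends in 'c', for example 'wz--nc'.
vgs -o vg_name,vg_attr
</source>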


On '''both''' nodes, confirm that all of the new directories exist and are visible.
Now on both nodes, we should see the three new volume groups.


<source lang="bash">
<source lang="bash">
ls -lah /shared/
vgscan
</source>
</source>
<source lang="text">
<source lang="text">
total 24K
   Reading all physical volumes. This may take a while...
drwxr-xr-x   6 root root 3.8K Oct 23 16:53 .
  Found volume group "an02-vg0" using metadata type lvm2
dr-xr-xr-x. 26 root root 4.0K Oct 23 11:28 ..
   Found volume group "an01-vg0" using metadata type lvm2
drwxr-xr-x   2 root root    0 Oct 23 16:53 archive
   Found volume group "shared-vg0" using metadata type lvm2
drwxr-xr-x  2 root root    0 Oct 23 16:53 definitions
drwxr-xr-x   2 root root    0 Oct 23 16:53 files
drwxr-xr-x  2 root root    0 Oct 23 16:53 provision
</source>
</source>


Wonderful!
=== Creating a Logical Volume ===


=== Stopping All Clustered Storage Components ===
At this stage, we're going to create only one [[LV]] for the GFS2 partition. We'll create the rest later when we're ready to provision the VMs. This will be the <span class="code">/shared</span> partition, which we will discuss further in the next section.
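For a sense of what comes later, the [[LV]]s that will back the virtual machines are created the same way. The name and size below are entirely made up and are shown only as an illustration;

<source lang="bash">
# Hypothetical example only; the real VM-backing LVs are created later, sized per VM.
lvcreate -L 50G -n vm0001_disk1 an01-vg0
</source>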


Before we can put the clustered storage under the cluster's control, we need to stop the services manually and then ensure they won't try to start on boot.
As before, we'll create the LV on <span class="code">an-node01</span> and then verify it exists on both nodes.


On '''both''' nodes, run;
Before we create our first LV, check <span class="code">lvscan</span>.


<source lang="bash">
<source lang="bash">
/etc/init.d/gfs2 stop && /etc/init.d/clvmd stop && /etc/init.d/drbd stop
lvscan
</source>
''Nothing is returned''.
 
On '''<span class="code">an-node01</span>''', create the LV on the <span class="code">shared-vg0</span> VG, using all of the available space.
 
<source lang="bash">
lvcreate -l 100%FREE -n shared shared-vg0
</source>
</source>
<source lang="text">
<source lang="text">
Unmounting GFS2 filesystem (/shared):                      [  OK  ]
   Logical volume "shared" created
Deactivating clustered VG(s):  0 logical volume(s) in volume group "an02-vg0" now active
  0 logical volume(s) in volume group "an01-vg0" now active
   0 logical volume(s) in volume group "shared-vg0" now active
                                                          [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                          [  OK  ]
Stopping all DRBD resources:
</source>
</source>


Make sure that the three daemons are not going to start on boot.
Now on both nodes, check that the new LV exists.


<source lang="bash">
<source lang="bash">
chkconfig gfs2 off && chkconfig clvmd off && chkconfig drbd off
lvscan
chkconfig --list |grep -e gfs2 -e clvmd -e drbd
</source>
</source>
<source lang="text">
<source lang="text">
clvmd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
  ACTIVE            '/dev/shared-vg0/shared' [18.61 GiB] inherit
drbd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
gfs2          0:off 1:off 2:off 3:off 4:off 5:off 6:off
</source>
</source>


= Managing Storage In The Cluster =
Perfect. We can now create our GFS2 partition.
 
== Creating The Shared GFS2 Partition ==


A little while back, we spoke about how the cluster is split into two components: cluster communication managed by <span class="code">cman</span> and resource management provided by <span class="code">rgmanager</span>. It's the latter which we will now configure.
The GFS2-formatted <span class="code">/shared</span> partition will be used for four main purposes;
* <span class="code">/shared/files</span>; Storing files like [[ISO]] images needed when provisioning VMs.
* <span class="code">/shared/provision</span>; Storing short scripts used to call <span class="code">virt-install</span> which handles the creation of our VMs.
* <span class="code">/shared/definitions</span>; This is where the [[XML]] definition files which define the emulated hardware backing our VMs are kept. This is the most critical directory as the cluster will look here when starting and recovering VMs.
* <span class="code">/shared/archive</span>; This is used to store old copies of the [[XML]] definition files. I like to make a time-stamped copy of definition files prior to altering and redefining a VM. This way, I can quickly and easily revert to an old configuration should I run into trouble.


In the <span class="code">cluster.conf</span>, the <span class="code">rgmanager</span> component is contained within the <span class="code"><rm /></span> element tags. Within this element are three types of child elements. They are:
Make sure that both <span class="code">drbd</span> and <span class="code">clvmd</span> are running.
* Failover Domains - <span class="code"><failoverdomains /></span>;
** There are optional constraints which allow for control which nodes, and under what circumstances, services may run. When not used, a service will be allowed to run on any node in the cluster without constraints or ordering.
* Resources - <span class="code"><resources /></span>;
** Within this element, available resources are defined. Simply having a resource here will not put it under cluster control. Rather, it makes it available for use in <span class="code"><service /></span> elements.
* Services - <span class="code"><service /></span>;
** This element contains one or more parallel or series child-elements which are themselves references to <span class="code"><resources /></span> elements. When in parallel, the services will start and stop at the same time. When in series, the services start in order and stop in reverse order. We will also see a specialized type of service that uses the <span class="code"><vm /></span> element name, as you can probably guess, for creating virtual machine services.


We'll look at each of these components in more detail shortly.
The <span class="code">mkfs.gfs2</span> call uses a few switches that are worth explaining;
* <span class="code">-p lock_dlm</span>; This tells GFS2 to use [[DLM]] for its clustered locking. Currently, this is the only supported locking type.
* <span class="code">-j 2</span>; This tells GFS2 to create two journals. This must match the number of nodes that will try to mount this partition at any one time.
* <span class="code">-t an-cluster-A:shared</span>; This is the lockspace name, which must be in the format <span class="code"><clustename>:<fsname></span>. The <span class="code">clustername</span> must match the one in <span class="code">cluster.conf</span>, and any node that belongs to a cluster of another name will not be allowed to access the file system.


== Before We Start ==
{{note|1=Depending on the size of the new partition, this call could take a while to complete. Please be patient.}}


During the build-up of DRBD earlier, we had to reboot the servers. So let's start by making sure the cluster is up.
Then, on '''<span class="code">an-node01</span>''', run;


<source lang="bash">
<source lang="bash">
/etc/init.d/cman status
mkfs.gfs2 -p lock_dlm -j 2 -t an-cluster-A:shared /dev/shared-vg0/shared
</source>
</source>
<source lang="text">
<source lang="text">
corosync is stopped
This will destroy any data on /dev/shared-vg0/shared.
It appears to contain: symbolic link to `../dm-0'
</source>
</source>
 
<source lang="text">
It's down, so start it and <span class="code">rgmanager</span> up.
Are you sure you want to proceed? [y/n] y
 
<source lang="bash">
/etc/init.d/cman start && /etc/init.d/rgmanager start
</source>
</source>
<source lang="text">
<source lang="text">
Starting cluster:  
Device:                    /dev/shared-vg0/shared
  Checking if cluster has been disabled at boot...       [  OK  ]
Blocksize:                 4096
  Checking Network Manager...                            [  OK  ]
Device Size                18.61 GB (4878336 blocks)
  Global setup...                                        [  OK  ]
Filesystem Size:          18.61 GB (4878333 blocks)
  Loading kernel modules...                              [  OK  ]
Journals:                  2
  Mounting configfs...                                    [  OK  ]
Resource Groups:          75
  Starting cman...                                        [  OK  ]
Locking Protocol:          "lock_dlm"
  Waiting for quorum...                                  [  OK  ]
Lock Table:                "an-cluster-A:shared"
  Starting fenced...                                      [  OK  ]
UUID:                      162a80eb-59b3-08bd-5d69-740cbb60aa45
  Starting dlm_controld...                                [  OK  ]
</source>
  Starting gfs_controld...                                [  OK  ]
 
  Unfencing self...                                      [  OK  ]
On '''both''' nodes, run all of the following commands.
  Joining fence domain...                                [  OK  ]
 
Starting Cluster Service Manager:                          [  OK  ]
<source lang="bash">
mkdir /shared
mount /dev/shared-vg0/shared /shared/
</source>
</source>


Confirm everything is up.
Confirm that <span class="code">/shared</span> is now mounted.


<source lang="bash">
<source lang="bash">
cman_tool status
df -hP /shared
</source>
</source>
<source lang="text">
<source lang="text">
Version: 6.2.0
Filesystem            Size  Used Avail Use% Mounted on
Config Version: 8
/dev/mapper/shared--vg0-shared  19G  259M  19G  2% /shared
Cluster Name: an-clusterA
Cluster Id: 29382
Cluster Member: Yes
Cluster Generation: 180
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1 
Active subsystems: 8
Flags: 2node
Ports Bound: 0 177 
Node name: an-node01.alteeve.com
Node ID: 1
Multicast addresses: 239.192.114.57
Node addresses: 10.20.0.1
</source>
</source>


Perfect, let's proceed.
Note that the path under <span class="code">Filesystem</span> is different from what we used when creating the GFS2 partition. This is an effect of [[Device Mapper]], which is used by LVM to create symlinks to actual block device paths. If we look at our <span class="code">/dev/shared-vg0/shared</span> device and the device from <span class="code">df</span>, <span class="code">/dev/mapper/shared--vg0-shared</span>, we'll see that they both point to the same actual block device.


== A Note On Daemon Starting ==
<source lang="bash">
ls -lah /dev/shared-vg0/shared /dev/mapper/shared--vg0-shared
</source>
<source lang="text">
lrwxrwxrwx 1 root root 7 Oct 23 16:35 /dev/mapper/shared--vg0-shared -> ../dm-0
lrwxrwxrwx 1 root root 7 Oct 23 16:35 /dev/shared-vg0/shared -> ../dm-0
</source>
<source lang="bash">
ls -lah /dev/dm-0
</source>
<source lang="text">
brw-rw---- 1 root disk 253, 0 Oct 23 16:35 /dev/dm-0
</source>


There are four daemons we will be putting under cluster control;
This next step uses some command-line voodoo. It takes the output from <span class="code">gfs2_tool sb /dev/shared-vg0/shared uuid</span>, parses out the [[UUID]], converts it to lower-case and spits out a string that can be used in <span class="code">/etc/fstab</span>. We'll run it twice; The first time to confirm that the output is what we expect and the second time to append it to <span class="code">/etc/fstab</span>.
* <span class="code">drbd</span>; Replicated storage.
* <span class="code">clvmd</span>; Clustered LVM.
* <span class="code">gfs2</span>; Mounts and Unmounts configured GFS2 partition.
* <span class="code">libvirtd</span>; Provides access to <span class="code">virsh</span> and other <span class="code">libvirt</span> tools. Needed for running our VMs.


The reason we do not want to start these daemons with the system is so that we can let the cluster do it. This way, should any fail, the cluster will detect the failure and fail the service tree properly. For example, let's say that <span class="code">drbd</span> failed to start; <span class="code">rgmanager</span> would fail the storage service and give up, rather than continue trying to start <span class="code">clvmd</span> and the rest.
The <span class="code">gfs2</span> daemon can only work on GFS2 partitions that have been defined in <span class="code">/etc/fstab</span>, so this is a required step on both nodes.


However, if these daemons were left to start at boot, a failure of <span class="code">drbd</span> would not affect the startup of <span class="code">clvmd</span>, which would then not find its [[PV]]s given that DRBD is down. Next, the system would try to start the <span class="code">gfs2</span> daemon, which would also fail as the [[LV]] backing the partition would not be available. Finally, the system would start <span class="code">libvirtd</span>, which would enable the start of the virtual machines, which would in turn be missing their "hard drives" as their backing LVs would also not be available.
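The dependency chain the cluster will enforce is easiest to see written out by hand. The following is only an illustration of the ordering, not a step to run now;

<source lang="bash">
# Start in dependency order; '&&' stops the chain at the first failure.
/etc/init.d/drbd start && /etc/init.d/clvmd start && /etc/init.d/gfs2 start && /etc/init.d/libvirtd start

# Stop in the reverse order.
/etc/init.d/libvirtd stop && /etc/init.d/gfs2 stop && /etc/init.d/clvmd stop && /etc/init.d/drbd stop
</source>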
We use <span class="code">defaults,noatime,nodiratime</span> instead of just <span class="code">defaults</span> for performance reasons. Normally, every time a file or directory is accessed, its <span class="code">[[atime]]</span> (or <span class="code">[[diratime]]</span>) is updated, which requires a disk write, which requires an exclusive DLM lock, which is expensive. If you need to know when a file or directory was accessed, remove <span class="code">,noatime,nodiratime</span>.


=== Defining The Resources ===
<source lang="bash">
echo `gfs2_tool sb /dev/shared-vg0/shared uuid | awk '/uuid =/ { print $4; }' | sed -e "s/\(.*\)/UUID=\L\1\E \/shared\t\tgfs2\tdefaults,noatime,nodiratime\t0 0/"`
</source>
<source lang="text">
UUID=162a80eb-59b3-08bd-5d69-740cbb60aa45 /shared gfs2 defaults,noatime,nodiratime 0 0
</source>


Let's start by defining our clustered resources.
This looks good, so now re-run it but redirect the output to append to <span class="code">/etc/fstab</span>. We'll confirm it worked by checking the status of the <span class="code">gfs2</span> daemon.


As stated before, the addition of these resources does not, in itself, put the defined resources under the cluster's management. Instead, it defines services, like <span class="code">init.d</span> scripts. These can then be used by one or more <span class="code"><service /></span> elements, as we will see shortly. For now, it is enough to know that, until a resource is defined, it cannot be used in the cluster.
<source lang="bash">
echo `gfs2_tool sb /dev/shared-vg0/shared uuid | awk '/uuid =/ { print $4; }' | sed -e "s/\(.*\)/UUID=\L\1\E \/shared\t\tgfs2\tdefaults,noatime,nodiratime\t0 0/"` >> /etc/fstab
/etc/init.d/gfs2 status
</source>
<source lang="text">
Configured GFS2 mountpoints:
/shared
Active GFS2 mountpoints:
/shared
</source>


Given that this is the first component of <span class="code">rgmanager</span> being added to <span class="code">cluster.conf</span>, we will be creating the parent <span class="code"><rm /></span> elements here as well.
Perfect, <span class="code">gfs2</span> can see the partition now! We're ready to setup our directories.


Let's take a look at the new section, then discuss the parts.
On '''<span class="code">an-node01</span>'''


<source lang="xml">
<source lang="bash">
<?xml version="1.0"?>
mkdir /shared/{definitions,provision,archive,files}
<cluster name="an-clusterA" config_version="9">
</source>
<cman expected_votes="1" two_node="1"/>
 
<clusternodes>
On '''both''' nodes, confirm that all of the new directories exist and are visible.
<clusternode name="an-node01.alteeve.com" nodeid="1">
 
<fence>
<source lang="bash">
<method name="ipmi">
ls -lah /shared/
<device name="ipmi_an01" action="reboot"/>
</source>
</method>
<source lang="text">
<method name="pdu2">
total 24K
<device name="pdu2" port="1" action="reboot"/>
drwxr-xr-x  6 root root 3.8K Dec 14 19:05 .
</method>
dr-xr-xr-x. 24 root root 4.0K Dec 14 18:44 ..
</fence>
drwxr-xr-x  2 root root    0 Dec 14 19:05 archive
</clusternode>
drwxr-xr-x  2 root root    0 Dec 14 19:05 definitions
<clusternode name="an-node02.alteeve.com" nodeid="2">
drwxr-xr-x  2 root root    0 Dec 14 19:05 files
<fence>
drwxr-xr-x  2 root root    0 Dec 14 19:05 provision
<method name="ipmi">
<device name="ipmi_an02" action="reboot"/>
</method>
<method name="pdu2">
<device name="pdu2" port="2" action="reboot"/>
</method>
</fence>
</clusternode>
</clusternodes>
<fencedevices>
<fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
<fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
<fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2"/>
</fencedevices>
<fence_daemon post_join_delay="30"/>
<totem rrp_mode="none" secauth="off"/>
<rm log_level="5">
<resources>
<script file="/etc/init.d/drbd" name="drbd"/>
<script file="/etc/init.d/clvmd" name="clvmd"/>
<script file="/etc/init.d/gfs2" name="gfs2"/>
<script file="/etc/init.d/libvirtd" name="libvirtd"/>
</resources>
</rm>
</cluster>
</source>
</source>


First and foremost; Note that we've incremented the version to <span class="code">9</span>. As always, increment and then edit.
Wonderful!


Let's focus on the new section;
As with <span class="code">drbd</span> and <span class="code">clvmd</span>, we don't want to have <span class="code">gfs2</span> start at boot as we're going to put it under the control of the cluster.


<source lang="xml">
<source lang="bash">
<rm log_level="5">
chkconfig gfs2 off
<resources>
chkconfig --list gfs2
<script file="/etc/init.d/drbd" name="drbd"/>
</source>
<script file="/etc/init.d/clvmd" name="clvmd"/>
<source lang="text">
<script file="/etc/init.d/gfs2" name="gfs2"/>
gfs2           0:off 1:off 2:off 3:off 4:off 5:off 6:off
<script file="/etc/init.d/libvirtd" name="libvirtd"/>
</resources>
</rm>
</source>
</source>


The new <span class="code"><rm log_level="5">...</rm></span> element tells the cluster that this is the section for <span class="code">rgmanager</span> and that we're setting the <span class="code">log_level</span> to <span class="code">5</span>. This <span class="code">log_level</span> is slightly less verbose that the default. Specifically, by default, there is an entry in <span class="code">/var/log/messages</span> every time each resource is checked. This quickly adds a lot of questionably useful information to [[syslog]]. By changing this, we will still see all important messages, but these resource check messages are surpressed. If you are ever curious about whether or not <span class="code">rgmanager</span> is, in fact, checking the services than either remove <span class="code">log_level="5"</span> or change it to <span class="code">6</span> or higher.
==== Renaming a GFS2 Partition ====


The <span class="code"><resources>...</resources></span> element contains our four <span class="code"><script .../></span> resources. This is a particular type of resource which specificially handles that starting and stopping of <span class="code">init.d</span> style scripts. That is, the script must exit with [[LSB]] compliant codes. They must also properly react to being called with the sole argument of <span class="code">start</span>, <span class="code">stop</span> and <span class="code">status</span>.
{{warning|1=Be sure to unmount the GFS2 partition from '''all''' nodes prior to altering the cluster or filesystem names!}}


{{note|1='''''TODO''''': Find (or create) proper documention for all resource scripts.}}
If you ever need to rename your cluster, you will need to update your GFS2 partition before you can remount it. Unmount the partition from all nodes and run:


There are many other types of resources which, with the exception of <span class="code"><vm .../></span>, we will not be looking at in this tutorial. Should you be interested in them, please look in <span class="code">/usr/share/cluster</span> for the various scripts (executable files that end with <span class="code">.sh</span>).  
<source lang="bash">
gfs2_tool sb /dev/shared-vg0/shared table "new_cluster_name:shared"
</source>
<source lang="text">
You shouldn't change any of these values if the filesystem is mounted.


Each of our four <span class="code"><script ... /></span> resources have two attributes;
Are you sure? [y/n] y
* <span class="code">file="..."</span>; The full path to the script to be managed.
* <span class="code">name="..."</span>; A unique name used to reference this resource later on in the <span class="code"><service /></span> elements.


Other resources are more involved, but the <span class="code"><script .../></span> resources are quite simple.
current lock table name = "an-cluster-A:shared"
new lock table name = "new_cluster_name:shared"
Done
</source>


=== Creating Failover Domains ===
Then you can change the cluster's name in <span class="code">cluster.conf</span> and then remount the GFS2 partition.


Failover domains are, at their most basic, a collection of one or more nodes in the cluster with a particular set of rules associated with them. Services can then be configured to operate within the context of a given failover domain. There are a few key options to be aware of.
You can use the same command, changing the GFS2 partition name, if you want to change the name of the filesystem instead of (or at the same time as) the cluster's name.


Failover domains are optional and can be left out of the cluster, generally speaking. However, in our cluster, we will need them for our storage services, as we will later see, so please do not skip this step.
=== Stopping All Clustered Storage Components ===


* A failover domain can be unordered or prioritized.
Before we can put storage under the cluster's control, we need to make sure that the <span class="code">gfs2</span>, <span class="code">clvmd</span> and <span class="code">drbd</span> daemons are stopped.
** When unordered, a service will start on any node in the domain. Should that node later fail, it will restart to another random node in the domain.
** When prioritized, a service will start on the available node with the highest priority in the domain. Should that node later fail, the service will restart on the available node with the next highest priority.
* A failover domain can be restricted or unrestricted.
** When restricted, a service is '''only''' allowed to start on, or restart on. a nodes in the domain. When no nodes are available, the service will be stopped.
** When unrestricted, a service will try to start on, or restart on, a node in the domain. However, when no domain members are available, the cluster will pick another available node at random to start the service on.
* A failover domain can have a failback policy.
** When a domain allows for failback and the domain is ordered, and a node with a higher <span class="code">priority</span> (re)joins the cluster, services within the domain will migrate to that higher-priority node. This allows for automated restoration of services on a failed node when it rejoins the cluster.
** When a domain does not allow for failback, but is unrestricted, failback of services that fell out of the domain will happen anyway. That is to say, <span class="code">nofailback="1"</span> is ignored if a service was running on a node outside of the failover domain and a node within the domain joins the cluster. However, once the service is on a node within the domain, the service will '''not''' relocate to a higher-priority node should one join the cluster later.
** When a domain does not allow for failback and is restricted, then failback of services will never occur.


What we need to do at this stage is to create something of a hack. Let me explain;
On '''both''' nodes, run;


As discussed earlier, we need to start a set of local daemons on all nodes. We want this to happen within the guidance of the cluster. These aren't really clustered resources though as they can only ever run on their host node. They will never be relocated or restarted elsewhere in the cluster. So to work around this desire to "cluster the unclusterable", we're going to create a failover domain for each node in the cluster. Each of these domains will have only one of the cluster nodes as members of the domain and the domain will be restricted, unordered and have no failback. With this configuration, any service group using it will only ever run on the one node in the domain.
<source lang="bash">
/etc/init.d/gfs2 stop && /etc/init.d/clvmd stop && /etc/init.d/drbd stop
</source>
<source lang="text">
Unmounting GFS2 filesystem (/shared):                      [  OK  ]
Deactivating clustered VG(s):  0 logical volume(s) in volume group "an02-vg0" now active
  0 logical volume(s) in volume group "an01-vg0" now active
  0 logical volume(s) in volume group "shared-vg0" now active
                                                          [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                          [  OK  ]
Stopping all DRBD resources: .
</source>


In the next step, we will create a service group, then replicate it once for each node in the cluster. The only difference will be the <span class="code">failoverdomain</span> each is set to use. With our configuration of two nodes then, we will have two failover domains, one for each node, and we will define the clustered storage service twice, each one using one of the two failover domains.
= Managing Storage In The Cluster =


Let's look at the complete updated <span class="code">cluster.conf</span>, then we will focus closer on the new section.
A little while back, we spoke about how the cluster is split into two components; cluster communication managed by <span class="code">cman</span> and resource management provided by <span class="code">rgmanager</span>. It's the later which we will now begin to configure.


<source lang="xml">
In the <span class="code">cluster.conf</span>, the <span class="code">rgmanager</span> component is contained within the <span class="code"><rm /></span> element tags. Within this element are three types of child elements. They are:
* Fail-over Domains - <span class="code"><failoverdomains /></span>;
** These are optional constraints which allow for control which nodes, and under what circumstances, services may run. When not used, a service will be allowed to run on any node in the cluster without constraints or ordering.
* Resources - <span class="code"><resources /></span>;
** Within this element, available resources are defined. Simply having a resource here will not put it under cluster control. Rather, it makes it available for use in <span class="code"><service /></span> elements.
* Services - <span class="code"><service /></span>;
** This element contains one or more parallel or series child-elements which are themselves references to <span class="code"><resources /></span> elements. When in parallel, the services will start and stop at the same time. When in series, the services start in order and stop in reverse order. We will also see a specialized type of service that uses the <span class="code"><vm /></span> element name, as you can probably guess, for creating virtual machine services.
 
We'll look at each of these components in more detail shortly.
 
== A Note On Daemon Starting ==
 
There are four daemons we will be putting under cluster control;
* <span class="code">drbd</span>; Replicated storage.
* <span class="code">clvmd</span>; Clustered LVM.
* <span class="code">gfs2</span>; Mounts and Unmounts configured GFS2 partition.
* <span class="code">libvirtd</span>; Provides access to <span class="code">virsh</span> and other <span class="code">libvirt</span> tools. Needed for running our VMs.
 
The reason we do not want to start these daemons with the system is so that we can let the cluster do it. This way, should any fail, the cluster will detect the failure and fail the entire service tree. For example, lets say that <span class="code">drbd</span> failed to start, <span class="code">rgmanager</span> would fail the storage service and give up, rather than continue trying to start <span class="code">clvmd</span> and the rest. With <span class="code">libvirtd</span> being the last daemon, it will not be possible to start a VM unless the storage started successfully.
 
If we had left these daemons to boot on start, the failure of the <span class="code">drbd</span> would not effect the start-up of <span class="code">clvmd</span>, which would then not find its [[PV]]s given that DRBD is down. Next, the system would try to start the <span class="code">gfs2</span> daemon which would also fail as the [[LV]] backing the partition would not be available. Finally, the system would start <span class="code">libvirtd</span>, which would allow the start of virtual machine, which would also be missing their "hard drives" as their backing LVs would also not be available. Pretty messy situation to clean up from.
 
=== Defining The Resources ===
 
Lets start by first defining our clustered resources.
 
As stated before, the addition of these resources does not, in itself, put the defined resources under the cluster's management. Instead, it defines services, like <span class="code">init.d</span> scripts. These can then be used by one or more <span class="code"><service /></span> elements, as we will see shortly. For now, it is enough to know what, until a resource is defined, it can not be used in the cluster.
 
Given that this is the first component of <span class="code">rgmanager</span> being added to <span class="code">cluster.conf</span>, we will be creating the parent <span class="code"><rm /></span> elements here as well.
 
Let's take a look at the new section, then discuss the parts.
 
<source lang="xml">
<?xml version="1.0"?>
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="10">
<cluster name="an-cluster-A" config_version="8">
<cman expected_votes="1" two_node="1"/>
        <cman expected_votes="1" two_node="1" />
<clusternodes>
        <clusternodes>
<clusternode name="an-node01.alteeve.com" nodeid="1">
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
<fence>
                        <fence>
<method name="ipmi">
                                <method name="ipmi">
<device name="ipmi_an01" action="reboot"/>
                                        <device name="ipmi_an01" action="reboot" />
</method>
                                </method>
<method name="pdu2">
                                <method name="pdu">
<device name="pdu2" port="1" action="reboot"/>
                                        <device name="pdu2" port="1" action="reboot" />
</method>
                                </method>
</fence>
                        </fence>
</clusternode>
                </clusternode>
<clusternode name="an-node02.alteeve.com" nodeid="2">
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
<fence>
                        <fence>
<method name="ipmi">
                                <method name="ipmi">
<device name="ipmi_an02" action="reboot"/>
                                        <device name="ipmi_an02" action="reboot" />
</method>
                                </method>
<method name="pdu2">
                                <method name="pdu">
<device name="pdu2" port="2" action="reboot"/>
                                        <device name="pdu2" port="2" action="reboot" />
</method>
                                </method>
</fence>
                        </fence>
</clusternode>
                </clusternode>
</clusternodes>
        </clusternodes>
<fencedevices>
        <fencedevices>
<fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
<fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
<fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2"/>
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
</fencedevices>
        </fencedevices>
<fence_daemon post_join_delay="30"/>
        <fence_daemon post_join_delay="30" />
<totem rrp_mode="none" secauth="off"/>
        <totem rrp_mode="none" secauth="off"/>
<rm log_level="5">
        <rm>
                <resources>
                        <script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
                </resources>
        </rm>
</cluster>
</source>
 
First and foremost; Note that we've incremented the version to <span class="code">8</span>. As always, increment and then edit.
 
Let's focus on the new section;
 
<source lang="xml">
<rm>
<resources>
<resources>
<script file="/etc/init.d/drbd" name="drbd"/>
<script file="/etc/init.d/drbd" name="drbd"/>
Line 4,295: Line 4,653:
<script file="/etc/init.d/libvirtd" name="libvirtd"/>
<script file="/etc/init.d/libvirtd" name="libvirtd"/>
</resources>
</resources>
<failoverdomains>
<failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
<failoverdomainnode name="an-node01.alteeve.com"/>
</failoverdomain>
<failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
<failoverdomainnode name="an-node02.alteeve.com"/>
</failoverdomain>
</failoverdomains>
</rm>
</rm>
</cluster>
</source>
</source>


As always, the version was incremented, this time to <span class="code">10</span>. We've also added the new <span class="code"><failoverdomains>...</failoverdomains></span> element. Let's take a closer look at this new element.
The <span class="code"><resources>...</resources></span> element contains our four <span class="code"><script .../></span> resources. This is a particular type of resource which specifically handles that starting and stopping of <span class="code">[[init.d]]</span> style scripts. That is, the script must exit with [[LSB]] compliant codes. They must also properly react to being called with the sole argument of <span class="code">start</span>, <span class="code">stop</span> and <span class="code">status</span>.


<source lang="xml">
There are many other types of resources which, with the exception of <span class="code"><vm .../></span>, we will not be looking at in this tutorial. Should you be interested in them, please look in <span class="code">/usr/share/cluster</span> for the various scripts (executable files that end with <span class="code">.sh</span>).
<failoverdomains>
 
<failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
Each of our four <span class="code"><script ... /></span> resources have two attributes;
<failoverdomainnode name="an-node01.alteeve.com"/>
* <span class="code">file="..."</span>; The full path to the script to be managed.
</failoverdomain>
* <span class="code">name="..."</span>; A unique name used to reference this resource later on in the <span class="code"><service /></span> elements.
<failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
<failoverdomainnode name="an-node02.alteeve.com"/>
</failoverdomain>
</failoverdomains>
</source>


The first thing to node is that there are two <span class="code"><failoverdomain...>...</failoverdomain></span> child elements.
Other resources are more involved, but the <span class="code"><script .../></span> resources are quite simple.
* The first has the name <span class="code">only_an01</span> and contains only the node <span class="code">an-node01</span> as a member.
* The second is effectively identical, save that the domain's name is <span class="code">only_an02</span> and it contains only the node <span class="code">an-node02</span> as a member.


The <span class="code"><failoverdomain ...></span> element has four attributes;
=== Creating Failover Domains ===
* The <span class="code">name="..."</span> attribute sets the unique name of the domain which we will later use to bind a service to the domain.
* The <span class="code">nofailback="1"</span> attribute tells the cluster to never "fail back" any services in this domain. This seems redundant, given there is only one node, but when combined with <span class="code">restricted="0"</span>, prevents any migration of services.
* The <span class="code">ordered="0"</span> this is also somewhat redundant in that there is only one node defined in the domain, but I don't like to leave attributes undefined so I have it here.
* The <span class="code">restricted="1"</span> attribute is key in that it tells the cluster to '''not''' try to restart services within this domain on any other nodes outside of the one defined in the failover domain.


Each of the <span class="code"><failoverdomain...></span> elements has a single <span class="code"><failoverdomainnode .../></span> child element. This is a very simple element which has, at this time, only one attribute;
Fail-over domains are, at their most basic, a collection of one or more nodes in the cluster with a particular set of rules associated with them. Services can then be configured to operate within the context of a given fail-over domain. There are a few key options to be aware of.
* <span class="code">name="..."</span>; The name of the node to include in the failover domain. This name must match the corresponding <span class="code"><clusternode name="..."</span> node name.


At this point, we're ready to finally create our clustered services.
Fail-over domains are optional and can be left out of the cluster, generally speaking. However, in our cluster, we will need them for our storage services, as we will later see, so please do not skip this step.


=== Creating Clustered Services ===
* A fail-over domain can be unordered or prioritized.
** When unordered, a service will start on any node in the domain. Should that node later fail, it will restart to another random node in the domain.
** When prioritized, a service will start on the available node with the highest priority in the domain. Should that node later fail, the service will restart on the available node with the next highest priority.
* A fail-over domain can be restricted or unrestricted.
** When restricted, a service is '''only''' allowed to start on, or restart on. a nodes in the domain. When no nodes are available, the service will be stopped.
** When unrestricted, a service will try to start on, or restart on, a node in the domain. However, when no domain members are available, the cluster will pick another available node at random to start the service on.
* A fail-over domain can have a fail-back policy.
** When a domain allows for fail-back and the domain is ordered, and a node with a higher <span class="code">priority</span> (re)joins the cluster, services within the domain will migrate to that higher-priority node. This allows for automated restoration of services on a failed node when it rejoins the cluster.
** When a domain does not allow for fail-back, but is unrestricted, fail-back of services that fell out of the domain will happen anyway. That is to say, <span class="code">nofailback="1"</span> is ignored if a service was running on a node outside of the fail-over domain and a node within the domain joins the cluster. However, once the service is on a node within the domain, the service will '''not''' relocate to a higher-priority node should one join the cluster later.
** When a domain does not allow for fail-back and is restricted, then fail-back of services will never occur.


With the resources defined and the failover domains created, we can set about creating our services.
What we need to do at this stage is to create something of a hack. Let me explain;


Generally speaking, services can have one or more resources within them. When two or more resources exist, then can be put into a dependency tree, they can used in parallel or a combination of parallel and dependent resources.
As discussed earlier, we need to start a set of local daemons on all nodes. These aren't really clustered resources though as they can only ever run on their host node. They will never be relocated or restarted elsewhere in the cluster as as such, are not highly available. So to work around this desire to "cluster the unclusterable", we're going to create a fail-over domain for each node in the cluster. Each of these domains will have only one of the cluster nodes as members of the domain and the domain will be restricted, unordered and have no fail-back. With this configuration, any service group using it will only ever run on the one node in the domain.


When you create a service dependency tree, you put each dependent resource as a child element of it's parent. The resources are then started in order, starting at the top of the tree and working it's way down to the deepest child resource. If at any time one of the resources should fail, the entire service will be declared failed and no attempt will be made to try and start any further child resources. Conversely, stopping the service will cause the deepest child resource to be stopped first. Then the second deepest and on upwards towards the top resource. This is exactly the behaviour we want, as we will see shortly.
In the next step, we will create a service group, then replicate it once for each node in the cluster. The only difference will be the <span class="code">failoverdomain</span> each is set to use. With our configuration of two nodes then, we will have two fail-over domains, one for each node, and we will define the clustered storage service twice, each one using one of the two fail-over domains.


When resources are defined in parallel, all defined resources will be started at the same time. Should any one of the resources fail to start, the entire resource will declared failed. Stopping the service will likewise cause a simultaneous call to stop all resources.
Let's look at the complete updated <span class="code">cluster.conf</span>, then we will focus closer on the new section.
 
As before, let's take a look at the entire updated <span class="code">cluster.conf</span> file, then we'll focus in on the new service section.


<source lang="xml">
<source lang="xml">
<?xml version="1.0"?>
<?xml version="1.0"?>
<cluster name="an-clusterA" config_version="11">
<cluster name="an-cluster-A" config_version="9">
<cman expected_votes="1" two_node="1"/>
        <cman expected_votes="1" two_node="1" />
<clusternodes>
        <clusternodes>
<clusternode name="an-node01.alteeve.com" nodeid="1">
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
<fence>
                        <fence>
<method name="ipmi">
                                <method name="ipmi">
<device name="ipmi_an01" action="reboot"/>
                                        <device name="ipmi_an01" action="reboot" />
</method>
                                </method>
<method name="pdu2">
                                <method name="pdu">
<device name="pdu2" port="1" action="reboot"/>
                                        <device name="pdu2" port="1" action="reboot" />
</method>
                                </method>
</fence>
                        </fence>
</clusternode>
                </clusternode>
<clusternode name="an-node02.alteeve.com" nodeid="2">
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
<fence>
                        <fence>
<method name="ipmi">
                                <method name="ipmi">
<device name="ipmi_an02" action="reboot"/>
                                        <device name="ipmi_an02" action="reboot" />
</method>
                                </method>
<method name="pdu2">
                                <method name="pdu">
<device name="pdu2" port="2" action="reboot"/>
                                        <device name="pdu2" port="2" action="reboot" />
</method>
                                </method>
</fence>
                        </fence>
</clusternode>
                </clusternode>
</clusternodes>
        </clusternodes>
<fencedevices>
        <fencedevices>
<fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
<fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
<fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.com" name="pdu2"/>
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
</fencedevices>
        </fencedevices>
<fence_daemon post_join_delay="30"/>
        <fence_daemon post_join_delay="30" />
<totem rrp_mode="none" secauth="off"/>
        <totem rrp_mode="none" secauth="off"/>
<rm log_level="5">
        <rm>
<resources>
                <resources>
<script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/drbd" name="drbd"/>
<script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
<script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
<script file="/etc/init.d/libvirtd" name="libvirtd"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
</resources>
                </resources>
<failoverdomains>
                <failoverdomains>
<failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
<failoverdomainnode name="an-node01.alteeve.com"/>
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
</failoverdomain>
                        </failoverdomain>
<failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
<failoverdomainnode name="an-node02.alteeve.com"/>
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
</failoverdomain>
                        </failoverdomain>
</failoverdomains>
                </failoverdomains>
<service name="storage_an01" autostart="1" domain="only_an01" exclusive="0" recovery="restart">
        </rm>
<script ref="drbd">
</cluster>
<script ref="clvmd">
</source>
<script ref="gfs2">
 
<script ref="libvirtd"/>
As always, the version was incremented, this time to <span class="code">9</span>. We've also added the new <span class="code"><failoverdomains>...</failoverdomains></span> element. Let's take a closer look at this new element.
</script>
 
</script>
<source lang="xml">
</script>
                <failoverdomains>
</service>
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
<service name="storage_an02" autostart="1" domain="only_an02" exclusive="0" recovery="restart">
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
<script ref="drbd">
                        </failoverdomain>
<script ref="clvmd">
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
<script ref="gfs2">
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
<script ref="libvirtd"/>
                        </failoverdomain>
</script>
                </failoverdomains>
</script>
</source>
</script>
 
</service>
The first thing to node is that there are two <span class="code"><failoverdomain...>...</failoverdomain></span> child elements.
</rm>
* The first has the name <span class="code">only_an01</span> and contains only the node <span class="code">an-node01</span> as a member.
</cluster>
* The second is effectively identical, save that the domain's name is <span class="code">only_an02</span> and it contains only the node <span class="code">an-node02</span> as a member.
</source>
 
The <span class="code"><failoverdomain ...></span> element has four attributes;
* The <span class="code">name="..."</span> attribute sets the unique name of the domain which we will later use to bind a service to the domain.
* The <span class="code">nofailback="1"</span> attribute tells the cluster to never "fail back" any services in this domain. This seems redundant, given there is only one node, but when combined with <span class="code">restricted="0"</span>, prevents any migration of services.
* The <span class="code">ordered="0"</span> this is also somewhat redundant in that there is only one node defined in the domain, but I don't like to leave attributes undefined so I have it here.
* The <span class="code">restricted="1"</span> attribute is key in that it tells the cluster to '''not''' try to restart services within this domain on any other nodes outside of the one defined in the fail-over domain.
 
Each of the <span class="code"><failoverdomain...></span> elements has a single <span class="code"><failoverdomainnode .../></span> child element. This is a very simple element which has, at this time, only one attribute;
* <span class="code">name="..."</span>; The name of the node to include in the fail-over domain. This name must match the corresponding <span class="code"><clusternode name="..."</span> node name.
 
At this point, we're ready to finally create our clustered storage services.
 
=== Creating Clustered Storage Services ===
 
With the resources defined and the fail-over domains created, we can set about creating our services.
 
Generally speaking, services can have one or more resources within them. When two or more resources exist, then can be put into a dependency tree, they can used in parallel or a combination of parallel and dependent resources.
 
When you create a service dependency tree, you put each dependent resource as a child element of its parent. The resources are then started in order, starting at the top of the tree and working its way down to the deepest child resource. If at any time one of the resources should fail, the entire service will be declared failed and no attempt will be made to try and start any further child resources. Conversely, stopping the service will cause the deepest child resource to be stopped first. Then the second deepest and on upwards towards the top resource. This is exactly the behaviour we want, as we will see shortly.
 
When resources are defined in parallel, all defined resources will be started at the same time. Should any one of the resources fail to start, the entire resource will declared failed. Stopping the service will likewise cause a simultaneous call to stop all resources.


With the version now at <span class="code">11</span>, we have added two <span class="code"><service...>...</service></span> elements. Each containing a four <span class="code"><script ...></span> type resources in a service tree configuration. Let's take a closer look.
As before, let's take a look at the entire updated <span class="code">cluster.conf</span> file, then we'll focus in on the new service section.
 
<source lang="xml">
<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="10">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
        <totem rrp_mode="none" secauth="off"/>
        <rm>
                <resources>
                        <script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
                </resources>
                <failoverdomains>
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
                        </failoverdomain>
                </failoverdomains>
                <service name="storage_an01" autostart="1" domain="only_an01" exclusive="0" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
                <service name="storage_an02" autostart="1" domain="only_an02" exclusive="0" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
        </rm>
</cluster>
</source>
 
With the version now at <span class="code">10</span>, we have added two <span class="code"><service...>...</service></span> elements. Each containing a four <span class="code"><script ...></span> type resources in a service tree configuration. Let's take a closer look.


<source lang="xml">
<source lang="xml">
Line 4,443: Line 4,879:
* The <span class="code">name="..."</span> attribute is a unique name that will be used to identify the service, as we will see later.
* The <span class="code">name="..."</span> attribute is a unique name that will be used to identify the service, as we will see later.
* The <span class="code">autostart="1"</span> attribute tells the cluster that, when it starts, it should automatically start this service.
* The <span class="code">autostart="1"</span> attribute tells the cluster that, when it starts, it should automatically start this service.
* The <span class="code">domain="..."</span> attribute tells the cluster which failover domain this service must run within. The two otherwise identical services each point to a different failover domain, as we discussed in the previous section.
* The <span class="code">domain="..."</span> attribute tells the cluster which fail-over domain this service must run within. The two otherwise identical services each point to a different fail-over domain, as we discussed in the previous section.
* The <span class="code">exclusive="0"</span> attribute tells the cluster that a node running this service '''is''' allowed to to have other services running as well.
* The <span class="code">exclusive="0"</span> attribute tells the cluster that a node running this service '''is''' allowed to to have other services running as well.
* The <span class="code">recovery="restart"</span> attribute sets the service recovery policy. As the name implies, the cluster will try to restart this service should it fail. Should the service fail multiple times in a row, it will be disabled. The exact number of failures allowed before disabling is configurable using the optional <span class="code">max_restarts</span> and <span class="code">restart_expire_time</span> attributes, which are not covered here.
* The <span class="code">recovery="restart"</span> attribute sets the service recovery policy. As the name implies, the cluster will try to restart this service should it fail. Should the service fail multiple times in a row, it will be disabled. The exact number of failures allowed before disabling is configurable using the optional <span class="code">max_restarts</span> and <span class="code">restart_expire_time</span> attributes, which are not covered here.
Line 4,457: Line 4,893:
* DRBD needs to start so that the bare clustered storage devices become available.
* DRBD needs to start so that the bare clustered storage devices become available.
* Clustered LVM must next start so that the logical volumes used by GFS2 and our VMs become available.
* Clustered LVM must next start so that the logical volumes used by GFS2 and our VMs become available.
* The GFS2 partition contains the [[XML]] definition files needed to start our virtual machines.
* The GFS2 partition contains the [[XML]] definition files needed to start our virtual machines.
* Finally, <span class="code">libvirtd</span> must be running for the virtual machines to be able to run. By putting this daemon in the resource tree, we can ensure that no attempt to start a VM will succeed until all of the clustered storage stack is available.
* Finally, <span class="code">libvirtd</span> must be running for the virtual machines to be able to run. By putting this daemon in the resource tree, we can ensure that no attempt to start a VM will succeed until all of the clustered storage stack is available.
 
 
From the other direction, we need the stop order to be organized in the reverse order.
From the other direction, we need the stop order to be organized in the reverse order.
* Stopping <span class="code">libvirtd</span> would cause any remaining running VMs to stop. If a VM is blocking, it will prevent <span class="code">libvirtd</span> from stopping and, thus, delay any of our other clustered storage resources from attempting to stop.
* Stopping <span class="code">libvirtd</span> would cause any remaining running VMs to stop. If a VM is blocking, it will prevent <span class="code">libvirtd</span> from stopping and, thus, delay any of our other clustered storage resources from attempting to stop.
* We need the GFS2 partition to unmount after the VM goes down and before Clustered LVM map stop.
* We need the GFS2 partition to unmount after the VM goes down and before Clustered LVM map stop.
* With all VMs and the GFS2 partition stopped, we can safely say that all LVs are no longer in use and thus <span class="code">clvmd</span> can stop.
* With all VMs and the GFS2 partition stopped, we can safely say that all LVs are no longer in use and thus <span class="code">clvmd</span> can stop.
* With Clustered LVM now stopped, nothing should be using our DRBD resources any more, so we can safely stop them, too.
* With Clustered LVM now stopped, nothing should be using our DRBD resources any more, so we can safely stop them, too.
 
 
All in all, it's a surprisingly simple and effective configuration.
All in all, it's a surprisingly simple and effective configuration.
 
 
== Validating And Pushing The Changes ==
== Validating And Pushing The Changes ==
 
 
We've made a big change, so it's all the more important that we validate the config before proceeding.
We've made a big change, so it's all the more important that we validate the config before proceeding.
 
 
<source lang="bash">
<source lang="bash">
ccs_config_validate  
ccs_config_validate  
</source>
</source>
<source lang="text">
<source lang="text">
Configuration validates
Configuration validates
</source>
</source>
 
 
That was easy. Now push out the updated config.
We need to now tell the cluster to use the new configuration file. Unlike last time, we won't use <span class="code">rsync</span>. Now that the cluster is up and running, we can use it to push out the updated configuration file using <span class="code">cman_tool</span>. This is the first time we've used the cluster to push out an updated <span class="code">cluster.conf</span> file, so we will have to enter the password we set earlier for the <span class="code">ricci</span> user on both nodes.
 
 
On '''both''';
<source lang="bash">
 
cman_tool version -r
<source lang="bash">
</source>
cman_tool version
<source lang="text">
</source>
You have not authenticated to the ricci daemon on an-node01.alteeve.ca
<source lang="text">
</source>
6.2.0 config 11
<source lang="text">
</source>
Password:
 
</source>
== Checking The Cluster's Status ==
<source lang="text">
 
You have not authenticated to the ricci daemon on an-node02.alteeve.ca
Now let's look at a new tool; <span class="code">clustat</span>, '''clu'''ster '''stat'''us. We'll be using <span class="code">clustat</span> extensively from here on out to monitor the status of the cluster members and managed services. It does not manage the cluster in any way, it is simply a status tool. We'll see how
</source>
 
<source lang="text">
Here is what it should look like when run from <span class="code">an-node01</span>.  
Password:
 
</source>
<source lang="bash">
 
clustat  
If you were watching syslog, you will have seen an entries like the ones below.
 
<source lang="text">
Dec 14 20:39:08 an-node01 modcluster: Updating cluster.conf
Dec 14 20:39:12 an-node01 corosync[2360]:  [QUORUM] Members[2]: 1 2
</source>
 
Now we can confirm that both nodes are using the new configuration by re-running the <span class="code">cman_tool version</span> command, but without the <span class="code">-r</span> switch.
 
On '''both''';
 
<source lang="bash">
cman_tool version
</source>
<source lang="text">
6.2.0 config 10
</source>
 
== Checking The Cluster's Status ==
 
Now let's look at a new tool; <span class="code">clustat</span>, '''clu'''ster '''stat'''us. We'll be using <span class="code">clustat</span> extensively from here on out to monitor the status of the cluster members and managed services. It does not manage the cluster in any way, it is simply a status tool. We'll see how
 
Here is what it should look like when run from <span class="code">an-node01</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Wed Dec 14 20:45:04 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local
an-node02.alteeve.ca                      2 Online
</source>
 
At this point, we're only running the foundation of the cluster, so we can only see which nodes are in the cluster. We've added resources to the cluster configuration though, so it's time to start the resource layer as well, which is managed by the <span class="code">rgmanager</span> daemon.
 
At this time, we're still starting the cluster manually after each node boots, so we're going to make sure that <span class="code">rgmanager</span> is disabled at boot.
 
<source lang="bash">
chkconfig rgmanager off
chkconfig --list rgmanager
</source>
<source lang="text">
rgmanager      0:off 1:off 2:off 3:off 4:off 5:off 6:off
</source>
 
Now let's start it.
 
{{note|1=We've configured the storage services to start automatically. When we start <span class="code">rgmanager</span> now, it will start the storage resources, including DRBD. In turn, DRBD will stop up to five minutes and wait for its peer. This will cause the first node you start <span class="code">rgmanager</span> on to appear to hang until the other node's <span class="code">rgmanager</span> has started DRBD as well.}}
 
<source lang="bash">
/etc/init.d/rgmanager start
</source>
<source lang="text">
Starting Cluster Service Manager:                          [  OK  ]
</source>
 
Now let's run <span class="code">clustat</span> again, and see what's new.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Wed Dec 14 20:52:11 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
</source>
 
What we see are two section; The top section shows the cluster members and the lower part covers the managed resources.
 
We can see that both members, <span class="code">an-node01.alteeve.ca</span> and <span class="code">an-node02.alteeve.ca</span> are <span class="code">Online</span>, meaning that <span class="code">cman</span> is running and that they've joined the cluster. It also shows us that both members are running <span class="code">rgmanager</span>. You will always see <span class="code">Local</span> beside the name of the node you ran the actual <span class="code">clustat</span> command from.
 
Under the services, you can see the two new services we created with the <span class="code">service:</span> prefix. We can see that each service is <span class="code">started</span>, meaning that all four of the resources are up and running properly and which node each service is running on.
 
Note that the two storage services are running, despite not having started them? That is because the <span class="code">rgmanager</span> service was started earlier. When we pushed out the updated configuration, <span class="code">rgmanager</span> saw the two new storage services had <span class="code">autostart="1"</span> and started them. If you check your storage services now, you will see that they are all online.
 
DRBD;
 
<source lang="bash">
/etc/init.d/drbd status
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
m:res  cs        ro              ds                p  mounted  fstype
0:r0  Connected  Primary/Primary  UpToDate/UpToDate  C
1:r1  Connected  Primary/Primary  UpToDate/UpToDate  C
2:r2  Connected  Primary/Primary  UpToDate/UpToDate  C
</source>
 
Clustered LVM;
 
<source lang="bash">
pvscan; vgscan; lvscan
</source>
<source lang="text">
  PV /dev/drbd2  VG an02-vg0    lvm2 [201.25 GiB / 201.25 GiB free]
  PV /dev/drbd1  VG an01-vg0    lvm2 [201.62 GiB / 201.62 GiB free]
  PV /dev/drbd0  VG shared-vg0  lvm2 [18.61 GiB / 0    free]
  Total: 3 [421.48 GiB] / in use: 3 [421.48 GiB] / in no VG: 0 [0  ]
  Reading all physical volumes.  This may take a while...
  Found volume group "an02-vg0" using metadata type lvm2
  Found volume group "an01-vg0" using metadata type lvm2
  Found volume group "shared-vg0" using metadata type lvm2
  ACTIVE            '/dev/shared-vg0/shared' [18.61 GiB] inherit
</source>
 
GFS2;
 
<source lang="bash">
/etc/init.d/gfs2 status
</source>
<source lang="text">
Configured GFS2 mountpoints:
Configured GFS2 mountpoints:
/shared
Active GFS2 mountpoints:
/shared
</source>
 
Nice, eh?
 
== Managing Cluster Resources ==
 
Managing services in the cluster is done with a fairly simple tool called <span class="code">clusvcadm</span>.
 
The main commands we're going to look at shortly are:
 
* <span class="code">clusvcadm -e <service> -m <node></span>: Enable the <span class="code"><service></span> on the specified <span class="code"><node></span>. When a <span class="code"><node></span> is not specified, the local node where the command was run is assumed.
* <span class="code">clusvcadm -d <service></span>: Disable the <span class="code"><service></span>.
 
There are other ways to use <span class="code">clusvcadm</span> which we will look at after the virtual servers are provisioned and under cluster control.
 
== Stopping Clustered Storage - A Preview To Cold-Stopping The Cluster ==
 
To stop the storage services, we'll use the <span class="code">rgmanager</span> command line tool <span class="code">clusvcadm</span>, the '''clu'''ster '''s'''er'''v'''i'''c'''e '''adm'''inistrator. Specifically, we'll use its <span class="code">-d</span> switch, which tells <span class="code">rgmanager</span> to '''d'''isable the service.
 
{{note|1=Services with the <span class="code">service:</span> prefix can be called with their name alone. As we will see later, other services will need to have the service type prefix included.}}
 
As always, confirm the current state of affairs before starting. On both nodes, run <span class="code">clustat</span> to confirm that the storage services are up.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 20 20:37:42 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
</source>
 
They are, so now lets gracefully shut them down.
 
On '''<span class="code">an-node01</span>''', run:
 
<source lang="bash">
clusvcadm -d storage_an01
</source>
<source lang="text">
Local machine disabling service:storage_an01...Success
</source>
 
If we now run <span class="code">clustat</span> from either node, we should see this;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 20 20:38:28 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          (an-node01.alteeve.ca)        disabled     
service:storage_an02          an-node02.alteeve.ca          started     
</source>
 
Notice how <span class="code">service:storage_an01</span> is now in the <span class="code">disabled</span> state? If you check the status of <span class="code">drbd</span> now on <span class="code">an-node02</span> you will see that <span class="code">an-node01</span> is indeed down.
 
<source lang="bash">
/etc/init.d/drbd status
</source>
<source lang="text">
drbd driver loaded OK; device status:
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
m:res  cs            ro              ds                p  mounted  fstype
0:r0  WFConnection  Primary/Unknown  UpToDate/Outdated  C
1:r1  WFConnection  Primary/Unknown  UpToDate/Outdated  C
2:r2  WFConnection  Primary/Unknown  UpToDate/Outdated  C
</source>
 
If you want to shut down the entire cluster, you will need to stop the <span class="code">storage_an02</span> service as well. For fun, let's do this, but lets stop the service from <span class="code">an-node01</span>;
 
<source lang="bash">
clusvcadm -d storage_an02
</source>
<source lang="text">
Local machine disabling service:storage_an02...Success
</source>
 
Now on both nodes, we should see this from <span class="code">clustat</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 20 20:39:55 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          (an-node01.alteeve.ca)        disabled     
service:storage_an02          (an-node02.alteeve.ca)        disabled     
</source>
 
{{warning|1=If you are not doing a cold shut-down of the cluster, you will want to skip this step and just stop <span class="code">rgmanager</span>. The reason is that the <span class="code">autostart="1"</span> value only gets evaluated when [[quorum]] is gained. If you disable the <span class="code">storage_anXX</span> service and then reboot the node, the cluster has not lost quorum. Thus, when the node rejoins the cluster, the storage service '''will not''' automatically start.}}
 
We can now, if we wanted to, stop the <span class="code">rgmanager</span> and <span class="code">cman</span> daemons. This is, in fact, how we will cold-stop the cluster from now on.
 
We'll cover cold stopping the cluster after we finish provisioning VMs.
 
== Starting Clustered Storage ==
 
Normally from now on, the clustered storage will start automatically. However, it's a good exercise to look at how to manually start them, just in case.
 
The main difference from stopping the service is that we swap the <span class="code">-d</span> switch for the <span class="code">-e</span>, '''e'''nable, switch. We will also add the target cluster member name using the <span class="code">-m</span> switch. We didn't need to use the member switch while stopping because the cluster could tell where the service was running and, thus, which member to contact to stop the service.
 
Should you omit the member name, the cluster will try to use the local node as the target member. Note though that a target service will start on the node the command was issued on, regardless of the fail-over domain's ordered policy. That is to say, a service will not start on another node in the cluster when the member option is not specified, despite the fail-over configuration set to prefer another node.
 
{{note|1=The storage services need to start at about the same time on both nodes. This is because the initially started storage service will hang when it tries to start <span class="code">drbd</span> until either the other node is up or until it times out. For this reason, be sure to have two terminal windows open to make then next two calls simultaneously.}}
 
On '''<span class="code">an-node01</span>''', run;
 
<source lang="bash">
clusvcadm -e storage_an01 -m an-node01.alteeve.ca
</source>
<source lang="text">
Member an-node01.alteeve.ca trying to enable service:storage_an01...Success
service:storage_an01 is now running on an-node01.alteeve.ca
</source>
 
On '''<span class="code">an-node02</span>''', run;
 
<source lang="bash">
clusvcadm -e storage_an02 -m an-node02.alteeve.ca
</source>
<source lang="text">
Member an-node02.alteeve.ca trying to enable service:storage_an02...Success
service:storage_an02 is now running on an-node02.alteeve.ca
</source>
 
Now <span class="code">clustat</span> on either node should show the storage services running again.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 20 21:09:19 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
</source>
 
== A Note On Resource Management With DRBD ==
 
When the cluster starts for the first time, where neither node's DRBD storage was up, the first node to start will wait for
<span class="code">/etc/drbd.d/global_common.conf</span>'s <span class="code">wfc-timeout</span> seconds (<span class="code">300</span> in our case) for the second node to start. For this reason, we want to ensure that we enable the storage resources more or less at the same time and from two different terminals. The reason for two terminals is that the <span class="code">clusvcadm -e ...</span> command won't return until all resources have started, so you need the second terminal window to start the other node's clustered storage service while the first one waits.
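
As a reminder, the relevant section of <span class="code">/etc/drbd.d/global_common.conf</span> looks something like this (the exact layout of your file may differ);

<source lang="text">
startup {
        # Wait up to 300 seconds for the peer node on initial start.
        wfc-timeout     300;
}
</source>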
 
If the clustered storage service ever fails, check [[syslog]]'s <span class="code">/var/log/messages</span> for a split-brain error. It will look something like this:
 
<source lang="text">
Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm initial-split-brain minor-2
Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm initial-split-brain minor-2 exit code 0 (0x0)
Mar 29 20:24:37 an-node01 kernel: block drbd2: Split-Brain detected but unresolved, dropping connection!
Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm split-brain minor-2
Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm split-brain minor-2 exit code 0 (0x0)
Mar 29 20:24:37 an-node01 kernel: block drbd2: conn( WFReportParams -> Disconnecting )
</source>
 
With the fencing hooked into the cluster, this should be a very hard problem to run into. If you do hit it though, [http://linbit.com Linbit] has the authoritative guide for recovering from this situation.
 
* [http://www.drbd.org/users-guide-legacy/s-resolve-split-brain.html Manual split brain recovery]
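
As a rough sketch of what that procedure boils down to (assuming, for illustration only, that <span class="code">r0</span> is the affected resource and that <span class="code">an-node02</span>'s changes are the ones to throw away; read the guide above before acting on a real cluster);

<source lang="bash">
# On the node whose changes will be discarded (an-node02 in this example);
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# On the surviving node (an-node01), reconnect if it is standing alone;
drbdadm connect r0
</source>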
 
= Provisioning Virtual Machines =
 
Now we're getting to the purpose of our cluster; provisioning virtual machines!
 
We have two steps left;
* Provision our VMs.
* Add the VMs to <span class="code">rgmanager</span>.
 
"Provisioning" a virtual machine simple means to create it; Assign a collection of emulated hardware, connected to physical devices, to a given virtual machine and begin the process of installing the operating system on it. This tutorial is more about clustering than it is about virtual machine administration, so some experience with managing virtual machines has to be assumed. If you need to brush up, here are some resources;
 
* [http://www.linux-kvm.org/page/HOWTO KVM project's How-Tos]
* [http://kvm.et.redhat.com/page/FAQ KVM project's FAQ]
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Hypervisor_Deployment_Guide/index.html Red Hat's Hypervisor Guide]
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Getting_Started_Guide/index.html Red Hat's Virtualization Guide]
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/index.html Red Hat's Virtualization Administration]
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/index.html Red Hat's Virtualization Host Configuration and Guest Installation Guide]
 
When you feel comfortable, proceed.
 
== Before We Begin - Setting Up Our Workstation ==
 
The virtual machines are, for obvious reasons, headless. That is, they have no real video card into which we can plug a monitor and watch the progress of the install. This would, left unresolved, make it pretty hard to install the operating systems as there is simply no network in the early stages of most operating system installations.
 
Alongside the <span class="code">libvirtd</span> daemon is a graphical program called <span class="code">virt-manager</span>, which is available on almost all modern Linux distributions. This application makes it very easy to connect to our virtual machines, regardless of their network state.
 
How you install this will depend on your workstation.
 
On [[RPM]]-based systems, try:
 
<source lang="bash">
yum install virt-manager
</source>
 
On [[deb]]-based systems, try:
 
<source lang="bash">
apt-get install virt-manager
</source>
 
On [[SUSE]]-based systems, try;
 
<source lang="bash">
zypper install virt-manager
</source>
 
Once it is installed, you need to determine whether your workstation is on the [[IFN]] or [[BCN]]. I've got my laptop on the BCN, so I will connect to the nodes using just their short host names. If you're on the same IFN as the nodes, you will need to append <span class="code">.ifn</span> to the host names.
 
[[Image:2n-RHEL6-KVM_virt-manager_01.png|thumb|448px|center|Initial installation of <span class="code">virt-manager</span>.]]
 
To connect to the cluster nodes;
 
# Click on ''<span class="code">File</span>'' -> ''<span class="code">Add Connection...</span>''.
# Make sure that ''Hypervisor'' is set to ''<span class="code">QEMU/KVM</span>''.
# Click to check ''Connect to remote host''.
# Make sure that ''Method'' is set to ''<span class="code">SSH</span>''.
# Make sure that ''Username'' is set to ''<span class="code">root</span>''.
# Enter the ''Hostname'' using the proper entry from <span class="code">/etc/hosts</span> (ie: <span class="code">an-node01</span> or <span class="code">an-node01.ifn</span>)
# Click on the button labelled ''<span class="code">Connect</span>''.
# Repeat these steps for the other node.
 
[[Image:2n-RHEL6-KVM_virt-manager_02.png|thumb|700px|center|New connection window.]]
 
Once your two nodes have been added to <span class="code">virt-manager</span>, you should see both nodes as connected, but no VMs will be shown as we've not provisioned any yet.
 
[[Image:2n-RHEL6-KVM_virt-manager_03.png|thumb|448px|center|Two nodes added to <span class="code">virt-manager</span>.]]
 
We'll come back to <span class="code">virt-manager</span> shortly.
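
If you prefer the command line, the same remote connection can be tested with <span class="code">virsh</span>. This is just an illustration, assuming your workstation can resolve <span class="code">an-node01</span> and has [[SSH]] access as <span class="code">root</span>;

<source lang="bash">
# List all VMs on an-node01 from a remote workstation.
virsh -c qemu+ssh://root@an-node01/system list --all
</source>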
 
== Provision Planning ==
 
Before we can start creating virtual machines, we need to take stock of what resources we have available and how we want to divvy them out to the VMs.
 
In my cluster, I've got 200 [[GiB]] available on each of my two nodes.
 
<source lang="bash">
vgdisplay |grep -i -e free -e "vg name"
</source>
<source lang="text">
  VG Name              an02-vg0
  Free  PE / Size      51521 / 201.25 GiB
  VG Name              an01-vg0
  Free  PE / Size      51615 / 201.62 GiB
  VG Name              shared-vg0
  Free  PE / Size      0 / 0 
</source>
 
I know I have 8 [[GiB]] of memory, but I have to slice off a certain amount of that for the host [[OS]]. I've got my nodes sitting about where they will be normally, so I can check how much memory is in use fairly easily.
 
<source lang="bash">
cat /proc/meminfo |grep -e MemTotal -e MemFree
</source>
<source lang="text">
MemTotal:        8050312 kB
MemFree:        7432288 kB
</source>
 
I'm sitting at about 604 [[MiB]] used (<span class="code">8,050,312 [[KiB]] - 7,432,288 KiB == 618,024 KiB / 1,024 == 603.54 MiB</span>). I think I can safely operate within 1 [[GiB]], leaving me 7 GiB of RAM to allocate to VMs.
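
If you want to let the shell do that arithmetic for you, a quick one-liner like this works (just a convenience, not part of the cluster setup);

<source lang="bash">
# Print roughly how much memory is in use, in MiB.
awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} END {printf "%.2f MiB in use\n", (t-f)/1024}' /proc/meminfo
</source>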
 
Next up, I need to confirm how many CPU cores I have available.
 
<source lang="bash">
cat /proc/cpuinfo |grep processor
</source>
<source lang="text">
processor : 0
processor : 1
processor : 2
processor : 3
</source>
 
I've got four, and I like to dedicate the first one to the host OS, so I've got three to allocate to my VMs.
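
If you just want the count, <span class="code">grep</span> can do it in one line;

<source lang="bash">
# Count the CPU cores the host reports.
grep -c ^processor /proc/cpuinfo
</source>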
 
On the network front, I know I've got two bridges, one to the [[IFN]] and one to the [[BCN]].
 
So let's summarize:
* 400 GiB of space, 200 GiB per DRBD resource.
* 7 GiB of RAM.
* 3 CPU cores (can over-allocate).
* 1 network bridge, <span class="code">vbr2</span>, connecting the VMs to the [[IFN]].
 
With this list in mind, we can now start planning out the VMs.
 
The network can share the same [[subnet]] as the [[IFN]] if you wish, but I prefer to isolate my VMs from the IFN using a different subnet, <span class="code">10.254.0.0/16</span>. This is, admittedly, "security by obscurity" and in no way is it a replacement for proper isolation. In production, you will want to set up firewalls on your nodes to prevent access from the virtual machines.
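
As a purely hypothetical example of that kind of isolation, a single <span class="code">iptables</span> rule on each node could drop traffic arriving from the VM subnet. A real production policy would be more thorough, and would need to be saved so that it persists across reboots;

<source lang="bash">
# Hypothetical example; drop traffic from the VM subnet destined for the node itself.
iptables -I INPUT -s 10.254.0.0/16 -j DROP
</source>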
 
With that said, here is what we will install now. Obviously, you will have other needs and goals. Mine is an admittedly artificial network.
* A development server. This would be used for testing, so it will have more modest resources.
* A web server, which will mainly use a DB server, so will need CPU and RAM, but not much disk.
* A database server.
* A Windows server. I don't exactly have a use for it, except to show how to install a Windows VM for those who do need it.
 
Now to divvy up the resources;
 
{|class="wikitable"
!VM
!Name
!Primary Host
!Disk
!RAM
!CPU
![[IFN]]
!OS
|-
|Dev Server
|class="code"|vm0001-dev
|class="code"|an-node01
|150 [[GiB]]
|1 [[GiB]]
|1 core
|class="code"|10.254.0.1/16
|CentOS 6
|-
|Web Server
|class="code"|vm0002-web
|class="code"|an-node01
|50 [[GiB]]
|2 [[GiB]]
|2 cores
|class="code"|10.254.0.2/16
|CentOS 6
|-
|Database Server
|class="code"|vm0003-db
|class="code"|an-node02
|100 [[GiB]]
|2 [[GiB]]
|2 cores
|class="code"|10.254.0.3/16
|CentOS 6
|-
|Windows Server
|class="code"|vm0004-ms
|class="code"|an-node02
|100 [[GiB]]
|2 [[GiB]]
|2 cores
|class="code"|10.254.0.4/16
|Windows Server 2008 R2 64-bit
|}
 
Notice that we've over-allocated the CPU cores? This is ok. We're going to restrict the VMs to CPU cores number 1 through 3, leaving core number 0 for the host OS. When all of the VMs are running on one node, the hypervisor's scheduler will handle shuffling jobs from the VMs' cores to the real cores that are least loaded at a given time.
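
For the curious, once a VM exists, one way to apply this kind of restriction by hand is <span class="code">virsh</span>'s <span class="code">vcpupin</span> command. This is shown for illustration only, using <span class="code">vm0001-dev</span> (created later) as an example;

<source lang="bash">
# Illustration only; pin vm0001-dev's first virtual CPU to host cores 1 through 3.
virsh vcpupin vm0001-dev 0 1-3

# Confirm the VM's current CPU information and pinning.
virsh vcpuinfo vm0001-dev
</source>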
 
As for the RAM though, we cannot use more than we have. We're going to leave 1 [[GiB]] for the host, so we'll divvy the remaining 7 GiB between the VMs. Remember, we have to plan for when all four VMs will run on just one node.
 
==== A Note on VM Configuration ====
 
It would be of questionable value to divert into covering the setup of each VM's operating system. It will be up to you, the reader, to set up each VM however you like.
 
=== Provisioning vm0001-dev ===
 
{{note|1=We're going to spend a lot more time on this first VM, so bear with me here, even if you aren't interested in creating a VM like this.}}
 
Before we can provision, we need to gather whatever install source we'll need for the VM. This can be a simple [[ISO]] file, as we'll see on the [[2-Node Red Hat KVM Cluster Tutorial#Provisioning vm0004-ms|windows install]] later, or it can be files on a web server, which we'll use here. We'll also need to create the "hard drive" for the VM, which will be a new [[LV]]. Finally, we'll craft the <span class="code">virt-install</span> command which will begin the actual OS install.
 
This being a Linux machine, we can provision it over the network. Conveniently, I've got a [[Setting Up a PXE Server on an RPM-based OS|PXE server]] set up with the CentOS install files available on my local network at <span class="code"><nowiki>http://10.255.255.254/c6/x86_64/img/</nowiki></span>. You don't need a full [[PXE]] server; mounting the install [[ISO]] and pointing a web server at the mounted directory would work just fine. I'm also going to further customize my install by using a [[kickstart]] file which, effectively, pre-answers the installation questions so that the install is fully automated.
 
So, let's create the new [[LV]]. I know that this machine will primarily run on <span class="code">an-node01</span> and that it will be 150 [[GiB]]. I personally always name the [[LV]]s as <span class="code">vmXXXX-Y</span>, where <span class="code">vmXXXX</span> matches the VM's name and <span class="code">Y</span> is a simple integer for that VM's disks. You are obviously free to use whatever makes most sense to you.
 
==== Creating vm0001-dev's Storage ====
 
With that, the <span class="code">lvcreate</span> call is;
 
On <span class="code">an-node01</span>, run;
 
<source lang="bash">
lvcreate -L 150G -n vm0001-1 /dev/an01-vg0
</source>
<source lang="text">
  Logical volume "vm0001-1" created
</source>
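
If you want to confirm the new [[LV]], <span class="code">lvs</span> will show it along with its size;

<source lang="bash">
# List the logical volumes in an01-vg0; vm0001-1 should now appear.
lvs an01-vg0
</source>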
 
==== Creating vm0001-dev's virt-install Call ====
 
Now with the storage created, we can craft the <span class="code">virt-install</span> command. I like to put this into a file under the <span class="code">/shared/provision/</span> directory for future reference. Let's take a look at the command, then we'll discuss what the switches are for.
 
<source lang="bash">
touch /shared/provision/vm0001-dev.sh
chmod 755 /shared/provision/vm0001-dev.sh
vim /shared/provision/vm0001-dev.sh
</source>
<source lang="text">
virt-install --connect qemu:///system \
  --name vm0001-dev \
  --ram 1024 \
  --arch x86_64 \
  --vcpus 1 \
  --location http://10.255.255.254/c6/x86_64/img/ \
  --extra-args "ks=http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks" \
  --os-type linux \
  --os-variant rhel6 \
  --disk path=/dev/an01-vg0/vm0001-1 \
  --network bridge=vbr2 \
  --vnc
</source>
 
{{note|1=Don't use tabs to indent the lines.}}
 
Let's break it down;
 
* <span class="code">--connect qemu:///system</span>
This tells <span class="code">virt-install</span> to use the [[QEMU]] hardware emulator (as opposed to [[Xen]]) and to install the VM on to the local system.
 
* <span class="code">--name vm0001-dev</span>
This sets the name of the VM. It is the name we will use in the cluster configuration and whenever we use the <span class="code">libvirtd</span> tools, like <span class="code">virsh</span>.
 
* <span class="code">--ram 1024</span>
This sets the amount of RAM, in [[MiB]], to allocate to this VM. Here, we're allocating 1 [[GiB]] (1,024 MiB).
 
* <span class="code">--arch x86_64</span>
This sets the emulated CPU's architecture to 64-[[bit]]. This can be used even when you plan to install a 32-bit [[OS]], but not the other way around, of course.
 
* <span class="code">--vcpus 1</span>
This sets the number of CPU cores to allocate to this VM. Here, we're setting just one.
 
* <span class="code">--location <nowiki>http://10.255.255.254/c6/x86_64/img/</nowiki></span>
This tells <span class="code">virt-install</span> to pull the installation files from the [[URL]] specified.
 
* <span class="code">--extra-args "ks=<nowiki>http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks</nowiki>"</span>
This is an optional command used to pass the install kernel arguments. Here, I'm using it to tell the kernel to grab the specified kickstart file for use during the installation.
 
{{note|1=If you want to copy the kickstart script used in this tutorial, you can [[File c6_minimal.ks|find it here]].}}
 
* <span class="code">--os-type linux</span>
This broadly sets hardware emulation for optimal use with Linux-based virtual machines.
 
* <span class="code">--os-variant rhel6</span>
This further refines tweaks to the hardware emulation to maximize performance for [[RHEL]]6 (and derivative) installs.
 
* <span class="code">--disk path=/dev/an01-vg0/vm0001-1</span>
This tells the installer to use the [[LV]] we created earlier as the backing storage device for the new virtual machine.
 
* <span class="code">--network bridge=vbr2</span>
This tells the installer to create a network card in the VM and to then connect it to the <span class="code">vbr2</span> bridge, thus connecting the VM to the [[IFN]]. Optionally, you could add the <span class="code">,model=e1000</span> option to tell the emulator to mimic an [[Intel]] <span class="code">e1000</span> hardware NIC. The default is to use the <span class="code">[[virtio]]</span> virtualized network card. If you have two or more bridges, you can repeat the <span class="code">--network</span> switch as many times as you need.
 
* <span class="code">--vnc</span>
This tells <span class="code">virt-install</span> to create a [[VNC]] server for the VM and, if possible, immediately connect to the just-provisioned VM. With a minimal install on the nodes, the automatically spawned client will fail. This is fine; just use <span class="code">virt-manager</span> from your workstation.
 
{{note|1=If you close the initial VNC window and want to reconnect to the VM, you can simply open up <span class="code">virt-manager</span>, connect to the <span class="code">an-node01</span> host if needed, and double-click on the <span class="code">vm0001-dev</span> entry. This will effectively "plug a monitor into the VM".}}
 
==== Initializing vm0001-dev's Install ====
 
Well, time to start the install!
 
On <span class="code">an-node01</span>, run;
 
<source lang="bash">
/shared/provision/vm0001-dev.sh
</source>
<source lang="text">
Starting install...
Retrieving file .treeinfo...                            |  676 B    00:00 ...
Retrieving file vmlinuz...                              | 7.5 MB    00:00 ...
Retrieving file initrd.img...                            |  59 MB    00:02 ...
Creating domain...                                      |    0 B    00:00   
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
</source>
 
And it's off!
 
[[Image:2n-RHEL6-KVM_vm0001_provision_01.png|thumb|700px|center|Initial provision of <span class="code">vm0001-dev</span>.]]
 
Progressing nicely.
 
[[Image:2n-RHEL6-KVM_vm0001_provision_02.png|thumb|700px|center|Installation of <span class="code">vm0001-dev</span> proceeding as expected.]]
 
And done! Note that, depending on your kickstart file, it may have automatically rebooted or you may need to reboot manually.
 
{{note|1=I've found that there are occasions where the VM will power off instead of rebooting. With <span class="code">virt-manager</span>, you can click to select the new VM and then press the "play" button to boot the VM manually.}}
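
The command line equivalent, should you prefer it, is simply;

<source lang="bash">
# Boot the VM if it powered off instead of rebooting.
virsh start vm0001-dev
</source>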
 
[[Image:2n-RHEL6-KVM_vm0001_provision_03.png|thumb|700px|center|Installation of <span class="code">vm0001-dev</span> complete.]]
 
==== Defining vm0001-dev On an-node02 ====
 
We can use <span class="code">virsh</span> to see that the new virtual machine exists and what state it is in. Note that I've gotten into the habit of using <span class="code">--all</span> to get around <span class="code">virsh</span>'s default behaviour of hiding VMs that are off.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0001-dev          running
</source>
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
</source>
 
As we see, the new <span class="code">vm0001-dev</span> is only known to <span class="code">an-node01</span>. This is, in and of itself, just fine.
 
We're going to need to put the virtual machine's [[XML]] definition file in a common place accessible on both nodes. This could be matching but separate directories on either node, or it can be a common shared location. As we've got the cluster's <span class="code">/shared</span> GFS2 partition, we're going to use the <span class="code">/shared/definitions</span> directory we created earlier. This avoids the need to remember to keep two copies of the file in sync across both nodes.
 
To backup the VM's configuration, we'll again use <span class="code">virsh</span>, but this time with the <span class="code">dumpxml</span> command.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh dumpxml vm0001-dev > /shared/definitions/vm0001-dev.xml
cat /shared/definitions/vm0001-dev.xml
</source>
<source lang="xml">
<domain type='kvm' id='2'>
  <name>vm0001-dev</name>
  <uuid>2512b2dd-a1a8-f990-2a0d-6c41968ab3f8</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='network'/>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an01-vg0/vm0001-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:9b:3c:f7'/>
      <source bridge='vbr2'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
</source>
 
There we go; That is the emulated hardware on which your virtual machine exists. Pretty neat, eh?
 
I like to keep all of my VMs defined on all of my nodes. This is entirely optional, as the cluster will define the VM on a target node when needed. It is, though, a good chance to examine how this is done manually.
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh define /shared/definitions/vm0001-dev.xml
</source>
<source lang="text">
Domain vm0001-dev defined from /shared/definitions/vm0001-dev.xml
</source>
 
We can confirm that it now exists by re-running <span class="code">virsh list --all</span>.
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  - vm0001-dev          shut off
</source>
 
You should also now be able to see <span class="code">vm0001-dev</span> under <span class="code">an-node02</span> in your <span class="code">virt-manager</span> window. It will be listed as <span class="code">shutoff</span>, which is expected. '''Do not''' try to turn it on while it's running on the other node!
 
=== Provisioning vm0002-web ===
 
This installation will be pretty much the same as it was for <span class="code">vm0001-dev</span>, so we'll look mainly at the differences.
 
==== Creating vm0002-web's Storage ====
 
We'll use <span class="code">lvcreate</span> again, but this time, instead of specifying an explicit size, we'll allocate a percentage of the remaining free space. Note that the <span class="code">-L</span> switch changes to <span class="code">-l</span>;
 
On <span class="code">an-node01</span>, run;
 
<source lang="bash">
lvcreate -l 100%FREE -n vm0002-1 /dev/an01-vg0
</source>
<source lang="text">
  Logical volume "vm0002-1" created
</source>
 
==== Creating vm0002-web's virt-install Call ====
 
The <span class="code">virt-install</span> command will be quite similar to the previous one.
 
<source lang="bash">
touch /shared/provision/vm0002-web.sh
chmod 755 /shared/provision/vm0002-web.sh
vim /shared/provision/vm0002-web.sh
</source>
<source lang="text">
virt-install --connect qemu:///system \
  --name vm0002-web \
  --ram 2048 \
  --arch x86_64 \
  --vcpus 2 \
  --location http://10.255.255.254/c6/x86_64/img/ \
  --extra-args "ks=http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks" \
  --os-type linux \
  --os-variant rhel6 \
  --disk path=/dev/an01-vg0/vm0002-1 \
  --network bridge=vbr2 \
  --vnc
</source>
 
Let's look at the differences;
 
* <span class="code">--name vm0002-web</span>; This sets the new name of the VM.
 
* <span class="code">--ram 2048</span>; This doubles the amount of RAM to 2048 [[MiB]].
 
* <span class="code">--vcpus 2</span>; This sets the number of CPU cores to two.
 
* <span class="code">--disk path=/dev/an01-vg0/vm0002-1</span>; The path to the new LV is set.
 
Note that the same kickstart file from before is used. This is fine as it doesn't specify a specific IP address and it is smart enough to adapt to the new virtual disk size.
 
==== Initializing vm0002-web's Install ====
 
Well, time to start the install!
 
On <span class="code">an-node01</span>, run;
 
<source lang="bash">
/shared/provision/vm0002-web.sh
</source>
<source lang="text">
Starting install...
Retrieving file .treeinfo...                            |  676 B    00:00 ...
Retrieving file vmlinuz...                              | 7.5 MB    00:00 ...
Retrieving file initrd.img...                            |  59 MB    00:02 ...
Creating domain...                                      |    0 B    00:00   
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
</source>
 
The install should proceed more or less the same as it did for <span class="code">vm0001-dev</span>.
 
==== Defining vm0002-web On an-node02 ====
 
We can use <span class="code">virsh</span> to see that the new virtual machine exists and what state it is in. Note that I've gotten into the habit of using <span class="code">--all</span> to get around <span class="code">virsh</span>'s default behaviour of hiding VMs that are off.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0001-dev          running
  4 vm0002-web          running
</source>
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  - vm0001-dev          shut off
</source>
 
As before, the new <span class="code">vm0002-web</span> is only known to <span class="code">an-node01</span>.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh dumpxml vm0002-web > /shared/definitions/vm0002-web.xml
cat /shared/definitions/vm0002-web.xml
</source>
<source lang="xml">
<domain type='kvm' id='4'>
  <name>vm0002-web</name>
  <uuid>02f967ab-103f-c276-c40f-9eaa47339df4</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an01-vg0/vm0002-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:65:39:60'/>
      <source bridge='vbr2'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
</source>
 
There we go; That is the emulated hardware on which your virtual machine exists. Pretty neat, eh?
 
I like to keep all of my VMs defined on all of my nodes. This is entirely optional, as the cluster will define the VM on a target node when needed. It is, though, a good chance to examine how this is done manually.
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh define /shared/definitions/vm0002-web.xml
</source>
<source lang="text">
Domain vm0002-web defined from /shared/definitions/vm0002-web.xml
</source>
 
We can confirm that it now exists by re-running <span class="code">virsh list --all</span>.
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  - vm0001-dev          shut off
  - vm0002-web          shut off
</source>
 
=== Provisioning vm0003-db ===
 
This installation will, again, be pretty much the same as it was for <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span>, so we'll again look mainly at the differences.
 
==== Creating vm0003-db's Storage ====
 
We'll use <span class="code">lvcreate</span> again but, as this is the first [[LV]] on <span class="code">an02-vg0</span>, we'll specify an explicit size again.
 
On <span class="code">an-node01</span>, run;
 
<source lang="bash">
lvcreate -L 100G -n vm0003-1 /dev/an02-vg0
</source>
<source lang="text">
  Logical volume "vm0003-1" created
</source>
 
==== Creating vm0003-db's virt-install Call ====
 
The <span class="code">virt-install</span> command will be quite similar to the previous one.
 
<source lang="bash">
touch /shared/provision/vm0003-db.sh
chmod 755 /shared/provision/vm0003-db.sh
vim /shared/provision/vm0003-db.sh
</source>
<source lang="text">
virt-install --connect qemu:///system \
  --name vm0003-db \
  --ram 2048 \
  --arch x86_64 \
  --vcpus 2 \
  --location http://10.255.255.254/c6/x86_64/img/ \
  --extra-args "ks=http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks" \
  --os-type linux \
  --os-variant rhel6 \
  --disk path=/dev/an02-vg0/vm0003-1 \
  --network bridge=vbr2 \
  --vnc
</source>
 
Let's look at the differences;
 
* <span class="code">--name vm0003-db</span>; This sets the new name of the VM.
 
* <span class="code">--disk path=/dev/an02-vg0/vm0003-1</span>; The path to the new LV is set. Note that the [[VG]] has changed, as this VM will normally run on <span class="code">an-node02</span>.
 
==== Initializing vm0003-db's Install ====
 
This time we're going to provision the new VM on <span class="code">an-node02</span>, as that is where it will live normally.
 
On <span class="code">an-node02</span>, run;
 
<source lang="bash">
/shared/provision/vm0003-db.sh
</source>
<source lang="text">
Starting install...
Retrieving file .treeinfo...                            |  676 B    00:00 ...
Retrieving file vmlinuz...                              | 7.5 MB    00:00 ...
Retrieving file initrd.img...                            |  59 MB    00:02 ...
Creating domain...                                      |    0 B    00:00   
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
</source>
 
The install should proceed more or less the same as it did for <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span>.
 
==== Defining vm0003-db On an-node01 ====
 
We can use <span class="code">virsh</span> to see that the new virtual machine exists and what state it is in. Note that I've gotten into the habit of using <span class="code">--all</span> to get around <span class="code">virsh</span>'s default behaviour of hiding VMs that are off.
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0003-db            running
  - vm0001-dev          shut off
  - vm0002-web          shut off
</source>
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0001-dev          running
  4 vm0002-web          running
</source>
 
To backup the VM's configuration, we'll again use <span class="code">virsh</span>, but this time with the <span class="code">dumpxml</span> command.
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh dumpxml vm0003-db > /shared/definitions/vm0003-db.xml
cat /shared/definitions/vm0003-db.xml
</source>
<source lang="xml">
<domain type='kvm' id='2'>
  <name>vm0003-db</name>
  <uuid>a7018001-b433-b739-bbd9-d4d3285f0a72</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an02-vg0/vm0003-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:44:83:ec'/>
      <source bridge='vbr2'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
</source>
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh define /shared/definitions/vm0003-db.xml
</source>
<source lang="text">
Domain vm0003-db defined from /shared/definitions/vm0003-db.xml
</source>
 
We can confirm that it now exists by re-running <span class="code">virsh list --all</span>.
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0001-dev          running
  4 vm0002-web          running
  - vm0003-db            shut off
</source>
 
=== Provisioning vm0004-ms ===
 
Now for something a little different!
 
This will be the [http://www.microsoft.com/en-us/server-cloud/windows-server/2008-r2-standard.aspx Windows 2008 R2] virtual machine. The biggest difference this time will be that we're going to install from the [[ISO]] file rather than from a web-accessible store.
 
Another difference is that we're going to specify what kind of storage bus to use with this VM. We'll be using a special, virtualized bus called <span class="code">virtio</span> which requires that the drivers be available to the OS at install time. These drivers will, in turn, be made available to the installer as a virtual floppy disk. It will make for quite the interesting <span class="code">virt-install</span> call, as we'll see.
 
==== Preparing vm0004-ms's Storage ====
 
As before, we need to create the backing storage [[LV]] before we can provision the machine. As we planned, this will be a 100 [[GiB]] partition and will be on the <span class="code">an02-vg0</span> [[VG]]. Seeing as this LV will use up the rest of the free space in the VG, we'll again use the <span class="code">lvcreate -l 100%FREE</span> instead of <span class="code">-L 100G</span> as sometimes the numbers don't work out to be exactly the size we intend.
 
On <span class="code">an-node02</span>, run;
 
<source lang="bash">
lvcreate -l 100%FREE -n vm0004-1 /dev/an02-vg0
</source>
<source lang="text">
  Logical volume "vm0004-1" created
</source>
 
Before we proceed, we now need to put a copy of the install media, the OS's [[ISO]] and the virtual floppy disk, somewhere that the installer can access. I like to put files like this into the <span class="code">/shared/files/</span> directory we created earlier. How you put them there will be an exercise for the reader.
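
For example, if the files are sitting on your workstation, something like this would copy them over (adjust the paths to suit your setup);

<source lang="bash">
# Copy the Windows ISO and the virtio driver floppy image to the shared GFS2 partition.
scp Windows_Server_2008_R2_64Bit_SP1.iso virtio-win-1.1.16.vfd root@an-node02:/shared/files/
</source>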
 
If you do not have a copy of Microsoft's server operating system, you can download a 30-day free trial here;
* [http://technet.microsoft.com/en-us/evalcenter/dd459137 MS Windows Server 2008 R2 with SP1]
 
The driver for the <span class="code">virtio</span> bus can be found from Red Hat here. Note that there is an [[ISO]] and a <span class="code">vfd</span> (virtual floppy disk) file. You can use the ISO and mount it as a second CD-ROM if you wish. This tutorial will use the virtual floppy disk to show how floppy images can be used in VMs:
* [http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/ virtio Drivers for Windows]
 
{{note|1=The <span class="code">vfd</span> no longer seems to exist upstream. As of Sep. 30, 2012, the [http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/ latest available version] is <span class="code">virtio-win-0.1-30.iso</span>, which is an [[ISO]] (cd-rom) image. To use it, replace the line;
 
<span class="code">--disk path=/shared/files/virtio-win-1.1.16.vfd,device=floppy \</span>
 
with;
 
<span class="code">--disk path=/shared/files/virtio-win-0.1-30.iso,device=cdrom \</span>}}
 
For those wishing to use the floppy image:
* Local copy of [https://alteeve.ca/files/virtio-win-1.1.16.vfd virtio-win-1.1.16.vfd].
 
==== Creating vm0004-ms's virt-install Call ====
 
Let's look at the <span class="code">virt-install</span> command, then we'll discuss the main differences from the previous calls. As before, we'll put this command into a small shell script for later reference.
 
<source lang="bash">
touch /shared/provision/vm0004-ms.sh
chmod 755 /shared/provision/vm0004-ms.sh
vim /shared/provision/vm0004-ms.sh
</source>
<source lang="text">
virt-install --connect qemu:///system \
  --name vm0004-ms \
  --ram 2048 \
  --arch x86_64 \
  --vcpus 2 \
  --cdrom /shared/files/Windows_Server_2008_R2_64Bit_SP1.iso \
  --disk path=/dev/an02-vg0/vm0004-1,device=disk,bus=virtio \
  --disk path=/shared/files/virtio-win-1.1.16.vfd,device=floppy \
  --os-type windows \
  --os-variant win2k8 \
  --network bridge=vbr2 \
  --vnc
</source>
 
Let's look at the main differences;
 
* <span class="code">--cdrom /shared/files/Windows_Server_2008_R2_64Bit_SP1.iso</span>
Here we've swapped out the <span class="code">--location</span> and <span class="code">--extra-args</span> arguments for the <span class="code">--cdrom</span> switch. This will create an emulated DVD-ROM drive and boot from it. The path points to an [[ISO]] image of the installation media we want to use.
 
* <span class="code">--disk path=/dev/an02-vg0/vm0004-1,device=disk,bus=virtio</span>
This is the same line we used before, pointing to the new [[LV]] of course, but we've added options to it. Specifically, we've told the hardware emulator, [[QEMU]], to attach the disk to the <span class="code">virtio</span> bus rather than the standard (<span class="code">ide</span> or <span class="code">scsi</span>) bus. This is a para-virtualized bus that improves storage [[I/O]] on Windows (and other) guests. Windows does not support this bus natively, which brings us to the next option.
 
* <span class="code">--disk path=/shared/files/virtio-win-1.1.16.vfd,device=floppy</span>
This mounts the emulated floppy disk with the <span class="code">virtio</span> drivers that we'll need to allow Windows to see the hard drive during the install.
 
The rest is more or less the same as before.
 
==== Initializing vm0004-ms's Install ====
 
As before, we'll run the script with the <span class="code">virt-install</span> command in it.
 
On <span class="code">an-node02</span>, run;
 
<source lang="bash">
/shared/provision/vm0004-ms.sh
</source>
<source lang="text">
Starting install...
Creating domain...                                      |    0 B    00:00   
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. Waiting for installation to complete.
</source>
 
This install isn't automated like the previous installs were, so we'll need to hand-hold the VM through the install.
 
[[Image:2n-RHEL6-KVM_vm0004_provision_01.png|thumb|700px|center|Initial provision of <span class="code">vm0004-ms</span>.]]
 
After you click to select the ''Custom (advanced)'' installation method, you will notice that the installer does not see a hard drive.
 
[[Image:2n-RHEL6-KVM_vm0004_provision_02.png|thumb|700px|center|The Windows 2008 VM <span class="code">vm0004-ms</span> doesn't see a hard drive.]]
 
Click on the ''Load Driver'' option on the bottom left. You will be presented with a window telling you your options for loading the drivers.
 
[[Image:2n-RHEL6-KVM_vm0004_provision_03.png|thumb|700px|center|The Windows 2008 VM <span class="code">vm0004-ms</span> driver prompt.]]
 
Click on the ''OK'' button and the installer will automatically find the virtual floppy disk and present you with the available drivers. Click to highlight ''Red Hat VirtIO SCSI Controller (A:\amd64\Win2008\viostor.inf)'' and click the ''Next'' button.
 
[[Image:2n-RHEL6-KVM_vm0004_provision_04.png|thumb|700px|center|Selecting the Win2008 <span class="code">virtio</span> driver.]]
 
At this point, the windows installer will see the virtual hard drive and you can proceed with the install as you would normally install Windows 2008 R2 server.
 
[[Image:2n-RHEL6-KVM_vm0004_provision_05.png|thumb|700px|center|The Win2008 installer now is about to use the <span class="code">virtio</span>-backed storage.]]
 
Once the install is complete, reboot.
 
[[Image:2n-RHEL6-KVM_vm0004_provision_06.png|thumb|700px|center|Installation of <span class="code">vm0004-ms</span> complete.]]
 
==== Post-Install Housekeeping ====
 
We have to be careful to "eject" the virtual floppy and DVD disks from the VM. If you neglect to do so and then later delete the files, <span class="code">virsh</span> will fail to boot the VMs and will '''undefine them entirely'''. (Yes, that is dumb, in this author's opinion). [[#My VM Just Vanished!|How to recover]] from this issue can be found below.
 
{{note|1=At the time of writing this, the author could not find any manner to eject media from the command line, shy of modifying the raw [[XML]] definition file and then redefining the VM and rebooting the guest. This is part of a known bug found in <span class="code">[[libvirt]]</span> prior to version 0.9.7 and [[EL6]] ships with version 0.8.7. For this reason, we will use <span class="code">virt-manager</span> here.}}
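
For reference, on hosts running <span class="code">libvirt</span> 0.9.7 or newer, something like the following should work from the command line. This is shown for illustration only; it is not an option on the [[EL6]] version used in this tutorial, which is why we use <span class="code">virt-manager</span> below;

<source lang="bash">
# Eject the emulated DVD-ROM (target device 'hdc') and the virtual floppy ('fda').
virsh change-media vm0004-ms hdc --eject
virsh change-media vm0004-ms fda --eject
</source>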
 
To "eject" the DVD-ROM and floppy drive, we will use the <span class="code">virt-manager</span> graphical program. You will need to either run <span class="code">virt-manager</span> on one of the nodes, or use a copy of it on your workstation by connecting to the host node over [[SSH]]. This latter method is what I like to do.
 
Using <span class="code">virt-manager</span>, connect to the <span class="code">vm0004-ms</span> VM.
 
[[Image:2n-RHEL6-KVM_vm0004_eject-media_01.png|thumb|700px|center|Connecting to <span class="code">vm0004-ms</span> using <span class="code">virt-manager</span> from a remote workstation.]]
 
Click on ''View'' then ''Details'' and you will see the virtual machine's emulated hardware.
 
[[Image:2n-RHEL6-KVM_vm0004_eject-media_02.png|thumb|700px|center|Looking at <span class="code">vm0004-ms</span>'s emulated hardware configuration.]]
 
First, let's eject the virtual floppy disk. In the left panel, click to select the ''Floppy 1'' device.
 
[[Image:2n-RHEL6-KVM_vm0004_eject-media_03.png|thumb|700px|center|Viewing the ''Floppy 1'' device on <span class="code">vm0004-ms</span>.]]
 
Click on the ''Disconnect'' button and the disk will be unmounted.
 
[[Image:2n-RHEL6-KVM_vm0004_eject-media_04.png|thumb|700px|center|Viewing the ''Floppy 1'' device after ejecting the virtual floppy disk on <span class="code">vm0004-ms</span>.]]
 
Now to eject the emulated DVD-ROM, again on the left panel, click to select the ''IDE CDROM 1'' device.
 
[[Image:2n-RHEL6-KVM_vm0004_eject-media_05.png|thumb|700px|center|Viewing the ''IDE CDROM 1'' device on <span class="code">vm0004-ms</span>.]]
 
Click on ''Disconnect'' again to unmount the ISO image.
 
[[Image:2n-RHEL6-KVM_vm0004_eject-media_06.png|thumb|700px|center|Viewing the ''IDE CDROM 1'' device after ejecting the virtual floppy disk on <span class="code">vm0004-ms</span>.]]
 
Now both the floppy disk and DVD image have been unmounted from the VM. We can return to the console view (''View'' -> ''Console'') and we will see that both the floppy disk and DVD drive no longer show any media as mounted within them.
 
[[Image:2n-RHEL6-KVM_vm0004_eject-media_07.png|thumb|700px|center|Viewing ''File Manager'' on <span class="code">vm0004-ms</span> with the virtual floppy disk and DVD ISO image now unmounted.]]
 
Done!
 
==== Defining vm0004-ms On an-node01 ====
 
Now with the installation media unmounted, and as we did before, we will use <span class="code">virsh dumpxml</span> to write out the [[XML]] definition file for the new VM and then <span class="code">virsh define</span> it on <span class="code">an-node01</span>.
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0003-db            running
  4 vm0004-ms            running
  - vm0001-dev          shut off
  - vm0002-web          shut off
</source>
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0001-dev          running
  4 vm0002-web          running
  - vm0003-db            shut off
</source>
 
As before, our new VM is only defined on the node we installed it on. We'll fix this now.
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
virsh dumpxml vm0004-ms > /shared/definitions/vm0004-ms.xml
cat /shared/definitions/vm0004-ms.xml
</source>
<source lang="xml">
<domain type='kvm' id='4'>
  <name>vm0004-ms</name>
  <uuid>4c537551-96f4-3b5e-209a-0e41cab41d44</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an02-vg0/vm0004-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='floppy'>
      <driver name='qemu' type='raw' cache='none'/>
      <target dev='fda' bus='fdc'/>
      <alias name='fdc0-0-0'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' unit='0'/>
    </disk>
    <controller type='fdc' index='0'>
      <alias name='fdc0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:5e:b1:47'/>
      <source bridge='vbr2'/>
      <target dev='vnet1'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes'/>
    <video>
      <model type='vga' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
</source>
 
As before, defining the VM on both nodes is optional, but it is a habit I like to keep.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
virsh define /shared/definitions/vm0004-ms.xml
</source>
<source lang="text">
Domain vm0004-ms defined from /shared/definitions/vm0004-ms.xml
</source>
 
We can confirm that it now exists by re-running <span class="code">virsh list --all</span>.
 
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0001-dev          running
  4 vm0002-web          running
  - vm0003-db            shut off
  - vm0004-ms            shut off
</source>
 
With that, all our VMs exist and we're ready to make them highly available!
 
= Making Our VMs Highly Available Cluster Services =
 
We're ready to start the final step; Making our VMs highly available cluster services! This involves two main steps:
* Creating two new, ordered fail-over domains; one with each node as the highest priority.
* Adding our VMs as services, one in each new fail-over domain.
 
== Creating the Ordered Fail-Over Domains ==
 
We have planned for two VMs, <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span>, to normally run on <span class="code">an-node01</span>, while <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> will normally run on <span class="code">an-node02</span>. Of course, should one of the nodes fail, the lost VMs will be restarted on the surviving node. For this, we will use an ordered fail-over domain.
 
The idea here is that each new fail-over domain will have one node with a higher priority than the other. That is, one will have <span class="code">an-node01</span> with the highest priority and the other will have <span class="code">an-node02</span> as the highest. This way, VMs that we want to normally run on a given node will be added to the matching fail-over domain.
 
{{note|1=With 2-node clusters like ours, ordering is arguably useless. It's used here more to introduce the concepts rather than providing any real benefit. If you want to make production clusters unordered, you can. Just remember to run the VMs on the appropriate nodes when both are on-line.}}
 
Here are the two new domains we will create in <span class="code">/etc/cluster/cluster.conf</span>;
 
<source lang="xml">
                <failoverdomains>
                        ...
                        <failoverdomain name="primary_an01" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="1"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="primary_an02" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="2"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
</source>
 
The two major pieces of the puzzle here are the <span class="code"><failoverdomain ...></span>'s <span class="code">ordered="1"</span> attribute and the <span class="code"><failoverdomainnode ...></span>'s <span class="code">priority="x"</span> attributes. The former tells the cluster that there is a preference for which node should be used when both are available. The latter, which is the difference between the two new domains, tells the cluster which specific node is preferred.
 
The first of the new fail-over domains is <span class="code">primary_an01</span>. Any service placed in this domain will prefer to run on <span class="code">an-node01</span>, as its priority of <span class="code">1</span> is higher than <span class="code">an-node02</span>'s priority of <span class="code">2</span>. The second of the new domains is <span class="code">primary_an02</span> which reverses the preference, making <span class="code">an-node02</span> preferred over <span class="code">an-node01</span>.
 
Let's look at the complete <span class="code">cluster.conf</span> with the new domains, and the version updated to <span class="code">11</span>, of course.
 
<source lang="xml">
<?xml version="1.0"?>
<cluster config_version="11" name="an-cluster-A">
        <cman expected_votes="1" two_node="1"/>
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device action="reboot" name="ipmi_an01"/>
                                </method>
                                <method name="pdu">
                                        <device action="reboot" name="pdu2" port="1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device action="reboot" name="ipmi_an02"/>
                                </method>
                                <method name="pdu">
                                        <device action="reboot" name="pdu2" port="2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" name="ipmi_an01" passwd="secret"/>
                <fencedevice agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" name="ipmi_an02" passwd="secret"/>
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2"/>
        </fencedevices>
        <fence_daemon post_join_delay="30"/>
        <totem rrp_mode="none" secauth="off"/>
        <rm>
                <resources>
                        <script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
                </resources>
                <failoverdomains>
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="primary_an01" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="1"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="primary_an02" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="2"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <service autostart="1" domain="only_an01" exclusive="0" name="storage_an01" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
                <service autostart="1" domain="only_an02" exclusive="0" name="storage_an02" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
        </rm>
</cluster>
</source>
 
Let's validate it now, but we won't bother to push it out just yet.
 
<source lang="bash">
ccs_config_validate
</source>
<source lang="text">
Configuration validates
</source>
 
Good, now to create the new VM services!
 
== Making Our VMs Clustered Services ==
 
The final piece of the puzzle, and the whole purpose of this exercise, is in sight!
 
There is a special service in <span class="code">rgmanager</span> for virtual machines which uses the <span class="code">vm:</span> prefix. We will need to create four of these services; one for each of the virtual machines.
 
{{note|1=There is one main drawback to using <span class="code">rgmanager</span> to manage virtual machines in our cluster. Ideally, we'd like to have the <span class="code">vm:</span> services start after the <span class="code">storage_X</span> services are up, with a bit of logic to say that all VMs can start on one node should the other's storage service fail. This isn't possible though, so we will need to manually start the VMs after a cold-start of the cluster.}}
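
To make that concrete, here is a minimal sketch of what the manual start looks like after a cold-start of the cluster, once the <span class="code">vm:</span> services below are in place. Wait until <span class="code">clustat</span> shows both <span class="code">storage_X</span> services as <span class="code">started</span>, then enable each VM on its preferred node (these are the same commands we will use below):

<source lang="bash">
# Run from either node; the 'vm:' prefix is required by clusvcadm.
clusvcadm -e vm:vm0001-dev -m an-node01.alteeve.ca
clusvcadm -e vm:vm0002-web -m an-node01.alteeve.ca
clusvcadm -e vm:vm0003-db -m an-node02.alteeve.ca
clusvcadm -e vm:vm0004-ms -m an-node02.alteeve.ca
</source>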
 
=== Creating The vm: Services ===
 
We'll create four new services, one for each VM. These are simple single-element entries. Let's increment the version to <span class="code">12</span> and take a look at the new entries.
 
<source lang="xml">
        <rm>
                ...
                <vm name="vm0001-dev" domain="primary_an01" path="/shared/definitions/" autostart="0"
                exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
                <vm name="vm0002-web" domain="primary_an01" path="/shared/definitions/" autostart="0"
                exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
                <vm name="vm0003-db" domain="primary_an02" path="/shared/definitions/" autostart="0"
                exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
                <vm name="vm0004-ms" domain="primary_an02" path="/shared/definitions/" autostart="0"
                exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
        </rm>
</source>
 
Let's look at each of the attributes now;
* <span class="code">name</span>; This must match the name we created the VM with (the <span class="code">--name ...</span> value when we provisioned the VMs). This is the name that will be passed to the <span class="code">vm.sh</span> resource agent when managing this service, and it will be the <span class="code"><name>.xml</span> used when looking under <span class="code">path=...</span> for the VM's definition file.
 
* <span class="code">domain</span>; This tells the cluster to manage the VM using the given fail-over domain.
 
* <span class="code">path</span>; This tells the cluster where to look for the VM's definition file. '''Do not''' include the actual file name, just the path. This is partly why we wrote out each VM's definition to the shared directory.
 
* <span class="code">autostart</span>; As mentioned above, we can't have the VMs start with the cluster, because the underlying storage takes too long to come on-line. Setting this to <span class="code">0</span> disables the auto-start behaviour.
 
* <span class="code">exclusive</span>; As we saw with the storage services, we want to ensure that this service '''is not''' exclusive. If it were, starting the VM would stop the storage and prevent other VMs from running on the node. This would be a bad thing™.
 
* <span class="code">recovery</span>; This tells the cluster what to do when the service fails. We are setting this to <span class="code">restart</span>, so the cluster will try to restart the VM on the same node it was on when it failed. The alternative is <span class="code">relocate</span>, which would instead start the VM on another node. More about this next.
 
* <span class="code">max_restarts</span>; When a VM fails, it is possible that it is because there is a subtle problem on the host node itself. So this attribute allows up to set a limit on how many times a VM will be allowed to <span class="code">restart</span> before giving up and switching to a <span class="code">relocate</span> police. We're setting this to <span class="code">2</span>, which means that if a VM is restarted twice, the third failure will trigger a <span class="code">relocate</span>.
 
* <span class="code">restart_expire_time</span>; If we let the failure count increment indefinitely, than a <span class="code">relocate</span> policy becomes inevitable, when there is no reason to believe that an issue with the host node exists. To account for this, we use this attribute to tell the cluster to "forget" a restart after the defined number of seconds. We're using <span class="code">600</span> seconds (ten minutes). So if a VM fails, the failure count increments from <span class="code">0</span> to <span class="code">1</span>. After <span class="code">600</span> seconds though, the restart is "forgotten" and the failure count returns to <span class="code">0</span>. Said another way, a VM will have to fail three times in ten minutes to trigger the <span class="code">relocate</span> recovery policy.
 
So let's take a look at the final, complete <span class="code">cluster.conf</span>;
 
<source lang="xml">
<?xml version="1.0"?>
<cluster config_version="12" name="an-cluster-A">
<cman expected_votes="1" two_node="1"/>
<clusternodes>
<clusternode name="an-node01.alteeve.ca" nodeid="1">
<fence>
<method name="ipmi">
<device action="reboot" name="ipmi_an01"/>
</method>
<method name="pdu">
<device action="reboot" name="pdu2" port="1"/>
</method>
</fence>
</clusternode>
<clusternode name="an-node02.alteeve.ca" nodeid="2">
<fence>
<method name="ipmi">
<device action="reboot" name="ipmi_an02"/>
</method>
<method name="pdu">
<device action="reboot" name="pdu2" port="2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<fencedevices>
<fencedevice agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" name="ipmi_an01" passwd="secret"/>
<fencedevice agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" name="ipmi_an02" passwd="secret"/>
<fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2"/>
</fencedevices>
<fence_daemon post_join_delay="30"/>
<totem rrp_mode="none" secauth="off"/>
<rm>
<resources>
<script file="/etc/init.d/drbd" name="drbd"/>
<script file="/etc/init.d/clvmd" name="clvmd"/>
<script file="/etc/init.d/gfs2" name="gfs2"/>
<script file="/etc/init.d/libvirtd" name="libvirtd"/>
</resources>
<failoverdomains>
<failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
<failoverdomainnode name="an-node01.alteeve.ca"/>
</failoverdomain>
<failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
<failoverdomainnode name="an-node02.alteeve.ca"/>
</failoverdomain>
<failoverdomain name="primary_an01" nofailback="1" ordered="1" restricted="1">
<failoverdomainnode name="an-node01.alteeve.ca" priority="1"/>
<failoverdomainnode name="an-node02.alteeve.ca" priority="2"/>
</failoverdomain>
<failoverdomain name="primary_an02" nofailback="1" ordered="1" restricted="1">
<failoverdomainnode name="an-node01.alteeve.ca" priority="2"/>
<failoverdomainnode name="an-node02.alteeve.ca" priority="1"/>
</failoverdomain>
</failoverdomains>
<service autostart="1" domain="only_an01" exclusive="0" name="storage_an01" recovery="restart">
<script ref="drbd">
<script ref="clvmd">
<script ref="gfs2">
<script ref="libvirtd"/>
</script>
</script>
</script>
</service>
<service autostart="1" domain="only_an02" exclusive="0" name="storage_an02" recovery="restart">
<script ref="drbd">
<script ref="clvmd">
<script ref="gfs2">
<script ref="libvirtd"/>
</script>
</script>
</script>
</service>
<vm name="vm0001-dev" domain="primary_an01" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
<vm name="vm0002-web" domain="primary_an01" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
<vm name="vm0003-db" domain="primary_an02" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
<vm name="vm0004-ms" domain="primary_an02" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
</rm>
</cluster>
</source>
 
Let's validate one more time.
 
<source lang="bash">
ccs_config_validate
</source>
<source lang="text">
Configuration validates
</source>
 
She's a beaut', eh?
 
=== Making The VM Services Active ===
 
Before we push the last <span class="code">cluster.conf</span> out, let's take a look at the current state of affairs.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 27 14:06:38 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
</source>
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0001-dev          running
  4 vm0002-web          running
  - vm0003-db            shut off
  - vm0004-ms            shut off
</source>
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 27 14:07:32 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
</source>
<source lang="bash">
virsh list --all
</source>
<source lang="text">
Id Name                State
----------------------------------
  2 vm0003-db            running
  4 vm0004-ms            running
  - vm0001-dev          shut off
  - vm0002-web          shut off
</source>
 
So we can see that the cluster doesn't know about the VMs yet, as we've not yet pushed out the changes. We can also see that <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span> are currently running on <span class="code">an-node01</span> and <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> are running on <span class="code">an-node02</span>.
 
So let's push out the new configuration and see what happens!
 
<source lang="bash">
cman_tool version -r
cman_tool version
</source>
<source lang="text">
6.2.0 config 12
</source>
 
Let's take a look at what showed up in syslog;
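
One easy way to watch for these messages yourself is to tail the system log on each node (on [[EL6]], syslog writes to <span class="code">/var/log/messages</span> by default):

<source lang="bash">
# Leave this running in a spare terminal while pushing the configuration update.
tail -f /var/log/messages
</source>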
 
<source lang="text">
Dec 27 14:18:20 an-node01 modcluster: Updating cluster.conf
Dec 27 14:18:20 an-node01 corosync[2362]:  [QUORUM] Members[2]: 1 2
Dec 27 14:18:20 an-node01 rgmanager[2579]: Reconfiguring
Dec 27 14:18:22 an-node01 rgmanager[2579]: Initializing vm:vm0001-dev
Dec 27 14:18:22 an-node01 rgmanager[2579]: vm:vm0001-dev was added to the config, but I am not initializing it.
Dec 27 14:18:22 an-node01 rgmanager[2579]: Initializing vm:vm0002-web
Dec 27 14:18:22 an-node01 rgmanager[2579]: vm:vm0002-web was added to the config, but I am not initializing it.
Dec 27 14:18:22 an-node01 rgmanager[2579]: Initializing vm:vm0003-db
Dec 27 14:18:22 an-node01 rgmanager[2579]: vm:vm0003-db was added to the config, but I am not initializing it.
Dec 27 14:18:23 an-node01 rgmanager[2579]: Initializing vm:vm0004-ms
Dec 27 14:18:23 an-node01 rgmanager[2579]: vm:vm0004-ms was added to the config, but I am not initializing it.
</source>
 
Indeed, if we check again with <span class="code">clustat</span>, we'll see the new VM services, but all four will show as <span class="code">disabled</span>, despite the VMs themselves being up and running.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 27 14:20:10 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  (none)                        disabled     
vm:vm0002-web                  (none)                        disabled     
vm:vm0003-db                  (none)                        disabled     
vm:vm0004-ms                  (none)                        disabled     
</source>
 
This highlights how the state of the VMs is not intrinsically tied to the cluster's status. The VMs were started outside of the cluster, so the cluster thinks they are off-line. We know they're running though, so we can tell the cluster to enable them now. Note that the VMs will '''not''' be rebooted or in any way affected, provided you tell the cluster to enable the VM on the node it's currently running on.
 
Let's start by enabling <span class="code">vm0001-dev</span>, which we know is running on <span class="code">an-node01</span>. Be aware that the <span class="code">vm:</span> prefix is required when using <span class="code">clusvcadm</span>!
 
<source lang="bash">
clusvcadm -e vm:vm0001-dev -m an-node01.alteeve.ca
</source>
<source lang="text">
vm:vm0001-dev is now running on an-node01.alteeve.ca
</source>
 
Now we can see that the VM is under the cluster's control!
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 27 14:25:08 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  (none)                        disabled     
vm:vm0003-db                  (none)                        disabled     
vm:vm0004-ms                  (none)                        disabled     
</source>
 
Perfect! Now to add the other three VMs. Note that all of these commands can be run from whichever node you wish, because we're specifying the target node by using the "member" switch.
 
<source lang="bash">
clusvcadm -e vm:vm0002-web -m an-node01.alteeve.ca
</source>
<source lang="text">
vm:vm0002-web is now running on an-node01.alteeve.ca
</source>
<source lang="bash">
clusvcadm -e vm:vm0003-db -m an-node02.alteeve.ca
</source>
<source lang="text">
vm:vm0003-db is now running on an-node02.alteeve.ca
</source>
<source lang="bash">
clusvcadm -e vm:vm0004-ms -m an-node02.alteeve.ca
</source>
<source lang="text">
vm:vm0004-ms is now running on an-node02.alteeve.ca
</source>
 
Let's do a final check of the cluster's status;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Tue Dec 27 14:28:19 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
== The Last Step - Automatic Cluster Start ==
 
The last step is to enable automatic starting of the <span class="code">cman</span> and <span class="code">rgmanager</span> services when the host node boots. This is quite simple;
 
On both nodes, run;
 
<source lang="bash">
chkconfig cman on && chkconfig rgmanager on
chkconfig --list | grep -e cman -e rgmanager
</source>
<source lang="bash">
cman          0:off 1:off 2:on 3:on 4:on 5:on 6:off
rgmanager      0:off 1:off 2:on 3:on 4:on 5:on 6:off
</source>
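
While you are here, it is also worth double-checking that the services managed by the <span class="code">storage_X</span> services are '''not''' set to start at boot on their own, as <span class="code">rgmanager</span> is responsible for starting them. If you have been following along, they should all show as <span class="code">off</span> in every runlevel:

<source lang="bash">
chkconfig --list | grep -e drbd -e clvmd -e gfs2 -e libvirtd
</source>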
 
The next time you restart the nodes, you will be able to run <span class="code">clustat</span> and you should find your cluster up and running!
 
== We're Done! Or, Are We? ==
 
That's it, ladies and gentlemen. Our cluster is completed! In theory now, any failure in the cluster will result in no lost data and, at worst, no more than a minute or two of downtime.
 
"In theory" just isn't good enough in clustering though. Time to take "theory" and make it a tested, known fact.
 
= Testing; Taking Theory And Putting It Into Practice =
 
You may have thought that we were done. Indeed, the cluster has been built, but we don't know if things actually work.
 
Enter testing.
 
In practice, when preparing production clusters for deployment, you should plan to spend '''at least''' twice as long in testing as you did in building the cluster. You need to imagine all failure scenarios, trigger those failures and see what happens.
 
== A Note On The Importance Of Fencing ==
 
It may be tempting to think that you were careful and don't really need to test your cluster thoroughly.
 
'''''You are wrong'''''
 
Barring you being absolutely obsessive with testing every step of the way, you will almost certainly make mistakes. Now I make no claims to genius, but I do like to think I am pretty comfortable building 2-node clusters. Despite that, while writing this testing portion of the tutorial, I found the following problems with my cluster;
 
* RGManager's <span class="code">autostart="1"</span> is not evaluated when a node starts, only when quorum is gained. My mistake was assuming that the storage service would start again when the node rejoined, after I had manually disabled it prior to withdrawing the node.
* The behaviour of <span class="code">echo c > /proc/sysrq-trigger</span> changed between [[EL5]] and [[EL6]]; in EL6 KVM guests it now triggers a core dump with 100% CPU load. This means my previous expectation that the cluster would recover from these crashes was wrong.
* I forgot to install the <span class="code">obliterate-peer.sh</span> script for DRBD, which I didn't catch until I tried to fail a node.
 
You simply can't make assumptions. Test your cluster in every failure mode you can imagine. Until you do, you won't know what you might have missed!
 
== Controlled VM Migration And Node Withdrawal ==
 
This testing will ensure that live migration works in both directions, and that each node can be cleanly withdrawn from, and then rejoined to, the cluster.
 
The test will consist of the following steps;
 
# Live migrate <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span> from <span class="code">an-node01</span> to <span class="code">an-node02</span>. This will ensure live migration works and that all VMs will run on a single node.
# Withdraw <span class="code">an-node01</span> from the cluster entirely and reboot it. This will ensure that cold shut-down of the node is successful.
# Once <span class="code">an-node01</span> has rebooted, rejoin it to the cluster. This will ensure that rejoining the cluster works.
# Once <span class="code">an-node01</span> is a member of the cluster, we will wait a few minutes and ensure that <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span> automatically live migrate back to <span class="code">an-node01</span>. This will ensure that priority is working.
# We will live migrate <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> from <span class="code">an-node02</span> to <span class="code">an-node01</span> to ensure that migration works in the other direction.
# With the VMs all running on <span class="code">an-node01</span>, we will withdraw <span class="code">an-node02</span> from the cluster, reboot it, rejoin it to the cluster and then live migrate <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> back to <span class="code">an-node02</span>.
 
With all of these tests completed, we will know that controlled migration of the VM services, and clean withdrawal and rejoining of each node, work as expected.
 
=== Live Migration - vm0001-dev And vm0002-web To an-node02 ===
 
First up, we will use the special <span class="code">clusvcadm</span> switch <span class="code">-M</span>, which tells the cluster to use "live migration". That is, the VM will move to the target member without shutting down. Users of the VM should notice, at worst, a brief network interruption when the cut-over occurs, without any adverse effect on their services or dropped connections.
 
Let's take a quick look at the state of affairs;
 
On <span class="code">an-node02</span>, run;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sat Dec 31 13:49:41 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
Let's start by live migrating <span class="code">vm0001-dev</span>. Before we do though, let's <span class="code">[[ssh]]</span> into it and start a ping against a target on the internet. We'll leave this running throughout the live migration.
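
The exact target doesn't matter; <span class="code">alteeve.ca</span> is simply what is used in the screenshots below. Something like this, left running in a terminal on the VM, is all we need:

<source lang="bash">
# Run inside vm0001-dev and leave it running during the migration.
ping alteeve.ca
</source>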
 
On <span class="code">vm0001-dev</span>;
 
[[Image:vm0001-dev_ping_live-migration-test_01.png|thumb|700px|center|Running <span class="code">ping alteeve.ca</span> on <span class="code">vm0001-dev</span> prior to live migration.]]
 
Now back on <span class="code">an-node01</span>, let's migrate <span class="code">vm0001-dev</span> over to <span class="code">an-node02</span>. This will take a little while as the VM's [[RAM]] gets copied across the [[BCN]].
 
<source lang="bash">
clusvcadm -M vm:vm0001-dev -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0001-dev to an-node02.alteeve.ca...Success
</source>
 
[[Image:vm0001-dev_ping_live-migration-test_02.png|thumb|700px|center|Mid-migration of <span class="code">vm0001-dev</span>.]]
 
Once complete, check the new status of <span class="code">clustat</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sat Dec 31 14:11:43 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node02.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
If we look again at <span class="code">vm0001-dev</span>'s ping, we'll see that a few packets were dropped but our ssh session remained intact. Any other active [[TCP]] session should have survived this just fine as well.
 
[[Image:vm0001-dev_ping_live-migration-test_03.png|thumb|700px|center|Results of the ping on <span class="code">vm0001-dev</span> post live migration.]]
 
Wonderful! Now let's live migrate <span class="code">vm0002-web</span> to <span class="code">an-node02</span>.
 
<source lang="bash">
clusvcadm -M vm:vm0002-web -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0002-web to an-node02.alteeve.ca...Success
</source>
 
Again, check the new status of <span class="code">clustat</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sat Dec 31 14:17:35 2011
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node02.alteeve.ca          started     
vm:vm0002-web                  an-node02.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
We can see now that all four VMs are running on <span class="code">an-node02</span>! This is possible because of our careful planning of the VM resources earlier. This will mean more load on the host node's CPU, so things might not be as fast as we would like, but all services are on-line!
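
If you would like to confirm this directly with <span class="code">libvirt</span> as well, a quick check on <span class="code">an-node02</span> should show all four VMs as <span class="code">running</span>:

<source lang="bash">
virsh list --all
</source>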
 
=== Withdraw an-node01 From The Cluster ===
 
So imagine now that we need to do some work on <span class="code">an-node01</span>, like replacing a bad network card or adding some RAM. We've moved the VMs off, so now the only remaining service is <span class="code">service:storage_an01</span>. We don't want to manually disable this service, because if we did, the service would not automatically start when the node rejoined the cluster. So we're going to simply stop <span class="code">rgmanager</span> and let it stop the <span class="code">storage_an01</span> service on the way down.
 
Check the state of the cluster;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:11:56 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node02.alteeve.ca          started     
vm:vm0002-web                  an-node02.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
Just as we expected. Now we will stop <span class="code">rgmanager</span>, then stop <span class="code">cman</span>.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
/etc/init.d/rgmanager stop
</source>
<source lang="text">
Stopping Cluster Service Manager:                          [  OK  ]
</source>
 
<source lang="bash">
/etc/init.d/cman stop
</source>
<source lang="text">
Stopping cluster:
  Leaving fence domain...                                [  OK  ]
  Stopping gfs_controld...                                [  OK  ]
  Stopping dlm_controld...                                [  OK  ]
  Stopping fenced...                                      [  OK  ]
  Stopping cman...                                        [  OK  ]
  Waiting for corosync to shutdown:                      [  OK  ]
  Unloading kernel modules...                            [  OK  ]
  Unmounting configfs...                                  [  OK  ]
</source>
 
Checking on <span class="code">an-node02</span>, we can see that all four VMs are running fine and that <span class="code">an-node01</span> is gone.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:13:23 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Offline
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          (an-node01.alteeve.ca)        stopped     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node02.alteeve.ca          started     
vm:vm0002-web                  an-node02.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
Test passed!
 
You can now power off and restart <span class="code">an-node01</span>.
 
=== Rejoining an-node01 To The Cluster ===
 
If you haven't already, reboot <span class="code">an-node01</span>. As we set earlier, <span class="code">cman</span> and <span class="code">rgmanager</span> will start automatically. The easiest thing to do for this test is to <span class="code">watch clustat</span> on <span class="code">an-node02</span>. If all goes well, you should see <span class="code">an-node01</span> rejoin the cluster automatically.
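
For example, on <span class="code">an-node02</span>;

<source lang="bash">
watch clustat
</source>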
 
Connected to cluster;
 
[[Image:2nrhkct_automatic-reconnect-an-node01_01.png|thumb|700px|center|Rebooting <span class="code">an-node01</span>, while <span class="code">an-node02</span> hosts all four VMs.]]
 
Storage coming on-line;
 
[[Image:2nrhkct_automatic-reconnect-an-node01_02.png|thumb|700px|center|Storage coming up on <span class="code">an-node01</span>.]]
 
Back in business!
 
[[Image:2nrhkct_automatic-reconnect-an-node01_03.png|thumb|700px|center|Back in business!]]
 
You should be able to log back into <span class="code">an-node01</span> and see that everything is back on-line. DRBD should be <span class="code">UpToDate</span>, or be in the process of synchronizing.
 
{{warning|1=Never migrate a VM to a node until its underlying DRBD resource is <span class="code">UpToDate</span>! If the sync source node (the one that is <span class="code">UpToDate</span>) goes down, DRBD will drop the resource to <span class="code">Secondary</span>, making it inaccessible to the node and crashing the VM.}}
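
If a resynchronization is still under way, a simple way to keep an eye on it until every resource reports <span class="code">UpToDate</span> is:

<source lang="bash">
watch cat /proc/drbd
</source>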
 
=== Migrating vm0001-dev And vm0002-web Back To an-node01 ===
 
If we were putting the cluster back into its normal state, all that would be left to do is to migrate <span class="code">an-node01</span>'s VMs back. So let's do that.
 
As always, start with a check of the current cluster status.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:31:06 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node02.alteeve.ca          started     
vm:vm0002-web                  an-node02.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
Now confirm that the underlying storage is ready. Remember that DRBD resource <span class="code">r1</span> backs the <span class="code">an01-vg0</span> volume group which hosts these VMs.
 
<source lang="bash">
cat /proc/drbd
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:12552 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:2428 dw:2428 dr:9776 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:510 dw:510 dr:9744 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
</source>
 
All systems ready; Let's migrate <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span> now.
 
<source lang="bash">
clusvcadm -M vm:vm0001-dev -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0001-dev to an-node01.alteeve.ca...Success
</source>
<source lang="bash">
clusvcadm -M vm:vm0002-web -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0002-web to an-node01.alteeve.ca...Success
</source>
 
Check the new status;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:32:11 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
With that, the cluster is back in business!
 
=== Live Migration - vm0003-db And vm0004-ms To an-node01 ===
 
Let's start the process of taking <span class="code">an-node02</span> out of the cluster. The first step is to move <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> over to <span class="code">an-node01</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:42:10 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
Ready to migrate.
 
<source lang="bash">
clusvcadm -M vm:vm0003-db -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0003-db to an-node01.alteeve.ca...Success
</source>
<source lang="bash">
clusvcadm -M vm:vm0004-ms -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0004-ms to an-node01.alteeve.ca...Success
</source>
 
Confirm;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:42:42 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node01.alteeve.ca          started     
vm:vm0004-ms                  an-node01.alteeve.ca          started     
</source>
 
Done!
 
=== Withdraw an-node02 From The Cluster ===
 
Double-check that all the VMs are off of <span class="code">an-node02</span> prior to withdrawal.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:45:30 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node01.alteeve.ca          started     
vm:vm0004-ms                  an-node01.alteeve.ca          started     
</source>
 
As before, we '''will not''' disable the <span class="code">storage_an02</span> service. If we did, the service would not automatically restart when the node rejoined the cluster.
 
Now that <span class="code">an-node01</span> is hosting all of the VMs and running independently, we can stop <span class="code">rgmanager</span> and <span class="code">cman</span>.
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
/etc/init.d/rgmanager stop
</source>
<source lang="text">
Stopping Cluster Service Manager:                          [  OK  ]
</source>
 
<source lang="bash">
/etc/init.d/cman stop
</source>
<source lang="text">
Stopping cluster:
  Leaving fence domain...                                [  OK  ]
  Stopping gfs_controld...                                [  OK  ]
  Stopping dlm_controld...                                [  OK  ]
  Stopping fenced...                                      [  OK  ]
  Stopping cman...                                        [  OK  ]
  Waiting for corosync to shutdown:                      [  OK  ]
  Unloading kernel modules...                            [  OK  ]
  Unmounting configfs...                                  [  OK  ]
</source>
 
Confirm;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:49:14 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Offline
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          (an-node02.alteeve.ca)        stopped
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node01.alteeve.ca          started
vm:vm0004-ms                  an-node01.alteeve.ca          started
</source>
 
Done! We can now shut down and reboot <span class="code">an-node02</span> entirely.
 
=== Rejoining an-node02 To The Cluster ===
 
Exactly as we did with <span class="code">an-node01</span>, we will reboot <span class="code">an-node02</span>. The <span class="code">cman</span> and <span class="code">rgmanager</span> services should start automatically, so once again, we will just <span class="code">watch clustat</span> on <span class="code">an-node01</span>. If all goes well, you should see <span class="code">an-node02</span> rejoin the cluster automatically.
 
Connected to cluster;
 
[[Image:2nrhkct_automatic-reconnect-an-node02_01.png|thumb|700px|center|Rebooting <span class="code">an-node02</span>, while <span class="code">an-node01</span> hosts all four VMs.]]
 
Storage coming on-line;
 
[[Image:2nrhkct_automatic-reconnect-an-node02_02.png|thumb|700px|center|Storage coming up on <span class="code">an-node02</span>.]]
 
Back in business!
 
[[Image:2nrhkct_automatic-reconnect-an-node02_03.png|thumb|700px|center|Back in business!]]
 
You should be able to log back into <span class="code">an-node02</span> and see that everything is back on-line. DRBD should be <span class="code">UpToDate</span>, or be in the process of synchronizing.
 
{{warning|1=Again; Never migrate a VM to a node until its underlying DRBD resource is <span class="code">UpToDate</span>! If the sync source node (the one that is <span class="code">UpToDate</span>) goes down, DRBD will drop the resource to <span class="code">Secondary</span>, making it inaccessible to the node and crashing the VM.}}
 
=== Migrating vm0003-db And vm0004-ms Back To an-node02 ===
 
The last step to restore the cluster to its ideal state is to migrate <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> back to <span class="code">an-node02</span>.
 
As always, start with a check of the current cluster status.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:57:19 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node01.alteeve.ca          started     
vm:vm0004-ms                  an-node01.alteeve.ca          started     
</source>
 
Now confirm that the underlying storage is ready. Remember that DRBD resource <span class="code">r2</span> backs the <span class="code">an02-vg0</span> volume group which hosts these VMs.
 
<source lang="bash">
cat /proc/drbd
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:8788 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:376 dw:376 dr:5876 al:0 bm:7 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:671 dw:671 dr:5844 al:0 bm:16 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
</source>
 
All systems ready; Let's migrate <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> now.
 
<source lang="bash">
clusvcadm -M vm:vm0003-db -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0003-db to an-node02.alteeve.ca...Success
</source>
<source lang="bash">
clusvcadm -M vm:vm0004-ms -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0004-ms to an-node02.alteeve.ca...Success
</source>
 
Check the new status;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 16:59:22 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node01.alteeve.ca          started     
vm:vm0002-web                  an-node01.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
All controlled migration, withdrawal and re-joining tests completed!
 
== Uncontrolled VM Migration and Node Failure ==
 
This test will be more violent than the previous tests. Here we will fail the VMs and ensure that the cluster recovers them by restarting them on their host nodes. We will fail each VM three times within ten minutes to ensure that the <span class="code">relocate</span> policy kicks in, as we expect it to.
 
Once we complete the VM failure testing, we will fail and recover both nodes, one at a time of course, and rejoin them to the cluster. This will confirm that the VMs recover on the surviving node.
 
The tests will be;
 
* Crash all four VMs three times. The failures will be triggered by using <span class="code">virsh destroy <vm></span> on the current host node.
* After each crash, we will confirm that the VM came back on-line before crashing it again.
* With all of the VMs tested to recover properly, we will live-migrate them back to their designated host nodes.
* Once the cluster is back in its ideal state, we will crash <span class="code">an-node01</span>. Within a few seconds, it should be [[fenced]] and the lost VMs should restart on <span class="code">an-node02</span>. Once it rejoins the cluster and the VMs have been migrated back to <span class="code">an-node01</span>, we will repeat the test by failing <span class="code">an-node02</span>.
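
For the node-crash tests, the kernel's "magic SysRq" trigger mentioned earlier is a convenient way to simulate a sudden, hard failure. '''Only''' run this on the node you intend to kill, and only on a test cluster:

<source lang="bash">
# This crashes the kernel immediately; the node will have to be fenced and rebooted.
echo c > /proc/sysrq-trigger
</source>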
 
=== Failure Testing vm0001-dev ===
 
Confirm that <span class="code">vm0001-dev</span> is running on <span class="code">an-node01</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 18:29:10 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
It is; perfect. Now, before I kill a VM, I like to start a ping against it. This acts both as an indication of when the VM is back up and as a crude way of timing how long the recovery took.
 
{{note|1=If your VMs are isolated, as they are in this tutorial, you may have to run the ping from another VM or from your firewall.}}
 
<source lang="bash">
ping 10.254.0.1
</source>
<source lang="text">
PING 10.254.0.1 (10.254.0.1) 56(84) bytes of data.
64 bytes from 10.254.0.1: icmp_seq=1 ttl=64 time=0.737 ms
64 bytes from 10.254.0.1: icmp_seq=2 ttl=64 time=0.530 ms
64 bytes from 10.254.0.1: icmp_seq=3 ttl=64 time=0.589 ms
</source>
 
Now, on <span class="code">an-node01</span>, forcefully shut down <span class="code">vm0001-dev</span>;
 
<source lang="bash">
virsh destroy vm0001-dev
</source>
<source lang="text">
Domain vm0001-dev destroyed
</source>
 
Within a few seconds (10, maximum), the cluster will detect that the VM has failed and will restart it.
 
[[Image:2nrhkct_failing-vm0001-dev_01.png|thumb|700px|center|Failure of <span class="code">vm0001-dev</span> detected by the cluster and restarted.]]
 
We can see in <span class="code">an-node01</span>'s syslog that the failure was detected and automatically recovered.
 
<source lang="text">
Jan  1 18:38:25 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:38:25 an-node01 kernel: device vnet0 left promiscuous mode
Jan  1 18:38:25 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:38:27 an-node01 ntpd[2190]: Deleting interface #19 vnet0, fe80::fc54:ff:fe9b:3cf7#123, interface stats: received=0, sent=0, dropped=0, active_time=3058 secs
Jan  1 18:38:35 an-node01 rgmanager[2430]: status on vm "vm0001-dev" returned 7 (unspecified)
Jan  1 18:38:35 an-node01 rgmanager[2430]: Stopping service vm:vm0001-dev
Jan  1 18:38:36 an-node01 rgmanager[2430]: Service vm:vm0001-dev is recovering
Jan  1 18:38:36 an-node01 rgmanager[2430]: Recovering failed service vm:vm0001-dev
Jan  1 18:38:37 an-node01 kernel: device vnet0 entered promiscuous mode
Jan  1 18:38:37 an-node01 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 18:38:37 an-node01 rgmanager[2430]: Service vm:vm0001-dev started
Jan  1 18:38:39 an-node01 ntpd[2190]: Listening on interface #20 vnet0, fe80::fc54:ff:fe9b:3cf7#123 Enabled
Jan  1 18:38:49 an-node01 kernel: kvm: 12390: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 18:38:52 an-node01 kernel: vbr2: port 2(vnet0) entering forwarding state
</source>
 
The first four entries are related to the VM's network being torn down after it was killed. The fifth through eighth lines show the detection and recovery of the VM!
 
Going back to the <span class="code">ping</span>, we can see that the VM was down for roughly 36 seconds (the time between network loss and recovery; add a bit more time for all services to start).
 
<source lang="text">
PING 10.254.0.1 (10.254.0.1) 56(84) bytes of data.
64 bytes from 10.254.0.1: icmp_seq=1 ttl=64 time=0.737 ms
64 bytes from 10.254.0.1: icmp_seq=2 ttl=64 time=0.530 ms
64 bytes from 10.254.0.1: icmp_seq=3 ttl=64 time=0.589 ms
64 bytes from 10.254.0.1: icmp_seq=4 ttl=64 time=0.589 ms
64 bytes from 10.254.0.1: icmp_seq=5 ttl=64 time=0.477 ms
64 bytes from 10.254.0.1: icmp_seq=6 ttl=64 time=0.482 ms
64 bytes from 10.254.0.1: icmp_seq=7 ttl=64 time=0.489 ms
64 bytes from 10.254.0.1: icmp_seq=8 ttl=64 time=0.495 ms
64 bytes from 10.254.0.1: icmp_seq=9 ttl=64 time=0.503 ms
64 bytes from 10.254.0.1: icmp_seq=10 ttl=64 time=0.513 ms
64 bytes from 10.254.0.1: icmp_seq=11 ttl=64 time=0.516 ms
64 bytes from 10.254.0.1: icmp_seq=12 ttl=64 time=0.524 ms
64 bytes from 10.254.0.1: icmp_seq=13 ttl=64 time=0.405 ms
64 bytes from 10.254.0.1: icmp_seq=14 ttl=64 time=0.536 ms
64 bytes from 10.254.0.1: icmp_seq=15 ttl=64 time=0.441 ms
64 bytes from 10.254.0.1: icmp_seq=16 ttl=64 time=0.552 ms
 
# VM died here, 35 pings lost at ~1 ping/sec.
 
64 bytes from 10.254.0.1: icmp_seq=52 ttl=64 time=0.816 ms
64 bytes from 10.254.0.1: icmp_seq=53 ttl=64 time=0.440 ms
64 bytes from 10.254.0.1: icmp_seq=54 ttl=64 time=0.354 ms
64 bytes from 10.254.0.1: icmp_seq=55 ttl=64 time=0.342 ms
64 bytes from 10.254.0.1: icmp_seq=56 ttl=64 time=0.446 ms
64 bytes from 10.254.0.1: icmp_seq=57 ttl=64 time=0.418 ms
64 bytes from 10.254.0.1: icmp_seq=58 ttl=64 time=0.441 ms
^C
--- 10.254.0.1 ping statistics ---
58 packets transmitted, 23 received, 60% packet loss, time 57949ms
rtt min/avg/max/mdev = 0.342/0.505/0.816/0.109 ms
</source>
 
Not bad at all!
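
As an aside, if you redirect the ping output to a file (the <span class="code">ping.log</span> name below is just an example), a quick <span class="code">awk</span> one-liner can count the missed sequence numbers for you;

<source lang="bash">
# Count the gap in icmp_seq numbers; each missing sequence is roughly one second of downtime.
awk -F'icmp_seq=' '/icmp_seq=/ { s = $2 + 0; if (last && s > last + 1) lost += s - last - 1; last = s } END { print lost + 0, "pings lost" }' ping.log
</source>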
 
Now let's kill it two more times and confirm that the third recovery happens on <span class="code">an-node02</span>. We'll use the <span class="code">ping</span> as an indicator of when the VM is back on-line before killing it the third time.
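
If you'd rather script the wait than stare at the ping output, a small loop like this works. It's just a sketch; the address is <span class="code">vm0001-dev</span>'s IP from the test above;

<source lang="bash">
# Block until vm0001-dev answers a ping again, then report that it is back.
until ping -c 1 -W 1 10.254.0.1 >/dev/null 2>&1; do sleep 2; done
echo "vm0001-dev is answering pings again"
</source>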
 
Second failure;
 
<source lang="bash">
virsh destroy vm0001-dev
</source>
<source lang="text">
Domain vm0001-dev destroyed
</source>
 
Checking syslog again;
 
<source lang="text">
Jan  1 18:45:07 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:45:07 an-node01 kernel: device vnet0 left promiscuous mode
Jan  1 18:45:07 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:45:09 an-node01 ntpd[2190]: Deleting interface #20 vnet0, fe80::fc54:ff:fe9b:3cf7#123, interface stats: received=0, sent=0, dropped=0, active_time=390 secs
Jan  1 18:45:46 an-node01 rgmanager[2430]: status on vm "vm0001-dev" returned 7 (unspecified)
Jan  1 18:45:46 an-node01 rgmanager[2430]: Stopping service vm:vm0001-dev
Jan  1 18:45:46 an-node01 rgmanager[2430]: Service vm:vm0001-dev is recovering
Jan  1 18:45:47 an-node01 rgmanager[2430]: Recovering failed service vm:vm0001-dev
Jan  1 18:45:47 an-node01 kernel: device vnet0 entered promiscuous mode
Jan  1 18:45:47 an-node01 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 18:45:47 an-node01 rgmanager[2430]: Service vm:vm0001-dev started
Jan  1 18:45:50 an-node01 ntpd[2190]: Listening on interface #21 vnet0, fe80::fc54:ff:fe9b:3cf7#123 Enabled
Jan  1 18:45:59 an-node01 kernel: kvm: 17874: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 18:46:02 an-node01 kernel: vbr2: port 2(vnet0) entering forwarding state
</source>
 
We can see that the <span class="code">vm0001-dev</span> VM is still on <span class="code">an-node01</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 18:47:01 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Now the third crash. This time it should come up on <span class="code">an-node02</span>.
 
<source lang="bash">
virsh destroy vm0001-dev
</source>
<source lang="text">
Domain vm0001-dev destroyed
</source>
 
Checking <span class="code">an-node01</span>'s syslog again, we'll see something different.
 
<source lang="text">
Jan  1 18:47:26 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:47:26 an-node01 kernel: device vnet0 left promiscuous mode
Jan  1 18:47:26 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:47:27 an-node01 ntpd[2190]: Deleting interface #21 vnet0, fe80::fc54:ff:fe9b:3cf7#123, interface stats: received=0, sent=0, dropped=0, active_time=97 secs
Jan  1 18:47:46 an-node01 rgmanager[2430]: status on vm "vm0001-dev" returned 7 (unspecified)
Jan  1 18:47:46 an-node01 rgmanager[2430]: Stopping service vm:vm0001-dev
Jan  1 18:47:46 an-node01 rgmanager[2430]: Service vm:vm0001-dev is recovering
Jan  1 18:47:46 an-node01 rgmanager[2430]: Restart threshold for vm:vm0001-dev exceeded; attempting to relocate
Jan  1 18:47:47 an-node01 rgmanager[2430]: Service vm:vm0001-dev is now running on member 2
</source>
 
The difference is the "<span class="code">Restart threshold for vm:vm0001-dev exceeded; attempting to relocate</span>" line. Indeed, if we check <span class="code">clustat</span>, we will see it running on <span class="code">an-node02</span>!
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 18:49:38 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node02.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Success!
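
As an aside, this relocation is driven by the restart-counting attributes on the <span class="code">vm:</span> services, <span class="code">max_restarts</span> and <span class="code">restart_expire_time</span>. If you want to review the values you set earlier, a quick grep of <span class="code">cluster.conf</span> will show them;

<source lang="bash">
grep -E 'max_restarts|restart_expire_time' /etc/cluster/cluster.conf
</source>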
 
This test is complete, so we'll finish by migrating the VM back to <span class="code">an-node01</span>.
 
<source lang="bash">
clusvcadm -M vm:vm0001-dev -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0001-dev to an-node01.alteeve.ca...Success
</source>
 
As always, confirm.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 18:51:05 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Excellent.
 
=== Failure Testing vm0002-web ===
 
We'll go through the same process here as we just did with <span class="code">vm0001-dev</span>, but we won't repeat all of the details. After each crash of the VM, we'll check <span class="code">clustat</span> and look at syslog on <span class="code">an-node01</span>. Not shown is a background ping used to tell when the VM is back up enough to crash it again; one way to run it is sketched below.
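
This sketch prints a time-stamped up/down line once per second, which makes the outage and recovery times easy to read. The <span class="code">10.254.0.2</span> address is only a placeholder; substitute whatever IP you assigned to <span class="code">vm0002-web</span>;

<source lang="bash">
# Print a time-stamped up/down line once per second so the outage window is easy to read.
while sleep 1; do
    if ping -c 1 -W 1 10.254.0.2 >/dev/null 2>&1; then
        echo "$(date '+%T') vm0002-web is up"
    else
        echo "$(date '+%T') vm0002-web is DOWN"
    fi
done
</source>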
 
Confirm that <span class="code">vm0002-web</span> is on <span class="code">an-node01</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:06:21 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Good, we're ready. On <span class="code">an-node01</span>, kill the VM.
 
<source lang="bash">
virsh destroy vm0002-web
</source>
<source lang="text">
Domain vm0002-web destroyed
</source>
 
As we expect, <span class="code">an-node01</span> restarts the VM within a few seconds.
 
<source lang="text">
Jan  1 19:07:16 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:07:16 an-node01 kernel: device vnet1 left promiscuous mode
Jan  1 19:07:16 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:07:18 an-node01 ntpd[2190]: Deleting interface #11 vnet1, fe80::fc54:ff:fe65:3960#123, interface stats: received=0, sent=0, dropped=0, active_time=9315 secs
Jan  1 19:07:27 an-node01 rgmanager[2430]: status on vm "vm0002-web" returned 7 (unspecified)
Jan  1 19:07:27 an-node01 rgmanager[2430]: Stopping service vm:vm0002-web
Jan  1 19:07:27 an-node01 rgmanager[2430]: Service vm:vm0002-web is recovering
Jan  1 19:07:28 an-node01 rgmanager[2430]: Recovering failed service vm:vm0002-web
Jan  1 19:07:28 an-node01 kernel: device vnet1 entered promiscuous mode
Jan  1 19:07:28 an-node01 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:07:29 an-node01 rgmanager[2430]: Service vm:vm0002-web started
Jan  1 19:07:31 an-node01 ntpd[2190]: Listening on interface #23 vnet1, fe80::fc54:ff:fe65:3960#123 Enabled
Jan  1 19:07:38 an-node01 kernel: kvm: 1994: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 19:07:43 an-node01 kernel: vbr2: port 3(vnet1) entering forwarding state
</source>
 
Checking <span class="code">clustat</span>, I can see the VM is back on-line.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:09:03 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Let's kill it for the second time.
 
<source lang="bash">
virsh destroy vm0002-web
</source>
<source lang="text">
Domain vm0002-web destroyed
</source>
 
We can again see that <span class="code">an-node01</span> recovered it locally.
 
<source lang="text">
Jan  1 19:12:08 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:12:08 an-node01 kernel: device vnet1 left promiscuous mode
Jan  1 19:12:08 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:12:10 an-node01 ntpd[2190]: Deleting interface #23 vnet1, fe80::fc54:ff:fe65:3960#123, interface stats: received=0, sent=0, dropped=0, active_time=279 secs
Jan  1 19:12:17 an-node01 rgmanager[2430]: status on vm "vm0002-web" returned 7 (unspecified)
Jan  1 19:12:17 an-node01 rgmanager[2430]: Stopping service vm:vm0002-web
Jan  1 19:12:18 an-node01 rgmanager[2430]: Service vm:vm0002-web is recovering
Jan  1 19:12:18 an-node01 rgmanager[2430]: Recovering failed service vm:vm0002-web
Jan  1 19:12:19 an-node01 kernel: device vnet1 entered promiscuous mode
Jan  1 19:12:19 an-node01 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:12:19 an-node01 rgmanager[2430]: Service vm:vm0002-web started
Jan  1 19:12:22 an-node01 ntpd[2190]: Listening on interface #24 vnet1, fe80::fc54:ff:fe65:3960#123 Enabled
Jan  1 19:12:28 an-node01 kernel: kvm: 6113: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 19:12:34 an-node01 kernel: vbr2: port 3(vnet1) entering forwarding state
</source>
 
Confirm with <span class="code">clustat</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:13:45 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
This time, it should recover on <span class="code">an-node02</span>;
 
<source lang="bash">
virsh destroy vm0002-web
</source>
<source lang="text">
Domain vm0002-web destroyed
</source>
 
Looking in syslog, we can see the counter was tripped.
 
<source lang="text">
Jan  1 19:14:26 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:14:26 an-node01 kernel: device vnet1 left promiscuous mode
Jan  1 19:14:26 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:14:27 an-node01 rgmanager[2430]: status on vm "vm0002-web" returned 7 (unspecified)
Jan  1 19:14:27 an-node01 rgmanager[2430]: Stopping service vm:vm0002-web
Jan  1 19:14:28 an-node01 rgmanager[2430]: Service vm:vm0002-web is recovering
Jan  1 19:14:28 an-node01 rgmanager[2430]: Restart threshold for vm:vm0002-web exceeded; attempting to relocate
Jan  1 19:14:28 an-node01 ntpd[2190]: Deleting interface #24 vnet1, fe80::fc54:ff:fe65:3960#123, interface stats: received=0, sent=0, dropped=0, active_time=126 secs
Jan  1 19:14:29 an-node01 rgmanager[2430]: Service vm:vm0002-web is now running on member 2
</source>
 
Indeed, this is confirmed with <span class="code">clustat</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:15:57 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node02.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Excellent, this test has passed as well! Now migrate the VM back and we'll be ready to test the third VM.
 
<source lang="bash">
clusvcadm -M vm:vm0002-web -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0002-web to an-node01.alteeve.ca...Success
</source>
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:17:41 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Done.
 
=== Failure Testing vm0003-db ===
 
This should be getting familiar by now. The main difference is that this VM is running on <span class="code">an-node02</span>, so that is where we will kill the VM from and where we will watch syslog.
 
Confirm that <span class="code">vm0003-db</span> is on <span class="code">an-node02</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:25:55 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Good, we're ready. On <span class="code">an-node02</span>, kill the VM.
 
<source lang="bash">
virsh destroy vm0003-db
</source>
<source lang="text">
Domain vm0003-db destroyed
</source>
 
As we expect, <span class="code">an-node02</span> restarts the VM within a few seconds.
 
<source lang="text">
Jan  1 19:26:21 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:26:21 an-node02 kernel: device vnet0 left promiscuous mode
Jan  1 19:26:21 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:26:22 an-node02 ntpd[2200]: Deleting interface #10 vnet0, fe80::fc54:ff:fe44:83ec#123, interface stats: received=0, sent=0, dropped=0, active_time=8863 secs
Jan  1 19:26:35 an-node02 rgmanager[2439]: status on vm "vm0003-db" returned 7 (unspecified)
Jan  1 19:26:36 an-node02 rgmanager[2439]: Stopping service vm:vm0003-db
Jan  1 19:26:36 an-node02 rgmanager[2439]: Service vm:vm0003-db is recovering
Jan  1 19:26:36 an-node02 rgmanager[2439]: Recovering failed service vm:vm0003-db
Jan  1 19:26:37 an-node02 kernel: device vnet0 entered promiscuous mode
Jan  1 19:26:37 an-node02 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 19:26:37 an-node02 rgmanager[2439]: Service vm:vm0003-db started
Jan  1 19:26:40 an-node02 ntpd[2200]: Listening on interface #15 vnet0, fe80::fc54:ff:fe44:83ec#123 Enabled
</source>
 
Checking <span class="code">clustat</span>, I can see the VM is back on-line.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:27:06 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Let's kill it for the second time.
 
<source lang="bash">
virsh destroy vm0003-db
</source>
<source lang="text">
Domain vm0003-db destroyed
</source>
 
We can again see that <span class="code">an-node02</span> recovered it locally.
 
<source lang="text">
Jan  1 19:27:40 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:27:40 an-node02 kernel: device vnet0 left promiscuous mode
Jan  1 19:27:40 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:27:41 an-node02 ntpd[2200]: Deleting interface #15 vnet0, fe80::fc54:ff:fe44:83ec#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
Jan  1 19:27:45 an-node02 rgmanager[2439]: status on vm "vm0003-db" returned 7 (unspecified)
Jan  1 19:27:46 an-node02 rgmanager[2439]: Stopping service vm:vm0003-db
Jan  1 19:27:46 an-node02 rgmanager[2439]: Service vm:vm0003-db is recovering
Jan  1 19:27:46 an-node02 rgmanager[2439]: Recovering failed service vm:vm0003-db
Jan  1 19:27:47 an-node02 kernel: device vnet0 entered promiscuous mode
Jan  1 19:27:47 an-node02 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 19:27:47 an-node02 rgmanager[2439]: Service vm:vm0003-db started
Jan  1 19:27:50 an-node02 ntpd[2200]: Listening on interface #16 vnet0, fe80::fc54:ff:fe44:83ec#123 Enabled
</source>
 
Confirm with <span class="code">clustat</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:28:21 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
This time, it should recover on <span class="code">an-node01</span>;
 
<source lang="bash">
virsh destroy vm0003-db
</source>
<source lang="text">
Domain vm0003-db destroyed
</source>
 
Looking in syslog, we can see the counter was tripped.
 
<source lang="text">
Jan  1 19:28:36 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:28:36 an-node02 kernel: device vnet0 left promiscuous mode
Jan  1 19:28:36 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:28:37 an-node02 ntpd[2200]: Deleting interface #16 vnet0, fe80::fc54:ff:fe44:83ec#123, interface stats: received=0, sent=0, dropped=0, active_time=47 secs
Jan  1 19:28:55 an-node02 rgmanager[2439]: status on vm "vm0003-db" returned 7 (unspecified)
Jan  1 19:28:56 an-node02 rgmanager[2439]: Stopping service vm:vm0003-db
Jan  1 19:28:56 an-node02 rgmanager[2439]: Service vm:vm0003-db is recovering
Jan  1 19:28:56 an-node02 rgmanager[2439]: Restart threshold for vm:vm0003-db exceeded; attempting to relocate
Jan  1 19:28:57 an-node02 rgmanager[2439]: Service vm:vm0003-db is now running on member 1
</source>
 
Again, this is confirmed with <span class="code">clustat</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:29:42 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node01.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
This test has passed as well! As before, migrate the VM back and we'll be ready to test the last VM.
 
<source lang="bash">
clusvcadm -M vm:vm0003-db -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0003-db to an-node02.alteeve.ca...Success
</source>
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:30:32 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Done.
 
=== Failure Testing vm0004-ms ===
 
{{warning|1=Windows is particularly sensitive to sudden reboots. This is the nature of MS Windows and beyond the ability of the cluster to deal with. As such, be sure that you've created your recovery ISOs and taken reasonable precautions so that you can recover the guest after a hard shut down. That is, of course, what we're about to do here.}}
 
This is the last VM to test. This testing is repetitive and boring, but it is also critical. Good on you for sticking it out. Right then, let's get to it.
 
Confirm that <span class="code">vm0004-ms</span> is on <span class="code">an-node02</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:43:41 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Good, we're ready. On <span class="code">an-node02</span>, kill the VM.
 
<source lang="bash">
virsh destroy vm0004-ms
</source>
<source lang="text">
Domain vm0004-ms destroyed
</source>
 
As we expect, <span class="code">an-node02</span> restarts the VM within a few seconds.
 
<source lang="text">
Jan  1 19:43:52 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:43:52 an-node02 kernel: device vnet1 left promiscuous mode
Jan  1 19:43:52 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:43:53 an-node02 ntpd[2200]: Deleting interface #11 vnet1, fe80::fc54:ff:fe5e:b147#123, interface stats: received=0, sent=0, dropped=0, active_time=9895 secs
Jan  1 19:44:06 an-node02 rgmanager[2439]: status on vm "vm0004-ms" returned 7 (unspecified)
Jan  1 19:44:07 an-node02 rgmanager[2439]: Stopping service vm:vm0004-ms
Jan  1 19:44:07 an-node02 rgmanager[2439]: Service vm:vm0004-ms is recovering
Jan  1 19:44:07 an-node02 rgmanager[2439]: Recovering failed service vm:vm0004-ms
Jan  1 19:44:08 an-node02 kernel: device vnet1 entered promiscuous mode
Jan  1 19:44:08 an-node02 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:44:08 an-node02 rgmanager[2439]: Service vm:vm0004-ms started
Jan  1 19:44:11 an-node02 ntpd[2200]: Listening on interface #18 vnet1, fe80::fc54:ff:fe5e:b147#123 Enabled
Jan  1 19:44:23 an-node02 kernel: vbr2: port 3(vnet1) entering forwarding state
</source>
 
Checking <span class="code">clustat</span>, I can see the VM is back on-line.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:44:38 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Let's kill it for the second time.
 
<source lang="bash">
virsh destroy vm0004-ms
</source>
<source lang="text">
Domain vm0004-ms destroyed
</source>
 
We can again see that <span class="code">an-node02</span> recovered it locally.
 
<source lang="text">
Jan  1 19:44:54 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:44:54 an-node02 kernel: device vnet1 left promiscuous mode
Jan  1 19:44:54 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:44:55 an-node02 ntpd[2200]: Deleting interface #18 vnet1, fe80::fc54:ff:fe5e:b147#123, interface stats: received=0, sent=0, dropped=0, active_time=44 secs
Jan  1 19:45:16 an-node02 rgmanager[2439]: status on vm "vm0004-ms" returned 7 (unspecified)
Jan  1 19:45:17 an-node02 rgmanager[2439]: Stopping service vm:vm0004-ms
Jan  1 19:45:17 an-node02 rgmanager[2439]: Service vm:vm0004-ms is recovering
Jan  1 19:45:17 an-node02 rgmanager[2439]: Recovering failed service vm:vm0004-ms
Jan  1 19:45:18 an-node02 kernel: device vnet1 entered promiscuous mode
Jan  1 19:45:18 an-node02 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:45:18 an-node02 rgmanager[2439]: Service vm:vm0004-ms started
Jan  1 19:45:21 an-node02 ntpd[2200]: Listening on interface #19 vnet1, fe80::fc54:ff:fe5e:b147#123 Enabled
Jan  1 19:45:33 an-node02 kernel: vbr2: port 3(vnet1) entering forwarding state
</source>
 
Confirm with <span class="code">clustat</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:46:17 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
This time, it should recover on <span class="code">an-node01</span>;
 
<source lang="bash">
virsh destroy vm0004-ms
</source>
<source lang="text">
Domain vm0004-ms destroyed
</source>
 
Looking in syslog, we can see the counter was tripped.
 
<source lang="text">
Jan  1 19:45:33 an-node02 kernel: vbr2: port 3(vnet1) entering forwarding state
Jan  1 19:46:30 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:46:30 an-node02 kernel: device vnet1 left promiscuous mode
Jan  1 19:46:30 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:46:32 an-node02 ntpd[2200]: Deleting interface #19 vnet1, fe80::fc54:ff:fe5e:b147#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Jan  1 19:46:36 an-node02 rgmanager[2439]: status on vm "vm0004-ms" returned 7 (unspecified)
Jan  1 19:46:37 an-node02 rgmanager[2439]: Stopping service vm:vm0004-ms
Jan  1 19:46:37 an-node02 rgmanager[2439]: Service vm:vm0004-ms is recovering
Jan  1 19:46:37 an-node02 rgmanager[2439]: Restart threshold for vm:vm0004-ms exceeded; attempting to relocate
Jan  1 19:46:38 an-node02 rgmanager[2439]: Service vm:vm0004-ms is now running on member 1
</source>
 
Indeed, this is confirmed with <span class="code">clustat</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:48:23 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node01.alteeve.ca          started
</source>
 
Wonderful! All four VMs fail and recover as we expected them to. Move the VM back and we're ready to crash the nodes!
 
<source lang="bash">
clusvcadm -M vm:vm0004-ms -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0004-ms to an-node02.alteeve.ca...Success
</source>
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 19:49:32 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Done and done!
 
=== Failing and Recovery of an-node01 ===
 
The final stage of testing is also the most brutal. We're going to hang <span class="code">an-node01</span> in such a way that it stops responding to messages from <span class="code">an-node02</span>. Within a few seconds, <span class="code">an-node01</span> should be fenced, then shortly after the two lost VMs should boot up on <span class="code">an-node02</span>.
 
This is a particularly important test, for a somewhat non-obvious reason.
 
{{note|1=It's one thing to migrate or boot VMs one at a time. The other VMs will not likely be under load, so the resources of the host should be more or less free for the VM being recovered. After a failure though, all lost VMs will be simultaneously recovered, taxing the host's resources to a greater extent. This test ensures that each node has sufficient resources to effectively recover the VMs simultaneously.}}
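
A rough way to sanity-check that headroom (just a sketch, not a formal capacity plan) is to compare the memory allocated to the guests against what a single node has available;

<source lang="bash">
# On the node that would do the recovering; how much memory is available right now?
free -m

# How much has been allocated to a given guest? Repeat for each VM it might have to host.
virsh dominfo vm0001-dev | grep -i memory
</source>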
 
We could just shut off <span class="code">an-node01</span>, but we tested that earlier when we set up fencing. What we have not yet tested is how the cluster recovers from a hung node. To hang the host, we're going to trigger a special event in the kernel using the [http://en.wikipedia.org/wiki/Magic_SysRq_key#Alternate_ways_to_invoke_Magic_SysRq magic SysRq] triggers. We'll do this by sending the letter <span class="code">c</span> to the <span class="code">/proc/sysrq-trigger</span> file. This will "[http://en.wikipedia.org/wiki/Magic_SysRq_key#Magic_commands Reboot kexec and output a crashdump]". The node should be [[fenced]] before a memory dump can complete, so don't expect to see anything in <span class="code">/var/crash</span> unless your system is extremely fast.
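
As an aside, if you're curious whether <span class="code">kdump</span> is even configured on your nodes (it is not needed for this test, and we expect the fence to win the race regardless), you can check quickly, assuming the <span class="code">kdump</span> package is installed;

<source lang="bash">
chkconfig --list kdump
service kdump status
</source>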
 
{{warning|1=If you are skimming, take note! The next command will crash your node!}}
 
So, on <span class="code">an-node01</span>, issue the following command to crash the node.
 
<source lang="bash">
echo c > /proc/sysrq-trigger
</source>
 
This command will not return. Watching syslog on <span class="code">an-node02</span>, we'll see output like this;
 
<source lang="text">
Jan  1 21:26:00 an-node02 kernel: block drbd1: PingAck did not arrive in time.
Jan  1 21:26:00 an-node02 kernel: block drbd1: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 )
Jan  1 21:26:00 an-node02 kernel: block drbd1: asender terminated
Jan  1 21:26:00 an-node02 kernel: block drbd1: Terminating asender thread
Jan  1 21:26:00 an-node02 kernel: block drbd1: Connection closed
Jan  1 21:26:00 an-node02 kernel: block drbd1: conn( NetworkFailure -> Unconnected )
Jan  1 21:26:00 an-node02 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver terminated
Jan  1 21:26:00 an-node02 kernel: block drbd1: Restarting receiver thread
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver (re)started
Jan  1 21:26:00 an-node02 kernel: block drbd1: conn( Unconnected -> WFConnection )
Jan  1 21:26:00 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:01 an-node02 kernel: block drbd2: PingAck did not arrive in time.
Jan  1 21:26:01 an-node02 kernel: block drbd2: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 )
Jan  1 21:26:01 an-node02 kernel: block drbd2: asender terminated
Jan  1 21:26:01 an-node02 kernel: block drbd2: Terminating asender thread
Jan  1 21:26:01 an-node02 kernel: block drbd2: Connection closed
Jan  1 21:26:01 an-node02 kernel: block drbd2: conn( NetworkFailure -> Unconnected )
Jan  1 21:26:01 an-node02 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2
Jan  1 21:26:01 an-node02 kernel: block drbd2: receiver terminated
Jan  1 21:26:01 an-node02 kernel: block drbd2: Restarting receiver thread
Jan  1 21:26:01 an-node02 kernel: block drbd2: receiver (re)started
Jan  1 21:26:01 an-node02 kernel: block drbd2: conn( Unconnected -> WFConnection )
Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 21:26:03 an-node02 kernel: block drbd0: PingAck did not arrive in time.
Jan  1 21:26:03 an-node02 kernel: block drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 )
Jan  1 21:26:03 an-node02 kernel: block drbd0: asender terminated
Jan  1 21:26:03 an-node02 kernel: block drbd0: Terminating asender thread
Jan  1 21:26:03 an-node02 kernel: block drbd0: Connection closed
Jan  1 21:26:03 an-node02 kernel: block drbd0: conn( NetworkFailure -> Unconnected )
Jan  1 21:26:03 an-node02 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0
Jan  1 21:26:03 an-node02 kernel: block drbd0: receiver terminated
Jan  1 21:26:03 an-node02 kernel: block drbd0: Restarting receiver thread
Jan  1 21:26:03 an-node02 kernel: block drbd0: receiver (re)started
Jan  1 21:26:03 an-node02 kernel: block drbd0: conn( Unconnected -> WFConnection )
Jan  1 21:26:03 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:03 an-node02 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 21:26:09 an-node02 corosync[1963]:  [TOTEM ] A processor failed, forming new configuration.
Jan  1 21:26:11 an-node02 corosync[1963]:  [QUORUM] Members[1]: 2
Jan  1 21:26:11 an-node02 corosync[1963]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan  1 21:26:11 an-node02 kernel: dlm: closing connection to node 1
Jan  1 21:26:11 an-node02 corosync[1963]:  [CPG  ] chosen downlist: sender r(0) ip(10.20.0.2) ; members(old:2 left:1)
Jan  1 21:26:11 an-node02 corosync[1963]:  [MAIN  ] Completed service synchronization, ready to provide service.
Jan  1 21:26:11 an-node02 fenced[2022]: fencing node an-node01.alteeve.ca
Jan  1 21:26:11 an-node02 kernel: GFS2: fsid=an-cluster-A:shared.0: jid=1: Trying to acquire journal lock...
Jan  1 21:26:14 an-node02 fence_node[15572]: fence an-node01.alteeve.ca success
Jan  1 21:26:14 an-node02 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1 exit code 7 (0x700)
Jan  1 21:26:14 an-node02 kernel: block drbd1: fence-peer helper returned 7 (peer was stonithed)
Jan  1 21:26:14 an-node02 kernel: block drbd1: pdsk( DUnknown -> Outdated )
Jan  1 21:26:14 an-node02 kernel: block drbd1: new current UUID 6355AAB258658E8F:4642D156D54731A1:5F8A6B05E2FCCE19:165E9B466805EC81
Jan  1 21:26:14 an-node02 kernel: block drbd1: susp( 1 -> 0 )
Jan  1 21:26:15 an-node02 fenced[2022]: fence an-node01.alteeve.ca success
Jan  1 21:26:15 an-node02 fence_node[15672]: fence an-node01.alteeve.ca success
Jan  1 21:26:15 an-node02 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0 exit code 7 (0x700)
Jan  1 21:26:15 an-node02 kernel: block drbd0: fence-peer helper returned 7 (peer was stonithed)
Jan  1 21:26:15 an-node02 kernel: block drbd0: pdsk( DUnknown -> Outdated )
Jan  1 21:26:15 an-node02 kernel: block drbd0: new current UUID C1F5EF16EE80E6C1:1B503B46E6650575:234E9A10EE04FDE7:7DBC4288E230DC9B
Jan  1 21:26:15 an-node02 kernel: block drbd0: susp( 1 -> 0 )
Jan  1 21:26:15 an-node02 fence_node[15627]: fence an-node01.alteeve.ca success
Jan  1 21:26:15 an-node02 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2 exit code 7 (0x700)
Jan  1 21:26:15 an-node02 kernel: block drbd2: fence-peer helper returned 7 (peer was stonithed)
Jan  1 21:26:15 an-node02 kernel: block drbd2: pdsk( DUnknown -> Outdated )
Jan  1 21:26:15 an-node02 kernel: block drbd2: new current UUID 1F79DE480F1E33C1:A674C3CB12017193:76118DDAE165C5FB:871F8081B7D527A9
Jan  1 21:26:15 an-node02 kernel: block drbd2: susp( 1 -> 0 )
Jan  1 21:26:16 an-node02 kernel: GFS2: fsid=an-cluster-A:shared.0: jid=1: Looking at journal...
Jan  1 21:26:16 an-node02 kernel: GFS2: fsid=an-cluster-A:shared.0: jid=1: Done
Jan  1 21:26:16 an-node02 rgmanager[2514]: Marking service:storage_an01 as stopped: Restricted domain unavailable
Jan  1 21:26:16 an-node02 rgmanager[2514]: Taking over service vm:vm0001-dev from down member an-node01.alteeve.ca
Jan  1 21:26:16 an-node02 rgmanager[2514]: Taking over service vm:vm0002-web from down member an-node01.alteeve.ca
Jan  1 21:26:17 an-node02 kernel: device vnet2 entered promiscuous mode
Jan  1 21:26:17 an-node02 kernel: vbr2: port 4(vnet2) entering learning state
Jan  1 21:26:17 an-node02 rgmanager[2514]: Service vm:vm0001-dev started
Jan  1 21:26:17 an-node02 kernel: device vnet3 entered promiscuous mode
Jan  1 21:26:17 an-node02 kernel: vbr2: port 5(vnet3) entering learning state
Jan  1 21:26:18 an-node02 rgmanager[2514]: Service vm:vm0002-web started
Jan  1 21:26:20 an-node02 ntpd[2275]: Listening on interface #12 vnet2, fe80::fc54:ff:fe9b:3cf7#123 Enabled
Jan  1 21:26:20 an-node02 ntpd[2275]: Listening on interface #13 vnet3, fe80::fc54:ff:fe65:3960#123 Enabled
Jan  1 21:26:27 an-node02 kernel: kvm: 16177: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 21:26:29 an-node02 kernel: kvm: 16118: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 21:26:32 an-node02 kernel: vbr2: port 4(vnet2) entering forwarding state
Jan  1 21:26:32 an-node02 kernel: vbr2: port 5(vnet3) entering forwarding state
</source>
 
Checking with <span class="code">clustat</span>, we can confirm that all four VMs are now running on <span class="code">an-node02</span>.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 21:28:00 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node02.alteeve.ca          started
vm:vm0002-web                  an-node02.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Perfect! This is exactly why we built the cluster!
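
If you'd like a second opinion from <span class="code">libvirtd</span> itself, <span class="code">virsh</span> on <span class="code">an-node02</span> will show all four guests running;

<source lang="bash">
virsh list
</source>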
 
If we wait a few minutes, we'll see that the hung node has recovered.
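
While waiting, you can watch the membership from <span class="code">an-node02</span> as <span class="code">an-node01</span> reboots and rejoins. This is purely a convenience;

<source lang="bash">
watch -n 5 cman_tool nodes
</source>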
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 22:30:04 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager
 
Service Name                  Owner (Last)                  State       
------- ----                  ----- ------                  -----       
service:storage_an01          an-node01.alteeve.ca          started     
service:storage_an02          an-node02.alteeve.ca          started     
vm:vm0001-dev                  an-node02.alteeve.ca          started     
vm:vm0002-web                  an-node02.alteeve.ca          started     
vm:vm0003-db                  an-node02.alteeve.ca          started     
vm:vm0004-ms                  an-node02.alteeve.ca          started     
</source>
 
Before we can push the VMs back though, we must make sure that the underlying DRBD resource has finished synchronizing.
 
{{note|1=With four VMs, it will most certainly take time for the underlying resources to resync. Do not migrate the VMs until this has completed!}}
 
<source lang="bash">
cat /proc/drbd
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:1182704 nr:1053880 dw:1052676 dr:1245848 al:0 bm:266 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:2087568 nr:362698 dw:366444 dr:2263316 al:9 bm:411 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:2098343 nr:1114307 dw:1065375 dr:2340421 al:10 bm:551 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
</source>
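
Had the resync still been underway, a simple way to keep an eye on it is to refresh <span class="code">/proc/drbd</span> every couple of seconds and wait for the sync progress to finish and the <span class="code">oos</span> (out of sync) counters to return to <span class="code">0</span>;

<source lang="bash">
watch -n 2 cat /proc/drbd
</source>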
 
We're ready, so let's migrate <span class="code">vm0001-dev</span> and <span class="code">vm0002-web</span> back.
 
<source lang="bash">
clusvcadm -M vm:vm0001-dev -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0001-dev to an-node01.alteeve.ca...Success
</source>
<source lang="bash">
clusvcadm -M vm:vm0002-web -m an-node01.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0002-web to an-node01.alteeve.ca...Success
</source>
 
Confirm;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 22:37:10 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
There we have it. Successful crash and recovery of <span class="code">an-node01</span>.
 
==== Discussing the syslog Messages ====
 
Let's step back and look at the syslog output; there are a few things to discuss.
 
The first thing to notice is that, almost immediately after hanging <span class="code">an-node01</span>, the first messages come from DRBD, not the cluster. The loss of its peer in turn triggers DRBD's <span class="code">fence-handler</span> script, <span class="code">obliterate-peer.sh</span>. This is because DRBD is extremely sensitive to interruptions, even more so than the cluster itself. You will notice that DRBD reacted a full nine seconds faster than the cluster.

Upon realizing that its peer has been lost, the first order of business is to call a fence against the lost node. As mentioned, DRBD does this by calling <span class="code">obliterate-peer.sh</span>, which is itself a very simple wrapper around <span class="code">cman_tool</span> and <span class="code">fence_node</span> shell calls.
 
<source lang="text">
Jan  1 21:26:00 an-node02 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver terminated
Jan  1 21:26:00 an-node02 kernel: block drbd1: Restarting receiver thread
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver (re)started
Jan  1 21:26:00 an-node02 kernel: block drbd1: conn( Unconnected -> WFConnection )
Jan  1 21:26:00 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
</source>
 
Here we see DRBD calling the handler (first message) and, shortly after, a log entry from <span class="code">obliterate-peer.sh</span> itself (last entry). What you don't see is that right after that last message, <span class="code">obliterate-peer.sh</span> goes into a 10-iteration loop where it calls <span class="code">fence_node</span> against its peer.
 
<source lang="text">
Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
</source>
 
The <span class="code">fence_node</span> call runs in the background, so the <span class="code">obliterate-peer.sh</span> script goes into a short sleep before trying again (and again...). These subsequent calls generate the <span class="code">kill node failed: Invalid argument</span> errors because the first call is already in the process of fencing the node, so they are safe to ignore. The important part is that this error message '''didn't''' follow the first entry.
 
<source lang="text">
Jan  1 21:26:15 an-node02 fenced[2022]: fence an-node01.alteeve.ca success
</source>
 
This is what matters. Here we see that the fence succeeded and the hung node was indeed fenced.
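
To make the loop described above concrete, here is a rough sketch of its shape. To be clear, this is '''not''' the actual <span class="code">obliterate-peer.sh</span> script, only an illustration of the behaviour described; the ten iterations come from the description above and the two-second sleep is a guess;

<source lang="bash">
# Illustration only; the real obliterate-peer.sh does more (logging, node lookups, etc.).
for attempt in $(seq 1 10); do
    fence_node an-node01.alteeve.ca &
    sleep 2
done
</source>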
 
=== Failing and Recovery of an-node02 ===
 
With everything back in place, we'll hang <span class="code">an-node02</span> and ensure that its VMs will recover on <span class="code">an-node01</span>.
 
As always, check the current state.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 22:53:43 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Now hang <span class="code">an-node02</span>.
 
<source lang="bash">
echo c > /proc/sysrq-trigger
</source>
 
As before, that command will not return. If we check <span class="code">an-node01</span>'s syslog though, we should see that the node is fenced and the lost VMs are recovered.
 
<source lang="text">
Jan  1 22:56:14 an-node01 kernel: block drbd1: PingAck did not arrive in time.
Jan  1 22:56:14 an-node01 kernel: block drbd1: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 )
Jan  1 22:56:15 an-node01 kernel: block drbd1: asender terminated
Jan  1 22:56:15 an-node01 kernel: block drbd1: Terminating asender thread
Jan  1 22:56:15 an-node01 kernel: block drbd1: Connection closed
Jan  1 22:56:15 an-node01 kernel: block drbd1: conn( NetworkFailure -> Unconnected )
Jan  1 22:56:15 an-node01 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1
Jan  1 22:56:15 an-node01 kernel: block drbd1: receiver terminated
Jan  1 22:56:15 an-node01 kernel: block drbd1: Restarting receiver thread
Jan  1 22:56:15 an-node01 kernel: block drbd1: receiver (re)started
Jan  1 22:56:15 an-node01 kernel: block drbd1: conn( Unconnected -> WFConnection )
Jan  1 22:56:15 an-node01 /sbin/obliterate-peer.sh: Local node ID: 1 / Remote node: an-node02.alteeve.ca
Jan  1 22:56:19 an-node01 kernel: block drbd0: PingAck did not arrive in time.
Jan  1 22:56:19 an-node01 kernel: block drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 )
Jan  1 22:56:19 an-node01 kernel: block drbd0: asender terminated
Jan  1 22:56:19 an-node01 kernel: block drbd0: Terminating asender thread
Jan  1 22:56:19 an-node01 kernel: block drbd0: Connection closed
Jan  1 22:56:19 an-node01 kernel: block drbd0: conn( NetworkFailure -> Unconnected )
Jan  1 22:56:19 an-node01 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0
Jan  1 22:56:19 an-node01 kernel: block drbd0: receiver terminated
Jan  1 22:56:19 an-node01 kernel: block drbd0: Restarting receiver thread
Jan  1 22:56:19 an-node01 kernel: block drbd0: receiver (re)started
Jan  1 22:56:19 an-node01 kernel: block drbd0: conn( Unconnected -> WFConnection )
Jan  1 22:56:19 an-node01 /sbin/obliterate-peer.sh: Local node ID: 1 / Remote node: an-node02.alteeve.ca
Jan  1 22:56:19 an-node01 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 22:56:21 an-node01 kernel: block drbd2: PingAck did not arrive in time.
Jan  1 22:56:21 an-node01 kernel: block drbd2: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 )
Jan  1 22:56:21 an-node01 kernel: block drbd2: asender terminated
Jan  1 22:56:21 an-node01 kernel: block drbd2: Terminating asender thread
Jan  1 22:56:21 an-node01 kernel: block drbd2: Connection closed
Jan  1 22:56:21 an-node01 kernel: block drbd2: conn( NetworkFailure -> Unconnected )
Jan  1 22:56:21 an-node01 kernel: block drbd2: receiver terminated
Jan  1 22:56:21 an-node01 kernel: block drbd2: Restarting receiver thread
Jan  1 22:56:21 an-node01 kernel: block drbd2: receiver (re)started
Jan  1 22:56:21 an-node01 kernel: block drbd2: conn( Unconnected -> WFConnection )
Jan  1 22:56:21 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2
Jan  1 22:56:21 an-node01 /sbin/obliterate-peer.sh: Local node ID: 1 / Remote node: an-node02.alteeve.ca
Jan  1 22:56:21 an-node01 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 22:56:22 an-node01 corosync[1958]:  [TOTEM ] A processor failed, forming new configuration.
Jan  1 22:56:24 an-node01 corosync[1958]:  [QUORUM] Members[1]: 1
Jan  1 22:56:24 an-node01 corosync[1958]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan  1 22:56:24 an-node01 kernel: dlm: closing connection to node 2
Jan  1 22:56:24 an-node01 corosync[1958]:  [CPG  ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:2 left:1)
Jan  1 22:56:24 an-node01 corosync[1958]:  [MAIN  ] Completed service synchronization, ready to provide service.
Jan  1 22:56:24 an-node01 fenced[2014]: fencing node an-node02.alteeve.ca
Jan  1 22:56:24 an-node01 kernel: GFS2: fsid=an-cluster-A:shared.1: jid=0: Trying to acquire journal lock...
Jan  1 22:56:28 an-node01 fenced[2014]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 fence_node[638]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2 exit code 7 (0x700)
Jan  1 22:56:29 an-node01 kernel: block drbd2: fence-peer helper returned 7 (peer was stonithed)
Jan  1 22:56:29 an-node01 kernel: block drbd2: pdsk( DUnknown -> Outdated )
Jan  1 22:56:29 an-node01 kernel: block drbd2: new current UUID 207F7C9279067EC1:3EEB0F756A6A289F:FD92DAC355F53A93:FD91DAC355F53A93
Jan  1 22:56:29 an-node01 kernel: block drbd2: susp( 1 -> 0 )
Jan  1 22:56:29 an-node01 fence_node[518]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1 exit code 7 (0x700)
Jan  1 22:56:29 an-node01 kernel: block drbd1: fence-peer helper returned 7 (peer was stonithed)
Jan  1 22:56:29 an-node01 kernel: block drbd1: pdsk( DUnknown -> Outdated )
Jan  1 22:56:29 an-node01 kernel: block drbd1: new current UUID C65C044AE682D8C5:67D512BD61B70265:C1947DF86E910F8B:C1937DF86E910F8B
Jan  1 22:56:29 an-node01 kernel: block drbd1: susp( 1 -> 0 )
Jan  1 22:56:29 an-node01 rgmanager[2507]: Marking service:storage_an02 as stopped: Restricted domain unavailable
Jan  1 22:56:29 an-node01 fence_node[583]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0 exit code 7 (0x700)
Jan  1 22:56:29 an-node01 kernel: block drbd0: fence-peer helper returned 7 (peer was stonithed)
Jan  1 22:56:29 an-node01 kernel: block drbd0: pdsk( DUnknown -> Outdated )
Jan  1 22:56:29 an-node01 kernel: block drbd0: new current UUID 295A00166167B5C3:A3F3889ECF7247F5:30313B4AFFF6F82B:30303B4AFFF6F82B
Jan  1 22:56:29 an-node01 kernel: block drbd0: susp( 1 -> 0 )
Jan  1 22:56:29 an-node01 kernel: GFS2: fsid=an-cluster-A:shared.1: jid=0: Looking at journal...
Jan  1 22:56:30 an-node01 kernel: GFS2: fsid=an-cluster-A:shared.1: jid=0: Done
Jan  1 22:56:30 an-node01 rgmanager[2507]: Taking over service vm:vm0003-db from down member an-node02.alteeve.ca
Jan  1 22:56:30 an-node01 rgmanager[2507]: Taking over service vm:vm0004-ms from down member an-node02.alteeve.ca
Jan  1 22:56:30 an-node01 kernel: device vnet2 entered promiscuous mode
Jan  1 22:56:30 an-node01 kernel: vbr2: port 4(vnet2) entering learning state
Jan  1 22:56:30 an-node01 rgmanager[2507]: Service vm:vm0003-db started
Jan  1 22:56:31 an-node01 kernel: device vnet3 entered promiscuous mode
Jan  1 22:56:31 an-node01 kernel: vbr2: port 5(vnet3) entering learning state
Jan  1 22:56:31 an-node01 rgmanager[2507]: Service vm:vm0004-ms started
Jan  1 22:56:34 an-node01 ntpd[2267]: Listening on interface #12 vnet3, fe80::fc54:ff:fe5e:b147#123 Enabled
Jan  1 22:56:34 an-node01 ntpd[2267]: Listening on interface #13 vnet2, fe80::fc54:ff:fe44:83ec#123 Enabled
Jan  1 22:56:40 an-node01 kernel: kvm: 1074: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 22:56:45 an-node01 kernel: vbr2: port 4(vnet2) entering forwarding state
Jan  1 22:56:46 an-node01 kernel: vbr2: port 5(vnet3) entering forwarding state
</source>
 
Checking <span class="code">clustat</span>;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 22:57:36 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Offline
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          (an-node02.alteeve.ca)        stopped
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node01.alteeve.ca          started
vm:vm0004-ms                  an-node01.alteeve.ca          started
</source>
 
All four VMs are back up and running on <span class="code">an-node01</span>!
 
Within a few moments, we should see that <span class="code">an-node02</span> has rejoined the cluster.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 23:00:43 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node01.alteeve.ca          started
vm:vm0004-ms                  an-node01.alteeve.ca          started
</source>
 
Now we'll wait for the backing DRBD resources to be in sync.
 
<source lang="bash">
cat /proc/drbd
</source>
<source lang="text">
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:272884 dw:271744 dr:5700 al:0 bm:25 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:780928
[====>...............] sync'ed: 26.4% (780928/1052672)K
finish: 0:10:02 speed: 1,284 (1,280) want: 250 K/sec
1: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:272196 dw:271048 dr:3688 al:0 bm:45 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:122292
[=============>......] sync'ed: 70.2% (122292/393216)K
finish: 0:01:31 speed: 1,328 (1,276) want: 250 K/sec
2: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:273426 dw:272258 dr:3636 al:0 bm:47 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:781500
[====>...............] sync'ed: 26.4% (781500/1052760)K
finish: 0:09:49 speed: 1,308 (1,284) want: 250 K/sec
</source>
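Rather than running <span class="code">cat /proc/drbd</span> over and over, you can let <span class="code">watch</span> refresh it for you. The two-second interval below is arbitrary;

<source lang="bash">
# Re-display the DRBD status every two seconds. Press ctrl+c to exit.
watch -n 2 cat /proc/drbd
</source>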
 
(time passes)
 
<source lang="bash">
cat /proc/drbd
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:1053812 dw:1052672 dr:6964 al:0 bm:74 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:394560 dw:393412 dr:4988 al:0 bm:70 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:1055190 dw:1054022 dr:4936 al:0 bm:167 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
</source>
 
Now we're ready to migrate <span class="code">vm0003-db</span> and <span class="code">vm0004-ms</span> back to <span class="code">an-node02</span>.
 
<source lang="bash">
clusvcadm -M vm:vm0003-db -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0003-db to an-node02.alteeve.ca...Success
</source>
<source lang="bash">
clusvcadm -M vm:vm0004-ms -m an-node02.alteeve.ca
</source>
<source lang="text">
Trying to migrate vm:vm0004-ms to an-node02.alteeve.ca...Success
</source>
 
A final check;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 23:08:06 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
Good!
 
== Complete Cold Shut Down And Cold Starting The Cluster ==
 
The failure testing is now complete. There is one final task to cover though; "Cold Shut Down" and "Cold Start" of the cluster. This involves shutting down all VMs, stopping <span class="code">rgmanager</span> and <span class="code">cman</span> on both nodes, then powering off both nodes.
 
The cold-start process involves simply powering both nodes on within the set <span class="code">post_join_delay</span>, then manually enabling the four VMs.
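If you've forgotten what <span class="code">post_join_delay</span> was set to, you can pull it out of <span class="code">cluster.conf</span> before powering things up. This is only a quick sketch, and it assumes the value was set explicitly on the <span class="code">fence_daemon</span> line;

<source lang="bash">
# Show the fence_daemon element, which carries the post_join_delay value (when set).
grep fence_daemon /etc/cluster/cluster.conf
</source>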
 
=== Stopping All VMs ===
 
Check the status as always;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 23:13:24 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>
 
All four VMs are up, so we'll stop all of them.
 
{{note|1=You might want to get into the habit of stopping the Windows machines first, then connecting to them over [[RDP]] or using <span class="code">virt-manager</span> to confirm that they have actually started to power down. If a guest hasn't, shut it down from within its OS.}}
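If you prefer the command line to <span class="code">virt-manager</span>, <span class="code">virsh</span> will also show whether a guest is still running. Run this on the node currently hosting the VM;

<source lang="bash">
# List all defined guests and their current state (running, shut off, etc.).
virsh list --all
</source>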
 
<source lang="bash">
clusvcadm -d vm:vm0001-dev
</source>
<source lang="text">
Local machine disabling vm:vm0001-dev...Success
</source>
 
<source lang="bash">
clusvcadm -d vm:vm0002-web
</source>
<source lang="text">
Local machine disabling vm:vm0002-web...Success
</source>
 
<source lang="bash">
clusvcadm -d vm:vm0003-db
</source>
<source lang="text">
Local machine disabling vm:vm0003-db...Success
</source>
 
<source lang="bash">
clusvcadm -d vm:vm0004-ms
</source>
<source lang="text">
Local machine disabling vm:vm0004-ms...Success
</source>
 
Confirm;
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 23:17:29 2012
Member Status: Quorate
 
Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager
 
Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  (an-node01.alteeve.ca)        disabled
vm:vm0002-web                  (an-node01.alteeve.ca)        disabled
vm:vm0003-db                  (an-node02.alteeve.ca)        disabled
vm:vm0004-ms                  (an-node02.alteeve.ca)        disabled
</source>
 
Good, we can now stop <span class="code">rgmanager</span> on both nodes.
 
=== Shutting Down The Cluster Entirely ===
 
{{note|1=It can sometimes take a minute or two for <span class="code">rgmanager</span> to stop. Please be patient.}}
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
/etc/init.d/rgmanager stop
</source>
<source lang="text">
Stopping Cluster Service Manager:                          [  OK  ]
</source>
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
/etc/init.d/rgmanager stop
</source>
<source lang="text">
Stopping Cluster Service Manager:                          [  OK  ]
</source>
 
Now stop <span class="code">cman</span> on both nodes.
 
On <span class="code">an-node01</span>;
 
<source lang="bash">
/etc/init.d/cman stop
</source>
<source lang="text">
Stopping cluster:
  Leaving fence domain...                                [  OK  ]
  Stopping gfs_controld...                                [  OK  ]
  Stopping dlm_controld...                                [  OK  ]
  Stopping fenced...                                      [  OK  ]
  Stopping cman...                                        [  OK  ]
  Waiting for corosync to shutdown:                      [  OK  ]
  Unloading kernel modules...                            [  OK  ]
  Unmounting configfs...                                  [  OK  ]
</source>
 
On <span class="code">an-node02</span>;
 
<source lang="bash">
/etc/init.d/cman stop
</source>
<source lang="text">
Stopping cluster:
  Leaving fence domain...                                [  OK  ]
  Stopping gfs_controld...                                [  OK  ]
  Stopping dlm_controld...                                [  OK  ]
  Stopping fenced...                                      [  OK  ]
  Stopping cman...                                        [  OK  ]
  Waiting for corosync to shutdown:                      [  OK  ]
  Unloading kernel modules...                            [  OK  ]
  Unmounting configfs...                                  [  OK  ]
</source>
 
We're down; we can safely power off the nodes now.
 
<source lang="bash">
poweroff
</source>
<source lang="text">
Broadcast message from root@an-node01.alteeve.ca
(/dev/pts/0) at 23:22 ...
 
The system is going down for power off NOW!
</source>
 
Cold-Stop achieved!
 
=== Cold-Starting The Cluster ===
 
{{note|1=It is important to power on both nodes within <span class="code">post_join_delay</span> seconds. Otherwise, the slower node will be fenced and the boot process will take longer than it needs to.}}
 
Power on both nodes. You can just hit the power button, or if you have a workstation on the [[BCN]] with <span class="code">fence-agents</span> installed, you can call <span class="code">fence_ipmilan</span> (or the agent you use in your cluster).
 
<source lang="bash">
fence_ipmilan -a an-node01.ipmi -l root -p secret -o on
</source>
<source lang="text">
Powering on machine @ IPMI:an-node01.ipmi...Done
</source>
 
<source lang="bash">
fence_ipmilan -a an-node02.ipmi -l root -p secret -o on
</source>
<source lang="text">
Powering on machine @ IPMI:an-node02.ipmi...Done
</source>
 
Once they're up, log into them again and check their status. You will see that the VMs are off-line.
 
<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 23:40:16 2012
Member Status: Quorate

Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, Local, rgmanager
an-node02.alteeve.ca                      2 Online, rgmanager

Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  (none)                        disabled
vm:vm0002-web                  (none)                        disabled
vm:vm0003-db                  (none)                        disabled
vm:vm0004-ms                  (none)                        disabled
</source>

The two storage services have started on their own, thanks to their <span class="code">autostart="1"</span> setting, while the VMs remain <span class="code">disabled</span>, just as we left them.

Check that DRBD is ready;

<source lang="bash">
cat /proc/drbd
</source>
<source lang="text">
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:4 nr:0 dw:0 dr:8712 al:0 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:4632 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:4648 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
</source>

Golden, let's start the VMs.

<source lang="bash">
clusvcadm -e vm:vm0001-dev -m an-node01.alteeve.ca
</source>
<source lang="text">
vm:vm0001-dev is now running on an-node01.alteeve.ca
</source>

<source lang="bash">
clusvcadm -e vm:vm0002-web -m an-node01.alteeve.ca
</source>
<source lang="text">
vm:vm0002-web is now running on an-node01.alteeve.ca
</source>

<source lang="bash">
clusvcadm -e vm:vm0003-db -m an-node02.alteeve.ca
</source>
<source lang="text">
vm:vm0003-db is now running on an-node02.alteeve.ca
</source>

<source lang="bash">
clusvcadm -e vm:vm0004-ms -m an-node02.alteeve.ca
</source>
<source lang="text">
vm:vm0004-ms is now running on an-node02.alteeve.ca
</source>

Check the new status;

<source lang="bash">
clustat
</source>
<source lang="text">
Cluster Status for an-cluster-A @ Sun Jan  1 23:45:35 2012
Member Status: Quorate

Member Name                            ID  Status
------ ----                            ---- ------
an-node01.alteeve.ca                      1 Online, rgmanager
an-node02.alteeve.ca                      2 Online, Local, rgmanager

Service Name                  Owner (Last)                  State
------- ----                  ----- ------                  -----
service:storage_an01          an-node01.alteeve.ca          started
service:storage_an02          an-node02.alteeve.ca          started
vm:vm0001-dev                  an-node01.alteeve.ca          started
vm:vm0002-web                  an-node01.alteeve.ca          started
vm:vm0003-db                  an-node02.alteeve.ca          started
vm:vm0004-ms                  an-node02.alteeve.ca          started
</source>

We're back up and running!

== Done and Done! ==

That, ladies and gentlemen, is all she wrote!

You should now be safely ready to take your cluster into production at this stage.

Happy Hacking!

= Troubleshooting =

The troubleshooting section seems to have pushed MediaWiki beyond its single-article length limit. For this reason, it has been moved to its own page.

* [[2-Node Red Hat KVM Cluster Tutorial - Troubleshooting]]

== Disabling rsyslog Rate Limiting ==

Please see;

* [[2-Node Red Hat KVM Cluster Tutorial - Troubleshooting#Disabling rsyslog Rate Limiting|Disabling rsyslog Rate Limiting]]

{{footer}}

Latest revision as of 18:22, 5 January 2014


Warning: This tutorial is officially deprecated. It has been replaced with AN!Cluster Tutorial 2. Please do not follow this tutorial any more.

This paper has one goal;

  • Creating a 2-node, high-availability cluster hosting KVM virtual machines using RHCS "stable 3" with DRBD and clustered LVM for synchronizing storage data. This is an updated version of the earlier Red Hat Cluster Service 2 Tutorial. You will find much in common with that tutorial if you've previously followed that document. Please don't skip large sections though. There are some differences that are subtle but important.

Grab a coffee, put on some nice music and settle in for some geekly fun.

The Task Ahead

Before we start, let's take a few minutes to discuss clustering and its complexities.

Technologies We Will Use

  • Red Hat Enterprise Linux 6 (EL6); You can use a derivative like CentOS v6.
  • Red Hat Cluster Services "Stable" version 3. This describes the following core components:
    • Corosync; Provides cluster communications using the totem protocol.
    • Cluster Manager (cman); Manages the starting, stopping and managing of the cluster.
    • Resource Manager (rgmanager); Manages cluster resources and services. Handles service recovery during failures.
    • Clustered Logical Volume Manager (clvm); Cluster-aware (disk) volume manager. Backs GFS2 filesystems and KVM virtual machines.
    • Global File Systems version 2 (gfs2); Cluster-aware, concurrently mountable file system.
  • Distributed Redundant Block Device (DRBD); Keeps shared data synchronized across cluster nodes.
  • KVM; Hypervisor that controls and supports virtual machines.

A Note on Hardware

In this tutorial, I will make reference to specific hardware components and devices. I do this to share what devices and equipment I use, but I do not endorse any of the products named in this tutorial. I am in no way affiliated with any hardware vendor, nor do I receive any compensation or gifts from any company.

A Note on Patience

When someone wants to become a pilot, they can't jump into a plane and try to take off. It's not that flying is inherently hard, but it requires a foundation of understanding. Clustering is the same in this regard; there are many different pieces that have to work together just to get off the ground.

You must have patience.

Like a pilot on their first flight, seeing a cluster come to life is a fantastic experience. Don't rush it! Do your homework and you'll be on your way before you know it.

Coming back to earth:

Many technologies can be learned by creating a very simple base and then building on it. The classic "Hello, World!" script created when first learning a programming language is an example of this. Unfortunately, there is no real analogue to this in clustering. Even the most basic cluster requires several pieces be in place and working together. If you try to rush by ignoring pieces you think are not important, you will almost certainly waste time. A good example is setting aside fencing, thinking that your test cluster's data isn't important. The cluster software has no concept of "test". It treats everything as critical all the time and will shut down if anything goes wrong.

Take your time, work through these steps, and you will have the foundation cluster sooner than you realize. Clustering is fun because it is a challenge.

Prerequisites

It is assumed that you are familiar with Linux systems administration, specifically Red Hat Enterprise Linux and its derivatives. You will need to have somewhat advanced networking experience as well. You should be comfortable working in a terminal (directly or over ssh). Familiarity with XML will help, but is not strictly required, as its use here is fairly self-evident.

If you feel a little out of depth at times, don't hesitate to set this tutorial aside. Browse over to the components you feel the need to study more, then return and continue on. Finally, and perhaps most importantly, you must have patience! If you have a manager asking you to "go live" with a cluster in a month, tell him or her that it simply won't happen. If you rush, you will skip important points and you will fail.

Patience is vastly more important than any pre-existing skill.

Focus and Goal

There is a different cluster for every problem. Generally speaking though, there are two main problems that clusters try to resolve; Performance and High Availability. Performance clusters are generally tailored to the application requiring the performance increase. There are some general tools for performance clustering, like Red Hat's LVS (Linux Virtual Server) for load-balancing common applications like the Apache web-server.

This tutorial will focus on High Availability clustering, often shortened to simply HA and not to be confused with the Linux-HA "heartbeat" cluster suite, which we will not be using here. The cluster will provide a shared file system and will provide high availability for KVM-based virtual servers. The goal will be to have the virtual servers live-migrate during planned node outages and automatically restart on a surviving node when the original host node fails.

Below is a very brief overview:

High Availability clusters like ours have two main parts; Cluster management and resource management.

The cluster itself is responsible for maintaining the cluster nodes in a group. This group is part of a "Closed Process Group", or CPG. When a node fails, the cluster manager must detect the failure, reliably eject the node from the cluster using fencing and then reform the CPG. Each time the cluster changes, or "re-forms", the resource manager is called. The resource manager checks to see how the cluster changed, consults its configuration and determines what to do, if anything.

The details of all this will be discussed in detail a little later on. For now, it's sufficient to have in mind these two major roles and understand that they are somewhat independent entities.

Platform

This tutorial was written using RHEL version 6.2, x86_64 architecture. The KVM hypervisor will not run on i686. No testing was done on other EL6 derivatives. That said, there is no reason to believe that this tutorial will not apply to any variant of EL6. As much as possible, the language will be distro-agnostic.

A Word On Complexity

Introducing the Fabimer Principle:

Clustering is not inherently hard, but it is inherently complex. Consider:

  • Any given program has N bugs.
    • RHCS uses; cman, corosync, dlm, fenced, rgmanager, and many more smaller apps.
    • We will be adding DRBD, GFS2, clvmd, libvirtd and KVM.
    • Right there, we have N^10 possible bugs. We'll call this A.
  • A cluster has Y nodes.
    • In our case, 2 nodes, each with 3 networks across 6 interfaces bonded into pairs.
    • The network infrastructure (Switches, routers, etc). We will be using two managed switches, adding another layer of complexity.
    • This gives us another Y^(2*(3*2))+2, the +2 for managed switches. We'll call this B.
  • Let's add the human factor. Let's say that a person needs roughly 5 years of cluster experience to be considered proficient. For each year less than this, add a Z "oops" factor, (5-Z)^2. We'll call this C.
  • So, finally, add up the complexity, using this tutorial's layout, 0-years of experience and managed switches.
    • (N^10) * (Y^(2*(3*2))+2) * ((5-0)^2) == (A * B * C) == an-unknown-but-big-number.

This isn't meant to scare you away, but it is meant to be a sobering statement. Obviously, those numbers are somewhat artificial, but the point remains.

Any one piece is easy to understand, thus, clustering is inherently easy. However, given the large number of variables, you must really understand all the pieces and how they work together. DO NOT think that you will have this mastered and working in a month. Certainly don't try to sell clusters as a service without a lot of internal testing.

Clustering is kind of like chess. The rules are pretty straight forward, but the complexity can take some time to master.

Overview of Components

When looking at a cluster, there is a tendency to want to dive right into the configuration file. That is not very useful in clustering.

  • When you look at the configuration file, it is quite short.

Clustering isn't like most applications or technologies. Most of us learn by taking something such as a configuration file, and tweaking it to see what happens. I tried that with clustering and learned only what it was like to bang my head against the wall.

  • Understanding the parts and how they work together is critical.

You will find that the discussion on the components of clustering, and how those components and concepts interact, will be much longer than the initial configuration. It is true that we could talk very briefly about the actual syntax, but it would be a disservice. Please don't rush through the next section, or worse, skip it and go right to the configuration. You will waste far more time than you will save.

  • Clustering is easy, but it has a complex web of inter-connectivity. You must grasp this network if you want to be an effective cluster administrator!

Component; cman

The cman portion of the cluster is the cluster manager. In the 3.0 series used in EL6, cman acts mainly as a quorum provider. That is, it adds up the votes from the cluster members and decides if there is a simple majority. If there is, the cluster is "quorate" and is allowed to provide cluster services. Newer versions of the Red Hat Cluster Suite found in Fedora will use a new quorum provider and cman will be removed entirely.

Until it is removed, the cman service will be used to start and stop all of the daemons needed to make the cluster operate.
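Once the cluster is running, cman also gives a quick way to check quorum and membership from the command line. For example (a sketch; output omitted here):

  # Report the cluster name, quorum state, expected votes and node count.
  cman_tool status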

Component; corosync

Corosync is the heart of the cluster. Almost all other cluster components operate through it.

In Red Hat clusters, corosync is configured via the central cluster.conf file. It can be configured directly in corosync.conf, but given that we will be building an RHCS cluster, we will only use cluster.conf. That said, almost all corosync.conf options are available in cluster.conf. This is important to note as you will see references to both configuration files when searching the Internet.

Corosync sends messages using multicast messaging by default. Recently, unicast support has been added, but due to network latency, it is only recommended for use with small clusters of two to four nodes. We will be using multicast in this tutorial.
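For reference, switching a cluster like this one to unicast is a single attribute on the cman element in cluster.conf. The line below is only a sketch of what that change looks like; this tutorial stays with the multicast default.

  <!-- Illustrative only; not used in this tutorial. -->
  <cman transport="udpu"/>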

A Little History

There were significant changes between the old RHCS version 2 and version 3, which is available on EL6 and which we are using.

In the RHCS version 2, there was a component called openais which provided totem. The OpenAIS project was designed to be the heart of the cluster and was based around the Service Availability Forum's Application Interface Specification. AIS is an open API designed to provide inter-operable high availability services.

In 2008, it was decided that the AIS specification was overkill for most clustered applications being developed in the open source community. At that point, OpenAIS was split into two projects: Corosync and OpenAIS. The former, Corosync, provides totem, cluster membership, messaging, and basic APIs for use by clustered applications, while the OpenAIS project became an optional add-on to corosync for users who want the full AIS API.

You will see a lot of references to OpenAIS while searching the web for information on clustering. Understanding its evolution will hopefully help you avoid confusion.

Concept; quorum

Quorum is defined as the minimum set of hosts required in order to provide clustered services and is used to prevent split-brain situations.

The quorum algorithm used by the RHCS cluster is called "simple majority quorum", which means that more than half of the hosts must be online and communicating in order to provide service. While simple majority quorum is a very common quorum algorithm, other quorum algorithms exist (grid quorum, YKD Dynamic Linear Voting, etc.).

The idea behind quorum is that, when a cluster splits into two or more partitions, whichever group of machines has quorum can safely start clustered services knowing that no other lost nodes will try to do the same.

Take this scenario;

  • You have a cluster of four nodes, each with one vote.
    • The cluster's expected_votes is 4. A clear majority, in this case, is 3 because (4/2)+1, rounded down, is 3.
    • Now imagine that there is a failure in the network equipment and one of the nodes disconnects from the rest of the cluster.
    • You now have two partitions; One partition contains three machines and the other partition has one.
    • The three machines will have quorum, and the other machine will lose quorum.
    • The partition with quorum will reconfigure and continue to provide cluster services.
    • The partition without quorum will withdraw from the cluster and shut down all cluster services.

When the cluster reconfigures, the partition that wins quorum will fence the node(s) in the partition without quorum. Once the fencing has been confirmed successful, the partition with quorum will begin accessing clustered resources, like shared filesystems.

This also helps explain why an even 50% is not enough to have quorum, a common question for people new to clustering. Using the above scenario, imagine if the split were 2 and 2 nodes. Because either side can't be sure what the other would do, neither can safely proceed. If we allowed an even 50% to have quorum, both partitions might try to take over the clustered services and disaster would soon follow.

There is one, and only one, exception to this rule.

In the case of a two node cluster, as we will be building here, any failure results in a 50/50 split. If we enforced quorum in a two-node cluster, there would never be high availability because any failure would cause both nodes to withdraw. The risk with this exception is that we now place the entire safety of the cluster on fencing, a concept we will cover in a second. Fencing is a second line of defense and something we are loath to rely on alone.
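This two-node exception has to be asked for explicitly. In cluster.conf, it is enabled with the two_node and expected_votes attributes on the cman element, roughly like this (a sketch, not the tutorial's full configuration):

  <!-- Tell cman this is a two-node cluster and that a single vote is enough for quorum. -->
  <cman expected_votes="1" two_node="1"/>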

Even in a two-node cluster though, proper quorum can be maintained by using a quorum disk, called a qdisk. Unfortunately, qdisk on a DRBD resource comes with its own problems, so we will not be able to use it here.

Concept; Virtual Synchrony

Many cluster operations, like distributed locking and so on, have to occur in the same order across all nodes. This concept is called "virtual synchrony".

This is provided by corosync using "closed process groups", CPG. A closed process group is simply a private group of processes in a cluster. Within this closed group, all messages between members are ordered. Delivery, however, is not guaranteed. If a member misses messages, it is up to the member's application to decide what action to take.

Let's look at two scenarios showing how locks are handled using CPG;

  • The cluster starts up cleanly with two members.
  • Both members are able to start service:foo.
  • Both want to start it, but need a lock from DLM to do so.
    • The an-node01 member has its totem token, and sends its request for the lock.
    • DLM issues a lock for that service to an-node01.
    • The an-node02 member requests a lock for the same service.
    • DLM rejects the lock request.
  • The an-node01 member successfully starts service:foo and announces this to the CPG members.
  • The an-node02 sees that service:foo is now running on an-node01 and no longer tries to start the service.
  • The two members want to write to a common area of the /shared GFS2 partition.
    • The an-node02 sends a request for a DLM lock against the FS, gets it.
    • The an-node01 sends a request for the same lock, but DLM sees that a lock is pending and rejects the request.
    • The an-node02 member finishes altering the file system, announces the changed over CPG and releases the lock.
    • The an-node01 member updates its view of the filesystem, requests a lock, receives it and proceeds to update the filesystems.
    • It completes the changes, announces the changes over CPG and releases the lock.

Messages can only be sent to the members of the CPG while the node has a totem token from corosync.
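If you are curious what lockspaces exist on a running node, dlm_tool can list them. This is just a quick sketch; the names you actually see will depend on what is mounted and running (clvmd, rgmanager, a GFS2 file system and so on):

  # List the DLM lockspaces currently held on this node.
  dlm_tool ls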

Concept; Fencing

Warning: DO NOT BUILD A CLUSTER WITHOUT PROPER, WORKING AND TESTED FENCING.
Laugh, but this is a weekly conversation.

Fencing is an absolutely critical part of clustering. Without fully working fence devices, your cluster will fail.

Sorry, I promise that this will be the only time that I speak so strongly. Fencing really is critical, and explaining the need for fencing is nearly a weekly event.

So then, let's discuss fencing.

When a node stops responding, an internal timeout and counter start ticking away. During this time, no DLM locks are allowed to be issued. Anything using DLM, including rgmanager, clvmd and gfs2, is effectively hung. The hung node is detected using a totem token timeout. That is, if a token is not received from a node within a period of time, it is considered lost and a new token is sent. After a certain number of lost tokens, the cluster declares the node dead. The remaining nodes reconfigure into a new cluster and, if they have quorum (or if quorum is ignored), a fence call against the silent node is made.

The fence daemon will look at the cluster configuration and get the fence devices configured for the dead node. Then, one at a time and in the order that they appear in the configuration, the fence daemon will call those fence devices, via their fence agents, passing to the fence agent any configured arguments like username, password, port number and so on. If the first fence agent returns a failure, the next fence agent will be called. If the second fails, the third will be called, then the fourth and so on. Once the last (or perhaps only) fence device fails, the fence daemon will retry again, starting back at the start of the list. It will do this indefinitely until one of the fence devices succeeds.

Here's the flow, in point form:

  • The totem token moves around the cluster members. As each member gets the token, it sends sequenced messages to the CPG members.
  • The token is passed from one node to the next, in order and continuously during normal operation.
  • Suddenly, one node stops responding.
    • A timeout starts (~238ms by default), and each time the timeout is hit, an error counter increments and a replacement token is created.
    • The silent node responds before the failure counter reaches the limit.
      • The failure counter is reset to 0
      • The cluster operates normally again.
  • Again, one node stops responding.
    • Again, the timeout begins. As each totem token times out, a new packet is sent and the error count increments.
    • The error counts exceed the limit (4 errors is the default); Roughly one second has passed (238ms * 4 plus some overhead).
    • The node is declared dead.
    • The cluster checks which members it still has, and if that provides enough votes for quorum.
      • If there are too few votes for quorum, the cluster software freezes and the node(s) withdraw from the cluster.
      • If there are enough votes for quorum, the silent node is declared dead.
        • corosync calls fenced, telling it to fence the node.
        • The fenced daemon notifies DLM and locks are blocked.
        • Which fence device(s) to use, that is, what fence_agent to call and what arguments to pass, is gathered.
        • For each configured fence device:
          • The agent is called and fenced waits for the fence_agent to exit.
          • The fence_agent's exit code is examined. If it's a success, recovery starts. If it failed, the next configured fence agent is called.
        • If all (or the only) configured fence fails, fenced will start over.
        • fenced will wait and loop forever until a fence agent succeeds. During this time, the cluster is effectively hung.
      • Once a fence_agent succeeds, fenced notifies DLM and lost locks are recovered.
        • GFS2 partitions recover using their journal.
        • Lost cluster resources are recovered as per rgmanager's configuration (including file system recovery as needed).
  • Normal cluster operation is restored, minus the lost node.

This skipped a few key things, but the general flow of logic should be there.

This is why fencing is so important. Without a properly configured and tested fence device or devices, the cluster will never successfully fence and the cluster will remain hung until a human can intervene.
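To make that flow concrete, here is roughly how one node's fencing is laid out in cluster.conf. The method and device names below are illustrative, but the shape, an ordered list of methods per node, each pointing at an entry under fencedevices, is what fenced walks through:

  <clusternode name="an-node01.alteeve.ca" nodeid="1">
    <fence>
      <method name="ipmi">
        <device name="ipmi_an01" action="reboot"/>
      </method>
    </fence>
  </clusternode>
  <fencedevices>
    <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret"/>
  </fencedevices>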

Component; totem

The totem protocol defines message passing within the cluster and it is used by corosync. A token is passed around all the nodes in the cluster, and nodes can only send messages while they have the token. A node will keep its messages in memory until it gets the token back with no "not ack" messages. This way, if a node missed a message, it can request it be resent when it gets its token. If a node isn't up, it will simply miss the messages.

The totem protocol supports something called 'rrp', Redundant Ring Protocol. Through rrp, you can add a second backup ring on a separate network to take over in the event of a failure in the first ring. In RHCS, these rings are known as "ring 0" and "ring 1". The RRP is being re-introduced in RHCS version 3. Its use is experimental and should only be used with plenty of testing.
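If you do experiment with RRP, the second ring is declared per node in cluster.conf using an altname element, along these lines (a sketch only; the second hostname is purely illustrative and this tutorial does not configure RRP):

  <!-- The altname gives the node's address on the second, backup ring. -->
  <clusternode name="an-node01.alteeve.ca" nodeid="1">
    <altname name="an-node01-sn.alteeve.ca"/>
  </clusternode>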

Component; rgmanager

When the cluster membership changes, corosync tells the rgmanager that it needs to recheck its services. It will examine what changed and then will start, stop, migrate or recover cluster resources as needed.

Within rgmanager, one or more resources are brought together as a service. This service is then optionally assigned to a failover domain, a subset of nodes that can have preferential ordering.

The rgmanager daemon runs separately from the cluster manager, cman. This means that, to fully start the cluster, we need to start both cman and then rgmanager.
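As a taste of rgmanager's side of cluster.conf, here is a trimmed-down sketch of a failover domain and a service tied to it. The domain name and the resource inside the service are illustrative; the real services built in this tutorial carry more than this:

  <rm>
    <failoverdomains>
      <failoverdomain name="only_an01" restricted="1" nofailback="1">
        <failoverdomainnode name="an-node01.alteeve.ca"/>
      </failoverdomain>
    </failoverdomains>
    <!-- The service is pinned to its domain and started automatically by rgmanager. -->
    <service name="storage_an01" domain="only_an01" autostart="1" recovery="restart">
      <script file="/etc/init.d/drbd" name="drbd"/>
    </service>
  </rm>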

Component; qdisk

Note: qdisk does not work reliably on a DRBD resource, so we will not be using it in this tutorial.

A Quorum disk, known as a qdisk, is a small partition on SAN storage used to enhance quorum. It generally carries enough votes to allow even a single node to take quorum during a cluster partition. It does this by using configured heuristics, that is, custom tests, to decide which node or partition is best suited for providing clustered services during a cluster reconfiguration. These heuristics can be simple, like testing which partition has access to a given router, or they can be as complex as the administrator wishes using custom scripts.

Though we won't be using it here, it is well worth knowing about when you move to a cluster with SAN storage.

Component; DRBD

DRBD; Distributed Replicated Block Device, is a technology that takes raw storage from two or more nodes and keeps their data synchronized in real time. It is sometimes described as "RAID 1 over Cluster Nodes", and that is conceptually accurate. In this tutorial's cluster, DRBD will be used to provide the back-end storage as a cost-effective alternative to a traditional SAN device.

To help visualize DRBD's use and role, take a look at how we will implement our cluster's storage.

This shows;

  • Each node having four physical disks tied together in a RAID Level 5 array and presented to the node's OS as a single drive, found at /dev/sda.
  • Each node's OS uses three primary partitions for /boot, <swap> and /.
  • Three logical partitions are created within an extended partition;
    • /dev/sda5 backs a small partition used as a GFS2-formatted shared mount point.
    • /dev/sda6 backs the VMs designed to run primarily on an-node01.
    • /dev/sda7 backs the VMs designed to run primarily on an-node02.
  • Each of these three partitions backs its own DRBD resource;
    • /dev/drbd0 is backed by /dev/sda5.
    • /dev/drbd1 is backed by /dev/sda6.
    • /dev/drbd2 is backed by /dev/sda7.
  • All three DRBD resources are managed by clustered LVM.
  • The GFS2-formatted LV is mounted on /shared on both nodes.
  • Each VM gets its own LV.
  • All three DRBD resources sync over the Storage Network, which uses the bonded interface bond1 (backed by eth1 and eth4).

Don't worry if this seems illogical at this stage. The main things to look at are the drbdX devices and how they each tie back to a corresponding sdaY device on either node.

 _________________________________________________                 _________________________________________________ 
| [ an-node01 ]                                   |               |                                   [ an-node02 ] |
|  ________       __________                      |               |                      __________       ________  |
| [_disk_1_]--+--[_/dev/sda_]                     |               |                     [_/dev/sda_]--+--[_disk_1_] |
|  ________   |    |   ___________    _______     |               |     _______    ___________   |    |   ________  |
| [_disk_2_]--+    +--[_/dev/sda1_]--[_/boot_]    |               |    [_/boot_]--[_/dev/sda1_]--+    +--[_disk_2_] |
|  ________   |    |   ___________    ________    |               |    ________    ___________   |    |   ________  |
| [_disk_3_]--+    +--[_/dev/sda2_]--[_<swap>_]   |               |   [_<swap>_]--[_/dev/sda2_]--+    +--[_disk_3_] |
|  ________   |    |   ___________    ___         |               |         ___    ___________   |    |   ________  |
| [_disk_4_]--/    +--[_/dev/sda3_]--[_/_]        |               |        [_/_]--[_/dev/sda3_]--+    \--[_disk_4_] |
|                  |   ___________                |               |                ___________   |                  |
|                  +--[_/dev/sda5_]------------\  |               |  /------------[_/dev/sda5_]--+                  |
|                  |   ___________             |  |               |  |             ___________   |                  |
|                  +--[_/dev/sda6_]----------\ |  |               |  | /----------[_/dev/sda6_]--+                  |
|                  |   ___________           | |  |               |  | |           ___________   |                  |
|                  \--[_/dev/sda7_]--------\ | |  |               |  | | /--------[_/dev/sda7_]--/                  |
|        _______________    ____________   | | |  |               |  | | |   ____________    _______________        |
|    /--[_Clustered_LVM_]--[_/dev/drbd2_]--/ | |  |               |  | | \--[_/dev/drbd2_]--[_Clustered_LVM_]--\    |
|   _|__                     |   _______     | |  |               |  | |      |   _______                    __|_   |
|  [_PV_]                    \--{_bond1_}    | |  |               |  | |      \--{_bond1_}                  [_PV_]  |
|   _|________                               | |  |               |  | |                               ________|_   |
|  [_an02-vg0_]                              | |  |               |  | |                              [_an02-vg0_]  |
|    |   ________________________    ....... | |  |               |  | |  _____     ________________________   |    |
|    +--[_/dev/an02-vg0/vm0003_1_]---:.vm3.: | |  |               |  | | [_vm3_]---[_/dev/an02-vg0/vm0003_1_]--+    |
|    |   ________________________    ....... | |  |               |  | |  _____     ________________________   |    |
|    \--[_/dev/an02-vg0/vm0004_1_]---:.vm4.: | |  |               |  | | [_vm4_]---[_/dev/an02-vg0/vm0004_1_]--/    |
|          _______________    ____________   | |  |               |  | |   ____________    _______________          |
|      /--[_Clustered_LVM_]--[_/dev/drbd1_]--/ |  |               |  | \--[_/dev/drbd1_]--[_Clustered_LVM_]--\      |
|     _|__                     |   _______     |  |               |  |      |   _______                    __|_     |
|    [_PV_]                    \--{_bond1_}    |  |               |  |      \--{_bond1_}                  [_PV_]    |
|     _|________                               |  |               |  |                               ________|_     |
|    [_an01-vg0_]                              |  |               |  |                              [_an01-vg0_]    |
|      |   ________________________     _____  |  |               |  | .......    ________________________   |      |
|      +--[_/dev/an01-vg0/vm0001_1_]---[_vm1_] |  |               |  | :.vm1.:---[_/dev/an01-vg0/vm0001_1_]--+      |
|      |   ________________________     _____  |  |               |  | .......    ________________________   |      |
|      \--[_/dev/an01-vg0/vm0002_1_]---[_vm2_] |  |               |  | :.vm2.:---[_/dev/an01-vg0/vm0002_1_]--/      |
|            _______________    ____________   |  |               |  |   ____________    _______________            |
|        /--[_Clustered_LVM_]--[_/dev/drbd0_]--/  |               |  \--[_/dev/drbd0_]--[_Clustered_LVM_]--\        |
|       _|__                     |   _______      |               |       |   _______                    __|_       |
|      [_PV_]                    \--{_bond1_}     |               |       \--{_bond1_}                  [_PV_]      |
|       _|__________                              |               |                              __________|_       |
|      [_shared-vg0_]                             |               |                             [_shared-vg0_]      |
|       _|_________________________               |               |               _________________________|_       |
|      [_/dev/shared-vg0/lv_shared_]              |               |              [_/dev/shared-vg0/lv_shared_]      |
|        |   ______    _________                  |               |                  _________    ______   |        |
|        \--[_GFS2_]--[_/shared_]                 |               |                 [_/shared_]--[_GFS2_]--/        |
|                                          _______|   _________   |_______                                          |
|                                         | bond1 =--| Storage |--= bond1 |                                         |
|                                         |______||  | Network |  ||______|                                         |
|_________________________________________________|  |_________|  |_________________________________________________|
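To give a rough sense of how one of these mappings is expressed, a DRBD (8.3-style) resource definition for /dev/drbd0 might look like the sketch below. This is illustrative only; the node names and Storage Network addresses follow the conventions used in this tutorial, and it is not necessarily the exact configuration we will end up using.

# Sketch: /dev/drbd0 backed by /dev/sda5 on both nodes, syncing over the Storage Network.
resource r0 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        meta-disk internal;
        on an-node01.alteeve.ca {
                address 10.10.0.1:7788;
        }
        on an-node02.alteeve.ca {
                address 10.10.0.2:7788;
        }
}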

Component; Clustered LVM

With DRBD providing the raw storage for the cluster, we must next consider partitions. This is where Clustered LVM, known as CLVM, comes into play.

CLVM is ideal in that it uses DLM, the distributed lock manager. This means it won't allow access to cluster members outside of corosync's closed process group, which, in turn, requires quorum.

It is ideal because it can take one or more raw devices, known as "physical volumes", or simply as PVs, and combine their raw space into one or more "volume groups", known as VGs. These volume groups then act just like a typical hard drive and can be "partitioned" into one or more "logical volumes", known as LVs. These LVs are where KVM's virtual machine guests will exist and where we will create our GFS2 clustered file system.

LVM is particularly attractive because of how flexible it is. We can easily add new physical volumes later, and then grow an existing volume group to use the new space. This new space can then be given to existing logical volumes, or entirely new logical volumes can be created. This can all be done while the cluster is online, offering an upgrade path with no down time.
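As a quick illustration of that flexibility, growing the storage under a running cluster boils down to a few commands. The device and volume names below are hypothetical and simply follow the naming used in the diagram above.

# Turn a (hypothetical) new DRBD resource into a physical volume.
pvcreate /dev/drbd3

# Add the new PV's space to an existing volume group.
vgextend an01-vg0 /dev/drbd3

# Give 50 GiB of the new space to an existing logical volume.
lvextend -L +50G /dev/an01-vg0/vm0001_1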

Component; GFS2

With DRBD providing the cluster's raw storage space, and Clustered LVM providing the logical partitions, we can now look at the clustered file system. This is the role of the Global File System version 2, known simply as GFS2.

It works much like a standard filesystem, with user-land tools like mkfs.gfs2, fsck.gfs2 and so on. The major difference is that it and clvmd use the cluster's distributed locking mechanism provided by the dlm_controld daemon. Once formatted, the GFS2-formatted partition can be mounted and used by any node in the cluster's closed process group. All nodes can then safely read from and write to the data on the partition simultaneously.

Note: GFS2 is only supported when run on top of Clustered LVM LVs. This is because, in certain failure states, gfs2_controld will call dmsetup to disconnect the GFS2 partition from its backing storage.
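For reference, formatting an LV as GFS2 looks something like the sketch below. The cluster name is a placeholder for whatever name you give your cluster; '-p lock_dlm' selects clustered locking, '-j 2' creates one journal per node, and '-t' sets the lock table name.

mkfs.gfs2 -p lock_dlm -j 2 -t <cluster_name>:shared /dev/shared-vg0/lv_shared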

Component; DLM

One of the major roles of a cluster is to provide distributed locking for clustered storage and resource management.

Whenever a resource, GFS2 filesystem or clustered LVM LV needs a lock, it sends a request to dlm_controld, which runs in userspace. This communicates with DLM in the kernel. If the lockspace does not yet exist, DLM will create it and then give the lock to the requester. Should a subsequent lock request come in for the same lockspace, it will be rejected. Once the application using the lock is finished with it, it will release the lock. After this, another node may request and receive a lock for the lockspace.

If a node fails, fenced will alert dlm_controld that a fence is pending and new lock requests will block. After a successful fence, fenced will alert DLM that the node is gone and any locks the victim node held are released. At this time, other nodes may request a lock on the lockspaces the lost node held and can perform recovery, like replaying a GFS2 filesystem journal, prior to resuming normal operation.

Note that DLM locks are not used for actually locking the file system. That job is still handled by plock() calls (POSIX locks).
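If you are curious about which lockspaces exist on a running node, the dlm_tool utility shipped with the cluster packages can show them; a couple of read-only examples (the exact output will depend on what is running):

# List the DLM lockspaces currently held on this node (eg: clvmd and one per mounted GFS2 file system).
dlm_tool ls

# Dump the DLM debug buffer, useful when diagnosing stuck lockspaces.
dlm_tool dump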

Component; KVM

Two of the most popular open-source virtualization platforms available in the Linux world today are Xen and KVM. The former is maintained by Citrix and the latter by Red Hat. It would be difficult to say which is "better", as they're both very good. Xen can be argued to be more mature, while KVM is the "official" solution supported by Red Hat in EL6.

We will be using the KVM hypervisor, within which our highly-available virtual machine guests will reside. KVM is built into the Linux kernel, so the host operating system runs directly on the bare hardware and acts as the hypervisor itself. Contrast this with Xen, where a dedicated hypervisor runs on the hardware and even the installed management OS (dom0) is itself just another virtual machine.

Node Installation

This section is going to be intentionally vague, as I don't want to influence too heavily what hardware you buy or how you install your operating systems. However, we need a baseline, a minimum system requirement of sorts. Also, I will refer fairly frequently to my setup, so I will share with you the details of what I bought. Please don't take this as an endorsement though... Every cluster will have its own needs, and you should plan and purchase for your particular needs.

In my case, my goal was to have a low-power consumption setup and I knew that I would never put my cluster into production as it's strictly a research and design cluster. As such, I can afford to be quite modest.

Minimum Requirements

This will cover two sections;

  • Node Minimum requirements
  • Infrastructure requirements

The nodes are the two separate servers that will, together, form the base of our cluster. The infrastructure covers the networking and the switched power bars, called PDUs.

Node Requirements

General;

As these nodes will host virtual machines, they will need sufficient RAM and virtualization-enabled CPUs. Most, though not all, modern processors support hardware virtualization extensions. Finally, you need to have sufficient network bandwidth across two independent links to support the maximum burst storage traffic plus enough headroom to ensure that cluster traffic is never interrupted.
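You can quickly confirm that a CPU advertises hardware virtualization support by looking for the vmx (Intel) or svm (AMD) flags:

# A non-zero count means the CPU advertises hardware virtualization support.
grep -E -c '(vmx|svm)' /proc/cpuinfo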

Network;

This tutorial will use three independent networks, each using two physical interfaces in a bonded configuration. These will route through two separate managed switches for high-availability networking. Each network will be dedicated to a given traffic type. This requires six interfaces and, with a separate IPMI interface, consumes a staggering seven ports per node.

Understanding that this may not be feasible, you can drop this to just two connections in a single bonded interface. If you decide to do this, you will need to configure QoS to ensure that totem multicast traffic gets the highest priority, as even a delay of less than one second can cause the cluster to break. You also need to test sustained, heavy disk traffic to ensure that it doesn't cause problems. In particular, run storage tests from a virtual machine and then live-migrate that machine to create a "worst case" network load. If that succeeds, you are probably safe. All of this is outside of this tutorial's scope though.

Power;

In production, you will want to use servers which have redundant power supplies and ensure that each power supply connects to a separate power source.

Out-of-Band Management;

As we will discuss later, the ideal method of fencing a node is to use IPMI or one of the vendor-specific variants like HP's iLO, Dell's DRAC or IBM's RSA. This allows another node in the cluster to force the host node to power off, regardless of the state of the operating system. Critically, it can confirm to the caller once the node has been shut down, which allows for the cluster to safely and confidently recover lost services.
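As a quick sanity test of an IPMI interface, ipmitool can query and control a node's power state over the network. The host name and credentials below are examples only:

# Check the remote node's power state via its IPMI BMC.
ipmitool -I lanplus -H an-node02.ipmi -U admin -P secret chassis power status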

The two nodes used to create this tutorial have the following hardware (again, these will never see production use, so I could afford to go low);

Infrastructure Requirements

Network;

You will need two separate switches in order to provide high availability. These do not need to be stacked or even managed, but you do need to consider their actual capabilities and disregard the stated capacity. What I mean by this, in essence, is that not all gigabit equipment is equal. You will need to calculate how much bandwidth you need (in raw data throughput and in packets-per-second) and confirm that the switch can sustain that load. Most switches rate these two values as their switching fabric capacity, so be sure to look closely at the specifications.

Another thing to consider is whether you wish to run at an MTU higher than 1500 bytes per packet. This is generally referred to in specification sheets as "jumbo frame" support. However, many lesser companies will advertise support for jumbo frames, but only support up to 4 KiB. Most professional networks looking to implement large MTU sizes aim for 9 KiB frame sizes, so be sure to look at the actual size of the largest supported jumbo frame before purchasing network equipment.
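If you do go the jumbo frame route, the MTU is set per interface in the relevant ifcfg files and can be verified end-to-end with a non-fragmenting ping. A sketch, assuming a 9000 byte MTU and the Storage Network addresses used in this tutorial:

# In the ifcfg files for the jumbo-frame network:
MTU="9000"

# Verify from one node to the other; 8972 = 9000 bytes minus 28 bytes of IP/ICMP headers.
ping -M do -s 8972 -c 3 10.10.0.2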

Power;

As we will discuss later, we need a backup fence device. This will be implemented using a switched power distribution unit, called a PDU, which is effectively a power bar whose outlets can be independently turned on and off over the network. This tutorial uses an APC AP7900 PDU, but many others are available. Should you choose to use another make or model, you must first ensure that it has a supported fence agent. Ensuring this is an exercise for the reader.
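Most fence agents can be tested directly from the command line before they are ever referenced in the cluster configuration. For example, a status check against an APC PDU might look like this; the address, credentials and outlet number are examples only:

# Ask the PDU for the state of outlet 1; a working agent will report whether the outlet is on or off.
fence_apc -a pdu1.alteeve.ca -l apc -p secret -n 1 -o status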

In production environments, it is ideal to have each PDU backed by its own UPS, and each UPS connected to a separate mains electrical circuit. This way, the failure of a given PDU, UPS or mains circuit will not cause an interruption to the cluster. Do be sure to plan your power infrastructure to supply enough power to drive the entire cluster at full load in a failed state. That is, more plainly, don't divide the total load in two when planning your infrastructure. You must always plan for a failed state!

The hardware used in this tutorial is;

Two Notes;

  1. The D-Link switch I use is being phased out and is being replaced by the DGS-3120-24TC models. The DGS-3120 models are much improved over the DGS-3100 series and can be safely used in stacked configuration (thus enabling the use of VLAN LAGs). The DGS-3100 would interrupt traffic when a switch in the stack recovered, which would partition the cluster. This forced me to unstack the switches in this tutorial.
  2. Given my budget, I could not afford to purchase redundant power supplies for use in this tutorial. As such, my test cluster has its power as a single point of failure. For learning, this is fine, but it is strongly advised against in production. I do show an example configuration of redundant PSUs spread across separate PDUs from a production cluster.

Pre-Installation Planning

Before you assemble your servers, it is highly advised to first record the MAC addresses of the NICs. I always write a little file called <node>-nics.txt, recording each MAC address against the device name I plan to assign to it.

vim ~/an-node01-nics.txt
eth0	00:E0:81:C7:EC:49	# Back-Channel Network - Link 1
eth1	00:E0:81:C7:EC:48	# Storage Network - Link 1
eth2	00:E0:81:C7:EC:47	# Internet-Facing Network - Link 1
eth3	00:1B:21:9D:59:FC	# Back-Channel Network - Link 2
eth4	00:1B:21:BF:70:02	# Storage Network - Link 2
eth5	00:1B:21:BF:6F:FE	# Internet-Facing Network - Link 2

How, or even if, you record this is entirely up to you.
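The MAC addresses themselves are easy to collect; sysfs exposes them for each interface:

# Print the MAC address of each interface as currently seen by the kernel.
for i in 0 1 2 3 4 5; do echo -n "eth${i}  "; cat /sys/class/net/eth${i}/address; done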

OS Installation

Warning: EL6.1 shipped with a version of corosync that had a token retransmit bug. On slower systems, a race condition could cause totem tokens to be retransmitted, causing significant performance problems. This has been resolved in EL6.2, so please be sure to upgrade.
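Checking which corosync build you have is a one-liner, and updating it is part of a normal update run:

# Confirm the installed corosync version, then update if you are still on the EL6.1 build.
rpm -q corosync
yum update corosync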

Beyond being based on RHEL 6, there are no requirements for how the operating system is installed. This tutorial is written using "minimal" installs, and as such, installation instructions will be provided that will install all needed packages if they aren't already installed on your nodes.

A few notes about the installation used for this tutorial; both the firewall (iptables) and selinux are disabled.

Obviously, this significantly reduces the security of your nodes. For learning, which is the goal here, this helps keep the focus on clustering and simplifies debugging when things go wrong. In production clusters though, these steps are ill advised. It is strongly suggested that you first enable the firewall and then, once that is working, enable selinux. Leaving selinux for last is intentional, as it generally takes the most work to get right.

Network Security

When building production clusters, you will want to consider two options with regard to network security.

First, the interfaces connected to an untrusted network, like the Internet, should not have an IP address, though the interfaces themselves will need to be up so that virtual machines can route through them to the outside world. Alternatively, anything inbound from the virtual machines or from the untrusted network should be dropped by the firewall.

Second, if you can not run the cluster communications or storage traffic on dedicated network connections over isolated subnets, you will need to configure the firewall to block everything except the ports needed by storage and cluster traffic. The default ports are below.

Component     Protocol        Port        Note
dlm           TCP             21064
drbd          TCP             7788+       Each DRBD resource will use an additional port, generally counting up (ie: r0 will use 7788, r1 will use 7789, r2 will use 7790 and so on).
luci          TCP             8084        Optional web-based configuration tool, not used in this tutorial.
modclusterd   TCP             16851
ricci         TCP             11111
totem         UDP/multicast   5404, 5405  Uses a multicast group for cluster communications.
Note: As of EL6.2, you can now use unicast for totem communication instead of multicast. This is not advised, and should only be used for clusters of two or three nodes on networks where unresolvable multicast issues exist. If using gfs2, as we do here, using unicast for totem is strongly discouraged.
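When you do come back and enable the firewall for production, a rough sketch of rules covering the table above might look like this. Adjust the DRBD port range, source networks and interfaces to your own configuration:

iptables -I INPUT -p tcp --dport 21064 -j ACCEPT                    # dlm
iptables -I INPUT -p tcp --dport 7788:7790 -j ACCEPT                # drbd r0 through r2
iptables -I INPUT -p tcp --dport 16851 -j ACCEPT                    # modclusterd
iptables -I INPUT -p tcp --dport 11111 -j ACCEPT                    # ricci
iptables -I INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT  # totem
iptables -I INPUT -m pkttype --pkt-type multicast -j ACCEPT         # totem's multicast group
service iptables save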

As mentioned above, we will disable selinux and iptables. This is to simplify the learning process and both should be enabled pre-production.

To disable the firewall (note that I disable both iptables and ip6tables):

chkconfig iptables off
chkconfig ip6tables off
/etc/init.d/iptables stop
/etc/init.d/ip6tables stop

To disable selinux:

cp /etc/selinux/config /etc/selinux/config.orig
vim /etc/selinux/config
diff -u /etc/selinux/config.orig /etc/selinux/config
--- /etc/selinux/config.orig	2012-06-15 18:13:12.416646749 -0400
+++ /etc/selinux/config	2012-06-15 18:09:46.920938956 -0400
@@ -4,7 +4,7 @@
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
-SELINUX=enforcing
+SELINUX=disabled
 # SELINUXTYPE= can take one of these two values:
 #     targeted - Targeted processes are protected,
 #     mls - Multi Level Security protection.

You must reboot for the selinux changes to take effect.
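After the reboot, you can confirm that the change took effect:

getenforce
Disabled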

Network

Before we begin, let's take a look at a block diagram of what we're going to build. This will help when trying to see what we'll be talking about.

                                                              ______________                                                         
                                                             [___Internet___]                                                        
  _____________________________________________________             |             _____________________________________________________ 
 | [ an-node01 ]                                       |            |            |                                       [ an-node02 ] |
 |                       ____________    ______________|        ____|____        |______________    ____________                       |
 |                      |    vbr2    |--| bond2        |       | [ IFN ] |       |        bond2 |--|   vbr2     |                      |
 |  _________________   | 10.255.0.1 |  | ______       |      _|_________|_      |       ______ |  | 10.255.0.2 |  ................... |
 | | [ vm0001-dev ]  |  |____________|  || eth2 =--\   |     |   Switch 1  |     |   /--= eth2 ||  |____________|  :  [ vm0001-dev ] : |
 | | [ Dev Server ]  |    | | : :       ||_____|    \--=-----|_____________|-----=--/    |_____||       | | : :    :  [ Dev Server ] : |
 | |           ______|    | | : :       | ______    /--=-----|   Switch 2  |-----=--\    ______ |       | | : :    :.......          : |
 | |          | eth0 =----/ | : :       || eth5 =--/   |     |_____________|     |   \--= eth5 ||       | | : :----= eth0 :          : |
 | |          |_____||      | : :       ||_____|       |                         |       |_____||       | | :      ::.....:          : |
 | |      10.254.0.1 |      | : :       |______________|                         |______________|       | | :      :                 : |
 | |_________________|      | : :        ______________|                         |______________        | | :      :.................: |
 |                          | : :       | bond1        |        _________        |        bond1 |       | | :                          |
 |  _________________       | : :       |   10.10.0.1  |       | [ SN  ] |       | 10.10.0.2    |       | | :      ................... |
 | | [ vm0002-web ]  |      | : :       | ______       |      _|_________|_      |       ______ |       | | :      :  [ vm0002-web ] : |
 | | [ Web Server ]  |      | : :       || eth1 =--\   |     |   Switch 1  |     |   /--= eth1 ||       | | :      :  [ Web Server ] : |
 | |           ______|      | : :       ||_____|    \--=-----|_____________|-----=--/    |_____||       | | :      :.......          : |
 | |          | eth0 =------/ : :       | ______    /--=-----|   Switch 2  |-----=--\    ______ |       | | :------= eth0 :          : |
 | |          |_____||        : :       || eth4 =--/   |     |_____________|     |   \--= eth4 ||       | |        ::.....:          : |
 | |      10.254.0.2 |        : :       ||_____|       |                         |       |_____||       | |        :                 : |
 | |_________________|        : :       |______________|                         |______________|       | |        :.................: |
 |                            : :        ______________|                         |______________        | |                            |
 | ...................        : :       | bond0        |        _________        |        bond0 |       | |         _________________  |
 | : [ vm0003-db  ]  :        : :       |   10.20.0.1  |       | [ BCN ] |       | 10.20.0.2    |       | |        |  [ vm0003-db  ] | |
 | : [ DB Server  ]  :        : :       | ______       |      _|_________|_      |       ______ |       | |        |  [ DB Server  ] | |
 | :          .......:        : :       || eth0 =--\   |  /--|   Switch 1  |--\  |   /--= eth0 ||       | |        |______           | |
 | :          : eth0 =--------: :       ||_____|    \--=--+--|_____________|--+--=--/    |_____||       | \--------= eth0 |          | |
 | :          :.....::          :       | ______    /--=--+--|   Switch 2  |--+--=--\    ______ |       |          ||_____|          | |
 | :                 :          :       || eth3 =--/   |  |  |_____________|  |  |   \--= eth3 ||       |          | 10.254.0.3      | |
 | :.................:          :       ||_____|       |  |     |       |     |  |       |_____||       |          |_________________| |
 |                              :       |______________|  |     |       |     |  |______________|       |                              |
 | ...................          :                      |  |     |       |     |  |                      |           _________________  |
 | : [ vm0004-win ]  :          :                      |  |     |       |     |  |                      |          |  [ vm0004-win ] | |
 | : [ MS Server  ]  :          :                      |  |     |       |     |  |                      |          |  [ MS Server  ] | |
 | :          .......:          :                      |  |     |       |     |  |                      |          |______           | |
 | :          : NIC0 =----------:                      |  |     |       |     |  |                      \----------= NIC0 |          | |
 | :          :.....::                           ______|  |     |       |     |  |______                           ||_____|          | |
 | :                 :                  _____   | IPMI =--/     |       |     \--= IPMI |   _____                  | 10.254.0.4      | |
 | :.................:                 [_BMC_]--|_____||        |       |        ||_____|--[_BMC_]                 |_________________| |
 |                                                     |        |       |        |                                                     |
 |                                 ______ ______       |        |       |        |       ______ ______                                 |
 |                                | PSU1 | PSU2 |      |        |       |        |      | PSU2 | PSU1 |                                |
 |________________________________|______|______|______|        |       |        |______|______|______|________________________________|
                                       || ||                ____|_     _|____                || ||                                      
                                       || ||               | PDU1 |   | PDU2 |               || ||                                      
                                       || ||               |______|   |______|               || ||                                      
                                       || ||                 || ||     || ||                 || ||                                      
                                       || \\===[ Power 1 ]===// ||     || \\===[ Power 1 ]===// ||                                      
                                       \\======[ Power 2 ]======||=====//                       ||                                      
                                                                \\=============[ Power 2 ]======//

The cluster will use three separate /16 (255.255.0.0) networks;

Note: There are situations where it is not possible to add additional network cards, blades being a prime example. In these cases it will be up to the admin to decide how to proceed. If there is sufficient bandwidth, you can merge all networks, but it is advised in such cases to isolate IFN traffic from the SN/BCN traffic using VLANs.
Purpose                        Subnet          Notes
Internet-Facing Network (IFN)  10.255.0.0/16
  • Each node will use 10.255.0.x where x matches the node ID.
  • Virtual Machines in the cluster that need to be connected to the Internet will use 192.168.1.0/24. These IPs are intentionally separate from the two nodes' IFN bridge IPs. If you are particularly concerned about security, you can drop the bridges' IPs once the cluster is built and add a firewall rule to reject all traffic from the VMs.
Storage Network (SN)           10.10.0.0/16
  • Each node will use 10.10.0.x where x matches the node ID.
Back-Channel Network (BCN)     10.20.0.0/16
  • Each node will use 10.20.0.x where x matches the node ID.
  • Node-specific IPMI or other out-of-band management devices will use 10.20.1.x where x matches the node ID.
  • Multi-port fence devices, switches and similar will use 10.20.2.z where z is a simple sequence.
  • Miscellaneous equipment in the cluster, like managed switches, will use 10.20.3.z where z is a simple sequence.
Optional OpenVPN Network       10.30.0.0/16
  • For clients behind firewalls, I like to create a VPN server for the cluster nodes to log into when support is needed. This way, the client retains control over when remote access is available simply by starting and stopping the openvpn daemon. This will not be discussed any further in this tutorial.

We will be using six interfaces, bonded into three pairs of two NICs in Active/Passive (mode 1) configuration. Each link of each bond will be on alternate, unstacked switches. This is the only bonding configuration supported by Red Hat for use in clusters. We will also configure affinity by specifying interfaces eth0, eth1 and eth2 as primary for the bond0, bond1 and bond2 interfaces, respectively. This way, when everything is working fine, all traffic is routed through the same switch for maximum performance.

Note: Only the bonded interface used by corosync must be in Active/Passive configuration (bond0 in this tutorial). If you want to experiment with other bonding modes for bond1 or bond2, please feel free to do so. That is outside the scope of this tutorial, however.

If you cannot install six interfaces in your server, then four interfaces will do, with the SN and BCN networks merged.

Warning: If you wish to merge the SN and BCN onto one interface, test to ensure that the storage traffic will not block cluster communication. Test by forming your cluster and then pushing your storage to maximum read and write performance for an extended period of time (minimum of several seconds). If the cluster partitions, you will need to do some advanced quality-of-service or other network configuration to ensure reliable delivery of cluster network traffic.

In this tutorial, we will use two D-Link DGS-3120-24TC/SI, stacked, using three VLANs to isolate the three networks.

  • BCN will have VLAN ID of 1, which is the default VLAN.
  • SN will have VLAN ID number 100.
  • IFN will have VLAN ID number 101.
Note: Switch configuration details.

The actual mapping of interfaces to bonds to networks will be:

Subnet  Cable Colour  VLAN ID  Link 1  Link 2  Bond   IP
BCN     Blue          1        eth0    eth3    bond0  10.20.0.x
SN      Green         100      eth1    eth4    bond1  10.10.0.x
IFN     Black         101      eth2    eth5    bond2  10.255.0.x

Setting Up the Network

Warning: The following steps can easily get confusing, given how many files we need to edit. Losing access to your server's network is a very real possibility! Do not continue without direct access to your servers! If you have out-of-band access via iKVM, console redirection or similar, be sure to test that it is working before proceeding.

Planning The Use of Physical Interfaces

In production clusters, I intentionally use three separate dual-port controllers (two on-board interfaces plus two separate dual-port PCIe cards). I then ensure that no bond uses two interfaces on the same physical board. Thus, should a card or its bus interface fail, none of the bonds will fail completely.

Let's take a look at an example layout;

 ____________________                            
| [ an-node01 ]      |                           
|         ___________|            _______              
|        |     ______|           | bond0 |             
|        | O  | eth0 =-----------=---.---=------{
|        | n  |_____||  /--------=--/    |             
|        | b         |  |        |_______|             
|        | o   ______|  |         _______        
|        | a  | eth1 =--|--\     | bond1 |      
|        | r  |_____||  |   \----=--.----=------{
|        | d         |  |  /-----=--/    |       
|        |___________|  |  |     |_______|       
|         ___________|  |  |      _______        
|        |     ______|  |  |     | bond2 |       
|        | P  | eth2 =--|--|-----=---.---=------{
|        | C  |_____||  |  |  /--=--/    |       
|        | I         |  |  |  |  |_______|       
|        | e   ______|  |  |  |                  
|        |    | eth3 =--/  |  |                  
|        | 1  |_____||     |  |                  
|        |___________|     |  |                  
|         ___________|     |  |                  
|        |     ______|     |  |                  
|        | P  | eth4 =-----/  |                  
|        | C  |_____||        |                  
|        | I         |        |                  
|        | e   ______|        |                  
|        |    | eth5 =--------/                  
|        | 2  |_____||                           
|        |___________|                           
|____________________|

Consider the possible failure scenarios;

  • The on-board controllers fail;
    • bond0 falls back onto eth3 on the PCIe 1 controller.
    • bond1 falls back onto eth4 on the PCIe 2 controller.
    • bond2 is unaffected.
  • The PCIe #1 controller fails;
    • bond0 remains up on its eth0 interface but loses its redundancy as eth3 is down.
    • bond1 is unaffected.
    • bond2 falls back onto eth5 on the PCIe 2 controller.
  • The PCIe #2 controller fails;
    • bond0 is unaffected.
    • bond1 remains up on its eth1 interface but loses its redundancy as eth4 is down.
    • bond2 remains up on its eth2 interface but loses its redundancy as eth5 is down.

In all three failure scenarios, no network interruption occurs, making for the most robust configuration possible.

Managed and Stacking Switch Notes

Note: If you have two stacked switches, be extra careful to test them to ensure that traffic will not block when a switch is lost or is recovering!

There are two things you need to be wary of with managed switches.

  • Don't stack them unless you can confirm that there will be no interruption in traffic flow on the surviving switch when the lost switch disappears or recovers. It may seem like it makes sense to stack them and create Link Aggregation Groups, but this can cause problems. When in doubt, don't stack the switches.
  • Disable Spanning Tree Protocol on all ports used by the cluster. Otherwise, when a lost switch is recovered, STP negotiation will cause traffic to stop on the ports for upwards of thirty seconds. This is more than enough time to partition a cluster.

If you use three VLANs across two unstacked switches, be sure to use a dedicated uplink for each VLAN. You may need to enable STP on these uplinks to avoid switch loops if the VLANs themselves are not enough. The reason for doing this is to ensure that cluster communications always have a clear path for traffic. If you had only one uplink between the two switches, and you found yourself in a situation where a node's BCN and SN faulted through the backup switch, the storage traffic could saturate the uplink and cause intolerable latency for the BCN traffic, leading to cluster partitioning.

Connecting Fence Devices

As we will see soon, each node can be fenced either by calling its IPMI interface or by calling the PDU and cutting the node's power. Each of these methods is inherently a single point of failure, as each has only one network connection. To work around this concern, we will connect all IPMI interfaces to one switch and the PDUs to the secondary switch. This way, should a switch fail, only one of the two fence devices will fail and fencing in general will still be possible via the alternate fence device.

Generally speaking, I like to connect the IPMI interfaces to the primary switch and the PDUs to the backup switch.

Making Sure We Know Our Interfaces

When you installed the operating system, the network interface names were assigned to the physical network interfaces somewhat randomly. It is more than likely that you will want to re-order them.

Before you start moving interface names around, you will want to consider which physical interfaces you will want to use on which networks. At the end of the day, the names themselves have no meaning. At the very least though, make them consistent across nodes.

Some things to consider, in order of importance:

  • If you have a shared interface for your out-of-band management interface, like IPMI or iLO, you will want that interface to be on the Back-Channel Network.
  • For redundancy, you want to spread out which interfaces are paired up. In my case, I have three interfaces on my mainboard and three additional add-in cards. I will pair each onboard interface with an add-in interface. In my case, my IPMI interface physically piggy-backs on one of the onboard interfaces so this interface will need to be part of the BCN bond.
  • Your interfaces with the lowest latency should be used for the back-channel network.
  • Your two fastest interfaces should be used for your storage network.
  • The remaining two slowest interfaces should be used for the Internet-Facing Network bond.

In my case, all six interfaces are identical, so there is little to consider. The left-most interface on my system has IPMI, so its paired network interface will be eth0. I simply work my way left, incrementing as I go. What you do will be whatever makes most sense to you.

There is a separate, short tutorial on re-ordering network interfaces;
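For reference, on EL6 the name-to-MAC mapping lives in /etc/udev/rules.d/70-persistent-net.rules, with one line per interface along these lines (the MAC address shown is from my an-node01 notes above):

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:e0:81:c7:ec:49", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

Edit the NAME= value paired with each ATTR{address} to match your plan, then reboot for the new names to take effect.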

Once you have the physical interfaces named the way you like, proceed to the next step.

Planning Our Network

To setup our network, we will need to edit the ifcfg-ethX, ifcfg-bondX and ifcfg-vbr2 scripts. The last one will create a bridge, like a virtual network switch, which will be used to route network connections between the virtual machines and the outside world via the IFN. You will note that the bridge will have the IP address, not the bonded interface bond2, which will instead be slaved to the vbr2 bridge.

We're going to be editing a lot of files. It's best to lay out what we'll be doing in a chart. So our setup will be:

Node       BCN IP and Device   SN IP and Device    IFN IP and Device
an-node01  10.20.0.1 on bond0  10.10.0.1 on bond1  10.255.0.1 on vbr2 (bond2 slaved)
an-node02  10.20.0.2 on bond0  10.10.0.2 on bond1  10.255.0.2 on vbr2 (bond2 slaved)

Switch Network Daemons

The new NetworkManager daemon is much more flexible and is perfect for machines like laptops which move around networks a lot. However, it does this by making a lot of decisions for you and changing the network as it sees fit. As good as this is for laptops and the like, it's not appropriate for servers. We will want to use the traditional network service.

yum remove NetworkManager

Now enable network to start with the system.

chkconfig network on
chkconfig --list network
network        	0:off	1:off	2:on	3:on	4:on	5:on	6:off

Creating Some Network Configuration Files

Warning: Bridge configuration files must have a file name which will sort after the interface and bond files. The actual device name can be whatever you want, though. If the system tries to start a bridge before its slaved interface is up, it will fail. I personally like to use the name vbrX for "virtual machine bridge". You can use whatever makes sense to you, with the above concern in mind.

Start by touching the configuration files we will need.

touch /etc/sysconfig/network-scripts/ifcfg-bond{0,1,2}
touch /etc/sysconfig/network-scripts/ifcfg-vbr2

Now make a backup of your configuration files, in case something goes wrong and you want to start over.

mkdir /root/backups/
rsync -av /etc/sysconfig/network-scripts/ifcfg-eth* /root/backups/
sending incremental file list
ifcfg-eth0
ifcfg-eth1
ifcfg-eth2
ifcfg-eth3
ifcfg-eth4
ifcfg-eth5

sent 1467 bytes  received 126 bytes  3186.00 bytes/sec
total size is 1119  speedup is 0.70

Configuring The Bridge

We'll start in reverse order, crafting the bridge's script first.

an-node01 IFN Bridge:

vim /etc/sysconfig/network-scripts/ifcfg-vbr2
# Internet-Facing Network - Bridge
DEVICE="vbr2"
TYPE="Bridge"
BOOTPROTO="static"
IPADDR="10.255.0.1"
NETMASK="255.255.0.0"
GATEWAY="10.255.255.254"
DNS1="8.8.8.8"
DNS2="8.8.4.4"
DEFROUTE="yes"

Creating the Bonded Interfaces

Next up, we'll create the three bonding configuration files. This is where two physical network interfaces are tied together to work like a single, highly available network interface. You can think of a bonded interface as being akin to RAID level 1; a new virtual device is created out of two real devices.

We're going to see a long line called "BONDING_OPTS". Let's look at the meaning of these options before we look at the configuration;

  • mode=1 sets the bonding mode to active-backup.
  • The miimon=100 tells the bonding driver to check if the network cable has been unplugged or plugged in every 100 milliseconds.
  • The use_carrier=1 tells the bonding driver to use the network driver's carrier state, rather than MII polling, to determine the link state. Some drivers don't support that. If you run into trouble, try changing this to 0.
  • The updelay=120000 tells the driver to delay switching back to the primary interface for 120,000 milliseconds (2 minutes). This is designed to give the switch connected to the primary interface time to finish booting. Setting this too low may cause the bonding driver to switch back before the network switch is ready to actually move data. Some switches will not provide a link until it is fully booted, so please experiment.
  • The downdelay=0 tells the driver not to wait before changing the state of an interface when the link goes down. That is, when the driver detects a fault, it will switch to the backup interface immediately.

an-node01 BCN Bond:

vim /etc/sysconfig/network-scripts/ifcfg-bond0
# Back-Channel Network - Bond
DEVICE="bond0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth0"
IPADDR="10.20.0.1"
NETMASK="255.255.0.0"

an-node01 SN Bond:

vim /etc/sysconfig/network-scripts/ifcfg-bond1
# Storage Network - Bond
DEVICE="bond1"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth1"
IPADDR="10.10.0.1"
NETMASK="255.255.0.0"

an-node01 IFN Bond:

vim /etc/sysconfig/network-scripts/ifcfg-bond2
# Internet-Facing Network - Bond
DEVICE="bond2"
BRIDGE="vbr2"
BOOTPROTO="none"
NM_CONTROLLED="no"
ONBOOT="yes"
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=eth2"

Alter The Interface Configurations

With the bridge and bonds in place, we can now alter the interface configurations.

Which two interfaces you use in a given bond is entirely up to you. I've found it easiest to keep straight when I match the bondX to the primary interface's ethX number.

an-node01's eth0, the BCN bond0, Link 1:

vim /etc/sysconfig/network-scripts/ifcfg-eth0
# Back-Channel Network - Link 1
HWADDR="00:E0:81:C7:EC:49"
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"

an-node01's eth1, the SN bond1, Link 1:

vim /etc/sysconfig/network-scripts/ifcfg-eth1
# Storage Network - Link 1
HWADDR="00:E0:81:C7:EC:48"
DEVICE="eth1"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond1"
SLAVE="yes"

an-node01's eth2, the IFN bond2, Link 1:

vim /etc/sysconfig/network-scripts/ifcfg-eth2
# Internet-Facing Network - Link 1
HWADDR="00:E0:81:C7:EC:47"
DEVICE="eth2"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond2"
SLAVE="yes"

an-node01's eth3, the BCN bond0, Link 2:

vim /etc/sysconfig/network-scripts/ifcfg-eth3
# Back-Channel Network - Link 2
HWADDR="00:1B:21:9D:59:FC"
DEVICE="eth3"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"

an-node01's eth4, the SN bond1, Link 2:

vim /etc/sysconfig/network-scripts/ifcfg-eth4
# Storage Network - Link 2
HWADDR="00:1B:21:BF:70:02"
DEVICE="eth4"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond1"
SLAVE="yes"

an-node01's eth5, the IFN bond2, Link 2:

vim /etc/sysconfig/network-scripts/ifcfg-eth5
# Internet-Facing Network - Link 2
HWADDR="00:1B:21:BF:6F:FE"
DEVICE="eth5"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond2"
SLAVE="yes"

Loading The New Network Configuration

Simply restart the network service.

/etc/init.d/network restart
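Once the network is back up, it is worth confirming that each bond came up with both of its slaves and that the addresses landed where you expected:

# Show the state of a bond and its slave interfaces.
cat /proc/net/bonding/bond0

# Confirm the IP addresses landed on the right devices.
ip addr show bond0
ip addr show vbr2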

Updating /etc/hosts

On both nodes, update the /etc/hosts file to reflect your network configuration. Remember to add entries for your IPMI, switched PDUs and other devices.

vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# an-node01
10.20.0.1	an-node01 an-node01.bcn an-node01.alteeve.ca
10.20.1.1	an-node01.ipmi
10.10.0.1	an-node01.sn
10.255.0.1	an-node01.ifn

# an-node02
10.20.0.2	an-node02 an-node02.bcn an-node02.alteeve.ca
10.20.1.2	an-node02.ipmi
10.10.0.2	an-node02.sn
10.255.0.2	an-node02.ifn

# Fence devices
10.20.2.1       pdu1 pdu1.alteeve.ca
10.20.2.2       pdu2 pdu2.alteeve.ca

# VPN interfaces, if used.
10.30.0.1	an-node01.vpn
10.30.0.2	an-node02.vpn
Warning: Remember, whichever switch you have the IPMI interfaces connected to, be sure to connect the PDU to the opposite switch! If both fence types are on one switch, then that switch becomes a single point of failure!
Note: I like to run an OpenVPN server and set up my remote clusters and customers as clients on this VPN to enable rapid, secure remote access when the client's firewall blocks inbound connections. This offers the client the option of disabling the openvpn client daemon until they wish to enable access. This tends to be easier for the client to manage as opposed to manipulating the firewall on demand. This will be the only mention of the VPN in this tutorial, but explains the last entries in the file above.

Setting up SSH

Setting up SSH shared keys will allow your nodes to pass files between one another and execute commands remotely without needing to enter a password. This will be needed later when we want to enable applications like libvirtd and its tools, like virt-manager.

SSH is, on its own, a very big topic. If you are not familiar with SSH, please take some time to learn about it before proceeding. A great first step is the Wikipedia entry on SSH, as well as the SSH man page; man ssh.

It can be a bit confusing to keep SSH connections straight in your head. When you connect to a remote machine, you start the connection on your machine as the user you are logged in as. This is the source user. When you call the remote machine, you tell it what user you want to log in as. This is the remote user.

You will need to create an SSH key for each source user on each node, and then you will need to copy the newly generated public key to each remote machine's user directory that you want to connect to. In this example, we want to connect to either node, from either node, as the root user. So we will create a key for each node's root user and then copy the generated public key to the other node's root user's directory.

For each user, on each machine you want to connect from, run:

# The '2047' is just to screw with brute-forcers a bit. :)
ssh-keygen -t rsa -N "" -b 2047 -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4a:52:a1:c7:60:d5:e8:6d:c4:75:20:dd:62:2b:86:c5 root@an-node01.alteeve.ca
The key's randomart image is:
+--[ RSA 2047]----+
|    o.o=.ooo.    |
|   . +..E.+..    |
|    ..+= . o     |
|     oo = .      |
|    . .oS.       |
|     o .         |
|      .          |
|                 |
|                 |
+-----------------+

This will create two files: the private key called ~/.ssh/id_rsa and the public key called ~/.ssh/id_rsa.pub. The private key must never be group or world readable! That is, it should be set to mode 0600.

If you look closely when you created the ssh key, the node's fingerprint is shown (4a:52:a1:c7:60:d5:e8:6d:c4:75:20:dd:62:2b:86:c5 for an-node01 above). Make a note of the fingerprint for each machine, and then compare it to the one presented to you when you ssh to a machine for the first time. If you are presented with a fingerprint that doesn't match, you could be facing a "man in the middle" attack.

To look up a fingerprint in the future, you can run the following;

ssh-keygen -l -f ~/.ssh/id_rsa
2047 4a:52:a1:c7:60:d5:e8:6d:c4:75:20:dd:62:2b:86:c5 /root/.ssh/id_rsa.pub (RSA)

The two newly generated files should look like;

Private key:

cat ~/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEnwIBAAKCAQBs+CsWeKegqmtneZcLDvHV4QT1n+ajj98gkmjoLcIFW5g/VFRL
pSMMkwkQBgGDkmKPvYFa5OolL6qBQSAN1NpP8zET+1lZr4OFg/TZTuA8QnhNeh6V
mU2hSoyJfEkKJ6TVYg4s1rsbbTZPLdCDe9CMn/iI824WUu2wA8RwhF2WTqqTrWTW
4h8tYK9Y4eT4IYMXiYZ8+eQfzHyMaNxvUcI1Z8heMn/CEnrA67ja7Czi/ljYnw0I
3MXy9d2ANYjYahBLF2+ok19NS9tkFHDlcZTh0gTQ4vV5fksgdJjsWl5l/aLjnSRf
x2pQrMl3w8U7JBpr0PWJPIuzd4q47+KBI1A9AgEjAoIBADTtkUVtzcMQ8lbUqHMV
4y1eqqMwaLXYKowp2y7xp2GwJWCWrJnFPOjZs/HXCAy00Ml5TXVKnZ0IhgRENCP5
q92wos8w8OJrMUDZsXDdKxX0ZlGEdUFZFxPTwJqM0wTuryXQiorOsqbr5y3Fy62T
6PPYq+q/YVtM2dkmZrpO66DGcTkBA8tq8tTU3TdqZEVfmCzM9DIGz2hprvky+yDU
Pa296CP7+lHFty34K6j/WxD49+aKrdxXxdLbH/3Wfq7a9fu/FuYObPRtXoYRJNGP
ZEzfVoNwVdc3vETuzZPDoidkc4jomA4vM4cTS1EvwEWVHfaSdIE0wF16N1FlDgNA
hKsCgYEA9Xp5vGoPRer3hTSglGrPOTTkGEhXiE/JDMZ7w4fk2lXo+Q7HqxetrS6l
hMxY+x2W0FBfKwJqBuhVv4Y5MPLbC2JazwYDoP85g6RWH72ebsqdYwYvSx808iDs
C8HArWv8RtQ/K1pRVkq0GPhTdc22sYE9aKa5Hc6nd0SEmq+hLoUCgYBxo9c3M28h
jDpxwTkYszMfpIb++tCSrcBw8guqdqjhW6yH9kXva3NjfuzpOisb7cFN6dcSqjaC
HEZjpBWPUGLOPMnL1/mSsTErusgyh2+x8WjRjuqBJrh7CDN8gejMiski5nALQpxt
s6PKI5WHVqPQ395+549LQnoaCROyf4TUWQKBgFQp/doy/ewWC7ikVFAkntHI/b8u
vuzoJ6yb0qlwa7iSe8MbAwaldo8IrcchfZfs40AbjlfjkhD/M1ebu9ZEot9U6+81
QxKgpgE/qH/pPaJUGLQ8ooAn9OVNHbrjWADx0tZ0p/GbTxZFf5OIVyETVJShVuIN
RshkHCjkSrixPpObAoGAPbC2qPAJINcYaaNoI1n3Lm9B+CHBrrYYAsyJ/XOdgabL
X8A0l+nfjciPPMfOQlx+4ScrnGsHpbeT7PKsnkGUuRmvYAeHe4TC69psrbc8om0b
pPXPwnQbAPXSzo+qQybE9bBLc9O0AQm/UHm3kpy/VCHB7R6ePsxQ6Y/mHxIGR2MC
gYEAhW7evwpxUMcW+BV84xIIt7cW2K/mu8nOb2qajFTej+WgvHNT+h4vgs4ZrTkH
rHyUiN/tzTCxBnkoh1w9FmCdnAdr/+br56Zq8oEXzBUUALqeW0xnB0zpTc6Hn0xq
iU0P5cM1sgyCWv83MgeGegcpxt54K5bqUjPKjaUpLNqbtiA=
-----END RSA PRIVATE KEY-----

Public key (single line, but wrapped here to make it more readable):

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQBs+CsWeKegqmtneZcLDvHV4QT1n+ajj98gkmjo
LcIFW5g/VFRLpSMMkwkQBgGDkmKPvYFa5OolL6qBQSAN1NpP8zET+1lZr4OFg/TZTuA8QnhN
eh6VmU2hSoyJfEkKJ6TVYg4s1rsbbTZPLdCDe9CMn/iI824WUu2wA8RwhF2WTqqTrWTW4h8t
YK9Y4eT4IYMXiYZ8+eQfzHyMaNxvUcI1Z8heMn/CEnrA67ja7Czi/ljYnw0I3MXy9d2ANYjY
ahBLF2+ok19NS9tkFHDlcZTh0gTQ4vV5fksgdJjsWl5l/aLjnSRfx2pQrMl3w8U7JBpr0PWJ
PIuzd4q47+KBI1A9 root@an-node01.alteeve.ca
Note: Generate the key on an-node02 before proceeding.

In order to enable password-less login, we need to create a file called ~/.ssh/authorized_keys and put both nodes' public key in it. To seed the ~/.ssh/authorized_keys file, we'll simply copy the ~/.ssh/id_rsa.pub file. After that, we will append an-node02's public key into it over ssh. Once both keys are in it, we'll push it over to an-node02. If you want to add your workstation's key as well, this is the best time to do so.

From an-node01, type:

rsync -av ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
sending incremental file list
id_rsa.pub

sent 482 bytes  received 31 bytes  1026.00 bytes/sec
total size is 404  speedup is 0.79

Now we'll grab the public key from an-node02 over SSH and append it to the new authorized_keys file.

I noted when I created an-node02's ssh key that its fingerprint was 04:08:37:43:6b:5c:a0:b0:f5:27:a7:46:d4:77:a3:34. This matches the one presented to me in the next step, so I trust that I am talking to the right machine.

ssh root@an-node02 "cat ~/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
The authenticity of host 'an-node02 (10.20.0.2)' can't be established.
RSA key fingerprint is 04:08:37:43:6b:5c:a0:b0:f5:27:a7:46:d4:77:a3:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node02,10.20.0.2' (RSA) to the list of known hosts.
root@an-node02's password:
Note: If you want to add your workstation's key, do so here.

Now push the local copy of authorized_keys with both keys over to an-node02.

rsync -av ~/.ssh/authorized_keys root@an-node02:/root/.ssh/
root@an-node02's password: 
sending incremental file list
authorized_keys

sent 1704 bytes  received 31 bytes  694.00 bytes/sec
total size is 1621  speedup is 0.93

Now log into the remote machine. This time, the connection should succeed without having entered a password!

ssh root@an-node02
Last login: Sat Dec 10 16:06:21 2011 from 10.20.255.254

Perfect! Once you can log into both nodes, from either node, without a password, you are finished.

Populating And Pushing ~/.ssh/known_hosts

Various applications will connect to the other node using different methods and networks. Each connection, when first established, will prompt you to confirm that you trust the authentication, as we saw above. Many programs can't handle this prompt and will simply fail to connect. To get around this, let's ssh into both nodes using all of the host names. This will populate a file called ~/.ssh/known_hosts. Once you do this on one node, you can simply copy the known_hosts file to the other node's (and any other user's) ~/.ssh/ directory.

I simply paste this into a terminal, answering yes and then immediately exit from the ssh session. This is a bit tedious, I admit, but it only needs to be done one time for all nodes. Take the time to check the fingerprints as they are displayed to you. It is a bad habit to blindly type yes.

Alter this to suit your host names.

ssh root@an-node01 && \
ssh root@an-node01.alteeve.ca && \
ssh root@an-node01.bcn && \
ssh root@an-node01.sn && \
ssh root@an-node01.ifn && \
ssh root@an-node02 && \
ssh root@an-node02.alteeve.ca && \
ssh root@an-node02.bcn && \
ssh root@an-node02.sn && \
ssh root@an-node02.ifn
The authenticity of host 'an-node01 (10.20.0.1)' can't be established.
RSA key fingerprint is e6:cb:50:41:88:26:c3:a5:aa:85:80:89:02:6f:ae:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node01,10.20.0.1' (RSA) to the list of known hosts.
Last login: Sun Dec 11 04:45:50 2011 from 10.20.255.254
[root@an-node01 ~]#
exit
logout
Connection to an-node01 closed.
The authenticity of host 'an-node01.alteeve.ca (10.20.0.1)' can't be established.
RSA key fingerprint is e6:cb:50:41:88:26:c3:a5:aa:85:80:89:02:6f:ae:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node01.alteeve.ca' (RSA) to the list of known hosts.
Last login: Sun Dec 11 04:50:24 2011 from an-node01
[root@an-node01 ~]#
exit
logout
Connection to an-node01.alteeve.ca closed.
The authenticity of host 'an-node01.bcn (10.20.0.1)' can't be established.
RSA key fingerprint is e6:cb:50:41:88:26:c3:a5:aa:85:80:89:02:6f:ae:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node01.bcn' (RSA) to the list of known hosts.
Last login: Sun Dec 11 04:51:14 2011 from an-node01
[root@an-node01 ~]#
exit
logout
Connection to an-node01.bcn closed.
The authenticity of host 'an-node01.sn (10.10.0.1)' can't be established.
RSA key fingerprint is e6:cb:50:41:88:26:c3:a5:aa:85:80:89:02:6f:ae:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node01.sn,10.10.0.1' (RSA) to the list of known hosts.
Last login: Sun Dec 11 04:53:23 2011 from an-node01
[root@an-node01 ~]#
exit
logout
Connection to an-node01.sn closed.
The authenticity of host 'an-node01.ifn (10.255.0.1)' can't be established.
RSA key fingerprint is e6:cb:50:41:88:26:c3:a5:aa:85:80:89:02:6f:ae:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node01.ifn,10.255.0.1' (RSA) to the list of known hosts.
Last login: Sun Dec 11 04:54:30 2011 from an-node01.sn
[root@an-node01 ~]#
exit
logout
Connection to an-node01.ifn closed.

This is the connection to an-node02, which we established earlier when we pushed the authorized_keys, so this time we're not asked to verify the key.

Last login: Sun Dec 11 05:44:40 2011 from 10.20.255.254
[root@an-node02 ~]#
exit
logout
Connection to an-node02 closed.

Now we'll be asked to verify keys again, as only the base an-node02 hostname had been recorded earlier.

The authenticity of host 'an-node02.alteeve.ca (10.20.0.2)' can't be established.
RSA key fingerprint is 04:08:37:43:6b:5c:a0:b0:f5:27:a7:46:d4:77:a3:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node02.alteeve.ca' (RSA) to the list of known hosts.
Last login: Sun Dec 11 05:54:44 2011 from an-node01
[root@an-node02 ~]#
exit
logout
Connection to an-node02.alteeve.ca closed.
The authenticity of host 'an-node02.bcn (10.20.0.2)' can't be established.
RSA key fingerprint is 04:08:37:43:6b:5c:a0:b0:f5:27:a7:46:d4:77:a3:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node02.bcn' (RSA) to the list of known hosts.
Last login: Sun Dec 11 06:05:58 2011 from an-node01
[root@an-node02 ~]#
exit
logout
Connection to an-node02.bcn closed.
The authenticity of host 'an-node02.sn (10.10.0.2)' can't be established.
RSA key fingerprint is 04:08:37:43:6b:5c:a0:b0:f5:27:a7:46:d4:77:a3:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node02.sn,10.10.0.2' (RSA) to the list of known hosts.
Last login: Sun Dec 11 06:07:20 2011 from an-node01
exit
logout
Connection to an-node02.sn closed.
The authenticity of host 'an-node02.ifn (10.255.0.2)' can't be established.
RSA key fingerprint is 04:08:37:43:6b:5c:a0:b0:f5:27:a7:46:d4:77:a3:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node02.ifn,10.255.0.2' (RSA) to the list of known hosts.
Last login: Sun Dec 11 06:08:11 2011 from an-node01.sn
[root@an-node02 ~]#
exit
logout
Connection to an-node02.ifn closed.

Finally done!

Now we can simply copy the ~/.ssh/known_hosts file to the other node. From an-node02, pull the file over from an-node01.

rsync -av root@an-node01:/root/.ssh/known_hosts ~/.ssh/
receiving incremental file list

sent 11 bytes  received 41 bytes  104.00 bytes/sec
total size is 4413  speedup is 84.87

Now we can connect via SSH to either node, from either node, using any of the networks and we will not be prompted to enter a password or to verify SSH fingerprints any more.

Configuring The Cluster Foundation

We need to configure the cluster in two stages. This is because we have something of a chicken-and-egg problem.

  • We need clustered storage for our virtual machines.
  • Our clustered storage needs the cluster for fencing.

Conveniently, clustering has two logical parts;

  • Cluster communication and membership.
  • Cluster resource management.

The first part, communication and membership, covers tracking which nodes are members of the cluster and ejecting faulty nodes, among other tasks. The second part, resource management, is provided by a second tool called rgmanager. It's this second part that we will set aside for later.

Installing Required Programs

You will need to install a number of packages. Under CentOS, Scientific Linux or other RHEL-based distros, you can simply run the command below.

For Red Hat customers though, you will need to enable the "RHEL Server Resilient Storage" entitlement. If you are foregoing GFS2 to save money, then you will need to enable the "RHEL Server High Availability" entitlement instead.

Once you are ready, run the following command to install what you need. If you opted not to use GFS2, remove gfs2-utils. The gpm package is also optional; it provides mouse support on the command line.

yum install cman corosync rgmanager ricci gfs2-utils ntp \
            libvirt lvm2-cluster qemu-kvm qemu-kvm-tools \
            virt-install virt-viewer syslinux wget gpm \
            rsync

Disable the 'qemu' Bridge

By default, libvirtd creates a NAT'ed bridge called virbr0 that connects virtual machines to the outside world through the host's first interface. Our system will not need this, so we will remove it now.

If libvirtd has started, skip to the next step. If you haven't started libvirtd yet, you can manually disable the bridge by blanking out the config file.

cat /dev/null >/etc/libvirt/qemu/networks/default.xml

If libvirtd has started, then you will need to first stop the bridge.

virsh net-destroy default
Network default destroyed

To disable and remove it, run the following;

virsh net-autostart default --disable
Network default unmarked as autostarted
virsh net-undefine default
Network default has been undefined
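
If you want to double-check that the bridge is really gone, you can list the defined libvirt networks; the default network should no longer appear. This is just a sanity check and is safe to skip.

virsh net-list --all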

Keeping Time In Sync

It is very important that time on both nodes be kept in sync. The way to do this is to set up NTP, the network time protocol. I like to use the tick.redhat.com time server, though you are free to substitute your preferred time source.

First, add the timeserver to the NTP configuration file by appending the following lines to the end of it.

echo server tick.redhat.com$'\n'restrict tick.redhat.com mask 255.255.255.255 nomodify notrap noquery >> /etc/ntp.conf
tail -n 4 /etc/ntp.conf
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
server tick.redhat.com
restrict tick.redhat.com mask 255.255.255.255 nomodify notrap noquery

Now make sure that the ntpd service starts on boot, then start it manually.

chkconfig ntpd on
/etc/init.d/ntpd start
Starting ntpd:                                             [  OK  ]
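
If you would like to confirm that ntpd is actually talking to the time server, you can query its peer list. It can take a few minutes after starting before the offset settles down.

ntpq -p

You should see tick.redhat.com (or your chosen time source) in the list and, once ntpd has selected it as the sync source, it will be marked with an asterisk.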

Configuration Methods

In Red Hat Cluster Services, the heart of the cluster is found in the /etc/cluster/cluster.conf XML configuration file.

There are three main ways of editing this file. Two are already well documented, so I won't bother discussing them, beyond introducing them. The third way is by directly hand-crafting the cluster.conf file. This method is not very well documented, and directly manipulating configuration files is my preferred method. As my boss loves to say; "The more computers do for you, the more they do to you".

The first two, well documented, graphical tools are:

  • system-config-cluster, an older GUI tool run directly from one of the cluster nodes.
  • Conga, comprised of the ricci node-side client and the luci web-based server (can be run on machines outside the cluster).

I do like the tools above, but I often find issues that send me back to the command line. I'd recommend setting them aside for now as well. Once you feel comfortable with cluster.conf syntax, then by all means, go back and use them. I'd recommend not becoming dependent on them though, which can happen if you start using them too early in your studies.

The First cluster.conf Foundation Configuration

The very first stage of building the cluster is to create a configuration file that is as minimal as possible. We're going to do this on an-node01 and, when we're done, copy it over to an-node02.

Name the Cluster and Set The Configuration Version

The cluster tag is the parent tag for the entire cluster configuration file.

vim /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="1">
</cluster>

The cluster element has two attributes that we need to set;

  • name=""
  • config_version=""

The name="" attribute defines the name of the cluster. It must be unique amongst the clusters on your network. It should be descriptive, but you will not want to make it too long, either. You will see this name in the various cluster tools and you will enter in, for example, when creating a GFS2 partition later on. This tutorial uses the cluster name an-cluster-A.

The config_version="" attribute is an integer indicating the version of the configuration file. Whenever you make a change to the cluster.conf file, you will need to increment this version number by 1. If you don't increment this number, then the cluster tools will not know that the file needs to be reloaded. As this is the first version of this configuration file, it will start with 1. Note that this tutorial will increment the version after every change, regardless of whether it is explicitly pushed out to the other nodes and reloaded. The reason is to help get into the habit of always increasing this value.

Configuring cman Options

We are setting up a special kind of cluster, called a 2-Node cluster.

This is a special case because traditional quorum will not be useful. With only two nodes, each having a vote of 1, the total votes is 2. Quorum needs 50% + 1, which means that a single node failure would shut down the cluster, as the remaining node's vote is exactly 50%. That kind of defeats the purpose of having a cluster at all.

So to account for this special case, there is a special attribute called two_node="1". This tells the cluster manager to continue operating with only one vote. This option requires that the expected_votes="" attribute be set to 1. Normally, expected_votes is set automatically to the total sum of the defined cluster nodes' votes (which itself is a default of 1). This is the other half of the "trick", as a single node's vote of 1 now always provides quorum (that is, 1 meets the 50% + 1 requirement).

In short; this disables quorum.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="2">
	<cman expected_votes="1" two_node="1" />
</cluster>

Take note of the self-closing <... /> tag. This is XML syntax that tells the parser not to look for any child elements or a closing tag.

Defining Cluster Nodes

This example is a little artificial; please don't load it into your cluster yet, as we will need to add a few child tags. One thing at a time.

This introduces two tags, the latter being a child tag of the former;

  • clusternodes
    • clusternode

The first is the parent clusternodes tag, which takes no attributes of its own. Its sole purpose is to contain the clusternode child tags, of which there will be one per node.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="3">
	<cman expected_votes="1" two_node="1" />
	<clusternodes>
		<clusternode name="an-node01.alteeve.ca" nodeid="1" />
		<clusternode name="an-node02.alteeve.ca" nodeid="2" />
	</clusternodes>
</cluster>

The clusternode tag defines each cluster node. There are many attributes available, but we will look at just the two required ones.

The first is the name="" attribute. The value should match the fully qualified domain name, which you can check by running uname -n on each node. This isn't strictly required, mind you, but for simplicity's sake, this is the name we will use.

The cluster decides which network to use for cluster communication by resolving the name="..." value. It will take the returned IP address and try to match it to one of the IPs on the system. Once it finds a match, that becomes the network the cluster will use. In our case, an-node01.alteeve.ca resolves to 10.20.0.1, which is used by bond0.

If you have syslinux installed, you can check this out yourself using the following command;

ifconfig |grep -B 1 $(gethostip -d $(uname -n)) | grep HWaddr | awk '{ print $1 }'
bond0

Please see the clusternode's name attribute document for details on how the name-to-interface mapping is resolved.
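
If you prefer the ip tool over ifconfig, a roughly equivalent check is sketched below. As above, it assumes gethostip (from the syslinux package) is installed.

ip -o addr show | grep "inet $(gethostip -d $(uname -n))/" | awk '{ print $2 }'
bond0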

The second attribute is nodeid="". This must be a unique integer amongst the <clusternode ...> elements in the cluster. It is what the cluster itself uses to identify the node.

Defining Fence Devices

Fencing devices are used to forcibly eject a node from a cluster if it stops responding.

This is generally done by forcing it to power off or reboot. Some SAN switches can logically disconnect a node from the shared storage device, a process called fabric fencing, which has the same effect: guaranteeing that the defective node can not alter the shared storage. A common, third type of fence device is one that cuts the mains power to the server. These are called PDUs and are effectively power bars where each outlet can be independently switched off over the network.

In this tutorial, our nodes support IPMI, which we will use as the primary fence device. We also have an APC brand switched PDU which will act as a backup fence device.

Note: Not all brands of switched PDUs are supported as fence devices. Before you purchase a fence device, confirm that it is supported.

All fence devices are contained within the parent fencedevices tag, which has no attributes of its own. Within this parent tag are one or more fencedevice child tags.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="4">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1" />
                <clusternode name="an-node02.alteeve.ca" nodeid="2" />
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
</cluster>

In our cluster, each fence device used will have its own fencedevice tag. If you are using IPMI, this means you will have a fencedevice entry for each node, as each physical IPMI BMC is a unique fence device. On the other hand, fence devices that support multiple nodes, like switched PDUs, will have just one entry. In our case, we're using both types, so we have three fence devices; the two IPMI BMCs plus the switched PDU.

All fencedevice tags share two basic attributes; name="" and agent="".

  • The name attribute must be unique among all the fence devices in your cluster. As we will see in the next step, this name will be used within the <clusternode...> tag.
  • The agent attribute tells the cluster which fence agent to use when the fenced daemon needs to communicate with the physical fence device. A fence agent is simply a script that acts as a go-between for the fenced daemon and the fence hardware. This agent takes the arguments from the daemon, like what port to act on and what action to take, and performs the requested action against the target node. The agent is responsible for ensuring that the execution succeeded and returning an appropriate success or failure exit code.

For those curious, the full details are described in the FenceAgentAPI. If you have two or more of the same fence device, like IPMI, then you will use the same fence agent value a corresponding number of times.

Beyond these two attributes, each fence agent will have its own set of attributes. Covering them all is outside the scope of this tutorial, though we will see examples for IPMI and a switched PDU. All fence agents have a corresponding man page that shows what attributes they accept and how they are used. The two fence agents we will use here have their attributes defined in the following man pages.

  • man fence_ipmilan - IPMI fence agent.
  • man fence_apc_snmp - APC-brand switched PDU using SNMP.

The example above is what this tutorial will use.
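
If you want to sanity-check a fence agent before wiring it into cluster.conf, most agents can also be run by hand. Below is a hedged example of querying a node's power state over IPMI, using this tutorial's example BMC address and credentials; the status action only reads the power state, it does not change it.

fence_ipmilan -a an-node02.ipmi -l root -p secret -o status

If the agent can't reach or authenticate against the BMC, sort that out before going any further.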

Using the Fence Devices

Now that we have nodes and fence devices defined, we will go back and tie them together. This is done by:

  • Defining a fence tag containing all fence methods and devices.
    • Defining one or more method tag(s) containing the device call(s) needed for each fence attempt.
      • Defining one or more device tag(s) containing attributes describing how to call the fence device to kill this node.

Here is how we implement IPMI as the primary fence device with the APC switched PDU as the backup method.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="5">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
</cluster>

First, notice that the fence tag has no attributes. It's merely a parent for the method(s) child elements.

There are two method elements, one for each fence device, named ipmi and pdu. These names are merely descriptive and can be whatever you feel is most appropriate.

Within each method element is one or more device tags. For a given method to succeed, all defined device elements must themselves succeed. This is very useful for grouping calls to separate PDUs when dealing with nodes having redundant power supplies, as we will see in the APC switched PDU example later on.

The actual fence device configuration is the final piece of the puzzle. It is here that you specify per-node configuration options and link these attributes to a given fencedevice. Here, we see the link to the fencedevice via the name, ipmi_an01 in this example.

Note that the PDU definition needs a port="" attribute where the IPMI fence devices do not. These are the sorts of differences you will find, varying depending on how the fence device agent works.

When a fence call is needed, the fence devices will be called in the order they are found here. If both devices fail, the cluster will go back to the start and try again, looping indefinitely until one device succeeds.

Note: It's important to understand why we use IPMI as the primary fence device. The FenceAgentAPI specification suggests, but does not require, that a fence device confirm that the node is off. IPMI can do this, the switched PDU can not. Thus, IPMI won't return a success unless the node is truly off. The PDU, however, will return a success once the power is cut to the requested port. The risk is that a misconfigured node with redundant power supplies may in fact still be running, leading to disastrous consequences.

Let's step through an example fence call to help show how the per-cluster and fence device attributes are combined during a fence call.

  • The cluster manager decides that a node needs to be fenced. Let's say that the victim is an-node02.
  • The fence section under an-node02 is consulted. Within it there are two method entries, named ipmi and pdu. The IPMI method's device has one attribute while the PDU's device has two attributes;
    • port; only found in the PDU method, this tells the cluster that an-node02 is connected to switched PDU's outlet number 2.
    • action; Found on both devices, this tells the cluster that the fence action to take is reboot. How this action is actually interpreted depends on the fence device in use, though the name certainly implies that the node will be forced off and then restarted.
  • The cluster searches in fencedevices for a fencedevice matching the name ipmi_an02. This fence device has four attributes;
    • agent; This tells the cluster to call the fence_ipmilan fence agent script, as we discussed earlier.
    • ipaddr; This tells the fence agent where on the network to find this particular IPMI BMC. This is how multiple fence devices of the same type can be used in the cluster.
    • login; This is the login user name to use when authenticating against the fence device.
    • passwd; This is the password to supply along with the login name when authenticating against the fence device.
  • Should the IPMI fence call fail for some reason, the cluster will move on to the second pdu method, repeating the steps above but using the PDU values.

When the cluster calls the fence agent, it does so by initially calling the fence agent script with no arguments.

/usr/sbin/fence_ipmilan

Then it will pass to that agent the following arguments:

ipaddr=an-node02.ipmi
login=root
passwd=secret
action=reboot

As you can see then, the first three arguments are from the fencedevice attributes and the last one is from the device attributes under an-node02's clusternode's fence tag.
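
For the curious, these key=value pairs are handed to the agent on its standard input, one per line, as described in the FenceAgentAPI. That means you can mimic a call by hand if you ever need to debug an agent. The sketch below uses the same example values as above, but with action=status so that nothing actually gets power-cycled.

printf "ipaddr=an-node02.ipmi\nlogin=root\npasswd=secret\naction=status\n" | fence_ipmilan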

If this method fails, then the PDU will be called in a very similar way, but with an extra argument from the device attributes.

/usr/sbin/fence_apc_snmp

Then it will pass to that agent the following arguments:

ipaddr=pdu2.alteeve.ca
port=2
action=reboot

Should this fail, the cluster will go back and try the IPMI interface again. It will loop through the fence device methods forever until one of the methods succeeds. Below are snippets from other clusters using different fence device configurations which might help you build your cluster.

Example <fencedevice...> Tag For IPMI

Warning: When using IPMI for fencing, it is very important that you disable ACPI. If acpid is running when an IPMI-based fence is called against a node, the node will begin a graceful shutdown instead of powering off immediately. This means that it will stay running for another four seconds or so; more than enough time for it to initiate a fence against its peer, resulting in both nodes powering down if the network is interrupted.

As stated above, it is critical to stop the acpid daemon and prevent it from starting with the server.

chkconfig acpid off
/etc/init.d/acpid stop
Warning: After this tutorial was completed, a new <device ... /> attribute called delay="..." was added. This is a very useful attribute that allows you to tell fenced "hey, if you need to fence node X, pause for Y seconds before doing so". By setting this on only one node, you can effectively ensure that when both nodes try to fence each other at the same time, the one with the delay="Y" set will always win.

Here we will show what IPMI <fencedevice...> tags look like.

	...
		<clusternode name="an-node01.alteeve.ca" nodeid="1">
			<fence>
				<method name="ipmi">
					<device name="ipmi_an01" action="reboot"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.ca" nodeid="2">
			<fence>
				<method name="ipmi">
					<device name="ipmi_an02" action="reboot"/>
				</method>
			</fence>
		</clusternode>
	...
	<fencedevices>
		<fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
		<fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
	</fencedevices>
  • ipaddr; This is the resolvable name or IP address of the device. If you use a resolvable name, it is strongly advised that you put the name in /etc/hosts as DNS is another layer of abstraction which could fail.
  • login; This is the login name to use when the fenced daemon connects to the device.
  • passwd; This is the login password to use when the fenced daemon connects to the device.
  • name; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <clusternode...> element where appropriate.
Note: We will see shortly that, unlike switched PDUs or other network fence devices, IPMI does not have ports. This is because each IPMI BMC supports just its host system. More on that later.

Example <fencedevice...> Tag For HP iLO

Here we will show how to use iLO (integrated Lights-Out) management devices as <fencedevice...> entries. We won't be using it ourselves, but it is quite popular as a fence device, so I wanted to show an example of its use.

	...
		<clusternode name="an-node01.alteeve.ca" nodeid="1">
			<fence>
				<method name="ilo">
					<device action="reboot" name="ilo_an01"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.ca" nodeid="2">
			<fence>
				<method name="ilo">
					<device action="reboot" name="ilo_an02"/>
				</method>
			</fence>
		</clusternode>
	...
	<fencedevices>
		<fencedevice agent="fence_ilo" ipaddr="an-node01.ilo" login="root" name="ilo_an01" passwd="secret"/>
		<fencedevice agent="fence_ilo" ipaddr="an-node02.ilo" login="root" name="ilo_an02" passwd="secret"/>
	</fencedevices>
  • ipaddr; This is the resolvable name or IP address of the device. If you use a resolvable name, it is strongly advised that you put the name in /etc/hosts as DNS is another layer of abstraction which could fail.
  • login; This is the login name to use when the fenced daemon connects to the device.
  • passwd; This is the login password to use when the fenced daemon connects to the device.
  • name; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <clusternode...> element where appropriate.
Note: Like IPMI, iLO does not have ports. This is because each iLO BMC supports just its host system.
Note: A reader kindly reported that iLO3 does not work with the fence_ilo agent. The recommendation is to now use fence_ipmilan with the following options; <fencedevice agent="fence_ipmilan" ipaddr="an-node01.ilo" lanplus="1" login="Administrator" name="ilo_an01" passwd="secret" power_wait="4"/>.

Example <fencedevice...> Tag For Dell's DRAC

Note: I have not tested fencing on Dell, but am using a reference working configuration from another user.

Here we will show how to use DRAC (Dell Remote Access Controller) management devices as <fencedevice...> entries. We won't be using it ourselves, but it is another popular fence device, so I wanted to show an example of its use.

	...
		<clusternode name="an-node01.alteeve.ca" nodeid="1">
			<fence>
				<method name="drac">
					<device action="reboot" name="drac_an01"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.ca" nodeid="2">
			<fence>
				<method name="ilo">
					<device action="reboot" name="drac_an02"/>
				</method>
			</fence>
		</clusternode>
	...
	<fencedevices>
		<fencedevice agent="fence_drac5" cmd_prompt="admin1-&gt;" ipaddr="an-node01.drac" login="root" name="drac_an01" passwd="secret" secure="1"/>
		<fencedevice agent="fence_drac5" cmd_prompt="admin1-&gt;" ipaddr="an-node02.drac" login="root" name="drac_an02" passwd="secret" secure="1"/>
	</fencedevices>
  • ipaddr; This is the resolvable name or IP address of the device. If you use a resolvable name, it is strongly advised that you put the name in /etc/hosts as DNS is another layer of abstraction which could fail.
  • login; This is the login name to use when the fenced daemon connects to the device.
  • passwd; This is the login password to use when the fenced daemon connects to the device.
  • name; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <clusternode...> element where appropriate.
  • cmd_prompt; This is the string that the fence agent looks for when talking to the DRAC device.
  • secure; This tells the agent to use SSH.
Note: Like IPMI and iLO, DRAC does not have ports. This is because each DRAC BMC supports just its host system.

Example <fencedevice...> Tag For APC Switched PDUs

Here we will show how to configure APC switched PDU <fencedevice...> tags. There are two agents for these devices; one uses a telnet or ssh login and the other uses SNMP. This tutorial uses the latter, and it is recommended that you do the same.

The example below is from a production cluster that uses redundant power supplies and two separate PDUs. This is how you will want to configure any production clusters you build.

	...
		<clusternode name="an-node01.alteeve.ca" nodeid="1">
			<fence>
				<method name="pdu2">
					<device action="reboot" name="pdu1" port="1"/>
					<device action="reboot" name="pdu2" port="1"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.ca" nodeid="2">
			<fence>
				<method name="pdu2">
					<device action="reboot" name="pdu1" port="2"/>
					<device action="reboot" name="pdu2" port="2"/>
				</method>
			</fence>
		</clusternode>
	...
	<fencedevices>
 		<fencedevice agent="fence_apc_snmp" ipaddr="pdu1.alteeve.ca" name="pdu1" />
		<fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
	</fencedevices>
  • agent; This is the name of the script under /usr/sbin/ to use when calling the physical PDU.
  • ipaddr; This is the resolvable name or IP address of the device. If you use a resolvable name, it is strongly advised that you put the name in /etc/hosts as DNS is another layer of abstraction which could fail.
  • name; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <clusternode...> element where appropriate.
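
As with IPMI, you can query a PDU outlet by hand to confirm that the agent can actually reach the device. A hedged example using this tutorial's PDU name and outlet 1, with the non-destructive status action:

fence_apc_snmp -a pdu2.alteeve.ca -n 1 -o status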

Give Nodes More Time To Start

Clusters with three or more nodes must gain quorum before they can fence other nodes. As we discussed earlier though, this is not the case when using the two_node="1" attribute in the cman element. What this means in practice is that if you start the cluster on one node and then wait too long to start the cluster on the second node, the first will fence the second.

The logic behind this is; When the cluster starts, it will try to talk to its fellow node and then fail. With the special two_node="1" attribute set, the cluster knows that it is allowed to start clustered services, but it has no way to say for sure what state the other node is in. It could well be online and hosting services for all it knows. So it has to proceed on the assumption that the other node is alive and using shared resources. Given that, and given that it can not talk to the other node, its only safe option is to fence the other node. Only then can it be confident that it is safe to start providing clustered services.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="6">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
</cluster>

The new tag is fence_daemon, seen near the bottom of the file above. The change is made using the post_join_delay="30" attribute. By default, the cluster will declare the other node dead after just 6 seconds. The reason for such a short default is that the larger this value, the slower the start-up of the cluster services will be. During testing and development though, I find the default to be far too short, and it frequently led to unnecessary fencing. Once your cluster is set up and working, it's not a bad idea to reduce this value to the lowest value with which you are comfortable.

Configuring Totem

There are many attributes for the totem element. For now though, we're only going to set two of them. We know that cluster communication will be travelling over our private, secured BCN network, so for the sake of simplicity, we're going to disable encryption. We are already providing network redundancy with the bonding drivers, so we're also going to disable totem's redundant ring protocol.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="7">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
        <totem rrp_mode="none" secauth="off"/>
</cluster>
Note: At this time, redundant ring protocol is not supported (RHEL6.1 and lower). It is in technology preview mode in RHEL6.2 and above. This is another reason why we will not be using it in this tutorial.

RRP is an optional second ring that can be used for cluster communication in the case of a breakdown in the first ring. However, if you wish to explore it further, please take a look at the clusternode element's child tag called <altname...>. When altname is used, the rrp_mode attribute will need to be changed to either active or passive (the details of which are outside the scope of this tutorial).
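
If you do decide to experiment with RRP later, a rough, untested sketch of what the relevant pieces could look like is shown below. It reuses this tutorial's storage network hostname (an-node01.sn) purely for illustration; verify the syntax against the cluster.conf documentation before using it.

	...
		<clusternode name="an-node01.alteeve.ca" nodeid="1">
			<altname name="an-node01.sn" />
			...
		</clusternode>
	...
	<totem rrp_mode="passive" secauth="off" />
	...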

The second option we're looking at here is the secauth="off" attribute. This controls whether the cluster communications are encrypted or not. We can safely disable this because we're working on a known-private network, which yields two benefits; it's simpler to set up and it's a lot faster. If you must encrypt the cluster communications, then you can do so here. The details of that are also outside the scope of this tutorial though.

Validating and Pushing the /etc/cluster/cluster.conf File

One of the most noticeable changes in RHCS cluster stable 3 is that we no longer have to make a long, cryptic xmllint call to validate our cluster configuration. Now we can simply call ccs_config_validate.

ccs_config_validate
Configuration validates

If there was a problem, you need to go back and fix it. DO NOT proceed until your configuration validates. Once it does, we're ready to move on!

With it validated, we need to push it to the other node. As the cluster is not running yet, we will push it out using rsync.

rsync -av /etc/cluster/cluster.conf root@an-node02:/etc/cluster/
sending incremental file list
cluster.conf

sent 1198 bytes  received 31 bytes  2458.00 bytes/sec
total size is 1118  speedup is 0.91

Setting Up ricci

Another change from RHCS stable 2 is how configuration changes are propagated. Before, after a change, we'd push out the updated cluster configuration by calling ccs_tool update /etc/cluster/cluster.conf. Now this is done with cman_tool version -r. More fundamentally though, the cluster needs to authenticate against each node and does this using the local ricci system user. The user has no password initially, so we need to set one.
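
Once the cluster is up, the typical update cycle for a configuration change looks roughly like this (just a sketch of what's coming; we'll use it for real later):

vim /etc/cluster/cluster.conf    # make the change and increment config_version
ccs_config_validate              # confirm the new file is still valid
cman_tool version -r             # push the new configuration to, and activate it on, all nodes
cman_tool version                # confirm that the new config_version is now active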

On both nodes:

passwd ricci
Changing password for user ricci.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

You will need to enter this password once from each node against the other node. We will see this later.

Now make sure that the ricci daemon is set to start on boot and is running now.

chkconfig ricci on
chkconfig --list ricci
ricci          	0:off	1:off	2:on	3:on	4:on	5:on	6:off

Now start it up.

/etc/init.d/ricci start
Starting ricci:                                            [  OK  ]
Note: If you don't see [ OK ], don't worry, it is probably because it was already running.

We also need the modclusterd daemon to be set to start on boot.

chkconfig modclusterd on
chkconfig --list modclusterd
modclusterd    	0:off	1:off	2:on	3:on	4:on	5:on	6:off

Now start it up.

/etc/init.d/modclusterd start
Starting Cluster Module - cluster monitor: Setting verbosity level to LogBasic
                                                           [  OK  ]

Starting the Cluster for the First Time

It's a good idea to open a second terminal on each node and tail the /var/log/messages syslog file. All cluster messages will be recorded there, and watching the logs will make it much easier to debug problems. To do this, run the following in the new terminal windows;

clear; tail -f -n 0 /var/log/messages

This will clear the screen and start watching for new lines to be written to syslog. When you are done watching syslog, press the <ctrl> + c key combination.

How you lay out your terminal windows is, obviously, up to your own preferences. Below is a configuration I have found very useful.

Terminal window layout for watching 2 nodes. Left windows are used for entering commands and the right windows are used for tailing syslog.

With the terminals set up, let's start the cluster!

Warning: If you don't start cman on both nodes within 30 seconds, the slower node will be fenced.

On both nodes, run:

/etc/init.d/cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]

Here is what you should see in syslog:

Dec 13 12:08:44 an-node01 kernel: DLM (built Nov  9 2011 08:04:11) installed
Dec 13 12:08:45 an-node01 corosync[3434]:   [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Dec 13 12:08:45 an-node01 corosync[3434]:   [MAIN  ] Corosync built-in features: nss dbus rdma snmp
Dec 13 12:08:45 an-node01 corosync[3434]:   [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Dec 13 12:08:45 an-node01 corosync[3434]:   [MAIN  ] Successfully parsed cman config
Dec 13 12:08:45 an-node01 corosync[3434]:   [TOTEM ] Initializing transport (UDP/IP Multicast).
Dec 13 12:08:45 an-node01 corosync[3434]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Dec 13 12:08:46 an-node01 corosync[3434]:   [TOTEM ] The network interface [10.20.0.1] is now up.
Dec 13 12:08:46 an-node01 corosync[3434]:   [QUORUM] Using quorum provider quorum_cman
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Dec 13 12:08:46 an-node01 corosync[3434]:   [CMAN  ] CMAN 3.0.12.1 (built Sep 30 2011 03:17:43) started
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync configuration service
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync profile loading service
Dec 13 12:08:46 an-node01 corosync[3434]:   [QUORUM] Using quorum provider quorum_cman
Dec 13 12:08:46 an-node01 corosync[3434]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Dec 13 12:08:46 an-node01 corosync[3434]:   [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Dec 13 12:08:46 an-node01 corosync[3434]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:08:46 an-node01 corosync[3434]:   [CMAN  ] quorum regained, resuming activity
Dec 13 12:08:46 an-node01 corosync[3434]:   [QUORUM] This node is within the primary component and will provide service.
Dec 13 12:08:46 an-node01 corosync[3434]:   [QUORUM] Members[1]: 1
Dec 13 12:08:46 an-node01 corosync[3434]:   [QUORUM] Members[1]: 1
Dec 13 12:08:46 an-node01 corosync[3434]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:0 left:0)
Dec 13 12:08:46 an-node01 corosync[3434]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:08:47 an-node01 corosync[3434]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:08:47 an-node01 corosync[3434]:   [QUORUM] Members[2]: 1 2
Dec 13 12:08:47 an-node01 corosync[3434]:   [QUORUM] Members[2]: 1 2
Dec 13 12:08:47 an-node01 corosync[3434]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:1 left:0)
Dec 13 12:08:47 an-node01 corosync[3434]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:08:49 an-node01 fenced[3490]: fenced 3.0.12.1 started
Dec 13 12:08:49 an-node01 dlm_controld[3515]: dlm_controld 3.0.12.1 started
Dec 13 12:08:51 an-node01 gfs_controld[3565]: gfs_controld 3.0.12.1 started
Note: If you see messages like rsyslogd-2177: imuxsock begins to drop messages from pid 29288 due to rate-limiting, this is caused by new default configuration in rsyslogd. To disable rate limiting, please follow the instructions in Disabling rsyslog Rate Limiting below.

Now to confirm that the cluster is operating properly, run cman_tool status;

cman_tool status
Version: 6.2.0
Config Version: 7
Cluster Name: an-cluster-A
Cluster Id: 24561
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1  
Active subsystems: 7
Flags: 2node 
Ports Bound: 0  
Node name: an-node01.alteeve.ca
Node ID: 1
Multicast addresses: 239.192.95.81 
Node addresses: 10.20.0.1

We can see that both nodes are talking because of the Nodes: 2 entry.
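
Another quick check is cman_tool nodes, which lists the cluster members and their status. The output should look something like this (your Inc and Joined values will differ):

cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M      8   2011-12-13 12:08:46  an-node01.alteeve.ca
   2   M      8   2011-12-13 12:08:47  an-node02.alteeve.ca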

If you ever want to see the nitty-gritty configuration, you can run corosync-objctl.

corosync-objctl
cluster.name=an-cluster-A
cluster.config_version=7
cluster.cman.expected_votes=1
cluster.cman.two_node=1
cluster.cman.nodename=an-node01.alteeve.ca
cluster.cman.cluster_id=24561
cluster.clusternodes.clusternode.name=an-node01.alteeve.ca
cluster.clusternodes.clusternode.nodeid=1
cluster.clusternodes.clusternode.fence.method.name=ipmi
cluster.clusternodes.clusternode.fence.method.device.name=ipmi_an01
cluster.clusternodes.clusternode.fence.method.device.action=reboot
cluster.clusternodes.clusternode.fence.method.name=pdu
cluster.clusternodes.clusternode.fence.method.device.name=pdu2
cluster.clusternodes.clusternode.fence.method.device.port=1
cluster.clusternodes.clusternode.fence.method.device.action=reboot
cluster.clusternodes.clusternode.name=an-node02.alteeve.ca
cluster.clusternodes.clusternode.nodeid=2
cluster.clusternodes.clusternode.fence.method.name=ipmi
cluster.clusternodes.clusternode.fence.method.device.name=ipmi_an02
cluster.clusternodes.clusternode.fence.method.device.action=reboot
cluster.clusternodes.clusternode.fence.method.name=pdu
cluster.clusternodes.clusternode.fence.method.device.name=pdu2
cluster.clusternodes.clusternode.fence.method.device.port=2
cluster.clusternodes.clusternode.fence.method.device.action=reboot
cluster.fencedevices.fencedevice.name=ipmi_an01
cluster.fencedevices.fencedevice.agent=fence_ipmilan
cluster.fencedevices.fencedevice.ipaddr=an-node01.ipmi
cluster.fencedevices.fencedevice.login=root
cluster.fencedevices.fencedevice.passwd=secret
cluster.fencedevices.fencedevice.name=ipmi_an02
cluster.fencedevices.fencedevice.agent=fence_ipmilan
cluster.fencedevices.fencedevice.ipaddr=an-node02.ipmi
cluster.fencedevices.fencedevice.login=root
cluster.fencedevices.fencedevice.passwd=secret
cluster.fencedevices.fencedevice.agent=fence_apc_snmp
cluster.fencedevices.fencedevice.ipaddr=pdu2.alteeve.ca
cluster.fencedevices.fencedevice.name=pdu2
cluster.fence_daemon.post_join_delay=30
cluster.totem.rrp_mode=none
cluster.totem.secauth=off
totem.rrp_mode=none
totem.secauth=off
totem.transport=udp
totem.version=2
totem.nodeid=1
totem.vsftype=none
totem.token=10000
totem.join=60
totem.fail_recv_const=2500
totem.consensus=2000
totem.key=an-cluster-A
totem.interface.ringnumber=0
totem.interface.bindnetaddr=10.20.0.1
totem.interface.mcastaddr=239.192.95.81
totem.interface.mcastport=5405
libccs.next_handle=7
libccs.connection.ccs_handle=3
libccs.connection.config_version=7
libccs.connection.fullxpath=0
libccs.connection.ccs_handle=4
libccs.connection.config_version=7
libccs.connection.fullxpath=0
libccs.connection.ccs_handle=5
libccs.connection.config_version=7
libccs.connection.fullxpath=0
logging.timestamp=on
logging.to_logfile=yes
logging.logfile=/var/log/cluster/corosync.log
logging.logfile_priority=info
logging.to_syslog=yes
logging.syslog_facility=local4
logging.syslog_priority=info
aisexec.user=ais
aisexec.group=ais
service.name=corosync_quorum
service.ver=0
service.name=corosync_cman
service.ver=0
quorum.provider=quorum_cman
service.name=openais_ckpt
service.ver=0
runtime.services.quorum.service_id=12
runtime.services.cman.service_id=9
runtime.services.ckpt.service_id=3
runtime.services.ckpt.0.tx=0
runtime.services.ckpt.0.rx=0
runtime.services.ckpt.1.tx=0
runtime.services.ckpt.1.rx=0
runtime.services.ckpt.2.tx=0
runtime.services.ckpt.2.rx=0
runtime.services.ckpt.3.tx=0
runtime.services.ckpt.3.rx=0
runtime.services.ckpt.4.tx=0
runtime.services.ckpt.4.rx=0
runtime.services.ckpt.5.tx=0
runtime.services.ckpt.5.rx=0
runtime.services.ckpt.6.tx=0
runtime.services.ckpt.6.rx=0
runtime.services.ckpt.7.tx=0
runtime.services.ckpt.7.rx=0
runtime.services.ckpt.8.tx=0
runtime.services.ckpt.8.rx=0
runtime.services.ckpt.9.tx=0
runtime.services.ckpt.9.rx=0
runtime.services.ckpt.10.tx=0
runtime.services.ckpt.10.rx=0
runtime.services.ckpt.11.tx=2
runtime.services.ckpt.11.rx=3
runtime.services.ckpt.12.tx=0
runtime.services.ckpt.12.rx=0
runtime.services.ckpt.13.tx=0
runtime.services.ckpt.13.rx=0
runtime.services.evs.service_id=0
runtime.services.evs.0.tx=0
runtime.services.evs.0.rx=0
runtime.services.cfg.service_id=7
runtime.services.cfg.0.tx=0
runtime.services.cfg.0.rx=0
runtime.services.cfg.1.tx=0
runtime.services.cfg.1.rx=0
runtime.services.cfg.2.tx=0
runtime.services.cfg.2.rx=0
runtime.services.cfg.3.tx=0
runtime.services.cfg.3.rx=0
runtime.services.cpg.service_id=8
runtime.services.cpg.0.tx=4
runtime.services.cpg.0.rx=8
runtime.services.cpg.1.tx=0
runtime.services.cpg.1.rx=0
runtime.services.cpg.2.tx=0
runtime.services.cpg.2.rx=0
runtime.services.cpg.3.tx=16
runtime.services.cpg.3.rx=23
runtime.services.cpg.4.tx=0
runtime.services.cpg.4.rx=0
runtime.services.cpg.5.tx=2
runtime.services.cpg.5.rx=3
runtime.services.confdb.service_id=11
runtime.services.pload.service_id=13
runtime.services.pload.0.tx=0
runtime.services.pload.0.rx=0
runtime.services.pload.1.tx=0
runtime.services.pload.1.rx=0
runtime.services.quorum.service_id=12
runtime.connections.active=6
runtime.connections.closed=110
runtime.connections.fenced:CPG:3490:19.service_id=8
runtime.connections.fenced:CPG:3490:19.client_pid=3490
runtime.connections.fenced:CPG:3490:19.responses=5
runtime.connections.fenced:CPG:3490:19.dispatched=9
runtime.connections.fenced:CPG:3490:19.requests=5
runtime.connections.fenced:CPG:3490:19.sem_retry_count=0
runtime.connections.fenced:CPG:3490:19.send_retry_count=0
runtime.connections.fenced:CPG:3490:19.recv_retry_count=0
runtime.connections.fenced:CPG:3490:19.flow_control=0
runtime.connections.fenced:CPG:3490:19.flow_control_count=0
runtime.connections.fenced:CPG:3490:19.queue_size=0
runtime.connections.fenced:CPG:3490:19.invalid_request=0
runtime.connections.fenced:CPG:3490:19.overload=0
runtime.connections.dlm_controld:CPG:3515:22.service_id=8
runtime.connections.dlm_controld:CPG:3515:22.client_pid=3515
runtime.connections.dlm_controld:CPG:3515:22.responses=5
runtime.connections.dlm_controld:CPG:3515:22.dispatched=8
runtime.connections.dlm_controld:CPG:3515:22.requests=5
runtime.connections.dlm_controld:CPG:3515:22.sem_retry_count=0
runtime.connections.dlm_controld:CPG:3515:22.send_retry_count=0
runtime.connections.dlm_controld:CPG:3515:22.recv_retry_count=0
runtime.connections.dlm_controld:CPG:3515:22.flow_control=0
runtime.connections.dlm_controld:CPG:3515:22.flow_control_count=0
runtime.connections.dlm_controld:CPG:3515:22.queue_size=0
runtime.connections.dlm_controld:CPG:3515:22.invalid_request=0
runtime.connections.dlm_controld:CPG:3515:22.overload=0
runtime.connections.dlm_controld:CKPT:3515:23.service_id=3
runtime.connections.dlm_controld:CKPT:3515:23.client_pid=3515
runtime.connections.dlm_controld:CKPT:3515:23.responses=0
runtime.connections.dlm_controld:CKPT:3515:23.dispatched=0
runtime.connections.dlm_controld:CKPT:3515:23.requests=0
runtime.connections.dlm_controld:CKPT:3515:23.sem_retry_count=0
runtime.connections.dlm_controld:CKPT:3515:23.send_retry_count=0
runtime.connections.dlm_controld:CKPT:3515:23.recv_retry_count=0
runtime.connections.dlm_controld:CKPT:3515:23.flow_control=0
runtime.connections.dlm_controld:CKPT:3515:23.flow_control_count=0
runtime.connections.dlm_controld:CKPT:3515:23.queue_size=0
runtime.connections.dlm_controld:CKPT:3515:23.invalid_request=0
runtime.connections.dlm_controld:CKPT:3515:23.overload=0
runtime.connections.gfs_controld:CPG:3565:26.service_id=8
runtime.connections.gfs_controld:CPG:3565:26.client_pid=3565
runtime.connections.gfs_controld:CPG:3565:26.responses=5
runtime.connections.gfs_controld:CPG:3565:26.dispatched=8
runtime.connections.gfs_controld:CPG:3565:26.requests=5
runtime.connections.gfs_controld:CPG:3565:26.sem_retry_count=0
runtime.connections.gfs_controld:CPG:3565:26.send_retry_count=0
runtime.connections.gfs_controld:CPG:3565:26.recv_retry_count=0
runtime.connections.gfs_controld:CPG:3565:26.flow_control=0
runtime.connections.gfs_controld:CPG:3565:26.flow_control_count=0
runtime.connections.gfs_controld:CPG:3565:26.queue_size=0
runtime.connections.gfs_controld:CPG:3565:26.invalid_request=0
runtime.connections.gfs_controld:CPG:3565:26.overload=0
runtime.connections.fenced:CPG:3490:28.service_id=8
runtime.connections.fenced:CPG:3490:28.client_pid=3490
runtime.connections.fenced:CPG:3490:28.responses=5
runtime.connections.fenced:CPG:3490:28.dispatched=8
runtime.connections.fenced:CPG:3490:28.requests=5
runtime.connections.fenced:CPG:3490:28.sem_retry_count=0
runtime.connections.fenced:CPG:3490:28.send_retry_count=0
runtime.connections.fenced:CPG:3490:28.recv_retry_count=0
runtime.connections.fenced:CPG:3490:28.flow_control=0
runtime.connections.fenced:CPG:3490:28.flow_control_count=0
runtime.connections.fenced:CPG:3490:28.queue_size=0
runtime.connections.fenced:CPG:3490:28.invalid_request=0
runtime.connections.fenced:CPG:3490:28.overload=0
runtime.connections.corosync-objctl:CONFDB:3698:27.service_id=11
runtime.connections.corosync-objctl:CONFDB:3698:27.client_pid=3698
runtime.connections.corosync-objctl:CONFDB:3698:27.responses=444
runtime.connections.corosync-objctl:CONFDB:3698:27.dispatched=0
runtime.connections.corosync-objctl:CONFDB:3698:27.requests=447
runtime.connections.corosync-objctl:CONFDB:3698:27.sem_retry_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.send_retry_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.recv_retry_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.flow_control=0
runtime.connections.corosync-objctl:CONFDB:3698:27.flow_control_count=0
runtime.connections.corosync-objctl:CONFDB:3698:27.queue_size=0
runtime.connections.corosync-objctl:CONFDB:3698:27.invalid_request=0
runtime.connections.corosync-objctl:CONFDB:3698:27.overload=0
runtime.totem.pg.msg_reserved=1
runtime.totem.pg.msg_queue_avail=761
runtime.totem.pg.mrp.srp.orf_token_tx=2
runtime.totem.pg.mrp.srp.orf_token_rx=405
runtime.totem.pg.mrp.srp.memb_merge_detect_tx=53
runtime.totem.pg.mrp.srp.memb_merge_detect_rx=53
runtime.totem.pg.mrp.srp.memb_join_tx=3
runtime.totem.pg.mrp.srp.memb_join_rx=5
runtime.totem.pg.mrp.srp.mcast_tx=45
runtime.totem.pg.mrp.srp.mcast_retx=0
runtime.totem.pg.mrp.srp.mcast_rx=56
runtime.totem.pg.mrp.srp.memb_commit_token_tx=4
runtime.totem.pg.mrp.srp.memb_commit_token_rx=4
runtime.totem.pg.mrp.srp.token_hold_cancel_tx=4
runtime.totem.pg.mrp.srp.token_hold_cancel_rx=7
runtime.totem.pg.mrp.srp.operational_entered=2
runtime.totem.pg.mrp.srp.operational_token_lost=0
runtime.totem.pg.mrp.srp.gather_entered=2
runtime.totem.pg.mrp.srp.gather_token_lost=0
runtime.totem.pg.mrp.srp.commit_entered=2
runtime.totem.pg.mrp.srp.commit_token_lost=0
runtime.totem.pg.mrp.srp.recovery_entered=2
runtime.totem.pg.mrp.srp.recovery_token_lost=0
runtime.totem.pg.mrp.srp.consensus_timeouts=0
runtime.totem.pg.mrp.srp.mtt_rx_token=913
runtime.totem.pg.mrp.srp.avg_token_workload=0
runtime.totem.pg.mrp.srp.avg_backlog_calc=0
runtime.totem.pg.mrp.srp.rx_msg_dropped=0
runtime.totem.pg.mrp.srp.continuous_gather=0
runtime.totem.pg.mrp.srp.firewall_enabled_or_nic_failure=0
runtime.totem.pg.mrp.srp.members.1.ip=r(0) ip(10.20.0.1) 
runtime.totem.pg.mrp.srp.members.1.join_count=1
runtime.totem.pg.mrp.srp.members.1.status=joined
runtime.totem.pg.mrp.srp.members.2.ip=r(0) ip(10.20.0.2) 
runtime.totem.pg.mrp.srp.members.2.join_count=1
runtime.totem.pg.mrp.srp.members.2.status=joined
runtime.blackbox.dump_flight_data=no
runtime.blackbox.dump_state=no
cman_private.COROSYNC_DEFAULT_CONFIG_IFACE=xmlconfig:cmanpreconfig

If you want to check what DLM lockspaces exist, you can use dlm_tool ls to list them. Given that we're not running any resources or clustered filesystems yet, there won't be any at this time. We'll look at this again later.
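For example, assuming cman is running on both nodes, you could run this on either node; expect no output at this stage, as no lockspaces exist yet.

dlm_tool ls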

Testing Fencing

We need to thoroughly test our fence configuration and devices before we proceed. If the cluster calls a fence and the fence call fails, the cluster will hang until the fence finally succeeds. There is no way to abort a fence, so a failed fence could effectively hang the cluster. If we have problems, we need to find them now.

We need to run two tests from each node against the other node for a total of four tests.

  • The first test will use fence_ipmilan. To do this, we will hang the victim node by running echo c > /proc/sysrq-trigger on it. This will immediately and completely hang the kernel. The other node should detect the failure and reboot the victim. You can confirm that IPMI was used by watching the fence PDU and not seeing it power-cycle the port.
  • Secondly, we will pull the power on the victim node. This is done to ensure that the IPMI BMC is also dead and simulates a failure in the power supply. You should see the other node try to fence the victim, fail initially, then try again using the second, switched PDU. If you watch the PDU, you should see the power indicator LED go off and then come back on.
Note: To "pull the power", we can actually just log into the PDU and turn off the victim's power. In this case, we'll see the power restored when the PDU is used to fence the node. We can actually use the fence_apc fence agent to pull the power, as we'll see.
Test                                           Victim      Pass?
echo c > /proc/sysrq-trigger                   an-node01   Yes / No
fence_apc_snmp -a pdu2.alteeve.ca -n 1 -o off  an-node01   Yes / No
echo c > /proc/sysrq-trigger                   an-node02   Yes / No
fence_apc_snmp -a pdu2.alteeve.ca -n 2 -o off  an-node02   Yes / No

After the lost node is recovered, remember to restart cman before starting the next test.

Hanging an-node01

Be sure to be tailing the /var/log/messages on an-node02. Go to an-node01's first terminal and run the following command.

Warning: This command will not return and you will lose all ability to talk to this node until it is rebooted.

On an-node01 run:

echo c > /proc/sysrq-trigger

On an-node02's syslog terminal, you should see the following entries in the log.

Dec 13 12:42:39 an-node02 corosync[2758]:   [TOTEM ] A processor failed, forming new configuration.
Dec 13 12:42:41 an-node02 corosync[2758]:   [QUORUM] Members[1]: 2
Dec 13 12:42:41 an-node02 corosync[2758]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:42:41 an-node02 corosync[2758]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.2) ; members(old:2 left:1)
Dec 13 12:42:41 an-node02 corosync[2758]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:42:41 an-node02 kernel: dlm: closing connection to node 1
Dec 13 12:42:41 an-node02 fenced[2817]: fencing node an-node01.alteeve.ca
Dec 13 12:42:56 an-node02 fenced[2817]: fence an-node01.alteeve.ca success

Perfect!

If you are watching an-node01's display, you should now see it starting to boot back up.

Note: Remember to start cman once the node boots back up before trying the next test.
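On the freshly booted node, that means running the same init script call we used when first starting the cluster;

/etc/init.d/cman start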

Cutting the Power to an-node01

As was discussed earlier, IPMI and other out-of-band management interfaces have a fatal flaw as a fence device. Their BMC draws its power from the same power supply as the node itself. Thus, when the power supply itself fails (or the mains connection is pulled/tripped over), fencing via IPMI will fail. This makes the power supply a single point of failure, which is what the PDU protects us against.

So to simulate a failed power supply, we're going to use an-node02's fence_apc fence agent to turn off the power to an-node01.

Alternatively, you could also just unplug the power and the fence would still succeed. The fence call only needs to confirm that the node is off to succeed. Whether the node restarts after or not is not important so far as the cluster is concerned.
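If you ever want to confirm an outlet's state before or after a test, the same fence agent can query it. This is just a convenience check; the exact output wording may vary with the agent version.

fence_apc_snmp -a pdu2.alteeve.ca -n 1 -o status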

From an-node02, pull the power on an-node01 with the following call;

fence_apc_snmp -a pdu2.alteeve.ca -n 1 -o off
Success: Powered OFF

Back on an-node02's syslog, we should see the following entries;

Dec 13 12:45:46 an-node02 corosync[2758]:   [TOTEM ] A processor failed, forming new configuration.
Dec 13 12:45:48 an-node02 corosync[2758]:   [QUORUM] Members[1]: 2
Dec 13 12:45:48 an-node02 corosync[2758]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:45:48 an-node02 corosync[2758]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.2) ; members(old:2 left:1)
Dec 13 12:45:48 an-node02 corosync[2758]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:45:48 an-node02 kernel: dlm: closing connection to node 1
Dec 13 12:45:48 an-node02 fenced[2817]: fencing node an-node01.alteeve.ca
Dec 13 12:46:08 an-node02 fenced[2817]: fence an-node01.alteeve.ca dev 0.0 agent fence_ipmilan result: error from agent
Dec 13 12:46:08 an-node02 fenced[2817]: fence an-node01.alteeve.ca success

Hoozah!

Notice that there is an error from fence_ipmilan. This is exactly what we expected, because the IPMI BMC lost power and couldn't respond.

So now we know that an-node01 can be fenced successfully from both fence devices. Now we need to run the same tests against an-node02.

Hanging an-node02

Warning: DO NOT ASSUME THAT an-node02 WILL FENCE PROPERLY JUST BECAUSE an-node01 PASSED! There are many ways that a fence could fail; a bad password, a misconfigured device, a cable plugged into the wrong port on the PDU and so on. Always test all nodes using all methods!

Be sure to be tailing the /var/log/messages on an-node01. Go to an-node02's first terminal and run the following command.

Note: This command will not return and you will lose all ability to talk to this node until it is rebooted.

On an-node02 run:

echo c > /proc/sysrq-trigger

On an-node01's syslog terminal, you should see the following entries in the log.

Dec 13 12:52:34 an-node01 corosync[3445]:   [TOTEM ] A processor failed, forming new configuration.
Dec 13 12:52:36 an-node01 corosync[3445]:   [QUORUM] Members[1]: 1
Dec 13 12:52:36 an-node01 corosync[3445]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:52:36 an-node01 corosync[3445]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:2 left:1)
Dec 13 12:52:36 an-node01 corosync[3445]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:52:36 an-node01 kernel: dlm: closing connection to node 2
Dec 13 12:52:36 an-node01 fenced[3501]: fencing node an-node02.alteeve.ca
Dec 13 12:52:51 an-node01 fenced[3501]: fence an-node02.alteeve.ca success

Again, perfect!

Cutting the Power to an-node02

From an-node01, pull the power on an-node02 with the following call;

fence_apc_snmp -a pdu2.alteeve.ca -n 2 -o off
Success: Powered OFF

Back on an-node01's syslog, we should see the following entries;

Dec 13 12:55:58 an-node01 corosync[3445]:   [TOTEM ] A processor failed, forming new configuration.
Dec 13 12:56:00 an-node01 corosync[3445]:   [QUORUM] Members[1]: 1
Dec 13 12:56:00 an-node01 corosync[3445]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 13 12:56:00 an-node01 corosync[3445]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:2 left:1)
Dec 13 12:56:00 an-node01 kernel: dlm: closing connection to node 2
Dec 13 12:56:00 an-node01 corosync[3445]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec 13 12:56:00 an-node01 fenced[3501]: fencing node an-node02.alteeve.ca
Dec 13 12:56:20 an-node01 fenced[3501]: fence an-node02.alteeve.ca dev 0.0 agent fence_ipmilan result: error from agent
Dec 13 12:56:20 an-node01 fenced[3501]: fence an-node02.alteeve.ca success

Woot!

Only now can we safely say that our fencing is set up and working properly.

Testing Network Redundancy

Next up on the testing block is our network configuration. Seeing as we've built our bonds, we now need to test that they are working properly.

  • Make sure that cman has started on both nodes.

First, we'll test all network cables individually, one node and one bonded interface at a time.

  • For each network; IFN, SN and BCN;
    • On both nodes, start a ping flood against the opposing node, specifying the appropriate network name suffix, in the first window and start tailing syslog in the second window. (Example monitoring commands are shown below, after this list.)
    • Watch each bond's /proc/net/bonding/bondX file to see which interfaces are active.
    • Pull the currently-active network cable from the bond (either at the switch or at the node).
    • Check the state of the bonds again and see that they've switched to their backup interface. If a node gets fenced, you know something went wrong. You should see a handful of lost packets in the ping flood.
    • Restore the network cable and wait 2 minutes, then verify that the old primary interface was restored. You will see another handful of lost packets in the flood during the recovery.
    • Pull the cable again, then restore it. This time, do not wait 2 minutes. After just a few seconds, pull the backup link and ensure that the bond immediately resumed use of the primary interface.
    • Repeat the above steps for all bonds on both nodes. This will take a while, but you need to ensure configuration errors are found now.
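As an example of the monitoring commands referenced in the list above, here is what the BCN test for bond0 might look like when run from an-node01. The .bcn, .sn and .ifn host name suffixes are the ones configured earlier in this tutorial; adjust the bond number and suffix for each network.

# Window 1; watch which slave interface is currently active.
watch -n 1 cat /proc/net/bonding/bond0

# Window 2; flood-ping the peer over the network being tested.
ping -f an-node02.bcn

# Window 3; watch syslog for any sign of cluster trouble.
tail -f -n 0 /var/log/messages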
Warning: Testing the complete primary switch failure and subsequent recovery is very, very important. Please do NOT skip this step!

Once all bonds have been tested, we'll do a final test by failing the primary switch.

  • Cut the power to the switch.
  • Check all bond status files. Confirm that all have switched to their backup links.
  • Restore power to the switch and wait 2 minutes.
  • Confirm that the bonds did not switch back to the primary interfaces before the switch was ready to move data.

If all of these steps pass and the cluster doesn't partition, then you can be confident that your network is configured properly for full redundancy.

Network Testing Terminal Layout

If you have a couple of monitors, particularly one with portrait mode, you might be able to open 16 terminals at once. This is how many are needed to run ping floods, watch the bond status files, tail syslog and watch cman_tool all at the same time. This configuration makes it very easy to keep a near real-time, complete view of all network components.

On the left window, the top-left terminal shows watch cman_tool status and the top-right terminal shows tail -f -n 0 /var/log/messages for an-node01. The bottom two terminals show the same for an-node02.

On the right, portrait-mode window, the terminal layout used for monitoring the bonded link status and ping floods are shown. There are two columns; an-node01 on the left and an-node02 on the right. Each column is stacked into six rows, bond0 on the top followed by ping -f an-node02.bcn, bond1 in the middle followed by ping -f an-node02.sn and bond2 at the bottom followed by ping -f an-node02.ifn. The left window shows the standard tail on syslog plus watch cman_tool status.

Terminal layout used for HA network testing; Calls shown.
Terminal layout used for HA network testing; Calls running.

How to Know if the Tests Passed

Well, the most obvious answer to this question is if the cluster is still working after a switch is powered off.

We can be a little more subtle than that though.

The state of each bond is viewable by looking in the special /proc/net/bonding/bondX files, where X is the bond number. Let's take a look at bond0 on an-node01.

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0

We can see that the currently active interface is eth0. This is the key bit we're going to be watching in these tests. I know that eth0 on an-node01 is connected to the first switch. So when I pull the cable to that switch, or when I fail that switch entirely, I should see eth3 take over.

We'll also be watching syslog. If things work right, we should not see any messages from the cluster during failure and recovery.
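If you prefer a more compact view, a single watch can track the active slave of every bond at once. This is just a convenience; the full status files shown below remain the authoritative view.

watch -n 1 'grep "Currently Active Slave" /proc/net/bonding/bond*'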

Failing The First Interface

Let's look at the first test. We'll fail an-node01's eth0 interface by pulling its cable.

On an-node01's syslog, you will see;

Dec 13 14:03:19 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:03:19 an-node01 kernel: bonding: bond0: link status definitely down for interface eth0, disabling it
Dec 13 14:03:19 an-node01 kernel: bonding: bond0: making interface eth3 the new active one.

Looking again at an-node01's bond0's status;

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0

We can see now that eth0 is down and that eth3 has taken over.

If you look at the windows running the ping flood, both an-node01 and an-node02 should show nearly the same number of lost packets;

PING an-node02 (10.20.0.2) 56(84) bytes of data.
........................

The link failure was handled successfully!

Recovering The First Interface

Surviving failure is only half the test. We also need to test the recovery of the interface. When ready, reconnect an-node01's eth0.

The first thing you should notice is in an-node01's syslog;

Dec 13 14:06:40 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:06:40 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.

The bond will still be using eth3, so let's wait two minutes.

After the two minutes, you should see the following additional syslog entries.

Dec 13 14:08:40 an-node01 kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex.
Dec 13 14:08:40 an-node01 kernel: bonding: bond0: making interface eth0 the new active one.

If we go back to the bond status file, we'll see that the eth0 interface has been restored.

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0

Note that the only difference from before is that eth0's Link Failure Count has been incremented to 1.

The test has passed!

Now repeat the test for the other two bonds, then for all three bonds on an-node02. Remember to also repeat each test, but pull the backup interface before the two-minute delay has completed; the primary interface should immediately take over again. This will confirm that failover of the backup link is also working properly.

Failing The First Switch

Note: Make sure that cman is running before beginning the test! The real test is less about the failure and recovery of the network itself and more about whether it fails and recovers in such a way that the cluster stays up and no partitioning occurs.

Check that all bonds on both nodes are using their primary interfaces. Confirm your cabling to ensure that these are all routed to the primary switch and that all backup links are cabled into the backup switch. Once done, pull the power to the primary switch. Both nodes should show similar output in their syslog windows;

Dec 13 14:16:17 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 14:16:17 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:16:17 an-node01 kernel: bonding: bond0: link status definitely down for interface eth0, disabling it
Dec 13 14:16:17 an-node01 kernel: bonding: bond0: making interface eth3 the new active one.
Dec 13 14:16:17 an-node01 kernel: bonding: bond2: link status definitely down for interface eth2, disabling it
Dec 13 14:16:17 an-node01 kernel: bonding: bond2: making interface eth5 the new active one.
Dec 13 14:16:17 an-node01 kernel: device eth2 left promiscuous mode
Dec 13 14:16:17 an-node01 kernel: device eth5 entered promiscuous mode
Dec 13 14:16:17 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 14:16:18 an-node01 kernel: bonding: bond1: link status definitely down for interface eth1, disabling it
Dec 13 14:16:18 an-node01 kernel: bonding: bond1: making interface eth4 the new active one.

I can look at an-node01's /proc/net/bonding/bond0 file and see:

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: down
Link Failure Count: 3
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 2
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0

Notice Currently Active Slave is now eth3? You can also see now that eth0's link is down (MII Status: down).

It should be the same story for all the other bonds on both nodes.

If we check the status of the cluster, we'll see that all is good.

cman_tool status
Version: 6.2.0
Config Version: 7
Cluster Name: an-cluster-A
Cluster Id: 24561
Cluster Member: Yes
Cluster Generation: 40
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1  
Active subsystems: 7
Flags: 2node 
Ports Bound: 0  
Node name: an-node01.alteeve.ca
Node ID: 1
Multicast addresses: 239.192.95.81 
Node addresses: 10.20.0.1

Success! We just failed the primary switch without any interruption of clustered services.

We're not out of the woods yet, though...

Restoring The First Switch

Now that we've confirmed all of the bonds are working on the backup switch, let's restore power to the first switch.

Warning: Be sure to wait five minutes after restoring power before declaring the recovery a success! Some configuration faults will take a few minutes to appear.

It is very important to wait for a while after restoring power to the switch. Some of the common problems that can break your cluster will not show up immediately. A good example is a misconfiguration of STP. In this case, the switch will come up, a short time will pass and then the switch will trigger an STP reconfiguration. Once this happens, both switches will block traffic for many seconds. This will partition your cluster.

So then, let's power it back up.

Within a few moments, you should see this in your syslog;

Dec 13 14:19:30 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:19:30 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
Dec 13 14:19:30 an-node01 kernel: e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:19:30 an-node01 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:19:30 an-node01 kernel: bonding: bond2: link status up for interface eth2, enabling it in 120000 ms.
Dec 13 14:19:30 an-node01 kernel: bonding: bond1: link status up for interface eth1, enabling it in 120000 ms.

As with the individual link test, the backup interfaces will remain in use for two minutes. This is critical because miimon has detected the connection to the switches, but the switches are still a long way from being able to route traffic. After the two minutes, we'll see the primary interfaces return to active state.

Dec 13 14:20:25 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:20:25 an-node01 kernel: bonding: bond0: link status down again after 55000 ms for interface eth0.
Dec 13 14:20:26 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 14:20:26 an-node01 kernel: bonding: bond1: link status down again after 55800 ms for interface eth1.
Dec 13 14:20:27 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 14:20:27 an-node01 kernel: bonding: bond2: link status down again after 56800 ms for interface eth2.
Dec 13 14:20:27 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:27 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
Dec 13 14:20:28 an-node01 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:28 an-node01 kernel: bonding: bond1: link status up for interface eth1, enabling it in 120000 ms.
Dec 13 14:20:29 an-node01 kernel: e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:29 an-node01 kernel: bonding: bond2: link status up for interface eth2, enabling it in 120000 ms.
Dec 13 14:20:31 an-node01 kernel: e1000e: eth0 NIC Link is Down
Dec 13 14:20:31 an-node01 kernel: bonding: bond0: link status down again after 3500 ms for interface eth0.
Dec 13 14:20:32 an-node01 kernel: e1000e: eth1 NIC Link is Down
Dec 13 14:20:32 an-node01 kernel: bonding: bond1: link status down again after 4100 ms for interface eth1.
Dec 13 14:20:32 an-node01 kernel: e1000e: eth2 NIC Link is Down
Dec 13 14:20:32 an-node01 kernel: bonding: bond2: link status down again after 3500 ms for interface eth2.
Dec 13 14:20:33 an-node01 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:33 an-node01 kernel: bonding: bond0: link status up for interface eth0, enabling it in 120000 ms.
Dec 13 14:20:34 an-node01 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:34 an-node01 kernel: bonding: bond1: link status up for interface eth1, enabling it in 120000 ms.
Dec 13 14:20:35 an-node01 kernel: e1000e: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:20:35 an-node01 kernel: bonding: bond2: link status up for interface eth2, enabling it in 120000 ms.

See all that bouncing? That is caused by many switches showing a link (that is, the MII status) without actually being able to push traffic. As part of the switch's boot sequence, the links will go down and come back up a couple of times. The two-minute counter will reset with each bounce, so the recovery time is actually quite a bit longer than two minutes. This is fine; there is no need to rush back to the first switch.

Note that you will not see this bouncing on switches that hold back on MII status until finished booting.
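The two-minute grace period discussed here comes from the bond's updelay setting. As a reminder only (the ifcfg-bondX files created earlier in this tutorial are authoritative; the line below is a sketch of the relevant options), the behaviour is driven by something like this in /etc/sysconfig/network-scripts/ifcfg-bond0;

# mode=1 is active-backup; miimon polls link state every 100ms;
# updelay=120000 waits two minutes before trusting a recovered link.
BONDING_OPTS="mode=1 miimon=100 updelay=120000 downdelay=0 primary=eth0"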

After a few minutes, the old interfaces will actually be restored.

Dec 13 14:22:33 an-node01 kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex.
Dec 13 14:22:33 an-node01 kernel: bonding: bond0: making interface eth0 the new active one.
Dec 13 14:22:34 an-node01 kernel: bond1: link status definitely up for interface eth1, 1000 Mbps full duplex.
Dec 13 14:22:34 an-node01 kernel: bonding: bond1: making interface eth1 the new active one.
Dec 13 14:22:35 an-node01 kernel: bond2: link status definitely up for interface eth2, 1000 Mbps full duplex.
Dec 13 14:22:35 an-node01 kernel: bonding: bond2: making interface eth2 the new active one.
Dec 13 14:22:35 an-node01 kernel: device eth5 left promiscuous mode
Dec 13 14:22:35 an-node01 kernel: device eth2 entered promiscuous mode

Complete success!

Warning: It is worth restating the importance of spreading your two fence methods across two switches. If both your PDU(s) and your IPMI (or iLO, etc.) interfaces all run through one switch, that switch becomes a single point of failure. Generally, I run the IPMI/iLO/etc. fence devices on the primary switch and the PDU(s) on the secondary switch.

Failing The Secondary Switch

Before we can say that everything is perfect, we need to test failing and recovering the secondary switch. The main purpose of this test is to ensure that there are no problems caused when the secondary switch restarts.

To fail the switch, as we did with the primary switch, simply cut its power. We should see the following in both nodes' syslog;

Dec 13 14:30:57 an-node01 kernel: e1000e: eth3 NIC Link is Down
Dec 13 14:30:57 an-node01 kernel: bonding: bond0: link status definitely down for interface eth3, disabling it
Dec 13 14:30:58 an-node01 kernel: e1000e: eth4 NIC Link is Down
Dec 13 14:30:58 an-node01 kernel: e1000e: eth5 NIC Link is Down
Dec 13 14:30:58 an-node01 kernel: bonding: bond1: link status definitely down for interface eth4, disabling it
Dec 13 14:30:58 an-node01 kernel: bonding: bond2: link status definitely down for interface eth5, disabling it

Let's take a look at an-node01's bond0 status file.

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 120000
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 3
Permanent HW addr: 00:e0:81:c7:ec:49
Slave queue ID: 0

Slave Interface: eth3
MII Status: down
Link Failure Count: 3
Permanent HW addr: 00:1b:21:9d:59:fc
Slave queue ID: 0

Note that the eth3 interface is shown as down. There should have been no dropped packets in the ping-flood window at all.

Restoring The Second Switch

When the power is restored to the switch, we'll see the same "bouncing" as the switch goes through its startup process. Notice that the backup link also remains listed as down for 2 minutes, despite the interface not being used by the bonded interface.

Dec 13 14:33:36 an-node01 kernel: e1000e: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:33:36 an-node01 kernel: e1000e: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:33:36 an-node01 kernel: bonding: bond1: link status up for interface eth4, enabling it in 120000 ms.
Dec 13 14:33:36 an-node01 kernel: bonding: bond2: link status up for interface eth5, enabling it in 120000 ms.
Dec 13 14:33:37 an-node01 kernel: e1000e: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:33:37 an-node01 kernel: bonding: bond0: link status up for interface eth3, enabling it in 120000 ms.
Dec 13 14:34:34 an-node01 kernel: e1000e: eth5 NIC Link is Down
Dec 13 14:34:34 an-node01 kernel: bonding: bond2: link status down again after 58000 ms for interface eth5.
Dec 13 14:34:36 an-node01 kernel: e1000e: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 13 14:34:36 an-node01 kernel: bonding: bond2: link status up for interface eth5, enabling it in 120000 ms.
Dec 13 14:34:38 an-node01 kernel: e1000e: eth5 NIC Link is Down
Dec 13 14:34:38 an-node01 kernel: bonding: bond2: link status down again after 2000 ms for interface eth5.
Dec 13 14:34:40 an-node01 kernel: e1000e: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec 13 14:34:40 an-node01 kernel: bonding: bond2: link status up for interface eth5, enabling it in 120000 ms.

After two minutes from the last bounce, we'll see the backup interfaces return to the up state in the bond's status file.

Dec 13 14:35:36 an-node01 kernel: bond1: link status definitely up for interface eth4, 1000 Mbps full duplex.
Dec 13 14:35:37 an-node01 kernel: bond0: link status definitely up for interface eth3, 1000 Mbps full duplex.
Dec 13 14:36:40 an-node01 kernel: bond2: link status definitely up for interface eth5, 1000 Mbps full duplex.

After a full five minutes, the cluster and the network remain stable. We can officially declare our network to be fully highly available!

Installing DRBD

DRBD is an open-source application for real-time, block-level disk replication created and maintained by Linbit. We will use this to keep the data on our cluster consistent between the two nodes.

To install it, we have three choices;

  1. Purchase a Red Hat blessed, fully supported copy from Linbit.
  2. Install from the freely available, community maintained ELRepo repository.
  3. Install from source files.

We will be using the 8.3.x version of DRBD. This tracks the Red Hat and Linbit supported versions, providing the most-tested combination and a painless path to move to a fully supported version, should you decide to do so down the road.

Option 1 - Fully Supported by Red Hat and Linbit

Red Hat decided to no longer directly support DRBD in EL6 to narrow down what applications they shipped and focus on improving those components. Given the popularity of DRBD, however, Red Hat struck a deal with Linbit, the authors and maintainers of DRBD. You have the option of purchasing a fully supported version of DRBD that is blessed by Red Hat for use under Red Hat Enterprise Linux 6.

If you are building a fully supported cluster, please contact Linbit to purchase DRBD. Once done, you will get an email with your login information and, most importantly here, the URL hash needed to access the official repositories.

First you will need to add an entry in /etc/yum.repos.d/ for DRBD, but this needs to be hand-crafted, as you must specify the URL hash given to you in the email as part of the repository configuration.

  • Log into the Linbit portal.
  • Click on Account.
  • Under Your account details, click on the hash string to the right of URL hash:.
  • Click on RHEL 6 (even if you are using CentOS or another EL6 distro).

This will take you to a new page called Instructions for using the DRBD package repository. The detailed installation instructions are found there.

Let's use the imaginary URL hash of abcdefghijklmnopqrstuvwxyz0123456789ABCD and assume we are in fact using the x86_64 architecture. Given this, we would create the following repository configuration file.

vim /etc/yum.repos.d/linbit.repo
[drbd-8]
name=DRBD 8
baseurl=http://packages.linbit.com/abcdefghijklmnopqrstuvwxyz0123456789ABCD/rhel6/x86_64
gpgcheck=0

Once this is saved, you can install DRBD using yum;

yum install drbd kmod-drbd

Done!

Option 2 - Install From ELRepo

ELRepo is a community-maintained repository of packages for Enterprise Linux; Red Hat Enterprise Linux and its derivatives like CentOS. This is the easiest option for a freely available DRBD package.

The main concern with this option is that you are ceding control of DRBD to a community-controlled project. This is a trusted repo, but there are still undeniable security concerns.

Check for the latest installation RPM and information;

# Install the ELRepo GPG key, add the repo and install DRBD.
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm
Preparing...                ########################################### [100%]
   1:elrepo-release         ########################################### [100%]
yum install drbd83-utils kmod-drbd83

This is the method used for this tutorial.
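Before moving on, you can quickly confirm that both packages landed (same package names as above);

rpm -q drbd83-utils kmod-drbd83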

Option 3 - Install From Source

If you do not wish to pay for access to the official DRBD repository and do not feel comfortable adding a public repository, your last option is to install from Linbit's source code. The benefit of this is that you can vet the source before installing it, making it a more secure option. The downside is that you will need to manually install updates and security fixes as they are made available.

On Both nodes run:

# Download, compile and install DRBD
yum install flex gcc make kernel-devel
wget -c http://oss.linbit.com/drbd/8.3/drbd-8.3.15.tar.gz
tar -xvzf drbd-8.3.15.tar.gz
cd drbd-8.3.15
./configure \
   --prefix=/usr \
   --localstatedir=/var \
   --sysconfdir=/etc \
   --with-utils \
   --with-km \
   --with-udev \
   --with-pacemaker \
   --with-rgmanager \
   --with-bashcompletion
make
make install
chkconfig --add drbd
chkconfig drbd off

Hooking DRBD Into The Cluster's Fencing

Warning: This script has no delay built into it. In many cases, if the link between the DRBD resources fails, both nodes may fence simultaneously, causing both nodes to shut down. If you add sleep 10; to the script on one of the nodes, you can ensure that dual-fencing won't occur.

We will use a script written by Lon Hohberger of Red Hat. This script will capture fence calls from DRBD and, in turn, call the cluster's fence_node against the opposing node. In this way, DRBD will avoid split-brain without the need to maintain two separate fence configurations.

On Both nodes run:

# Obliterate peer - fence via cman
wget -c https://alteeve.ca/files/an-cluster/sbin/obliterate-peer.sh -O /sbin/obliterate-peer.sh
chmod a+x /sbin/obliterate-peer.sh
ls -lah /sbin/obliterate-peer.sh
-rwxr-xr-x 1 root root 2.1K May  4  2011 /sbin/obliterate-peer.sh

We'll configure DRBD to use this script shortly.
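If you are concerned about the simultaneous-fencing issue described in the warning above, one simple workaround (a sketch only, not part of the stock script) is to add a short delay near the top of the copy on one node only. For example, on an-node02;

# Insert a ten second delay right after the shebang line (an-node02 only).
sed -i '2i sleep 10;' /sbin/obliterate-peer.sh

This gives an-node01 a head start, so under normal conditions only one fence call will actually complete.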

Alternate Fence Handler; rhcs_fence

Note: Caveat: The author of this tutorial is also the author of this script.

A new fence handler that ties DRBD into RHCS, called rhcs_fence, is now available with the goal of replacing obliterate-peer.sh. It aims to extend Lon's script, which hasn't been actively developed in some time.

This agent has had minimal testing, so please test thoroughly when using it.

This agent addresses the simultaneous fencing issue by automatically adding a delay to the fence call based on the host node's ID number, with the node having ID of 1 having no delay at all. It is also a little more elegant about how it handles the actual fence call with the goal of being more reliable when a fence action takes longer than usual to complete.

To install it, run the following on both nodes.

wget -c https://raw.github.com/digimer/rhcs_fence/master/rhcs_fence 
chmod 755 rhcs_fence
mv rhcs_fence /sbin/
ls -lah /sbin/rhcs_fence
-rwxr-xr-x 1 root root 15K Jan 24 22:04 /sbin/rhcs_fence

The "Why" of Our Layout

We will be creating three separate DRBD resources. The reason for this is to minimize the chance of data loss in a split-brain event.

We're going to take steps to ensure that a split-brain is exceedingly unlikely, but we always have to plan for the worst case scenario. The biggest concern with recovering from a split-brain is that, by necessity, one of the nodes will lose data. Further, there is no way to automate the recovery, as there is no clear way for DRBD to tell which node has the more valuable data.

Consider this scenario;

  • You have a two-node cluster running two VMs. One is a mirror for a project and the other is an accounting application. Node 1 hosts the mirror, Node 2 hosts the accounting application.
  • A partition occurs and both nodes try to fence the other.
  • Network access is lost, so both nodes fall back to fencing using PDUs.
  • Both nodes have redundant power supplies, and at some point in time, the power cables on the second PDU got reversed.
  • The fence_apc_snmp agent succeeds, because the requested outlets were shut off. However, due to the cabling mistake, neither node actually shut down.
  • Both nodes proceed to run independently, thinking they are the only node left.
  • During this split-brain, the mirror VM downloads over a gigabyte of updates. Meanwhile, the accountant updates the books, a change totalling less than one megabyte.

At this point, you will need to discard the changes on one of the nodes. So now you have to choose;

  • Is the node with the most changes more valid?
  • Is the node with the most recent changes more valid?

Neither criterion holds here, as the node with the older data and the smaller amount of changed data holds the accounting data, which is significantly more valuable.

Now imagine that both VMs have equally valuable data. What then? Which side do you discard?

The approach we will use is to create two separate DRBD resources for the VMs. Then we will assign the VMs into two groups; the VMs designed to normally run on one node will go on one resource, while the VMs designed to normally run on the other node will share the second resource.

With all the VMs on a given resource normally running on the same node, we can fairly easily decide which node to discard changes on, on a per-resource basis.
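To make this concrete, here is a sketch of what a manual split-brain recovery looks like with DRBD 8.3, assuming we have decided that an-node02's copy of resource r1 is the one to discard. The resource name and the victim node are examples only, and nothing should be using the resource on the victim before you run these.

# On an-node02, the node whose changes will be thrown away;
drbdadm secondary r1
drbdadm -- --discard-my-data connect r1

# On an-node01, the survivor, reconnect if the resource shows StandAlone;
drbdadm connect r1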

To summarize, we're going to create the following three resources;

  • r0; A small resource for the shared files formatted with GFS2.
  • r1; This resource will back the VMs designed to primarily run on an-node01.
  • r2; This resource will back the VMs designed to primarily run on an-node02.

Creating The Partitions For DRBD

It is possible to use LVM on the hosts and simply create LVs to back our DRBD resources. However, this causes confusion, as LVM will see the PV signatures on both the DRBD backing devices and the DRBD devices themselves. Getting around this requires editing LVM's filter option, which is somewhat complicated. Not overly so, mind you, but enough to be outside the scope of this document.

Also, by working with parted directly, we get the chance to make sure that the DRBD partitions start on an even 64 KiB boundary. This is important for decent performance on Windows VMs, as we will see later. This is true for both traditional platter and modern solid-state drives.

On our nodes, we created three primary disk partitions;

  • /dev/sda1; The /boot partition.
  • /dev/sda2; The root / partition.
  • /dev/sda3; The swap partition.

We will create a new extended partition. Then within it we will create three new partitions;

  • /dev/sda5; a small partition we will later use for our shared GFS2 partition.
  • /dev/sda6; a partition big enough to host the VMs that will normally run on an-node01.
  • /dev/sda7; a partition big enough to host the VMs that will normally run on an-node02.

As we create each partition, we will do a little math to ensure that the start sector is on a 64 KiB boundary.
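To make the math concrete (the sector numbers here are illustrative only): 64 KiB is 65,536 bytes, which is 128 sectors of 512 bytes each. A start sector is aligned if it divides evenly by 128; for example, sector 96,000,000 is aligned (96,000,000 ÷ 128 = 750,000 exactly), while sector 96,000,100 is not. You can check this quickly in the shell;

# Prints 0 if the start sector is on a 64 KiB boundary.
echo $((96000000 % 128))
0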

Block Alignment

For performance reasons, we want to ensure that the file systems created within a VM match the block alignment of the underlying storage stack, clear down to the base partitions on /dev/sda (or whatever your lowest-level block device is).

Imagine this misaligned scenario;

Note: Not to scale
                 ________________________________________________________________
VM File system  |~~~~~|_______|_______|_______|_______|_______|_______|_______|__
                |~~~~~|==========================================================
DRBD Partition  |~~~~~|_______|_______|_______|_______|_______|_______|_______|__
64 KiB block    |_______|_______|_______|_______|_______|_______|_______|_______|
512byte sectors |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|

Now, when the guest wants to write one block worth of data, it actually causes two blocks to be written, causing avoidable disk I/O.

Note: Not to scale
                 ________________________________________________________________
VM File system  |~~~~~~~|_______|_______|_______|_______|_______|_______|_______|
                |~~~~~~~|========================================================
DRBD Partition  |~~~~~~~|_______|_______|_______|_______|_______|_______|_______|
64 KiB block    |_______|_______|_______|_______|_______|_______|_______|_______|
512byte sectors |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|

By making our partitions always start on 64 KiB boundaries, we keep the guest OS's file system in line with the DRBD backing device's blocks. Thus, all reads and writes in the guest OS touch a matching number of real blocks, maximizing disk I/O efficiency.

Thankfully, as we'll see in a moment, the parted program has a mode that will tell it to always optimally align partitions, so we won't need to do any crazy math.

Note: You will want to do this with SSD drives, too. It's true that the performance will remain about the same, but SSD drives have a limited number of write cycles, and aligning the blocks will minimize block writes.

Special thanks to Pasi Kärkkäinen for his patience in explaining to me the importance of disk alignment. He created two images which I used as templates for the ASCII art diagrams above.

Creating the DRBD Partitions

Here I will show you the values I entered to create the three partitions I needed on my nodes.

DO NOT DIRECTLY COPY THIS!

The values you enter will almost certainly be different.

We're going to use a program called parted to configure the disk /dev/sda. Pay close attention to the -a optimal switch. This tells parted to create new partitions with optimal block alignment, which is crucial for virtual machine performance.

parted -a optimal /dev/sda
GNU Parted 2.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)

We're now in the parted console. Before we start, let's take a look at the current disk configuration along with the amount of free space available.

print free
Model: ATA ST9500420ASG (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system     Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  269MB   268MB   primary  ext4            boot
 2      269MB   43.2GB  42.9GB  primary  ext4
 3      43.2GB  47.5GB  4295MB  primary  linux-swap(v1)
        47.5GB  500GB   453GB            Free Space

Before we can create the three DRBD partitions, we first need to create an extended partition within which we will create the three logical partitions. From the output above, we can see that the free space starts at 47.5GB and that the drive ends at 500GB. Knowing this, we can now create the extended partition.

mkpart extended 47.5GB 500GB
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
(Device or resource busy).  As a result, it may not reflect all of your changes
until after reboot.

Don't worry about that message, we will reboot when we finish.

So now we can confirm that the new extended partition was created by again printing the partition table and the free space.

print free
Model: ATA ST9500420ASG (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  269MB   268MB   primary   ext4            boot
 2      269MB   43.2GB  42.9GB  primary   ext4
 3      43.2GB  47.5GB  4295MB  primary   linux-swap(v1)
 4      47.5GB  500GB   453GB   extended                  lba
        47.5GB  500GB   453GB             Free Space
        500GB   500GB   24.6kB            Free Space

Perfect. So now we're going to create our three logical partitions. We're going to use the same start position as last time, but the end position will be 20 GiB further in.

mkpart logical 47.5GB 67.5GB
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
(Device or resource busy).  As a result, it may not reflect all of your changes
until after reboot.

We'll check again to see the new partition layout.

print free
Model: ATA ST9500420ASG (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  269MB   268MB   primary   ext4            boot
 2      269MB   43.2GB  42.9GB  primary   ext4
 3      43.2GB  47.5GB  4295MB  primary   linux-swap(v1)
 4      47.5GB  500GB   453GB   extended                  lba
 5      47.5GB  67.5GB  20.0GB  logical
        67.5GB  500GB   433GB             Free Space
        500GB   500GB   24.6kB            Free Space

Again, perfect. Now I have a total of 433GB left free. How you carve this up for your VMs will depend entirely on what kind of VMs you plan to install and what their needs are. For me, I will divide the space evenly into two logical partitions of 216.5GB (433 / 2 = 216.5).

The first partition will start at 67.5GB and end at 284GB (67.5 + 216.5 = 284).

mkpart logical 67.5GB 284GB
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
(Device or resource busy).  As a result, it may not reflect all of your changes
until after reboot.

Once again, let's look at the new partition table.

print free
Model: ATA ST9500420ASG (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  269MB   268MB   primary   ext4            boot
 2      269MB   43.2GB  42.9GB  primary   ext4
 3      43.2GB  47.5GB  4295MB  primary   linux-swap(v1)
 4      47.5GB  500GB   453GB   extended                  lba
 5      47.5GB  67.5GB  20.0GB  logical
 6      67.5GB  284GB   216GB   logical
        284GB   500GB   216GB             Free Space
        500GB   500GB   24.6kB            Free Space

Finally, our last partition will start at 284GB and use the rest of the free space, ending at 500GB.

mkpart logical 284GB 500GB
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda
(Device or resource busy).  As a result, it may not reflect all of your changes
until after reboot.

One last time, let's look at the partition table.

print free
Model: ATA ST9500420ASG (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  269MB   268MB   primary   ext4            boot
 2      269MB   43.2GB  42.9GB  primary   ext4
 3      43.2GB  47.5GB  4295MB  primary   linux-swap(v1)
 4      47.5GB  500GB   453GB   extended                  lba
 5      47.5GB  67.5GB  20.0GB  logical
 6      67.5GB  284GB   216GB   logical
 7      284GB   500GB   216GB   logical
        500GB   500GB   24.6kB            Free Space

Just as we asked for. Before we finish though, let's be extra careful and do a manual check of our three partitions to ensure that they are, in fact, aligned optimally. There will be no output from the following commands if the partitions are aligned.

(parted) align-check opt 5
(parted) align-check opt 6
(parted) align-check opt 7
(parted)

Excellent! We can now exit.

quit
Information: You may need to update /etc/fstab.

Now we need to reboot to make the kernel see the new partition table.

reboot

Done! Do this for both nodes, then proceed.

Configuring DRBD

DRBD is configured in two parts;

  • Global and common configuration options
  • Resource configurations

We will be creating three separate DRBD resources, so we will create three separate resource configuration files. More on that in a moment.

Configuring DRBD Global and Common Options

The first file to edit is /etc/drbd.d/global_common.conf. In this file, we will set global configuration options and set default resource configuration options. These default resource options can be overwritten in the actual resource files which we'll create once we're done here.

I'll explain the values we're setting here, and we'll put the explanation of each option in the file itself, as it will be useful to have them should you need to alter the files sometime in the future.

The first addition is in the handlers { } directive. We're going to add the fence-peer option and configure it to use the obliterate-peer.sh script we spoke about earlier in the DRBD section.

vim /etc/drbd.d/global_common.conf
	handlers {
		# This script is a wrapper for RHCS's 'fence_node' command line
		# tool. It will call a fence against the other node and return
		# the appropriate exit code to DRBD.
		fence-peer		"/sbin/obliterate-peer.sh";
	}
Note: If you used the rhcs_fence handler, use 'fence-peer "/usr/sbin/rhcs_fence";'.

We're going to add three options to the startup { } directive; we're going to tell DRBD to make both nodes "primary" on start, to wait five minutes on start for its peer to connect and, if the peer was degraded when it was last seen, to wait only two minutes.

	startup {
		# This tells DRBD to promote both nodes to Primary on start.
		become-primary-on	both;

		# This tells DRBD to wait five minutes for the other node to
		# connect. This should be longer than it takes for cman to
		# timeout and fence the other node *plus* the amount of time it
		# takes the other node to reboot. If you set this too short,
		# you could corrupt your data. If you want to be extra safe, do
		# not use this at all and DRBD will wait for the other node
		# forever.
		wfc-timeout		300;

		# This tells DRBD to wait for the other node for two minutes
		# if the other node was degraded the last time it was seen by
		# this node. This is a way to speed up the boot process when
		# the other node is out of commission for an extended duration.
		degr-wfc-timeout	120;
	}

For the disk { } directive, we're going to configure DRBD's behaviour when a split-brain is detected. By setting fencing to resource-and-stonith, we're telling DRBD to stop all disk access and call a fence against its peer node rather than proceeding.

	disk {
		# This tells DRBD to block IO and fence the remote node (using
		# the 'fence-peer' helper) when connection with the other node
		# is unexpectedly lost. This is what helps prevent split-brain
		# condition and it is incredibly important in dual-primary
		# setups!
		fencing			resource-and-stonith;
	}

In the net { } directive, we're going to tell DRBD that it is allowed to run in dual-primary mode and configure how it behaves if a split-brain occurs despite our best efforts. The recovery (or lack thereof) requires three options; what to do when neither node had been primary (after-sb-0pri), what to do if only one node had been primary (after-sb-1pri) and, finally, what to do if both nodes had been primary (after-sb-2pri), as will most likely be the case for us. This last instance will be configured to tell DRBD to simply drop the connection, which will require human intervention to correct.

At this point, you might be wondering why we don't simply run Primary/Secondary. The reason is live migration. When we push a VM across to the backup node, there is a short period of time where both nodes need to be writeable.

	net {
		# This tells DRBD to allow two nodes to be Primary at the same
		# time. It is needed when 'become-primary-on both' is set.
		allow-two-primaries;

		# The following three commands tell DRBD how to react should
		# our best efforts fail and a split brain occurs. You can learn
		# more about these options by reading the drbd.conf man page.
		# NOTE! It is not possible to safely recover from a split brain
		# where both nodes were primary. This case requires human
		# intervention, so 'disconnect' is the only safe policy.
		after-sb-0pri		discard-zero-changes;
		after-sb-1pri		discard-secondary;
		after-sb-2pri		disconnect;
	}

We'll make our usual backup of the configuration file, add the new sections and then create a diff to see exactly how things have changed.

cp /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
vim /etc/drbd.d/global_common.conf 
diff -u  /etc/drbd.d/global_common.conf.orig /etc/drbd.d/global_common.conf
--- /etc/drbd.d/global_common.conf.orig	2011-12-13 22:22:30.916128360 -0500
+++ /etc/drbd.d/global_common.conf	2011-12-13 22:26:30.733379609 -0500
@@ -14,22 +14,67 @@
 		# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
 		# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
 		# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
+
 		# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
+                # This script is a wrapper for RHCS's 'fence_node' command line
+                # tool. It will call a fence against the other node and return
+                # the appropriate exit code to DRBD.
+                fence-peer              "/sbin/obliterate-peer.sh";
 	}
 
 	startup {
 		# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
+
+                # This tells DRBD to promote both nodes to Primary on start.
+                become-primary-on       both;
+
+                # This tells DRBD to wait five minutes for the other node to
+                # connect. This should be longer than it takes for cman to
+                # timeout and fence the other node *plus* the amount of time it
+                # takes the other node to reboot. If you set this too short,
+                # you could corrupt your data. If you want to be extra safe, do
+                # not use this at all and DRBD will wait for the other node
+                # forever.
+                wfc-timeout             300;
+
+                # This tells DRBD to wait for the other node for two minutes
+                # if the other node was degraded the last time it was seen by
+                # this node. This is a way to speed up the boot process when
+                # the other node is out of commission for an extended duration.
+                degr-wfc-timeout        120;
 	}
 
 	disk {
 		# on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
 		# no-disk-drain no-md-flushes max-bio-bvecs
+
+                # This tells DRBD to block IO and fence the remote node (using
+                # the 'fence-peer' helper) when connection with the other node
+                # is unexpectedly lost. This is what helps prevent split-brain
+                # condition and it is incredibly important in dual-primary
+                # setups!
+                fencing                 resource-and-stonith;
 	}
 
 	net {
 		# sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
 		# max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
 		# after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
+
+
+                # This tells DRBD to allow two nodes to be Primary at the same
+                # time. It is needed when 'become-primary-on both' is set.
+                allow-two-primaries;
+
+                # The following three commands tell DRBD how to react should
+                # our best efforts fail and a split brain occurs. You can learn
+                # more about these options by reading the drbd.conf man page.
+                # NOTE! It is not possible to safely recover from a split brain
+                # where both nodes were primary. This case requires human
+                # intervention, so 'disconnect' is the only safe policy.
+                after-sb-0pri           discard-zero-changes;
+                after-sb-1pri           discard-secondary;
+                after-sb-2pri           disconnect;
 	}
 
 	syncer {

Configuring the DRBD Resources

As mentioned earlier, we are going to create three DRBD resources.

  • Resource r0, which will be device /dev/drbd0, will be the shared GFS2 partition.
  • Resource r1, which will be device /dev/drbd1, will provide disk space for VMs that will normally run on an-node01.
  • Resource r2, which will be device /dev/drbd2, will provide disk space for VMs that will normally run on an-node02.
Note: The reason for the two separate VM resources is to help protect against data loss in the off chance that a split-brain occurs, despite our counter-measures. As we will see later, recovering from a split brain requires discarding the changes on one side of the resource. If VMs are running on the same resource but on different nodes, this would lead to data loss. Using two resources helps prevent that scenario.

Each resource configuration will be in its own file, saved as /etc/drbd.d/rX.res. The three of them will be pretty much the same, so let's take a look at the first one, the GFS2 resource r0.res, and then we'll just look at the changes for r1.res and r2.res. These files won't exist initially.

vim /etc/drbd.d/r0.res
# This is the resource used for the shared GFS2 partition.
resource r0 {
	# This is the block device path.
	device		/dev/drbd0;

	# We'll use the normal internal metadisk (takes about 32MB/TB)
	meta-disk	internal;

	# This is the `uname -n` of the first node
	on an-node01.alteeve.ca {
		# The 'address' has to be the IP, not a hostname. This is the
		# node's SN (bond1) IP. The port number must be unique among
		# resources.
		address		10.10.0.1:7788;

		# This is the block device backing this resource on this node.
		disk		/dev/sda5;
	}
	# Now the same information again for the second node.
	on an-node02.alteeve.ca {
		address		10.10.0.2:7788;
		disk		/dev/sda5;
	}
}

Now copy this to r1.res and edit it for the an-node01 VM resource. The main differences are the resource name, r1, the block device, /dev/drbd1, the port, 7789, and the backing block device, /dev/sda6.

cp /etc/drbd.d/r0.res /etc/drbd.d/r1.res
vim /etc/drbd.d/r1.res
# This is the resource used for VMs that will normally run on an-node01.
resource r1 {
	# This is the block device path.
	device		/dev/drbd1;

	# We'll use the normal internal metadisk (takes about 32MB/TB)
	meta-disk	internal;

	# This is the `uname -n` of the first node
	on an-node01.alteeve.ca {
		# The 'address' has to be the IP, not a hostname. This is the
		# node's SN (bond1) IP. The port number must be unique among
		# resources.
		address		10.10.0.1:7789;

		# This is the block device backing this resource on this node.
		disk		/dev/sda6;
	}
	# Now the same information again for the second node.
	on an-node02.alteeve.ca {
		address		10.10.0.2:7789;
		disk		/dev/sda6;
	}
}

The last resource follows the same pattern; this time the resource name is r2, the device is /dev/drbd2, the port is 7790 and the backing block device is /dev/sda7.

cp /etc/drbd.d/r1.res /etc/drbd.d/r2.res
vim /etc/drbd.d/r2.res
# This is the resource used for VMs that will normally run on an-node02.
resource r2 {
	# This is the block device path.
	device		/dev/drbd2;

	# We'll use the normal internal metadisk (takes about 32MB/TB)
	meta-disk	internal;

	# This is the `uname -n` of the first node
	on an-node01.alteeve.ca {
		# The 'address' has to be the IP, not a hostname. This is the
		# node's SN (bond1) IP. The port number must be unique among
		# resources.
		address		10.10.0.1:7790;

		# This is the block device backing this resource on this node.
		disk		/dev/sda7;
	}
	# Now the same information again for the second node.
	on an-node02.alteeve.ca {
		address		10.10.0.2:7790;
		disk		/dev/sda7;
	}
}

The final step is to validate the configuration. This is done by running the following command;

drbdadm dump
# /etc/drbd.conf
common {
    protocol               C;
    net {
        allow-two-primaries;
        after-sb-0pri    discard-zero-changes;
        after-sb-1pri    discard-secondary;
        after-sb-2pri    disconnect;
    }
    disk {
        fencing          resource-and-stonith;
    }
    startup {
        wfc-timeout      300;
        degr-wfc-timeout 120;
        become-primary-on both;
    }
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error   "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer       /sbin/obliterate-peer.sh;
    }
}

# resource r0 on an-node01.alteeve.ca: not ignored, not stacked
resource r0 {
    on an-node01.alteeve.ca {
        device           /dev/drbd0 minor 0;
        disk             /dev/sda5;
        address          ipv4 10.10.0.1:7788;
        meta-disk        internal;
    }
    on an-node02.alteeve.ca {
        device           /dev/drbd0 minor 0;
        disk             /dev/sda5;
        address          ipv4 10.10.0.2:7788;
        meta-disk        internal;
    }
}

# resource r1 on an-node01.alteeve.ca: not ignored, not stacked
resource r1 {
    on an-node01.alteeve.ca {
        device           /dev/drbd1 minor 1;
        disk             /dev/sda6;
        address          ipv4 10.10.0.1:7789;
        meta-disk        internal;
    }
    on an-node02.alteeve.ca {
        device           /dev/drbd1 minor 1;
        disk             /dev/sda6;
        address          ipv4 10.10.0.2:7789;
        meta-disk        internal;
    }
}

# resource r2 on an-node01.alteeve.ca: not ignored, not stacked
resource r2 {
    on an-node01.alteeve.ca {
        device           /dev/drbd2 minor 2;
        disk             /dev/sda7;
        address          ipv4 10.10.0.1:7790;
        meta-disk        internal;
    }
    on an-node02.alteeve.ca {
        device           /dev/drbd2 minor 2;
        disk             /dev/sda7;
        address          ipv4 10.10.0.2:7790;
        meta-disk        internal;
    }
}

You'll note that the output is formatted differently from the configuration files we created, but the values themselves are the same. If there had been errors, you would have seen them printed. Fix any problems before proceeding. Once you get a clean dump, copy the configuration over to the other node.

rsync -av /etc/drbd.d root@an-node02:/etc/
sending incremental file list
drbd.d/
drbd.d/global_common.conf
drbd.d/global_common.conf.orig
drbd.d/r0.res
drbd.d/r1.res
drbd.d/r2.res

sent 7534 bytes  received 129 bytes  5108.67 bytes/sec
total size is 7874  speedup is 1.03

Initializing The DRBD Resources

Now that we have DRBD configured, we need to initialize the DRBD backing devices and then bring up the resources for the first time.

Note: To save a bit of time and typing, the following sections will use a little bash magic. When commands need to be run on all three resources, rather than running the same command three times with the different resource names, we will use the short-hand form r{0,1,2} or r{0..2}.
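To see exactly what the shell will do with that shorthand, you can prefix any of the commands with echo; this is just an illustration and changes nothing on disk.

echo drbdadm create-md r{0..2}
drbdadm create-md r0 r1 r2

The shell, not drbdadm, performs the expansion, so the expanded form is exactly what gets run.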

On both nodes, create the new metadata on the backing devices. You may need to type yes to confirm the action if any data is seen. If DRBD sees an actual file system, it will error out and insist that you clear the partition first. You can do this by running dd if=/dev/zero of=/dev/sdaX bs=4M, where X is the partition you want to clear. This is called "zeroing out" a partition. The dd program does not print its progress and can take a long time. To check on its progress, open a new session to the server and run kill -USR1 $(pgrep -l '^dd$' | awk '{ print $1 }').
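For example, here is a destructive sketch of zeroing out the r0 backing partition, only needed if DRBD complains about an existing file system (adjust the partition to suit your layout);

dd if=/dev/zero of=/dev/sda5 bs=4M

Then, from a second terminal on the same node, ask the running dd to report its progress;

kill -USR1 $(pgrep -l '^dd$' | awk '{ print $1 }')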

If DRBD sees old metadata, it will prompt you to type yes before it will proceed. In my case, I had recently zeroed-out my drive so DRBD had no concerns and just created the metadata for the three resources.

drbdadm create-md r{0..2}
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

Before we go any further, we need to load the drbd kernel module. Note that you won't normally need to do this; later, after we get everything running the first time, we'll start and stop the DRBD resources using the /etc/init.d/drbd script, which loads and unloads the drbd kernel module as needed.

modprobe drbd
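If you'd like to confirm that the module actually loaded, a quick check is;

lsmod | grep -w drbd

If nothing is returned, the module did not load.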

Now go back to the terminal windows we had used to watch the cluster start. We now want to watch the output of cat /proc/drbd so we can keep tabs on the current state of the DRBD resources. We'll do this by using the watch program, which will refresh the output of the cat call every couple of seconds.

watch cat /proc/drbd
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03

Back in the first terminal, we need to attach the backing devices, /dev/sda{5..7}, to their respective DRBD resources, r{0..2}. After running the following command, you will see no output in the first terminal, but the second terminal's /proc/drbd should update.

drbdadm attach r{0..2}
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown   r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19515784
 1: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown   r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211418788
 2: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown   r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211034800

Take note of the connection state, cs:StandAlone, the current role, ro:Secondary/Unknown and the disk state, ds:Inconsistent/DUnknown. This tells us that our resources are not talking to one another, are not usable because they are in the Secondary state (you can't even read the /dev/drbdX device) and that the backing device does not have an up-to-date view of the data.

This all makes sense of course, as the resources are brand new.

So the next step is to connect the two nodes together. As before, we won't see any output from the first terminal, but the second terminal will change.

Note: After running the following command on the first node, its connection state will become cs:WFConnection which means that it is waiting for a connection from the other node.
drbdadm connect r{0..2}
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19515784
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211418788
 2: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:211034800

We can now see that the two nodes are talking to one another properly as the connection state has changed to cs:Connected. They can see that their peer node is in the same state as they are; Secondary/Inconsistent.

Seeing as the resources are brand new, there is no data to synchronize between the two nodes. We're going to issue a special command that will only ever be used this one time. It will tell DRBD to immediately consider the DRBD resources to be up to date.

On one node only, run;

drbdadm -- --clear-bitmap new-current-uuid r{0..2}

As before, look to the second terminal to see the new state of affairs.

version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 2: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Voila!

We could promote both sides to Primary by running drbdadm primary r{0..2} on both nodes, but there is no purpose in doing that at this stage as we can safely say our DRBD is ready to go. So instead, let's just stop DRBD entirely. We'll also prevent it from starting on boot as drbd will be managed by the cluster in a later step.

On both nodes run;

/etc/init.d/drbd stop
Stopping all DRBD resources: .

Now disable it from starting on boot.

chkconfig drbd off
chkconfig --list drbd
drbd           	0:off	1:off	2:off	3:off	4:off	5:off	6:off

The second terminal will start complaining that /proc/drbd no longer exists. This is because the drbd init script unloaded the drbd kernel module. It is expected and not a problem.

Configuring Clustered Storage

Before we can provision the first virtual machine, we must first create the storage that will back our VMs. This will take a few steps;

  • Configuring LVM's clustered locking and creating the PVs, VGs and LVs
  • Formatting and configuring the shared GFS2 partition.
  • Adding storage to the cluster's resource management.

Clustered Logical Volume Management

We will assign all three DRBD resources to be managed by clustered LVM. This isn't strictly needed for the GFS2 partition, as it uses DLM directly. However, the flexibility of LVM is very appealing, and will make later growth of the GFS2 partition quite trivial, should the need arise.

The real reason for clustered LVM in our cluster is to provide DLM-backed locking to the partitions, or logical volumes in LVM, that will be used to back our VMs. Of course, the flexibility of LVM managed storage is enough of a win to justify using LVM for our VMs in itself, and shouldn't be ignored here.

Configuring Clustered LVM Locking

Before we create the clustered LVM, we first need to make three changes to the LVM configuration.

  • Filter out the DRBD backing devices so that LVM doesn't see the same signature twice.
  • Switch from local locking to clustered locking.
  • Prevent fall-back to local locking when the cluster is not available.

Start by making a backup of lvm.conf and then begin editing it.

cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig
vim /etc/lvm/lvm.conf

The configuration option to filter out the DRBD backing devices is, unsurprisingly, filter = [ ... ]. By default, it is set to allow everything via the "a/.*/" regular expression. We're only putting LVM on DRBD, so we're going to flip that to reject everything except DRBD by changing the filter to "a|/dev/drbd*|", "r/.*/". If we didn't do this, LVM would see the same signature on the DRBD device and again on its backing device, and would then ignore the DRBD device. This filter allows LVM to inspect only the DRBD devices for LVM signatures.

Change;

    # By default we accept every block device:
    filter = [ "a/.*/" ]

To;

    # We're only using LVM on DRBD resource.
    filter = [ "a|/dev/drbd*|", "r/.*/" ]

For the locking, we're going to change the locking_type from 1 (local locking) to 3 (clustered locking). This is what tells LVM to use DLM.

Change;

    locking_type = 1

To;

    locking_type = 3

Lastly, we're also going to disallow fall-back to local locking. Normally, LVM would try to access a clustered LVM VG using local locking if DLM is not available. We want to prevent any access to the clustered LVM volumes except when the DLM is itself running. This is done by changing fallback_to_local_locking to 0.

Change;

    fallback_to_local_locking = 1

To;

    fallback_to_local_locking = 0

Save the changes, then let's run a diff against our backup to see a summary of the changes.

diff -u /etc/lvm/lvm.conf.orig /etc/lvm/lvm.conf
--- /etc/lvm/lvm.conf.orig	2011-12-14 17:42:16.416094972 -0500
+++ /etc/lvm/lvm.conf	2011-12-14 17:49:15.747097684 -0500
@@ -62,8 +62,8 @@
     # If it doesn't do what you expect, check the output of 'vgscan -vvvv'.
 
 
-    # By default we accept every block device:
-    filter = [ "a/.*/" ]
+    # We're only using LVM on DRBD resource.
+    filter = [ "a|/dev/drbd*|", "r/.*/" ]
 
     # Exclude the cdrom drive
     # filter = [ "r|/dev/cdrom|" ]
@@ -356,7 +356,7 @@
     # Type 3 uses built-in clustered locking.
     # Type 4 uses read-only locking which forbids any operations that might 
     # change metadata.
-    locking_type = 1
+    locking_type = 3
 
     # Set to 0 to fail when a lock request cannot be satisfied immediately.
     wait_for_locks = 1
@@ -372,7 +372,7 @@
     # to 1 an attempt will be made to use local file-based locking (type 1).
     # If this succeeds, only commands against local volume groups will proceed.
     # Volume Groups marked as clustered will be ignored.
-    fallback_to_local_locking = 1
+    fallback_to_local_locking = 0
 
     # Local non-LV directory that holds file-based locks while commands are
     # in progress.  A directory like /tmp that may get wiped on reboot is OK.

Perfect! Now copy the modified lvm.conf file to the other node.

rsync -av /etc/lvm/lvm.conf root@an-node02:/etc/lvm/
sending incremental file list
lvm.conf

sent 2351 bytes  received 283 bytes  5268.00 bytes/sec
total size is 28718  speedup is 10.90
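If you want a quick sanity check that the three changes arrived, this simple grep on an-node02 will print the live values; it's a convenience, not a required step.

grep -E '^[[:space:]]*(filter|locking_type|fallback_to_local_locking)' /etc/lvm/lvm.conf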

Testing the clvmd Daemon

A little later on, we're going to put clustered LVM under the control of rgmanager. Before we can do that, though, we need to start it manually so that we can create the LV that will back the GFS2 /shared partition, which we will also be adding to rgmanager when we build our storage services.

Before we start the clvmd daemon, we'll want to ensure that the cluster is running.

cman_tool status
Version: 6.2.0
Config Version: 7
Cluster Name: an-cluster-A
Cluster Id: 24561
Cluster Member: Yes
Cluster Generation: 68
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1  
Active subsystems: 7
Flags: 2node 
Ports Bound: 0  
Node name: an-node01.alteeve.ca
Node ID: 1
Multicast addresses: 239.192.95.81 
Node addresses: 10.20.0.1

It is, and both nodes are members. We can start the clvmd daemon now.

/etc/init.d/clvmd start
Starting clvmd: 
Activating VG(s):   No volume groups found
                                                           [  OK  ]

We've not created any clustered volume groups yet, so that complaint about not finding volume groups is expected.

We don't want clvmd to start at boot, as we will be putting it under the cluster's control, so make sure that it is disabled at boot, then stop it for now.

chkconfig clvmd off
chkconfig --list clvmd
clvmd          	0:off	1:off	2:off	3:off	4:off	5:off	6:off

Now stop it entirely.

/etc/init.d/clvmd stop
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]

Initialize our DRBD Resources for use as LVM PVs

This is the first time we're actually going to use DRBD and clustered LVM, so we need to make sure that both are started. Earlier we stopped them, so if they're not running now, we need to restart them.

First, check (and start if needed) drbd.

/etc/init.d/drbd status
drbd not loaded

It's stopped, so we'll start it on both nodes now.

/etc/init.d/drbd start
Starting DRBD resources: [ d(r0) d(r1) d(r2) n(r0) n(r1) n(r2) ].

It looks like it started, but let's confirm that the resources are all Connected, Primary and UpToDate.

/etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
m:res  cs         ro               ds                 p  mounted  fstype
0:r0   Connected  Primary/Primary  UpToDate/UpToDate  C
1:r1   Connected  Primary/Primary  UpToDate/UpToDate  C
2:r2   Connected  Primary/Primary  UpToDate/UpToDate  C

Excellent, now to check on clvmd.

/etc/init.d/clvmd status
clvmd is stopped

It's also stopped, so let's start it now.

/etc/init.d/clvmd start
Starting clvmd: 
Activating VG(s):   No volume groups found
                                                           [  OK  ]

Now we're ready to start!

Before we can use LVM, clustered or otherwise, we need to initialize one or more raw storage devices. This is done using the pvcreate command. We're going to do this on an-node01, then run pvscan on an-node02. We should see the newly initialized DRBD resources appear.

Running pvscan first, we'll see that no PVs have been created.

pvscan
  No matching physical volumes found

On an-node01, initialize the PVs;

pvcreate /dev/drbd{0..2}
  Writing physical volume data to disk "/dev/drbd0"
  Physical volume "/dev/drbd0" successfully created
  Writing physical volume data to disk "/dev/drbd1"
  Physical volume "/dev/drbd1" successfully created
  Writing physical volume data to disk "/dev/drbd2"
  Physical volume "/dev/drbd2" successfully created

On both nodes, re-run pvscan and the new PVs should show up. This works because DRBD is keeping the data in sync, including the new LVM signatures.

pvscan
  PV /dev/drbd0                      lvm2 [18.61 GiB]
  PV /dev/drbd1                      lvm2 [201.62 GiB]
  PV /dev/drbd2                      lvm2 [201.26 GiB]
  Total: 3 [421.49 GiB] / in use: 0 [0   ] / in no VG: 3 [421.49 GiB]

Done.

Creating Cluster Volume Groups

As with initializing the DRBD resources above, we will create our volume groups, VGs, on an-node01 only, but we will then see them on both nodes.

Check to confirm that no VGs exist;

vgdisplay
  No volume groups found

Now to create the VGs, we'll use the vgcreate command with the -c y switch, which tells LVM to make the VG a clustered VG. Note that when the clvmd daemon is running, -c y is implied. However, I like to get into the habit of using it because it will trigger an error if, for some reason, clvmd wasn't actually running.

On an-node01, create the three VGs.

  • VG for the GFS2 /shared partition;
vgcreate -c y shared-vg0 /dev/drbd0
  Clustered volume group "shared-vg0" successfully created
  • VG for the VMs that will primarily run on an-node01;
vgcreate -c y an01-vg0 /dev/drbd1
  Clustered volume group "an01-vg0" successfully created
  • VG for the VMs that will primarily run on an-node02;
vgcreate -c y an02-vg0 /dev/drbd2
  Clustered volume group "an02-vg0" successfully created

Now on both nodes, we should see the three new volume groups.

vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "an02-vg0" using metadata type lvm2
  Found volume group "an01-vg0" using metadata type lvm2
  Found volume group "shared-vg0" using metadata type lvm2

Creating a Logical Volume

At this stage, we're going to create only one LV for the GFS2 partition. We'll create the rest later when we're ready to provision the VMs. This will be the /shared partition, which we will discuss further in the next section.

As before, we'll create the LV on an-node01 and then verify it exists on both nodes.

Before we create our first LV, check lvscan.

lvscan

Nothing is returned.

On an-node01, create the LV on the shared-vg0 VG, using all of the available space.

lvcreate -l 100%FREE -n shared shared-vg0
  Logical volume "shared" created

Now on both nodes, check that the new LV exists.

lvscan
  ACTIVE            '/dev/shared-vg0/shared' [18.61 GiB] inherit

Perfect. We can now create our GFS2 partition.

Creating The Shared GFS2 Partition

The GFS2-formatted /shared partition will be used for four main purposes;

  • /shared/files; Storing files like ISO images needed when provisioning VMs.
  • /shared/provision; Storing short scripts used to call virt-install which handles the creation of our VMs.
  • /shared/definitions; This is where the XML definition files which define the emulated hardware backing our VMs are kept. This is the most critical directory as the cluster will look here when starting and recovering VMs.
  • /shared/archive; This is used to store old copies of the XML definition files. I like to make a time-stamped copy of definition files prior to altering and redefining a VM. This way, I can quickly and easily revert to an old configuration should I run into trouble.

Make sure that both drbd and clvmd are running.
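If you're not sure, the same status checks we used earlier will confirm it; start either daemon with its init script if it is stopped.

/etc/init.d/drbd status
/etc/init.d/clvmd status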

The mkfs.gfs2 call uses a few switches that are worth explaining;

  • -p lock_dlm; This tells GFS2 to use DLM for its clustered locking. Currently, this is the only supported locking type.
  • -j 2; This tells GFS2 to create two journals. This must match the number of nodes that will try to mount this partition at any one time.
  • -t an-cluster-A:shared; This is the lockspace name, which must be in the format <clustername>:<fsname>. The clustername must match the one in cluster.conf, and any node that belongs to a cluster of another name will not be allowed to access the file system.
Note: Depending on the size of the new partition, this call could take a while to complete. Please be patient.

Then, on an-node01, run;

mkfs.gfs2 -p lock_dlm -j 2 -t an-cluster-A:shared /dev/shared-vg0/shared
This will destroy any data on /dev/shared-vg0/shared.
It appears to contain: symbolic link to `../dm-0'
Are you sure you want to proceed? [y/n] y
Device:                    /dev/shared-vg0/shared
Blocksize:                 4096
Device Size                18.61 GB (4878336 blocks)
Filesystem Size:           18.61 GB (4878333 blocks)
Journals:                  2
Resource Groups:           75
Locking Protocol:          "lock_dlm"
Lock Table:                "an-cluster-A:shared"
UUID:                      162a80eb-59b3-08bd-5d69-740cbb60aa45

On both nodes, run all of the following commands.

mkdir /shared
mount /dev/shared-vg0/shared /shared/

Confirm that /shared is now mounted.

df -hP /shared
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/shared--vg0-shared   19G  259M   19G   2% /shared

Note that the path under Filesystem is different from what we used when creating the GFS2 partition. This is an effect of Device Mapper, which is used by LVM to create symlinks to actual block device paths. If we look at our /dev/shared-vg0/shared device and the device from df, /dev/mapper/shared--vg0-shared, we'll see that they both point to the same actual block device.

ls -lah /dev/shared-vg0/shared /dev/mapper/shared--vg0-shared
lrwxrwxrwx 1 root root 7 Oct 23 16:35 /dev/mapper/shared--vg0-shared -> ../dm-0
lrwxrwxrwx 1 root root 7 Oct 23 16:35 /dev/shared-vg0/shared -> ../dm-0
ls -lah /dev/dm-0
brw-rw---- 1 root disk 253, 0 Oct 23 16:35 /dev/dm-0

This next step uses some command-line voodoo. It takes the output from gfs2_tool sb /dev/shared-vg0/shared uuid, parses out the UUID, converts it to lower-case and spits out a string that can be used in /etc/fstab. We'll run it twice; The first time to confirm that the output is what we expect and the second time to append it to /etc/fstab.
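As an aside, if you'd like to double-check the UUID with a more familiar tool, blkid should report the same value (this assumes the stock util-linux blkid and is only a sanity check; gfs2_tool remains the source we use below);

blkid -s UUID -o value /dev/shared-vg0/shared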

The gfs2 daemon can only work on GFS2 partitions that have been defined in /etc/fstab, so this is a required step on both nodes.

We use defaults,noatime,nodiratime instead of just defaults for performance reasons. Normally, every time a file or directory is accessed, its atime (or diratime) is updated, which requires a disk write, which requires an exclusive DLM lock, which is expensive. If you need to know when a file or directory was accessed, remove ,noatime,nodiratime.

echo `gfs2_tool sb /dev/shared-vg0/shared uuid | awk '/uuid =/ { print $4; }' | sed -e "s/\(.*\)/UUID=\L\1\E \/shared\t\tgfs2\tdefaults,noatime,nodiratime\t0 0/"`
UUID=162a80eb-59b3-08bd-5d69-740cbb60aa45 /shared gfs2 defaults,noatime,nodiratime 0 0

This looks good, so now re-run it but redirect the output to append to /etc/fstab. We'll confirm it worked by checking the status of the gfs2 daemon.

echo `gfs2_tool sb /dev/shared-vg0/shared uuid | awk '/uuid =/ { print $4; }' | sed -e "s/\(.*\)/UUID=\L\1\E \/shared\t\tgfs2\tdefaults,noatime,nodiratime\t0 0/"` >> /etc/fstab
/etc/init.d/gfs2 status
Configured GFS2 mountpoints: 
/shared
Active GFS2 mountpoints: 
/shared

Perfect, gfs2 can see the partition now! We're ready to set up our directories.

On an-node01

mkdir /shared/{definitions,provision,archive,files}

On both nodes, confirm that all of the new directories exist and are visible.

ls -lah /shared/
total 24K
drwxr-xr-x   6 root root 3.8K Dec 14 19:05 .
dr-xr-xr-x. 24 root root 4.0K Dec 14 18:44 ..
drwxr-xr-x   2 root root    0 Dec 14 19:05 archive
drwxr-xr-x   2 root root    0 Dec 14 19:05 definitions
drwxr-xr-x   2 root root    0 Dec 14 19:05 files
drwxr-xr-x   2 root root    0 Dec 14 19:05 provision

Wonderful!

As with drbd and clvmd, we don't want to have gfs2 start at boot as we're going to put it under the control of the cluster.

chkconfig gfs2 off
chkconfig --list gfs2
gfs2           	0:off	1:off	2:off	3:off	4:off	5:off	6:off

Renaming a GFS2 Partition

Warning: Be sure to unmount the GFS2 partition from all nodes prior to altering the cluster or filesystem names!

If you ever need to rename your cluster, you will need to update your GFS2 partition before you can remount it. Unmount the partition from all nodes and run:

gfs2_tool sb /dev/shared-vg0/shared table "new_cluster_name:shared"
You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] y

current lock table name = "an-cluster-A:shared"
new lock table name = "new_cluster_name:shared"
Done

Then you can change the cluster's name in cluster.conf and then remount the GFS2 partition.

You can use the same command, changing the GFS2 partition name, if you want to change the name of the filesystem instead of (or at the same time as) the cluster's name.
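For example, to keep the cluster name but rename the file system portion of the lock table; the new_fs_name value here is just a placeholder.

gfs2_tool sb /dev/shared-vg0/shared table "an-cluster-A:new_fs_name"

Remember that the <clustername> portion must still match the name in cluster.conf before the partition can be remounted.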

Stopping All Clustered Storage Components

Before we can put storage under the cluster's control, we need to make sure that the gfs2, clvmd and drbd daemons are stopped.

On both nodes, run;

/etc/init.d/gfs2 stop && /etc/init.d/clvmd stop && /etc/init.d/drbd stop
Unmounting GFS2 filesystem (/shared):                      [  OK  ]
Deactivating clustered VG(s):   0 logical volume(s) in volume group "an02-vg0" now active
  0 logical volume(s) in volume group "an01-vg0" now active
  0 logical volume(s) in volume group "shared-vg0" now active
                                                           [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]
Stopping all DRBD resources: .

Managing Storage In The Cluster

A little while back, we spoke about how the cluster is split into two components; cluster communication managed by cman and resource management provided by rgmanager. It's the latter that we will now begin to configure.

In the cluster.conf, the rgmanager component is contained within the <rm /> element tags. Within this element are three types of child elements. They are:

  • Fail-over Domains - <failoverdomains />;
    • These are optional constraints which allow you to control which nodes a service may run on, and under what circumstances. When not used, a service will be allowed to run on any node in the cluster without constraints or ordering.
  • Resources - <resources />;
    • Within this element, available resources are defined. Simply having a resource here will not put it under cluster control. Rather, it makes it available for use in <service /> elements.
  • Services - <service />;
    • This element contains one or more parallel or series child elements which are themselves references to <resources /> elements. When in parallel, the resources will start and stop at the same time. When in series, the resources start in order and stop in reverse order. We will also see a specialized type of service that uses the <vm /> element name, as you can probably guess, for creating virtual machine services.

We'll look at each of these components in more detail shortly.

A Note On Daemon Starting

There are four daemons we will be putting under cluster control;

  • drbd; Replicated storage.
  • clvmd; Clustered LVM.
  • gfs2; Mounts and Unmounts configured GFS2 partition.
  • libvirtd; Provides access to virsh and other libvirt tools. Needed for running our VMs.

The reason we do not want to start these daemons with the system is so that we can let the cluster do it. This way, should any fail, the cluster will detect the failure and fail the entire service tree. For example, let's say that drbd failed to start; rgmanager would fail the storage service and give up, rather than continue trying to start clvmd and the rest. With libvirtd being the last daemon, it will not be possible to start a VM unless the storage started successfully.

If we had left these daemons to start at boot, the failure of drbd would not affect the start-up of clvmd, which would then not find its PVs given that DRBD is down. Next, the system would try to start the gfs2 daemon, which would also fail as the LV backing the partition would not be available. Finally, the system would start libvirtd, which would allow virtual machines to start, only for them to be missing their "hard drives" as their backing LVs would also not be available. It would be a messy situation to clean up from.
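A quick way to confirm that all four daemons are disabled at boot on both nodes is a simple shell loop around the same chkconfig call we've been using;

for daemon in drbd clvmd gfs2 libvirtd; do chkconfig --list $daemon; done

Every run level should show off for each of the four daemons.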

Defining The Resources

Let's start by defining our clustered resources.

As stated before, the addition of these resources does not, in itself, put them under the cluster's management. Instead, it defines them, much like init.d scripts, so that they can then be used by one or more <service /> elements, as we will see shortly. For now, it is enough to know that, until a resource is defined, it cannot be used in the cluster.

Given that this is the first component of rgmanager being added to cluster.conf, we will be creating the parent <rm /> element here as well.

Let's take a look at the new section, then discuss the parts.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="8">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
        <totem rrp_mode="none" secauth="off"/>
        <rm>
                <resources>
                        <script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
                </resources>
        </rm>
</cluster>

First and foremost; Note that we've incremented the version to 8. As always, increment and then edit.

Let's focus on the new section;

	<rm>
		<resources>
			<script file="/etc/init.d/drbd" name="drbd"/>
			<script file="/etc/init.d/clvmd" name="clvmd"/>
			<script file="/etc/init.d/gfs2" name="gfs2"/>
			<script file="/etc/init.d/libvirtd" name="libvirtd"/>
		</resources>
	</rm>

The <resources>...</resources> element contains our four <script .../> resources. This is a particular type of resource which specifically handles the starting and stopping of init.d style scripts. That is, the scripts must exit with LSB-compliant codes. They must also react properly to being called with the sole argument start, stop or status.

There are many other types of resources which, with the exception of <vm .../>, we will not be looking at in this tutorial. Should you be interested in them, please look in /usr/share/cluster for the various scripts (executable files that end with .sh).
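For the curious, a simple listing will show what resource agents are available; the exact set depends on which packages are installed.

ls /usr/share/cluster/*.sh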

Each of our four <script ... /> resources has two attributes;

  • file="..."; The full path to the script to be managed.
  • name="..."; A unique name used to reference this resource later on in the <service /> elements.

Other resources are more involved, but the <script .../> resources are quite simple.

Creating Failover Domains

Fail-over domains are, at their most basic, a collection of one or more nodes in the cluster with a particular set of rules associated with them. Services can then be configured to operate within the context of a given fail-over domain. There are a few key options to be aware of.

Fail-over domains are optional and can be left out of the cluster, generally speaking. However, in our cluster, we will need them for our storage services, as we will later see, so please do not skip this step.

  • A fail-over domain can be unordered or prioritized.
    • When unordered, a service will start on any node in the domain. Should that node later fail, it will restart on another random node in the domain.
    • When prioritized, a service will start on the available node with the highest priority in the domain. Should that node later fail, the service will restart on the available node with the next highest priority.
  • A fail-over domain can be restricted or unrestricted.
    • When restricted, a service is only allowed to start on, or restart on, a node in the domain. When no domain member is available, the service will be stopped.
    • When unrestricted, a service will try to start on, or restart on, a node in the domain. However, when no domain members are available, the cluster will pick another available node at random to start the service on.
  • A fail-over domain can have a fail-back policy.
    • When a domain allows for fail-back and the domain is ordered, and a node with a higher priority (re)joins the cluster, services within the domain will migrate to that higher-priority node. This allows for automated restoration of services on a failed node when it rejoins the cluster.
    • When a domain does not allow for fail-back, but is unrestricted, fail-back of services that fell out of the domain will happen anyway. That is to say, nofailback="1" is ignored if a service was running on a node outside of the fail-over domain and a node within the domain joins the cluster. However, once the service is on a node within the domain, the service will not relocate to a higher-priority node should one join the cluster later.
    • When a domain does not allow for fail-back and is restricted, then fail-back of services will never occur.

What we need to do at this stage is to create something of a hack. Let me explain;

As discussed earlier, we need to start a set of local daemons on all nodes. These aren't really clustered resources, though, as they can only ever run on their host node. They will never be relocated or restarted elsewhere in the cluster and, as such, are not highly available. To work around this desire to "cluster the unclusterable", we're going to create a fail-over domain for each node in the cluster. Each of these domains will have only one of the cluster nodes as a member, and the domain will be restricted, unordered and have no fail-back. With this configuration, any service group using it will only ever run on the one node in the domain.

In the next step, we will create a service group, then replicate it once for each node in the cluster. The only difference will be the failoverdomain each is set to use. With our configuration of two nodes then, we will have two fail-over domains, one for each node, and we will define the clustered storage service twice, each one using one of the two fail-over domains.

Let's look at the complete updated cluster.conf, then we will focus closer on the new section.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="9">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
        <totem rrp_mode="none" secauth="off"/>
        <rm>
                <resources>
                        <script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
                </resources>
                <failoverdomains>
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
                        </failoverdomain>
                </failoverdomains>
        </rm>
</cluster>

As always, the version was incremented, this time to 9. We've also added the new <failoverdomains>...</failoverdomains> element. Let's take a closer look at this new element.

                <failoverdomains>
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
                        </failoverdomain>
                </failoverdomains>

The first thing to note is that there are two <failoverdomain...>...</failoverdomain> child elements.

  • The first has the name only_an01 and contains only the node an-node01 as a member.
  • The second is effectively identical, save that the domain's name is only_an02 and it contains only the node an-node02 as a member.

The <failoverdomain ...> element has four attributes;

  • The name="..." attribute sets the unique name of the domain which we will later use to bind a service to the domain.
  • The nofailback="1" attribute tells the cluster to never "fail back" any services in this domain. This seems redundant, given there is only one node, but combined with restricted="1" it ensures that services in this domain never migrate.
  • The ordered="0" attribute is also somewhat redundant, in that there is only one node defined in the domain, but I don't like to leave attributes undefined, so I set it here.
  • The restricted="1" attribute is key in that it tells the cluster to not try to restart services within this domain on any other nodes outside of the one defined in the fail-over domain.

Each of the <failoverdomain...> elements has a single <failoverdomainnode .../> child element. This is a very simple element which has, at this time, only one attribute;

  • name="..."; The name of the node to include in the fail-over domain. This name must match the corresponding <clusternode name="..." node name.

At this point, we're ready to finally create our clustered storage services.

Creating Clustered Storage Services

With the resources defined and the fail-over domains created, we can set about creating our services.

Generally speaking, services can have one or more resources within them. When two or more resources exist, they can be put into a dependency tree, used in parallel, or arranged in a combination of parallel and dependent resources.

When you create a service dependency tree, you put each dependent resource as a child element of its parent. The resources are then started in order, starting at the top of the tree and working down to the deepest child resource. If at any time one of the resources fails, the entire service will be declared failed and no attempt will be made to start any further child resources. Conversely, stopping the service will cause the deepest child resource to be stopped first, then the second deepest, and so on up to the top resource. This is exactly the behaviour we want, as we will see shortly.

When resources are defined in parallel, all defined resources will be started at the same time. Should any one of the resources fail to start, the entire service will be declared failed. Stopping the service will likewise cause a simultaneous call to stop all resources.

As before, let's take a look at the entire updated cluster.conf file, then we'll focus in on the new service section.

<?xml version="1.0"?>
<cluster name="an-cluster-A" config_version="10">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an01" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="1" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device name="ipmi_an02" action="reboot" />
                                </method>
                                <method name="pdu">
                                        <device name="pdu2" port="2" action="reboot" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="ipmi_an01" agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" passwd="secret" />
                <fencedevice name="ipmi_an02" agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" passwd="secret" />
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2" />
        </fencedevices>
        <fence_daemon post_join_delay="30" />
        <totem rrp_mode="none" secauth="off"/>
        <rm>
                <resources>
                        <script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
                </resources>
                <failoverdomains>
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
                        </failoverdomain>
                </failoverdomains>
                <service name="storage_an01" autostart="1" domain="only_an01" exclusive="0" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
                <service name="storage_an02" autostart="1" domain="only_an02" exclusive="0" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
        </rm>
</cluster>

With the version now at 10, we have added two <service...>...</service> elements, each containing four <script ...> type resources in a service tree configuration. Let's take a closer look.

		<service name="storage_an01" autostart="1" domain="only_an01" exclusive="0" recovery="restart">
			<script ref="drbd">
				<script ref="clvmd">
					<script ref="gfs2">
						<script ref="libvirtd"/>
					</script>
				</script>
			</script>
		</service>
		<service name="storage_an02" autostart="1" domain="only_an02" exclusive="0" recovery="restart">
			<script ref="drbd">
				<script ref="clvmd">
					<script ref="gfs2">
						<script ref="libvirtd"/>
					</script>
				</script>
			</script>
		</service>

The <service ...>...</service> elements have five attributes each;

  • The name="..." attribute is a unique name that will be used to identify the service, as we will see later.
  • The autostart="1" attribute tells the cluster that, when it starts, it should automatically start this service.
  • The domain="..." attribute tells the cluster which fail-over domain this service must run within. The two otherwise identical services each point to a different fail-over domain, as we discussed in the previous section.
  • The exclusive="0" attribute tells the cluster that a node running this service is allowed to have other services running as well.
  • The recovery="restart" attribute sets the service recovery policy. As the name implies, the cluster will try to restart this service should it fail. Should the service fail multiple times in a row, it will be disabled. The exact number of failures allowed before disabling is configurable using the optional max_restarts and restart_expire_time attributes, which are not covered here.
Warning: It is a fairly common mistake to interpret exclusive to mean that a service is only allowed to run on one node at a time. This is not the case; please do not use this attribute incorrectly.

Within each of the two <service ...>...</service> elements are four <script...> type resources. These are configured as a service tree in the order;

  • drbd -> clvmd -> gfs2 -> libvirtd.

Each of these <script ...> elements has just one attribute; ref="..." which points to a corresponding script resource.

The logic for this particular resource tree is;

  • DRBD needs to start so that the bare clustered storage devices become available.
  • Clustered LVM must next start so that the logical volumes used by GFS2 and our VMs become available.
  • The GFS2 partition contains the XML definition files needed to start our virtual machines.
  • Finally, libvirtd must be running for the virtual machines to be able to run. By putting this daemon in the resource tree, we can ensure that no attempt to start a VM will succeed until all of the clustered storage stack is available.

From the other direction, we need the stop order to be organized in the reverse order.

  • Stopping libvirtd would cause any remaining running VMs to stop. If a VM is blocking, it will prevent libvirtd from stopping and, thus, delay any of our other clustered storage resources from attempting to stop.
  • We need the GFS2 partition to unmount after the VMs go down and before clustered LVM stops.
  • With all VMs and the GFS2 partition stopped, we can safely say that all LVs are no longer in use and thus clvmd can stop.
  • With Clustered LVM now stopped, nothing should be using our DRBD resources any more, so we can safely stop them, too.

All in all, it's a surprisingly simple and effective configuration.

Validating And Pushing The Changes

We've made a big change, so it's all the more important that we validate the config before proceeding.

ccs_config_validate
Configuration validates

We need to now tell the cluster to use the new configuration file. Unlike last time, we won't use rsync. Now that the cluster is up and running, we can use it to push out the updated configuration file using cman_tool. This is the first time we've used the cluster to push out an updated cluster.conf file, so we will have to enter the password we set earlier for the ricci user on both nodes.

cman_tool version -r
You have not authenticated to the ricci daemon on an-node01.alteeve.ca
Password:
You have not authenticated to the ricci daemon on an-node02.alteeve.ca
Password:

If you were watching syslog, you will have seen entries like the ones below.

Dec 14 20:39:08 an-node01 modcluster: Updating cluster.conf
Dec 14 20:39:12 an-node01 corosync[2360]:   [QUORUM] Members[2]: 1 2

Now we can confirm that both nodes are using the new configuration by re-running the cman_tool version command, but without the -r switch.

On both;

cman_tool version
6.2.0 config 10

Checking The Cluster's Status

Now let's look at a new tool; clustat, the cluster status tool. We'll be using clustat extensively from here on out to monitor the status of the cluster members and managed services. It does not manage the cluster in any way; it is simply a status tool. We'll see how to actually manage services a little later on.

Here is what it should look like when run from an-node01.

clustat
Cluster Status for an-cluster-A @ Wed Dec 14 20:45:04 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local
 an-node02.alteeve.ca                       2 Online

At this point, we're only running the foundation of the cluster, so we can only see which nodes are in the cluster. We've added resources to the cluster configuration though, so it's time to start the resource layer as well, which is managed by the rgmanager daemon.

At this time, we're still starting the cluster manually after each node boots, so we're going to make sure that rgmanager is disabled at boot.

chkconfig rgmanager off
chkconfig --list rgmanager
rgmanager      	0:off	1:off	2:off	3:off	4:off	5:off	6:off

Now let's start it.

Note: We've configured the storage services to start automatically. When we start rgmanager now, it will start the storage resources, including DRBD. In turn, DRBD will block for up to five minutes waiting for its peer. This will cause the first node you start rgmanager on to appear to hang until the other node's rgmanager has started DRBD as well.
/etc/init.d/rgmanager start
Starting Cluster Service Manager:                          [  OK  ]

Now let's run clustat again, and see what's new.

clustat
Cluster Status for an-cluster-A @ Wed Dec 14 20:52:11 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started

What we see are two sections; the top section shows the cluster members and the lower part covers the managed services.

We can see that both members, an-node01.alteeve.ca and an-node02.alteeve.ca are Online, meaning that cman is running and that they've joined the cluster. It also shows us that both members are running rgmanager. You will always see Local beside the name of the node you ran the actual clustat command from.

Under the services, you can see the two new services we created, shown with the service: prefix. We can see that each service is started, meaning that all four of its resources are up and running properly, and we can see which node each service is running on.

Notice that the two storage services are running, despite our never having started them directly? That is because the rgmanager service was started earlier. When we pushed out the updated configuration, rgmanager saw the two new storage services had autostart="1" and started them. If you check your storage services now, you will see that they are all online.

DRBD;

/etc/init.d/drbd status
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
m:res  cs         ro               ds                 p  mounted  fstype
0:r0   Connected  Primary/Primary  UpToDate/UpToDate  C
1:r1   Connected  Primary/Primary  UpToDate/UpToDate  C
2:r2   Connected  Primary/Primary  UpToDate/UpToDate  C

Clustered LVM;

pvscan; vgscan; lvscan
  PV /dev/drbd2   VG an02-vg0     lvm2 [201.25 GiB / 201.25 GiB free]
  PV /dev/drbd1   VG an01-vg0     lvm2 [201.62 GiB / 201.62 GiB free]
  PV /dev/drbd0   VG shared-vg0   lvm2 [18.61 GiB / 0    free]
  Total: 3 [421.48 GiB] / in use: 3 [421.48 GiB] / in no VG: 0 [0   ]
  Reading all physical volumes.  This may take a while...
  Found volume group "an02-vg0" using metadata type lvm2
  Found volume group "an01-vg0" using metadata type lvm2
  Found volume group "shared-vg0" using metadata type lvm2
  ACTIVE            '/dev/shared-vg0/shared' [18.61 GiB] inherit

GFS2;

/etc/init.d/gfs2 status
Configured GFS2 mountpoints: 
/shared
Active GFS2 mountpoints: 
/shared

Nice, eh?

Managing Cluster Resources

Managing services in the cluster is done with a fairly simple tool called clusvcadm.

The main commands we're going to look at shortly are:

  • clusvcadm -e <service> -m <node>: Enable the <service> on the specified <node>. When a <node> is not specified, the local node where the command was run is assumed.
  • clusvcadm -d <service>: Disable the <service>.

There are other ways to use clusvcadm which we will look at after the virtual servers are provisioned and under cluster control.

Stopping Clustered Storage - A Preview To Cold-Stopping The Cluster

To stop the storage services, we'll use the rgmanager command line tool clusvcadm, the cluster service administrator. Specifically, we'll use its -d switch, which tells rgmanager to disable the service.

Note: Services with the service: prefix can be called with their name alone. As we will see later, other services will need to have the service type prefix included.
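
For example, the two calls below (shown only to illustrate the naming; we'll actually stop the storage services in a moment, and the vm: services don't exist yet) would disable a service: type and a vm: type service respectively;

clusvcadm -d storage_an01
clusvcadm -d vm:vm0001-dev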

As always, confirm the current state of affairs before starting. On both nodes, run clustat to confirm that the storage services are up.

clustat
Cluster Status for an-cluster-A @ Tue Dec 20 20:37:42 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started

They are, so now let's gracefully shut them down.

On an-node01, run:

clusvcadm -d storage_an01
Local machine disabling service:storage_an01...Success

If we now run clustat from either node, we should see this;

clustat
Cluster Status for an-cluster-A @ Tue Dec 20 20:38:28 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           (an-node01.alteeve.ca)        disabled      
 service:storage_an02           an-node02.alteeve.ca          started

Notice how service:storage_an01 is now in the disabled state? If you check the status of drbd now on an-node02 you will see that an-node01 is indeed down.

/etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
m:res  cs            ro               ds                 p  mounted  fstype
0:r0   WFConnection  Primary/Unknown  UpToDate/Outdated  C
1:r1   WFConnection  Primary/Unknown  UpToDate/Outdated  C
2:r2   WFConnection  Primary/Unknown  UpToDate/Outdated  C

If you want to shut down the entire cluster, you will need to stop the storage_an02 service as well. For fun, let's do this, but let's stop the service from an-node01;

clusvcadm -d storage_an02
Local machine disabling service:storage_an02...Success

Now on both nodes, we should see this from clustat;

clustat
Cluster Status for an-cluster-A @ Tue Dec 20 20:39:55 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           (an-node01.alteeve.ca)        disabled      
 service:storage_an02           (an-node02.alteeve.ca)        disabled
Warning: If you are not doing a cold shut-down of the cluster, you will want to skip this step and just stop rgmanager. The reason is that the autostart="1" value only gets evaluated when quorum is gained. If you disable the storage_anXX service and then reboot the node, the cluster has not lost quorum. Thus, when the node rejoins the cluster, the storage service will not automatically start.

We can now, if we wanted to, stop the rgmanager and cman daemons. This is, in fact, how we will cold-stop the cluster from now on.
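
A rough sketch of that cold-stop sequence, run on each node once its storage service has been disabled, would be (stopping rgmanager before cman matters; the resource layer has to be down before the node leaves the cluster);

/etc/init.d/rgmanager stop
/etc/init.d/cman stop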

We'll cover cold stopping the cluster after we finish provisioning VMs.

Starting Clustered Storage

Normally from now on, the clustered storage will start automatically. However, it's a good exercise to look at how to manually start them, just in case.

The main difference from stopping the service is that we swap the -d switch for the -e, enable, switch. We will also add the target cluster member name using the -m switch. We didn't need to use the member switch while stopping because the cluster could tell where the service was running and, thus, which member to contact to stop the service.

Should you omit the member name, the cluster will try to use the local node as the target member. Note though that the service will start on the node the command was issued on, regardless of the fail-over domain's ordered policy. That is to say, when the member option is not specified, a service will not start on another node in the cluster, even if the fail-over configuration prefers another node.

Note: The storage services need to start at about the same time on both nodes. This is because the first storage service started will hang when it tries to start drbd until either the other node is up or until it times out. For this reason, be sure to have two terminal windows open so that you can make the next two calls simultaneously.

On an-node01, run;

clusvcadm -e storage_an01 -m an-node01.alteeve.ca
Member an-node01.alteeve.ca trying to enable service:storage_an01...Success
service:storage_an01 is now running on an-node01.alteeve.ca

On an-node02, run;

clusvcadm -e storage_an02 -m an-node02.alteeve.ca
Member an-node02.alteeve.ca trying to enable service:storage_an02...Success
service:storage_an02 is now running on an-node02.alteeve.ca

Now clustat on either node should show the storage services running again.

clustat
Cluster Status for an-cluster-A @ Tue Dec 20 21:09:19 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started

A Note On Resource Management With DRBD

When the cluster starts for the first time, where neither node's DRBD storage was up, the first node to start will wait for /etc/drbd.d/global_common.conf's wfc-timeout seconds (300 in our case) for the second node to start. For this reason, we want to ensure that we enable the storage resources more or less at the same time and from two different terminals. The reason for two terminals is that the clusvcadm -e ... command won't return until all resources have started, so you need the second terminal window to start the other node's clustered storage service while the first one waits.
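
For reference, the relevant piece of /etc/drbd.d/global_common.conf is the startup section. A minimal sketch (your actual file, built earlier in this tutorial, contains more than this) looks like;

startup {
        # How long, in seconds, to wait for the peer on a fresh start.
        wfc-timeout     300;
}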

If the clustered storage service ever fails, look in syslog's /var/log/messages for a split-brain error. Look for a message like:

Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm initial-split-brain minor-2
Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm initial-split-brain minor-2 exit code 0 (0x0)
Mar 29 20:24:37 an-node01 kernel: block drbd2: Split-Brain detected but unresolved, dropping connection!
Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm split-brain minor-2
Mar 29 20:24:37 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm split-brain minor-2 exit code 0 (0x0)
Mar 29 20:24:37 an-node01 kernel: block drbd2: conn( WFReportParams -> Disconnecting )

With the fencing hook into the cluster, this should be a very hard problem to run into. If you do though, Linbit has the authoritative guide to recover from this situation.
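
For reference only, and very roughly (read Linbit's guide before acting on this; it assumes resource r0 and that an-node02 is the node whose changes you are willing to throw away), the recovery with the DRBD 8.3 tools looks like;

# On the node whose data will be discarded, after stopping anything using its storage:
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0

# On the surviving node, if its resource is sitting in StandAlone:
drbdadm connect r0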

Provisioning Virtual Machines

Now we're getting to the purpose of our cluster; Provisioning virtual machines!

We have two steps left;

  • Provision our VMs.
  • Add the VMs to rgmanager.

"Provisioning" a virtual machine simple means to create it; Assign a collection of emulated hardware, connected to physical devices, to a given virtual machine and begin the process of installing the operating system on it. This tutorial is more about clustering than it is about virtual machine administration, so some experience with managing virtual machines has to be assumed. If you need to brush up, here are some resources;

When you feel comfortable, proceed.

Before We Begin - Setting Up Our Workstation

The virtual machines are, for obvious reasons, headless. That is, they have no real video card into which we can plug a monitor and watch the progress of the install. This would, left unresolved, make it pretty hard to install the operating systems as there is simply no network in the early stages of most operating system installations.

Alongside libvirtd is a graphical program called virt-manager, which is available on most modern Linux distributions. This application makes it very easy to connect to our virtual machines, regardless of their network state.

How you install this will depend on your workstation.

On RPM-based systems, try:

yum install virt-manager

On deb based systems, try:

apt-get install virt-manager

On SUSE-based systems, try;

zypper install virt-manager

Once it is installed, you need to determine whether your workstation is on the IFN or BCN. I've got my laptop on the BCN, so I will connect to the nodes using just their short host names. If you're on the same IFN as the nodes, you will need to append .ifn to the host names.

Initial installation of virt-manager.
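
If you want to confirm that the SSH-based connection will work before opening virt-manager, and assuming virsh is installed on your workstation, you can test it from a terminal;

virsh -c qemu+ssh://root@an-node01/system list --all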

To connect to the cluster nodes;

  1. Click on File -> Add Connection....
  2. Make sure that Hypervisor is set to QEMU/KVM.
  3. Click to check Connect to remote host.
  4. Make sure that Method is set to SSH.
  5. Make sure that Username is set to root.
  6. Enter the Hostname using the proper entry from /etc/hosts (ie: an-node01 or an-node01.ifn)
  7. Click on the button labelled Connect.
  8. Repeat these steps for the other node.
New connection window.

Once your two nodes have been added to virt-manager, you should see both nodes as connected, but no VMs will be shown as we've not provisioned any yet.

Two nodes added to virt-manager.

We'll come back to virt-manager shortly.

Provision Planning

Before we can start creating virtual machines, we need to take stock of what resources we have available and how we want to divvy them out to the VMs.

In my cluster, I've got 200 GiB available on each of my two nodes.

vgdisplay |grep -i -e free -e "vg name"
  VG Name               an02-vg0
  Free  PE / Size       51521 / 201.25 GiB
  VG Name               an01-vg0
  Free  PE / Size       51615 / 201.62 GiB
  VG Name               shared-vg0
  Free  PE / Size       0 / 0

I know I have 8 GiB of memory, but I have to slice off a certain amount of that for the host OS. I've got my nodes sitting about where they will be normally, so I can check how much memory is in use fairly easily.

cat /proc/meminfo |grep -e MemTotal -e MemFree
MemTotal:        8050312 kB
MemFree:         7432288 kB

I'm sitting at about 604 MiB used (8,050,312 KiB - 7,432,288 KiB == 618,024 KiB / 1,024 == 603.54 MiB). I think I can safely operate within 1 GiB, leaving me 7 GiB of RAM to allocate to VMs.

Next up, I need to confirm how many CPU cores I have available.

cat /proc/cpuinfo |grep processor
processor	: 0
processor	: 1
processor	: 2
processor	: 3

I've got four, and I like to dedicate the first one to the host OS, so I've got three to allocate to my VMs.

On the network front, I know I've got two bridges, one to the IFN and one to the BCN.

So let's summarize:

  • 400 GiB of space, 200 GiB per DRBD resource.
  • 7 GiB of RAM.
  • 3 CPU cores (can over-allocate).
  • 1 network bridge, vbr2.

With this list in mind, we can now start planning out the VMs.

The network can share the same subnet as the IFN if you wish, but I prefer to isolate my VMs from the IFN using a different subnet, 10.254.0.0/16. This is, admittedly, "security by obscurity" and in no way is it a replacement for proper isolation. In production, you will want to set up firewalls on your nodes to prevent access from virtual machines.
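
As a minimal sketch only (this assumes the 10.254.0.0/16 VM subnet above and the vbr2 bridge; adapt it to your own policy), a single iptables rule on each node would drop VM-originated traffic aimed at the node itself;

iptables -I INPUT -i vbr2 -s 10.254.0.0/16 -j DROP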

With that said, here is what we will install now. Obviously, you will have other needs and goals. Mine is an admittedly artificial network.

  • A development server. This would be used for testing, so it will have more modest resources.
  • A web server, which will mainly use a DB server, so will need CPU and RAM, but not much disk.
  • A database server.
  • A windows server. I don't exactly have a use for it, except to show how to install a Windows VM for those who do need it.

Now to divvy up the resources;

VM               Name        Primary Host  Disk     RAM    CPU      IFN IP         OS
Dev Server       vm0001-dev  an-node01     150 GiB  1 GiB  2 cores  10.254.0.1/16  CentOS 6
Web Server       vm0002-web  an-node01     50 GiB   2 GiB  2 cores  10.254.0.2/16  CentOS 6
Database Server  vm0003-db   an-node02     100 GiB  2 GiB  2 cores  10.254.0.3/16  CentOS 6
Windows Server   vm0004-ms   an-node02     100 GiB  2 GiB  2 cores  10.254.0.4/16  Windows Server 2008 R2 64-bit

Notice that we've over-allocated the CPU cores? This is ok. We're going to restrict the VMs to CPU cores number 1 through 3, leaving core number 0 for the host OS. When all of the VMs are running on one node, the hypervisor's scheduler will handle shuffling jobs from the VMs' cores to the real cores that are least loaded at a given time.
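
How you pin the VMs to cores 1 through 3 is up to you. As one illustration only (this is not part of the virt-install calls used later in this tutorial), once a guest like vm0001-dev is running, you can pin its first virtual CPU to host cores 1 through 3 with virsh;

virsh vcpupin vm0001-dev 0 1-3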

As for the RAM though, we cannot use more than we have. We're going to leave 1 GiB for the host, so we'll divvy the remaining 7 GiB between the VMs. Remember, we have to plan for when all four VMs will run on just one node.

A Note on VM Configuration

It would be of questionable value to divert into covering the setup of each VM. It will be up to you, the reader, to set up each VM however you like.

Provisioning vm0001-dev

Note: We're going to spend a lot more time on this first VM, so bear with me here, even if you aren't interested in creating a VM like this.

Before we can provision, we need to gather whatever install source we'll need for the VM. This can be a simple ISO file, as we'll see on the windows install later, or it can be files on a web server, which we'll use here. We'll also need to create the "hard drive" for the VM, which will be a new LV. Finally, we'll craft the virt-install command which will begin the actual OS install.

This being a Linux machine, we can provision it over the network. Conveniently, I've got a PXE server set up with the CentOS install files available on my local network at http://10.255.255.254/c6/x86_64/img/. You don't need a full PXE server; mounting the install ISO and pointing a web server at the mounted directory would work just fine. I'm also going to further customize my install by using a kickstart file which, effectively, pre-answers the installation questions so that the install is fully automated.
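
If you don't have a PXE server, a quick equivalent (a rough sketch, assuming a stock Apache install on the machine at 10.255.255.254 with its default /var/www/html document root, and whatever CentOS 6 install DVD ISO you have on hand) is to loop-mount the ISO under the web server's document root;

mkdir -p /var/www/html/c6/x86_64/img
mount -o loop /path/to/your-centos-6-dvd.iso /var/www/html/c6/x86_64/img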

So, let's create the new LV. I know that this machine will primarily run on an-node01 and that it will be 150 GiB. I personally always name the LVs as vmXXXX-Y, where vmXXXX matches the VM's name and Y is a simple integer. You are obviously free to use whatever makes the most sense to you.

Creating vm0001-dev's Storage

With that, the lvcreate call is;

On an-node01, run;

lvcreate -L 150G -n vm0001-1 /dev/an01-vg0
  Logical volume "vm0001-1" created

Creating vm0001-dev's virt-install Call

Now with the storage created, we can craft the virt-install command. I like to put this into a file under the /shared/provision/ directory for future reference. Let's take a look at the command, then we'll discuss what the switches are for.

touch /shared/provision/vm0001-dev.sh
chmod 755 /shared/provision/vm0001-dev.sh 
vim /shared/provision/vm0001-dev.sh
virt-install --connect qemu:///system \
  --name vm0001-dev \
  --ram 1024 \
  --arch x86_64 \
  --vcpus 1 \
  --location http://10.255.255.254/c6/x86_64/img/ \
  --extra-args "ks=http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks" \
  --os-type linux \
  --os-variant rhel6 \
  --disk path=/dev/an01-vg0/vm0001-1 \
  --network bridge=vbr2 \
  --vnc
Note: Don't use tabs to indent the lines.

Let's break it down;

  • --connect qemu:///system

This tells virt-install to use the QEMU hardware emulator (as opposed to Xen) and to install the VM onto the local system.

  • --name vm0001-dev

This sets the name of the VM. It is the name we will use in the cluster configuration and whenever we use the libvirtd tools, like virsh.

  • --ram 1024

This sets the amount of RAM, in MiB, to allocate to this VM. Here, we're allocating 1 GiB (1,024 MiB).

  • --arch x86_64

This sets the emulated CPU's architecture to 64-bit. This can be used even when you plan to install a 32-bit OS, but not the other way around, of course.

  • --vcpus 1

This sets the number of CPU cores to allocate to this VM. Here, we're setting just one.

  • --location http://10.255.255.254/c6/x86_64/img/

This tells virt-install to pull the installation files from the URL specified.

  • --extra-args "ks=http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks"

This is an optional command used to pass the install kernel arguments. Here, I'm using it to tell the kernel to grab the specified kickstart file for use during the installation.

Note: If you want to copy the kickstart script used in this tutorial, you can find it here.
  • --os-type linux

This broadly sets hardware emulation for optimal use with Linux-based virtual machines.

  • --os-variant rhel6

This further refines tweaks to the hardware emulation to maximize performance for RHEL6 (and derivative) installs.

  • --disk path=/dev/an01-vg0/vm0001-1

This tells the installer to use the LV we created earlier as the backing storage device for the new virtual machine.

  • --network bridge=vbr2

This tells the installer to create a network card in the VM and to then connect it to the vbr2 bridge, thus connecting the VM to the IFN. Optionally, you could add the ,model=e1000 option to tell the emulator to mimic an Intel e1000 hardware NIC. The default is to use the virtio virtualized network card. If you have two or more bridges, you can repeat the --network switch as many times as you need.

  • --vnc

This tells virt-install to create a VNC server on the VM and, if possible, immediately connect to the just-provisioned VM. With a minimal install on the nodes, the automatically spawned client will fail. This is fine; I just use virt-manager from my workstation.

Note: If you close the initial VNC window and want to reconnect to the VM, you can simply open up virt-manager, connect to the an-node01 host if needed, and double-click on the vm0001-dev entry. This will effectively "plug a monitor into the VM".

Initializing vm0001-dev's Install

Well, time to start the install!

On an-node01, run;

/shared/provision/vm0001-dev.sh
Starting install...
Retrieving file .treeinfo...                             |  676 B     00:00 ... 
Retrieving file vmlinuz...                               | 7.5 MB     00:00 ... 
Retrieving file initrd.img...                            |  59 MB     00:02 ... 
Creating domain...                                       |    0 B     00:00     
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. You can reconnect to 
the console to complete the installation process.

And it's off!

Initial provision of vm0001-dev.

Progressing nicely.

Installation of vm0001-dev proceeding as expected.

And done! Note that, depending on your kickstart file, it may have automatically rebooted or you may need to reboot manually.

Note: I've found that there are occasions where the VM will power off instead of rebooting. With virt-manager, you can click to select the new VM and then press the "play" button to boot the VM manually.
Installation of vm0001-dev complete.

Defining vm0001-dev On an-node02

We can use virsh to see that the new virtual machine exists and what state it is in. Note that I've gotten into the habit of using --all to get around virsh's default behaviour of hiding VMs that are off.

On an-node01;

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0001-dev           running

On an-node02;

virsh list --all
 Id Name                 State
----------------------------------

As we see, the new vm0001-dev is only known to an-node01. This is, in and of itself, just fine.

We're going to need to put the virtual machine's XML definition file in a common place accessible on both nodes. This could be matching but separate directories on either node, or it can be a common shared location. As we've got the cluster's /shared GFS2 partition, we're going to use the /shared/definitions directory we created earlier. This avoids the need to remember to keep two copies of the file in sync across both nodes.

To backup the VM's configuration, we'll again use virsh, but this time with the dumpxml command.

On an-node01;

virsh dumpxml vm0001-dev > /shared/definitions/vm0001-dev.xml
cat /shared/definitions/vm0001-dev.xml
<domain type='kvm' id='2'>
  <name>vm0001-dev</name>
  <uuid>2512b2dd-a1a8-f990-2a0d-6c41968ab3f8</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='network'/>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an01-vg0/vm0001-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:9b:3c:f7'/>
      <source bridge='vbr2'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

There we go; That is the emulated hardware on which your virtual machine exists. Pretty neat, eh?

I like to keep all of my VMs defined on all of my nodes. This is entirely optional, as the cluster will define the VM on a target node when needed. It is, though, a good chance to examine how this is done manually.

On an-node02;

virsh define /shared/definitions/vm0001-dev.xml
Domain vm0001-dev defined from /shared/definitions/vm0001-dev.xml

We can confirm that it now exists by re-running virsh list --all.

virsh list --all
 Id Name                 State
----------------------------------
  - vm0001-dev           shut off

You should also now be able to see vm0001-dev under an-node02 in your virt-manager window. It will be listed as shutoff, which is expected. Do not try to turn it on while it's running on the other node!

Provisioning vm0002-web

This installation will be pretty much the same as it was for vm0001-dev, so we'll look mainly at the differences.

Creating vm0002-web's Storage

We'll use lvcreate again, but this time, instead of specifying an explicit size, we'll use a percentage of the remaining free space. Note that the -L switch changes to -l;

On an-node01, run;

lvcreate -l 100%FREE -n vm0002-1 /dev/an01-vg0
  Logical volume "vm0002-1" created

Creating vm0002-web's virt-install Call

The virt-install command will be quite similar to the previous one.

touch /shared/provision/vm0002-web.sh
chmod 755 /shared/provision/vm0002-web.sh 
vim /shared/provision/vm0002-web.sh
virt-install --connect qemu:///system \
  --name vm0002-web \
  --ram 2048 \
  --arch x86_64 \
  --vcpus 2 \
  --location http://10.255.255.254/c6/x86_64/img/ \
  --extra-args "ks=http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks" \
  --os-type linux \
  --os-variant rhel6 \
  --disk path=/dev/an01-vg0/vm0002-1 \
  --network bridge=vbr2 \
  --vnc

Let's look at the differences;

  • --name vm0002-web; This sets the new name of the VM.
  • --ram 2048; This doubles the amount of RAM to 2048 MiB.
  • --vcpus 2; This sets the number of CPU cores to two.
  • --disk path=/dev/an01-vg0/vm0002-1; The path to the new LV is set.

Note that the same kickstart file from before is used. This is fine as it doesn't specify a specific IP address and it is smart enough to adapt to the new virtual disk size.

Initializing vm0002-web's Install

Well, time to start the install!

On an-node01, run;

/shared/provision/vm0002-web.sh
Starting install...
Retrieving file .treeinfo...                             |  676 B     00:00 ... 
Retrieving file vmlinuz...                               | 7.5 MB     00:00 ... 
Retrieving file initrd.img...                            |  59 MB     00:02 ... 
Creating domain...                                       |    0 B     00:00     
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. You can reconnect to 
the console to complete the installation process.

The install should proceed more or less the same as it did for vm0001-dev.

Defining vm0002-web On an-node02

We can use virsh to see that the new virtual machine exists and what state it is in. Note that I've gotten into the habit of using --all to get around virsh's default behaviour of hiding VMs that are off.

On an-node01;

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0001-dev           running
  4 vm0002-web           running

On an-node02;

virsh list --all
 Id Name                 State
----------------------------------
  - vm0001-dev           shut off

As before, the new vm0002-web is only known to an-node01.

On an-node01;

virsh dumpxml vm0002-web > /shared/definitions/vm0002-web.xml
cat /shared/definitions/vm0002-web.xml
<domain type='kvm' id='4'>
  <name>vm0002-web</name>
  <uuid>02f967ab-103f-c276-c40f-9eaa47339df4</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an01-vg0/vm0002-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:65:39:60'/>
      <source bridge='vbr2'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

There we go; That is the emulated hardware on which your virtual machine exists. Pretty neat, eh?

I like to keep all of my VMs defined on all of my nodes. This is entirely optional, as the cluster will define the VM on a target node when needed. It is, though, a good chance to examine how this is done manually.

On an-node02;

virsh define /shared/definitions/vm0002-web.xml
Domain vm0002-web defined from /shared/definitions/vm0002-web.xml

We can confirm that it now exists by re-running virsh list --all.

virsh list --all
 Id Name                 State
----------------------------------
  - vm0001-dev           shut off
  - vm0002-web           shut off

Provisioning vm0003-db

This installation will, again, be pretty much the same as it was for vm0001-dev and vm0002-web, so we'll again look mainly at the differences.

Creating vm0003-db's Storage

We'll use lvcreate again and, as this is the first LV on the an02-vg0 VG, we'll specify an explicit size again.

On an-node01, run;

lvcreate -L 100G -n vm0003-1 /dev/an02-vg0
  Logical volume "vm0003-1" created

Creating vm0003-db's virt-install Call

The virt-install command will be quite similar to the previous one.

touch /shared/provision/vm0003-db.sh
chmod 755 /shared/provision/vm0003-db.sh 
vim /shared/provision/vm0003-db.sh
virt-install --connect qemu:///system \
  --name vm0003-db \
  --ram 2048 \
  --arch x86_64 \
  --vcpus 2 \
  --location http://10.255.255.254/c6/x86_64/img/ \
  --extra-args "ks=http://10.255.255.254/c6/x86_64/ks/c6_minimal.ks" \
  --os-type linux \
  --os-variant rhel6 \
  --disk path=/dev/an02-vg0/vm0003-1 \
  --network bridge=vbr2 \
  --vnc

Let's look at the differences;

  • --name vm0003-db; This sets the new name of the VM.
  • --disk path=/dev/an02-vg0/vm0003-1; The path to the new LV is set. Note that the VG has changed, as this VM will normally run on an-node02.

Initializing vm0003-db's Install

This time we're going to provision the new VM on an-node02, as that is where it will live normally.

On an-node02, run;

/shared/provision/vm0003-db.sh
Starting install...
Retrieving file .treeinfo...                             |  676 B     00:00 ... 
Retrieving file vmlinuz...                               | 7.5 MB     00:00 ... 
Retrieving file initrd.img...                            |  59 MB     00:02 ... 
Creating domain...                                       |    0 B     00:00     
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. You can reconnect to 
the console to complete the installation process.

The install should proceed more or less the same as it did for vm0001-dev and vm0002-web.

Defining vm0003-db On an-node01

We can use virsh to see that the new virtual machine exists and what state it is in. Note that I've gotten into the habit of using --all to get around virsh's default behaviour of hiding VMs that are off.

On an-node02;

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0003-db            running
  - vm0001-dev           shut off
  - vm0002-web           shut off

On an-node01;

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0001-dev           running
  4 vm0002-web           running

To backup the VM's configuration, we'll again use virsh, but this time with the dumpxml command.

On an-node02;

virsh dumpxml vm0003-db > /shared/definitions/vm0003-db.xml
cat /shared/definitions/vm0003-db.xml
<domain type='kvm' id='2'>
  <name>vm0003-db</name>
  <uuid>a7018001-b433-b739-bbd9-d4d3285f0a72</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an02-vg0/vm0003-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:44:83:ec'/>
      <source bridge='vbr2'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

On an-node01;

virsh define /shared/definitions/vm0003-db.xml
Domain vm0003-db defined from /shared/definitions/vm0003-db.xml

We can confirm that it now exists by re-running virsh list --all.

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0001-dev           running
  4 vm0002-web           running
  - vm0003-db            shut off

Provisioning vm0004-ms

Now for something a little different!

This will be the Windows 2008 R2 virtual machine. The biggest difference this time will be that we're going to install from the ISO file rather than from a web-accessible store.

Another difference is that we're going to specify what kind of storage bus to use with this VM. We'll be using a special, virtualized bus called virtio which requires that the drivers be available to the OS at install time. These drivers will, in turn, be made available to the installer as a virtual floppy disk. It will make for quite the interesting virt-install call, as we'll see.

Preparing vm0004-ms's Storage

As before, we need to create the backing storage LV before we can provision the machine. As we planned, this will be a 100 GiB partition and will be on the an02-vg0 VG. Seeing as this LV will use up the rest of the free space in the VG, we'll again use the lvcreate -l 100%FREE instead of -L 100G as sometimes the numbers don't work out to be exactly the size we intend.

On an-node02, run;

lvcreate -l 100%FREE -n vm0004-1 /dev/an02-vg0
  Logical volume "vm0004-1" created

Before we proceed, we now need to put a copy of the install media, the OS's ISO and the virtual floppy disk, somewhere that the installer can access. I like to put files like this into the /shared/files/ directory we created earlier. How you put them there will be an exercise for the reader.

If you do not have a copy of Microsoft's server operating system, you can download a 30-day free trial here;

The driver for the virtio bus can be found from Red Hat here. Note that there is an ISO and a vfd (virtual floppy disk) file. You can use the ISO and mount it as a second CD-ROM if you wish. This tutorial will use the virtual floppy disk to show how floppy images can be used in VMs:

Note: The vfd no longer seems to exist upstream. As of Sep. 30, 2012, the latest available version is virtio-win-0.1-30.iso, which is an ISO (cd-rom) image. To use it, replace the line;

--disk path=/shared/files/virtio-win-1.1.16.vfd,device=floppy \

with;

--disk path=/shared/files/virtio-win-0.1-30.iso,device=cdrom \

For those wishing to use the floppy image:

Creating vm0004-ms's virt-install Call

Let's look at the virt-install command, then we'll discuss the main differences from the previous calls. As before, we'll put this command into a small shell script for later reference.

touch /shared/provision/vm0004-ms.sh
chmod 755 /shared/provision/vm0004-ms.sh 
vim /shared/provision/vm0004-ms.sh
virt-install --connect qemu:///system \
  --name vm0004-ms \
  --ram 2048 \
  --arch x86_64 \
  --vcpus 2 \
  --cdrom /shared/files/Windows_Server_2008_R2_64Bit_SP1.iso \
  --disk path=/dev/an02-vg0/vm0004-1,device=disk,bus=virtio \
  --disk path=/shared/files/virtio-win-1.1.16.vfd,device=floppy \
  --os-type windows \
  --os-variant win2k8 \
  --network bridge=vbr2 \
  --vnc

Let's look at the main differences;

  • --cdrom /shared/files/Windows_Server_2008_R2_64Bit_SP1.iso

Here we've swapped out the --location and --extra-args arguments for the --cdrom switch. This will create an emulated DVD-ROM drive and boot from it. The path and file is an ISO image of the installation media we want to use.

  • --disk path=/dev/an02-vg0/vm0004-1,device=disk,bus=virtio

This is the same line we used before, pointing to the new LV of course, but we've added options to it. Specifically, we've told the hardware emulator, QEMU, to use the virtio bus rather than the standard (ide or scsi) bus. Virtio is a special, paravirtualized bus that improves storage I/O on Windows (and other) guests. Windows does not support this bus natively, which brings us to the next option.

  • --disk path=/shared/files/virtio-win-1.1.16.vfd,device=floppy

This mounts the emulated floppy disk with the virtio drivers that we'll need to allow windows to see the hard drive during the install.

The rest is more or less the same as before.

Initializing vm0004-ms's Install

As before, we'll run the script with the virt-install command in it.

On an-node02, run;

/shared/provision/vm0004-ms.sh
Starting install...
Creating domain...                                       |    0 B     00:00     
WARNING  Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
Domain installation still in progress. Waiting for installation to complete.

This install isn't automated like the previous installs were, so we'll need to hand-hold the VM through the install.

Initial provision of vm0004-ms.

After you click to select the Custom (advanced) installation method, you will find that the installer does not yet see a hard drive to install onto.

The Windows 2008 VM vm0004-ms doesn't see a hard drive.

Click on the Load Driver option on the bottom left. You will be presented with a window telling you your options for loading the drivers.

The Windows 2008 VM vm0004-ms driver prompt.

Click on the OK button and the installer will automatically find the virtual floppy disk and present you with the available drivers. Click to highlight Red Hat VirtIO SCSI Controller (A:\amd64\Win2008\viostor.inf) and click the Next button.

Selecting the Win2008 virtio driver.

At this point, the windows installer will see the virtual hard drive and you can proceed with the install as you would normally install Windows 2008 R2 server.

The Win2008 installer now is about to use the virtio-backed storage.

Once the install is complete, reboot.

Installation of vm0004-ms complete.

Post-Install Housekeeping

We have to be careful to "eject" the virtual floppy and DVD disks from the VM. If you neglect to do so and then later delete the files, virsh will fail to boot the VMs and will undefine them entirely. (Yes, that is dumb, in this author's opinion). How to recover from this issue can be found below.

Note: At the time of writing this, the author could not find any manner to eject media from the command line, shy of modifying the raw XML definition file and then redefining the VM and rebooting the guest. This is part of a known bug found in libvirt prior to version 0.9.7 and EL6 ships with version 0.8.7. For this reason, we will use virt-manager here.
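
For reference only, if no graphical client is available, the raw-XML route mentioned in the note amounts to something like the following; dump the definition, delete the <source .../> line from the floppy and cdrom <disk> elements by hand, redefine the guest, then power-cycle it so the changed hardware takes effect;

virsh dumpxml vm0004-ms > /shared/definitions/vm0004-ms.xml
vim /shared/definitions/vm0004-ms.xml
virsh define /shared/definitions/vm0004-ms.xml
virsh shutdown vm0004-ms
virsh start vm0004-ms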

To "eject" the DVD-ROM and floppy drive, we will use the virt-manager graphical program. You will need to either run virt-manager on one of the nodes, or use a version of it from your workstation by connecting to the host node over SSH. This later method is what I like to do.

Using virt-manager, connect to the vm0004-ms VM.

Connecting to vm0004-ms using virt-manager from a remote workstation.

Click on View then Details and you will see the virtual machine's emulated hardware.

Looking at vm0004-ms's emulated hardware configuration.

First, let's eject the virtual floppy disk. In the left panel, click to select the Floppy 1 device.

Viewing the Floppy 1 device on vm0004-ms.

Click on the Disconnect button and the disk will be unmounted.

Viewing the Floppy 1 device after ejecting the virtual floppy disk on vm0004-ms.

Now to eject the emulated DVD-ROM, again on the left panel, click to select the IDE CDROM 1 device.

Viewing the IDE CDROM 1 device on vm0004-ms.

Click on Disconnect again to unmount the ISO image.

Viewing the IDE CDROM 1 device after ejecting the virtual floppy disk on vm0004-ms.

Now both the floppy disk and DVD image have been unmounted from the VM. We can return to the console view (View -> Console) and we will see that both the floppy disk and DVD drive no longer show any media as mounted within them.

Viewing File Manager on vm0004-ms with the virtual floppy disk and DVD ISO image now unmounted.

Done!

Defining vm0004-ms On an-node01

Now with the installation media unmounted, and as we did before, we will use virsh dumpxml to write out the XML definition file for the new VM and then virsh define it on an-node01.

On an-node02;

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0003-db            running
  4 vm0004-ms            running
  - vm0001-dev           shut off
  - vm0002-web           shut off

On an-node01;

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0001-dev           running
  4 vm0002-web           running
  - vm0003-db            shut off

As before, our new VM is only defined on the node we installed it on. We'll fix this now.

On an-node02;

virsh dumpxml vm0004-ms > /shared/definitions/vm0004-ms.xml
cat /shared/definitions/vm0004-ms.xml
<domain type='kvm' id='4'>
  <name>vm0004-ms</name>
  <uuid>4c537551-96f4-3b5e-209a-0e41cab41d44</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/an02-vg0/vm0004-1'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='floppy'>
      <driver name='qemu' type='raw' cache='none'/>
      <target dev='fda' bus='fdc'/>
      <alias name='fdc0-0-0'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' unit='0'/>
    </disk>
    <controller type='fdc' index='0'>
      <alias name='fdc0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:5e:b1:47'/>
      <source bridge='vbr2'/>
      <target dev='vnet1'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes'/>
    <video>
      <model type='vga' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

As before, defining the VM on both nodes is optional, but a habit I like to keep.

On an-node01;

virsh define /shared/definitions/vm0004-ms.xml
Domain vm0004-ms defined from /shared/definitions/vm0004-ms.xml

We can confirm that it now exists by re-running virsh list --all.

virsh list --all
 Id Name                 State
----------------------------------
  2 vm0001-dev           running
  4 vm0002-web           running
  - vm0003-db            shut off
  - vm0004-ms            shut off

With that, all our VMs exist and we're ready to make them highly available!

Making Our VMs Highly Available Cluster Services

We're ready to start the final step; Making our VMs highly available cluster services! This involves two main steps:

  • Creating two new, ordered fail-over Domains; One with each node as the highest priority.
  • Adding our VMs as services, one in each new fail-over domain.

Creating the Ordered Fail-Over Domains

We have planned for two VMs, vm0001-dev and vm0002-web, to normally run on an-node01, while vm0003-db and vm0004-ms normally run on an-node02. Of course, should one of the nodes fail, the lost VMs will be restarted on the surviving node. For this, we will use an ordered fail-over domain.

The idea here is that each new fail-over domain will have one node with a higher priority than the other. That is, one will have an-node01 with the highest priority and the other will have an-node02 as the highest. This way, VMs that we want to normally run on a given node will be added to the matching fail-over domain.

Note: With 2-node clusters like ours, ordering is arguably useless. It's used here more to introduce the concepts rather than providing any real benefit. If you want to make production clusters unordered, you can. Just remember to run the VMs on the appropriate nodes when both are on-line.

Here are the two new domains we will create in /etc/cluster/cluster.conf;

                <failoverdomains>
                        ...
                        <failoverdomain name="primary_an01" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="1"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="primary_an02" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="2"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="1"/>
                        </failoverdomain>
                </failoverdomains>

The two major pieces of the puzzle here are the <failoverdomain ...>'s ordered="1" attribute and the <failoverdomainnode ...>'s priority="x" attributes. The former tells the cluster that there is a preference for which node should be used when both are available. The latter, which is the difference between the two new domains, tells the cluster which specific node is preferred.

The first of the new fail-over domains is primary_an01. Any service placed in this domain will prefer to run on an-node01, as its priority of 1 is higher than an-node02's priority of 2. The second of the new domains is primary_an02 which reverses the preference, making an-node02 preferred over an-node01.

Let's look at the complete cluster.conf with the new domain, and the version updated to 11 of course.

<?xml version="1.0"?>
<cluster config_version="11" name="an-cluster-A">
        <cman expected_votes="1" two_node="1"/>
        <clusternodes>
                <clusternode name="an-node01.alteeve.ca" nodeid="1">
                        <fence>
                                <method name="ipmi">
                                        <device action="reboot" name="ipmi_an01"/>
                                </method>
                                <method name="pdu">
                                        <device action="reboot" name="pdu2" port="1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="an-node02.alteeve.ca" nodeid="2">
                        <fence>
                                <method name="ipmi">
                                        <device action="reboot" name="ipmi_an02"/>
                                </method>
                                <method name="pdu">
                                        <device action="reboot" name="pdu2" port="2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" name="ipmi_an01" passwd="secret"/>
                <fencedevice agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" name="ipmi_an02" passwd="secret"/>
                <fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2"/>
        </fencedevices>
        <fence_daemon post_join_delay="30"/>
        <totem rrp_mode="none" secauth="off"/>
        <rm>
                <resources>
                        <script file="/etc/init.d/drbd" name="drbd"/>
                        <script file="/etc/init.d/clvmd" name="clvmd"/>
                        <script file="/etc/init.d/gfs2" name="gfs2"/>
                        <script file="/etc/init.d/libvirtd" name="libvirtd"/>
                </resources>
                <failoverdomains>
                        <failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
                                <failoverdomainnode name="an-node02.alteeve.ca"/>
                        </failoverdomain>
                        <failoverdomain name="primary_an01" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="1"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="primary_an02" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="an-node01.alteeve.ca" priority="2"/>
                                <failoverdomainnode name="an-node02.alteeve.ca" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <service autostart="1" domain="only_an01" exclusive="0" name="storage_an01" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
                <service autostart="1" domain="only_an02" exclusive="0" name="storage_an02" recovery="restart">
                        <script ref="drbd">
                                <script ref="clvmd">
                                        <script ref="gfs2">
                                                <script ref="libvirtd"/>
                                        </script>
                                </script>
                        </script>
                </service>
        </rm>
</cluster>

Let's validate it now, but we won't bother to push it out just yet.

ccs_config_validate
Configuration validates

Good, now to create the new VM services!

Making Our VMs Clustered Services

The final piece of the puzzle, and the whole purpose of this exercise is in sight!

There is a special service in rgmanager for virtual machines which uses the vm: prefix. We will need to create four of these services, one for each of the virtual machines.

Note: There is one main drawback to using rgmanager to manage virtual machines in our cluster. Ideally, we'd like to have the vm: services start after the storage_X services are up, along with a bit of logic to say that all VMs can start on one node should the other node's storage service fail. This isn't possible though, so we will need to manually start the VMs after a cold-start of the cluster.
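
If you find yourself cold-starting the cluster often, a small helper can take some of the sting out of this limitation. What follows is only a rough sketch of the idea, not part of the cluster's configuration; it assumes it is run by hand on an-node01, once the vm: services we are about to create exist, and that the service and VM names match the ones used in this tutorial.

#!/bin/bash
# Rough sketch; run manually on an-node01 after a cold-start of the cluster.
# Wait for this node's storage service to come on-line, then enable its VMs.
while ! clustat | grep "service:storage_an01" | grep -q "started"
do
    sleep 5
done
clusvcadm -e vm:vm0001-dev -m an-node01.alteeve.ca
clusvcadm -e vm:vm0002-web -m an-node01.alteeve.ca

The an-node02 version would wait on storage_an02 and enable vm0003-db and vm0004-ms instead.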

Creating The vm: Services

We'll create four new services, one for each VM. These are simple, single-element entries. Let's increment the version to 12 and take a look at the new entries.

        <rm>
                ...
                <vm name="vm0001-dev" domain="primary_an01" path="/shared/definitions/" autostart="0"
                 exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
                <vm name="vm0002-web" domain="primary_an01" path="/shared/definitions/" autostart="0"
                 exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
                <vm name="vm0003-db" domain="primary_an02" path="/shared/definitions/" autostart="0"
                 exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
                <vm name="vm0004-ms" domain="primary_an02" path="/shared/definitions/" autostart="0"
                 exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
        </rm>

Let's look at each of the attributes now;

  • name; This must match the name we created the VM with (the --name ... value when we provisioned the VMs). This is the name that will be passed to the vm.sh resource agent when managing this service, and it will be the <name>.xml used when looking under path=... for the VM's definition file.
  • domain; This tells the cluster to manage the VM using the given fail-over domain.
  • path; This tells the cluster where to look for the VM's definition file. Do not include the actual file name, just the path. This is partly why we wrote out each VM's definition to the shared directory.
  • autostart; As mentioned above, we can't have the VMs start with the cluster, because the underlying storage takes too long to come on-line. Setting this to 0 disables the auto-start behaviour.
  • exclusive; As we saw with the storage services, we want to ensure that this service is not exclusive. If it were, starting the VM would stop the storage and prevent other VMs from running on the node. This would be a bad thing™.
  • recovery; This tells the cluster what to do when the service fails. We are setting this to restart, so the cluster will try to restart the VM on the same node it was on when it failed. The alternative is relocate, which would instead start the VM on another node. More about this next.
  • max_restarts; When a VM fails, it is possible that the cause is a subtle problem on the host node itself. So this attribute allows us to set a limit on how many times a VM will be allowed to restart before giving up and switching to a relocate policy. We're setting this to 2, which means that if a VM is restarted twice, the third failure will trigger a relocate.
  • restart_expire_time; If we let the failure count increment indefinitely, then a relocate policy eventually becomes inevitable, even when there is no reason to believe that an issue with the host node exists. To account for this, we use this attribute to tell the cluster to "forget" a restart after the defined number of seconds. We're using 600 seconds (ten minutes). So if a VM fails, the failure count increments from 0 to 1. After 600 seconds though, the restart is "forgotten" and the failure count returns to 0. Said another way, a VM will have to fail three times in ten minutes to trigger the relocate recovery policy.
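
If you would like to see how rgmanager will interpret these new vm services before pushing the configuration out, the rg_test tool that ships with the rgmanager package can parse a candidate cluster.conf and print the resource tree it builds from it. This is purely an optional sanity check and changes nothing; a minimal example, run on the node holding the edited file:

rg_test test /etc/cluster/cluster.conf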

So let's take a look at the final, complete cluster.conf;

<?xml version="1.0"?>
<cluster config_version="12" name="an-cluster-A">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.ca" nodeid="1">
			<fence>
				<method name="ipmi">
					<device action="reboot" name="ipmi_an01"/>
				</method>
				<method name="pdu">
					<device action="reboot" name="pdu2" port="1"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.ca" nodeid="2">
			<fence>
				<method name="ipmi">
					<device action="reboot" name="ipmi_an02"/>
				</method>
				<method name="pdu">
					<device action="reboot" name="pdu2" port="2"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice agent="fence_ipmilan" ipaddr="an-node01.ipmi" login="root" name="ipmi_an01" passwd="secret"/>
		<fencedevice agent="fence_ipmilan" ipaddr="an-node02.ipmi" login="root" name="ipmi_an02" passwd="secret"/>
		<fencedevice agent="fence_apc_snmp" ipaddr="pdu2.alteeve.ca" name="pdu2"/>
	</fencedevices>
	<fence_daemon post_join_delay="30"/>
	<totem rrp_mode="none" secauth="off"/>
	<rm>
		<resources>
			<script file="/etc/init.d/drbd" name="drbd"/>
			<script file="/etc/init.d/clvmd" name="clvmd"/>
			<script file="/etc/init.d/gfs2" name="gfs2"/>
			<script file="/etc/init.d/libvirtd" name="libvirtd"/>
		</resources>
		<failoverdomains>
			<failoverdomain name="only_an01" nofailback="1" ordered="0" restricted="1">
				<failoverdomainnode name="an-node01.alteeve.ca"/>
			</failoverdomain>
			<failoverdomain name="only_an02" nofailback="1" ordered="0" restricted="1">
				<failoverdomainnode name="an-node02.alteeve.ca"/>
			</failoverdomain>
			<failoverdomain name="primary_an01" nofailback="1" ordered="1" restricted="1">
				<failoverdomainnode name="an-node01.alteeve.ca" priority="1"/>
				<failoverdomainnode name="an-node02.alteeve.ca" priority="2"/>
			</failoverdomain>
			<failoverdomain name="primary_an02" nofailback="1" ordered="1" restricted="1">
				<failoverdomainnode name="an-node01.alteeve.ca" priority="2"/>
				<failoverdomainnode name="an-node02.alteeve.ca" priority="1"/>
			</failoverdomain>
		</failoverdomains>
		<service autostart="1" domain="only_an01" exclusive="0" name="storage_an01" recovery="restart">
			<script ref="drbd">
				<script ref="clvmd">
					<script ref="gfs2">
						<script ref="libvirtd"/>
					</script>
				</script>
			</script>
		</service>
		<service autostart="1" domain="only_an02" exclusive="0" name="storage_an02" recovery="restart">
			<script ref="drbd">
				<script ref="clvmd">
					<script ref="gfs2">
						<script ref="libvirtd"/>
					</script>
				</script>
			</script>
		</service>
		<vm name="vm0001-dev" domain="primary_an01" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
		<vm name="vm0002-web" domain="primary_an01" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
		<vm name="vm0003-db" domain="primary_an02" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
		<vm name="vm0004-ms" domain="primary_an02" path="/shared/definitions/" autostart="0" exclusive="0" recovery="restart" max_restarts="2" restart_expire_time="600"/>
	</rm>
</cluster>

Let's validate one more time.

ccs_config_validate
Configuration validates

She's a beaut', eh?

Making The VM Services Active

Before we push the last cluster.conf out, let's take a look at the current state of affairs.

On an-node01;

clustat
Cluster Status for an-cluster-A @ Tue Dec 27 14:06:38 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started
virsh list --all
 Id Name                 State
----------------------------------
  2 vm0001-dev           running
  4 vm0002-web           running
  - vm0003-db            shut off
  - vm0004-ms            shut off

On an-node02;

clustat
Cluster Status for an-cluster-A @ Tue Dec 27 14:07:32 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started
virsh list --all
 Id Name                 State
----------------------------------
  2 vm0003-db            running
  4 vm0004-ms            running
  - vm0001-dev           shut off
  - vm0002-web           shut off

So we can see that the cluster doesn't know about the VMs yet, as we've not yet pushed out the changes. We can also see that vm0001-dev and vm0002-web are currently running on an-node01 and vm0003-db and vm0004-ms are running on an-node02.

So let's push out the new configuration and see what happens!

cman_tool version -r
cman_tool version
6.2.0 config 12

Let's take a look at what showed up in syslog;

Dec 27 14:18:20 an-node01 modcluster: Updating cluster.conf
Dec 27 14:18:20 an-node01 corosync[2362]:   [QUORUM] Members[2]: 1 2
Dec 27 14:18:20 an-node01 rgmanager[2579]: Reconfiguring
Dec 27 14:18:22 an-node01 rgmanager[2579]: Initializing vm:vm0001-dev
Dec 27 14:18:22 an-node01 rgmanager[2579]: vm:vm0001-dev was added to the config, but I am not initializing it.
Dec 27 14:18:22 an-node01 rgmanager[2579]: Initializing vm:vm0002-web
Dec 27 14:18:22 an-node01 rgmanager[2579]: vm:vm0002-web was added to the config, but I am not initializing it.
Dec 27 14:18:22 an-node01 rgmanager[2579]: Initializing vm:vm0003-db
Dec 27 14:18:22 an-node01 rgmanager[2579]: vm:vm0003-db was added to the config, but I am not initializing it.
Dec 27 14:18:23 an-node01 rgmanager[2579]: Initializing vm:vm0004-ms
Dec 27 14:18:23 an-node01 rgmanager[2579]: vm:vm0004-ms was added to the config, but I am not initializing it.

Indeed, if we check again with clustat, we'll see the new VM services, but all four will show as disabled, despite the VMs themselves being up and running.

clustat
Cluster Status for an-cluster-A @ Tue Dec 27 14:20:10 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  (none)                         disabled      
 vm:vm0002-web                  (none)                         disabled      
 vm:vm0003-db                   (none)                         disabled      
 vm:vm0004-ms                   (none)                         disabled

This highlights how the state of the VMs is not intrinsically tied to the cluster's status. The VMs were started outside of the cluster, so the cluster thinks they are off-line. We know they're running though, so we can tell the cluster to enable them now. Note that the VMs will not be rebooted or in any way affected, provided you tell the cluster to enable each VM on the node it's currently running on.

Let's start by enabling vm0001-dev, which we know is running on an-node01. Be aware that the vm: prefix is required when using clusvcadm!

clusvcadm -e vm:vm0001-dev -m an-node01.alteeve.ca
vm:vm0001-dev is now running on an-node01.alteeve.ca

Now we can see that the VM is under the cluster's control!

clustat
Cluster Status for an-cluster-A @ Tue Dec 27 14:25:08 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  (none)                         disabled      
 vm:vm0003-db                   (none)                         disabled      
 vm:vm0004-ms                   (none)                         disabled

Perfect! Now to add the other three VMs. Note that all of these commands can be run from whichever node you wish, because we're specifying the target node by using the "member" switch.

clusvcadm -e vm:vm0002-web -m an-node01.alteeve.ca
vm:vm0002-web is now running on an-node01.alteeve.ca
clusvcadm -e vm:vm0003-db -m an-node02.alteeve.ca
vm:vm0003-db is now running on an-node02.alteeve.ca
clusvcadm -e vm:vm0004-ms -m an-node02.alteeve.ca
vm:vm0004-ms is now running on an-node02.alteeve.ca

Let's do a final check of the cluster's status;

clustat
Cluster Status for an-cluster-A @ Tue Dec 27 14:28:19 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

The Last Step - Automatic Cluster Start

The last step is to enable automatic starting of the cman and rgmanager services when the host node boots. This is quite simple;

On both nodes, run;

chkconfig cman on && chkconfig rgmanager on
chkconfig --list | grep -e cman -e rgmanager
cman           	0:off	1:off	2:on	3:on	4:on	5:on	6:off
rgmanager      	0:off	1:off	2:on	3:on	4:on	5:on	6:off
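
If you have password-less ssh between the nodes and want to confirm both from one terminal, a quick loop works just as well as checking each node by hand; a minimal sketch:

for node in an-node01 an-node02
do
    echo "== $node =="
    ssh root@$node "chkconfig --list | grep -e cman -e rgmanager"
done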

The next time you restart the nodes, you will be able to run clustat and you should find your cluster up and running!

We're Done! Or, Are We?

That's it, ladies and gentlemen. Our cluster is completed! In theory now, any failure in the cluster will result in no lost data and, at worst, no more than a minute or two of downtime.

"In theory" just isn't good enough in clustering though. Time to take "theory" and make it a tested, known fact.

Testing; Taking Theory And Putting It Into Practice

You may have thought that we were done. Indeed, the cluster has been built, but we don't know if things actually work.

Enter testing.

In practice, when preparing production clusters for deployment, you should plan to spend at least twice as long in testing as you did in building the cluster. You need to imagine all failure scenarios, trigger those failures and see what happens.

A Note On The Importance Of Fencing

It may be tempting to think that you were careful and don't really need to test your cluster thoroughly.

You are wrong

Barring being absolutely obsessive about testing every step of the way, you will almost certainly make mistakes. Now, I make no claims to genius, but I do like to think I am pretty comfortable building 2-node clusters. Despite that, while writing this testing portion of the tutorial, I found the following problems with my cluster;

  • RGManager's autostart="1" is not evaluated when a node starts, only when quorum is gained. The mistake had me assuming that the storage services would start when the node restarted, after having manually disabled the service prior to node withdrawal.
  • The behaviour of echo c > /proc/sysrq-trigger changed since EL5 and now triggers a core dump with 100% CPU load in EL6 KVM guests. This means that a previous expectation of the cluster recovering from these crashes was wrong.
  • I forgot to install the obliterate-peer.sh script for DRBD, which I didn't catch until I tried to fail a node.

You simply can't make assumptions. Test your cluster in every failure mode you can imagine. Until you do, you won't know what you might have missed!

Controlled VM Migration And Node Withdrawal

This testing will ensure that live migration works in both directions, and that each node can be cleanly removed from and then rejoin the cluster.

The test will consist of the following steps;

  1. Live migrate vm0001-dev and vm0002-web from an-node01 to an-node02. This will ensure live migration works and that all VMs will run on a single node.
  2. Withdraw an-node01 from the cluster entirely and reboot it. This will ensure that cold shut-down of the node is successful.
  3. Once an-node01 has rebooted, rejoin it to the cluster. This will ensure that rejoining the cluster works.
  4. Once an-node01 is a member of the cluster, we will wait a few minutes and ensure that vm0001-dev and vm0002-web automatically live migrate back to an-node01. This will ensure that priority is working.
  5. We will live migrate vm0003-db and vm0004-ms from an-node02 to an-node01 to ensure that migration works in the other direction.
  6. With the VMs all running on an-node01, we will withdraw an-node02 from the cluster, reboot it, rejoin it to the cluster and then confirm that vm0003-db and vm0004-ms automatically migrate back to an-node02.

With all of these tests completed, we will be able to ensure that order and controlled migration of VM services work as expected.

Live Migration - vm0001-dev And vm0002-web To an-node02

First up, we will use the special clusvcadm switch -M, which tells the cluster to use "live migration". That is, the VM will move to the target member without shutting down. Users of the VM should notice, at worst, a brief network interruption when the cut-over occurs, with no adverse effect on their services and no dropped connections.
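
If you are curious about the progress of the memory copy while a migration is in flight, libvirt can report on the migration job from the source node. A small sketch, assuming vm0001-dev is the domain being migrated:

# On the source node, while the migration is running;
watch -n 1 virsh domjobinfo vm0001-dev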

Let's take a quick look at the state of affairs;

On an-node02, run;

clustat
Cluster Status for an-cluster-A @ Sat Dec 31 13:49:41 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Let's start by live migrating vm0001-dev. Before we do though, let's ssh into it and start a ping against a target on the internet. We'll leave this running throughout the live migration.

On vm0001-dev;

Running ping alteeve.ca on vm0001-dev prior to live migration.

Now back on an-node01, let's migrate vm0001-dev over to an-node02. This will take a little while as the VM's RAM gets copied across the BCN.

clusvcadm -M vm:vm0001-dev -m an-node02.alteeve.ca
Trying to migrate vm:vm0001-dev to an-node02.alteeve.ca...Success
Mid-migration of vm0001-dev.

Once complete, check the new status of clustat;

clustat
Cluster Status for an-cluster-A @ Sat Dec 31 14:11:43 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node02.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

If we look again at vm0001-dev's ping, we'll see that a few packets were dropped but our ssh session remained intact. Any other active TCP session should have survived this just fine as well.

Results of the ping on vm0001-dev post live migration.

Wonderful! Now let's live migrate vm0002-web to an-node02.

clusvcadm -M vm:vm0002-web -m an-node02.alteeve.ca
Trying to migrate vm:vm0002-web to an-node02.alteeve.ca...Success

Again, check the new status of clustat;

clustat
Cluster Status for an-cluster-A @ Sat Dec 31 14:17:35 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node02.alteeve.ca          started       
 vm:vm0002-web                  an-node02.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

We can see now that all four VMs are running on an-node02! This is possible because of our careful planning of the VM resources earlier. This will mean more load on the host node's CPU, so things might not be as fast as we would like, but all services are on-line!
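
While one node carries all four VMs, it can be handy to keep an eye on its load until the other node returns. Nothing fancy is needed; a simple sketch:

# On an-node02, while it hosts all four VMs;
watch -n 5 'uptime; echo; virsh list'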

Withdraw an-node01 From The Cluster

So imagine now that we need to do some work on an-node01, like replace a bad network card or add some RAM. We've moved the VMs off, so now the only remaining service is service:storage_an01. We don't want to manually disable this service, because if we did, it would not automatically start when the node rejoined the cluster. So we're going to just stop rgmanager and let it stop the storage_an01 service cleanly.

Check the state of the cluster;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:11:56 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node02.alteeve.ca          started       
 vm:vm0002-web                  an-node02.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Just as we expected. Now we will stop rgmanager, then stop cman.

On an-node01;

/etc/init.d/rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]
/etc/init.d/cman stop
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

Checking on an-node02, we can see that all four VMs are running fine and that an-node01 is gone.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:13:23 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Offline
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           (an-node01.alteeve.ca)        stopped       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node02.alteeve.ca          started       
 vm:vm0002-web                  an-node02.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Test passed!

You can now power off and restart an-node01.

Rejoining an-node01 To The Cluster

If you haven't already, reboot an-node01. As we configured earlier, cman and rgmanager will start automatically when the node boots. The easiest thing to do for this test is to watch clustat on an-node02. If all goes well, you should see an-node01 rejoin the cluster automatically.
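
An easy way to do this is with the watch command;

# On an-node02;
watch -n 2 clustat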

Connected to cluster;

Rebooting an-node01, while an-node02 hosts all four VMs.

Storage coming on-line;

Storage coming up on an-node01.

Back in business!


You should be able to log back into an-node01 and see that everything is back on-line. DRBD should be UpToDate, or be in the process of synchronizing.

Warning: Never migrate a VM to a node until its underlying DRBD resource is UpToDate! If the sync source node (the one that is UpToDate) goes down, DRBD will drop the resource to Secondary, making it inaccessible to the node and crashing the VM.
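
A quick way to confirm this before migrating anything is to ask DRBD for each resource's disk state directly. A minimal sketch, using the three resources from this tutorial:

# Both sides of every resource must report UpToDate before hosting VMs here.
for res in r0 r1 r2
do
    echo -n "$res: "
    drbdadm dstate $res
done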

Migrating vm0001-dev And vm0002-web Back To an-node01

If we were putting the cluster back into its normal state, all that would be left to do is to migrate an-node01's VMs back. So let's do that.

As always, start with a check of the current cluster status.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:31:06 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node02.alteeve.ca          started       
 vm:vm0002-web                  an-node02.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Now confirm that the underlying storage is ready. Remember that DRBD resource r1 backs the an01-vg0 volume group which holds these VMs' storage.

cat /proc/drbd
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:12552 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:2428 dw:2428 dr:9776 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:510 dw:510 dr:9744 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

All systems ready; Let's migrate vm0001-dev and vm0002-web now.

clusvcadm -M vm:vm0001-dev -m an-node01.alteeve.ca
Trying to migrate vm:vm0001-dev to an-node01.alteeve.ca...Success
clusvcadm -M vm:vm0002-web -m an-node01.alteeve.ca
Trying to migrate vm:vm0002-web to an-node01.alteeve.ca...Success

Check the new status;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:32:11 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

With that, the cluster is back in business!

Live Migration - vm0003-db And vm0004-ms To an-node01

Let's start the process of taking an-node02 out of the cluster. The first step is to move vm0003-db and vm0004-ms over to an-node01.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:42:10 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Ready to migrate.

clusvcadm -M vm:vm0003-db -m an-node01.alteeve.ca
Trying to migrate vm:vm0003-db to an-node01.alteeve.ca...Success
clusvcadm -M vm:vm0004-ms -m an-node01.alteeve.ca
Trying to migrate vm:vm0004-ms to an-node01.alteeve.ca...Success

Confirm;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:42:42 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node01.alteeve.ca          started       
 vm:vm0004-ms                   an-node01.alteeve.ca          started

Done!

Withdraw an-node02 From The Cluster

Double-check that all the VMs are off of an-node02 prior to withdrawal.
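
In addition to the clustat check below, it doesn't hurt to ask libvirt directly on an-node02; anything still listed as running there should be migrated off before proceeding.

# On an-node02; all four VMs should show as "shut off" here.
virsh list --all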

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:45:30 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node01.alteeve.ca          started       
 vm:vm0004-ms                   an-node01.alteeve.ca          started

As before, we will not disable the storage_an02 service. If we did, the service would not automatically restart when the node rejoined the cluster.

Now that an-node01 is hosting all of the VMs and can run on its own, we can stop rgmanager and then cman.

On an-node02;

/etc/init.d/rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]
/etc/init.d/cman stop
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

Confirm;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:49:14 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Offline

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           (an-node02.alteeve.ca)        stopped
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node01.alteeve.ca          started
 vm:vm0004-ms                   an-node01.alteeve.ca          started

Done! We can now shut down and reboot an-node02 entirely.

Rejoining an-node02 To The Cluster

Exactly as we did with an-node01, we will reboot an-node02. The cman and rgmanager services should start automatically, so once again, we will just watch clustat on an-node01. If all goes well, you should see an-node02 rejoin the cluster automatically.

Connected to cluster;

Rebooting an-node02, while an-node01 hosts all four VMs.

Storage coming on-line;

Storage coming up on an-node02.

Back in business!


You should be able to log back into an-node02 and see that everything is back on-line. DRBD should be UpToDate, or be in the process of synchronizing.

Warning: Again; Never migrate a VM to a node until its underlying DRBD resource is UpToDate! If the sync source node (the one that is UpToDate) goes down, DRBD will drop the resource to Secondary, making it inaccessible to the node and crashing the VM.

Migrating vm0003-db And vm0004-ms Back To an-node02

The last step to restore the cluster to its ideal state is to migrate vm0003-db and vm0004-ms back to an-node02.

As always, start with a check of the current cluster status.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:57:19 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node01.alteeve.ca          started       
 vm:vm0004-ms                   an-node01.alteeve.ca          started

Now confirm that the underlying storage is ready. Remember that DRBD resource r2 backs the an02-vg0 volume group which holds these VMs' storage.

cat /proc/drbd
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:8788 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:376 dw:376 dr:5876 al:0 bm:7 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:671 dw:671 dr:5844 al:0 bm:16 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

All systems ready; Let's migrate vm0003-db and vm0004-ms now.

clusvcadm -M vm:vm0003-db -m an-node02.alteeve.ca
Trying to migrate vm:vm0003-db to an-node02.alteeve.ca...Success
clusvcadm -M vm:vm0004-ms -m an-node02.alteeve.ca
Trying to migrate vm:vm0004-ms to an-node02.alteeve.ca...Success

Check the new status;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 16:59:22 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

All controlled migration, withdrawal and re-joining tests completed!

Uncontrolled VM Migration and Node Failure

This test will be more violent than the previous ones. Here we will fail the VMs and confirm that the cluster recovers them by restarting them on their host node. We will fail each VM three times within ten minutes to confirm that the relocate policy kicks in on the third failure, as we expect it to.

Once we complete the VM failure testing, we will fail and recover both nodes, one at a time of course, and rejoin them to the cluster. This will confirm that the VMs recover on the surviving node.

The tests will be;

  • Crash all four VMs three times. The failures will be triggered by using virsh destroy <vm> on the current host node.
  • After each crash, we will confirm that the VM came back on-line before crashing it again.
  • With all of the VMs tested to recover properly, we will live-migrate them back to their designated host nodes.
  • Once the cluster is back into its ideal state, we will crash an-node01. Within a few seconds, it should be fenced and the lost VMs should restart on an-node02. Once it rejoins the cluster and the VMs return to an-node01, we will repeat the test by failing an-node02.
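
If you would rather script the crash-and-wait cycle than type it out each time, a rough sketch along these lines works. It is only an illustration; it assumes it is run on the VM's current host node and that rgmanager logs to /var/log/messages, and you should still wait for the guest OS itself to finish booting (the ping trick used below) before crashing it again.

#!/bin/bash
# Rough sketch of one crash-and-recover cycle; run on the VM's current host node.
vm="vm0001-dev"
virsh destroy $vm
# Watch syslog until rgmanager reports the service restarted (or relocated).
tail -n 0 -f /var/log/messages | grep --line-buffered -m 1 \
    -e "Service vm:$vm started" \
    -e "vm:$vm is now running"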

Failure Testing vm0001-dev

Confirm that vm0001-dev is running on an-node01.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 18:29:10 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

It is; perfect. Now, before I kill a VM, I like to start a ping against it. This acts both as an indication of when the VM is back up and as a crude method of timing how long the VM took to fully recover.

Note: If your VMs are isolated, as they are in this tutorial, you may have to run the ping from another VM or from your firewall.
ping 10.254.0.1
PING 10.254.0.1 (10.254.0.1) 56(84) bytes of data.
64 bytes from 10.254.0.1: icmp_seq=1 ttl=64 time=0.737 ms
64 bytes from 10.254.0.1: icmp_seq=2 ttl=64 time=0.530 ms
64 bytes from 10.254.0.1: icmp_seq=3 ttl=64 time=0.589 ms

Now, on an-node01, forcefully shut down vm0001-dev;

virsh destroy vm0001-dev
Domain vm0001-dev destroyed

Within a few seconds (10, maximum), the cluster will detect that the VM has failed and will restart it.

Failure of vm0001-dev detected by the cluster and restarted.

We can see in an-node01's syslog that the failure was detected and automatically recovered.

Jan  1 18:38:25 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:38:25 an-node01 kernel: device vnet0 left promiscuous mode
Jan  1 18:38:25 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:38:27 an-node01 ntpd[2190]: Deleting interface #19 vnet0, fe80::fc54:ff:fe9b:3cf7#123, interface stats: received=0, sent=0, dropped=0, active_time=3058 secs
Jan  1 18:38:35 an-node01 rgmanager[2430]: status on vm "vm0001-dev" returned 7 (unspecified)
Jan  1 18:38:35 an-node01 rgmanager[2430]: Stopping service vm:vm0001-dev
Jan  1 18:38:36 an-node01 rgmanager[2430]: Service vm:vm0001-dev is recovering
Jan  1 18:38:36 an-node01 rgmanager[2430]: Recovering failed service vm:vm0001-dev
Jan  1 18:38:37 an-node01 kernel: device vnet0 entered promiscuous mode
Jan  1 18:38:37 an-node01 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 18:38:37 an-node01 rgmanager[2430]: Service vm:vm0001-dev started
Jan  1 18:38:39 an-node01 ntpd[2190]: Listening on interface #20 vnet0, fe80::fc54:ff:fe9b:3cf7#123 Enabled
Jan  1 18:38:49 an-node01 kernel: kvm: 12390: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 18:38:52 an-node01 kernel: vbr2: port 2(vnet0) entering forwarding state

The first four entries relate to the VM's network being torn down after it was killed. The remaining lines show the cluster detecting the failure and recovering the VM!

Going back to the ping, we can see that the VM was down for roughly 36 seconds (the time between network loss and recovery; add a bit more time on top of that for all of the services within the VM to start).

PING 10.254.0.1 (10.254.0.1) 56(84) bytes of data.
64 bytes from 10.254.0.1: icmp_seq=1 ttl=64 time=0.737 ms
64 bytes from 10.254.0.1: icmp_seq=2 ttl=64 time=0.530 ms
64 bytes from 10.254.0.1: icmp_seq=3 ttl=64 time=0.589 ms
64 bytes from 10.254.0.1: icmp_seq=4 ttl=64 time=0.589 ms
64 bytes from 10.254.0.1: icmp_seq=5 ttl=64 time=0.477 ms
64 bytes from 10.254.0.1: icmp_seq=6 ttl=64 time=0.482 ms
64 bytes from 10.254.0.1: icmp_seq=7 ttl=64 time=0.489 ms
64 bytes from 10.254.0.1: icmp_seq=8 ttl=64 time=0.495 ms
64 bytes from 10.254.0.1: icmp_seq=9 ttl=64 time=0.503 ms
64 bytes from 10.254.0.1: icmp_seq=10 ttl=64 time=0.513 ms
64 bytes from 10.254.0.1: icmp_seq=11 ttl=64 time=0.516 ms
64 bytes from 10.254.0.1: icmp_seq=12 ttl=64 time=0.524 ms
64 bytes from 10.254.0.1: icmp_seq=13 ttl=64 time=0.405 ms
64 bytes from 10.254.0.1: icmp_seq=14 ttl=64 time=0.536 ms
64 bytes from 10.254.0.1: icmp_seq=15 ttl=64 time=0.441 ms
64 bytes from 10.254.0.1: icmp_seq=16 ttl=64 time=0.552 ms

# VM died here, 36 pings lost at ~1 ping/sec.

64 bytes from 10.254.0.1: icmp_seq=52 ttl=64 time=0.816 ms
64 bytes from 10.254.0.1: icmp_seq=53 ttl=64 time=0.440 ms
64 bytes from 10.254.0.1: icmp_seq=54 ttl=64 time=0.354 ms
64 bytes from 10.254.0.1: icmp_seq=55 ttl=64 time=0.342 ms
64 bytes from 10.254.0.1: icmp_seq=56 ttl=64 time=0.446 ms
64 bytes from 10.254.0.1: icmp_seq=57 ttl=64 time=0.418 ms
64 bytes from 10.254.0.1: icmp_seq=58 ttl=64 time=0.441 ms
^C
--- 10.254.0.1 ping statistics ---
58 packets transmitted, 23 received, 60% packet loss, time 57949ms
rtt min/avg/max/mdev = 0.342/0.505/0.816/0.109 ms

Not bad at all!

Now let's kill it two more times and confirm that the third recovery happens on an-node02. We'll use the ping as an indicator of when the VM is back on-line before killing it the third time.

Second failure;

virsh destroy vm0001-dev
Domain vm0001-dev destroyed

Checking syslog again;

Jan  1 18:45:07 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:45:07 an-node01 kernel: device vnet0 left promiscuous mode
Jan  1 18:45:07 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:45:09 an-node01 ntpd[2190]: Deleting interface #20 vnet0, fe80::fc54:ff:fe9b:3cf7#123, interface stats: received=0, sent=0, dropped=0, active_time=390 secs
Jan  1 18:45:46 an-node01 rgmanager[2430]: status on vm "vm0001-dev" returned 7 (unspecified)
Jan  1 18:45:46 an-node01 rgmanager[2430]: Stopping service vm:vm0001-dev
Jan  1 18:45:46 an-node01 rgmanager[2430]: Service vm:vm0001-dev is recovering
Jan  1 18:45:47 an-node01 rgmanager[2430]: Recovering failed service vm:vm0001-dev
Jan  1 18:45:47 an-node01 kernel: device vnet0 entered promiscuous mode
Jan  1 18:45:47 an-node01 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 18:45:47 an-node01 rgmanager[2430]: Service vm:vm0001-dev started
Jan  1 18:45:50 an-node01 ntpd[2190]: Listening on interface #21 vnet0, fe80::fc54:ff:fe9b:3cf7#123 Enabled
Jan  1 18:45:59 an-node01 kernel: kvm: 17874: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 18:46:02 an-node01 kernel: vbr2: port 2(vnet0) entering forwarding state

We can see that the vm0001-dev VM is still on an-node01;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 18:47:01 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Now the third crash. This time it should come up on an-node02.

virsh destroy vm0001-dev
Domain vm0001-dev destroyed

Checking an-node01's syslog again, we'll see something different.

Jan  1 18:47:26 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:47:26 an-node01 kernel: device vnet0 left promiscuous mode
Jan  1 18:47:26 an-node01 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 18:47:27 an-node01 ntpd[2190]: Deleting interface #21 vnet0, fe80::fc54:ff:fe9b:3cf7#123, interface stats: received=0, sent=0, dropped=0, active_time=97 secs
Jan  1 18:47:46 an-node01 rgmanager[2430]: status on vm "vm0001-dev" returned 7 (unspecified)
Jan  1 18:47:46 an-node01 rgmanager[2430]: Stopping service vm:vm0001-dev
Jan  1 18:47:46 an-node01 rgmanager[2430]: Service vm:vm0001-dev is recovering
Jan  1 18:47:46 an-node01 rgmanager[2430]: Restart threshold for vm:vm0001-dev exceeded; attempting to relocate
Jan  1 18:47:47 an-node01 rgmanager[2430]: Service vm:vm0001-dev is now running on member 2

The difference is the "Restart threshold for vm:vm0001-dev exceeded; attempting to relocate" line. Indeed, if we check clustat, we will see it running on an-node02!

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 18:49:38 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node02.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Success!

This test is complete, so we'll finish by migrating the VM back to an-node01.

clusvcadm -M vm:vm0001-dev -m an-node01.alteeve.ca
Trying to migrate vm:vm0001-dev to an-node01.alteeve.ca...Success

As always, confirm.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 18:51:05 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Excellent.

Failure Testing vm0002-web

We'll go through the same process here as we just did with vm0001-dev, but we won't cover the details in as much depth. After each crash of the VM, we'll check clustat and look at the syslog on an-node01. Not shown is a background ping running to indicate when the VM is back up enough to crash again; a sketch of that check follows.
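
A minimal version of that background check, assuming (hypothetically) that vm0002-web answers at 10.254.0.2; substitute your VM's real address and run it from a machine that can actually reach the VM's network:

# 10.254.0.2 is a hypothetical address for vm0002-web; adjust to suit.
until ping -c 1 -W 1 10.254.0.2 >/dev/null 2>&1
do
    sleep 1
done
echo "vm0002-web is answering pings again."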

Confirm that vm0002-web is on an-node01.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:06:21 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Good, we're ready. On an-node01, kill the VM.

virsh destroy vm0002-web
Domain vm0002-web destroyed

As we expect, an-node01 restarts the VM within a few seconds.

Jan  1 19:07:16 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:07:16 an-node01 kernel: device vnet1 left promiscuous mode
Jan  1 19:07:16 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:07:18 an-node01 ntpd[2190]: Deleting interface #11 vnet1, fe80::fc54:ff:fe65:3960#123, interface stats: received=0, sent=0, dropped=0, active_time=9315 secs
Jan  1 19:07:27 an-node01 rgmanager[2430]: status on vm "vm0002-web" returned 7 (unspecified)
Jan  1 19:07:27 an-node01 rgmanager[2430]: Stopping service vm:vm0002-web
Jan  1 19:07:27 an-node01 rgmanager[2430]: Service vm:vm0002-web is recovering
Jan  1 19:07:28 an-node01 rgmanager[2430]: Recovering failed service vm:vm0002-web
Jan  1 19:07:28 an-node01 kernel: device vnet1 entered promiscuous mode
Jan  1 19:07:28 an-node01 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:07:29 an-node01 rgmanager[2430]: Service vm:vm0002-web started
Jan  1 19:07:31 an-node01 ntpd[2190]: Listening on interface #23 vnet1, fe80::fc54:ff:fe65:3960#123 Enabled
Jan  1 19:07:38 an-node01 kernel: kvm: 1994: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 19:07:43 an-node01 kernel: vbr2: port 3(vnet1) entering forwarding state

Checking clustat, I can see the VM is back on-line.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:09:03 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Let's kill it for the second time.

virsh destroy vm0002-web
Domain vm0002-web destroyed

We can again see that an-node01 recovered it locally.

Jan  1 19:12:08 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:12:08 an-node01 kernel: device vnet1 left promiscuous mode
Jan  1 19:12:08 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:12:10 an-node01 ntpd[2190]: Deleting interface #23 vnet1, fe80::fc54:ff:fe65:3960#123, interface stats: received=0, sent=0, dropped=0, active_time=279 secs
Jan  1 19:12:17 an-node01 rgmanager[2430]: status on vm "vm0002-web" returned 7 (unspecified)
Jan  1 19:12:17 an-node01 rgmanager[2430]: Stopping service vm:vm0002-web
Jan  1 19:12:18 an-node01 rgmanager[2430]: Service vm:vm0002-web is recovering
Jan  1 19:12:18 an-node01 rgmanager[2430]: Recovering failed service vm:vm0002-web
Jan  1 19:12:19 an-node01 kernel: device vnet1 entered promiscuous mode
Jan  1 19:12:19 an-node01 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:12:19 an-node01 rgmanager[2430]: Service vm:vm0002-web started
Jan  1 19:12:22 an-node01 ntpd[2190]: Listening on interface #24 vnet1, fe80::fc54:ff:fe65:3960#123 Enabled
Jan  1 19:12:28 an-node01 kernel: kvm: 6113: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 19:12:34 an-node01 kernel: vbr2: port 3(vnet1) entering forwarding state

Confirm with clustat;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:13:45 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

This time, it should recover on an-node02;

virsh destroy vm0002-web
Domain vm0002-web destroyed

Looking in syslog, we can see the counter was tripped.

Jan  1 19:14:26 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:14:26 an-node01 kernel: device vnet1 left promiscuous mode
Jan  1 19:14:26 an-node01 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:14:27 an-node01 rgmanager[2430]: status on vm "vm0002-web" returned 7 (unspecified)
Jan  1 19:14:27 an-node01 rgmanager[2430]: Stopping service vm:vm0002-web
Jan  1 19:14:28 an-node01 rgmanager[2430]: Service vm:vm0002-web is recovering
Jan  1 19:14:28 an-node01 rgmanager[2430]: Restart threshold for vm:vm0002-web exceeded; attempting to relocate
Jan  1 19:14:28 an-node01 ntpd[2190]: Deleting interface #24 vnet1, fe80::fc54:ff:fe65:3960#123, interface stats: received=0, sent=0, dropped=0, active_time=126 secs
Jan  1 19:14:29 an-node01 rgmanager[2430]: Service vm:vm0002-web is now running on member 2

Indeed, this is confirmed with clustat.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:15:57 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node02.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Excellent, this test has passed as well! Now migrate the VM back and we'll be ready to test the third VM.

clusvcadm -M vm:vm0002-web -m an-node01.alteeve.ca
Trying to migrate vm:vm0002-web to an-node01.alteeve.ca...Success
clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:17:41 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Done.

Failure Testing vm0003-db

This should be getting familiar now. The main difference is that the VM is now running on an-node02, so that is where we will kill the VM and where we will watch syslog.

Confirm that vm0003-db is on an-node02.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:25:55 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Good, we're ready. On an-node02, kill the VM.

virsh destroy vm0003-db
Domain vm0003-db destroyed

As we expect, an-node02 restarts the VM within a few seconds.

Jan  1 19:26:21 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:26:21 an-node02 kernel: device vnet0 left promiscuous mode
Jan  1 19:26:21 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:26:22 an-node02 ntpd[2200]: Deleting interface #10 vnet0, fe80::fc54:ff:fe44:83ec#123, interface stats: received=0, sent=0, dropped=0, active_time=8863 secs
Jan  1 19:26:35 an-node02 rgmanager[2439]: status on vm "vm0003-db" returned 7 (unspecified)
Jan  1 19:26:36 an-node02 rgmanager[2439]: Stopping service vm:vm0003-db
Jan  1 19:26:36 an-node02 rgmanager[2439]: Service vm:vm0003-db is recovering
Jan  1 19:26:36 an-node02 rgmanager[2439]: Recovering failed service vm:vm0003-db
Jan  1 19:26:37 an-node02 kernel: device vnet0 entered promiscuous mode
Jan  1 19:26:37 an-node02 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 19:26:37 an-node02 rgmanager[2439]: Service vm:vm0003-db started
Jan  1 19:26:40 an-node02 ntpd[2200]: Listening on interface #15 vnet0, fe80::fc54:ff:fe44:83ec#123 Enabled

Checking clustat, I can see the VM is back on-line.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:27:06 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Let's kill it for the second time.

virsh destroy vm0003-db
Domain vm0003-db destroyed

We can again see that an-node02 recovered it locally.

Jan  1 19:27:40 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:27:40 an-node02 kernel: device vnet0 left promiscuous mode
Jan  1 19:27:40 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:27:41 an-node02 ntpd[2200]: Deleting interface #15 vnet0, fe80::fc54:ff:fe44:83ec#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
Jan  1 19:27:45 an-node02 rgmanager[2439]: status on vm "vm0003-db" returned 7 (unspecified)
Jan  1 19:27:46 an-node02 rgmanager[2439]: Stopping service vm:vm0003-db
Jan  1 19:27:46 an-node02 rgmanager[2439]: Service vm:vm0003-db is recovering
Jan  1 19:27:46 an-node02 rgmanager[2439]: Recovering failed service vm:vm0003-db
Jan  1 19:27:47 an-node02 kernel: device vnet0 entered promiscuous mode
Jan  1 19:27:47 an-node02 kernel: vbr2: port 2(vnet0) entering learning state
Jan  1 19:27:47 an-node02 rgmanager[2439]: Service vm:vm0003-db started
Jan  1 19:27:50 an-node02 ntpd[2200]: Listening on interface #16 vnet0, fe80::fc54:ff:fe44:83ec#123 Enabled

Confirm with clustat;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:28:21 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

This time, it should recover on an-node01;

virsh destroy vm0003-db
Domain vm0003-db destroyed

Looking in syslog, we can see the counter was tripped.

Jan  1 19:28:36 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:28:36 an-node02 kernel: device vnet0 left promiscuous mode
Jan  1 19:28:36 an-node02 kernel: vbr2: port 2(vnet0) entering disabled state
Jan  1 19:28:37 an-node02 ntpd[2200]: Deleting interface #16 vnet0, fe80::fc54:ff:fe44:83ec#123, interface stats: received=0, sent=0, dropped=0, active_time=47 secs
Jan  1 19:28:55 an-node02 rgmanager[2439]: status on vm "vm0003-db" returned 7 (unspecified)
Jan  1 19:28:56 an-node02 rgmanager[2439]: Stopping service vm:vm0003-db
Jan  1 19:28:56 an-node02 rgmanager[2439]: Service vm:vm0003-db is recovering
Jan  1 19:28:56 an-node02 rgmanager[2439]: Restart threshold for vm:vm0003-db exceeded; attempting to relocate
Jan  1 19:28:57 an-node02 rgmanager[2439]: Service vm:vm0003-db is now running on member 1

Again, this is confirmed with clustat.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:29:42 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node01.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

This test has passed as well! As before, migrate the VM back and we'll be ready to test the last VM.

clusvcadm -M vm:vm0003-db -m an-node02.alteeve.ca
Trying to migrate vm:vm0003-db to an-node02.alteeve.ca...Success
clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:30:32 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Done.

Failure Testing vm0004-ms

Warning: Windows is particularly sensitive to sudden reboots. This is the nature of MS Windows and is beyond the ability of the cluster to deal with. As such, be sure that you've created your recovery ISOs and taken reasonable precautions so that you can recover the guest after a hard shut down. That is, of course, what we're about to do here.

This is the last VM to test. This testing is repetitive and boring, but it is also critical. Good on you for sticking it out. Right then, let's get to it.

Confirm that vm0004-ms is on an-node02.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:43:41 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Good, we're ready. On an-node02, kill the VM.

virsh destroy vm0004-ms
Domain vm0004-ms destroyed

As we expect, an-node02 restarts the VM within a few seconds.

Jan  1 19:43:52 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:43:52 an-node02 kernel: device vnet1 left promiscuous mode
Jan  1 19:43:52 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:43:53 an-node02 ntpd[2200]: Deleting interface #11 vnet1, fe80::fc54:ff:fe5e:b147#123, interface stats: received=0, sent=0, dropped=0, active_time=9895 secs
Jan  1 19:44:06 an-node02 rgmanager[2439]: status on vm "vm0004-ms" returned 7 (unspecified)
Jan  1 19:44:07 an-node02 rgmanager[2439]: Stopping service vm:vm0004-ms
Jan  1 19:44:07 an-node02 rgmanager[2439]: Service vm:vm0004-ms is recovering
Jan  1 19:44:07 an-node02 rgmanager[2439]: Recovering failed service vm:vm0004-ms
Jan  1 19:44:08 an-node02 kernel: device vnet1 entered promiscuous mode
Jan  1 19:44:08 an-node02 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:44:08 an-node02 rgmanager[2439]: Service vm:vm0004-ms started
Jan  1 19:44:11 an-node02 ntpd[2200]: Listening on interface #18 vnet1, fe80::fc54:ff:fe5e:b147#123 Enabled
Jan  1 19:44:23 an-node02 kernel: vbr2: port 3(vnet1) entering forwarding state

Checking clustat, I can see the VM is back on-line.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:44:38 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Let's kill it for the second time.

virsh destroy vm0004-ms
Domain vm0004-ms destroyed

We can again see that an-node02 recovered it locally.

Jan  1 19:44:54 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:44:54 an-node02 kernel: device vnet1 left promiscuous mode
Jan  1 19:44:54 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:44:55 an-node02 ntpd[2200]: Deleting interface #18 vnet1, fe80::fc54:ff:fe5e:b147#123, interface stats: received=0, sent=0, dropped=0, active_time=44 secs
Jan  1 19:45:16 an-node02 rgmanager[2439]: status on vm "vm0004-ms" returned 7 (unspecified)
Jan  1 19:45:17 an-node02 rgmanager[2439]: Stopping service vm:vm0004-ms
Jan  1 19:45:17 an-node02 rgmanager[2439]: Service vm:vm0004-ms is recovering
Jan  1 19:45:17 an-node02 rgmanager[2439]: Recovering failed service vm:vm0004-ms
Jan  1 19:45:18 an-node02 kernel: device vnet1 entered promiscuous mode
Jan  1 19:45:18 an-node02 kernel: vbr2: port 3(vnet1) entering learning state
Jan  1 19:45:18 an-node02 rgmanager[2439]: Service vm:vm0004-ms started
Jan  1 19:45:21 an-node02 ntpd[2200]: Listening on interface #19 vnet1, fe80::fc54:ff:fe5e:b147#123 Enabled
Jan  1 19:45:33 an-node02 kernel: vbr2: port 3(vnet1) entering forwarding state

Confirm with clustat;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:46:17 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

This time, it should recover on an-node01;

virsh destroy vm0004-ms
Domain vm0004-ms destroyed

Looking in syslog, we can see the counter was tripped.

Jan  1 19:45:33 an-node02 kernel: vbr2: port 3(vnet1) entering forwarding state
Jan  1 19:46:30 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:46:30 an-node02 kernel: device vnet1 left promiscuous mode
Jan  1 19:46:30 an-node02 kernel: vbr2: port 3(vnet1) entering disabled state
Jan  1 19:46:32 an-node02 ntpd[2200]: Deleting interface #19 vnet1, fe80::fc54:ff:fe5e:b147#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Jan  1 19:46:36 an-node02 rgmanager[2439]: status on vm "vm0004-ms" returned 7 (unspecified)
Jan  1 19:46:37 an-node02 rgmanager[2439]: Stopping service vm:vm0004-ms
Jan  1 19:46:37 an-node02 rgmanager[2439]: Service vm:vm0004-ms is recovering
Jan  1 19:46:37 an-node02 rgmanager[2439]: Restart threshold for vm:vm0004-ms exceeded; attempting to relocate
Jan  1 19:46:38 an-node02 rgmanager[2439]: Service vm:vm0004-ms is now running on member 1

Indeed, this is confirmed with clustat.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:48:23 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node01.alteeve.ca          started

Wonderful! All four VMs fail and recover as we expected them to. Move the VM back and we're ready to crash the nodes!

clusvcadm -M vm:vm0004-ms -m an-node02.alteeve.ca
Trying to migrate vm:vm0004-ms to an-node02.alteeve.ca...Success
clustat
Cluster Status for an-cluster-A @ Sun Jan  1 19:49:32 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Done and done!

Failing and Recovery of an-node01

The final stage of testing is also the most brutal. We're going to hang an-node01 in such a way that it stops responding to messages from an-node02. Within a few seconds, an-node01 should be fenced, then shortly after the two lost VMs should boot up on an-node02.

This is a particularly important test for a somewhat non-obvious reason.

Note: It's one thing to migrate or boot VMs one at a time. The other VMs will not likely be under load, so the resources of the host should be more or less free for the VM being recovered. After a failure though, all lost VMs will be simultaneously recovered, taxing the host's resources to a greater extent. This test ensures that each node has sufficient resources to effectively recover the VMs simultaneously.
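
A rough way to sanity-check this ahead of time is to compare the total memory allocated to the guests against a single host's RAM. The commands below are a minimal sketch of that check, using the guest names from this tutorial; they simply print each guest's maximum memory and then the host's totals.

# Print the maximum memory allocated to each guest, then the host's memory for comparison.
for vm in vm0001-dev vm0002-web vm0003-db vm0004-ms; do
    echo -n "$vm: "
    virsh dominfo "$vm" | awk '/Max memory/ { print $3" "$4 }'
done
free -m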

We could just shut off an-node01, but we tested that earlier when we set up fencing. What we have not yet tested is how the cluster recovers from a hung node. To hang the host, we're going to trigger a special event in the kernel using the magic SysRq interface. We'll do this by writing the letter c to the /proc/sysrq-trigger file, which immediately crashes the kernel (and, where kdump is configured, captures a crash dump). The node should be fenced before a memory dump can complete, so don't expect to find anything in /var/crash unless your system is extremely fast.
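
As an aside, if you do want a dump to be captured before the fence lands, kdump must be installed and running on the node. A quick check, assuming the standard RHEL 6 kdump service, looks like this:

# Confirm the crash dump service is running, and list any dumps captured so far.
service kdump status
ls /var/crash/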

Warning: If you are skimming, take note! The next command will crash your node!

So, on an-node01, issue the following command to crash the node.

echo c > /proc/sysrq-trigger

This command will not return. Watching syslog on an-node02, we'll see output like this;

Jan  1 21:26:00 an-node02 kernel: block drbd1: PingAck did not arrive in time.
Jan  1 21:26:00 an-node02 kernel: block drbd1: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 ) 
Jan  1 21:26:00 an-node02 kernel: block drbd1: asender terminated
Jan  1 21:26:00 an-node02 kernel: block drbd1: Terminating asender thread
Jan  1 21:26:00 an-node02 kernel: block drbd1: Connection closed
Jan  1 21:26:00 an-node02 kernel: block drbd1: conn( NetworkFailure -> Unconnected ) 
Jan  1 21:26:00 an-node02 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver terminated
Jan  1 21:26:00 an-node02 kernel: block drbd1: Restarting receiver thread
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver (re)started
Jan  1 21:26:00 an-node02 kernel: block drbd1: conn( Unconnected -> WFConnection ) 
Jan  1 21:26:00 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:01 an-node02 kernel: block drbd2: PingAck did not arrive in time.
Jan  1 21:26:01 an-node02 kernel: block drbd2: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 ) 
Jan  1 21:26:01 an-node02 kernel: block drbd2: asender terminated
Jan  1 21:26:01 an-node02 kernel: block drbd2: Terminating asender thread
Jan  1 21:26:01 an-node02 kernel: block drbd2: Connection closed
Jan  1 21:26:01 an-node02 kernel: block drbd2: conn( NetworkFailure -> Unconnected ) 
Jan  1 21:26:01 an-node02 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2
Jan  1 21:26:01 an-node02 kernel: block drbd2: receiver terminated
Jan  1 21:26:01 an-node02 kernel: block drbd2: Restarting receiver thread
Jan  1 21:26:01 an-node02 kernel: block drbd2: receiver (re)started
Jan  1 21:26:01 an-node02 kernel: block drbd2: conn( Unconnected -> WFConnection ) 
Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 21:26:03 an-node02 kernel: block drbd0: PingAck did not arrive in time.
Jan  1 21:26:03 an-node02 kernel: block drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 ) 
Jan  1 21:26:03 an-node02 kernel: block drbd0: asender terminated
Jan  1 21:26:03 an-node02 kernel: block drbd0: Terminating asender thread
Jan  1 21:26:03 an-node02 kernel: block drbd0: Connection closed
Jan  1 21:26:03 an-node02 kernel: block drbd0: conn( NetworkFailure -> Unconnected ) 
Jan  1 21:26:03 an-node02 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0
Jan  1 21:26:03 an-node02 kernel: block drbd0: receiver terminated
Jan  1 21:26:03 an-node02 kernel: block drbd0: Restarting receiver thread
Jan  1 21:26:03 an-node02 kernel: block drbd0: receiver (re)started
Jan  1 21:26:03 an-node02 kernel: block drbd0: conn( Unconnected -> WFConnection ) 
Jan  1 21:26:03 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:03 an-node02 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 21:26:09 an-node02 corosync[1963]:   [TOTEM ] A processor failed, forming new configuration.
Jan  1 21:26:11 an-node02 corosync[1963]:   [QUORUM] Members[1]: 2
Jan  1 21:26:11 an-node02 corosync[1963]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan  1 21:26:11 an-node02 kernel: dlm: closing connection to node 1
Jan  1 21:26:11 an-node02 corosync[1963]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.2) ; members(old:2 left:1)
Jan  1 21:26:11 an-node02 corosync[1963]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jan  1 21:26:11 an-node02 fenced[2022]: fencing node an-node01.alteeve.ca
Jan  1 21:26:11 an-node02 kernel: GFS2: fsid=an-cluster-A:shared.0: jid=1: Trying to acquire journal lock...
Jan  1 21:26:14 an-node02 fence_node[15572]: fence an-node01.alteeve.ca success
Jan  1 21:26:14 an-node02 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1 exit code 7 (0x700)
Jan  1 21:26:14 an-node02 kernel: block drbd1: fence-peer helper returned 7 (peer was stonithed)
Jan  1 21:26:14 an-node02 kernel: block drbd1: pdsk( DUnknown -> Outdated ) 
Jan  1 21:26:14 an-node02 kernel: block drbd1: new current UUID 6355AAB258658E8F:4642D156D54731A1:5F8A6B05E2FCCE19:165E9B466805EC81
Jan  1 21:26:14 an-node02 kernel: block drbd1: susp( 1 -> 0 ) 
Jan  1 21:26:15 an-node02 fenced[2022]: fence an-node01.alteeve.ca success
Jan  1 21:26:15 an-node02 fence_node[15672]: fence an-node01.alteeve.ca success
Jan  1 21:26:15 an-node02 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0 exit code 7 (0x700)
Jan  1 21:26:15 an-node02 kernel: block drbd0: fence-peer helper returned 7 (peer was stonithed)
Jan  1 21:26:15 an-node02 kernel: block drbd0: pdsk( DUnknown -> Outdated ) 
Jan  1 21:26:15 an-node02 kernel: block drbd0: new current UUID C1F5EF16EE80E6C1:1B503B46E6650575:234E9A10EE04FDE7:7DBC4288E230DC9B
Jan  1 21:26:15 an-node02 kernel: block drbd0: susp( 1 -> 0 ) 
Jan  1 21:26:15 an-node02 fence_node[15627]: fence an-node01.alteeve.ca success
Jan  1 21:26:15 an-node02 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2 exit code 7 (0x700)
Jan  1 21:26:15 an-node02 kernel: block drbd2: fence-peer helper returned 7 (peer was stonithed)
Jan  1 21:26:15 an-node02 kernel: block drbd2: pdsk( DUnknown -> Outdated ) 
Jan  1 21:26:15 an-node02 kernel: block drbd2: new current UUID 1F79DE480F1E33C1:A674C3CB12017193:76118DDAE165C5FB:871F8081B7D527A9
Jan  1 21:26:15 an-node02 kernel: block drbd2: susp( 1 -> 0 ) 
Jan  1 21:26:16 an-node02 kernel: GFS2: fsid=an-cluster-A:shared.0: jid=1: Looking at journal...
Jan  1 21:26:16 an-node02 kernel: GFS2: fsid=an-cluster-A:shared.0: jid=1: Done
Jan  1 21:26:16 an-node02 rgmanager[2514]: Marking service:storage_an01 as stopped: Restricted domain unavailable
Jan  1 21:26:16 an-node02 rgmanager[2514]: Taking over service vm:vm0001-dev from down member an-node01.alteeve.ca
Jan  1 21:26:16 an-node02 rgmanager[2514]: Taking over service vm:vm0002-web from down member an-node01.alteeve.ca
Jan  1 21:26:17 an-node02 kernel: device vnet2 entered promiscuous mode
Jan  1 21:26:17 an-node02 kernel: vbr2: port 4(vnet2) entering learning state
Jan  1 21:26:17 an-node02 rgmanager[2514]: Service vm:vm0001-dev started
Jan  1 21:26:17 an-node02 kernel: device vnet3 entered promiscuous mode
Jan  1 21:26:17 an-node02 kernel: vbr2: port 5(vnet3) entering learning state
Jan  1 21:26:18 an-node02 rgmanager[2514]: Service vm:vm0002-web started
Jan  1 21:26:20 an-node02 ntpd[2275]: Listening on interface #12 vnet2, fe80::fc54:ff:fe9b:3cf7#123 Enabled
Jan  1 21:26:20 an-node02 ntpd[2275]: Listening on interface #13 vnet3, fe80::fc54:ff:fe65:3960#123 Enabled
Jan  1 21:26:27 an-node02 kernel: kvm: 16177: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 21:26:29 an-node02 kernel: kvm: 16118: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 21:26:32 an-node02 kernel: vbr2: port 4(vnet2) entering forwarding state
Jan  1 21:26:32 an-node02 kernel: vbr2: port 5(vnet3) entering forwarding state

Checking with clustat, we can confirm that all four VMs are now running on an-node02.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 21:28:00 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node02.alteeve.ca          started
 vm:vm0002-web                  an-node02.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Perfect! This is exactly why we built the cluster!

If we wait a few minutes, we'll see that the hung node has recovered.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 22:30:04 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node02.alteeve.ca          started       
 vm:vm0002-web                  an-node02.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Before we can push the VMs back though, we must make sure that the underlying DRBD resources have finished synchronizing.

Note: With four VMs, it will almost certainly take some time for the underlying resources to resync. Do not migrate the VMs until this has completed!
cat /proc/drbd
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:1182704 nr:1053880 dw:1052676 dr:1245848 al:0 bm:266 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:2087568 nr:362698 dw:366444 dr:2263316 al:9 bm:411 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:2098343 nr:1114307 dw:1065375 dr:2340421 al:10 bm:551 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

We're ready, so let's migrate vm0001-dev and vm0002-web back.

clusvcadm -M vm:vm0001-dev -m an-node01.alteeve.ca
Trying to migrate vm:vm0001-dev to an-node01.alteeve.ca...Success
clusvcadm -M vm:vm0002-web -m an-node01.alteeve.ca
Trying to migrate vm:vm0002-web to an-node01.alteeve.ca...Success

Confirm;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 22:37:10 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

There we have it. Successful crash and recovery of an-node01.

Discussing the syslog Messages

Let's step back and look at the syslog output; there are a few things to discuss.

The first thing to note is that, almost immediately after hanging an-node01, the initial messages come from DRBD, not the cluster. The loss of the peer in turn triggers DRBD's fence-handler script, obliterate-peer.sh. This is because DRBD is extremely sensitive to interruptions, even more so than the cluster itself. You will notice that DRBD reacted a full 9 seconds faster than the cluster.

The first thing DRBD does, upon realizing it has lost communication with its peer, is call a fence against the lost node. As mentioned, this is done via obliterate-peer.sh, which is itself a very simple wrapper around cman_tool and fence_node shell calls.

Jan  1 21:26:00 an-node02 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver terminated
Jan  1 21:26:00 an-node02 kernel: block drbd1: Restarting receiver thread
Jan  1 21:26:00 an-node02 kernel: block drbd1: receiver (re)started
Jan  1 21:26:00 an-node02 kernel: block drbd1: conn( Unconnected -> WFConnection ) 
Jan  1 21:26:00 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca

Here we see DRBD calling the handler (first message); shortly after, we see a log entry from obliterate-peer.sh (last entry). What you don't see is that right after that last message, obliterate-peer.sh goes into a 10-iteration loop where it calls fence_node against its peer.

Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: Local node ID: 2 / Remote node: an-node01.alteeve.ca
Jan  1 21:26:01 an-node02 /sbin/obliterate-peer.sh: kill node failed: Invalid argument

The fence_node call runs in the background, so the obliterate-peer.sh script goes into a short sleep before trying again (and again...). These subsequent calls generate the kill node failed: Invalid argument message because the first call is already in the process of fencing the node, so they are safe to ignore. The important part is that this error message didn't follow the first entry.

Jan  1 21:26:15 an-node02 fenced[2022]: fence an-node01.alteeve.ca success

This is what matters. Here we see that the fence succeeded and the hung node was indeed fenced.
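
To picture the retry behaviour described above, the loop is roughly equivalent to the sketch below. This is an illustration only, not the actual /sbin/obliterate-peer.sh; among other things, the real script determines the peer's name itself rather than having it hard-coded.

# Illustrative sketch of the fence retry loop; not the real obliterate-peer.sh.
REMOTE="an-node01.alteeve.ca"    # the real script works this out via cman_tool
for i in $(seq 1 10); do
    fence_node "$REMOTE" &       # runs in the background, as noted above
    sleep 5                      # short pause before trying again
done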

Failing and Recovery of an-node02

With everything back in place, we'll hang an-node02 and ensure that its VMs will recover on an-node01.

As always, check the current state.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 22:53:43 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Now hang an-node02.

echo c > /proc/sysrq-trigger

As before, that command will not return. If we check an-node01's syslog though, we should see that the node is fenced and the lost VMs are recovered.

Jan  1 22:56:14 an-node01 kernel: block drbd1: PingAck did not arrive in time.
Jan  1 22:56:14 an-node01 kernel: block drbd1: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 ) 
Jan  1 22:56:15 an-node01 kernel: block drbd1: asender terminated
Jan  1 22:56:15 an-node01 kernel: block drbd1: Terminating asender thread
Jan  1 22:56:15 an-node01 kernel: block drbd1: Connection closed
Jan  1 22:56:15 an-node01 kernel: block drbd1: conn( NetworkFailure -> Unconnected ) 
Jan  1 22:56:15 an-node01 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1
Jan  1 22:56:15 an-node01 kernel: block drbd1: receiver terminated
Jan  1 22:56:15 an-node01 kernel: block drbd1: Restarting receiver thread
Jan  1 22:56:15 an-node01 kernel: block drbd1: receiver (re)started
Jan  1 22:56:15 an-node01 kernel: block drbd1: conn( Unconnected -> WFConnection ) 
Jan  1 22:56:15 an-node01 /sbin/obliterate-peer.sh: Local node ID: 1 / Remote node: an-node02.alteeve.ca
Jan  1 22:56:19 an-node01 kernel: block drbd0: PingAck did not arrive in time.
Jan  1 22:56:19 an-node01 kernel: block drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 ) 
Jan  1 22:56:19 an-node01 kernel: block drbd0: asender terminated
Jan  1 22:56:19 an-node01 kernel: block drbd0: Terminating asender thread
Jan  1 22:56:19 an-node01 kernel: block drbd0: Connection closed
Jan  1 22:56:19 an-node01 kernel: block drbd0: conn( NetworkFailure -> Unconnected ) 
Jan  1 22:56:19 an-node01 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0
Jan  1 22:56:19 an-node01 kernel: block drbd0: receiver terminated
Jan  1 22:56:19 an-node01 kernel: block drbd0: Restarting receiver thread
Jan  1 22:56:19 an-node01 kernel: block drbd0: receiver (re)started
Jan  1 22:56:19 an-node01 kernel: block drbd0: conn( Unconnected -> WFConnection ) 
Jan  1 22:56:19 an-node01 /sbin/obliterate-peer.sh: Local node ID: 1 / Remote node: an-node02.alteeve.ca
Jan  1 22:56:19 an-node01 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 22:56:21 an-node01 kernel: block drbd2: PingAck did not arrive in time.
Jan  1 22:56:21 an-node01 kernel: block drbd2: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 0 -> 1 ) 
Jan  1 22:56:21 an-node01 kernel: block drbd2: asender terminated
Jan  1 22:56:21 an-node01 kernel: block drbd2: Terminating asender thread
Jan  1 22:56:21 an-node01 kernel: block drbd2: Connection closed
Jan  1 22:56:21 an-node01 kernel: block drbd2: conn( NetworkFailure -> Unconnected ) 
Jan  1 22:56:21 an-node01 kernel: block drbd2: receiver terminated
Jan  1 22:56:21 an-node01 kernel: block drbd2: Restarting receiver thread
Jan  1 22:56:21 an-node01 kernel: block drbd2: receiver (re)started
Jan  1 22:56:21 an-node01 kernel: block drbd2: conn( Unconnected -> WFConnection ) 
Jan  1 22:56:21 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2
Jan  1 22:56:21 an-node01 /sbin/obliterate-peer.sh: Local node ID: 1 / Remote node: an-node02.alteeve.ca
Jan  1 22:56:21 an-node01 /sbin/obliterate-peer.sh: kill node failed: Invalid argument
Jan  1 22:56:22 an-node01 corosync[1958]:   [TOTEM ] A processor failed, forming new configuration.
Jan  1 22:56:24 an-node01 corosync[1958]:   [QUORUM] Members[1]: 1
Jan  1 22:56:24 an-node01 corosync[1958]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan  1 22:56:24 an-node01 kernel: dlm: closing connection to node 2
Jan  1 22:56:24 an-node01 corosync[1958]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.0.1) ; members(old:2 left:1)
Jan  1 22:56:24 an-node01 corosync[1958]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jan  1 22:56:24 an-node01 fenced[2014]: fencing node an-node02.alteeve.ca
Jan  1 22:56:24 an-node01 kernel: GFS2: fsid=an-cluster-A:shared.1: jid=0: Trying to acquire journal lock...
Jan  1 22:56:28 an-node01 fenced[2014]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 fence_node[638]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 kernel: block drbd2: helper command: /sbin/drbdadm fence-peer minor-2 exit code 7 (0x700)
Jan  1 22:56:29 an-node01 kernel: block drbd2: fence-peer helper returned 7 (peer was stonithed)
Jan  1 22:56:29 an-node01 kernel: block drbd2: pdsk( DUnknown -> Outdated ) 
Jan  1 22:56:29 an-node01 kernel: block drbd2: new current UUID 207F7C9279067EC1:3EEB0F756A6A289F:FD92DAC355F53A93:FD91DAC355F53A93
Jan  1 22:56:29 an-node01 kernel: block drbd2: susp( 1 -> 0 ) 
Jan  1 22:56:29 an-node01 fence_node[518]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 kernel: block drbd1: helper command: /sbin/drbdadm fence-peer minor-1 exit code 7 (0x700)
Jan  1 22:56:29 an-node01 kernel: block drbd1: fence-peer helper returned 7 (peer was stonithed)
Jan  1 22:56:29 an-node01 kernel: block drbd1: pdsk( DUnknown -> Outdated ) 
Jan  1 22:56:29 an-node01 kernel: block drbd1: new current UUID C65C044AE682D8C5:67D512BD61B70265:C1947DF86E910F8B:C1937DF86E910F8B
Jan  1 22:56:29 an-node01 kernel: block drbd1: susp( 1 -> 0 ) 
Jan  1 22:56:29 an-node01 rgmanager[2507]: Marking service:storage_an02 as stopped: Restricted domain unavailable
Jan  1 22:56:29 an-node01 fence_node[583]: fence an-node02.alteeve.ca success
Jan  1 22:56:29 an-node01 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0 exit code 7 (0x700)
Jan  1 22:56:29 an-node01 kernel: block drbd0: fence-peer helper returned 7 (peer was stonithed)
Jan  1 22:56:29 an-node01 kernel: block drbd0: pdsk( DUnknown -> Outdated ) 
Jan  1 22:56:29 an-node01 kernel: block drbd0: new current UUID 295A00166167B5C3:A3F3889ECF7247F5:30313B4AFFF6F82B:30303B4AFFF6F82B
Jan  1 22:56:29 an-node01 kernel: block drbd0: susp( 1 -> 0 ) 
Jan  1 22:56:29 an-node01 kernel: GFS2: fsid=an-cluster-A:shared.1: jid=0: Looking at journal...
Jan  1 22:56:30 an-node01 kernel: GFS2: fsid=an-cluster-A:shared.1: jid=0: Done
Jan  1 22:56:30 an-node01 rgmanager[2507]: Taking over service vm:vm0003-db from down member an-node02.alteeve.ca
Jan  1 22:56:30 an-node01 rgmanager[2507]: Taking over service vm:vm0004-ms from down member an-node02.alteeve.ca
Jan  1 22:56:30 an-node01 kernel: device vnet2 entered promiscuous mode
Jan  1 22:56:30 an-node01 kernel: vbr2: port 4(vnet2) entering learning state
Jan  1 22:56:30 an-node01 rgmanager[2507]: Service vm:vm0003-db started
Jan  1 22:56:31 an-node01 kernel: device vnet3 entered promiscuous mode
Jan  1 22:56:31 an-node01 kernel: vbr2: port 5(vnet3) entering learning state
Jan  1 22:56:31 an-node01 rgmanager[2507]: Service vm:vm0004-ms started
Jan  1 22:56:34 an-node01 ntpd[2267]: Listening on interface #12 vnet3, fe80::fc54:ff:fe5e:b147#123 Enabled
Jan  1 22:56:34 an-node01 ntpd[2267]: Listening on interface #13 vnet2, fe80::fc54:ff:fe44:83ec#123 Enabled
Jan  1 22:56:40 an-node01 kernel: kvm: 1074: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xabcd
Jan  1 22:56:45 an-node01 kernel: vbr2: port 4(vnet2) entering forwarding state
Jan  1 22:56:46 an-node01 kernel: vbr2: port 5(vnet3) entering forwarding state

Checking clustat;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 22:57:36 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Offline

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           (an-node02.alteeve.ca)        stopped
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node01.alteeve.ca          started
 vm:vm0004-ms                   an-node01.alteeve.ca          started

All four VMs are back up and running on an-node01!

Within a few moments, we should see that an-node02 has rejoined the cluster.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 23:00:43 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node01.alteeve.ca          started
 vm:vm0004-ms                   an-node01.alteeve.ca          started

Now we'll wait for the backing DRBD resources to be in sync.

cat /proc/drbd
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:272884 dw:271744 dr:5700 al:0 bm:25 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:780928
	[====>...............] sync'ed: 26.4% (780928/1052672)K
	finish: 0:10:02 speed: 1,284 (1,280) want: 250 K/sec
 1: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:272196 dw:271048 dr:3688 al:0 bm:45 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:122292
	[=============>......] sync'ed: 70.2% (122292/393216)K
	finish: 0:01:31 speed: 1,328 (1,276) want: 250 K/sec
 2: cs:SyncTarget ro:Primary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:273426 dw:272258 dr:3636 al:0 bm:47 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:781500
	[====>...............] sync'ed: 26.4% (781500/1052760)K
	finish: 0:09:49 speed: 1,308 (1,284) want: 250 K/sec

(time passes)

cat /proc/drbd
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:1053812 dw:1052672 dr:6964 al:0 bm:74 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:394560 dw:393412 dr:4988 al:0 bm:70 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:1055190 dw:1054022 dr:4936 al:0 bm:167 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Now we're ready to migrate vm0003-db and vm0004-ms back to an-node02.

clusvcadm -M vm:vm0003-db -m an-node02.alteeve.ca
Trying to migrate vm:vm0003-db to an-node02.alteeve.ca...Success
clusvcadm -M vm:vm0004-ms -m an-node02.alteeve.ca
Trying to migrate vm:vm0004-ms to an-node02.alteeve.ca...Success

A final check;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 23:08:06 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

Good!

Complete Cold Shut Down And Cold Starting The Cluster

The final testing is now complete. There is one final task to cover though; "Cold Shut Down" and "Cold Start" of the cluster. This involves shutting down all VMs, stopping rgmanager and cman on both nodes, then powering off both nodes.

The cold-start process involves simply powering both nodes on within the set post_join_delay, then manually enabling the four VMs.
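
For reference, the whole cold-stop sequence we're about to walk through could be condensed into a short script like the sketch below, assuming it is run from a workstation that can SSH to both nodes as root. The step-by-step version follows.

# Disable all four VMs, then stop the cluster software and power off each node.
ssh root@an-node01 'for vm in vm0001-dev vm0002-web vm0003-db vm0004-ms; do clusvcadm -d vm:$vm; done'
for node in an-node01 an-node02; do
    ssh root@$node '/etc/init.d/rgmanager stop && /etc/init.d/cman stop && poweroff'
done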

Stopping All VMs

Check the status as always;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 23:13:24 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  an-node01.alteeve.ca          started
 vm:vm0002-web                  an-node01.alteeve.ca          started
 vm:vm0003-db                   an-node02.alteeve.ca          started
 vm:vm0004-ms                   an-node02.alteeve.ca          started

All four VMs are up, so we'll stop all of them.

Note: You might want to get into the habit of disabling the Windows machines first, then connecting to them over RDP or using virt-manager to ensure that they have actually started to power down. If they haven't, shut them down from within the OS (see the sketch after the commands below).
clusvcadm -d vm:vm0001-dev
Local machine disabling vm:vm0001-dev...Success
clusvcadm -d vm:vm0002-web
Local machine disabling vm:vm0002-web...Success
clusvcadm -d vm:vm0003-db
Local machine disabling vm:vm0003-db...Success
clusvcadm -d vm:vm0004-ms
Local machine disabling vm:vm0004-ms...Success
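
While the Windows guest winds down, you can keep an eye on it from the node it was running on; the check below is a simple sketch using virsh (run it on an-node02 in this case).

# Refresh the guest list every five seconds; vm0004-ms should soon stop showing as "running".
watch -n 5 virsh list --all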

Confirm;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 23:17:29 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:storage_an01           an-node01.alteeve.ca          started
 service:storage_an02           an-node02.alteeve.ca          started
 vm:vm0001-dev                  (an-node01.alteeve.ca)        disabled
 vm:vm0002-web                  (an-node01.alteeve.ca)        disabled
 vm:vm0003-db                   (an-node02.alteeve.ca)        disabled
 vm:vm0004-ms                   (an-node02.alteeve.ca)        disabled

Good, we can now stop rgmanager on both nodes.

Shutting Down The Cluster Entirely

Note: It can sometimes take a minute or two for rgmanager to stop. Please be patient.

On an-node01;

/etc/init.d/rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]

On an-node02;

/etc/init.d/rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]

Now stop cman on both nodes.

On an-node01;

/etc/init.d/cman stop
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

On an-node02;

/etc/init.d/cman stop
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

We're down, we can safely power off the nodes now.

poweroff
Broadcast message from root@an-node01.alteeve.ca
	(/dev/pts/0) at 23:22 ...

The system is going down for power off NOW!

Cold-Stop achieved!

Cold-Starting The Cluster

Note: It is important to power on both nodes within post_join_delay seconds. Otherwise, the slower node will be fenced and the boot process will take longer than it needs to.
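
If you've forgotten what post_join_delay was set to, it lives in the <fence_daemon ...> tag of cluster.conf, so a quick grep will remind you.

grep fence_daemon /etc/cluster/cluster.conf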

Power on both nodes. You can just hit the power button, or if you have a workstation on the BCN with fence-agents installed, you can call fence_ipmilan (or the agent you use in your cluster).

fence_ipmilan -a an-node01.ipmi -l root -p secret -o on
Powering on machine @ IPMI:an-node01.ipmi...Done
fence_ipmilan -a an-node02.ipmi -l root -p secret -o on
Powering on machine @ IPMI:an-node02.ipmi...Done

Once they're up, log into them again and check their status. You will see that the VMs are off-line.

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 23:40:16 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, Local, rgmanager
 an-node02.alteeve.ca                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  (none)                         disabled      
 vm:vm0002-web                  (none)                         disabled      
 vm:vm0003-db                   (none)                         disabled      
 vm:vm0004-ms                   (none)                         disabled

Check that DRBD is ready;

cat /proc/drbd
version: 8.3.12 (api:88/proto:86-96)
GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by dag@Build64R6, 2011-11-20 10:57:03
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:4 nr:0 dw:0 dr:8712 al:0 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:4632 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
 2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:4648 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Golden, let's start the VMs.

clusvcadm -e vm:vm0001-dev -m an-node01.alteeve.ca
vm:vm0001-dev is now running on an-node01.alteeve.ca
clusvcadm -e vm:vm0002-web -m an-node01.alteeve.ca
vm:vm0002-web is now running on an-node01.alteeve.ca
clusvcadm -e vm:vm0003-db -m an-node02.alteeve.ca
vm:vm0003-db is now running on an-node02.alteeve.ca
clusvcadm -e vm:vm0004-ms -m an-node02.alteeve.ca
vm:vm0004-ms is now running on an-node02.alteeve.ca

Check the new status;

clustat
Cluster Status for an-cluster-A @ Sun Jan  1 23:45:35 2012
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 an-node01.alteeve.ca                       1 Online, rgmanager
 an-node02.alteeve.ca                       2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:storage_an01           an-node01.alteeve.ca          started       
 service:storage_an02           an-node02.alteeve.ca          started       
 vm:vm0001-dev                  an-node01.alteeve.ca          started       
 vm:vm0002-web                  an-node01.alteeve.ca          started       
 vm:vm0003-db                   an-node02.alteeve.ca          started       
 vm:vm0004-ms                   an-node02.alteeve.ca          started

We're back up and running!

Done and Done!

That, ladies and gentlemen, is all she wrote!

You should now be ready to safely take your cluster into production.

Happy Hacking!

Troubleshooting

The troubleshooting section seems to have pushed MediaWiki beyond its single-article length limit. For this reason, it has been moved to its own page.

Disabling rsyslog Rate Limiting

Please see;


Any questions, feedback, advice, complaints or meanderings are welcome.