Red Hat Cluster Service 2 Tutorial - Archive

Overview

This paper has one goal;

  • Creating a 2-node, high-availability cluster hosting Xen virtual machines using RHCS "stable 2" using DRBD for synchronized storage.

Technologies We Will Use

  • Enterprise Linux 5; specifically we will be using CentOS v5.5.
  • Red Hat Cluster Services "Stable" version 2. This describes the following core components:
    • OpenAIS; Provides cluster communications using the totem protocol.
    • Cluster Manager (cman); Manages the starting, stopping and managing of the cluster.
    • Resource Manager (rgmanager); Manages cluster resources and services. Handles service recovery during failures.
    • Cluster Logical Volume Manager (clvm); Cluster-aware (disk) volume manager. Backs GFS2 filesystems and Xen virtual machines.
    • Global File Systems version 2 (gfs2); Cluster-aware, concurrently mountable file system.
  • Distributed Replicated Block Device (DRBD); Keeps shared data synchronized across cluster nodes.
  • Xen; Hypervisor that controls and supports virtual machines.

A Note on Patience

There is nothing inherently hard about clustering. However, there are many components that you need to understand before you can begin. The result is that clustering has an inherently steep learning curve.

You must have patience. Lots of it.

Many technologies can be learned by creating a very simple base and then building on it. The classic "Hello, World!" script created when first learning a programming language is an example of this. Unfortunately, there is no real analog to this in clustering. Even the most basic cluster requires several pieces be in place and working together. If you try to rush by ignoring pieces you think are not important, you will almost certainly waste time. A good example is setting aside fencing, thinking that your test cluster's data isn't important. The cluster software has no concept of "test". It treats everything as critical all the time and will shut down if anything goes wrong.

Take your time, work through these steps, and you will have your foundation cluster running sooner than you realize. Clustering is fun because it is a challenge.

Prerequisites

It is assumed that you are familiar with Linux systems administration, specifically Red Hat Enterprise Linux and its derivatives. You will need somewhat advanced networking experience as well. You should be comfortable working in a terminal (directly or over ssh). Familiarity with XML will help, but is not strictly required, as its use here is fairly self-evident.

If you feel a little out of your depth at times, don't hesitate to set this tutorial aside. Branch over to the components you feel the need to study more, then return and continue on. Finally, and perhaps most importantly, you must have patience! If you have a manager asking you to "go live" with a cluster in a month, tell him or her that it simply won't happen. If you rush, you will skip important points and you will fail. Patience is vastly more important than any pre-existing skill.

Focus and Goal

There is a different cluster for every problem. Generally speaking though, there are two main problems that clusters try to resolve; Performance and High Availability. Performance clusters are generally tailored to the application requiring the performance increase. There are some general tools for performance clustering, like Red Hat's LVS (Linux Virtual Server) for load-balancing common applications like the Apache web-server.

This tutorial will focus on High Availability clustering, often shortened to simply HA and not to be confused with the Linux-HA "heartbeat" cluster suite, which we will not be using here. The cluster will provide shared file systems and high availability for Xen-based virtual servers. The goal will be to have the virtual servers live-migrate during planned node outages and automatically restart on a surviving node when the original host node fails.

A very brief overview;

High Availability clusters like ours have two main parts; Cluster management and resource management.

The cluster itself is responsible for maintaining the cluster nodes in a group. This group is part of a "Closed Process Group", or CPG. When a node fails, the cluster manager must detect the failure, reliably eject the node from the cluster and reform the CPG. Each time the cluster changes, or "re-forms", the resource manager is called. The resource manager checks to see how the cluster changed, consults its configuration and determines what to do, if anything.

The details of all this will be discussed a little later on. For now, it is sufficient to keep these two major roles in mind and understand that they are somewhat independent entities.

Platform

This tutorial was written using CentOS version 5.5, x86_64. No attempt was made to test on i686 or other EL5 derivatives. That said, there is no reason to believe that this tutorial will not apply to any variant. As much as possible, the language will be distro-agnostic. For reasons of memory constraints, it is advised that you use an x86_64 (64-bit) platform if at all possible.

Do note that as of EL5.4 and above, significant changes were made to how RHCS is supported. It is strongly advised that you use version 5.4 or newer while working with this tutorial.

A Word On Complexity

Clustering is not inherently hard, but it is inherently complex. Consider;

  • Any given program has N bugs.
    • RHCS uses; cman, openais, totem, fenced, rgmanager, dlm, qdisk and GFS2.
    • We will be adding DRBD, CLVM and Xen.
    • Right there, we have N^11 possible bugs. We'll call this A.
  • A cluster has Y nodes.
    • In our case, 2 nodes, each with 3 networks.
    • The network infrastructure (Switches, routers, etc). If you use managed switches, add another layer of complexity.
    • This gives us another Y^(2*3), and then ^2 again for managed switches. We'll call this B.
  • Let's add the human factor. Let's say that a person needs roughly 5 years of cluster experience to be considered an expert. For each year less than this, add a Z "oops" factor, (5-Z)^2. We'll call this C.
  • So, finally, add up the complexity, using this tutorial's layout, 0-years of experience and managed switches.
    • (N^11) * (Y^(2*3)^2) * ((5-0)^2) == (A * B * C) == an-unknown-but-big-number.

This isn't meant to scare you away, but it is meant to be a sobering statement. Obviously, those numbers are somewhat artificial, but the point remains.

Any one piece is easy to understand, thus, clustering is inherently easy. However, given the large number of variables, you must really understand all the pieces and how they work together. DO NOT think that you will have this mastered and working in a month. Certainly don't try to sell clusters as a service without a lot of internal testing.

Clustering is kind of like chess. The rules are pretty straightforward, but the complexity can take some time to master.

Overview of Components

When looking at a cluster, there is a tendency to want to dive right into the configuration file. That is not very useful in clustering.

  • When you look at the configuration file, it is quite short.

It isn't like most applications or technologies though. Most of us learn by taking something, like a configuration file, and tweaking it this way and that to see what happens. I tried that with clustering and learned only what it was like to bang my head against the wall.

  • Understanding the parts and how they work together is critical.

You will find that the discussion on the components of clustering, and how those components and concepts interact, will be much longer than the initial configuration. It is true that we could talk very briefly about the actual syntax, but it would be a disservice. Please, don't rush through the next section or, worse, skip it and go right to the configuration. You will waste far more time than you will save.

  • Clustering is easy, but it has a complex web of inter-connectivity. You must grasp this network if you want to be an effective cluster administrator!

Component; cman

This was, traditionally, the cluster manager. In the 3.0 series, it acts mainly as a quorum provider, tallying votes and deciding whether the cluster is quorate. In the 3.1 series, cman will be removed entirely.

Component; openais / corosync

OpenAIS is the heart of the cluster. All other components operate through it, and no cluster component can work without it. Further, it is shared between both Pacemaker and RHCS clusters.

In Red Hat clusters, openais is configured via the central cluster.conf file. In Pacemaker clusters, it is configured directly in openais.conf. As we will be building an RHCS cluster, we will only use cluster.conf. That said, (almost?) all openais.conf options are available in cluster.conf. This is important to note, as you will see references to both configuration files when searching the Internet.

A Little History

There were significant changes between RHCS version 2, which we are using, and version 3 available on EL6 and recent Fedoras.

In RHCS version 2, there was a component called openais which handled totem. The OpenAIS project was designed to be the heart of the cluster and was based around the Service Availability Forum's Application Interface Specification. AIS is an open API designed to provide inter-operable high availability services.

In 2008, it was decided that the AIS specification was overkill for most clustering needs. The core membership and messaging functionality was split out into the new, easier to maintain corosync project, and OpenAIS became a separate project acting as an optional add-on to corosync for users who want AIS functionality.

You will see a lot of references to OpenAIS while searching the web for information on clustering. Understanding its evolution will hopefully help you avoid confusion.

Concept; quorum

Quorum is defined as a collection of machines and devices in a cluster with a clear majority of votes.

The idea behind quorum is that whichever group of machines has it can safely start clustered services, even when defined members are not accessible.

Take this scenario;

  • You have a cluster of four nodes, each with one vote.
    • The cluster's expected_votes is 4. A clear majority, in this case, is 3 because (4/2)+1 is 3.
    • Now imagine that there is a failure in the network equipment and one of the nodes disconnects from the rest of the cluster.
    • You now have two partitions; One partition contains three machines and the other partition has one.
    • The three machines will have quorum, and the other machine will lose quorum.
    • The partition with quorum will reconfigure and continue to provide cluster services.
    • The partition without quorum will withdraw from the cluster and shut down all cluster services.

This behaviour acts as a guarantee that the two partitions will never try to access the same clustered resources, like a shared filesystem, thus guaranteeing the safety of those shared resources.

This also helps explain why an even 50% is not enough to have quorum, a common question for people new to clustering. Using the above scenario, imagine if the split were 2 nodes and 2 nodes. Because either side can't be sure what the other would do, neither can safely proceed. If we allowed an even 50% to have quorum, both partitions might try to take over the clustered services and disaster would soon follow.

There is one, and only one, exception to this rule.

In the case of a two node cluster, as we will be building here, any failure results in a 50/50 split. If we enforced quorum in a two-node cluster, there would never be high availability, because any failure would cause both nodes to withdraw. The risk with this exception is that we now place the entire safety of the cluster on fencing, a concept we will cover in a moment. Fencing is a second line of defense and something we are loath to rely on alone.

Even in a two-node cluster though, proper quorum can be maintained by using a quorum disk, called a qdisk. This is another topic we will touch on in a moment. Note, though, that as discussed in the qdisk component section below, qdisk does not work well on DRBD, so this tutorial will not actually implement one and will instead rely on the two_node exception.
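
To make the two_node exception concrete, here is a minimal sketch of how it is normally expressed in cluster.conf. The cluster and node names are illustrative only; the real configuration file is built later in this tutorial.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="1">
	<!-- Allow the cluster to remain quorate with only one of the two votes present. -->
	<cman two_node="1" expected_votes="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1" votes="1"/>
		<clusternode name="an-node02.alteeve.com" nodeid="2" votes="1"/>
	</clusternodes>
</cluster>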

Concept; Virtual Synchrony

All cluster operations have to occur in the same order across all nodes. This concept is called "virtual synchrony", and it is provided by openais using "closed process groups", CPG.

Let's look at how locks are handled on clustered file systems as an example.

  • As various nodes want to work on files, they send a lock request to the cluster. When they are done, they send a lock release to the cluster.
    • Lock and unlock messages must arrive in the same order to all nodes, regardless of the real chronological order that they were issued.
  • Let's say one node sends out messages "a1 a2 a3 a4". Meanwhile, the other node sends out "b1 b2 b3 b4".
    • All of these messages go to openais, which gathers them up and sorts them.
    • It is totally possible that openais will get the messages as "a2 b1 b2 a1 b3 a3 a4 b4".
    • The openais application will then ensure that all nodes get the messages in the above order, one at a time. All nodes must confirm that they got a given message before the next message is sent to any node.

This will tie into fencing and totem, as we'll see in the next sections.

Concept; Fencing

Fencing is an absolutely critical part of clustering. Without fully working fence devices, your cluster will fail.

Was that strong enough, or should I say that again? Let's be safe:

DO NOT BUILD A CLUSTER WITHOUT PROPER, WORKING AND TESTED FENCING.

Sorry, I promise that this will be the only time that I speak so strongly. Fencing really is critical, and explaining the need for fencing is nearly a weekly event. So then, let's discuss fencing.

When a node stops responding, an internal timeout and counter start ticking away. During this time, no messages are moving through the cluster and the cluster is, essentially, hung. If the node responds in time, the timeout and counter reset and the cluster begins operating properly again.

If, on the other hand, the node does not respond in time, the node will be declared dead. The cluster will take a "head count" to see which nodes it still has contact with, and will then determine if there are enough to have quorum. If so, the cluster will issue a "fence" against the silent node. This is a call to a program called fenced, the fence daemon.

The fence daemon will look at the cluster configuration and get the fence devices configured for the dead node. Then, one at a time and in the order that they appear in the configuration, the fence daemon will call those fence devices, via their fence agents, passing to the fence agent any configured arguments like username, password, port number and so on. If the first fence agent returns a failure, the next fence agent will be called. If the second fails, the third will be called, then the fourth and so on. Once the last (or perhaps only) fence device fails, the fence daemon will retry, starting back at the top of the list. It will do this indefinitely until one of the fence devices succeeds.

Here's the flow, in point form:

  • The openais program collects messages and sends them off, one at a time, to all nodes.
  • All nodes respond, and the next message is sent. Repeat continuously during normal operation.
  • Suddenly, one node stops responding.
    • Communication freezes while the cluster waits for the silent node.
    • A timeout starts (300ms by default), and each time the timeout is hit, an error counter increments.
    • The silent node responds before the counter reaches the limit.
      • The counter is reset to 0
      • The cluster operates normally again.
  • Again, one node stops responding.
    • Again, the timeout begins and the error count increments each time the timeout is reached.
    • This time the error count exceeds the limit (10 is the default); three seconds have passed (300ms * 10).
    • The node is declared dead.
    • The cluster checks which members it still has, and if that provides enough votes for quorum.
      • If there are too few votes for quorum, the cluster software freezes and the node(s) withdraw from the cluster.
      • If there are enough votes for quorum, the silent node is declared dead.
        • openais calls fenced, telling it to fence the node.
        • Which fence device(s) to use, that is, which fence_agent to call and what arguments to pass, is gathered from the configuration.
        • For each configured fence device:
          • The agent is called and fenced waits for the fence_agent to exit.
          • The fence_agent's exit code is examined. If it's a success, recovery starts. If it failed, the next configured fence agent is called.
        • If all of the configured fence devices fail (or the only one fails), fenced will start over.
        • fenced will wait and loop forever until a fence agent succeeds. During this time, the cluster is hung.
    • Once a fence_agent succeeds, the cluster is reconfigured.
      • A new closed process group (cpg) is formed.
      • A new fence domain is formed.
      • Lost cluster resources are recovered as per rgmanager's configuration (including file system recovery as needed).
      • Normal cluster operation is restored.

This skipped a few key things, but the general flow of logic should be there.

This is why fencing is so important. Without a properly configured and tested fence device or devices, the cluster will never successfully fence and the cluster will stay hung forever.
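
To give a rough idea of what fenced reads, here is a hedged sketch of how an IPMI-based fence device might be declared in cluster.conf. The device name, IP address and credentials are made up for illustration; the actual fence configuration for this cluster is covered later.

<clusternode name="an-node01.alteeve.com" nodeid="1" votes="1">
	<fence>
		<method name="ipmi">
			<device name="an01_ipmi"/>
		</method>
	</fence>
</clusternode>
...
<fencedevices>
	<fencedevice name="an01_ipmi" agent="fence_ipmilan" ipaddr="192.168.3.101" login="admin" passwd="secret"/>
</fencedevices>

When fenced is told to fence an-node01, it looks up the "ipmi" method, finds the an01_ipmi device and calls the fence_ipmilan agent with the listed arguments.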

Component; totem

The totem protocol defines message passing within the cluster and is used by openais. A token is passed around all the nodes in the cluster, and the timeout discussed in fencing above is actually a token timeout. The counter, then, is the number of lost tokens that are allowed before a node is considered dead.

The totem protocol supports something called 'rrp', Redundant Ring Protocol. Through rrp, you can add a second backup ring on a separate network to take over in the event of a failure in the first ring. In RHCS, these rings are known as "ring 0" and "ring 1".
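
For reference, the token timeout and retransmit count discussed above can be tuned through the totem tag in cluster.conf. This is a hedged sketch with illustrative values only; the defaults are usually sensible, so treat any tuning as an assumption to be tested.

<totem token="3000" token_retransmits_before_loss_const="10"/>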

Component; rgmanager

When the cluster configuration changes, openais calls rgmanager, the resource group manager. It will examine what changed and then will start, stop, migrate or recover cluster resources as needed.

Component; qdisk

If you have a cluster of 2 to 16 nodes, you can use a quorum disk. This is a small partition on a shared storage device that the cluster can use to make much better decisions about which nodes should have quorum when a split in the network happens.

Sadly, qdisk does not work well on DRBD, so we will not be using it in this tutorial. It is still worth knowing about though.

The way a qdisk works, at its most basic, is to have one or more votes in quorum. Generally, but not always, the qdisk device has one vote less than the total number of nodes (N-1).

  • In a two node cluster, the qdisk would have one vote.
  • In a seven node cluster, the qdisk would have six votes.

Imagine these two scenarios; first without qdisk, then revisited to see how qdisk helps.

  • First Scenario; A two node cluster, which we will implement here.

If the network connection on the totem ring(s) breaks, you will enter into a dangerous state called a "split-brain". Normally, this can't happen because quorum can only be held by one side at a time. In a two_node cluster though, this is allowed.

Without a qdisk, either node could potentially start the cluster resources. This is a disastrous possibility, and it is avoided by a fence duel. Both nodes will try to fence the other at the same time, but only the fastest one wins. The idea behind this is that one will always live, because the other will die before it can get its fence call out. In theory, this works fine. In practice though, there are cases where fence calls can be "queued", in fact allowing both nodes to die. This defeats the whole "high availability" thing, now doesn't it? Also, this possibility is why the two_node option is the only exception to the quorum rules.

So how does a qdisk help?

Two ways!

First;

The biggest way it helps is by getting away from the two_node exception. With the qdisk partition, you are back up to three votes, so there will never be a 50/50 split. If either node retains access to the quorum disk while the other loses access, then right there things are decided. The one with the disk has 2 votes, wins quorum and will fence the other. Meanwhile, the other will only have 1 vote, thus it will lose quorum, withdraw from the cluster and not try to fence the other node.

Second;

You can use heuristics with qdisk to have a more intelligent partition recovery mechanism. For example, let's look again at the scenario where the link(s) between the two nodes hosting the totem ring is cut. This time though, let's assume that the storage network link is still up, so both nodes have access to the qdisk partition. How would the qdisk act as a tie breaker?

One way is to have a heuristics test that checks to see if one of the nodes has access to a particular router. With this heuristics test, if only one node had access to that router, the qdisk would give its vote to that node and ensure that the "healthiest" node survived. Pretty cool, eh?
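
Below is a hedged sketch of what such a qdisk definition with a ping heuristic might look like in cluster.conf. The label, timings and router address are illustrative; as noted, this tutorial will not actually use a qdisk.

<quorumd interval="1" tko="10" votes="1" label="an_qdisk">
	<heuristic program="ping -c 1 -w 1 192.168.1.254" score="1" interval="2" tko="3"/>
</quorumd>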

  • Second Scenario; A seven node cluster with six dead members.

Admittedly, this is an extreme scenario, but it serves to illustrate the point well. Remember how we said that the general rule is that the qdisk has N-1 votes?

With our seven node cluster, on its own, there would be a total of 7 votes, so normally quorum would require 4 nodes be alive (((7/2)+1) = (3.5+1) = 4.5, rounded down is 4). With the death of the fourth node, all cluster services would fail. We understand now why this would be the case, but what if the nodes are, for example, serving up websites? In this case, 3 nodes are still sufficient to do the job. Heck, even 1 node is better than nothing. With the rules of quorum though, it just wouldn't happen.

Let's now look at how the qdisk can help.

By giving the qdisk partition 6 votes, you raise the cluster's total expected votes from 7 to 13. With this new count, the number of votes needed for quorum is 7 (((13/2)+1) = (6.5+1) = 7.5, rounded down is 7).

So looking back at the scenario where we've lost four of our seven nodes; The surviving nodes have 3 votes, but they can talk to the qdisk, which provides another 6 votes, for a total of 9. With that, quorum is achieved and the three nodes are allowed to form a cluster and continue to provide services. Even if you lose all but one node, you are still in business because the one surviving node, which can still talk to the qdisk and thus win its 6 votes, has a total of 7 and thus has quorum!

There is another benefit. As we mentioned in the first scenario, we can add heuristics to the qdisk. Imagine that, rather than having six nodes die, they instead partition off because of a break in the network. Without qdisk, the six nodes would easily win quorum, fence the one other node and then reform the cluster. What if, though, the one lone node was the only one with access to a critical route to the Internet? The six nodes would be useless in a web-server environment. With the heuristics provided by qdisk, that one useful node would get the qdisk's 6 votes and win quorum over the other six nodes!

A little qdisk goes a long way.

Component; DRBD

DRBD; the Distributed Replicated Block Device, is a technology that takes raw storage from two or more nodes and keeps their data synchronized in real time. It is sometimes described as "RAID 1 over Nodes", and that is conceptually accurate. In this tutorial's cluster, DRBD will be used to provide the back-end storage as a cost-effective alternative to a traditional SAN or iSCSI device.

To help visualize DRBD's use and role, look at the map of our cluster's storage.
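
To give a feel for what a DRBD resource definition looks like, here is a minimal, hedged sketch in the drbd.conf style used later in this tutorial. The resource name, backing partitions and port are illustrative only.

resource r0 {
	on an-node01.alteeve.com {
		device    /dev/drbd0;
		disk      /dev/sda5;
		address   192.168.2.71:7789;
		meta-disk internal;
	}
	on an-node02.alteeve.com {
		device    /dev/drbd0;
		disk      /dev/sda5;
		address   192.168.2.72:7789;
		meta-disk internal;
	}
}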

Component; CLVM

With DRBD providing the raw storage for the cluster, we must now create partitions. This is where Clustered LVM, known as CLVM, comes into play.

CLVM is ideal in that it understands that it is clustered and therefore won't provide access to nodes outside of the formed cluster; that is, to any node that is not a member of openais's closed process group, which, in turn, requires quorum.

It is ideal because it can take one or more raw devices, known as "physical volumes", or simply as PVs, and combine their raw space into one or more "volume groups", known as VGs. These volume groups then act just like a typical hard drive and can be "partitioned" into one or more "logical volumes", known as LVs. These LVs are what will be formatted with a clustered file system.

LVM is particularly attractive because of how incredibly flexible it is. We can easily add new physical volumes later, and then grow an existing volume group to use the new space. This new space can then be given to existing logical volumes, or entirely new logical volumes can be created. This can all be done while the cluster is online, offering an upgrade path with no downtime.
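
As a preview, the basic flow of turning a DRBD device into clustered LVM storage looks roughly like the following. The device and volume names are illustrative, and this assumes clustered locking (locking_type = 3 in lvm.conf) has already been enabled; the real commands come later in the tutorial.

pvcreate /dev/drbd0                      # turn the DRBD device into a physical volume
vgcreate -c y an-vg0 /dev/drbd0          # create a cluster-aware volume group
lvcreate -L 20G -n shared01 an-vg0       # carve out a logical volume
lvextend -L +10G /dev/an-vg0/shared01    # later, grow it while the cluster is online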

Component; GFS2

With DRBD providing the cluster's raw storage space, and Clustered LVM providing the logical partitions, we can now look at the clustered file system. This is the role of the Global File System version 2, known simply as GFS2.

It works much like a standard filesystem, with mkfs.gfs2, fsck.gfs2 and so on. The major difference is that it and clvmd use the cluster's distributed locking mechanism, provided by dlm_controld. Once formatted, the GFS2 partition can be mounted and used by any node in the cluster's closed process group. All nodes can then safely read from and write to the data on the partition simultaneously.
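
Here is a hedged example of formatting and mounting such a partition. The cluster name, filesystem name and logical volume are illustrative and must match your cluster.conf and CLVM setup; -j 2 creates one journal per node in our two node cluster.

mkfs.gfs2 -p lock_dlm -t an-cluster:shared01 -j 2 /dev/an-vg0/shared01
mount /dev/an-vg0/shared01 /shared01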

Component; DLM

One of the major roles of a cluster is to provide distributed locking on clustered storage. In fact, storage software can not be clustered without using DLM, as provided by the dlm_controld daemon, using openais's virtual synchrony.

Through DLM, all nodes accessing clustered storage are guaranteed to get POSIX locks, called plocks, in the same order across all nodes. Both CLVM and GFS2 rely on DLM, though other clustered storage, like OCFS2, use it as well.

Component; Xen

There are two major open-source virtualization platforms available in the Linux world today; Xen and KVM. The former is maintained by Citrix and the latter by Red Hat. It would be difficult to say which is "better", as they're both very good. Xen can be argued to be more mature, while KVM is the "official" solution supported by Red Hat directly.

We will be using the Xen hypervisor and a "host" virtual server called dom0. In Xen, every machine is a virtual server, including the system you installed when you built the server. This is possible thanks to a small Xen micro-operating system that initially boots, then starts up your original installed operating system as a virtual server with special access to the underlying hardware and hypervisor management tools.

The rest of the virtual servers in a Xen environment are collectively called "domU" virtual servers. These will be the highly-available resource that will migrate between nodes during failure events.

Base Setup

Before we can look at the cluster, we must first build two cluster nodes and then install the operating system.

Hardware Requirements

The bare minimum requirements are;

  • All hardware must be supported by EL5. It is strongly recommended that you check compatibility before making any purchases.
  • A dual-core CPU with hardware virtualization support.
  • Three network cards; At least one should be gigabit or faster.
  • One hard drive.
  • 2 GiB of RAM
  • A fence device. This can be an IPMI-enabled server, a Node Assassin, a switched PDU or similar.

This tutorial was written using the following hardware:

This is not an endorsement of the above hardware. I put a heavy emphasis on minimizing power consumption and bought what was within my budget. This hardware was never meant to be put into production, but instead was chosen to serve the purpose of my own study and for creating this tutorial. What you ultimately choose to use, provided it meets the minimum requirements, is entirely up to you and your requirements.

Note: I use three physical NICs, but you can get away with two by merging the storage and back-channel networks, which we will discuss shortly. If you are really in a pinch, you could create three aliases on one interface and isolate them using VLANs. If you go this route, please ensure that your VLANs are configured and working before beginning this tutorial. Pay close attention to multicast traffic.

Pre-Assembly

Before you assemble your nodes, take a moment to record the MAC addresses of each network interface and then note where each interface is physically installed. This will help you later when configuring the networks. I generally create a simple text file with the MAC addresses, the interface I intend to assign to it and where it physically is located.

-=] an-node01
48:5B:39:3C:53:15   # eth0 - onboard interface
00:1B:21:72:96:E8   # eth1 - right-most PCIe interface
00:1B:21:72:9B:56   # eth2 - left-most PCI interface

-=] an-node02
48:5B:39:3C:53:14   # eth0 - onboard interface
00:1B:21:72:9B:5A   # eth1 - right-most PCIe interface
00:1B:21:72:96:EA   # eth2 - left-most PCI interface

OS Install

Later steps will include packages to install, so the initial OS install can be minimal. I like to change the default run-level to 3, remove rhgb quiet from the grub menu, disable the firewall and disable SELinux. In a production cluster, you will want to use firewalling and selinux, but until you finish studying, leave it off to keep things simple.

  • Note: Before EL5.4, you could not use SELinux. It is now possible to use it, and it is recommended that you do so in any production cluster.
  • Note: Ports and protocols to open in a firewall will be discussed later in the networking section.

I like to minimize and automate my installs as much as possible. To that end, I run a little PXE server on my network and use a kickstart script to automate the install. Here is a simple one for use on a single-drive node:
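
A minimal sketch of such a kickstart for a single-drive EL5 node might look like the following. It is illustrative only, not the script used for this cluster, so adapt the install URL, partitioning, password and package list to your own environment.

install
url --url http://your-pxe-server/centos/5/os/x86_64
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp
rootpw secret
firewall --disabled
selinux --disabled
timezone --utc America/Toronto
bootloader --location=mbr
clearpart --all --initlabel
part /boot --fstype ext3 --size=250
part swap --size=2048
part / --fstype ext3 --size=1 --grow
reboot

%packages
@base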

If you decide to manually install EL5 on your nodes, please try to keep the installation as small as possible. The fewer packages installed, the fewer sources of problems and vectors for attack.

Post Install OS Changes

This section discusses changes I recommend, but are not required.

Network Planning

The most important change that is recommended is to get your nodes into a consistent networking configuration. This will prove very handy when trying to keep track of your networks and where they're physically connected. This becomes exponentially more helpful as your cluster grows.

The first step is to understand the three networks we will be creating. Once you understand their role, you will need to decide which interface on the nodes will be used for each network.

Cluster Networks

The three networks are;

Network Acronym Use
Back-Channel Network BCN Private cluster communications, virtual machine migrations, fence devices
Storage Network SN Used exclusively for storage communications. Possible to use as totem's redundant ring.
Internet-Facing Network IFN Internet-polluted network. No cluster or storage communication or devices.

Things To Consider

When planning which interfaces to connect to each network, consider the following, in order of importance:

  • If your nodes have IPMI and an interface sharing a physical RJ-45 connector, this must be on the Back-Channel Network. The reasoning is that having your fence device accessible on the Internet-Facing Network poses a major security risk. Having the IPMI interface on the Storage Network can cause problems if a fence is fired and the network is saturated with storage traffic.
  • The lowest-latency network interface should be used as the Back-Channel Network. The cluster is maintained by multicast messaging between the nodes using something called the totem protocol. Any delay in the delivery of these messages risks causing a failure and the ejection of affected nodes when no actual failure existed. This will be discussed in greater detail later.
  • The network with the most raw bandwidth should be used for the Storage Network. All disk writes must be sent across the network and committed to the remote nodes before the write is declared complete. This makes the network the disk I/O bottleneck. Using a network with jumbo frames and high raw throughput will help minimize this bottleneck.
  • During the live migration of virtual machines, the VM's RAM is copied to the other node using the BCN. For this reason, the second fastest network should be used for back-channel communication. However, these copies can saturate the network, so care must be taken to ensure that cluster communications get higher priority. This can be done using a managed switch. If you can not ensure priority for totem multicast, then be sure to configure Xen later to use the storage network for migrations.
  • The remaining, slowest interface should be used for the IFN.

Planning the Networks

This paper will use the following setup. Feel free to alter the interface-to-network mapping and the IP subnets used to best suit your needs. For reasons entirely my own, I like to start my cluster IPs' final octet at 71 for node 1 and then increment up from there. This is entirely arbitrary, so please use whatever makes sense to you. The remainder of this tutorial will follow the convention below:

Network Interface Subnet
IFN eth0 192.168.1.0/24
SN eth1 192.168.2.0/24
BCN eth2 192.168.3.0/24

This translates to the following per-node configuration:

an-node01:
Interface  IP Address    Host Name(s)
eth0 (IFN) 192.168.1.71  an-node01.ifn
eth1 (SN)  192.168.2.71  an-node01.sn
eth2 (BCN) 192.168.3.71  an-node01, an-node01.alteeve.com, an-node01.bcn

an-node02:
Interface  IP Address    Host Name(s)
eth0 (IFN) 192.168.1.72  an-node02.ifn
eth1 (SN)  192.168.2.72  an-node02.sn
eth2 (BCN) 192.168.3.72  an-node02, an-node02.alteeve.com, an-node02.bcn

Network Configuration

Now that we've planned the network, it is time to implement it.

Warning About Managed Switches

WARNING: Please pay attention to this warning! The vast majority of cluster problems end up being network related. The hardest ones to diagnose are usually multicast issues.

If you use a managed switch, be careful about enabling and configuring Multicast IGMP Snooping or Spanning Tree Protocol. They have been known to cause problems by not allowing multicast packets to reach all nodes fast enough or at all. This can cause somewhat random break-downs in communication between your nodes, leading to seemingly random fences and DLM lock timeouts. If your switches support PIM Routing, be sure to use it!

If you have problems with your cluster not forming, or seemingly random fencing, try using a cheap unmanaged switch. If the problem goes away, you are most likely dealing with a managed switch configuration problem.

Disable Firewalling

To "keep things simple", we will disable all firewalling on the cluster nodes. This is not recommended in production environments, obviously, so below will be a table of ports and protocols to open when you do get into production. Until then, we will simply use chkconfig to disable iptables and ip6tables.

Note: Cluster 2 does not support IPv6, so you can skip or ignore it if you wish. I like to disable it just to be certain that it can't cause issues though.

chkconfig iptables off
chkconfig ip6tables off

Now confirm that they are off by having iptables and ip6tables list their rules.

iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

When you do prepare to go into production, these are the protocols and ports you need to open between cluster nodes. Remember to allow multicast communications as well!

Port Protocol Component
5404, 5405 UDP cman
8084 TCP luci
11111 TCP ricci
14567 TCP gnbd
16851 TCP modclusterd
21064 TCP dlm
50006, 50008, 50009 TCP ccsd
50007 UDP ccsd
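
As a hedged example of what the production rules might look like, the following iptables commands open those ports between the nodes, plus multicast, assuming cluster traffic arrives from the 192.168.3.0/24 (BCN) subnet. Adjust the subnets and ports to your own layout before relying on it.

iptables -A INPUT -s 192.168.3.0/24 -p udp -m multiport --dports 5404,5405 -j ACCEPT          # cman / totem
iptables -A INPUT -s 192.168.3.0/24 -p tcp --dport 11111 -j ACCEPT                            # ricci
iptables -A INPUT -s 192.168.3.0/24 -p tcp --dport 16851 -j ACCEPT                            # modclusterd
iptables -A INPUT -s 192.168.3.0/24 -p tcp --dport 21064 -j ACCEPT                            # dlm
iptables -A INPUT -s 192.168.3.0/24 -p tcp -m multiport --dports 50006,50008,50009 -j ACCEPT  # ccsd
iptables -A INPUT -s 192.168.3.0/24 -p udp --dport 50007 -j ACCEPT                            # ccsd
iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT                                   # totem multicast
service iptables save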

Disable NetworkManager, Enable network

The NetworkManager daemon is an excellent daemon in environments where a system connects to a variety of networks. The NetworkManager daemon handles changing the networking configuration whenever it senses a change in the network state, like when a cable is unplugged or a wireless network comes or goes. As useful as this is on laptops and workstations, it can be detrimental in a cluster.

To prevent the networking from changing once we've got it set up, we want to replace the NetworkManager daemon with the network initialization script. The network script will start and stop networking, but otherwise it will leave the configuration alone. This is ideal in servers, and doubly so in clusters, given their sensitivity to transient network issues.

Start by removing NetworkManager:

yum remove NetworkManager NetworkManager-glib NetworkManager-gnome NetworkManager-devel NetworkManager-glib-devel

Now you want to ensure that network starts with the system.

chkconfig network on

Setup /etc/hosts

The /etc/hosts file, by default, resolves the hostname to the lo (127.0.0.1) interface. The cluster, though, uses this name to determine which interface to use for the totem protocol (and thus all cluster communications). To this end, we will remove the hostname from 127.0.0.1 and instead put it on the IP of our BCN-connected interface. At the same time, we will add entries for all networks for each node in the cluster and entries for the fence devices. Once done, the edited /etc/hosts file should be suitable for copying to all nodes in the cluster.

vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1	localhost.localdomain localhost
::1		localhost6.localdomain6 localhost6

192.168.1.71	an-node01.ifn
192.168.2.71	an-node01.sn
192.168.3.71	an-node01 an-node01.bcn an-node01.alteeve.com

192.168.1.72	an-node02.ifn
192.168.2.72	an-node02.sn
192.168.3.72	an-node02 an-node02.bcn an-node02.alteeve.com

192.168.3.61	batou.alteeve.com	# Node Assassin
192.168.3.62	motoko.alteeve.com	# Switched PDU

Mapping Interfaces to ethX Names

Chances are good that the assignment of ethX interface names to your physical network cards is not ideal. There is no strict technical reason to change the mapping, but it will make your life a lot easier if all nodes use the same ethX names for the same subnets.

The actual process of changing the mapping is a little involved. For this reason, there is a dedicated mini-tutorial which you can find below. Please jump to it and then return once your mapping is as you like it.

Set IP Addresses

The last step in setting up the network interfaces is to manually assign the IP addresses and define the subnets for the interfaces. This involves directly editing the /etc/sysconfig/network-scripts/ifcfg-ethX files. There is a large set of options that can be set in these configuration files, but most are outside the scope of this tutorial. To get a better understanding of the available options, please see:

Here are my three configuration files, which you can use as guides. Please do not copy these over your files! Doing so will cause your interfaces to fail outright, as every interface's MAC address is unique. Adapt these to suit your needs.

vim /etc/sysconfig/network-scripts/ifcfg-eth0
# Internet-Facing Network
HWADDR=48:5B:39:3C:53:15
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.71
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
vim /etc/sysconfig/network-scripts/ifcfg-eth1
# Storage Network
HWADDR=00:1B:21:72:96:E8
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.2.71
NETMASK=255.255.255.0
vim /etc/sysconfig/network-scripts/ifcfg-eth2
# Back Channel Network
HWADDR=00:1B:21:72:9B:56
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.3.71
NETMASK=255.255.255.0

You will also need to set up the /etc/resolv.conf file for DNS resolution. You can learn more about this file's purpose by reading its man page; man resolv.conf. The main thing is to set valid DNS server IP addresses in the nameserver entries. Here is mine, for reference:

vim /etc/resolv.conf
search alteeve.com
nameserver 192.139.81.117
nameserver 192.139.81.1

Finally, restart network and you should have your interfaces set up properly.

/etc/init.d/network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down interface eth1:                              [  OK  ]
Shutting down interface eth2:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:                                [  OK  ]
Bringing up interface eth1:                                [  OK  ]
Bringing up interface eth2:                                [  OK  ]

You can verify your configuration using the ifconfig tool.

ifconfig
eth0      Link encap:Ethernet  HWaddr 48:5B:39:3C:53:15  
          inet addr:192.168.1.71  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::92e6:baff:fe71:82ea/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1727 errors:0 dropped:0 overruns:0 frame:0
          TX packets:655 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:208916 (204.0 KiB)  TX bytes:133171 (130.0 KiB)
          Interrupt:252 Base address:0x2000 

eth1      Link encap:Ethernet  HWaddr 00:1B:21:72:96:E8  
          inet addr:192.168.2.71  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::221:91ff:fe19:9653/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:998 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:97702 (95.4 KiB)  TX bytes:6959 (6.7 KiB)
          Interrupt:16 

eth2      Link encap:Ethernet  HWaddr 00:1B:21:72:9B:56  
          inet addr:192.168.3.71  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::20e:cff:fe59:46e4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5241 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4439 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1714026 (1.6 MiB)  TX bytes:1624392 (1.5 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:42 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:6449 (6.2 KiB)  TX bytes:6449 (6.2 KiB)

Setting up SSH

Setting up SSH shared keys will allow your nodes to pass files between one another and execute commands remotely without needing to enter a password. This will be needed later when we want to enable applications like libvirtd and virt-manager.

SSH is, on its own, a very big topic. If you are not familiar with SSH, please take some time to learn about it before proceeding. A great first step is the Wikipedia entry on SSH, as well as the SSH man page; man ssh.

It can be a bit confusing to keep SSH connections straight in your head. When you connect to a remote machine, you start the connection on your machine as the user you are logged in as. This is the source user. When you call the remote machine, you tell it what user you want to log in as. This is the remote user.

You will need to create an SSH key for each source user on each node, and then you will need to copy the newly generated public key to each remote machine's user directory that you want to connect to. In this example, we want to connect to either node, from either node, as the root user. So we will create a key for each node's root user and then copy the generated public key to the other node's root user's directory.

For each user, on each machine you want to connect from, run:

# The '2047' is just to screw with brute-forcers a bit. :)
ssh-keygen -t rsa -N "" -b 2047 -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a1:65:a9:50:bb:15:ae:b1:6e:06:12:4a:29:d1:68:f3 root@an-node01.alteeve.com

This will create two files: the private key, called ~/.ssh/id_rsa, and the public key, called ~/.ssh/id_rsa.pub. The private key must never be group or world readable! That is, it should be set to mode 0600.

The two files should look like:

Private key:

cat ~/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEnwIBAAKCAQBTNg6FZyDKm4GAm7c+F2enpLWy+t8ZZjm4Z3Q7EhX09ukqk/Qm
MqprtI9OsiRVjce+wGx4nZ8+Z0NHduCVuwAxG0XG7FpKkUJC3Qb8KhyeIpKEcfYA
tsDUFnWddVF8Tsz6dDOhb61tAke77d9E01NfyHp88QBxjJ7w+ZgB2eLPBFm6j1t+
K50JHwdcFfxrZFywKnAQIdH0NCs8VaW91fQZBupg4OGOMpSBnVzoaz2ybI9bQtbZ
4GwhCghzKx7Qjz20WiqhfPMfFqAZJwn0WXfjALoioMDWavTbx+J2HM8KJ8/YkSSK
dDEgZCItg0Q2fC35TDX+aJGu3xNfoaAe3lL1AgEjAoIBABVlq/Zq+c2y9Wo2q3Zd
yjJsLrj+rmWd8ZXRdajKIuc4LVQXaqq8kjjz6lYQjQAOg9H291I3KPLKGJ1ZFS3R
AAygnOoCQxp9H6rLHw2kbcJDZ4Eknlf0eroxqTceKuVzWUe3ev2gX8uS3z70BjZE
+C6SoydxK//w9aut5UJN+H5f42p95IsUIs0oy3/3KGPHYrC2Zgc2TIhe25huie/O
psKhHATBzf+M7tHLGia3q682JqxXru8zhtPOpEAmU4XDtNdL+Bjv+/Q2HMRstJXe
2PU3IpVBkirEIE5HlyOV1T802KRsSBelxPV5Y6y5TRq+cEwn0G2le1GiFBjd0xQd
0csCgYEA2BWkxSXhqmeb8dzcZnnuBZbpebuPYeMtWK/MMLxvJ50UCUfVZmA+yUUX
K9fAUvkMLd7V8/MP7GrdmYq2XiLv6IZPUwyS8yboovwWMb+72vb5QSnN6LAfpUEk
NRd5JkWgqRstGaUzxeCRfwfIHuAHikP2KeiLM4TfBkXzhm+VWjECgYBilQEBHvuk
LlY2/1v43zYQMSZNHBSbxc7R5mnOXNFgapzJeFKvaJbVKRsEQTX5uqo83jRXC7LI
t14pC23tpW1dBTi9bNLzQnf/BL9vQx6KFfgrXwy8KqXuajfv1ECH6ytqdttkUGZt
TE/monjAmR5EVElvwMubCPuGDk9zC7iQBQKBgG8hEukMKunsJFCANtWdyt5NnKUB
X66vWSZLyBkQc635Av11Zm8qLusq2Ld2RacDvR7noTuhkykhBEBV92Oc8Gj0ndLw
hhamS8GI9Xirv7JwYu5QA377ff03cbTngCJPsbYN+e/uj6eYEE/1X5rZnXpO1l6y
G7QYcrLE46Q5YsCrAoGAL+H5LG4idFEFTem+9Tk3hDUhO2VpGHYFXqMdctygNiUn
lQ6Oj7Z1JbThPJSz0RGF4wzXl/5eJvn6iPbsQDpoUcC1KM51FxGn/4X2lSCZzgqr
vUtslejUQJn96YRZ254cZulF/YYjHyUQ3byhDRcr9U2CwUBi5OcbFTomlvcQgHcC
gYEAtIpaEWt+Akz9GDJpKM7Ojpk8wTtlz2a+S5fx3WH/IVURoAzZiXzvonVIclrH
5RXFiwfoXlMzIulZcrBJZfTgRO9A2v9rE/ZRm6qaDrGe9RcYfCtxGGyptMKLdbwP
UW1emRl5celU9ZEZRBpIVTES5ZVWqD2RkkkNNJbPf5F/x+w=
-----END RSA PRIVATE KEY-----

Public key (wrapped to make it more readable):

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQBTNg6FZyDKm4GAm7c+F2enpLWy+t8Z
Zjm4Z3Q7EhX09ukqk/QmMqprtI9OsiRVjce+wGx4nZ8+Z0NHduCVuwAxG0XG7FpK
kUJC3Qb8KhyeIpKEcfYAtsDUFnWddVF8Tsz6dDOhb61tAke77d9E01NfyHp88QBx
jJ7w+ZgB2eLPBFm6j1t+K50JHwdcFfxrZFywKnAQIdH0NCs8VaW91fQZBupg4OGO
MpSBnVzoaz2ybI9bQtbZ4GwhCghzKx7Qjz20WiqhfPMfFqAZJwn0WXfjALoioMDW
avTbx+J2HM8KJ8/YkSSKdDEgZCItg0Q2fC35TDX+aJGu3xNfoaAe3lL1 root@an
-node01.alteeve.com

Copy the public key and then ssh normally into the remote machine as the root user. Create a file called ~/.ssh/authorized_keys and paste in the key.

From an-node01, type:

ssh root@an-node02
The authenticity of host 'an-node02 (192.168.3.72)' can't be established.
RSA key fingerprint is 55:58:c3:32:e4:e6:5e:32:c1:db:5c:f1:36:e2:da:4b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node02,192.168.3.72' (RSA) to the list of known hosts.
Last login: Fri Mar 11 20:45:58 2011 from 192.168.1.202

You will now be logged into an-node02 as the root user. Create the ~/.ssh/authorized_keys file and paste into it the public key from an-node01. If the remote machine's user hasn't used ssh yet, their ~/.ssh directory will not exist.

(Wrapped to make it more readable)

cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQBTNg6FZyDKm4GAm7c+F2enpLWy+t8Z
Zjm4Z3Q7EhX09ukqk/QmMqprtI9OsiRVjce+wGx4nZ8+Z0NHduCVuwAxG0XG7FpK
kUJC3Qb8KhyeIpKEcfYAtsDUFnWddVF8Tsz6dDOhb61tAke77d9E01NfyHp88QBx
jJ7w+ZgB2eLPBFm6j1t+K50JHwdcFfxrZFywKnAQIdH0NCs8VaW91fQZBupg4OGO
MpSBnVzoaz2ybI9bQtbZ4GwhCghzKx7Qjz20WiqhfPMfFqAZJwn0WXfjALoioMDW
avTbx+J2HM8KJ8/YkSSKdDEgZCItg0Q2fC35TDX+aJGu3xNfoaAe3lL1 root@an
-node01.alteeve.com

Now log out and then log back into the remote machine. This time, the connection should succeed without prompting for a password!
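
If it is available on your install, the ssh-copy-id utility automates the copy-and-paste step above; this is simply a convenience and the manual method works just as well.

ssh-copy-id -i ~/.ssh/id_rsa.pub root@an-node02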

Various applications will connect to the other node using different methods and networks. Each connection, when first established, will prompt you to confirm that you trust the authentication, as we saw above. Many programs can't handle this prompt and will simply fail to connect. To get around this, I ssh into both nodes using all hostnames. This populates a file called ~/.ssh/known_hosts. Once you do this on one node, you can simply copy the known_hosts file to the other nodes' and users' ~/.ssh/ directories.

I simply paste this into a terminal, answering yes and then immediately exiting from the ssh session. This is a bit tedious, I admit. Take the time to check the fingerprints as they are displayed to you. It is a bad habit to blindly type yes.

Alter this to suit your host names.

ssh root@an-node01 && \
ssh root@an-node01.alteeve.com && \
ssh root@an-node01.bcn && \
ssh root@an-node01.sn && \
ssh root@an-node01.ifn && \
ssh root@an-node02 && \
ssh root@an-node02.alteeve.com && \
ssh root@an-node02.bcn && \
ssh root@an-node02.sn && \
ssh root@an-node02.ifn

Keeping Time In Sync

It is very important that time on both nodes be kept in sync. The way to do this is to set up NTP, the network time protocol. I like to use the tick.redhat.com time server, though you are free to substitute your preferred time source.

First, add the timeserver to the NTP configuration file by appending the following lines to the end of /etc/ntp.conf.

echo server tick.redhat.com$'\n'restrict tick.redhat.com mask 255.255.255.255 nomodify notrap noquery >> /etc/ntp.conf
tail -n 4 /etc/ntp.conf
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
server tick.redhat.com
restrict tick.redhat.com mask 255.255.255.255 nomodify notrap noquery

Now make sure that the ntpd service starts on boot, then start it manually.

chkconfig ntpd on
/etc/init.d/ntpd start
Starting ntpd:                                             [  OK  ]

Altering Boot Up

Note: These are optional steps.

There are two changes I like to make on my nodes. These are not required, but I find they help to keep things as simple as possible, particularly in the early learning and testing stages.

Changing the Default Run-Level

If you choose not to implement it, please change any references to /etc/rc3.d to /etc/rc5.d later in this tutorial.

I prefer to minimize the running daemons and applications on my nodes for two reasons; performance and security. One of the simplest ways to minimize the number of running programs is to change the run-level to 3 by editing /etc/inittab. This tells the node not to start the graphical interface when it boots and instead simply boot to a bash shell.

This change is actually quite simple. Simply edit /etc/inittab and change the line id:5:initdefault: to id:3:initdefault:.

vim /etc/inittab
# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)
# 
id:3:initdefault:

If you are still in a graphical environment and want to disable the GUI without rebooting, you can run init 3. Conversely, if you want to start the GUI for a certain task, you can do so by running init 5.

Making Boot Messages Visible

Another optional step, in line with the change above, is to disable the rhgb (Red Hat Graphical Boot) and quiet kernel arguments. These options provide the clean boot screen you normally see with EL5, but they also hide a lot of boot messages that we may find helpful.

To make this change, edit the grub boot-loader menu and remove the rhgb quiet arguments from the kernel /vmlinuz... line. These arguments are usually the last ones on the line. If you leave this until later, you may see two or more kernel entries; delete these arguments wherever they are found.

vim /boot/grub/menu.lst

Change:

title CentOS (2.6.18-194.32.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.32.1.el5 ro root=LABEL=/ rhgb quiet
        initrd /initrd-2.6.18-194.32.1.el5.img

To:

title CentOS (2.6.18-194.32.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.32.1.el5 ro root=LABEL=/
        initrd /initrd-2.6.18-194.32.1.el5.img

There is nothing more to do now. Future reboots will be simple terminal displays.

Installing Packages We Will Use

There are several packages we will need. They can all be installed in one go with the following command.

If you have a slow or metered Internet connection, you may want to alter /etc/yum.conf and change keepcache=0 to keepcache=1 before installing packages. This way, you can run your updates and installs on one node and then rsync the downloaded files from the first node to the second node. Once done, when you run the updates and installs on that second node, nothing more will be downloaded. To copy the cached RPMs, simply run rsync -av /var/cache/yum root@an-node02:/var/cache/ (assuming you did the initial downloads from an-node01).

Note: This is not complete yet.

yum install cman openais rgmanager lvm2-cluster gfs2-utils xen xen-libs kmod-xenpv \
            drbd83 kmod-drbd83-xen virt-manager virt-viewer libvirt libvirt-python \
            python-virtinst luci ricci

This will drag in a good number of dependencies, which is fine.

Setting Up Xen

It may seem premature to discuss Xen before the cluster itself. The reason we need to look at it now, before the cluster, is because Xen makes some fairly significant changes to the networking. Given how changes to networking can affect the cluster, we will want to get these changes out of the way.

We're not going to provision any virtual machines until the cluster is built.

A Brief Overview

Xen is a hypervisor that converts the installed operating system into a virtual machine running on a small Xen kernel. This same small kernel also runs all of the virtual machines you will add later. In this way, you will always be working in a virtual machine once you switch to booting a Xen kernel. In Xen terminology, virtual machines are known as domains.

The "host" operating system is known as dom0 (domain 0) and has a special view of the hardware plus contains the configuration and control of Xen itself. All other Xen virtual machines are known as domU (domain U). This is a collective term that represents the transient ID number assigned to all virtual machines. For example, when you boot the first virtual machine, it is known as dom1. The next will be dom2, then dom3 and so on. Do note that if a domU shuts down, it's ID is not reused. So when it restarts, it will use the next free ID (ie: dom4 in this list, despite it having been, say, dom1 initially).

This makes Xen somewhat unique in the virtualization world. Most others do not touch or alter the "host" OS, instead running the guest VMs fully within the context of the host operating system.

Understanding Networking in Xen

Xen uses a fairly complex networking system. This is, perhaps, its strongest point. The trade-off, though, is that it can be a little tricky to wrap your head around. To help you become familiar with it, there is a short tutorial dedicated to this topic. Please read it over before proceeding if you are not familiar with Xen's networking.

Taking the time to read and understand the mini-paper below will save you a lot of heartache in the following stages.

Making Network Interfaces Available To Xen Clients

As discussed above, Xen makes some significant changes to the dom0 network, which happens to be where the cluster will operate. These changes include shutting down and moving around the interfaces. As we will discuss later, this behaviour can trigger cluster failures. This is the main reason for dealing with Xen now. Once the changes are in place, the network is stable and safe for running the cluster on.

A Brief Overview

By default, Xen only makes eth0 available to the virtual machines. We will want to add eth2 as well, as we will use the Back Channel Network for inter-VM communication. We do not want to add the Storage Network to Xen though! Doing so puts the DRBD link at risk. Should xend get shut down, it could trigger a split-brain in DRBD.

What Xen does, in brief, is move the "real" eth0 over to a new device called peth0. Then it creates a virtual "clone" of the network interface called eth0. Next, Xen creates a bridge called xenbr0. Finally, both the real peth0 and the new virtual eth0 are connected to the xenbr0 bridge.

The reasoning behind all this is to separate the traffic coming to and from dom0 from any traffic going to the various domUs. Think of it sort of like the bridge being a network switch, the peth0 being an uplink cable to the outside world and the virtual eth0 being dom0's "port" on the switch. We want the same to be done to the interface on the Back-Channel Network, too. The Storage Network will never be exposed to the domU machines, so, combined with the risk to the underlying storage, there is no reason to add eth1 to Xen's control.

Disable the 'qemu' Bridge

By default, libvirtd creates a bridge called virbr0 designed to connect virtual machines to the first eth0 interface. Our system will not need this, so we will remove it. This bridge is configured in the /etc/libvirt/qemu/networks/default.xml file, so to remove this bridge, simply delete the contents of the file.

cat /dev/null >/etc/libvirt/qemu/networks/default.xml

The next time you reboot, that bridge will be gone.

Create /etc/xen/scripts/an-network-script

We will create a script that Xen will be told to use for bringing up the "xenified" network interfaces.

Please note:

  1. You don't need to use the name 'an-network-script'. I suggest this name mainly to keep in line with the rest of the 'AN!x' naming used on this wiki.
  2. If you install convirt (not discussed further here), it will create its own bridge script called convirt-xen-multibridge. Other tools may do something similar.

First, touch the file and then chmod it to be executable.

touch /etc/xen/scripts/an-network-script
chmod 755 /etc/xen/scripts/an-network-script

Now edit it to contain the following:

vim /etc/xen/scripts/an-network-script
#!/bin/sh
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=2 netdev=eth2 bridge=xenbr2

Now tell Xen to reference that script by editing the /etc/xen/xend-config.sxp file and changing the network-script argument to point to this new script (this is line 91 in the default xend-config.sxp script):

vim /etc/xen/xend-config.sxp
#(network-script network-bridge)
(network-script an-network-script)

Finally, check that it works by (re)starting xend:

/etc/init.d/xend restart
restart xend:                                              [  OK  ]

Now we'll use ifconfig to see the new network configuration (with a dash of creative grep to save screen space):

ifconfig |grep "Link encap" -A 1
eth0      Link encap:Ethernet  HWaddr 48:5B:39:3C:53:15
          inet addr:192.168.1.71  Bcast:192.168.1.255  Mask:255.255.255.0
--
eth1      Link encap:Ethernet  HWaddr 00:1B:21:72:96:E8
          inet addr:192.168.2.71  Bcast:192.168.2.255  Mask:255.255.255.0
--
eth2      Link encap:Ethernet  HWaddr 00:1B:21:72:9B:56
          inet addr:192.168.3.71  Bcast:192.168.3.255  Mask:255.255.255.0
--
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
--
peth0     Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF  
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
--
peth2     Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF  
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
--
vif0.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF  
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
--
vif0.2    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF  
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
--
xenbr0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF  
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
--
xenbr2    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF  
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1

If you see this, then Xen networking is set up properly!
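If you want to look at the bridges themselves, brctl (from the bridge-utils package) lists each bridge and its attached interfaces. This is an optional sanity check; you should see peth0 and vif0.0 attached to xenbr0, and peth2 and vif0.2 attached to xenbr2.

brctl show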

Altering When xend Starts

As was mentioned, xend rather dramatically modifies the networking when it starts. We now need to make sure that xend starts before cman, which is not the case by default. To do this, we will edit the /etc/init.d/xend script and change its default start position from 98 to 12.

Edit /etc/init.d/xend:

vim /etc/init.d/xend

And change the chkconfig: 2345 98 01 header from:

#!/bin/bash
#
# xend          Script to start and stop the Xen control daemon.
#
# Author:       Keir Fraser <keir.fraser@cl.cam.ac.uk>
#
# chkconfig: 2345 98 01
# description: Starts and stops the Xen control daemon.

To chkconfig: 2345 12 01:

#!/bin/bash
#
# xend          Script to start and stop the Xen control daemon.
#
# Author:       Keir Fraser <keir.fraser@cl.cam.ac.uk>
#
# chkconfig: 2345 12 01
# description: Starts and stops the Xen control daemon.

Now remove and re-add the xend start links using chkconfig so that the new priority takes effect.

chkconfig xend off
chkconfig xend on

If it worked, you should see it now higher up the start list than cman (ignoring xendomains):

ls -lah /etc/rc3.d/ | grep -e cman -e xend
lrwxrwxrwx  1 root root   20 Mar  2 11:36 K00xendomains -> ../init.d/xendomains
lrwxrwxrwx  1 root root   14 Mar 14 11:38 S12xend -> ../init.d/xend
lrwxrwxrwx  1 root root   14 Mar 14 11:38 S21cman -> ../init.d/cman

That's it! The initial Xen configuration is done and we can start on the cluster configuration itself!

Cluster Setup

In Red Hat Cluster Services, the heart of the cluster is found in the /etc/cluster/cluster.conf XML configuration file.

There are three main ways of editing this file. Two are already well documented, so I won't bother discussing them beyond introducing them. The third way is by directly hand-crafting the cluster.conf file. This approach is not very well documented, but directly manipulating configuration files is my preferred method. As my boss loves to say; "The more computers do for you, the more they do to you". I've grudgingly come to agree with him.

The first two, well documented, graphical interface methods are:

  • system-config-cluster, older GUI tool run directly from one of the cluster nodes.
  • Conga, comprised of the ricci node-side client and the luci web-based server (can be run on machines outside the cluster).

I do like the tools above, but I often find issues that send me back to the command line. I'd recommend setting them aside for now as well. Once you feel comfortable with cluster.conf syntax, then by all means, go back and use them. I'd recommend not becoming reliant on them though, which can easily happen if you lean on them too early in your studies.

The First cluster.conf Foundation Configuration

The very first stage of building the cluster is to create a configuration file that is as minimal as possible. To do that, we need to define a few things:

  • The name of the cluster and the cluster file version.
    • Define cman options
    • The nodes in the cluster
      • The fence method for each node
    • Define fence devices
    • Define fenced options

That's it. Once we've defined this minimal amount, we will be able to start the cluster for the first time! So let's get to it, finally.

Name the Cluster and Set The Configuration Version

The cluster tag is the parent tag for the entire cluster configuration file.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="1">
</cluster>

This tag has two attributes that we need to set: name="" and config_version="".

The name="" attribute defines the name of the cluster. It must be unique amongst the clusters on your network. It should be descriptive, but you will not want to make it too long, either. You will see this name in the various cluster tools and you will enter in, for example, when creating a GFS2 partition later on. This tutorial uses the cluster name an_cluster.

The config_version="" attribute is an integer marking the version of the configuration file. Whenever you make a change to the cluster.conf file, you will need to increment this version number by 1. If you don't increment this number, then the cluster tools will not know that the file needs to be reloaded. As this is the first version of this configuration file, it will start with 1. Note that this tutorial will increment the version after every change, regardless of whether it is explicitly pushed out to the other nodes and reloaded. The reason is to help get into the habit of always increasing this value.

Configuring cman Options

We are going to set up a special case for our cluster: a 2-node cluster.

This is a special case because traditional quorum will not be useful. With only two nodes, each having a vote of 1, the total votes is 2. Quorum needs 50% + 1, which means that a single node failure would shut down the cluster, as the remaining node's vote is 50% exactly. That kind of defeats the purpose of having a cluster at all.

So to account for this special case, there is a special attribute called two_node="1". This tells the cluster manager to continue operating with only one vote. This option requires that the expected_votes="" attribute be set to 1. Normally, expected_votes is set automatically to the total sum of the defined cluster nodes' votes (which itself is a default of 1). This is the other half of the "trick", as a single node's vote of 1 now always provides quorum (that is, 1 meets the 50% + 1 requirement).

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="2">
	<cman expected_votes="1" two_node="1"/>
</cluster>

Take note of the self-closing <... /> tag. This is an XML syntax that tells the parser not to look for any child tags or a separate closing tag.

Defining Cluster Nodes

This example is a little artificial; please don't load it into your cluster yet, as we will need to add a few child tags. One thing at a time.

This actually introduces two tags.

The first is the parent clusternodes tag, which takes no attributes of its own. Its sole purpose is to contain the clusternode child tags.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="3">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1" />
		<clusternode name="an-node02.alteeve.com" nodeid="2" />
	</clusternodes>
</cluster>

The clusternode tag defines each cluster node. There are many attributes available, but we will look at just the two required ones.

The first is the name="" attribute. This must match the name given by uname -n when run on each node. The IP address that the name resolves to also sets the interface and subnet that the totem ring will run on. That is, the main cluster communications, which we are calling the Back-Channel Network. This is why it is so important to set up our /etc/hosts file correctly.

The second attribute is nodeid="". This must be a unique integer amongst the <clusternode ...> tags. It is used by the cluster to identify the node.
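A quick sanity check before moving on is to confirm the name on each node; this is just an illustrative check, not part of the configuration itself. The output of uname -n must match the name="" value exactly, and the /etc/hosts entry for that name should resolve to the node's Back-Channel (192.168.3.x) address.

uname -n
an-node01.alteeve.com
grep $(uname -n) /etc/hosts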

Defining Fence Devices

Fencing devices are designed to forcibly eject a node from a cluster. Generally, this is done by forcing it to power off or reboot. Some SAN switches can logically disconnect a node from the shared storage device, which has the same effect of guaranteeing that the defective node can not alter the shared storage. A common, third type of fence device is one that cuts the mains power to the server.

All fence devices are contained within the parent fencedevices tag. This parent tag has no attributes. Within this parent tag are one or more fencedevice child tags.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="4">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1" />
		<clusternode name="an-node02.alteeve.com" nodeid="2" />
	</clusternodes>
	<fencedevices>
		<fencedevice agent="fence_na" ipaddr="batou.alteeve.com" login="admin" name="batou" passwd="secret" quiet="1"/>
	</fencedevices>
</cluster>

Every fence device used in your cluster will have its own fencedevice tag. If you are using IPMI, this means you will have a fencedevice entry for each node, as each physical IPMI BMC is a unique fence device.

All fencedevice tags share two basic attributes; name="" and agent="".

  • The name attribute must be unique among all the fence devices in your cluster. As we will see in the next step, this name will be used within the <clusternode...> tag.
  • The agent attribute tells the cluster which fence agent to use when the fenced daemon needs to communicate with the physical fence device. A fence agent is simply a script that acts as a glue layer between the fenced daemon and the fence hardware. The agent takes the arguments from the daemon, like what port to act on and what action to take, and executes the call against the node. It is responsible for ensuring that the call succeeded and for returning an appropriate success or failure exit code. For those curious, the full details are described in the FenceAgentAPI. If you have two or more of the same fence device, like IPMI, then you will use the same fence agent value a corresponding number of times.

Beyond these two attributes, each fence agent will have its own set of attributes. Their full scope is outside this tutorial, though we will see examples for IPMI, a switched PDU and a Node Assassin. Most, if not all, fence agents have a corresponding man page that shows what attributes they accept and how they are used. The two fence agents we will see here have their attributes defined in the following man pages.

  • man fence_na - Node Assassin fence agent
  • man fence_ipmilan - IPMI fence agent

The example above is what this tutorial will use.

Example <fencedevice...> Tag For Node Assassin

This is the device used throughout this tutorial. It is for the open source, open hardware Node Assassin fence device that you can build yourself.

	<fencedevices>
		<fencedevice agent="fence_na" ipaddr="batou.alteeve.com" login="admin" name="batou" passwd="secret" quiet="1"/>
	</fencedevices>

As with most fence devices, the Node Assassin is network-attached, so the attributes for fence_na include connection information. The attribute variable names are generally the same across fence agents, and they are:

  • ipaddr; This is the resolvable name or IP address of the device. If you use a resolvable name, it is strongly advised that you put the name in /etc/hosts as DNS is another layer of abstraction which could fail.
  • login; This is the login name to use when the fenced daemon connects to the device. This is configured in /etc/cluster/fence_na.conf.
  • passwd; This is the login password to use when the fenced daemon connects to the device. This is also configured in /etc/cluster/fence_na.conf.
  • name; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <clusternode...> element where appropriate.
  • quiet; This is a Node Assassin specific argument. It is used to generate no output to STDOUT when run, as there is no terminal to print to or user to view it.

Example <fencedevice...> Tag For IPMI

Here we will show what IPMI <fencedevice...> tags look like. We won't be using it ourselves, but it is quite popular as a fence device so I wanted to show an example of its use.

	<fencedevices>
		<fencedevice name="an01_ipmi" agent="fence_ipmilan" ipaddr="192.168.4.71" login="admin" passwd="secret" />
		<fencedevice name="an02_ipmi" agent="fence_ipmilan" ipaddr="192.168.4.72" login="admin" passwd="secret" />
	</fencedevices>
  • ipaddr; This is the resolvable name or IP address of the device. If you use a resolvable name, it is strongly advised that you put the name in /etc/hosts as DNS is another layer of abstraction which could fail.
  • login; This is the login name to use when the fenced daemon connects to the device. For IPMI, this user is configured on the node's IPMI BMC itself.
  • passwd; This is the login password to use when the fenced daemon connects to the device. Like the login, this is configured on the IPMI BMC itself.
  • name; This is the name of this particular fence device within the cluster which, as we will see shortly, is matched in the <clusternode...> element where appropriate.

Note: We will see shortly that, unlike switched PDUs, Node Assassin or other network fence devices, IPMI does not have ports. This is because each IPMI BMC supports just its host system. More on that later.

Using the Fence Devices

Now that we have nodes and fence devices defined, we will go back and tie them together. This is done by:

  • Defining a fence tag containing all fence methods and devices.
    • Defining one or more method tag(s) containing the device call(s) needed for each fence attempt.
      • Defining one or more device tag(s) containing attributes describing how to call the fence device to kill this node.

This tutorial will be using just a Node Assassin fence device. We'll look at an example adding IPMI in a moment though, as IPMI is a very common fence device and one you will very likely use.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="5">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="01" action="reboot"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.com" nodeid="2">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="02" action="reboot"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice name="batou" agent="fence_na" ipaddr="batou.alteeve.com" login="admin" passwd="secret" quiet="1"/>
	</fencedevices>
</cluster>

First, notice that the fence tag has no attributes. It's merely a container for the method(s).

The next level is the method named node_assassin. This name is merely a description and can be whatever you feel is most appropriate. Its purpose is simply to help you distinguish this method from other methods. The reason for method tags is that some fence device calls will have two or more steps. A classic example would be a node with redundant power supplies fed by a switched PDU acting as the fence device. In that case, you will need to define multiple device tags, one for each power cable feeding the node, and the cluster will not consider the fence a success unless and until all contained device calls execute successfully.

The actual fence device configuration is the final piece of the puzzle. It is here that you specify per-node configuration options and link these attributes to a given fencedevice. Here, we see the link to the fencedevice via the name, batou in this example.

Let's step through an example fence call to help show how the per-node and fence device attributes are combined during a fence call.

  • The cluster manager decides that a node needs to be fenced. Let's say that the victim is an-node02.
  • The first method in the fence section under an-node02 is consulted. Within it there is just one device, named batou and having two attributes;
    • port; This tells the cluster that an-node02 is connected to the Node Assassin's port number 02.
    • action; This tells the cluster that the fence action to take is reboot. How this action is actually interpreted depends on the fence device in use, though the name certainly implies that the node will be forced off and then restarted.
  • The cluster searches in fencedevices for a fencedevice matching the name batou. This fence device has five attributes;
    • agent; This tells the cluster to call the fence_na fence agent script, as we discussed earlier.
    • ipaddr; This tells the fence agent where on the network to find this particular Node Assassin. This is how multiple fence devices of the same type can be used in the cluster.
    • login; This is the login user name to use when authenticating against the fence device.
    • passwd; This is the password to supply along with the login name when authenticating against the fence device.
    • quiet; This is a device-specific argument that Node Assassin uses (see man fence_na for details).
  • With this information collected and compiled, the fenced daemon will call the fence agent and pass it the attribute variable=value pairs, one per line. Thus, the fenced daemon will call:
/sbin/fence_na

Then it will pass to that agent the following arguments:

ipaddr=batou.alteeve.com
login=admin
passwd=secret
quiet=1
port=02
action=reboot

As you can see then, the first four arguments are from the fencedevice attributes and the last two are from the device attributes under an-node02's clusternode's fence tag.
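Because the agent reads these variable=value pairs on STDIN, you can simulate this call by hand if you ever need to test a fence device outside of the cluster. Treat this as a hypothetical sketch only, and be warned that a successful call really will power-cycle the target node. An exit code of 0 indicates success.

printf "ipaddr=batou.alteeve.com\nlogin=admin\npasswd=secret\nquiet=1\nport=02\naction=reboot\n" | /sbin/fence_na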

When you have two or more method tags defined, then the first in the list will be tried. If any of its device tags fail, then the method is considered to have failed and the next method is consulted. This will repeat until all method entries have been tried. At that point, the cluster goes back to the first method and tries again, repeating the walk through all of the methods. This loop will continue until one method succeeds, regardless of how long that might take.

An Example Showing IPMI's Use

This is a full configuration file showing what it would look like if we were using IPMI and a Node Assassin for redundant fencing.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="6">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="01" action="reboot"/>
				</method>
				<method name="an-node01_ipmi">
					<device name="an01_ipmi" action="reboot"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.com" nodeid="2">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="02" action="reboot"/>
				</method>
				<method name="an-node02_ipmi">
					<device name="an02_ipmi" action="reboot"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice name="batou" agent="fence_na" ipaddr="batou.alteeve.com" login="admin" passwd="secret" quiet="1"/>
		<fencedevice name="an01_ipmi" agent="fence_ipmilan" ipaddr="192.168.4.71" login="admin" passwd="secret" />
		<fencedevice name="an02_ipmi" agent="fence_ipmilan" ipaddr="192.168.4.72" login="admin" passwd="secret" />
	</fencedevices>
</cluster>

We now see three elements in fencedevices: the original Node Assassin entry followed by two IPMI entries, one for each node in the cluster. As we touched on earlier, this is because each node has its own IPMI BMC. In the same vein, we also now see that the device entries in each node's IPMI method element have no port setting.

Notice that the Node Assassin's method is above the IPMI method. This means that the Node Assassin is the primary fence device and the IPMI is the secondary. When deciding which order to assign the fence devices, consider the device's potential for failure and how that might affect cluster recovery time. For example, many IPMI BMCs rely on the node's power supply to operate. Thus, if the node's power supply fails and the IPMI is the first fence device, then recovery will be delayed as the cluster tries, waits for the call to time out, and only then moves on to the networked fence device, the Node Assassin in this instance.

Give Nodes More Time To Start

Normally, a cluster must gain quorum before it can fence other nodes. As we saw earlier though, this is not really the case when using the two_node="1" attribute in the cman tag. What this means in practice is that if you start the cluster on one node and then wait too long to start the cluster on the second node, the first will fence the second.

The logic behind this is: when the cluster starts, it will try to talk to its fellow node and fail. With the special two_node="1" attribute set, the cluster knows that it is allowed to start clustered services, but it has no way to say for sure what state the other node is in. It could well be online and hosting services for all it knows. So it has to proceed on the assumption that the other node is alive and using shared resources. Given that, and given that it can not talk to the other node, its only safe option is to fence the other node. Only then can it be confident that it is safe to start providing clustered services.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="7">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="01" action="reboot"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.com" nodeid="2">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="02" action="reboot"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice name="batou" agent="fence_na" ipaddr="batou.alteeve.com" login="admin" passwd="secret" quiet="1"/>
	</fencedevices>
        <fence_daemon post_join_delay="60"/>
</cluster>

The new tag is fence_daemon, seen near the bottom of the file above. The change is made using the post_join_delay="60" attribute. By default, the cluster will declare the other node dead after just 6 seconds. The reason for such a short default is that the larger this value, the slower the start-up of the cluster services will be. During testing and development though, I find the default far too short; it frequently leads to unnecessary fencing. Once your cluster is set up and working, it's not a bad idea to reduce this value to the lowest value that you are comfortable with.

Configuring Totem

This is almost a misnomer, as we're more or less not configuring the totem protocol in this cluster.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="8">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="01" action="reboot"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.com" nodeid="2">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="02" action="reboot"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice name="batou" agent="fence_na" ipaddr="batou.alteeve.com" login="admin" passwd="secret" quiet="1"/>
	</fencedevices>
        <fence_daemon post_join_delay="60"/>
        <totem rrp_mode="none" secauth="off"/>
</cluster>

In the spirit of "keeping it simple", we're not configuring redundant ring protocol in this cluster. RRP is an optional second ring that can be used for cluster communication in the case of a break down in the first ring. This is not the simplest option to set up, as recovery must be done manually. However, if you wish to explore it further, please take a look at the clusternode child element called <altname...>. When altname is used though, the rrp_mode attribute will need to be changed to either active or passive (the details of which are outside the scope of this tutorial).

The second option we're looking at here is the secauth="off" attribute. This controls whether the cluster communications are encrypted or not. We can safely disable this because we're working on a known-private network, which yields two benefits; it's simpler to set up and it's a lot faster. If you must encrypt the cluster communications, then you can do so here. The details of that are also outside the scope of this tutorial though.

Validating Our /etc/cluster/cluster.conf File

The cluster software validates the /etc/cluster/cluster.conf file against /usr/share/system-config-cluster/misc/cluster.ng using the xmllint program. If it fails to validate, the cluster will refuse to start.

So now that we've got the foundation of our cluster ready, the last step is to validate it. To do so, simply run:

xmllint --relaxng /usr/share/system-config-cluster/misc/cluster.ng /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="an-cluster" config_version="8">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="an-node01.alteeve.com" nodeid="1">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="01" action="reboot"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="an-node02.alteeve.com" nodeid="2">
			<fence>
				<method name="node_assassin">
					<device name="batou" port="02" action="reboot"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice name="batou" agent="fence_na" ipaddr="batou.alteeve.com" login="admin" passwd="secret" quiet="1"/>
	</fencedevices>
        <fence_daemon post_join_delay="60"/>
        <totem rrp_mode="none" secauth="off"/>
</cluster>
/etc/cluster/cluster.conf validates

If there was a problem, you need to go back and fix it. DO NOT proceed until your configuration validates. Once it does, we're ready to move on!

Starting the Cluster For The First Time

At this point, we have the foundation of the cluster in place and we can start it up!

Keeping an Eye on Things

I've found a layout of four terminal windows, the left ones being 80 columns wide and the right ones filling the rest of the screen, works well. I personally run a tail -f -n 0 /var/log/messages in the right windows so that I can keep an eye on things.

The terminal layout I use to monitor and operate the two nodes in the cluster.

Of course, what you use is entirely up to you, your screen real-estate and your preferences.

A Note on Timing

Remember that you have post_join_delay seconds to start both nodes, which is 60 seconds in our configuration. So be sure that you can start the cman daemon quickly on both nodes. I generally ensure that both terminal windows have the start command typed in, so that I can quickly press <enter> on both nodes. Again, how you do this is entirely up to you.
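One way to stay well inside the post_join_delay window is to launch the second node's start over ssh in the background and then immediately start the local daemon. This is only one possible approach, and it assumes password-less ssh as root between the nodes:

ssh root@an-node02 "/etc/init.d/cman start" &
/etc/init.d/cman start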

All Systems Are Go!

Time to start cman on both nodes!

On both nodes, run the following command:

/etc/init.d/cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]

If things went well, you should see something like this in the /var/log/messages terminal on both nodes:

Mar 27 22:10:30 an-node01 ccsd[6229]: Starting ccsd 2.0.115: 
Mar 27 22:10:30 an-node01 ccsd[6229]:  Built: Nov 11 2010 13:23:04 
Mar 27 22:10:30 an-node01 ccsd[6229]:  Copyright (C) Red Hat, Inc.  2004  All rights reserved. 
Mar 27 22:10:30 an-node01 ccsd[6229]: cluster.conf (cluster name = an-cluster, version = 8) found. 
Mar 27 22:10:31 an-node01 openais[6235]: [MAIN ] AIS Executive Service RELEASE 'subrev 1887 version 0.80.6' 
Mar 27 22:10:31 an-node01 openais[6235]: [MAIN ] Copyright (C) 2002-2006 MontaVista Software, Inc and contributors. 
Mar 27 22:10:31 an-node01 openais[6235]: [MAIN ] Copyright (C) 2006 Red Hat, Inc. 
Mar 27 22:10:31 an-node01 openais[6235]: [MAIN ] AIS Executive Service: started and ready to provide service. 
Mar 27 22:10:31 an-node01 openais[6235]: [MAIN ] Using default multicast address of 239.192.122.47 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] Token Timeout (10000 ms) retransmit timeout (495 ms) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] token hold (386 ms) retransmits before loss (20 retrans) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] join (60 ms) send_join (0 ms) consensus (2000 ms) merge (200 ms) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] downcheck (1000 ms) fail to recv const (50 msgs) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] seqno unchanged const (30 rotations) Maximum network MTU 1402 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] window size per rotation (50 messages) maximum messages per rotation (17 messages) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] missed count const (5 messages) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] send threads (0 threads) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] RRP token expired timeout (495 ms) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] RRP token problem counter (2000 ms) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] RRP threshold (10 problem count) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] RRP mode set to none. 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] heartbeat_failures_allowed (0) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] max_network_delay (50 ms) 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes). 
Mar 27 22:10:31 an-node01 openais[6235]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes). 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] The network interface [192.168.3.71] is now up. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] Created or loaded sequence id 552.192.168.3.71 for this ring. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] entering GATHER state from 15. 
Mar 27 22:10:32 an-node01 openais[6235]: [CMAN ] CMAN 2.0.115 (built Nov 11 2010 13:23:08) started 
Mar 27 22:10:32 an-node01 openais[6235]: [MAIN ] Service initialized 'openais CMAN membership service 2.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais extended virtual synchrony service' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais cluster membership service B.01.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais availability management framework B.01.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais checkpoint service B.01.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais event service B.01.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais distributed locking service B.01.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais message service B.01.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais configuration service' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais cluster closed process group service v1.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SERV ] Service initialized 'openais cluster config database access v1.01' 
Mar 27 22:10:32 an-node01 openais[6235]: [SYNC ] Not using a virtual synchrony filter. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] Creating commit token because I am the rep. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] Saving state aru 0 high seq received 0 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] Storing new sequence id for ring 22c 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] entering COMMIT state. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] entering RECOVERY state. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] position [0] member 192.168.3.71: 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] previous ring seq 552 rep 192.168.3.71 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] aru 0 high delivered 0 received flag 1 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] Did not need to originate any messages in recovery. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] Sending initial ORF token 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] CLM CONFIGURATION CHANGE 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] New Configuration: 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] Members Left: 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] Members Joined: 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] CLM CONFIGURATION CHANGE 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] New Configuration: 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] 	r(0) ip(192.168.3.71)  
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] Members Left: 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] Members Joined: 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] 	r(0) ip(192.168.3.71)  
Mar 27 22:10:32 an-node01 openais[6235]: [SYNC ] This node is within the primary component and will provide service. 
Mar 27 22:10:32 an-node01 openais[6235]: [TOTEM] entering OPERATIONAL state. 
Mar 27 22:10:32 an-node01 openais[6235]: [CMAN ] quorum regained, resuming activity 
Mar 27 22:10:32 an-node01 openais[6235]: [CLM  ] got nodejoin message 192.168.3.71 
Mar 27 22:10:32 an-node01 ccsd[6229]: Initial status:: Quorate 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] entering GATHER state from 11. 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] Creating commit token because I am the rep. 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] Saving state aru e high seq received e 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] Storing new sequence id for ring 234 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] entering COMMIT state. 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] entering RECOVERY state. 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] position [0] member 192.168.3.71: 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] previous ring seq 556 rep 192.168.3.71 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] aru e high delivered e received flag 1 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] position [1] member 192.168.3.72: 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] previous ring seq 560 rep 192.168.3.72 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] aru c high delivered c received flag 1 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] Did not need to originate any messages in recovery. 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] Sending initial ORF token 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] CLM CONFIGURATION CHANGE 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] New Configuration: 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] 	r(0) ip(192.168.3.71)  
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] Members Left: 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] Members Joined: 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] CLM CONFIGURATION CHANGE 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] New Configuration: 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] 	r(0) ip(192.168.3.71)  
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] 	r(0) ip(192.168.3.72)  
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] Members Left: 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] Members Joined: 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] 	r(0) ip(192.168.3.72)  
Mar 27 22:10:33 an-node01 openais[6235]: [SYNC ] This node is within the primary component and will provide service. 
Mar 27 22:10:33 an-node01 openais[6235]: [TOTEM] entering OPERATIONAL state. 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] got nodejoin message 192.168.3.71 
Mar 27 22:10:33 an-node01 openais[6235]: [CLM  ] got nodejoin message 192.168.3.72 
Mar 27 22:10:33 an-node01 openais[6235]: [CPG  ] got joinlist message from node 1

What you see is:

  • The cluster configuration system daemon, ccsd, starts up and reads in /etc/cluster/cluster.conf. It reports the name of the cluster, an-cluster and the version, 8.
  • OpenAIS then starts up, reports the multicast address it will use, many of its variable values and the IP address it will use for cluster communications.
  • The Cluster Manager, cman, starts and reports the version of various services in use.
  • The totem protocol is started and it forms an initial configuration containing just itself. These messages have the prefix CLM, CLuster Membership.
    • Then it waits to see if the other node will join. On the other node's log, you will see it start off and immediately join with this first node.
  • The initial configuration is sufficient to gain quorum and declares that it will provide services.
  • The second node announces that it wants to join the first node's cluster membership and the cluster reconfigures.

If you got this, then your cluster is up and running, congratulations!
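If you would rather not read through the log output, cman_tool will summarize membership and quorum for you. The exact output depends on your cluster, so it is not reproduced here; look for both nodes being listed as members.

cman_tool status
cman_tool nodes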

Setting Up Clustered Storage

The next few steps will cover setting up the DRBD resources, using them with clustered LVM and then creating a GFS2 partition. Next, we will add all of this as cluster resources and create a service for each node to start up the clustered storage.

Creating Our DRBD Resources

We're going to create four DRBD resources:

  • A resource to back our shared GFS2 partition which will hold shared files, like our virtual machine configuration files.
  • A resource to back the VMs running primarily on an-node01.
  • A resource to back the VMs running primarily on an-node02.
  • A final resource that will be left alone for future expansion. This is optional, of course.

The "Why" of Our Layout

The reason for this is to minimize the chance of data loss in a split-brain event.

A split-brain occurs when a DRBD resource loses its network link while in Primary/Primary mode. The problem is that, after the split, any write to either node is not replicated to the other node. Thus, after even one byte is written, the DRBD resource is out of sync. Once this happens, there is no real way to automate recovery. You will need to go in and manually flag one side of the resource to discard its changes, and then manually re-connect the two sides before the resource will be usable again.

We will take steps to prevent this, but it is always a possibility with shared storage.

Given then that there is no sure way to avoid this, we're going to mitigate risk by breaking up our DRBD resources so that we can be more selective in choosing what parts to invalidate after a split brain event.

  • The small GFS2 partition will be the hardest to manage. For this reason, it is on its own. For the same reason, we will be using it as little as we can, and copies of files we care about will be stored on each node. The main things stored here are the VM configuration files. These should be written to rarely, so with luck, nothing will have been written to either side during a split-brain and recovery will be arbitrary and simple.
  • The VMs that will primarily run on an-node01 will get their own resource. This way we can simply invalidate the DRBD device on the node that was not running the VMs during the split brain.
  • Likewise, the VMs primarily running on an-node02 will get their own resource. This way, if a split brain happens while VMs are running on both nodes, it should be easy to invalidate the opposing node for each respective DRBD resource.
  • The fourth DRBD resource will just contain free space. This can later be added whole to an existing LVM VG or further divided up as needed in the future.

Modifying the Physical Storage

Warning: Multiple assumptions ahead. If you are comfortable with fdisk (and possibly mdadm), you can largely skip this section. You will need to create four partitions; this tutorial uses a 10 GiB partition for shared files, two 100 GiB partitions and the remainder of the space in a last partition. These will be four logical partitions within an extended partition: /dev/sda5, /dev/sda6, /dev/sda7 and /dev/sda8 respectively.

This tutorial, in the interest of simplicity and not aiming to be a disk management tutorial, uses single-disk storage on each node. If you only have one disk, or if you have hardware RAID, this is sufficient. However, if you have multiple disks and want to use software RAID on your nodes, you will need to create /dev/mdX devices to match the layout we will be creating.

We will need four new partitions; a 10 GiB partition for the GFS2 resource, two 100 GiB partitions for the VMs on either node and the remainder of the disk's free space for the last partition. To do this, we will use the fdisk tool. Be aware; This tool directly edits the hard drive's geometry. This is obviously risky! All along, this tutorial has assumed that you are working on test nodes, but it bears repeating again. Do not do this on a machine with data you care about! At the very least, have a good backup.

Finally, this assumes that you used the kickstart script when setting up your nodes. More to the point, it assumes an existing fourth primary partition which we will delete, convert to an extended partition and then within that create the four usable partitions.

So first, delete the fourth partition.

fdisk /dev/sda
The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Confirm that the layout is indeed four partitions.

Command (m for help): p
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      257008+  83  Linux
/dev/sda2              33        2643    20972857+  83  Linux
/dev/sda3            2644        3165     4192965   82  Linux swap / Solaris
/dev/sda4            3166       60801   462961170   83  Linux

It is, so let's delete /dev/sda4 and then confirm that it is gone.

Command (m for help): d
Partition number (1-4): 4

Command (m for help): p
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      257008+  83  Linux
/dev/sda2              33        2643    20972857+  83  Linux
/dev/sda3            2644        3165     4192965   82  Linux swap / Solaris

It is, so now we'll create the extended partition.

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Selected partition 4
First cylinder (3166-60801, default 3166): <enter>
Using default value 3166
Last cylinder or +size or +sizeM or +sizeK (3166-60801, default 60801): <enter>
Using default value 60801

Again, a quick check to make sure the extended partition is now there.

Command (m for help): p
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      257008+  83  Linux
/dev/sda2              33        2643    20972857+  83  Linux
/dev/sda3            2644        3165     4192965   82  Linux swap / Solaris
/dev/sda4            3166       60801   462961170    5  Extended

Finally, let's create the four partitions.

Command (m for help): n
First cylinder (3166-60801, default 3166): 
Using default value 3166
Last cylinder or +size or +sizeM or +sizeK (3166-60801, default 60801): +10G
Command (m for help): n
First cylinder (4383-60801, default 4383): <enter>
Using default value 4383
Last cylinder or +size or +sizeM or +sizeK (4383-60801, default 60801): +100G
Command (m for help): n
First cylinder (16542-60801, default 16542): <enter>
Using default value 16542
Last cylinder or +size or +sizeM or +sizeK (16542-60801, default 60801): +100G
Command (m for help): n
First cylinder (28701-60801, default 28701): <enter>
Using default value 28701
Last cylinder or +size or +sizeM or +sizeK (28701-60801, default 60801): <enter>
Using default value 60801

Finally, check that the four new partitions exist.

Command (m for help): p
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      257008+  83  Linux
/dev/sda2              33        2643    20972857+  83  Linux
/dev/sda3            2644        3165     4192965   82  Linux swap / Solaris
/dev/sda4            3166       60801   462961170    5  Extended
/dev/sda5            3166        4382     9775521   83  Linux
/dev/sda6            4383       16541    97667136   83  Linux
/dev/sda7           16542       28700    97667136   83  Linux
/dev/sda8           28701       60801   257851251   83  Linux

We do! So now we'll commit the changes to disk and exit.

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

Warning: Repeat the steps on the other node and double-check that the output of fdisk -l /dev/sda shows the same Start and End boundaries. If they do not match, fix this before proceeding.

Note: This was done on the same disk as the host OS, so we'll need to reboot before we can proceed.
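A quick way to compare the two partition tables, assuming password-less root ssh between the nodes, is to diff the fdisk output directly; no output means the tables match:

diff <(fdisk -l /dev/sda) <(ssh root@an-node02 "fdisk -l /dev/sda")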

Creating the DRBD Resources

Now that we have each node's storage ready, we can configure and start the DRBD resources. DRBD has "resource names", which are its internal references to each "array". These names are used whenever you are working on the resource using drbdadm or similar tools. The tradition is to name the resources as rX, with X being a sequence number starting at 0. The resource itself is made available as a normal /dev/ block device. The tradition is to name this device /dev/drbdX where X matches the resource's sequence number.

The DRBD Fence Script

Red Hat's Lon Hohberger created a DRBD script called obliterate that allows DRBD to trigger a fence call through the cluster when it detects a split-brain condition. The goal behind this is to stop the resource(s) from being flagged as "split-brain" in the first place, thus avoiding manual recovery.

Download the script below and save it under your /sbin/ directory.

Then ensure that it is executable.

chmod 755 /sbin/obliterate
ls -lah /sbin/obliterate
-rwxr-xr-x 1 root root 2.1K Mar  4 23:44 /sbin/obliterate

Our Desired Layout in Detail

Let's review how we will bring the devices together.

an-node01   an-node02   DRBD Resource   DRBD Device   Size        Note
/dev/sda5   /dev/sda5   r0              /dev/drbd0    10 GB       GFS2 partition for VM configurations and shared files
/dev/sda6   /dev/sda6   r1              /dev/drbd1    100 GB      Host VMs that will primarily run on an-node01
/dev/sda7   /dev/sda7   r2              /dev/drbd2    100 GB      Host VMs that will primarily run on an-node02
/dev/sda8   /dev/sda8   r3              /dev/drbd3    Remainder   Free space that can later be allocated to an existing VG as-is or further divided up into two or more DRBD resources as future needs dictate.

Configuring /etc/drbd.conf

With this plan then, we can now create the /etc/drbd.conf configuration file.

The initial file is very sparse;

cat /etc/drbd.conf
#
# please have a a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#

Setting up the 'global' Directive

There are a lot of options available to you, many of which are outside the scope of this tutorial. You can get a good overview of all options by reading the man page: man drbd.conf.

The first section we will add is the global { } directive. There is only one argument we will set, which tells DRBD that it can count our install in the Linbit user information. If you have privacy concerns, set this to no.

# The 'global' directive covers values that apply to DRBD in general.
global {
        # This tells Linbit that it's okay to count us as a DRBD user. If you
        # have privacy concerns, set this to 'no'.
        usage-count     yes;
}

Setting up the 'common' Directive

The next directive is common { }. This sets values to be used on all DRBD resources by default. You can override common values in any given resource directive later.

The example below is well documented, so please take a moment to look at the example for r0.

# The 'common' directive sets defaults values for all resources.
common {
        # Protocol 'C' tells DRBD to not report a disk write as complete until
        # it has been confirmed written to both nodes. This is required for
        # Primary/Primary use.
        protocol C;

        # This sets the default sync rate to 15 MiB/sec. Be careful about
        # setting this too high! High speed sync'ing can flog your drives and
        # push disk I/O times very high.
        syncer {
                rate 15M;
        }
        
        # This tells DRBD what policy to use when a fence is required.
        disk {
                # This tells DRBD to block I/O (resource) and then try to fence
                # the other node (stonith). The 'stonith' option requires that
                # we set a fence handler below. The name 'stonith' comes from
                # "Shoot The Other Nide In The Head" and is a term used in
                # other clustering environments. It is synonomous with with
                # 'fence'.
                fencing         resource-and-stonith;
        }

        # We set 'stonith' above, so here we tell DRBD how to actually fence
        # the other node.
        handlers {
                # The term 'outdate-peer' comes from other scripts that flag
                # the other node's resource backing device as 'Inconsistent'.
                # In our case though, we're flat-out fencing the other node,
                # which has the same effective result.
                outdate-peer    "/sbin/obliterate";
        }

        # Here we tell DRBD that we want to use Primary/Primary mode. It is
        # also where we define split-brain (sb) recovery policies. As we'll be
        # running all of our resources in Primary/Primary, only the
        # 'after-sb-2pri' really means anything to us.
        net {
                # Tell DRBD to allow dual-primary.
                allow-two-primaries;

                # Set the recover policy for split-brain recover when no device
                # in the resource was primary.
                after-sb-0pri   discard-zero-changes;

                # Now if one device was primary.
                after-sb-1pri   discard-secondary;

                # Finally, set the policy when both nodes were Primary. The
                # only viable option is 'disconnect', which tells DRBD to
                # simply tear-down the DRBD resource right away and wait for
                # the administrator to manually invalidate one side of the
                # resource.
                after-sb-2pri   disconnect;
        }

        # This tells DRBD what to do when the resource starts.
        startup {
                # In our case, we're telling DRBD to promote both devices in
                # our resource to Primary on start.
                become-primary-on       both;
        }
}

Let's stop for a moment and talk about DRBD synchronization.

A DRBD resource does not have to be synced before it can be made Primary/Primary. For this reason, the default sync rate for DRBD is very, very low (320 KiB/sec). This means that you can normally start your DRBD in Primary/Primary on both nodes and get to work while the synchronization putters along in the background.

However!

If the UpToDate node goes down, the surviving Inconsistent node will demote to Secondary, thus becoming unusable. In a high-availability environment like ours, this is pretty useless. So for this reason we will want to get the resources in sync as fast as possible. Likewise, while a node is sync'ing, we will not be able to run the VMs on the Inconsistent node.

The temptation then is to set rate above to the maximum write speed of our disks. This is a bad idea!

We will have four separate resources sharing the same underlying disks. If you drive the sync rate very high, I/O on the other UpToDate resources will be severely impacted; so much so that I've seen crashes caused by this. So you will want to keep this value at a sane level: set the rate as high as you can while still leaving the disks sufficiently unloaded that other I/O remains feasible. I've personally found 15M on single-drive and simple RAID machines to be a good value. Feel free to experiment for yourself.
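If you ever need one resource to sync faster temporarily (say, overnight while the VMs are idle), DRBD 8.3 lets you override the rate at run time instead of editing the configuration. This is a hedged sketch; confirm the syntax against man drbdsetup for your version before relying on it.

drbdsetup /dev/drbd0 syncer -r 100M

When the fast sync is done, drbdadm adjust r0 will return the resource to the rate defined in /etc/drbd.conf.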

Setting up the Resource Directives

We now define the resources themselves. Each resource will be contained in a directive called resource x, where x is the actual resource name (r0, r1, r2 and r3 in our case). Within this directive, all resource-specific options are set.

The example below is well documented, so please take a moment to look at the example for r0.

# The 'resource' directive defines a given resource and must be followed by the
# resource's name.
# This will be used as the GFS2 partition for shared files.
resource r0 {
        # This is the /dev/ device to create to make available this DRBD
        # resource.
        device          /dev/drbd0;

        # This tells DRBD where to store its internal state information. We
        # will use 'internal', which tells DRBD to store the information at the
        # end of the resource's space.
        meta-disk       internal;

        # The next two 'on' directives set up each individual node's settings.
        # The value after the 'on' directive *MUST* match the output of
        # `uname -n` on each node.
        on an-node01.alteeve.com {
                # This is the network IP address on the network interface and
                # the TCP port to use for communication between the nodes. Note
                # that the IP address below is on our Storage Network. The TCP
                # port must be unique per resource, but the interface itself
                # can be shared. 
                # IPv6 is usable with 'address ipv6 [address]:port'.
                address         192.168.2.71:7789;

                # This is the node's storage device that will back this
                # resource.
                disk            /dev/sda5;
        }

        # Same as above, but altered to reflect the second node.
        on an-node02.alteeve.com {
                address         192.168.2.72:7789;
                disk            /dev/sda5;
        }
}

The r1, r2 and r3 resources should be nearly identical to the example above. The main differences will be the device value and the values within each node's on x { } directive. We will increment the TCP ports to 7790, 7791 and 7792 respectively. Likewise, we will alter the disk to /dev/sda6, /dev/sda7 and /dev/sda8 respectively. Finally, the device will be incremented to /dev/drbd1, /dev/drbd2 and /dev/drbd3 respectively.

Housekeeping Before Starting Our DRBD Resources

Let's take a look at the complete /etc/drbd.conf file, validate it for use and then push it to the second node.

The Finished /etc/drbd.conf File

The finished /etc/drbd.conf file should look more or less like this:

#
# please have a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#

# The 'global' directive covers values that apply to DRBD in general.
global {
	# This tells Linbit that it's okay to count us as a DRBD user. If you
	# have privacy concerns, set this to 'no'.
	usage-count	yes;
}

# The 'common' directive sets default values for all resources.
common {
	# Protocol 'C' tells DRBD to not report a disk write as complete until
	# it has been confirmed written to both nodes. This is required for
	# Primary/Primary use.
        protocol	C;

	# This sets the default sync rate to 15 MiB/sec. Be careful about
	# setting this too high! High speed sync'ing can flog your drives and
	# push disk I/O times very high.
        syncer {
                rate	15M;
        }
	
	# This tells DRBD what policy to use when a fence is required.
        disk {
		# This tells DRBD to block I/O (resource) and then try to fence
		# the other node (stonith). The 'stonith' option requires that
		# we set a fence handler below. The name 'stonith' comes from
		# "Shoot The Other Nide In The Head" and is a term used in
		# other clustering environments. It is synonomous with with
		# 'fence'.
                fencing		resource-and-stonith;
        }

	# We set 'stonith' above, so here we tell DRBD how to actually fence
	# the other node.
        handlers {
		# The term 'outdate-peer' comes from other scripts that flag
		# the other node's resource backing device as 'Inconsistent'.
		# In our case though, we're flat-out fencing the other node,
		# which has the same effective result.
                outdate-peer	"/sbin/obliterate";
        }
	
	# Here we tell DRBD that we want to use Primary/Primary mode. It is
	# also where we define split-brain (sb) recovery policies. As we'll be
	# running all of our resources in Primary/Primary, only the
	# 'after-sb-2pri' really means anything to us.
        net {
		# Tell DRBD to allow dual-primary.
                allow-two-primaries;

		# Set the recovery policy for split-brain recovery when no
		# device in the resource was primary.
                after-sb-0pri	discard-zero-changes;

		# Now if one device was primary.
                after-sb-1pri	discard-secondary;

		# Finally, set the policy when both nodes were Primary. The
		# only viable option is 'disconnect', which tells DRBD to
		# simply tear-down the DRBD resource right away and wait for
		# the administrator to manually invalidate one side of the
		# resource.
                after-sb-2pri	disconnect;
        }
	
	# This tells DRBD what to do when the resource starts.
        startup {
		# In our case, we're telling DRBD to promote both devices in
		# our resource to Primary on start.
                become-primary-on 	both;
        }
}

# The 'resource' directive defines a given resource and must be followed by the
# resource's name.
# This will be used as the GFS2 partition for shared files.
resource r0 {
	# This is the /dev/ device to create to make available this DRBD
	# resource.
        device 		/dev/drbd0;
	
	# This tells DRBD where to store its internal state information. We
	# will use 'internal', which tells DRBD to store the information at the
	# end of the resource's space.
        meta-disk 	internal;
	
	# The next two 'on' directives set up each individual node's settings.
	# The value after the 'on' directive *MUST* match the output of
	# `uname -n` on each node.
        on an-node01.alteeve.com {
		# This is the network IP address on the network interface and
		# the TCP port to use for communication between the nodes. Note
		# that the IP address below is on our Storage Network. The TCP
		# port must be unique per resource, but the interface itself
		# can be shared. 
		# IPv6 is usable with 'address ipv6 [address]:port'.
                address 	192.168.2.71:7789;
		
		# This is the node's storage device that will back this
		# resource.
                disk    	/dev/sda5;
        }
	
	# Same as above, but altered to reflect the second node.
        on an-node02.alteeve.com {
                address 	192.168.2.72:7789;
                disk    	/dev/sda5;
        }
}

# This will be used to host VMs running primarily on an-node01.
resource r1 {
        device          /dev/drbd1;

        meta-disk       internal;

        on an-node01.alteeve.com {
                address         192.168.2.71:7790;
                disk            /dev/sda6;
        }

        on an-node02.alteeve.com {
                address         192.168.2.72:7790;
                disk            /dev/sda6;
        }
}

# This will be used to host VMs running primarily on an-node02.
resource r2 {
        device          /dev/drbd2;

        meta-disk       internal;

        on an-node01.alteeve.com {
                address         192.168.2.71:7791;
                disk            /dev/sda7;
        }

        on an-node02.alteeve.com {
                address         192.168.2.72:7791;
                disk            /dev/sda7;
        }
}

# This will be set aside as free space for future expansion.
resource r3 {
        device          /dev/drbd3;

        meta-disk       internal;

        on an-node01.alteeve.com {
                address         192.168.2.71:7792;
                disk            /dev/sda8;
        }

        on an-node02.alteeve.com {
                address         192.168.2.72:7792;
                disk            /dev/sda8;
        }
}

Validating the /etc/drbd.conf Syntax

To check for errors, we will validate the /etc/drbd.conf file. To do this, run drbdadm dump. If there are syntactical errors, fix them before proceeding. Once the file is correct, drbdadm will dump its view of the configuration to the screen with minimal commenting. Don't worry about slight differences (ie: meta-disk internal; being shown inside the on { } directives).

The first time you ever do this, you will also see a note telling you that you are the nth DRBD user.

drbdadm dump
  --==  Thank you for participating in the global usage survey  ==--
The server's response is:

you are the 8286th user to install this version
# /etc/drbd.conf
common {
    protocol               C;
    net {
        allow-two-primaries;
        after-sb-0pri    discard-zero-changes;
        after-sb-1pri    discard-secondary;
        after-sb-2pri    disconnect;
    }
    disk {
        fencing          resource-and-stonith;
    }
    syncer {
        rate             15M;
    }
    startup {
        become-primary-on both;
    }
    handlers {
        fence-peer       /sbin/obliterate;
    }
}

# resource r0 on an-node01.alteeve.com: not ignored, not stacked
resource r0 {
    on an-node01.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/sda5;
        address          ipv4 192.168.2.71:7789;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/sda5;
        address          ipv4 192.168.2.72:7789;
        meta-disk        internal;
    }
}

# resource r1 on an-node01.alteeve.com: not ignored, not stacked
resource r1 {
    on an-node01.alteeve.com {
        device           /dev/drbd1 minor 1;
        disk             /dev/sda6;
        address          ipv4 192.168.2.71:7790;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd1 minor 1;
        disk             /dev/sda6;
        address          ipv4 192.168.2.72:7790;
        meta-disk        internal;
    }
}

# resource r2 on an-node01.alteeve.com: not ignored, not stacked
resource r2 {
    on an-node01.alteeve.com {
        device           /dev/drbd2 minor 2;
        disk             /dev/sda7;
        address          ipv4 192.168.2.71:7791;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd2 minor 2;
        disk             /dev/sda7;
        address          ipv4 192.168.2.72:7791;
        meta-disk        internal;
    }
}

# resource r3 on an-node01.alteeve.com: not ignored, not stacked
resource r3 {
    on an-node01.alteeve.com {
        device           /dev/drbd3 minor 3;
        disk             /dev/sda8;
        address          ipv4 192.168.2.71:7792;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd3 minor 3;
        disk             /dev/sda8;
        address          ipv4 192.168.2.72:7792;
        meta-disk        internal;
    }
}
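
If you just want a quick pass-or-fail check (from a script, for example), you can throw away the dump output and test the exit code instead. A small sketch; as far as I know, drbdadm exits non-zero when it can't parse the file, but confirm that behaviour on your own version.

drbdadm dump > /dev/null && echo "drbd.conf parses cleanly" || echo "drbd.conf has errors"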

Copying The /etc/drbd.conf to the Second Node

Assuming you wrote the /etc/drbd.conf file on an-node01, we now need to copy it to an-node02 before we can start things up.

rsync -av /etc/drbd.conf root@an-node02:/etc/
building file list ... done
drbd.conf

sent 5552 bytes  received 48 bytes  11200.00 bytes/sec
total size is 5454  speedup is 0.97
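
If you want to be sure that the copy really matches, a quick checksum comparison does the trick. This assumes root ssh access from an-node01 to an-node02, as used for the rsync above.

md5sum /etc/drbd.conf
ssh root@an-node02 "md5sum /etc/drbd.conf"

The two sums should be identical.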

Loading the DRBD Module

By default, the /etc/init.d/drbd initialization script handles loading and unloading the drbd module. It's too early for us to start the DRBD resources using the initialization script, so we need to manually load the module ourselves. This will only need to be done once. After you get the DRBD resources up for the first time, you can safely use /etc/init.d/drbd.

To load the module, run:

modprobe drbd

You can verify that the module is loaded using lsmod.

lsmod |grep drbd
drbd                  277144  0

The module also creates a /proc file called drbd. By cat'ing this, we can watch the progress of our work. I'd recommend opening a terminal window on each node and tracking it using watch.

watch cat /proc/drbd
Every 2.0s: cat /proc/drbd                                                                     Tue Mar 29 13:03:44 2011

version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:27

In the steps ahead, I will show what the output from watch'ing /proc/drbd will be.

Initializing Our Resources

Before we can start each resource, we must first initialize each of the backing device partitions. This is done by running drbdadm create-md x. We'll run this on both nodes, replacing x with the four resource names.

The first time you do this, the command will execute right away.

drbdadm create-md r0
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

If the partition has ever been used in a DRBD device before, though, you will need to confirm that you want to overwrite the existing meta-data.

drbdadm create-md r0

Type yes when prompted.

You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/sda5 at byte offset 10010128384
Do you really want to overwrite the existing v08 meta-data?
[need to type 'yes' to confirm] yes
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

Repeat for all four resource names, then do the same on the other node.
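
Rather than typing the command four times on each node, you can loop over the resource names. A minimal sketch, assuming the r0 through r3 resource names used in this tutorial:

for r in r0 r1 r2 r3; do drbdadm create-md $r; done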


Notes

These are notes to work in later that I don't want to forget.

  • You must have clvmd running when you run vgcreate, otherwise the new VG will not automatically be created as a cluster VG. To be safe, it's probably best to just always use vgcreate -c y ... (--clustered yes). See rhbz 684896. Also set fallback_to_local_locking = 0 when not using local (non-clustered) LVM on the nodes (which we won't be).
  • Because we will be live-migrating the VMs across the clustered LVs, we need to configure volume_list in /etc/lvm/lvm.conf. This value restricts which VGs/LVs may be activated on a given host. The tag name itself is not so important, only that it is used consistently on all of our VGs and is prefixed with @ (allowed characters are A-Z a-z 0-9 _ + . -). We will use @an-cluster in this tutorial, so we will set volume_list = ["@an-cluster"]. With that set, the actual command to tag an existing VG is vgchange --addtag @an-cluster /dev/VGNAME, or you can add --addtag @an-cluster to the vgcreate command in the first place. So the order of setup is: set up DRBD, set volume_list in /etc/lvm/lvm.conf, use pvcreate to assign the DRBD resources as PVs, use vgcreate with the --addtag @an-cluster switch and then create the LVs as normal (see the sketch after this list). If the tag is not set or volume_list is not configured properly, you will see the error /dev/drbd_sh1_vg0/xen_shared: not found: device not cleared (newline) Aborting. Failed to wipe start of new LV..
  • When repairing a split-brain, never use --overwrite-data-of-peer; always use --discard-my-data (which really means "discard my modifications"). See this.
  • Create a GFS2 partition with mkfs.gfs2 -p lock_dlm -j 2 -t xencluster03:xen_shared /dev/mapper/drbd_sh0_vg0-xen_shared.
  • Here is an example provision command:
virt-install --connect xen \
	--name vm0002_c5_lz7_1 \
	--ram 2048 \
	--arch x86_64 \
	--vcpus 1 \
	--cpuset 1-7 \
	--location http://10.255.0.1/c5/x86_64/img \
	--extra-args "ks=http://10.255.0.1/c5/x86_64/ks/labzilla_c5.ks" \
	--os-type linux \
	--os-variant rhel5.4 \
	--disk path=/dev/drbd_x13_vg0/vm0002_c5_lz7_1 \
	--network bridge=xenbr0 \
	--vnc \
	--paravirt
  • Remember to configure /etc/ntp.conf and to enable ntpd at start.
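
To tie the LVM notes above together, here is a minimal sketch of the clustered VG setup order on top of the r1 DRBD resource. The VG name drbd_r1_vg0 and LV name vm0001_disk are hypothetical placeholders; substitute your own naming scheme, and make sure volume_list is already set in /etc/lvm/lvm.conf and clvmd is running.

# Make the DRBD resource a physical volume.
pvcreate /dev/drbd1

# Create a clustered volume group, tagged so it matches our volume_list filter.
vgcreate -c y --addtag @an-cluster drbd_r1_vg0 /dev/drbd1

# Carve out a logical volume for a VM as normal.
lvcreate -L 20G -n vm0001_disk drbd_r1_vg0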

 

Any questions, feedback, advice, complaints or meanderings are welcome.