2-Node Red Hat KVM Cluster Tutorial - Archive
Revision as of 01:31, 10 September 2011
This paper has one goal:
- Creating a 2-node, high-availability cluster hosting KVM virtual machines, using RHCS "stable 3" with DRBD and clustered LVM for synchronizing storage data. This is an updated version of the earlier Red Hat Cluster Service 2 Tutorial. If you've previously followed that document, you will find much in common with this one. Please don't skip large sections though; there are some differences that are subtle but important.
Grab a coffee, a comfy chair, put on some nice music and settle in for some geekly fun.
The Task Ahead
Before we start, let's take a few minutes to discuss clustering and its complexities.
Technologies We Will Use
- Enterprise Linux 6; You can use a derivative like CentOS v6.
- Red Hat Cluster Services "Stable" version 3. This describes the following core components:
- Corosync; Provides cluster communications using the totem protocol.
- Cluster Manager (cman); Handles the starting, stopping and overall management of the cluster.
- Resource Manager (rgmanager); Manages cluster resources and services. Handles service recovery during failures.
- Clustered Logical Volume Manager (clvm); Cluster-aware (disk) volume manager. Backs GFS2 filesystems and KVM virtual machines.
- Global File Systems version 2 (gfs2); Cluster-aware, concurrently mountable file system.
- Distributed Replicated Block Device (DRBD); Keeps shared data synchronized across cluster nodes.
- KVM; Hypervisor that controls and supports virtual machines.
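On CentOS 6, the stack above maps onto a handful of packages. As a rough sketch only (exact package names and groups vary by distribution and repository; verify against your repos before running):

```shell
# Core cluster stack: corosync is pulled in as a dependency of cman.
yum install cman rgmanager lvm2-cluster gfs2-utils

# KVM hypervisor and management tools.
yum install qemu-kvm libvirt virt-install

# DRBD is not in the base repositories; these package names assume
# a third-party repository such as ELRepo has been enabled (assumption).
yum install drbd83-utils kmod-drbd83
```

If a package name does not resolve, `yum search drbd` (or similar) will show what your enabled repositories actually provide.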
A Note on Patience
There is nothing inherently hard about clustering. However, there are many components that you need to understand before you can begin. The result is that clustering has an inherently steep learning curve.
You must have patience. Lots of it.
Many technologies can be learned by creating a very simple base and then building on it. The classic "Hello, World!" script created when first learning a programming language is an example of this. Unfortunately, there is no real analogue to this in clustering. Even the most basic cluster requires several pieces be in place and working together. If you try to rush by ignoring pieces you think are not important, you will almost certainly waste time. A good example is setting aside fencing, thinking that your test cluster's data isn't important. The cluster software has no concept of "test". It treats everything as critical all the time and will shut down if anything goes wrong.
Take your time, work through these steps, and you will have the foundation cluster sooner than you realize. Clustering is fun because it is a challenge.
Network
The cluster will use three Class B networks, broken down as:
Purpose | Subnet | Notes |
---|---|---|
Internet-Facing Network (IFN) | 10.255.0.0/16 | |
Storage Network (SN) | 10.10.0.0/16 | |
Back-Channel Network (BCN) | 10.20.0.0/16 | Miscellaneous equipment in the cluster, like managed switches, will use 10.20.3.z, where z is a simple sequence. |
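With three subnets per node, it helps to give each interface a resolvable name. A minimal sketch of /etc/hosts entries, assuming .bcn, .sn and .ifn hostname suffixes (the suffixes are an illustrative convention, not something required by the cluster software):

```
# Back-Channel Network (10.20.0.0/16)
10.20.0.1    an-node01.bcn an-node01
10.20.0.2    an-node02.bcn an-node02

# Storage Network (10.10.0.0/16)
10.10.0.1    an-node01.sn
10.10.0.2    an-node02.sn

# Internet-Facing Network (10.255.0.0/16)
10.255.0.1   an-node01.ifn
10.255.0.2   an-node02.ifn
```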
We will be using six interfaces, bonded into three pairs of two NICs in Active/Passive (mode=1). Each link of each bond will be connected to one of two alternate, unstacked switches. This is the only bonding configuration supported by Red Hat for use in clusters.
If you cannot install six interfaces in your server, then four will do, with the SN and BCN networks merged.
In this tutorial, we will use two D-Link DGS-3100-24 switches, unstacked, with three VLANs to isolate the three networks.
You could just as easily use four or six unmanaged 5-port or 8-port switches. What matters is that the three subnets are isolated and that each link of each bond is on a separate switch. Lastly, for security, connect only the IFN switches or VLANs to the Internet.
The actual mapping of interfaces to bonds to networks will be:
Subnet | Link 1 | Link 2 | Bond | IP |
---|---|---|---|---|
BCN | eth0 | eth3 | bond0 | 10.20.0.x |
SN | eth1 | eth4 | bond1 | 10.10.0.x |
IFN | eth2 | eth5 | bond2 | 10.255.0.x |
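As a sketch of what this mapping looks like in /etc/sysconfig/network-scripts/ on an-node01, here is the SN pair: eth1 and eth4 slaved to bond1, which holds the IP directly since no bridge is needed on the storage network. The values shown are illustrative; adjust device names and addresses to your hardware:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1
# (ifcfg-eth4 is identical except for DEVICE=eth4)
DEVICE=eth1
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BONDING_OPTS="mode=1 miimon=100"
IPADDR=10.10.0.1
NETMASK=255.255.0.0
ONBOOT=yes
BOOTPROTO=static
```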
Setting Up the Network
To set up our network, we will need to edit the ifcfg-ethX, ifcfg-bondX and ifcfg-vbrX scripts. The last of these creates the bridges that will route network connections to the virtual machines. We won't be creating a vbr1 bridge though, as bond1 will be dedicated to storage and never used by a VM. Note that the bridges, not the bonded interfaces, will hold the IP addresses; the bonds will instead be slaved to their respective bridges.
So our setup will be:
Node | BCN IP and Device | SN IP and Device | IFN IP and Device |
---|---|---|---|
an-node01 | 10.20.0.1 on vbr0 (bond0 slaved) | 10.10.0.1 on bond1 | 10.255.0.1 on vbr2 (bond2 slaved) |
an-node02 | 10.20.0.2 on vbr0 (bond0 slaved) | 10.10.0.2 on bond1 | 10.255.0.2 on vbr2 (bond2 slaved) |
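A minimal sketch of the bridged BCN configuration on an-node01, following the table above: bond0 carries no IP of its own and is slaved to vbr0, which holds the address (values are illustrative):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BRIDGE=vbr0
BONDING_OPTS="mode=1 miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-vbr0
DEVICE=vbr0
TYPE=Bridge
IPADDR=10.20.0.1
NETMASK=255.255.0.0
ONBOOT=yes
BOOTPROTO=static
```

After editing the scripts, restarting the network service (or rebooting) brings up the bonds and bridges; `brctl show` should then list bond0 as an interface under vbr0.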
Thanks
Any questions, feedback, advice, complaints or meanderings are welcome.