DRBD on Fedora 13


Warning: Until this warning is removed, do not use or trust this document. When complete and tested, this warning will be removed.

This article covers installing and configuring DRBD on a two-node Fedora 13 cluster.

Why DRBD?

DRBD is useful in small clusters as it provides real-time mirroring of data across two (or more) nodes. In two-node clusters, this can be used to host clustered LVM physical volumes. On these volumes you can create logical volumes to host GFS2 partitions, virtual machines, iSCSI and so forth.
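To give a rough idea of the layering this enables, here is a minimal sketch (not steps to run yet; it assumes the DRBD device /dev/drbd0 is already configured and Primary, clvmd is running, and the volume group, cluster and filesystem names are placeholders I've picked for illustration):

# Turn the DRBD device into a clustered LVM physical volume.
pvcreate /dev/drbd0
vgcreate drbd_vg /dev/drbd0
# Carve out a logical volume and put a GFS2 filesystem on it.
lvcreate -L 100G -n shared01 drbd_vg
mkfs.gfs2 -p lock_dlm -t an-cluster:shared01 -j 2 /dev/drbd_vg/shared01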

Install

yum -y install drbd.x86_64 drbd-xen.x86_64
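Once the install finishes, you can confirm that the packages and the kernel module are in place (a quick sanity check; exact version numbers will vary):

rpm -q drbd drbd-xen
modinfo drbd | grep ^version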

If You're Running Xen From Mercurial

The above RPM install will pull in the Xen 3.4.3 hypervisor and tools. To get back to the Xen 4.0.0 tools, re-run the make install-* targets.

Note: Change to the directory where you checked out the Mercurial repository.

cd xen-4.0-testing.hg
make install-xen
make install-tools
make install-stubdom
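After the make install-* steps finish, you can check that the 4.0.0 hypervisor and tools are back in place (a quick check, assuming the xm toolstack is in use):

xm info | grep -E 'xen_(major|minor|extra)'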

Configure

We need to see how much space you have left on your LVM physical volume (PV). The pvscan tool will show you this.

pvscan
  PV /dev/sda2   VG an-lvm01   lvm2 [465.50 GB / 443.97 GB free]
  Total: 1 [465.50 GB] / in use: 1 [465.50 GB] / in no VG: 0 [0   ]
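If you prefer a more compact summary, vgs reports the same free-space figure per volume group (the output sketched here simply reflects the pvscan result above; your sizes will differ):

vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  an-lvm01   1   2   0 wz--n- 465.50G 443.97G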

On my nodes, each of which has a single 500GB drive, I've allocated only 20GB to dom0, so I've got over 440GB left free. I like to leave a bit of space unallocated because I never know where I might need it, so I will allocate an even 400GB to DRBD and keep the remaining ~44GB set aside for future growth. How much space you have left, and how you want to allocate it, is something you must settle based on your own needs.

Next, check that the name you will give to the new LV isn't used yet. The lvscan tool will show you what names have been used.

lvscan
  ACTIVE            '/dev/an-lvm01/lv01' [19.53 GB] inherit
  ACTIVE            '/dev/an-lvm01/lv00' [2.00 GB] inherit

We see from the above output that lv00 and lv01 are in use, so we will use lv02 for the DRBD partition. Of course, you can use drbd or pretty much any other name you want.

Now that we know that we want to create a 400GB logical volume called lv02, we can proceed.

Now to create the logical volume for the DRBD device on each node. The next two commands show what I need to call on my nodes. If you've used different names or have a different amount of free space, be sure to edit the following arguments to match your nodes.

On an-node01:

lvcreate -L 400G -n lv02 /dev/an-lvm01

On an-node02:

lvcreate -L 400G -n lv02 /dev/an-lvm02
  Logical volume "lv02" created

If I re-run lvscan now, I will see the new volume:

lvscan
  ACTIVE            '/dev/an-lvm01/lv01' [19.53 GB] inherit
  ACTIVE            '/dev/an-lvm01/lv00' [2.00 GB] inherit
  ACTIVE            '/dev/an-lvm01/lv02' [400.00 GB] inherit

We can now proceed with the DRBD setup!
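To preview how the DRBD setup will use these volumes, here is a minimal sketch of a resource definition (the resource name r0, the file name /etc/drbd.d/r0.res and the IP addresses are placeholders I've picked for illustration; the backing disks are the lv02 volumes created above):

# /etc/drbd.d/r0.res (included from /etc/drbd.conf)
resource r0 {
        on an-node01 {
                device    /dev/drbd0;
                disk      /dev/an-lvm01/lv02;
                address   192.168.1.71:7789;
                meta-disk internal;
        }
        on an-node02 {
                device    /dev/drbd0;
                disk      /dev/an-lvm02/lv02;
                address   192.168.1.72:7789;
                meta-disk internal;
        }
}

Once a real configuration like this is in place on both nodes, drbdadm create-md r0 followed by drbdadm up r0 on each node brings the resource online.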
