DRBD on Fedora 13

Warning: Until this warning is removed, do not use or trust this document. When complete and tested, this warning will be removed.

This article covers installing and configuring DRBD on a two-node Fedora 13 cluster.

Why DRBD?

DRBD is useful in small clusters as it provides real-time mirroring of data across two (or more) nodes. In two-node clusters, this can be used to host clustered LVM physical volumes. On these volumes you can create logical volumes to host GFS2 partitions, virtual machines, iSCSI and so forth.

Install

yum install drbd.x86_64 drbd-xen.x86_64
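
If you want to confirm what yum actually pulled in (an optional check; the exact dependency set may vary with your mirror), you can query RPM for the two packages named above:

rpm -q drbd drbd-xen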

Compile the DRBD module for Xen dom0

If you are running the custom Xen dom0, you will need to build the DRBD module from the source RPM.

Install the build environment:

yum -y groupinstall "Development Libraries"
yum -y groupinstall "Development Tools"

Install the kernel headers and development library for the dom0 kernel:

Note: The following command uses --force to get past the fact that the headers for the newer 2.6.33 kernel are already installed, which makes RPM treat these 2.6.32 packages as too old and flag them as conflicting. Please proceed with caution.
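
Before forcing anything, it is worth a quick sanity check of which kernel you are actually running and which header and devel packages are currently installed (the package names below are the stock Fedora ones):

uname -r
rpm -q kernel-headers kernel-devel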

rpm -ivh --force http://fedorapeople.org/~myoung/dom0/x86_64/kernel-headers-2.6.32.17-157.xendom0.fc12.x86_64.rpm http://fedorapeople.org/~myoung/dom0/x86_64/kernel-devel-2.6.32.17-157.xendom0.fc12.x86_64.rpm

Download, prepare, build and install the source RPM:

rpm -ivh http://fedora.mirror.iweb.ca/releases/13/Everything/source/SRPMS/drbd-8.3.7-2.fc13.src.rpm
cd /root/rpmbuild/SPECS/
rpmbuild -bp drbd.spec 
cd /root/rpmbuild/BUILD/drbd-8.3.7/
./configure --enable-spec --with-km
cp /root/rpmbuild/BUILD/drbd-8.3.7/drbd-km.spec /root/rpmbuild/SPECS/
cd /root/rpmbuild/SPECS/
rpmbuild -ba drbd-km.spec
cd /root/rpmbuild/RPMS/x86_64
rpm -Uvh drbd-km-*

You should be good to go now!
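
If you want to double-check the build before moving on, list the drbd-km package that was just installed and ask modinfo about the module. Note that modinfo will only find the module if you are booted into the dom0 kernel it was built for:

rpm -qa 'drbd-km*'
modinfo drbd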

Configure

We need to see how much space you have left on your LVM physical volume (PV). The pvscan tool will show you this.

pvscan
  PV /dev/sda2   VG vg_01   lvm2 [465.50 GiB / 424.44 GiB free]
  Total: 1 [465.50 GiB] / in use: 1 [465.50 GiB] / in no VG: 0 [0   ]

On my nodes, each of which has a single 500GB drive, I've allocated only 20GB to dom0, so I've got over 440GB left free. I like to leave a bit of space unallocated because I never know where I might need it, so I will allocate an even 400GB to DRBD and keep the rest set aside for future growth. How much space you have left, and how you want to allocate it, is something you must settle based on your own needs.
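
If you prefer an exact figure over reading the pvscan summary, vgs can print just the free space in the volume group. This assumes your volume group is named vg_01, as in the output above; on my nodes it reports the same 424.44 GiB that pvscan showed:

vgs --noheadings -o vg_free vg_01
  424.44g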

Next, check that the name you will give to the new LV isn't used yet. The lvscan tool will show you what names have been used.

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit

We see from the above output that lv_root and lv_swap are used, so we will use lv_drbd for the DRBD partition. Of course, you can use pretty much any name you want.

Now that we know that we want to create a 400GB logical volume called lv_drbd, we can proceed.

Now to create the logical volume for the DRBD device on each node. The following commands show what I need to call on my nodes. If you've used different names or have a different amount of free space, be sure to edit the arguments to match your nodes.

On an-node01:

lvcreate -L 400G -n lv_drbd /dev/vg_01
  Logical volume "lv_drbd" created
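
On an-node02 the call is identical, assuming it uses the same volume group name as shown above:

lvcreate -L 400G -n lv_drbd /dev/vg_01
  Logical volume "lv_drbd" created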

If I re-run lvscan now, I will see the new volume:

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_drbd' [400.00 GiB] inherit

We can now proceed with the DRBD setup!

Create or Edit /etc/drbd.conf

DRBD is controlled from a single /etc/drbd.conf configuration file that must be identical on both nodes. This file tells DRBD what devices to use on each node, what interface to use and so on.

Full details on all the drbd.conf configuration file directives and arguments can be found here.
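
On Fedora 13's DRBD 8.3 packages, /etc/drbd.conf itself normally contains nothing but a pair of include statements that pull in the files under /etc/drbd.d/, which is where we will do our editing. As a reference only (not a change you should need to make), the stock file looks roughly like this:

# /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";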

vim /etc/drbd.d/global_common.conf
global {
	usage-count yes;
}

common {
	protocol C;
	
	syncer {
		rate 33M;
	}
}


vim /etc/drbd.d/r0.res
resource r0 {
	device    /dev/drbd0;
	
	net {
		allow-two-primaries;
	}
	
	startup { 
		become-primary-on both;
	}

	meta-disk	internal;
	
	on an-node01.alteeve.com {
		address		10.0.0.71:7789;
		disk		/dev/vg_01/lv_drbd;
	}
	
	on an-node02.alteeve.com {
		address		10.0.0.72:7789;
		disk		/dev/vg_01/lv_drbd;
	}
}

The main things to note are:

  • The on argument must match the name returned by the uname -n shell call (you can verify this as shown below).
  • Protocol C tells DRBD not to report a write to the OS as complete until it has completed on both nodes. This affects performance, but it is required for the later step where we will configure cluster-aware LVM.
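
To verify the first point, run the following on each node and make sure the output exactly matches the host name used in that node's on section of r0.res; on an-node01 it should return an-node01.alteeve.com:

uname -n
  an-node01.alteeve.com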

With those files in place on both nodes, run the following command and make sure the output reflects the contents of the files above, just in a somewhat altered syntax. If you get an error, address it before proceeding.

drbdadm dump

If it's all good, you should see something like this:

  --==  Thank you for participating in the global usage survey  ==--
The server's response is:

you are the 10464th user to install this version
# /etc/drbd.conf
common {
    protocol               C;
    syncer {
        rate             33M;
    }
}

# resource r0 on an-node01.alteeve.com: not ignored, not stacked
resource r0 {
    on an-node01.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/vg_01/lv_drbd;
        address          ipv4 10.0.0.71:7789;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/vg_01/lv_drbd;
        address          ipv4 10.0.0.72:7789;
        meta-disk        internal;
    }
    net {
        allow-two-primaries;
    }
    startup {
        become-primary-on both;
    }
}

Once you see this, you can proceed.

 
