DRBD on Fedora 13

Warning: Until this warning is removed, do not use or trust this document. When complete and tested, this warning will be removed.

This article covers installing and configuring DRBD on a two-node Fedora 13 cluster.

Why DRBD?

DRBD is useful in small clusters as it provides real-time mirroring of data across two (or more) nodes. In two-node clusters, this can be used to host clustered LVM physical volumes. On these volumes you can create logical volumes to host GFS2 partitions, virtual machines, iSCSI and so forth.

Install

yum install drbd.x86_64 drbd-xen.x86_64
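
If you want to confirm that the packages installed cleanly before moving on, one quick check is to query the RPM database for the two packages installed above:

rpm -q drbd drbd-xen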

Compile the DRBD module for Xen dom0

If you are running the custom Xen dom0, you will need to build the DRBD module from the source RPM.

Install the build environment:

yum -y groupinstall "Development Libraries"
yum -y groupinstall "Development Tools"

Install the kernel headers and development library for the dom0 kernel:

Note: The following commands use --force to get past the fact that the headers for the stock 2.6.33 kernel are already installed, which makes RPM consider the older dom0 packages a conflict. Please proceed with caution.

rpm -ivh --force http://fedorapeople.org/~myoung/dom0/x86_64/kernel-headers-2.6.32.17-157.xendom0.fc12.x86_64.rpm http://fedorapeople.org/~myoung/dom0/x86_64/kernel-devel-2.6.32.17-157.xendom0.fc12.x86_64.rpm
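
If you want to verify that the dom0 headers and development packages are now in place, you can query them and compare against the kernel you are running (assuming you are already booted into the 2.6.32.17-157 dom0 kernel):

rpm -q kernel-headers kernel-devel
uname -r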

Download, prepare, build and install the source RPM:

rpm -ivh http://fedora.mirror.iweb.ca/releases/13/Everything/source/SRPMS/drbd-8.3.7-2.fc13.src.rpm
cd /root/rpmbuild/SPECS/
rpmbuild -bp drbd.spec 
cd /root/rpmbuild/BUILD/drbd-8.3.7/
./configure --enable-spec --with-km
cp /root/rpmbuild/BUILD/drbd-8.3.7/drbd-km.spec /root/rpmbuild/SPECS/
cd /root/rpmbuild/SPECS/
rpmbuild -ba drbd-km.spec
cd /root/rpmbuild/RPMS/x86_64
rpm -Uvh drbd-km-*
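
As a quick sanity check (assuming you are booted into the dom0 kernel that the module was just built against), you can try loading the module and confirming that the kernel sees it:

modprobe drbd
lsmod | grep drbd
cat /proc/drbd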

You should be good to go now!

Configure

We need to see how much space you have left on your LVM physical volume (PV). The pvscan tool will show you this.

pvscan
  PV /dev/sda2   VG vg_01   lvm2 [465.50 GiB / 424.44 GiB free]
  Total: 1 [465.50 GiB] / in use: 1 [465.50 GiB] / in no VG: 0 [0   ]

On my nodes, each of which has a single 500GB drive, I've allocated only 20GB to dom0, so I've got over 440GB left free. I like to leave a bit of space unallocated because I never know where I might need it, so I will allocate an even 400GB to DRBD and keep the remaining 44GB set aside for future growth. How much space you have left and how you want to allocate it is something you will need to decide based on your own needs.

Next, check that the name you will give to the new LV isn't used yet. The lvscan tool will show you what names have been used.

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit

We see from the above output that lv_root and lv_swap are used, so we will use lv_drbd for the DRBD partition. Of course, you can use pretty much any name you want.

Now that we know that we want to create a 400GB logical volume called lv_drbd, we can proceed.

Now to create the logical volume for the DRBD device on each node. The next two commands show what I need to run on my nodes. If you've used different names or have a different amount of free space, be sure to edit the arguments to match your nodes.

On an-node01:

lvcreate -L 400G -n lv_drbd /dev/vg_01
  Logical volume "lv_drbd" created
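
On an-node02, assuming it uses the same volume group name and has the same free space, the command is identical:

lvcreate -L 400G -n lv_drbd /dev/vg_01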

If I re-run lvscan now, I will see the new volume:

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_drbd' [400.00 GiB] inherit

We can now proceed with the DRBD setup!

Configuration Files

DRBD uses a global configuration file, /etc/drbd.d/global_common.conf, and one or more resource files. The resource files need to be created in the /etc/drbd.d/ directory and must have the suffix .res. For this example, we will create a single resource called r0 which we will configure in /etc/drbd.d/r0.res.

Full details on all of the drbd.conf configuration directives and arguments can be found in the drbd.conf man page (man drbd.conf).

Here are example files from a working server.

vim /etc/drbd.d/global_common.conf
global {
        usage-count yes;
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb;
        }

        disk {
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
        }

        net {
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        }

        syncer {
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
                rate 100M;
        }
}

This is the important part. It defines the resource itself, and it must reflect the IP addresses and the underlying storage devices of your own nodes.

vim /etc/drbd.d/r0.res
# This is for the initial LV

resource r0 {
        device    /dev/drbd0;

        net {
                allow-two-primaries;
        }

        startup {
                become-primary-on both;
        }

        meta-disk       internal;

        # The name below must match the output from `uname -n` on the node.
        on xenmaster001.iplink.net {
                # This must be the IP address of the interface on the storage network.
                address         10.249.0.1:7789;
                # This is the underlying partition to use for this resource on this node.
                disk            /dev/vg_01/lv_drbd;
        }

        # Repeat as above, but for the other node.
        on xenmaster002.iplink.net {
                address         10.249.0.2:7789;
                disk            /dev/vg_01/lv_drbd;
        }
}

These files must be copied to BOTH nodes and must match before you proceed.
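
One way to copy the files to the second node and confirm that DRBD can parse them is sketched below; the hostname an-node02 is just an example, so substitute the name of your own second node. The drbdadm dump command prints the resource as DRBD parsed it, which catches syntax errors:

scp /etc/drbd.d/global_common.conf /etc/drbd.d/r0.res root@an-node02:/etc/drbd.d/
drbdadm dump r0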


Once the configuration files match on both nodes, you can proceed.

 
