DRBD on Fedora 13

Warning: Until this warning is removed, do not use or trust this document. When complete and tested, this warning will be removed.

This article covers installing and configuring DRBD on a two-node Fedora 13 cluster.

Why DRBD?

DRBD is useful in small clusters as it provides real-time mirroring of data across two (or more) nodes. In two-node clusters, this can be used to host clustered LVM physical volumes. On these volumes you can create logical volumes to host GFS2 partitions, virtual machines, iSCSI and so forth.

Install

yum install drbd.x86_64 drbd-xen.x86_64

Compile the DRBD module for Xen dom0

If you are running the custom Xen dom0, you will probably need to build the DRBD kernel module from the source RPM. You can try the two pre-built RPMs below, but be aware that if the dom0 kernel gets updated, you will need to rebuild them using the steps that follow.

You can install the two pre-built RPMs with this command:

rpm -ivh https://alteeve.com/files/an-cluster/drbd-km-2.6.32.17_157.xendom0.fc12.x86_64-8.3.7-12.fc13.x86_64.rpm https://alteeve.com/files/an-cluster/drbd-km-debuginfo-8.3.7-12.fc13.x86_64.rpm

If the above RPMs don't work, or if the dom0 kernel you are using differs in any way, please continue.

Install the build environment:

yum -y groupinstall "Development Libraries"
yum -y groupinstall "Development Tools"

Install the kernel headers and development library for the dom0 kernel:

Note: The following commands use --force to get past the fact that the headers for the stock 2.6.33 kernel are already installed, which makes RPM think these packages are older and will conflict. Please proceed with caution.

rpm -ivh --force http://fedorapeople.org/~myoung/dom0/x86_64/kernel-headers-2.6.32.17-157.xendom0.fc12.x86_64.rpm http://fedorapeople.org/~myoung/dom0/x86_64/kernel-devel-2.6.32.17-157.xendom0.fc12.x86_64.rpm

Download, prepare, build and install the source RPM:

rpm -ivh http://fedora.mirror.iweb.ca/releases/13/Everything/source/SRPMS/drbd-8.3.7-2.fc13.src.rpm
cd /root/rpmbuild/SPECS/
rpmbuild -bp drbd.spec 
cd /root/rpmbuild/BUILD/drbd-8.3.7/
./configure --enable-spec --with-km
cp /root/rpmbuild/BUILD/drbd-8.3.7/drbd-km.spec /root/rpmbuild/SPECS/
cd /root/rpmbuild/SPECS/
rpmbuild -ba drbd-km.spec
cd /root/rpmbuild/RPMS/x86_64
rpm -Uvh drbd-km-*

You should be good to go now!
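
To confirm that the module was built for the kernel you are actually running, a quick sanity check like the following should suffice once you have booted into the dom0 kernel. The vermagic value reported by modinfo should match the output of uname -r.

uname -r
modinfo drbd | grep vermagic
modprobe drbd
lsmod | grep drbd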

Configure

We need to see how much space you have left on your LVM PV. The pvscan tool will show you this.

pvscan
  PV /dev/sda2   VG vg_01   lvm2 [465.50 GiB / 424.44 GiB free]
  Total: 1 [465.50 GiB] / in use: 1 [465.50 GiB] / in no VG: 0 [0   ]

On my nodes, each of which has a single 500GB drive, I've allocated only 20GB to dom0, so I've got just over 424GiB left free. I like to leave a bit of space unallocated because I never know where I might need it, so I will allocate an even 400GB to DRBD and keep the remaining ~24GiB set aside for future growth. How much space you have left, and how you want to allocate it, is something you will have to settle based on your own needs.
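
If you want to double-check how much room is actually left in the volume group before deciding, vgdisplay will report the free space. Here vg_01 is the volume group name from the pvscan output above; substitute your own.

vgdisplay vg_01 | grep -i free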

Next, check that the name you will give to the new LV isn't used yet. The lvscan tool will show you what names have been used.

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit

We see from the above output that lv_root and lv_swap are used, so we will use lv_drbd for the DRBD partition. Of course, you can use pretty much any name you want.

Now that we know that we want to create a 400GB logical volume called lv_drbd, we can proceed.

Now to create the logical volume for the DRBD device on each node. The following commands show what I need to call on my nodes. If you've used different names or have a different amount of free space, be sure to edit the arguments to match your nodes.

On an-node01:

lvcreate -L 400G -n lv_drbd /dev/vg_01
  Logical volume "lv_drbd" created

If I re-run lvscan now, I will see the new volume:

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_drbd' [400.00 GiB] inherit

We can now proceed with the DRBD setup!

Configuration Files

DRBD uses a global configuration file, /etc/drbd.d/global_common.conf, and one or more resource files. The resource files need to be created in the /etc/drbd.d/ directory and must have the suffix .res. For this example, we will create a single resource called r0 which we will configure in /etc/drbd.d/r0.res.

Full details on all of the drbd.conf configuration file directives and arguments can be found here. Note that the linked page does not cover this newer configuration format; please see Novell's link for that.

These are example files from a working server.

/etc/drbd.d/global_common.conf

vim /etc/drbd.d/global_common.conf
global {
        usage-count yes;
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb;
        }

        disk {
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
        }

        net {
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        }

        syncer {
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
                rate 100M;
        }
}

/etc/drbd.d/r0.res

This is the important part. This defines the resource to use, and it must reflect each node's IP address and the underlying device to use.

vim /etc/drbd.d/r0.res
# This is for the initial LV

resource r0 {
        device    /dev/drbd0;

        net {
                allow-two-primaries;
        }

        startup {
                become-primary-on both;
        }

        meta-disk       internal;

        # The name below must match the output from `uname -n` on the node.
        on xenmaster001.iplink.net {
                # This must be the IP address of the interface on the storage network.
                address         10.249.0.1:7789;
                # This is the underlying partition to use for this resource on this node.
                disk            /dev/vg_01/lv_drbd;
        }

        # Repeat as above, but for the other node.
        on xenmaster002.iplink.net {
                address         10.249.0.2:7789;
                disk            /dev/vg_01/lv_drbd;
        }
}

These files must be copied to BOTH nodes and must match before you proceed.
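
A simple way to keep them in sync is to edit the files on one node and push them to the other; the hostname an-node02 below is just a placeholder for your second node. After copying, running drbdadm dump on both nodes is a handy sanity check, as it parses the configuration and will complain about any syntax errors.

rsync -av /etc/drbd.d/ root@an-node02:/etc/drbd.d/
drbdadm dump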

Starting The DRBD Resource

For the rest of this section, pay attention to whether a command block is marked

  • Primary
  • Secondary
  • Both

These indicate which node to run the following commands on. There is no functional difference between the two nodes, so simply choose one to be Primary; the other will be Secondary. Once you've chosen which is which, be consistent about which node you run each set of commands on. Of course, if a command block is preceded by Both, run it on both nodes.

Both

/etc/init.d/drbd start

FILL THIS IN.
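
This part of the document is still to be written. Until it is, the following is a rough sketch of the usual initialization sequence for a brand new DRBD 8.3 resource such as r0; treat it as a guideline rather than the finished procedure.

# Both: create the metadata for the new resource and bring it up.
drbdadm create-md r0
drbdadm up r0

# Both: watch the state; the nodes should reach Connected as Secondary/Secondary,
# with the data marked Inconsistent/Inconsistent.
cat /proc/drbd

# Primary only: declare this node's data as the good copy and start the initial sync.
drbdadm -- --overwrite-data-of-peer primary r0

# Secondary: because we set allow-two-primaries, this node can be promoted as well
# once the initial sync is underway (or complete).
drbdadm primary r0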

Setting up CLVM

The goal of DRBD in the cluster is to provide clustered LVM to the nodes. This is done by turning the DRBD partition into an LVM physical volume.

So now we will "stack" LVM by creating a PV on top of the new DRBD partition, /dev/drbd0, that we created in the previous step. Since this new LVM PV will exist on top of the shared DRBD partition, whatever gets written to its logical volumes will be immediately available on either node, regardless of which node actually initiated the write.

This capability is the underlying reason for creating this cluster: neither machine is a single point of failure, so if one machine dies, anything on top of the DRBD partition will still be available. When the failed machine returns, the surviving node will have a record of which blocks changed while its peer was gone and can use it to quickly re-sync the returning node.

Making LVM Cluster-Aware

Normally, LVM runs on a single server. This means that at any time, LVM can write data to the underlying drive without worrying that any other machine might change anything. In a cluster, this isn't the case: the other node could try to write to the shared storage at the same time, so the nodes need to enable "locking" to prevent the two of them from working on the same bit of data at once.

The process of enabling this locking is known as making LVM "cluster-aware".

Enabling Cluster Locking

LVM has a built-in tool called lvmconf that can be used to enable cluster locking. It is provided as part of the lvm2-cluster package, so install that first:

yum install lvm2-cluster.x86_64

Before you continue, we want to tell clvmd that it should start after the drbd daemon. To do this, edit its /etc/init.d/clvmd init script and add drbd to its start requirements.

vim /etc/init.d/clvmd

Change the header by adding drbd to Required-Start and Required-Stop:

### BEGIN INIT INFO
# Provides: clvmd
# Required-Start: $local_fs drbd
# Required-Stop: $local_fs drbd
# Default-Start:
# Default-Stop: 0 1 6
# Short-Description: Clustered LVM Daemon
### END INIT INFO

Change the start order by removing and re-adding all cluster-related daemons:

Note: This list has extra entries to make sure all of the relevant daemons end up sorted correctly when re-added. If you do not yet have some of the daemons listed below installed, simply skip them.

chkconfig xend off; chkconfig cman off; chkconfig drbd off; chkconfig clvmd off; chkconfig xendomains off
chkconfig xend on; chkconfig cman on; chkconfig drbd on; chkconfig clvmd on; chkconfig xendomains on

Now to enable cluster awareness in LVM, simply run:

lvmconf --enable-cluster

There won't be any output from that command.
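
If you want to verify the change, lvmconf edits /etc/lvm/lvm.conf; after running it, the active (uncommented) locking_type setting in that file should now read 3, which tells LVM to use the clustered locking provided by clvmd.

grep locking_type /etc/lvm/lvm.conf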

By default, clvmd, the cluster lvm daemon, is stopped and not set to run on boot. Now that we've enabled LVM locking, we need to start it:

/etc/init.d/clvmd status
clvmd is stopped
active volumes: lv_drbd lv_root lv_swap

As expected, it is stopped, so let's start it:

/etc/init.d/clvmd start
Stopping clvm:                                             [  OK  ]
Starting clvmd:                                            [  OK  ]
Activating VGs:   3 logical volume(s) in volume group "an-lvm01" now active
                                                           [  OK  ]

Creating a new PV using the DRBD Partition

We can now proceed with setting up the new DRBD-based LVM physical volume. Once the PV is created, we can create a new volume group and start allocating space to logical volumes.

Note: As we will be using our DRBD device, which is a shared block device, most of the following commands only need to be run on one node. Once the block device changes in any way, those changes appear on the other node almost instantly. For this reason, unless explicitly told otherwise, run the following commands on one node only.

To set up the DRBD partition as an LVM PV, run pvcreate:

pvcreate /dev/drbd0
  Physical volume "/dev/drbd0" successfully created

Now, on both nodes, check that the new physical volume is visible by using pvdisplay:

pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vg_01
  PV Size               465.52 GiB / not usable 15.87 MiB
  Allocatable           yes 
  PE Size               32.00 MiB
  Total PE              14896
  Free PE               782
  Allocated PE          14114
  PV UUID               BuR5uh-R74O-kACb-S1YK-MHxd-9O69-yo1EKW
   
  "/dev/drbd0" is a new physical volume of "399.99 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/drbd0
  VG Name               
  PV Size               399.99 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               LYOE1B-22fk-LfOn-pu9v-9lhG-g8vx-cjBnsY

If you see PV Name /dev/drbd0 on both nodes, then your DRBD setup and LVM configuration changes are working perfectly!
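
If the new PV doesn't show up on the second node right away, running pvscan there should refresh LVM's view of the block devices before you dig any deeper.

pvscan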

Creating a VG on the new PV

Now we need to create the volume group using the vgcreate command:

vgcreate -c y drbd_vg0 /dev/drbd0
  Clustered volume group "drbd_vg0" successfully created

Now we'll check that the new VG is visible on both nodes using vgdisplay:

vgdisplay
  --- Volume group ---
  VG Name               vg_01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               465.50 GiB
  PE Size               32.00 MiB
  Total PE              14896
  Alloc PE / Size       14114 / 441.06 GiB
  Free  PE / Size       782 / 24.44 GiB
  VG UUID               YbHSKn-x64P-oEbe-8R0S-3PjZ-UNiR-gdEh6T
   
  --- Volume group ---
  VG Name               drbd_vg0
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               399.98 GiB
  PE Size               4.00 MiB
  Total PE              102396
  Alloc PE / Size       0 / 0   
  Free  PE / Size       102396 / 399.98 GiB
  VG UUID               NK00Or-t9Z7-9YHz-sDC8-VvBT-NPeg-glfLwy

If the new VG is visible on both nodes, we are ready to create our first logical volume using the lvcreate tool.

Creating the First LV on the new VG

Now we'll create a simple 20 GiB logical volume. We will use it as a shared GFS2 store for source ISOs (and Xen domU configuration files) later on.

lvcreate -L 20G -n iso_store drbd_vg0
  Logical volume "iso_store" created

As before, we will check that the new logical volume is visible from both nodes by using the lvdisplay command:

lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg_01/lv_root
  VG Name                vg_01
  LV UUID                dl6jxD-asN7-bGYL-H4yO-op6q-Nt6y-RxkPnt
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                39.06 GiB
  Current LE             1250
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Name                /dev/vg_01/lv_swap
  VG Name                vg_01
  LV UUID                VL3G06-Ob0o-sEB9-qNX3-rIAJ-nzW5-Auf64W
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             64
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Name                /dev/vg_01/lv_drbd
  VG Name                vg_01
  LV UUID                SRT3N5-kA84-I3Be-LI20-253s-qTGT-fuFPfr
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                400.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Name                /dev/drbd_vg0/iso_store
  VG Name                drbd_vg0
  LV UUID                H0M5fL-Wxb6-o8cb-Wb30-Rla3-fwzp-tzdR62
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

The last entry is the new logical volume.
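
If you just want a quick confirmation from the other node without wading through the full lvdisplay output, the lvs tool gives a compact summary of the volume group:

lvs drbd_vg0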

Create the Shared GFS FileSystem

GFS2 is a cluster-aware file system that can be mounted on two or more nodes at the same time. We will use it as a place to store ISOs that we'll use to provision our virtual machines.

Start by installing the GFS2 tools:

yum install gfs2-utils.x86_64

As before, modify the gfs2 init script to start after clvmd, and then modify xendomains to start after gfs2. Finally, use chkconfig to reconfigure the boot order:

chkconfig xend off; chkconfig cman off; chkconfig drbd off; chkconfig clvmd off; chkconfig xendomains off; chkconfig gfs2 off
chkconfig xend on; chkconfig cman on; chkconfig drbd on; chkconfig clvmd on; chkconfig xendomains on; chkconfig gfs2 on
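
For reference, the relevant header edits would look something like the following; the exact existing lines in your init scripts may differ, the point is simply to add clvmd and gfs2 to the Required-Start and Required-Stop lines, just as we added drbd to clvmd earlier.

# In /etc/init.d/gfs2:
# Required-Start: $local_fs clvmd
# Required-Stop:  $local_fs clvmd

# In /etc/init.d/xendomains:
# Required-Start: $local_fs gfs2
# Required-Stop:  $local_fs gfs2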

The following example is designed for the cluster used in this paper.

  • If you have more than 2 nodes, increase the -j 2 to the number of nodes you want to mount this file system on.
  • If your cluster is named something other than an-cluster (as set in the cluster.conf file), change -t an-cluster:iso_store to match your cluster's name. The iso_store part can be whatever you like, but it must be unique within the cluster. I tend to use a name that matches the LV name, but this is my own preference and is not required.

To format the partition run:

mkfs.gfs2 -p lock_dlm -j 2 -t an-cluster:iso_store /dev/drbd_vg0/iso_store

If you are prompted, press y to proceed.

Once the format completes, you can mount /dev/drbd_vg0/iso_store as you would a normal file system.

Both:

To complete the example, let's mount the GFS2 partition we just made on /shared.

mkdir /shared
mount /dev/drbd_vg0/iso_store /shared

Done!
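
If you would like the partition mounted automatically at boot, the gfs2 init script we enabled above mounts any gfs2 entries it finds in /etc/fstab, so an entry like this on both nodes should do the trick (adjust the device and mount point if yours differ; noatime is optional, but is a common tweak for GFS2):

/dev/drbd_vg0/iso_store   /shared   gfs2   defaults,noatime   0 0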

Growing a GFS2 Partition

To grow a GFS2 partition, you must know where it is mounted. You cannot grow an unmounted GFS2 partition, as odd as that may seem at first. Also, you only need to run the grow commands from one node; once completed, all nodes will see and use the new space automatically.

This requires two steps to complete:

  1. Extend the underlying LVM logical volume
  2. Grow the actual GFS2 partition

Extend the LVM LV

To keep things simple, we'll just use some of the free space we left on our /dev/drbd0 LVM physical volume. If you need to add more storage to your LVM first, please follow the instructions in the article: "Adding Space to an LVM" before proceeding.

Let's add 50GB to our GFS2 logical volume /dev/drbd_vg0/iso_store from the /dev/drbd0 physical volume, which we know is available because we left more than that free back when we first set up our LVM. To actually add the space, we use the lvextend command:

lvextend -L +50G /dev/drbd_vg0/iso_store /dev/drbd0

Which should return:

  Extending logical volume iso_store to 70.00 GB
  Logical volume iso_store successfully resized

If we run lvdisplay /dev/drbd_vg0/iso_store now, we should see the extra space.

  --- Logical volume ---
  LV Name                /dev/drbd_vg0/iso_store
  VG Name                drbd_vg0
  LV UUID                svJx35-KDXK-ojD2-UDAA-Ah9t-UgUl-ijekhf
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                70.00 GB
  Current LE             17920
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

You're now ready to proceed.

Grow The GFS2 Partition

This step is pretty simple, but you need to enter the commands exactly. Also, you'll want to do a dry-run first and address any resulting errors before issuing the final gfs2_grow command.

To get the exact name to use when calling gfs2_grow, run the following command:

gfs2_tool df
/shared:
  SB lock proto = "lock_dlm"
  SB lock table = "an-cluster:iso_store"
  SB ondisk format = 1801
  SB multihost format = 1900
  Block size = 4096
  Journals = 2
  Resource Groups = 80
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "an-cluster:iso_store"
  Mounted host data = "jid=1:id=196610:first=0"
  Journal number = 1
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE

  Type           Total Blocks   Used Blocks    Free Blocks    use%           
  ------------------------------------------------------------------------
  data           5242304        1773818        3468486        34%
  inodes         3468580        94             3468486        0%

From this output, we know that GFS2 expects the name "/shared". Even adding something as simple as a trailing slash will not work. The program we will use is gfs2_grow, with the -T switch to run the command as a test first and work out any possible errors.

For example, if you added the trailing slash, this is the kind of error you would see:

Bad command:

gfs2_grow -T /shared/
GFS Filesystem /shared/ not found

Once we get it right, it will look like this:

gfs2_grow -T /shared
(Test mode--File system will not be changed)
FS: Mount Point: /shared
FS: Device:      /dev/mapper/drbd_vg0-iso_store
FS: Size:        5242878 (0x4ffffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       18350080 (0x1180000)
The file system grew by 51200MB.
gfs2_grow complete.

This looks good! We're now ready to re-run the command without the -T switch:

gfs2_grow /shared
FS: Mount Point: /shared
FS: Device:      /dev/mapper/drbd_vg0-iso_store
FS: Size:        5242878 (0x4ffffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       18350080 (0x1180000)
The file system grew by 51200MB.
gfs2_grow complete.

You can check that the new space is available on both nodes now using a simple call like df -h.
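
For example, running the following on each node should now report the larger size:

df -h /shared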


 
