DRBD on Fedora 13

Warning: Until this warning is removed, do not use or trust this document. When complete and tested, this warning will be removed.

This article covers installing and configuring DRBD on a two-node Fedora 13 cluster.

Why DRBD?

DRBD is useful in small clusters as it provides real-time mirroring of data across two (or more) nodes. In a two-node cluster, this can be used to host clustered LVM physical volumes. On these volumes you can create logical volumes to host GFS2 partitions, virtual machines, iSCSI targets and so forth.

Install

yum install drbd.x86_64 drbd-xen.x86_64
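
To verify that both packages installed cleanly, you can query them with rpm:

rpm -q drbd drbd-xen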

Configure

We need to see how much space you have left on your LVM physical volume (PV). The pvscan tool will show you this.

pvscan
  PV /dev/sda2   VG vg_01   lvm2 [465.50 GiB / 424.44 GiB free]
  Total: 1 [465.50 GiB] / in use: 1 [465.50 GiB] / in no VG: 0 [0   ]
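
If you prefer a per-volume group summary, the vgs tool reports the same free space; on my nodes it looks something like this:

vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  vg_01   1   2   0 wz--n- 465.50g 424.44g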

On my nodes, each of which has a single 500GB drive, I've allocated only about 41GB to dom0, so I've got over 420GB left free. I like to leave a bit of space unallocated because I never know where I might need it, so I will allocate an even 400GB to DRBD and keep the remaining ~24GB set aside for future growth. How much space you have left, and how you want to allocate it, is something you must settle based on your own needs.

Next, check that the name you will give to the new LV isn't used yet. The lvscan tool will show you what names have been used.

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit

We see from the above output that lv_root and lv_swap are used, so we will use lv_drbd for the DRBD partition. Of course, you can use pretty much any name you want.

Now that we know that we want to create a 400GB logical volume called lv_drbd, we can proceed.

Now to create the logical volume for the DRBD device on each node. The next two commands show what I need to call on my nodes. If you've used different names or have a different amount of free space, be sure to edit the following arguments to match your nodes.

On an-node01:

lvcreate -L 400G -n lv_drbd /dev/vg_01
  Logical volume "lv_drbd" created
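
On an-node02, assuming it is partitioned the same way (a matching vg_01 volume group), the identical command applies:

lvcreate -L 400G -n lv_drbd /dev/vg_01
  Logical volume "lv_drbd" created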

If I re-run lvscan now, I will see the new volume:

lvscan
  ACTIVE            '/dev/vg_01/lv_root' [39.06 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_swap' [2.00 GiB] inherit
  ACTIVE            '/dev/vg_01/lv_drbd' [400.00 GiB] inherit

We can now proceed with the DRBD setup!

Create or Edit /etc/drbd.conf

DRBD is controlled from a single /etc/drbd.conf configuration file that must be identical on both nodes. This file tells DRBD what devices to use on each node, what interface to use and so on.

Full details on all the drbd.conf configuration file directives and arguments can be found at http://www.drbd.org/users-guide/re-drbdconf.html.
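
Note that, despite the reference to a single /etc/drbd.conf, recent DRBD packages split the configuration into files under /etc/drbd.d/ which drbd.conf simply pulls in. The stock /etc/drbd.conf generally looks something like this:

include "drbd.d/global_common.conf";
include "drbd.d/*.res";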

vim /etc/drbd.d/global_common.conf
global {
	usage-count yes;
}

common {
	protocol C;
	
	syncer {
		rate 33M;
	}
}
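
The rate 33M value is not arbitrary. The DRBD user's guide suggests setting the syncer rate to roughly 30% of your available replication bandwidth; a dedicated gigabit link moves about 110 MB/sec, and 110 * 0.3 ≈ 33, hence 33M. Adjust this to match your own storage network.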


vim /etc/drbd.d/r0.res
resource r0 {
	device    /dev/drbd0;
	
	net {
		allow-two-primaries;
	}
	
	startup { 
		become-primary-on both;
	}

	meta-disk	internal;
	
	on an-node01.alteeve.com {
		address		10.0.0.71:7789;
		disk		/dev/an-lvm01/lv02;
	}
	
	on an-node02.alteeve.com {
		address		10.0.0.72:7789;
		disk		/dev/an-lvm02/lv02;
	}
}

The main things to note are:

  • The on argument must match the name returned by the 'uname -n' shell call (see the quick check below).
  • 'Protocol C' tells DRBD not to report a write as complete until both nodes have committed it to disk. This affects performance, but it is required for the later step when we will configure cluster-aware LVM.
  • The disk paths in this example resource (/dev/an-lvm01/lv02 and /dev/an-lvm02/lv02) differ from the lv_drbd volume created earlier; if you followed the steps above, use /dev/vg_01/lv_drbd as the disk on both nodes.
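
You can run the name check quickly; on the first node in this example it should return:

uname -n
  an-node01.alteeve.com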

With these files in place on both nodes, run the following command and make sure the output reflects the contents of the files above, albeit in a somewhat altered syntax. If you get an error, address it before proceeding.

drbdadm dump

If it's all good, you should see something like this (the usage survey banner at the top appears because of the usage-count yes option set earlier):

  --==  Thank you for participating in the global usage survey  ==--
The server's response is:

you are the 10464th user to install this version
# /etc/drbd.conf
common {
    protocol               C;
    syncer {
        rate             33M;
    }
}

# resource r0 on an-node01.alteeve.com: not ignored, not stacked
resource r0 {
    on an-node01.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/an-lvm01/lv02;
        address          ipv4 10.0.0.71:7789;
        meta-disk        internal;
    }
    on an-node02.alteeve.com {
        device           /dev/drbd0 minor 0;
        disk             /dev/an-lvm02/lv02;
        address          ipv4 10.0.0.72:7789;
        meta-disk        internal;
    }
    net {
        allow-two-primaries;
    }
    startup {
        become-primary-on both;
    }
}

Once you see this, you can proceed.
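
Validation only parses the configuration; the DRBD device itself has not been created yet. The usual next step, sketched here for the r0 resource defined above, is to initialize the DRBD metadata on both nodes:

drbdadm create-md r0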

 
