Sheepdog on EL6

{{howto_header}}

{{warning|1=This is essentially just a compilation of notes I made while playing with [http://www.osrg.net/sheepdog/ NTT's Sheepdog]. It may or may not become a complete tutorial someday.}}


Installing, configuring and using Sheepdog in [[EL6]].
= Base Cluster Config =
{{note|1=Replace <span class="code">bindnetaddr</span>'s value with the [[IP]] address of the interface you want the cluster communication on ([[BCN]], generally).}}
<source lang="bash">
vim /etc/corosync/corosync.conf
</source>
<source lang="xml">
# Please read the corosync.conf 5 manual page
compatibility: whitetank
totem {
  version: 2
  secauth: off
  threads: 0
  # Note, fail_recv_const is only needed if you're
  # having problems with corosync crashing under
  # heavy sheepdog traffic. This crash is due to
  # delayed/resent/misordered multicast packets.
  # fail_recv_const: 5000
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.3.{3..7}
    mcastaddr: 226.94.1.1
    mcastport: 5405
  }
}
logging {
  fileline: off
  to_stderr: no
  to_logfile: yes
  to_syslog: yes
  # the pathname of the log file
  logfile: /var/log/cluster/corosync.log
  debug: off
  timestamp: on
  logger_subsys {
    subsys: AMF
    debug: off
  }
}
amf {
  mode: disabled
}
</source>
Now start <span class="code">corosync</span> on the first node and then check <span class="code">corosync-objctl | grep member</span> to make sure it came up.
<source lang="bash">
/etc/init.d/corosync start
</source>
<source lang="text">
Starting Corosync Cluster Engine (corosync):              [  OK  ]
</source>
<source lang="bash">
corosync-objctl |grep member
</source>
<source lang="text">
runtime.totem.pg.mrp.srp.members.1224976576.ip=r(0) ip(192.168.3.73)
runtime.totem.pg.mrp.srp.members.1224976576.join_count=1
runtime.totem.pg.mrp.srp.members.1224976576.status=joined
</source>
Now start <span class="code">corosync</span> on the other nodes. You should then see the other nodes, five total in my case, come online.
<source lang="bash">
corosync-objctl |grep member
</source>
<source lang="text">
runtime.totem.pg.mrp.srp.members.1224976576.ip=r(0) ip(192.168.3.73)
runtime.totem.pg.mrp.srp.members.1224976576.join_count=1
runtime.totem.pg.mrp.srp.members.1224976576.status=joined
runtime.totem.pg.mrp.srp.members.1241753792.ip=r(0) ip(192.168.3.74)
runtime.totem.pg.mrp.srp.members.1241753792.join_count=1
runtime.totem.pg.mrp.srp.members.1241753792.status=joined
runtime.totem.pg.mrp.srp.members.1258531008.ip=r(0) ip(192.168.3.75)
runtime.totem.pg.mrp.srp.members.1258531008.join_count=1
runtime.totem.pg.mrp.srp.members.1258531008.status=joined
runtime.totem.pg.mrp.srp.members.1275308224.ip=r(0) ip(192.168.3.76)
runtime.totem.pg.mrp.srp.members.1275308224.join_count=1
runtime.totem.pg.mrp.srp.members.1275308224.status=joined
runtime.totem.pg.mrp.srp.members.1292085440.ip=r(0) ip(192.168.3.77)
runtime.totem.pg.mrp.srp.members.1292085440.join_count=1
runtime.totem.pg.mrp.srp.members.1292085440.status=joined
</source>
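Rather than eyeballing the list, you can count the joined members. A small sketch, assuming the <span class="code">corosync-objctl</span> output format shown above; <span class="code">count_joined</span> is a hypothetical helper:

<source lang="bash">
# Count members reporting status=joined in corosync's object database.
count_joined() {
    grep -c "status=joined"
}

# On a live node, this should print 5 once all five nodes are up:
#   corosync-objctl | count_joined
</source>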


= Install =
Done as root (ya, I know...) on CentOS 6.0;

Install dependencies;


<source lang="bash">
yum -y install nss-devel zlib-devel
</source>


Corosync from source;


<source lang="bash">
yum -y remove corosync corosynclib corosynclib-devel
cd ~
git clone git://github.com/corosync/corosync.git
cd corosync
git checkout -b flatiron origin/flatiron
./autogen.sh
./configure --enable-nss
make install
cd ~
</source>

QEMU (needs >= v0.13);
<source lang="bash">
cd ~
yum -y remove libvirt
git clone git://git.sv.gnu.org/qemu.git
cd qemu
./configure
make install
cd ~
</source>


Sheepdog (we don't build the RPMs due to missing dependencies);


<source lang="bash">
cd ~
git clone git://github.com/collie/sheepdog.git
cd sheepdog
./autogen.sh
./configure
make install
cd ~
</source>
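A quick way to confirm the three builds actually landed on the system; <span class="code">check_bins</span> is a hypothetical helper, and the binary names assume <span class="code">make install</span> put them on <span class="code">$PATH</span>:

<source lang="bash">
# Print the names of any requested binaries that are not on $PATH.
check_bins() {
    local missing=""
    for bin in "$@"; do
        command -v "$bin" >/dev/null 2>&1 || missing="${missing} ${bin}"
    done
    echo "${missing# }"
}

# After the builds above, this should print nothing:
#   check_bins corosync qemu-system-x86_64 sheep collie
</source>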


= Using Sheepdog =
You need a "store" directory. At this point in development, the <span class="code">/etc/init.d/sheepdog</span> initialization script is hard-coded to use <span class="code">/var/lib/sheepdog</span>, so that is what we will use. I formatted <span class="code">/dev/sda5</span> as an <span class="code">[[ext4]]</span> partition for this store.
<source lang="bash">
mkfs.ext4 /dev/sda5
mkdir /var/lib/sheepdog
tune2fs -l /dev/sda5 | grep "Filesystem UUID" | sed -e "s/^Filesystem UUID:\s*\(.*\)/UUID=\L\1\E\t\/var\/lib\/sheepdog\text4\tdefaults,user_xattr\t1 3/" >> /etc/fstab
mount /var/lib/sheepdog
mount
</source>
<source lang="text">
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/sda5 on /var/lib/sheepdog type ext4 (rw,user_xattr)
</source>
Note the <span class="code">(rw,user_xattr)</span>; this is important, as sheepdog uses extended attributes on the store.
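The <span class="code">sed</span> one-liner above is fragile because every <span class="code">/</span> in the replacement must be escaped. An alternative sketch, assuming <span class="code">blkid</span> (from util-linux) is available; <span class="code">fstab_line</span> is a hypothetical helper:

<source lang="bash">
# Build an fstab entry for the sheepdog store without sed escaping.
fstab_line() {
    local uuid="$1" mountpoint="$2"
    printf 'UUID=%s\t%s\text4\tdefaults,user_xattr\t1 3\n' "$uuid" "$mountpoint"
}

# On the node itself you would run:
#   fstab_line "$(blkid -s UUID -o value /dev/sda5)" /var/lib/sheepdog >> /etc/fstab
</source>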


Now create the store on all nodes.


<source lang="bash">
sheep /var/lib/sheepdog
</source>
 
You can verify which nodes are in the sheepdog cluster using the following command;

<source lang="bash">
</source>

Tell it to create five copies.

<source lang="bash">
</source>
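The two command blocks above were left empty in these notes. In the sheepdog 0.x tree, cluster administration went through the <span class="code">collie</span> tool; the exact invocations below are assumptions, so check them against <span class="code">collie --help</span> on your build:

<source lang="bash">
# Assumption: this sheepdog build installs the 'collie' admin tool.
copies=5   # one copy per node in the five-node cluster above

list_cmd="collie node list"
format_cmd="collie cluster format --copies=${copies}"

# On a live node you would run:
#   collie node list                    # show sheepdog cluster membership
#   collie cluster format --copies=5    # keep five copies of every object
echo "${format_cmd}"
</source>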
{{footer}}

Latest revision as of 05:21, 24 November 2011
