RHCS Release Manager: Difference between revisions



== Testing v3.1.8 ==

The machines being used to build the cluster need to have the following packages installed;

<source lang="bash">
yum -y install fence-agents resource-agents modcluster ricci
</source>
 
Now make sure that <span class="code">ricci</span> is running and that <span class="code">selinux</span>, <span class="code">iptables</span> and <span class="code">ip6tables</span> are off.
 
<source lang="bash">
sed -e "s/SELINUX=enforcing/SELINUX=disabled/" -i /etc/selinux/config
systemctl disable ip6tables.service
systemctl disable iptables.service
systemctl enable modclusterd.service
systemctl enable ricci.service
systemctl stop ip6tables.service
systemctl stop iptables.service
systemctl start modclusterd.service
systemctl restart ricci.service
</source>
 
Reboot if <span class="code">selinux</span> was updated.
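The <span class="code">sed</span> substitution above can be rehearsed on a scratch copy of the config file first; this is a sketch using an illustrative <span class="code">/tmp</span> path rather than the real <span class="code">/etc/selinux/config</span>.

<source lang="bash">
# Try the SELINUX substitution on a scratch copy before editing the real
# /etc/selinux/config. The demo file and its path are illustrative only.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config-demo
sed -e "s/SELINUX=enforcing/SELINUX=disabled/" -i /tmp/selinux-config-demo
grep '^SELINUX=' /tmp/selinux-config-demo
# prints: SELINUX=disabled
</source>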
 
=== Distro Build Tests ===
 
{|class="wikitable sortable"
!Distro
!style="white-space: nowrap;"|Arch
!style="white-space: nowrap;"|Date tested<br /><span class="code">(YYYY-MM-DD)</span>
!style="white-space: nowrap;"|Results
!Notes
|-
|style="white-space: nowrap;"|Fedora Rawhide
|x86_64
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|-
|Fedora Rawhide
|i386
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|-
|Fedora 16
|x86_64
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|-
|Fedora 16
|i386
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|-
|Fedora 15
|x86_64
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|-
|Fedora 15
|i386
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|-
|Ubuntu 11.10
|amd64
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|-
|Ubuntu 11.10
|i386
|<span class="code">2011-12-03</span>
|style="color: green;"|Success
|
|}
 
=== Cluster Tests ===
 
Host Nodes are Fedora 16, x86_64.
 
{|class="wikitable sortable"
!style="white-space: nowrap;"|Test
!style="white-space: nowrap;"|Result
!Notes
|-
!style="white-space: nowrap;"|Install via <span class="code">make install</span>
|style="color: green;"|Pass
|
|-
!style="white-space: nowrap;"|Full Cluster Start
|style="color: green;"|Pass
|
|-
!style="white-space: nowrap;"|Withdraw One Node, Retain Quorum
|style="color: green;"|Pass
|
|-
!style="white-space: nowrap;"|Withdraw Second Node, Drop Quorum
|style="color: green;"|Pass
|
|-
!style="white-space: nowrap;"|Push out updated <span class="code">cluster.conf</span>
|style="color: green;"|Pass
|
|-
!style="white-space: nowrap;"|Start the service via <span class="code">rgmanager</span>
|style="color: green;"|Pass
|
|-
!style="white-space: nowrap;"|Manually relocate the service
|style="color: green;"|Pass
|
|-
!style="white-space: nowrap;"|Fence node/recover service
|style="color: red;"|Failed
|This may well be a configuration issue.
<source lang="bash">
fence_node test-node-3
</source>
<source lang="text">
fence test-node-3 failed
</source>
Syslog;
<source lang="text">
Dec  6 00:50:15 test-node-1 fence_node[3744]: fence test-node-3 failed
</source>
|}
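As a debugging aid for the failed fence above, the fence result can be pulled out of syslog and the agent then tested by hand. This is a sketch: the log file and its path are stand-ins copied from the failure captured in the table, and the <span class="code">fence_virsh</span> invocation in the comment uses assumed-standard fence agent options.

<source lang="bash">
# Sketch: grep syslog-style output for fence results. The log file here is a
# stand-in containing the line captured from the failure above.
log=/tmp/fence-syslog-demo
printf 'Dec  6 00:50:15 test-node-1 fence_node[3744]: fence test-node-3 failed\n' > "$log"
grep 'fence_node' "$log" | grep -o 'fence test-node-[0-9]* .*'
# prints: fence test-node-3 failed
# If this shows "failed", the next step is testing the agent by hand with the
# same parameters as the <fencedevice> entry (assumed options; not run here):
#   fence_virsh -a 192.168.1.102 -l root -p secret -n test-node-3 -o status
</source>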
 
Cluster configuration used;
 
<source lang="xml">
<?xml version="1.0"?>
<cluster config_version="1" name="rm-cluster">
        <totem rrp_mode="none" secauth="off" />
        <clusternodes>
                <clusternode name="test-node-1.alteeve.com" nodeid="1">
                        <fence>
                                <method name="virt">
                                        <device action="reboot" name="virt-1" port="test-node-1" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="test-node-2.alteeve.com" nodeid="2">
                        <fence>
                                <method name="virt">
                                        <device action="reboot" name="virt-1" port="test-node-2" />
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="test-node-3.alteeve.com" nodeid="3">
                        <fence>
                                <method name="virt">
                                        <device action="reboot" name="virt-1" port="test-node-3" />
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="virt-1" agent="fence_virsh" ipaddr="192.168.1.102" login="root" passwd="secret" />
        </fencedevices>
        <rm log_level="5">
                <resources>
                        <ip address="192.168.1.30" />
                </resources>
                <failoverdomains>
                        <failoverdomain name="virt-ip" nofailback="0" ordered="1" restricted="0">
                                <failoverdomainnode name="test-node-1.alteeve.com" priority="1" />
                                <failoverdomainnode name="test-node-2.alteeve.com" priority="2" />
                                <failoverdomainnode name="test-node-3.alteeve.com" priority="3" />
                        </failoverdomain>
                </failoverdomains>
                <service autostart="1" domain="virt-ip" name="float_ip" recovery="relocate">
                        <ip ref="192.168.1.30" />
                </service>
        </rm>
</cluster>
</source>
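When pushing out an updated <span class="code">cluster.conf</span> (as in the "Push out updated <span class="code">cluster.conf</span>" test above), <span class="code">config_version</span> must be incremented first. This sketch bumps it with <span class="code">sed</span> on a scratch one-line copy; the file and path are illustrative.

<source lang="bash">
# Bump config_version in a scratch copy of cluster.conf; a real update would
# edit /etc/cluster/cluster.conf and then push it out to the other nodes.
conf=/tmp/cluster-conf-demo
printf '<cluster config_version="1" name="rm-cluster">\n' > "$conf"
old=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$conf")
new=$((old + 1))
sed -i "s/config_version=\"$old\"/config_version=\"$new\"/" "$conf"
grep -o 'config_version="[0-9]*"' "$conf"
# prints: config_version="2"
</source>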



Revision as of 18:35, 24 December 2011


These are notes, primarily for me, used in my RHCS release manager tasks.

= Test Results =

These tests are run by copying over the tarball created on the build machine (my laptop), untarring it and running <span class="code">./configure && make</span>.
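The copy-and-untar cycle can be sketched with a throwaway tree; the <span class="code">cluster-3.1.8</span> directory below is a made-up stand-in, as the real tarball comes from the build machine.

<source lang="bash">
# Build a stand-in tarball and list its contents, mimicking the copy/untar
# step; the real cycle continues with ./configure && make inside the tree.
mkdir -p /tmp/rm-demo/cluster-3.1.8
tar -czf /tmp/rm-demo/cluster-3.1.8.tar.gz -C /tmp/rm-demo cluster-3.1.8
tar -tzf /tmp/rm-demo/cluster-3.1.8.tar.gz
# prints: cluster-3.1.8/
</source>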

== Testing v3.1.9x ==

Setup is the same as for v3.1.8 above; install the packages, disable <span class="code">selinux</span>, <span class="code">iptables</span> and <span class="code">ip6tables</span>, then enable and start <span class="code">modclusterd</span> and <span class="code">ricci</span>, rebooting if <span class="code">selinux</span> was changed.

=== Distro Build Tests ===

{|class="wikitable sortable"
!Distro
!style="white-space: nowrap;"|Arch
!style="white-space: nowrap;"|Date tested<br /><span class="code">(YYYY-MM-DD)</span>
!style="white-space: nowrap;"|Results
!Notes
|-
|style="white-space: nowrap;"|Fedora Rawhide
|x86_64
|<span class="code">2011-12-24</span>
|
|
|-
|Fedora Rawhide
|i386
|<span class="code">2011-12-24</span>
|
|
|-
|Fedora 16
|x86_64
|<span class="code">2011-12-24</span>
|
|
|-
|Fedora 16
|i386
|<span class="code">2011-12-24</span>
|
|
|-
|Fedora 15
|x86_64
|<span class="code">2011-12-24</span>
|
|
|-
|Fedora 15
|i386
|<span class="code">2011-12-24</span>
|
|
|-
|Ubuntu 11.10
|amd64
|<span class="code">2011-12-24</span>
|
|
|-
|Ubuntu 11.10
|i386
|<span class="code">2011-12-24</span>
|
|
|}

=== Cluster Tests ===

Host Nodes are Fedora 16, x86_64.

{|class="wikitable sortable"
!style="white-space: nowrap;"|Test
!style="white-space: nowrap;"|Result
!Notes
|-
!style="white-space: nowrap;"|Install via <span class="code">make install</span>
|
|
|-
!style="white-space: nowrap;"|Full Cluster Start
|
|
|-
!style="white-space: nowrap;"|Withdraw One Node, Retain Quorum
|
|
|-
!style="white-space: nowrap;"|Withdraw Second Node, Drop Quorum
|
|
|-
!style="white-space: nowrap;"|Push out updated <span class="code">cluster.conf</span>
|
|
|-
!style="white-space: nowrap;"|Start the service via <span class="code">rgmanager</span>
|
|
|-
!style="white-space: nowrap;"|Manually relocate the service
|
|
|-
!style="white-space: nowrap;"|Fence node/recover service
|
|
|}

The cluster configuration used is the same as shown for v3.1.8 above.
== Old Tests ==

* [[Previous cluster Release Tests]]

= Distro Testing =

Note: Always update the OS before running tests!

== Fedora 15, 16 and Rawhide ==

Packages to install;

<source lang="bash">
yum -y groupinstall "Development Libraries" "Development Tools" "Fedora Packager"
yum -y install vim wget corosynclib-devel openaislib-devel
</source>

== Debian 6 ==

Debian 6 does not support RHEL's Cluster 3.1+ as its included version of corosync is too old. No further compatibility testing will be run for this version of Debian.

== Ubuntu 11.10 ==

Packages to install;

<source lang="bash">
apt-get update
apt-get -y dist-upgrade
apt-get -y install linux-headers-$(uname -r) libxml2-dev libcorosync-dev libldap2-dev zlib1g-dev libopenais-dev libdbus-1-dev \
 libslang2-dev libncurses5-dev
</source>
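The <span class="code">linux-headers-$(uname -r)</span> argument expands to a kernel-specific package name; this just shows the expansion on the current machine.

<source lang="bash">
# Show what package name apt-get will be asked to install; output depends on
# the running kernel, e.g. linux-headers-3.0.0-12-generic on Ubuntu 11.10.
echo "linux-headers-$(uname -r)"
</source>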

= Build Environment =

This is how to set up a machine for building and releasing new versions of RHCS. This requires a proper FAS account.

<source lang="bash">
yum -y groupinstall "Development Libraries" "Development Tools" "Fedora Packager"
yum -y install vim wget gnupg
</source>

Clone the <span class="code">cluster</span> repository and check out the stable branch;

<source lang="bash">
git clone ssh://git.fedorahosted.org/git/cluster.git
cd cluster/
git branch stable31 --track origin/STABLE31
git checkout stable31
</source>
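The branch-tracking step can be rehearsed in a throwaway local repository; everything below is a stand-in, since the real clone needs access to fedorahosted. A local <span class="code">STABLE31</span> branch plays the role of <span class="code">origin/STABLE31</span>.

<source lang="bash">
# Rehearse branch + checkout in a scratch repo; a local STABLE31 branch
# stands in for origin/STABLE31, and identity is set only for the demo commit.
repo=/tmp/rm-branch-demo
rm -rf "$repo"
git -c init.defaultBranch=master init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'initial commit'
git branch STABLE31
git branch stable31 --track STABLE31 >/dev/null 2>&1
git checkout -q stable31
git rev-parse --abbrev-ref HEAD
# prints: stable31
</source>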

Change this to reflect the appropriate versions.

<source lang="bash">
make -f make/release.mk version=3.1.8 oldversion=3.1.7
</source>
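A small helper (hypothetical, not part of the tree) can derive <span class="code">oldversion</span> from the new version string, assuming only the micro number is being bumped.

<source lang="bash">
# Derive oldversion from the new version string for the release.mk call,
# under the simplifying assumption that only the micro number changed.
version=3.1.8
micro=${version##*.}
oldversion=${version%.*}.$((micro - 1))
echo "make -f make/release.mk version=$version oldversion=$oldversion"
# prints: make -f make/release.mk version=3.1.8 oldversion=3.1.7
</source>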

= Pushing Changes to git =

== Initial Setup of git ==

Don't have notes; this is from bash's history. Sort this out later.

Misc commands;

<source lang="bash">
man git-send-email
git clone http://git.fedorahosted.org/git/cluster.git
man git
git branch
git show-branch
</source>

== Pushing To origin ==

Confirming the changes before committing.

<source lang="bash">
git diff
</source>
<source lang="diff">
diff --git a/configure b/configure
index 9c7a773..4e9bc2a 100755
--- a/configure
+++ b/configure
@@ -254,13 +254,14 @@ sub kernel_version {
     }
     close MAKEFILE;
     # Warn and continue if kernel version was not found
-    if (!$build_version || !$build_patchlevel || !$build_sublevel) {
+    if (not defined $build_version || not defined $build_patchlevel || not defined $build_sublevel) {
        print " WARNING: Could not determine kernel version.\n";
        print "          Build might fail!\n";
        return 1;
     }
     # checking VERSION, PATCHLEVEL and SUBLEVEL for the supplied kernel
-    if ($build_version >= $version[0] &&
+    if (($build_version > $version[0]) ||
+        $build_version == $version[0] &&
         $build_patchlevel >= $version[1] &&
         $build_sublevel >= $version[2]) {
       print " Current kernel version appears to be OK\n";
</source>

Pushing changes;

Commit locally with signature.

<source lang="bash">
git commit -a -s
</source>

(opens editor)

<source lang="text">
[stable31 9be00f8] Changes the kernel version check to handle 3.x.y kernels. Now if the 'x' version of the running kernel is higher than the 'x' version of the minimum kernel, the test passes. Also changed the method of checking that version numbers were gathered so that a version number of '0' would be seen as valid.
 Committer: Digital Mermaid <digimer@lework.alteeve.com>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly:

    git config --global user.name "Your Name"
    git config --global user.email you@example.com

After doing this, you may fix the identity used for this commit with:

    git commit --amend --reset-author

 1 files changed, 3 insertions(+), 2 deletions(-)
</source>

Check the current branch name.

<source lang="bash">
git branch
</source>
<source lang="text">
  master
* stable31
</source>

This shows that <span class="code">stable31</span> is active.

Now push up to the main git repo.

<source lang="bash">
git push origin stable31:STABLE31
</source>
<source lang="text">
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 543 bytes, done.
Total 3 (delta 2), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/cluster.git
   991bfb0..9be00f8  stable31 -> STABLE31
</source>

An email should have been automatically sent to the appropriate mailing lists.

= Pushing The Release =

 
