Managing Software RAID Arrays


This quickie covers:

  • Tearing down an existing software RAID 5 array.
  • Deleting its underlying primary partition on each array member.
  • Creating a new set of four extended partitions.
  • Creating four new RAID 5 arrays out of the new extended partitions.
  • Ensuring that /etc/mdadm.conf and /etc/fstab are updated.

Warnings And Assumptions

This tutorial covers fundamentally changing the storage on a server. There is a very high, very real chance that all data will be lost.

DO NOT FOLLOW THIS TUTORIAL ON A LIVE MACHINE! Use this to practice on a test machine only.

It is assumed that you have a fundamental understanding of, and comfort with, the Linux command line, specifically the bash shell.

Let's Begin

So then, let's get to work.

Viewing The Current Configuration

We want to look at three things:

  • The current RAID devices.
  • What is using the RAID device we plan to delete.
  • The current partition layout.

Current RAID Configuration

Checking the current RAID devices involves seeing which arrays are running and which are configured. The first is checked by looking at the /proc/mdstat file, and the latter by looking at the /etc/mdadm.conf file.

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] 
md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      264960 blocks [4/4] [UUUU]
      
md1 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]
      6289152 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      
md3 : active raid5 sdd4[3] sdc4[2] sdb4[1] sda4[0]
      1395148608 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md2 : active raid5 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      62917632 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
cat /etc/mdadm.conf
DEVICE partitions
MAILADDR root
ARRAY /dev/md2 level=raid5 num-devices=4 uuid=7af5fde9:646394dd:d46d09a3:eb495b50
ARRAY /dev/md0 level=raid1 num-devices=4 uuid=2280ed9e:24f99bf5:4cb4f32c:f3b58eb4
ARRAY /dev/md1 level=raid5 num-devices=4 uuid=5ae2c898:5837f4a0:a3f0a617:955802c1
ARRAY /dev/md3 level=raid5 num-devices=4 metadata=0.90 spares=1 UUID=a2636590:fcb1e82a:3f1d7145:41a20e6d

So we see that four devices are configured and operating. We will tear down /dev/md3.
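
Before tearing it down, it can be worth capturing the array's current details for your records. This is a standard mdadm invocation; its output (member devices, UUID, chunk size and so on) is omitted here as it will vary by system.

mdadm --detail /dev/md3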

Ensuring that /dev/md3 is no longer in use

We need to ensure that /dev/md3 is no longer in use by checking what is mounted and confirming that no other program uses it.

There are *many* applications that might use the raw storage space:

  • A normal file system.
  • DRBD.
  • LVM.
  • Others.

We'll look for local file systems by using df and by checking /etc/fstab.

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2               57G  2.5G   51G   5% /
/dev/md0              251M   37M  201M  16% /boot
tmpfs                 7.7G     0  7.7G   0% /dev/shm
none                  7.7G   40K  7.7G   1% /var/lib/xenstored

It's not mounted.

cat /etc/fstab
/dev/md2                /                       ext4    defaults        1 1
/dev/md0                /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/md1                swap                    swap    defaults        0 0

There is no corresponding entry.

Now let's see if it's part of a DRBD resource. This involves checking /proc/drbd, if it exists, and then checking /etc/drbd.conf. If you're not using DRBD, neither of these files will exist and you can move on.
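
If you'd like a quick scripted check, something like this one-liner works in bash. It's just a convenience; looking for the file by hand is equally fine.

# If the drbd kernel module is loaded, /proc/drbd will exist.
if [ -e /proc/drbd ]; then echo "DRBD is loaded"; else echo "DRBD is not in use here"; fi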

Note: If you have a DRBD resource using /dev/md3, make sure that the resource is not in use by LVM before destroying the DRBD resource. If you don't remove LVM first, then you might have trouble later as the LVM signature may persist.
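
For reference, removing the LVM layer follows the usual order of operations: logical volumes first, then the volume group, then the physical volume. The names below (vg0, lv0) are hypothetical examples, not taken from this system:

# Hypothetical names; substitute your own volume group and logical volume(s).
lvremove /dev/vg0/lv0      # remove each logical volume on the PV
vgremove vg0               # remove the now-empty volume group
pvremove /dev/drbd0        # wipe the LVM signature from the DRBD device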

First, look in /etc/drbd.conf and see if any of the configured resources use /dev/md3. If they do, make sure you tear down the matching resource on the other node.

cat /etc/drbd.conf
#
# please have a a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#

global {
        usage-count     yes;
}

common {
        protocol        C;
        syncer {
                rate    15M;
        }

        disk {
                fencing         resource-and-stonith;
        }

        handlers {
                outdate-peer    "/sbin/obliterate";
        }

        net {
                allow-two-primaries;
                after-sb-0pri   discard-zero-changes;
                after-sb-1pri   discard-secondary;
                after-sb-2pri   disconnect;
        }

        startup {
                become-primary-on       both;
        }
}

resource r0 {
        device          /dev/drbd0;
        meta-disk       internal;

        on an-node01.alteeve.com {
                address         192.168.2.71:7789;
                disk            /dev/md3;
        }

        on an-node02.alteeve.com {
                address         192.168.2.72:7789;
                disk            /dev/md3;
        }
}

Here we see that /dev/md3 is in fact in use by DRBD, and its block device name is /dev/drbd0. Given this, we'll want to go back and look at df and /etc/fstab again to make sure that /dev/drbd0 wasn't listed. It wasn't, so we're ok to proceed.

We won't actually destroy this resource yet; we'll come back to it once we know that LVM is not using /dev/md3 or /dev/drbd0.
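
When the time does come, taking the resource down will look roughly like the following. This is a sketch, not a step to run now, and it assumes the resource is named r0, as in the configuration above.

# On both nodes, once nothing is using /dev/drbd0:
drbdadm down r0            # disconnect and detach the resource
drbdadm wipe-md r0         # then wipe DRBD's metadata from /dev/md3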


Now let's look to make sure it's not an LVM physical volume.
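
The simplest check is to list the known physical volumes and see whether /dev/md3 or /dev/drbd0 appears among them. Either of these standard LVM commands will do; the output varies by system, so none is shown here:

pvs                        # terse; one line per physical volume
pvdisplay                  # verbose details for each physical volume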
