NSLU2-Linux

Please note: This Howto describes how to set up RAID1 with LVM2 using SlugOS/LE for data partitions. If you also want to mirror your root partition (without LVM), please have a look at TurnupToRAID.

I played around for half a day until the configuration finally worked for me - I will now try to recall my steps as accurately as possible.

Prerequisites

  • A functional NSLU2
  • A pair of working flash memory sticks, or powered external disk drives
  • A suitable system from which you can log in to the NSLU2 using an appropriate SSH client.

Agenda:

1. The RAID
   1.1 Preparations for the RAID
   1.2 Setting up the RAID
2. Setting up LVM2
3. Modifying the system to automatically recognize LVM devices

After you have completed the basic installation following the existing guides InstallandTurnupABasicSlugOSSystem or TurnupToRAID, you should now have your slug up and running.

1. The RAID

1.1 Preparations for the RAID

First of all, please make sure you have the basic packages already installed:

# opkg --force-overwrite install joe nano vim bash file gawk util-linux procps coreutils mdadm

Next, prepare the partitions for the RAID1. Plug in the disks (referred to as /dev/sdb and /dev/sdc in this Howto) and have a look at dmesg to see how they are named on your system.

Use fdisk to create a partition of type 83 (Linux). As we are going to set up LVM, I used the whole disk space for a single partition.

# fdisk -l /dev/sdb
Disk /dev/sdb: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       30515   245111706   83  Linux 

Create a partition of the same size on /dev/sdc (it does not matter if the device has a higher capacity - the size of the partition is what counts here):

# fdisk -l /dev/sdc
Disk /dev/sdc: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       30515   245111706   83  Linux 
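Before creating the array, it is worth double-checking that both partitions report the same block count. A minimal sketch of that check, using sample lines hard-coded in the `fdisk -l` format shown above (on the slug you would substitute the real `fdisk -l` output):

```shell
#!/bin/sh
# Extract the Blocks column for a given partition from fdisk -l style output
# and compare the two partitions. The sample lines below are placeholders;
# use e.g. "$(fdisk -l /dev/sdb)" on the real system.
sdb_line="/dev/sdb1               1       30515   245111706   83  Linux"
sdc_line="/dev/sdc1               1       30515   245111706   83  Linux"

get_blocks() {
    # $1 = fdisk -l output, $2 = partition device name
    echo "$1" | awk -v dev="$2" '$1 == dev { print $4 }'
}

b=$(get_blocks "$sdb_line" /dev/sdb1)
c=$(get_blocks "$sdc_line" /dev/sdc1)
if [ "$b" = "$c" ]; then
    echo "partition sizes match ($b blocks)"
else
    echo "size mismatch: $b vs $c"
fi
```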

1.2 Setting up the RAID

Now, it's getting interesting. With these partitions set up, it's time to create the RAID:

# mdadm --create --auto=yes /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

You should get a confirmation: mdadm: array /dev/md0 started.

Update the information in your /etc/mdadm.conf. mdadm *should* be able to identify the partitions of the RAID based on a signature written during the creation process - but you never know...

# mdadm --detail --scan >/etc/mdadm.conf
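The resulting file should then contain a single ARRAY line, roughly in this form (the UUID shown here is only a placeholder - yours will differ, and the exact fields depend on your mdadm version):

```
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```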

Thumbs up - your RAID is now set up! You may want to check the progress of the sync with the following command:

# cat /proc/mdstat

If everything is okay, it should look similar to this:

Personalities : [raid1] 
md0 : active raid1 sdb1[0] sdc1[1]
      245111616 blocks [2/2] [UU]

unused devices: <none>
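While the initial sync is still running, /proc/mdstat shows a progress line instead of just [UU]. A small sketch that pulls the percentage out of such a line (the sample line is hard-coded here to mimic the format; on the slug you would feed the real `cat /proc/mdstat` output instead):

```shell
#!/bin/sh
# Parse the sync percentage from an mdstat progress line. The line below is
# a hand-written sample; replace it with real /proc/mdstat content for use.
line="      [==>..................]  resync = 12.6% (31000000/245111616) finish=120.0min speed=29000K/sec"
pct=$(echo "$line" | sed -n 's/.* = \([0-9.]*\)% .*/\1/p')
echo "resync progress: ${pct}%"
```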

2. Setting up LVM2

Install the LVM related packages:

# opkg install kernel-module-dm-mod lvm2 device-mapper

After successful installation of the packages you need to load the dm_mod Kernel module:

# modprobe dm_mod

Let's make this persistent, so that the module is also available after a reboot. Just add a line to /etc/modules:

# echo "dm_mod" >> /etc/modules
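Note that repeating this echo would append a duplicate line. A slightly safer sketch that only appends when the entry is missing (a temp file stands in for /etc/modules here so the sketch is harmless to try; substitute /etc/modules on the slug):

```shell
#!/bin/sh
# Append "dm_mod" to a modules file only if it is not already listed.
# A temp file is used in place of /etc/modules to keep the sketch safe.
modfile=$(mktemp)
echo "dm_mod" >> "$modfile"                                   # first add
grep -qx "dm_mod" "$modfile" || echo "dm_mod" >> "$modfile"   # skipped: already there
count=$(grep -cx "dm_mod" "$modfile")
echo "dm_mod entries: $count"
rm -f "$modfile"
```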

Create the Physical Volume on /dev/md0:

# pvcreate /dev/md0

Create the Volume Group on this physical volume ('vg00' is how I named my Volume Group - feel free to choose whatever you like):

# vgcreate vg00 /dev/md0

You may need to activate the VG - run this anyway, to be on the safe side:

# vgchange -a y

Create the logical volumes. I've chosen 30GB for my /home and 200GB for /media/storage:

# lvcreate -L 200G -n storage vg00
# lvcreate -L 30G -n home vg00
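As a quick sanity check that both volumes fit: the array provides 245111616 1K blocks (see the mdstat output above, with a little of that consumed by LVM metadata), and lvcreate sizes like 200G are in GiB (1 GiB = 1048576 KiB). A minimal arithmetic sketch:

```shell
#!/bin/sh
# Check that 200G + 30G of logical volumes fit into the array's capacity.
vg_kib=245111616                         # 1K blocks reported in /proc/mdstat
want_kib=$(( (200 + 30) * 1048576 ))     # requested LV total in KiB
if [ "$want_kib" -le "$vg_kib" ]; then
    echo "LVs fit ($want_kib of $vg_kib KiB)"
else
    echo "not enough space"
fi
```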

Set up filesystems on our new Logical Volumes:

# mkfs.ext3 /dev/vg00/storage
# mkfs.ext3 /dev/vg00/home

This will now take some minutes - but we're nearly done!

3. Modifying the System to automatically recognize LVM devices.

Add the created partitions to your /etc/fstab:

# echo "/dev/vg00/home /home ext3 defaults 0 0" >> /etc/fstab
# echo "/dev/vg00/storage /media/storage ext3 defaults 0 0" >> /etc/fstab

Create /media/storage so that the mount command will not fail:

# mkdir /media/storage

Mount the new partitions:

# mount -a

And check if it works:

# df -h
...
/dev/vg00/home         30G   33M   30G   1% /home
/dev/vg00/storage     200G  149G   52G  75% /media/storage
...

Great!

Now for the point which took most of my time today - how to get the slug to detect and mount the LVM volumes automagically after boot. I played a bit with configs in /etc/modprobe.d/, but without success in the end.

What finally helped me out was the file /etc/init.d/mountall.sh, which is called during the boot process anyway. But before we edit anything, let's make a backup of the file:

# cp /etc/init.d/mountall.sh /etc/init.d/mountall.sh.orig

I added the following part right *after* the section for *mdadm* (line 17), but before the actual *mount* command and its accompanying comment:

#
# Added vgscan and vgchange to enable LVM2 configured partitions
#
if test -x /usr/sbin/lvm
then
        test "$VERBOSE" != no && echo "Scanning for and enabling LVM volume groups..."
        vgscan
        vgchange -a y
fi

The next section contains the call to "mount -at (...)", so there is no need to add anything else at this stage.

Just reboot and check whether everything works as expected. If not, lsmod helped me a lot in locating the problem. The following modules should be loaded for our configuration to work:

# lsmod
Module                  Size  Used by
raid1                  16288  1 
md_mod                 61908  2 raid1
dm_mod                 34056  6 
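The check above can also be scripted, e.g. as a post-boot sanity test. A sketch using sample `lsmod` output in the format shown above (on the slug you would substitute "$(lsmod)" for the hard-coded text):

```shell
#!/bin/sh
# Check that the modules required by this setup appear in lsmod output.
# Sample output is hard-coded here; use "$(lsmod)" on the real system.
lsmod_out="Module                  Size  Used by
raid1                  16288  1
md_mod                 61908  2 raid1
dm_mod                 34056  6"

missing=0
for m in raid1 md_mod dm_mod; do
    if echo "$lsmod_out" | grep -q "^$m "; then
        echo "$m: loaded"
    else
        echo "$m: MISSING"
        missing=1
    fi
done
```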
Last edited by Patrick Prodoehl p2.
Based on work by Patrick Prodoehl p2.
Originally by Michael Bhola thaughbaer.
Page last modified on June 14, 2009, at 03:11 PM