by Antonio Castro <acastro(at)ctv.es>

About the author: I am a computer scientist and have had the opportunity to work with several Unix flavors, doing tasks ranging from software design to system administration.

Translated to English by: Miguel A Sepulveda <Miguel.Sepulveda(at)disney.com>
Installation and Configuration of a RAID System

Abstract:
RAID (Redundant Array of Inexpensive Disks) is a family of techniques for organizing several disk drives into a single entity that behaves as one virtual drive, while making the individual disks work in parallel, thus improving access performance and protecting the stored information against accidental disk failures.
There are also RAID implementations based on controller cards that allow a user to manage several identical disk devices as a RAID, thanks to a simple Z80 chip and on-board software. Given these specifications, it cannot be claimed that such a solution is more efficient than a Linux-based one.
Implementations based on controller cards are expensive and also force the user to purchase only identical disk devices. Linux, given the appropriate device drivers, could use some of these cards, but that would not be an interesting solution: Linux allows a free, software-based solution that is equally efficient and avoids the expensive hardware alternatives.
By contrast, using multiple disk devices on the same IDE controller means that these devices can never be accessed simultaneously. It is a pity that SCSI disks are still much more expensive than their IDE counterparts. The software solution for a Linux RAID system is equally efficient (if not more so) than those based on special cards, and of course cheaper and more flexible in terms of the disk devices permitted.
While on a SCSI bus one device can be sending data while another is retrieving it, on an IDE interface one disk is accessed first and the other afterwards.
It is also necessary to take into account that the Linux system must boot from a non-RAID disk device, and one of small size, so that the root partition remains relatively free.
Name | Bus width (bits) | Max devices | MB/s | Connector | Max cable length |
---|---|---|---|---|---|
SCSI-1 | 8 | 7 | 5 | 50-pin low density | 6 m |
SCSI-2 (alias Fast SCSI, or Narrow SCSI) | 8 | 7 | 10 | 50-pin high density | 3 m |
SCSI-3 (alias Ultra, or Fast20) | 8 | 7 | 20 | 50-pin high density | 3 m |
Ultra Wide (alias Fast SCSI-3) | 16 | 15 | 40 | 68-pin high density | 1.5 m |
Ultra2 | 16 | 15 | 80 | 68-pin high density | 12 m |
IDE devices appear under Linux as /dev/hd..., SCSI devices as /dev/sd..., and metadisks as /dev/md..., the latter being available after compiling the kernel with the options specified later. Four such devices should be present:
brw-rw----   1 root     disk       9,   0 may 28  1997 md0
brw-rw----   1 root     disk       9,   1 may 28  1997 md1
brw-rw----   1 root     disk       9,   2 may 28  1997 md2
brw-rw----   1 root     disk       9,   3 may 28  1997 md3

Our first goal should be to make the swap access time as small as possible. For that purpose it is best either to use a small metadisk on the RAID, or to spread the swap in the traditional fashion among all the physical disks. If several swap partitions are used, each on a different physical disk, the swap subsystem of Linux manages the load among them by itself, so a RAID would be unnecessary in this scenario.
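For example, spreading the swap over three physical disks only requires giving each swap partition the same priority in /etc/fstab, as in the complete configuration shown later in this article:

/dev/hda3  none  swap  sw,pri=10
/dev/sda3  none  swap  sw,pri=10
/dev/sdb1  none  swap  sw,pri=10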
In this mode (RAID4) the ultimate goal is to balance the advantages of RAID0 and RAID1; data is organized by mixing both methods. Physical disks 1 to N-1 are organized in striping mode (RAID0), and the Nth disk stores the parity of the individual bits of the corresponding blocks on disks 1 to N-1. If any of the disks fails, its contents can be recovered using the parity information on the Nth disk. Efficiency during read operations is N-1, and during write operations it is 1/2, because writing a data block now also involves writing to the parity disk. To restore a broken hard disk, one only has to read the surviving information and rewrite the reconstructed data (reading from the parity disk and writing to the newly installed disk).
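A small worked example, assuming N=4 disks and one byte per block: if disks 1 to 3 hold the bytes 0xA5, 0x3C and 0x0F, the parity disk stores 0xA5 XOR 0x3C XOR 0x0F = 0x96. Should disk 2 fail, its byte is recovered as 0xA5 XOR 0x0F XOR 0x96 = 0x3C.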
If the reader cannot use identical disks, take into account that RAID systems always work with identical blocks of information. It is possible that the slower hard disks will be forced to work harder, but in any case the RAID configuration will still yield better performance. The performance increase of a properly configured RAID system is truly spectacular; it is almost accurate to say that performance grows linearly with the number of hard disks in the RAID.
RAID0 has no redundancy, but consider that redundancy only pays off with a large number of disks; otherwise too much capacity is wasted. Sacrificing a whole disk when we only have three is a waste. Furthermore, redundancy does not cover all possible causes of information loss, only those due to physical failure of the hard disks, a rather uncommon event. If 10 hard disks were available, dedicating one to parity control would not be much of a waste. On a RAID0, a failure of any one disk means losing all the information stored across all the physical disks, so we recommend an appropriate backup policy.
The first step is to add the appropriate drivers to the kernel. For Linux 2.0.xx the RAID options are:
Multiple devices driver support (CONFIG_BLK_DEV_MD) [Y/n/?] Y
Linear (append) mode (CONFIG_MD_LINEAR) [Y/m/n/?] Y
RAID-0 (striping) mode (CONFIG_MD_STRIPED) [Y/m/n/?] Y

After booting the system with the new kernel, the /proc filesystem will contain an entry, mdstat, holding the status of the four devices (four is the default number) newly created as md0, md1, md2 and md3. Since none of them has been initialized yet, they should all appear inactive, and none is usable yet.
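The current status can be checked at any time by reading that file:

cat /proc/mdstat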
The necessary utilities are:

- mdadd
- mdrun
- mdstop
- mdop

They can be downloaded from sweet-smoke.ufr-info-p7.ibp.fr in /pub/Linux, but they are often part of most distributions.
For kernels 2.1.62 and higher there is a different package, called 'RAIDtools', which supports RAID0, RAID4 and RAID5.
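Those newer tools are configured through /etc/raidtab instead of /etc/mdtab; a minimal RAID0 entry would look roughly as follows (a sketch following the RAIDtools conventions, not an example taken from this article):

raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        chunk-size            8
        persistent-superblock 1
        device                /dev/sdb1
        raid-disk             0
        device                /dev/sdc1
        raid-disk             1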
In the following example we illustrate how to define a metadisk in linear (append) mode that uses two hard disks, more specifically /dev/sdb1 and /dev/sdc1.
meta-device | RAID Mode | Disk Partition 1 | Disk Partition 2 |
---|---|---|---|
/dev/md0 | linear | /dev/sdb1 | /dev/sdc1 |
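Using the /etc/mdtab syntax shown further below, this table corresponds to an entry along these lines (a sketch; check the mode keyword against your version of the tools):

# <meta-device> <RAID-mode> <DskPart1> <DskPart2>
/dev/md0        linear      /dev/sdb1  /dev/sdc1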
Once the metadisk has been formatted, its configuration should not be altered under any circumstance, or all the information stored in it will be lost.
mdadd -a
mdrun -a

At this point md0 should already appear initialized. To format it:
mke2fs /dev/md0

And to mount it:
mkdir /mount/md0
mount /dev/md0 /mount/md0

If everything has worked so far, the reader can now include these commands in the boot scripts so that the RAID0 metadisk is mounted automatically the next time the system reboots. To mount the RAID0 system automatically, it is first necessary to add an entry to the /etc/fstab file, and to run the commands 'mdadd -a' and 'mdrun -a' from a script executed before mounting. On a Debian distribution, a good place for these commands is the /etc/init.d/checkroot.sh script, just before the root filesystem is remounted in read/write mode, that is, just before the "mount -n -o remount,rw /" line.
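The relevant fragment of /etc/init.d/checkroot.sh would then look roughly like this (a sketch; the /sbin paths are an assumption and may differ on your system):

# Activate and start the metadisks declared in /etc/mdtab
/sbin/mdadd -a
/sbin/mdrun -a
# Existing line that remounts the root filesystem read/write
mount -n -o remount,rw /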
HD 6Gb IDE:
/ | /bigTemp + /incoming | swap | 2Gb (RAID) hda4 |

HD 4.2Gb SCSI:
C: | D: | swap | 2Gb (RAID) sda4 |

HD 2Gb SCSI:
swap | 2Gb (RAID) sdb2 |
#######</etc/fstab>################################################
# <file system> <mount point> <type> <options>  <dump> <pass>
/dev/hda1       /             ext2   defaults   0      1
/dev/hda2       /mnt/hda2     ext2   defaults   0      2
/dev/md0        /mnt/md0      ext2   defaults   0      2
proc            /proc         proc   defaults   0      2
/dev/hda3       none          swap   sw,pri=10
/dev/sdb1       none          swap   sw,pri=10
/dev/sda3       none          swap   sw,pri=10
#########</etc/mdtab>#######################################
# <meta-device> <RAID-mode> <DskPart1> <DskPart2> <DskPart3>
/dev/md0        RAID0,8k    /dev/hda4  /dev/sda4  /dev/sdb2

The root partition is located on the 6Gb disk as hda1, and next to it there is a large partition used for downloads from the Internet, CD image storage, etc. This partition does not add much load because it is not used often. The 4.2Gb disk has no partitions that could penalize the efficiency of the RAID, because its other partitions are MSDOS partitions hardly ever used from Linux. The 2Gb disk is almost entirely dedicated to the RAID system. A small area is reserved on each disk as swap space.
We should try to make all the disk partitions in the RAID approximately the same size, because large differences decrease RAID performance; small differences are not significant. We use all the available space, so that all the data that can be interleaved across the disks is interleaved, and the rest remains free.
Mounting several IDE disks on a single RAID is not very efficient, but combining one IDE disk with several SCSI disks works very well: IDE disks do not allow concurrent access, while SCSI disks do.
© Antonio Castro, FDL LinuxFocus.org