Contents
- 1 Sponsors
- 2 Introduction
- 3 RAID Types
- 3.1 Linear Mode RAID
- 3.2 RAID 0
- 3.3 RAID 1
- 3.3.1 Figure 26-1 RAID 0 And RAID 1 Operation
- 3.4 RAID 4
- 3.5 RAID 5
- 3.5.1 Figure 26-2 RAID 5 Operation
- 4 Before You Start
- 4.1 IDE Drives
- 4.2 Serial ATA Drives
- 4.3 SCSI Drives
- 4.4 Should I Use Software RAID Partitions Or Entire Disks?
- 4.5 Backup Your System First
- 4.6 Configure RAID In Single User Mode
- 5 Configuring Software RAID
- 5.1 RAID Partitioning
- 5.1.1 Determining Available Partitions
- 5.1.2 Unmount the Partitions
- 5.1.3 Prepare The Partitions With FDISK
- 5.1.4 Use FDISK Help
- 5.1.5 Set The ID Type
- 5.1.6 Make Sure The Change Occurred
- 5.1.7 Save The Changes
- 5.1.8 Repeat For The Other Partitions
- 5.2 Preparing the RAID Set
- 5.2.1 Create the RAID Set
- 5.2.2 Confirm RAID Is Correctly Initialized
- 5.2.3 Format The New RAID Set
- 5.2.4 Create the mdadm.conf Configuration File
- 5.2.5 Create A Mount Point For The RAID Set
- 5.2.6 Edit The /etc/fstab File
- 5.2.7 Mount The New RAID Set
- 5.2.8 Check The Status Of The New RAID
- 6 Conclusion
Introduction
The main goals of using redundant arrays of inexpensive disks (RAID) are to improve disk data performance and provide data redundancy.
RAID can be handled either by operating system software or by a purpose-built RAID disk controller card, without having to configure the operating system at all. This chapter explains how to configure the software RAID schemes supported by RedHat/Fedora Linux.
For the sake of simplicity, the chapter focuses on using RAID for partitions that include neither the /boot nor the root (/) filesystem.
RAID Types
Whether hardware- or software-based, RAID can be configured using a variety of standards. Here is a look at the most popular ones.
Linear Mode RAID
In linear mode RAID, the RAID controller views the RAID set as a chain of disks. Data is written to the next device in the chain only after the previous one is filled.
The aim of Linear RAID is to accommodate large filesystems spread over multiple devices with no data redundancy. A drive failure will corrupt your data.
Linear mode RAID is not supported by Fedora Linux.
RAID 0
With RAID 0, the RAID controller tries to evenly distribute data across all disks in the RAID set.
Envision a disk as if it were a plate, and think of the data as a cake. You have four cakes (chocolate, vanilla, cherry, and strawberry) and four plates. The initialization process of RAID 0 divides the cakes and distributes the slices across all the plates. The RAID 0 drivers then make it appear to the operating system that the cakes are intact and placed on one large plate. For example, four 9GB hard disks configured in a RAID 0 set are seen by the operating system as a single 36GB disk.
Like Linear RAID, RAID 0 aims to accommodate large filesystems spread over multiple devices with no data redundancy. The advantage of RAID 0 is data access speed. A file that is spread over four disks can be read four times as fast. You should also be aware that RAID 0 is often called striping.
RAID 0 can accommodate disks of unequal sizes. When RAID runs out of striping space on the smallest device, it then continues the striping using the available space on the remaining drives. When this occurs, the data access speed is lower for this portion of data, because the total number of RAID drives available is reduced. For this reason, RAID 0 is best used with drives of equal size.
RAID 0 is supported by Fedora Linux. Figure 26.1 illustrates the data allocation process in RAID 0.
RAID 1
With RAID 1, data is cloned on a duplicate disk. This RAID method is therefore frequently called disk mirroring. Think of telling two people the same story so that if one forgets some of the details you can ask the other one to remind you.
When one of the disks in the RAID set fails, the other one continues to function. When the failed disk is replaced, the data is automatically cloned to the new disk from the surviving disk. RAID 1 also offers the possibility of using a hot standby spare disk that will be automatically cloned in the event of a disk failure on any of the primary RAID devices.
RAID 1 offers data redundancy without the speed advantages of RAID 0. A disadvantage of software-based RAID 1 is that the server has to send the data twice, once to each of the mirror disks. This can saturate data busses and increase CPU load. With a hardware-based solution, the server CPU sends the data to the RAID disk controller once, and the disk controller then duplicates the data to the mirror disks. This makes RAID-capable disk controllers the preferred solution when implementing RAID 1.
A limitation of RAID 1 is that the total RAID size in gigabytes is equal to that of the smallest disk in the RAID set. Unlike RAID 0, the extra space on the larger device isn't used.
RAID 1 is supported by Fedora Linux. Figure 26.1 illustrates the data allocation process in RAID 1.
Figure 26-1 RAID 0 And RAID 1 Operation
RAID 4
RAID 4 operates like RAID 0 but adds a special error-correcting, or parity, chunk on an additional disk dedicated to this purpose.
RAID 4 requires at least three disks in the RAID set and can survive the loss of only a single drive. When a drive fails, the data on it can be recreated on the fly with the aid of the information on the RAID set's parity disk. When the failed disk is replaced, it is repopulated with the lost data with the help of the parity disk's information.
RAID 4 combines the high speed of RAID 0 with the redundancy of RAID 1. Its major disadvantage is that the data is striped, but the parity information is not. In other words, any data written to any section of the data portion of the RAID set must be followed by an update of the parity disk. The parity disk can therefore act as a bottleneck, and for this reason RAID 4 isn't used very frequently.
RAID 4 is not supported by Fedora Linux.
RAID 5
RAID 5 improves on RAID 4 by striping the parity data between all the disks in the RAID set. This avoids the parity disk bottleneck, while maintaining many of the speed features of RAID 0 and the redundancy of RAID 1. Like RAID 4, RAID 5 can survive the loss of a single disk only.
RAID 5 is supported by Fedora Linux. Figure 26.2 illustrates the data allocation process in RAID 5.
Linux RAID 5 requires a minimum of three disks or partitions.
Figure 26-2 RAID 5 Operation
Before You Start
Specially built hardware-based RAID disk controllers are available for both IDE and SCSI drives. They usually have their own BIOS, so you can configure them right after your system's power-on self-test (POST). Hardware-based RAID is transparent to your operating system; the hardware does all the work.
If hardware RAID isn't available, then you should be aware of these basic guidelines to follow when setting up software RAID.
IDE Drives
To save costs, many small business systems will probably use IDE disks, but they do have some limitations.
- The total length of an IDE cable can be only a few feet long, which generally limits IDE drives to small home systems.
- IDE drives are not hot-swappable. You cannot replace them while your system is running.
- Only two devices can be attached per controller.
- The performance of the IDE bus can be degraded by the presence of a second device on the cable.
- The failure of one drive on an IDE bus often causes the malfunctioning of the second device. This can be fatal if you have two IDE drives of the same RAID set attached to the same cable.
Serial ATA Drives
Serial ATA type drives are rapidly replacing IDE, or Ultra ATA, drives as the preferred entry level disk storage option because of a number of advantages:
- The drive data cable can be up to 1 meter long, versus IDE's 18 inches.
- Serial ATA has better error checking than IDE.
- There is only one drive per cable, which makes hot swapping (replacing components while the system is still running) possible without fear of affecting other devices on the data cable.
- There are no master/slave jumpers to set on Serial ATA drives, which makes them simpler to configure.
- IDE drives have a 133 Mbytes/s data rate, whereas the Serial ATA specification starts at 150 Mbytes/s with a goal of reaching 600 Mbytes/s over the expected ten-year life of the specification.
SCSI Drives
SCSI hard disks have a number of features that make them more attractive for RAID use than either IDE or Serial ATA drives.
- SCSI controllers are more tolerant of disk failures. The failure of a single drive is less likely to disrupt the remaining drives on the bus.
- SCSI cables can be up to 25 meters long, making them suitable for data center applications.
- Far more than two devices can be connected to a SCSI bus: 7 on a narrow (8-bit) bus or 15 on a wide (16-bit) bus.
- Some models of SCSI devices support "hot swapping" which allows you to replace them while the system is running.
- SCSI currently supports data rates of up to 640 Mbytes/s, making these drives highly desirable for installations where rapid data access is imperative.
Should I Use Software RAID Partitions Or Entire Disks?
It is generally not a good idea to share a disk between RAID-configured partitions and non-RAID partitions. The reason for this is obvious: a disk failure could still incapacitate the system.
If you decide to use RAID, all the partitions on each RAID disk should be part of a RAID set. Many people simplify this problem by filling each disk of a RAID set with only one partition.
Backup Your System First
Software RAID creates the equivalent of a single RAID virtual disk drive made up of all the underlying regular partitions used to create it. You have to format this new RAID device before your Linux system can store files on it. Formatting, however, causes all the old data on the underlying RAID partitions to be lost. It is best to back up the data on these and any other partitions on the disk drives on which you want to implement RAID; a mistake could unintentionally corrupt valid data.
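A minimal sketch of one way to do that, assuming the partitions you plan to convert are mounted on /data1, /data2, and /data3 (the mount points used later in this example) and that /root has room for the archives:
[root@bigboy tmp]# tar czf /root/backup-data1.tar.gz /data1
[root@bigboy tmp]# tar czf /root/backup-data2.tar.gz /data2
[root@bigboy tmp]# tar czf /root/backup-data3.tar.gz /data3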
Configure RAID In Single User Mode
As you will be modifying the disk structure of your system, you should also consider configuring RAID while your system is running in single-user mode from the VGA console. This ensures that most applications and networking are shut down and that no other users can access the system, reducing the risk of data corruption during the exercise.
[root@bigboy tmp]# init 1
Once finished, issue the exit command, and your system will return to the default runlevel defined in the /etc/inittab file.
Configuring Software RAID
Configuring RAID using Fedora Linux requires a number of steps that need to be followed carefully. In the tutorial example, you'll be configuring RAID 5 using a system with three pre-partitioned hard disks. The partitions to be used are:
/dev/hde1
/dev/hdf2
/dev/hdg1
Be sure to adapt the various stages outlined below to your particular environment.
RAID Partitioning
You first need to identify two or more partitions, each on a separate disk. If you are doing RAID 0 or RAID 5, the partitions should be of approximately the same size, as in this scenario. RAID limits the extent of data access on each partition to an area no larger than that of the smallest partition in the RAID set.
Determining Available Partitions
First use the fdisk -l command to view all the mounted and unmounted filesystems available on your system. You may then also want to use the df -k command, which shows only mounted filesystems but has the big advantage of giving you the mount points too.
These two commands should help you to easily identify the partitions you want to use. Here is some sample output of these commands.
[root@bigboy tmp]# fdisk -l
Disk /dev/hda: 12.0 GB, 12072517632 bytes
255 heads, 63 sectors/track, 1467 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 83 Linux
/dev/hda2 14 144 1052257+ 83 Linux
/dev/hda3 145 209 522112+ 82 Linux swap
/dev/hda4 210 1467 10104885 5 Extended
/dev/hda5 210 655 3582463+ 83 Linux
...
...
/dev/hda15 1455 1467 104391 83 Linux
[root@bigboy tmp]#
[root@bigboy tmp]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 1035692 163916 819164 17% /
/dev/hda1 101086 8357 87510 9% /boot
/dev/hda15 101086 4127 91740 5% /data1
...
...
...
/dev/hda7 5336664 464228 4601344 10% /var
[root@bigboy tmp]#
Unmount the Partitions
You don't want anyone else accessing these partitions while you are creating the RAID set, so you need to make sure they are unmounted.
[root@bigboy tmp]# umount /dev/hde1
[root@bigboy tmp]# umount /dev/hdf2
[root@bigboy tmp]# umount /dev/hdg1
Prepare The Partitions With FDISK
You have to change each partition in the RAID set to be of type FD (Linux raid autodetect), and you can do this with fdisk. Here is an example using /dev/hde1.
[root@bigboy tmp]# fdisk /dev/hde
The number of cylinders for this disk is set to 8355.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help):
Use FDISK Help
Now use the fdisk m command to get some help:
Command (m for help): m
...
...
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
...
...
Command (m for help):
If the partition you need doesn't exist yet, you can create it first with the n (new) command, choosing p for a primary partition and partition number 1, then pressing Enter twice to accept the default start and end cylinders:
n
p
1
Enter
Enter
Set The ID Type
Partition /dev/hde1 is the first partition on disk /dev/hde. Modify its type using the t command, and specify the partition number and type code. You can also use the L command to get a full listing of ID types in case you forget. In this case RAID uses type fd (Linux raid autodetect); the code may differ for your version of Linux.
Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): L
...
...
...
16 Hidden FAT16 61 SpeedStor f2 DOS secondary
17 Hidden HPFS/NTF 63 GNU HURD or Sys fd Linux raid auto
18 AST SmartSleep 64 Novell Netware fe LANstep
1b Hidden Win95 FA 65 Novell Netware ff BBT
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help):
Make Sure The Change Occurred
Use the p command to get the new proposed partition table:
Command (m for help): p
Disk /dev/hde: 4311 MB, 4311982080 bytes
16 heads, 63 sectors/track, 8355 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Device Boot Start End Blocks Id System
/dev/hde1 1 4088 2060320+ fd Linux raid autodetect
/dev/hde2 4089 5713 819000 83 Linux
/dev/hde4 6608 8355 880992 5 Extended
/dev/hde5 6608 7500 450040+ 83 Linux
/dev/hde6 7501 8355 430888+ 83 Linux
Command (m for help):
Save The Changes
Use the w command to permanently save the changes to disk /dev/hde:
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@bigboy tmp]#
The error above occurs if any of the other partitions on the disk is still mounted.
Repeat For The Other Partitions
For the sake of brevity, I won't show the process for the other partitions. It's enough to know that the steps for changing the IDs for /dev/hdf2 and /dev/hdg1 are very similar.
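If you would rather not repeat the interactive fdisk session for each remaining partition, older versions of sfdisk can change the type code directly; a non-authoritative sketch (newer sfdisk releases use --part-type instead, so check your man page first):
[root@bigboy tmp]# sfdisk --id /dev/hdf 2 fd
[root@bigboy tmp]# sfdisk --id /dev/hdg 1 fd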
Preparing the RAID Set
Now that the partitions have been prepared, we have to merge them into a new RAID partition that we'll then have to format and mount. Here's how it's done.
Create the RAID Set
You use the mdadm command with the --create option to create the RAID set. In this example we use the --level option to specify RAID 5, and the --raid-devices option to define the number of partitions to use.
For RAID 5:
[root@bigboy tmp]# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/hde1 /dev/hdf2 /dev/hdg1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/hde1 appears to contain an ext2fs file system
    size=48160K  mtime=Sat Jan 27 23:11:39 2007
mdadm: /dev/hdf2 appears to contain an ext2fs file system
    size=48160K  mtime=Sat Jan 27 23:11:39 2007
mdadm: /dev/hdg1 appears to contain an ext2fs file system
    size=48160K  mtime=Sat Jan 27 23:11:39 2007
mdadm: size set to 48064K
Continue creating array? y
mdadm: array /dev/md0 started.
[root@bigboy tmp]#
For RAID 10 the command would instead be:
mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1
and for RAID 1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
Confirm RAID Is Correctly Initialized
The /proc/mdstat file provides the current status of all RAID devices. Confirm that the initialization is finished by inspecting the file and making sure that there are no initialization related messages. If there are, then wait until there are none.
[root@bigboy tmp]# cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdg1[2] hde1[1] hdf2[0]
4120448 blocks level 5, 32k chunk, algorithm 3 [3/3] [UUU]
unused devices: <none>
[root@bigboy tmp]#
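If the file still shows a resync or recovery in progress, you can watch it refresh every couple of seconds, or simply block until the rebuild finishes; a small sketch:
[root@bigboy tmp]# watch -n 2 cat /proc/mdstat
[root@bigboy tmp]# mdadm --wait /dev/md0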
Notice that the new RAID device is called /dev/md0. This information will be required for the next step.
Format The New RAID Set
Your new RAID partition now has to be formatted. The mkfs.ext3 command is used to do this.
[root@bigboy tmp]# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
36144 inodes, 144192 blocks
7209 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
18 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@bigboy tmp]#
Create the mdadm.conf Configuration File
Your system doesn't automatically remember all the component partitions of your RAID set. This information has to be kept in the mdadm.conf file. The formatting can be tricky, but fortunately the output of the mdadm --detail --scan --verbose command provides you with it. Here we see the output sent to the screen.
[root@bigboy tmp]# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=3
UUID=77b695c4:32e5dd46:63dd7d16:17696e09
devices=/dev/hde1,/dev/hdf2,/dev/hdg1
[root@bigboy tmp]#
Here we redirect that same output to create the configuration file.
[root@bigboy tmp]# mdadm --detail --scan --verbose > /etc/mdadm.conf
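For illustration, the resulting /etc/mdadm.conf might end up looking something like this once you add an optional DEVICE line and a MAILADDR line for failure alerts (the UUID shown is just the sample value from above):
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid5 num-devices=3
   UUID=77b695c4:32e5dd46:63dd7d16:17696e09
   devices=/dev/hde1,/dev/hdf2,/dev/hdg1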
Create A Mount Point For The RAID Set
The next step is to create a mount point for /dev/md0. In this case we'll create one called /mnt/raid:
[root@bigboy mnt]# mkdir /mnt/raid
Edit The /etc/fstab File
The /etc/fstab file lists all the partitions that need to be mounted when the system boots. Add an entry for the RAID set, the /dev/md0 device:
/dev/md0 /mnt/raid ext3 defaults 1 2
Do not use labels in the /etc/fstab file for RAID devices; use the real device name, such as /dev/md0. In older Linux versions, the /etc/rc.d/rc.sysinit script would check the /etc/fstab file for device entries that matched RAID set names listed in the now unused /etc/raidtab configuration file. If it didn't find a match, it would not automatically start the RAID driver for that RAID set, and device mounting would then occur later in the boot process. Mounting a RAID device whose driver has not been loaded can corrupt your data and produce errors like this:
Starting up RAID devices: md0(skipped)
Checking filesystems
/raiddata: Superblock has a bad ext3 journal(inode8)
CLEARED.
***journal has been deleted - file system is now ext 2 only***
/raiddata: The filesystem size (according to the superblock) is 2688072 blocks.
The physical size of the device is 8960245 blocks.
Either the superblock or the partition table is likely to be corrupt!
/boot: clean, 41/26104 files, 12755/104391 blocks
/raiddata: UNEXPECTED INCONSISTENCY; Run fsck manually (ie without -a or -p options).
If you are not familiar with the /etc/fstab file, use the man fstab command to get a comprehensive explanation of each data column it contains.
The /dev/hde1, /dev/hdf2, and /dev/hdg1 partitions have been replaced by the combined /dev/md0 device, so you don't want the old partitions to be mounted again. Make sure that all references to them in this file are commented out with a # at the beginning of the line or deleted entirely:
#/dev/hde1 /data1 ext3 defaults 1 2
#/dev/hdf2 /data2 ext3 defaults 1 2
#/dev/hdg1 /data3 ext3 defaults 1 2
Mount The New RAID Set
Use the mount command to mount the RAID set. You have your choice of methods:
- The mount command's -a flag causes Linux to mount all the devices in the /etc/fstab file that have automounting enabled (default) and that are also not already mounted.
- You can also mount the device manually, as in the example below.
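For example, either of the following would mount the new array here:
[root@bigboy tmp]# mount -a
[root@bigboy tmp]# mount /dev/md0 /mnt/raid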
Check The Status Of The New RAID
The /proc/mdstat file provides the current status of all RAID devices. The array is already active after the mdadm --create step, so there is no need to start it manually; just inspect the file:
[root@bigboy tmp]# cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdg1[2] hde1[1] hdf2[0]
4120448 blocks level 5, 32k chunk, algorithm 3 [3/3] [UUU]
unused devices: <none>
[root@bigboy tmp]#
Software RAID Howto
Red Hat Enterprise Linux 3 doesn't come with a good guide on how to install and manage a RHEL3 system on a pair of mirrored disks using software RAID, so here is mine. This guide should work equally well for the RHEL clones, e.g. White Box Linux, CentOS, Tao Linux, and so on.
Installing RHEL
The hardware I installed onto was a Pentium 4 machine with two 80GB Maxtor IDE hard disks and 1GB of RAM. I booted RHEL off disc 1 and started working through the installer.
At the point where disk partitioning takes place, I chose Disk Druid (instead of fdisk or automatic partitioning) to partition the disks. I created two 100MB software RAID primary partitions, one on each disk, two 512MB Linux swap partitions, and two 79GB partitions filling the rest of each disk. I made the two 100MB partitions a single RAID 1 device mounted on /boot, and the other two a RAID 1 device mounted on /. The rest of the install proceeds as normal.
When the machine reboots into RHEL, it will have working software RAID; however, the boot loader will only be installed on the first disk (/dev/hda). To install it on the second disk (/dev/hdc), we need to run grub.
$ grub
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2
/grub/grub.conf"... succeeded
Done.
In some cases you may run into the following error:
grub> root (hd1,0)
root (hd1,0)
Filesystem type unknown, partition type 0xfd
grub> setup (hd1)
setup (hd1) Error 1: Cannot mount selected partition
grub>
This error occurs because the order of the boot partition has changed. Look in /boot/grub/grub.conf to see which partition holds /boot, then re-run the commands with the corrected partition number:
$ grub
grub> root (hd0,1)
grub> setup (hd0)
grub> root (hd1,1)
grub> setup (hd1)
grub> quit
The next thing to do is to take a backup of the partition table on the disk; you will need it when restoring onto a replacement disk. You can get it by running fdisk and picking option 'p' (print the partition table). Here is the table for /dev/hda:
/dev/hda
Device Boot Start End Blocks Id System
/dev/hda1 * 1 203 102280+ fd Linux raid autodetect
/dev/hda2 204 1243 524160 82 Linux swap
/dev/hda3 1244 158816 79416792 fd Linux raid autodetect
How to remove a RAID array
Suppose we have two disks, sdc and sdd, running RAID 1 (the RAID device is /dev/md4), and we no longer want RAID configured on these two disks. Proceed as follows:
- First, mark the sdd member as failed with:
mdadm /dev/md4 --fail /dev/sdd1
- Then remove sdd1 from the array:
mdadm /dev/md4 --remove /dev/sdd1
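To dismantle the array completely rather than just dropping one member, you would also stop the md device and wipe the RAID superblocks from its members; a sketch for the same /dev/md4 example (unmount it first if it is mounted):
umount /dev/md4
mdadm --stop /dev/md4
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1
Finally, remove any corresponding entries from /etc/mdadm.conf and /etc/fstab so the array is not assembled or mounted at the next boot.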
Monitoring the RAID array
This is for my setup with two disks, /dev/hda and /dev/hdc, both holding identical data.
$ cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
Event: 1
md0 : active raid1 hda2[0] hdc2[1]
119925120 blocks [2/2] [UU]
...
This will give the status of the RAID array. If both disks are operating, it looks like this:
md0 : active raid1 hdc3[1] hda3[0]
If it's broken and only one disk is operating, it looks like this:
md0 : active raid1 hdc3[1]
If it's rebuilding onto a replaced disk, it looks like this:
md0 : active raid1 hda3[2] hdc3[1]
....
[.>.........] recovery = 3% (.../...) finish=128min speed=10000k/sec
...
More information comes from
mdadm --query --detail /dev/md0
.... lots of stuff ...
Number Major Minor RaidDevice State
0 0 0 0 faulty, removed
1 22 3 1 active sync /dev/hdc3
This tells us that device 0 is missing - device 1 is working fine.
In theory the mdmonitor service (mdadm running in --monitor mode) can watch the array and notify you by email when a disk fails, provided a MAILADDR line is set in /etc/mdadm.conf.
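A minimal sketch of setting that up by hand; sending alerts to root is just an assumption, and on RHEL/Fedora the mdmonitor init script does essentially the same thing:
echo "MAILADDR root" >> /etc/mdadm.conf
mdadm --monitor --scan --daemonise --delay=60
The monitor reads the arrays from /etc/mdadm.conf (--scan), runs in the background (--daemonise), and re-checks every 60 seconds.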
How to restore from a broken raid array
In this case /dev/hda has failed and I'm inserting a replacement disk. I start by rebooting the machine from the first install CD and entering rescue mode by typing 'linux rescue' at the CD's boot prompt.
Do not mount any disks, or set up the network. You will be dropped into a command prompt.
Partition the new disk with the same partition table as the old disk. It is very important to make sure you partition the correct disk; you may wish to unplug the working disk during this step to guard against user error. (An sfdisk shortcut is shown after the keystrokes below.)
$ fdisk /dev/hda
n (new)
p 1 (partition #1)
1 203 (start and end cylinders)
t 1 fd (set the partition type to linux raid)
n
p 2
204 1243
t 2 82 (set the partition type to linux swap)
n
p 3
1244 158816
t 3 fd (set the partition type to linux raid)
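As an alternative to typing the table in by hand, you can copy the partition layout from the surviving disk; a sketch, assuming /dev/hdc is the good disk and /dev/hda is the blank replacement:
$ sfdisk -d /dev/hdc | sfdisk /dev/hda
$ fdisk -l /dev/hda
The first command dumps /dev/hdc's partition table and writes it to /dev/hda; the second lets you verify the result.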
I then boot the machine from its working disk, add the replacement disk into the array, and trigger the rebuild.
mdadm --manage /dev/md0 --add /dev/hda3
mdadm --manage /dev/md1 --add /dev/hda1
The new disk has no boot sector; that isn't covered by the RAID array, so we need to write it back to the disk as before.
$ grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
If you run into the 'Error 1: Cannot mount selected partition' message again at this point, the boot partition's position has changed; check /boot/grub/grub.conf for the correct partition number and re-run the root and setup commands with it, as described in the earlier grub section.
Afterwards, verify the setup by pulling out each disk in turn and checking that the system still boots.
Notes
It's entirely possible to do the recovery by booting from the working disk rather than from a rescue CD, but this increases the chance of accidentally destroying all your data. I'd recommend not doing that until you can perform a recovery with a CD without referencing this guide at any point.
Conclusion
Linux software RAID provides redundancy across partitions and hard disks, but it tends to be slower and less reliable than RAID provided by a hardware-based RAID disk controller.
Hardware RAID configuration is usually done via the system BIOS when the server boots up and, once configured, is absolutely transparent to Linux. Unlike software RAID, hardware RAID requires entire disks to be dedicated to the purpose; combined with the fact that it usually requires faster SCSI hard disks and an additional controller card, it tends to be expensive.
Remember to take these factors into consideration when determining the right solution for your needs and research the topic thoroughly before proceeding. Weighing cost versus reliability is always a difficult choice in systems administration.
In theory, if a RAID 5 array loses more than one disk, the data is as good as gone. But as the old saying goes, "where there's life, there's hope," so the following approach may still be able to help you.
Note: I take no responsibility whatsoever for any data loss.
This guide assumes that two disks of a RAID 5 set have failed. As an example, suppose the RAID 5 system consists of the following devices: /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and the two failed disks are /dev/sdc and /dev/sdd.
Step 1: You will need at least one new hard disk to take the place of one of the failed drives.
Step 2: Use the new disk to make a clone of one of the failed disks. Which failed disk should you clone? Assume whichever one appears to be the least damaged and use that. In this case we copy the entire /dev/sdc disk onto the new drive, then put the new drive in place of the failed one.
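A sketch of the cloning step, assuming the replacement drive shows up as /dev/sdf (a hypothetical name) and /dev/sdc is the least-damaged failed disk; the conv=noerror,sync options keep dd going past unreadable sectors:
dd if=/dev/sdc of=/dev/sdf bs=64k conv=noerror,sync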
Step 3: Gather the RAID 5 parameters by running the following command:
mdadm -E /dev/sdb1
Step 4: Re-create the RAID array in degraded mode. To do this you need to know the following:
* XXX - the number of disks in the RAID set.
* YYY - the RAID chunk size (obtained in Step 3).
* ZZZ - the RAID layout (obtained in Step 3).
* Which member is missing (in this case /dev/sdd1).
Run the following command:
mdadm --create /dev/md0 -n XXX -c YYY -l 5 -p ZZZ --assume-clean /dev/sda1 /dev/sdb1 /dev/sdc1 missing /dev/sde1
You now have the RAID array running in degraded mode; go ahead and recover your data, or replace the missing member with a new disk and rebuild the array.
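Before trusting the degraded array, it is worth checking the filesystem without writing to it and mounting it read-only first; a sketch assuming an ext3 filesystem on /dev/md0 and an example mount point of /mnt/recovery:
fsck.ext3 -n /dev/md0
mkdir -p /mnt/recovery
mount -o ro /dev/md0 /mnt/recovery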
For reference, see the English-language article at: http://wiki.centos.org/TipsAndTricks..._RAID5_Volumes