Linux SoftwareRaid is a feature of the LinuxKernel that allows [RAID] to be performed in [Software] rather than in [Hardware]. See SoftwareRaidVsHardwareRaid for more details.

If you want to set up SoftwareRaid on a Linux machine you should first read the [Software RAID HOWTO | http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html], which is an excellent [introduction to RAID|http://www.icc-usa.com/store/pc/viewContent.asp?idpage=7] in general. Or, if you want to spend a minute instead of an hour reading, read RaidNotes.

!!! How To Set Up a machine using Software RAID-1

Although I set this machine up with Debian, I am pretty sure that most of the steps listed below will be distribution independent. We will start from the basic hardware that doesn't do anything useful and progress until we have a machine that boots up into a full working Debian install off the RAID array.

The machine that I used for this configuration has an Intel S845WD1-E Motherboard with two on-board [IDE] channels plus an extra two channels controlled by a Promise PDC20267 RAID chipset (commonly known as a Promise FastTrak100). Unfortunately Promise is not clever enough to release open source drivers for the RAID portion of this chipset, so it is only usable as an IDE controller under Linux. The rest of the machine is configured as follows: there is one 40GB Seagate IDE disk on each of the FastTrak's channels. We want to use these two disks to create a RAID-1 array for redundant storage.

Assuming that the physical installation has been completed correctly (a single disk on each IDE channel, 80-conductor IDE cables, etc.), the next problem we face is that the Debian installer (and all other distributions' installers?) cannot install directly onto a RAID array, as the standard kernels do not have software RAID support included. To get around this problem I used the first method described in the [Software RAID HOWTO | http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html]: placing a 3rd disk on the first on-board IDE channel and installing a basic Debian install onto that.

!Install Debian onto a different disk from the ones in your RAID array (on the first on-board IDE controller)

!Plain IDE support for the Promise PDC20267 is only stable in 2.4.21, so I downloaded the latest kernel and compiled it with the following options built into the main kernel (NOT as modules) - a sample .config excerpt follows the list
** CONFIG_MD
** CONFIG_BLK_DEV_MD
** CONFIG_MD_RAID1
** CONFIG_BLK_DEV_LVM (Probably not needed)
** General IDE Support
** CONFIG_BLK_DEV_OFFBOARD
** CONFIG_BLK_DEV_PDC202XX_OLD
** CONFIG_PDC202XX_BURST
** CONFIG_PDC202XX_FORCE
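
For reference, once the kernel is configured the relevant lines in the 2.4 .config end up looking roughly like this. This is a sketch only - exact option names differ between kernel versions, and "General IDE Support" corresponds to the CONFIG_IDE / CONFIG_BLK_DEV_IDE entries:
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y
CONFIG_BLK_DEV_LVM=y
CONFIG_IDE=y
CONFIG_BLK_DEV_IDE=y
CONFIG_BLK_DEV_OFFBOARD=y
CONFIG_BLK_DEV_PDC202XX_OLD=y
CONFIG_PDC202XX_BURST=y
CONFIG_PDC202XX_FORCE=y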

!After rebooting you should cat /proc/mdstat and check that the necessary RAID personalities are listed. In the case of the above the output would be
mgmt:/usr/src/linux-2.4.21# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors

!Partition the disks - Both disks need to be identically partitioned - I chose the following partition layout

/ 500 MB
swap 1024 MB
/usr 5GB
/var 5GB
/tmp 2GB
/home ~25GB (The Rest)

Which results in a partition table that should look like this.

mgmt:~# fdisk -l /dev/hde

Disk /dev/hde: 255 heads, 63 sectors, 4865 cylinders
Units = cylinders of 16065 * 512 bytes

Device Boot Start End Blocks Id System
/dev/hde1 1 61 489951 fd Linux raid autodetect
/dev/hde2 62 185 996030 82 Linux swap
/dev/hde3 186 4865 37592100 5 Extended
/dev/hde5 186 793 4883728+ fd Linux raid autodetect
/dev/hde6 794 1401 4883728+ fd Linux raid autodetect
/dev/hde7 1402 1644 1951866 fd Linux raid autodetect
/dev/hde8 1645 4865 25872651 fd Linux raid autodetect

Notice that the type of all the partitions (except the swap partition) is set to Linux raid autodetect (0xFD) - this is important.

! Verify that your partition table is the same on both disks

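One quick way to check this is to compare sfdisk dumps of the two disks. A sketch, assuming the disks are /dev/hde and /dev/hdg as above:
sfdisk -d /dev/hde > hde.layout
sfdisk -d /dev/hdg > hdg.layout
diff hde.layout hdg.layout
If the second disk hasn't been partitioned yet you can copy the layout across with "sfdisk /dev/hdg < hde.layout" - double-check the device name first, as this overwrites the target's partition table.
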
! Create the raid partitions
I am creating 5 separate RAID-1 partitions for /, /usr, /var, /tmp, and /home on my machine. Each of these RAID partitions is mirrored on both disks. To create these partitions execute the following commands.

mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/hde1 /dev/hdg1
mdadm --create /dev/md1 --level=1 --raid-disks=2 /dev/hde5 /dev/hdg5
mdadm --create /dev/md2 --level=1 --raid-disks=2 /dev/hde6 /dev/hdg6
mdadm --create /dev/md3 --level=1 --raid-disks=2 /dev/hde7 /dev/hdg7
mdadm --create /dev/md4 --level=1 --raid-disks=2 /dev/hde8 /dev/hdg8

Notice that we specify the SoftwareRaid device that the RAID partition should be located at, the RAID level for the partition, the number of disks in the array and the raw disk partitions to include in the array.

You should cat /proc/mdstat now and you will see the 5 RAID partitions listed. You will also notice that they are currently being initialised. Each one will have a progress bar and an ETA for the time at which the construction will be finished. This process happens transparently and you can use the RAID partition while it is being constructed.
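
For reference, an array that is still being constructed shows up in /proc/mdstat along these lines (the percentage, time and speed here are purely illustrative):
md4 : active raid1 hdg8[1] hde8[0]
      25872576 blocks [2/2] [UU]
      [==>..................]  resync = 12.4% (3215360/25872576) finish=18.4min speed=20480K/sec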

!Format The Partitions
While we are waiting for the kernel to finish constructing the RAID partitions we will go ahead and format the partitions so that they are ready to use. I am using Ext3 for all of the partitions. The RAID device can be formatted in the same way that you format any other block device. If you don't understand what the commands below do then you probably need to learn about them before you consider doing much more with your RAID array.
mke2fs -j /dev/md0
mkswap /dev/hde2
mkswap /dev/hdg2
mke2fs -j /dev/md1
mke2fs -j /dev/md2
mke2fs -j /dev/md3
mke2fs -j /dev/md4

!Reboot when the arrays are constructed
While you could start transferring data onto the new RAID partitions as soon as they are formatted, I like to reboot first to ensure that they are all detected correctly before transferring any data onto them, in case I've done something wrong. If you are impatient you can skip this step and move straight on to copying the data onto the partitions.

Before you actually reboot you should make sure that the kernel has finished constructing each RAID array. It will not harm the array if you reboot before it has finished - it will just start again when you reboot - but you might as well let it finish so that you can start putting data on when you reboot. When the array has been initialised the /proc/mdstat output will look like this
md0 : active raid1 hde1[1] hdg1[0]
489856 blocks [2/2] [UU]

You might notice that we haven't defined the configuration of our RAID arrays in any files yet; we simply issued the 5 commands above, and yet when we reboot they are magically there! Information about each RAID array is stored in a superblock on each member partition, which allows the kernel to automatically locate and assemble the portions of the array as it boots. This allows us to do some really cool stuff as you'll soon see.
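
If you want to see this superblock information for yourself, mdadm can print it directly (device names as above):
mdadm --examine /dev/hde1
mdadm --detail /dev/md0
--examine reads the RAID superblock from a member partition, while --detail reports on an assembled array.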

! Mount the Raid Arrays
To copy the filesystem from our temporary disk over onto the RAID array we mount the various partitions at different places inside /mnt
mount /dev/md0 /mnt
mkdir /mnt/usr
mkdir /mnt/home
mkdir /mnt/var
mkdir /mnt/tmp
mount /dev/md1 /mnt/usr
mount /dev/md2 /mnt/var
mount /dev/md3 /mnt/tmp
mount /dev/md4 /mnt/home
And then copy the filesystem over
cp -ax / /mnt

Make sure you update /etc/fstab on the new root filesystem (i.e. /mnt/etc/fstab after the copy) so that the various portions of the filesystem are mounted correctly!
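
As an illustration only (same device names and mount points as above), the relevant entries in /mnt/etc/fstab might look something like this - note that the swap device names will change to /dev/hda2 and /dev/hdc2 once ide=reverse is enabled in the final tidy-ups below:
/dev/md0    /      ext3    defaults    0 1
/dev/md1    /usr   ext3    defaults    0 2
/dev/md2    /var   ext3    defaults    0 2
/dev/md3    /tmp   ext3    defaults    0 2
/dev/md4    /home  ext3    defaults    0 2
/dev/hde2   none   swap    sw,pri=1    0 0
/dev/hdg2   none   swap    sw,pri=1    0 0
Giving both swap partitions the same priority lets the kernel interleave swap across the two disks; plain "sw" works fine too.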

! Make the system bootable
Make sure the /etc/lilo.conf on your new root filesystem (i.e. /mnt/etc/lilo.conf) contains the following lines (as well as a kernel stanza pointing to your kernel with SoftwareRaid support)
disk=/dev/hde
bios=0x80
boot=/dev/hde
root=/dev/md0
This tells lilo to install the boot record in the MBR of /dev/hde (the first disk in the RAID array) and that the root filesystem will be located at /dev/md0 (the first RAID array)
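
For completeness, a kernel stanza in that lilo.conf might look something like the following. This is a sketch only - the image path and label are examples and depend on where you installed your RAID-enabled kernel, and the root device is already set globally above:
image=/boot/vmlinuz-2.4.21
        label=Linux
        read-only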

To install this boot block you need to run
# chroot /mnt lilo -v

! Reboot
While you are turning the machine off to reboot, remove the extra drive that you used for the initial install and make sure that your BIOS is correctly configured to boot off the Promise Controller.

On the Intel Motherboard that I was using this required setting the BIOS boot order to Harddisk First and then setting the HardDisk type to FT Something... (your motherboard is most probably different)
We also had to define a single Array of type Span on the first disk only.

! Final Tidy-ups

When you get back into Linux you will note that the disk you booted off has now become hda - and your arrays are still working! This is because the array information is stored inside the superblock on the physical drive, so regardless of where it sits logically in your system Linux can find the array information and set up your arrays.

Having an array consisting of parts from /dev/hda and parts from /dev/hde is not very nice. This is why we enabled the "Boot Off Board Chipsets First" option in the kernel earlier. Together with the ide=reverse boot parameter below, this tells Linux to probe the off-board IDE controller (the Promise) ahead of the on-board ones, so its disks become the first IDE devices. To enable this, edit /etc/lilo.conf again so that it contains the following
disk=/dev/hda
bios=0x80
boot=/dev/hda
And make sure that the following line exists in every kernel stanza that you want to boot from
append="ide=reverse"
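
Putting the pieces together, the boot-related part of /etc/lilo.conf on the finished system might look roughly like this. Again, a sketch only - the image path and label are examples, not something this walkthrough prescribes:
disk=/dev/hda
bios=0x80
boot=/dev/hda
root=/dev/md0

image=/boot/vmlinuz-2.4.21
        label=Linux
        read-only
        append="ide=reverse"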

Run lilo to install the new MBR, reboot, make sure your swap is enabled on the correct devices, and you're away. You can now use this system as you would any other!

The final output of /proc/mdstat on this system is as follows.
mgmt:/usr/src/linux-2.4.21# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
489856 blocks [2/2] [UU]

md1 : active raid1 hdc5[1] hda5[0]
4883648 blocks [2/2] [UU]

md2 : active raid1 hdc6[1] hda6[0]
4883648 blocks [2/2] [UU]

md3 : active raid1 hdc7[1] hda7[0]
1951744 blocks [2/2] [UU]

md4 : active raid1 hdc8[1] hda8[0]
25872576 blocks [2/2] [UU]

unused devices: <none>

! Debian initrd kernel support
You can also do root-on-RAID1 using a recent stock Debian kernel without compiling your own custom kernel. To do so, use mkinitrd to create a custom initrd image with the necessary RAID1 modules.

This step should happen during the __Make the system bootable__ section above, before running "chroot /mnt lilo -v".

Modify "/etc/mkinitrd/modules" to contain the following lines:
raid1
md
ext2
ext3
ide-disk
ide-probe-mod
ide-mod

This tells mkinitrd to include those kernel modules in the initrd so they can be loaded at boot time. Also, add the following to "/etc/mkinitrd/files":
/dev/md0

Change the line in "/etc/mkinitrd/mkinitrd.conf" that reads
ROOT=probe
to read
ROOT=/dev/md0

Now, create a new initrd image.
# mkinitrd -o /mnt/initrd-raid1.img

You now have an initrd image that supports your RAID setup. You will need to edit "/mnt/etc/lilo.conf" and include the following in each stanza you intend to boot into.
initrd=/initrd-raid1.img

Finally, install the new lilo configuration
# chroot /mnt lilo -v
and continue with the rest of the instructions.

! Recent Debian Kernels do this Automatically

The most recent versions (e.g. [LinuxKernel2.6]) of the Debian kernel-image packages build a new initrd image upon installation. They should automatically notice if the root device is /dev/md* and arrange for the appropriate modules to be present in the initrd image and loaded appropriately. So if the software RAID array is actually your root filesystem when you do the kernel install, everything should just work.
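
If a kernel-image package doesn't seem to build an initrd for you, it is worth checking /etc/kernel-img.conf - this is general Debian kernel-install configuration rather than anything RAID-specific, mentioned here only as a pointer:
do_initrd = Yes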

! Udev and Software RAID

While trying to do software RAID on an Ubuntu machine, I ran into the problem of my /dev/md0 device spontaneously disappearing at the most inconvenient times (like when trying to boot the machine). To fix this, I added the following line to /etc/udev/links.conf:

M md0 b 9 0

And then restarted udev. This should apply to Debian Sarge too.
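
That line asks udev to always create a block device node called md0 with major 9, minor 0 (major 9 is the md driver). Restarting udev on a Debian or Ubuntu system of that era is along the lines of:
/etc/init.d/udev restart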

----
CategoryOperatingSystem CategorySystemAdministration

See Also: RaidOnLinux
