Put your pros and cons for SoftwareRaid vs HardwareRaid here.

One person's opinion:

<yem> gotta rebuild this mail server and i have now sworn off kernelraid
<jsr> Ah, then ebay have some 2-channel units for, like, US$50
<Remosi> why?
<yem> Remosi: its nice if you don't want to spend any money. but it can be a pain to sort out the lilo issues if you have to replace a drive or shift the drives to another box
<Remosi> ahh excellent point
  • Remosi goes and wiki's the answer
<yem> the existing mailserver has hda and hdc in raid1. when i move those drives to another box (same channels, etc) it wont boot
<Remosi> hmm
<Remosi> I wonder why
<yem> yeah
<yem> nice while it works..
<Remosi> yah
<yem> one server here is going on 18 months old on kernelraid
<yem> it has kept the system up under disc failure at least once
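The boot failure described in the chat is usually a bootloader problem, not a RAID problem: the array itself can be reassembled on the new box, but lilo's map of where the kernel lives no longer matches. A hedged sketch of the recovery (device names `/dev/hda1`, `/dev/hdc1`, and `/dev/md0` are assumptions, matching the setup in the chat):

```shell
# Reassemble the existing RAID1 mirror on the new box.
# The partition names here are assumptions; adjust to your hardware.
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1

# Mount the array and chroot into it so the bootloader
# can be rewritten against the new disk layout.
mount /dev/md0 /mnt
chroot /mnt lilo    # or grub-install, depending on the distro
```

These commands need root and real block devices, so treat this purely as an illustration of the steps, not a tested procedure.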

GlennRamsey's opinion:

On my home office box the backups have become too large to write to CD-RWs, so I installed a second hard drive and faubackup. Faubackup runs as a daily (ana)cron job and this works OK, except that the machine becomes sluggish while the backup is running. Hence I'm thinking about using RAID 1 instead. Browsing through the HOWTOs and docs on the web, it seems that software RAID is a bit tricky to set up and is something you should only do if you know exactly what you are doing. Given that a RAID card is quite cheap now, software RAID would seem a poor choice for the user who has better things to do than wrestle with its configuration.
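For what it's worth, the basic setup is shorter than the HOWTOs make it look. A hedged sketch of creating a RAID 1 mirror with mdadm (the partition names and mount point are placeholders, not anything from this box):

```shell
# Create a two-disk RAID1 mirror; /dev/hdb1 and /dev/hdd1 are
# placeholder partition names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1

# Put an ext3 filesystem on the array and mount it.
mke2fs -j /dev/md0
mount /dev/md0 /backup

# The initial mirror sync runs in the background; watch it here.
cat /proc/mdstat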

GerwinVanDeSteeg's opinion:

Finding the correct RAID card for your distribution can also be tricky. Software RAID is not too difficult to manage, depending on your choice of distribution. Some of the more "user friendly" distros (read: RedHat, Fedora) come with an install process capable of installing the machine onto SoftwareRaid, ensuring there is no fuss whatsoever. I've set up many systems with SoftwareRaid and HardwareRaid and found that if you've got the wrong card or driver, things don't always work the way you want.

DanielLawson's opinion:

I've got servers running software and hardware IDE RAID. I haven't spent a lot of time benchmarking them, but I've heard a number of reports suggesting that most hardware IDE RAID 0 or RAID 1 implementations are basically crap. A decent RAID 0 or RAID 1 card still cost a fair bit last time I looked, and doesn't necessarily offer many benefits (especially if it has binary drivers, or only has drivers for some kernels, etc). On the other side of things, I also have a server with a 4 port 3Ware SATA RAID0/1/5 card in it, running a 360 GB RAID5 array. It was very painless to set up, and performance is good - as you'd expect. I've not tested software RAID 5 to compare, however. If you can afford to drop a thousand dollars on a raid controller, then you get what you pay for :)

As of Jun 2005, there are basically no "true" two port hardware IDE/SATA raid solutions. The only cards that fit this bill are the 3ware 7006-2 PATA adapter (old) and the 8006-2 SATA adapter. If you want two channel RAID 0 or RAID 1, you have to get a so-called fakeraid card. These rely on software (in the form of a kernel module) to perform the RAID. There is some good information around on the state of SATA RAID, in particular looking at Linux support.

3ware still seems to be the preferred vendor for PCI based hardware SATA raid, although Areca has some interesting offerings.

LawrenceDoliveiro's opinion:

Out of half a dozen client machines I've dealt with, each with a different RAID controller in it, not one has been an entirely trouble-free experience. The best was a Dell with a PERC 3/Di controller (supported by the aacraid Linux module). Dell provided a utility called afacli, which allowed you to examine the health of the array, reconfigure disks, etc. Another Dell with a PERC 4/Di controller (supported by the megaraid Linux module) has been working, but as far as I know there is no good management utility comparable to afacli. An older machine with a Promise FastTrak? TX2000 controller gave us some interesting trouble: SuSE Linux stopped including support for this controller, but I was able to download "partial" sources from the Promise website and build them for an updated kernel. This seemed to work at first, but then it would crash every few days.

Conclusion: stick to software RAID. It works the same on all hardware configurations and all Linux distros. The mdadm(8)? utility is powerful and includes a range of functions for monitoring the health of your array, reconfiguring disks and so on. You have no redundancy in RAID if you can't tell whether your disks are OK or not.
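A hedged sketch of the monitoring functions mentioned above (the array name and mail address are placeholders):

```shell
# Inspect the health of one array in detail.
mdadm --detail /dev/md0

# Quick overview of all arrays and any rebuild in progress.
cat /proc/mdstat

# Run mdadm as a monitoring daemon that mails on failure events;
# root@localhost is a placeholder address.
mdadm --monitor --scan --mail=root@localhost --daemonise
```

These need root and an existing array, so again this is an illustration of what mdadm offers rather than something to paste in verbatim.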

Oh yes, and there's also this reason.

See also: Software, Hardware, RaidOnLinux


Part of CategorySystemAdministration