* Lots of wasted disk space (half the raw capacity is lost to mirroring).
* If two disks in opposing stripe sets die, you lose the entire array, whereas RAID 1+0 only loses the array when both disks of the same mirror pair die, which is far less probable (see the sketch below).
[http://www.raidarray.eu.com/raid0+1.html]
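
To put a number on "far less probable", here is a minimal Python sketch (a toy model, not from the original page) comparing the chance that a second random disk failure destroys each array, assuming 2*n disks in total and independent, equally likely failures:

 # Toy model: 2*n disks, independent and equally likely failures.
 # The n values below are illustrative assumptions only.
 def p_second_failure_fatal_raid01(n):
     # RAID 0+1: the first failure kills one whole stripe set, so all
     # n disks in the surviving set become single points of failure.
     return n / (2 * n - 1)

 def p_second_failure_fatal_raid10(n):
     # RAID 1+0: only the failed disk's mirror partner is fatal.
     return 1 / (2 * n - 1)

 for n in (2, 4, 8):
     print(n, p_second_failure_fatal_raid01(n), p_second_failure_fatal_raid10(n))

For four drives a side (n = 4), the second failure is fatal 4/7 of the time under RAID 0+1 but only 1/7 of the time under RAID 1+0; in general 0+1 is n times more fragile at that point.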

Visual explanation of various RAID setups:
[http://fun.sdinet.de/pics/raid.jpg]
----
One suggested way of calculating the stripe size for RAID systems that are doing a lot of random I/O (machines serving multiple users, e.g. email or compute servers) is to figure out the maximum throughput you can get through your disks (including controllers, PCI bus bandwidth etc), then plug it into this formula:
stripesize = throughput / (drives * RPM/60)
then round down the stripesize to the nearest multiple of your filesystem cluster size (usually 4k).
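
As a concrete illustration of the formula, here is a minimal Python sketch; the example figures (100 MB/s aggregate throughput, 4 drives at 7200 RPM, 4k clusters) are assumed for illustration, not measured or recommended values:

 # stripesize = throughput / (drives * RPM/60), rounded down to a
 # multiple of the filesystem cluster size.
 def stripe_size(throughput, drives, rpm, cluster=4096):
     # drives * rpm/60 is roughly the number of random I/Os the
     # array can service per second (one per revolution per drive).
     raw = throughput / (drives * rpm / 60.0)
     # Round down to the nearest multiple of the cluster size.
     return int(raw // cluster) * cluster

 # e.g. 100 MB/s total throughput across 4 drives at 7200 RPM:
 print(stripe_size(100 * 1024 * 1024, 4, 7200))  # 217088 bytes (212k)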
Suggestions for improving this estimate of the optimal stripe size are solicited.
