Dailydave mailing list archives

Re: The Small Company's Guide to Hard Drive Failure and Linux


From: "Anthony.zboralski" <bcs2005 () bellua com>
Date: Fri, 19 Nov 2004 01:51:41 +0700

Frank Berger said:

Using RAID-1 in a software RAID configuration is also fine most of the
time. Normally you do not see much more CPU load doing RAID-1 in
software...

CPU overhead has never really been the argument against software RAID-1. Even
with RAID-5, where it matters more, it's a forced argument at best. The
advantage of RAID-1 in hardware is that you are reducing the traffic on the
bus. In a two-disk hardware RAID-1, you send a single packet (block) of data
across your system bus and the controller replicates that block out to the
disks. With software RAID-1, you have two blocks flying across the bus for
every write (e.g., writing 50 MB/s to a two-disk software mirror pushes
roughly 100 MB/s across the bus). Maybe system bus saturation isn't important
to you, in which case the point is moot.

As was pointed out earlier (but perhaps not with enough forcefulness),
nearly all "hardware" RAID controllers for ATA (IDE) are a lie. Unless you
are buying something high quality, what you are really getting is firmware
RAID-- software RAID on a chip. Who do you trust to write a better
implementation of RAID: Neil Brown, who benefits from peer review, or some
anonymous software engineer at a motherboard manufacturer? On top of that,
these same vendors (many of whom provide the integrated motherboard
controllers) tend to ship very badly written drivers that hook into the
SCSI layer-- making your array crawl.

On the other hand, software RAID-1 is fast on Linux, and if one side of your
mirror dies you can bring up the remaining disks as standalone devices
(sans RAID)-- just mount the partitions normally. Presumably you aren't
worried about system bus saturation in that case, which I suspect 99 out of
100 people are not.
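
For example, with the old-style md superblock (which lives at the end of the
partition) a surviving mirror half can be mounted on its own; the device and
mount point below are only placeholders:

  # one surviving member of a RAID-1 mirror, mounted read-only to be safe;
  # the filesystem starts at the normal offset, so no md layer is needed
  mount -o ro /dev/hdb1 /mnt/recovered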

Shameless plug (though it's getting a bit outdated):
http://www.oreilly.com/catalog/mraidlinux/index.html

Also let me second the endorsement of Pilosoft. They have been very
helpful through several power supply failures.

Make sure you stay away from "hardware RAID": most of the implementations don't even support RAID5, the performance is really poor (around 15 MB/s against 100+ MB/s with software RAID), and you're stuck with a vendor with poor support.

There is a benchmark somewhere on Google in which Linux software RAID comes
out clearly ahead of most other implementations (*BSD, hardware, etc.).

I am using a 350 GB Linux 2.6.9 software RAID5 array (4x120 GB + 1 spare) and I am really happy with it. The setup was done using IBM's volume manager, EVMS (http://evms.sourceforge.net); it has a really nice interface and documentation.
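
I did the setup through EVMS, but for reference the same array with plain
mdadm would look something like this (device names are only an example):

  # 4 active disks + 1 hot spare in a RAID5 array on /dev/md0
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
        /dev/hda1 /dev/hdb1 /dev/hdc1 /dev/hdd1 /dev/hde1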

root@dis:/home/acz# hdparm -t /dev/md0

/dev/md0:
 Timing buffered disk reads:  168 MB in  3.01 seconds =  55.79 MB/sec
root@dis:/home/acz# hdparm -T /dev/md0

/dev/md0:
 Timing cached reads:   1176 MB in  2.00 seconds = 586.62 MB/sec

CPU usage is really minimal on this machine (1.8 GHz AMD 2500+, 1 GB of DDR
RAM); the only time CPU usage climbs is after a crash (my UPS died on me a
few times without warning while the power was up; I guess it is time to
replace it. It's kind of stupid that my UPS is a single point of failure--
does anyone know an easy way to run two in parallel?).
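
That post-crash CPU load is the resync; you can watch its progress (and any
rebuild) in /proc/mdstat:

  # per-array status plus progress of any running resync/rebuild
  cat /proc/mdstat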

RAID5 or RAID6 is really the best way to go in terms of redundancy and performance. RAID5 tolerates one drive failure (and will rebuild automatically onto a spare if you have one), and RAID6 allows two drives to fail at the same time. Using other RAID modes is a pure waste unless you work with big temporary files, where the performance boost of a striping (RAID0) array comes in handy; one disk failure on a striping array and you can say bye to your data.
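
If you want to see the spare kick in, you can fail a member by hand and
watch md rebuild onto it (device names here are again just an example):

  # mark one member faulty, then pull it from the array
  mdadm /dev/md0 --fail /dev/hdc1
  mdadm /dev/md0 --remove /dev/hdc1
  # md starts reconstructing onto the hot spare automatically;
  # /proc/mdstat shows the rebuild progress
  cat /proc/mdstat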

Oh, and by the way, even if you use RAID5/6 you are not protected from
filesystem corruption and human stupidity; you still need to do backups
regularly.
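
Even a dumb nightly rsync to another box is better than nothing (host and
paths below are just placeholders):

  # copy the array's data to a second machine; run it from cron every night
  rsync -a /data/ backuphost:/backups/data/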

Anthony

_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
https://lists.immunitysec.com/mailman/listinfo/dailydave

