nanog mailing list archives

Re: Got a call at 4am - RAID Gurus Please Read


From: Javier J <javier () advancedmachines us>
Date: Wed, 10 Dec 2014 17:18:49 -0500

I'm just going to chime in here since I recently had to deal with bit-rot
affecting a 6TB Linux RAID 5 setup using mdadm (6x 1TB disks).

We couldn't rebuild because of 5 URE sectors on one of the other disks in
the array after a power/UPS issue rebooted our storage box.
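
To put numbers on why that rebuild was doomed, here is a rough sketch,
assuming the 1-error-per-10^14-bits URE rate that consumer drives are
typically specced at, of the odds that the five surviving 1TB disks could
all have been read cleanly:

    # Rough odds of completing a RAID 5 rebuild without hitting a URE.
    # Assumptions (for illustration only): consumer-drive URE rate of
    # 1 error per 1e14 bits read, and 5 surviving 1 TB disks that must
    # all be read in full to reconstruct the failed member.

    ure_rate = 1e-14                    # unrecoverable read errors per bit
    surviving_disks = 5
    bits_per_disk = 1e12 * 8            # 1 TB expressed in bits
    bits_to_read = surviving_disks * bits_per_disk

    p_clean_rebuild = (1 - ure_rate) ** bits_to_read
    print(f"Chance of a clean rebuild: {p_clean_rebuild:.1%}")  # roughly 67%

So even on paper, with spec-sheet error rates, a rebuild like that fails
about one time in three.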

We are now using ZFS RAIDZ, and the question I keep asking myself is: why
wasn't I using ZFS years ago?
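
Regular scrubs are a big part of the appeal: ZFS walks every allocated
block and verifies its checksum before you actually need the redundancy.
A minimal sketch, assuming a pool named "tank" (hypothetical) and the
standard zpool CLI on the PATH, of kicking off a scrub and checking the
result:

    # Sketch: start a scrub and look for checksum errors afterwards.
    # Assumes a ZFS pool named "tank" and the standard zpool command.
    import subprocess

    POOL = "tank"   # hypothetical pool name

    # Kick off a scrub; ZFS verifies every allocated block against its
    # checksum and repairs from parity/mirrors where it can.
    subprocess.run(["zpool", "scrub", POOL], check=True)

    # Later, inspect the pool status; the CKSUM column counts blocks
    # that failed their checksum on read.
    status = subprocess.run(["zpool", "status", POOL],
                            capture_output=True, text=True,
                            check=True).stdout
    print(status)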

+1 for ZFS and RAIDZ



On Wed, Dec 10, 2014 at 8:40 AM, Rob Seastrom <rs () seastrom com> wrote:


The subject is drifting a bit but I'm going with the flow here:

Seth Mos <seth.mos () dds nl> writes:

> RAID 10 is the only valid RAID format these days. With disks as big as
> they now get, silent corruption is a real possibility.

How do you detect it?  A man with two watches is never sure what time it
is.

Unless you have a filesystem that detects and corrects silent
corruption, you're still hosed; you just don't know it yet.  RAID10
between the disks in and of itself doesn't help.
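
That's the crux: with a plain mirror, when the two copies disagree there
is nothing to say which one is right.  A checksum stored with the block
pointer rather than with the data settles the argument.  A toy sketch of
the idea (not ZFS internals, just the principle):

    # Toy illustration of end-to-end checksumming, the property RAID10
    # alone lacks.  Each block's hash is kept apart from the data, so on
    # read we can tell which mirror copy is good instead of guessing.
    import hashlib

    def checksum(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    # Write path: record the checksum alongside the block pointer.
    original = b"important payload"
    stored_sum = checksum(original)

    # Read path: the two mirror copies disagree (one rotted silently).
    copy_a = b"important payload"
    copy_b = b"important paylaod"      # transposed bytes, same length

    for name, copy in (("copy_a", copy_a), ("copy_b", copy_b)):
        ok = checksum(copy) == stored_sum
        print(f"{name}: {'good' if ok else 'corrupt, repair from the other copy'}")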

> And with 4TB+ disks that is a real thing.  RAID 6 is OK, if you accept
> rebuilds that literally take a week.  Although the rebuild time on our
> 11-disk RAID 6 SSD array (2TB) is less than a day.

I did a rebuild on a RAIDZ2 vdev recently (made out of 4TB WD Reds).
It took nowhere near a day, let alone a week.  Theoretically it takes
8-11 hours if the vdev is completely full, proportionately less if it's
not, and I was at about 2/3 in use.
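
The arithmetic behind that estimate, assuming the resilver is bounded by
sequential throughput somewhere in the 100-150 MB/s range typical of 4TB
SATA drives and that only allocated data gets copied:

    # Back-of-the-envelope resilver time, assuming the rebuild is bounded
    # by sequential transfer to the replacement disk (100-150 MB/s is an
    # assumption, roughly in line with 4 TB SATA drives) and that ZFS
    # only resilvers allocated data.
    disk_tb = 4.0
    fraction_used = 2 / 3              # roughly what the vdev had allocated

    for mb_per_s in (100, 150):
        hours_full = disk_tb * 1e6 / mb_per_s / 3600   # vdev completely full
        hours_used = hours_full * fraction_used
        print(f"{mb_per_s} MB/s: ~{hours_full:.0f} h full, "
              f"~{hours_used:.0f} h at 2/3 used")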

-r



