nanog mailing list archives

Re: Got a call at 4am - RAID Gurus Please Read


From: "Allen McKinley Kitchen (gmail)" <allenmckinleykitchen () gmail com>
Date: Tue, 9 Dec 2014 20:15:41 -0500

+1 on the most important statement below, from my point of view: RAID 5 and RAID 10 are totally separate animals, and 
while you can set up a separate RAID 10 array and migrate your data to it (as soon as possible!), you cannot migrate 
from 5 to 10 in place, absent some utter magic that I am unaware of.
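For the capacity side of that trade-off, the usable-space arithmetic works out as follows. This is a sketch using the drive counts from the original message below (4 in the unit plus 2 on the shelf); the formulas are standard RAID arithmetic, not anything specific to these cards:

```python
# Usable capacity in "drives' worth" of space:
# RAID 5 over n drives yields n-1 (one drive's worth goes to parity);
# RAID 10 over n drives yields n/2 (everything is mirrored).
def raid5_usable(n):
    return n - 1

def raid10_usable(n):
    return n // 2

# The existing 4-drive RAID 5 vs. a 6-drive RAID 10 built with the
# two shelf spares added in:
print(raid5_usable(4))   # 3 drives' worth
print(raid10_usable(6))  # 3 drives' worth
```

So with the two spare drives added, a 6-drive RAID 10 matches the usable capacity of the existing 4-drive RAID 5, which makes the copy-off-and-rebuild route feasible without shrinking the data set.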

10 requires more raw drive space but offers significant write-performance advantages when correctly configured (which 
isn't really too difficult). 5 is fine for protection against losing one drive, but 5 requires much more internal 
processing of data before the writes begin and, not too long ago, was considered completely inappropriate 
for write-heavy applications such as a transactional database.
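That extra internal processing is the classic small-write penalty: a small random write on RAID 5 costs four back-end I/Os (read old data, read old parity, write new data, write new parity), versus two on RAID 10 (one write to each mirror half). A rough sketch of the effect on random-write throughput, using an assumed per-drive IOPS figure purely for illustration:

```python
# Illustrative effective random-write IOPS for a 4-drive array.
# IOPS_PER_DRIVE is an assumed figure, not measured from this hardware.
DRIVES = 4
IOPS_PER_DRIVE = 150  # assumption: roughly what a 10k-rpm SAS drive sustains

# RAID 5: write penalty of 4 (read data, read parity, write data, write parity)
raid5_write_iops = DRIVES * IOPS_PER_DRIVE / 4

# RAID 10: write penalty of 2 (one write per mirror half)
raid10_write_iops = DRIVES * IOPS_PER_DRIVE / 2

print(raid5_write_iops)   # 150.0
print(raid10_write_iops)  # 300.0
```

Whatever the absolute per-drive number, the ratio is the point: all else equal, RAID 10 sustains roughly twice the random-write rate of RAID 5 on the same spindles, and the gap matters most once the controller is forced into write-through.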

Still, 5 is often used for database systems in casual installations just because it's easy, cheap (relatively) and 
modern fast boxes are fast enough. 

Ok, getting down off my RAID soapbox - good luck.

..Allen

On Dec 9, 2014, at 17:22, Michael Brown <michael () supermathie net> wrote:

If the ServeRAID-7k cards are LSI and not Adaptec based (I think they are), you should just be able to plug in a new 
adapter and import the foreign configuration.

You do have a good backup, yes?

Switching to write-through has already happened (unless you specified WriteBackModeEvenWithNoBBU - not the default) - 
these (LSI) cards by default only WB when "safe".

If WT, RAID10 much better perf. BUT you just can't migrate from R5 to R10 non-destructively.

- Michael from Kitchener
  Original Message  
From: symack
Sent: Tuesday, December 9, 2014 16:04
To: nanog () nanog org
Subject: Got a call at 4am - RAID Gurus Please Read

Server down..... Got to the colo at 4:39 and an old IBM x346 node with a
ServeRAID-7k has failed. Opened it up to find a swollen cache battery that
has bent the card along three different axes. Separated the battery. (i)
Inspected the card and plugged it back in, (ii) rebooted, and got (code 2807) Not
functioning....
Returned to (i) three times, got the same result. Dusted her off and let it sit for a while
plugged in, rebooted to see if I could get her into write-through mode, disks
start spinning. Hooray.

Plan of action, (and the reason for my post):

* Can I change from an active (i.e., disks with data) RAID 5 to RAID 10?
There are 4 drives
in the unit, and I have two on the shelf that I can plug in.
* If so, will I have less of a performance impact with RAID 10 + write-thru
than with RAID 5 + write-thru?
* When the new RAID card comes in, can I just plug it in without losing my
data? I would:

i) RAID 10
ii) Write-thru
iii) Replace card

The new card is probably coming with a bad battery, which would put us kind
of back at square one. New batteries are 200+ if I can find them. Best-case
scenario is to move over to RAID 10 + write-thru and feel less of the
performance pinch.

Given that I can move from RAID 5 to RAID 10 without losing data, how long should I
anticipate downtime for this process? Is there heavy sector rearranging
happening here? And the same for write-thru: is it done quickly?
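Since the replies above say an in-place 5-to-10 migration isn't possible, the downtime would be dominated by copying the data off the array and back after rebuilding it as RAID 10. A back-of-envelope sketch, where both the data-set size and the copy rate are assumed figures (nothing in the thread states them):

```python
# Hypothetical downtime estimate for the copy-off / rebuild / copy-back route.
# Both inputs are assumptions for illustration, not figures from this server.
data_gb = 500          # assumed data-set size in GB
throughput_mb_s = 100  # assumed sustained sequential copy rate in MB/s

one_way_min = data_gb * 1024 / throughput_mb_s / 60
print(round(one_way_min))      # minutes to copy one direction
print(round(one_way_min * 2))  # minutes for off and back, excluding rebuild time
```

Scale the inputs to the real data set; the shape of the answer is hours of copying, not a quick in-place flip. Switching the cache policy to write-through, by contrast, is just a controller setting and takes effect immediately.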

I'm going to go lay down just for a little while.

Thanks in Advance,

Nick from Toronto.

