Why RAID 5 Stops Working In 2009
Lally Singh recommends a ZDNet piece predicting the imminent demise of RAID 5, noting that increasing storage and non-decreasing probability of disk failure will collide in a year or so. This reader adds, "Apparently, RAID 6 isn't far behind. I'll keep the ZFS plug short. Go ZFS. There, that was it." "Disk drive capacities double every 18-24 months. We have 1 TB drives now, and in 2009 we'll have 2 TB drives. With a 7-drive RAID 5 disk failure, you'll have 6 remaining 2 TB drives. As the RAID controller is busily reading through those 6 disks to reconstruct the data from the failed drive, it is almost certain it will see an [unrecoverable read error]. So the read fails ... The message 'we can't read this RAID volume' travels up the chain of command until an error message is presented on the screen. 12 TB of your carefully protected — you thought! — data is gone. Oh, you didn't back it up to tape? Bummer!"
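For anyone who wants to sanity-check that claim, here's a minimal back-of-envelope in Python, taking the advertised one-URE-per-10^14-bits spec at face value and assuming independent, uniformly distributed errors:

def p_at_least_one_ure(terabytes, ure_rate_per_bit=1e-14):
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    return 1.0 - (1.0 - ure_rate_per_bit) ** bits

for tb in (2, 6, 12, 24):
    print(f"{tb:>3} TB read: P(at least one URE) ~ {p_at_least_one_ure(tb):.0%}")

With those numbers a 12 TB rebuild has roughly a 60% chance of hitting at least one URE — "likely" rather than "almost certain" — but the trend as capacities keep doubling is the real point.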
Carefully protected? (Score:5, Insightful)
12 TB of your carefully protected — you thought! — data is gone. Oh, you didn't back it up to tape? Bummer!
If it wasn't backed up to an offsite location, then it wasn't carefully protected.
Re: (Score:3, Interesting)
Don't panic! (Score:4, Insightful)
RAID 5 will still be orders of magnitude more reliable than just having a single disk.
Re:Don't panic! (Score:5, Insightful)
No, it won't. That's the point of this not-news article. It's getting to the point where (due to the size of the disks) a rebuild takes longer than the statistically "safe" window between individual disk failures. Two disks kick it in the same timeframe (the chance of which increases as you add disks) and you're screwed.
A poorly designed multi-disk storage system can easily be worse than a single disk.
Re:Don't panic! (Score:5, Insightful)
Using the same failure rate figures as the article, you WILL get an unrecoverable read error each and every time you back up your 12 TB of data. You will be able to recover from the single block failure because of the RAID 5 setup.
With that kind of error rate, drive manufacturers will be forced to design to higher standards; they won't be able to sell drives that fail at that rate.
Re:Don't panic! (Score:5, Insightful)
Yes. It's amazing that the article presents the basic point so horribly poorly. The problem is not the capacity of the disks.
The problem is that the capacity has been growing faster than the transfer bandwidth. Thus it takes a longer and longer time to read (or write) a complete disk. This gives a larger window for a double failure.
Simple as that.
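A rough illustration of that window in Python, with illustrative (not measured) capacities and sustained throughputs:

def hours_to_read(capacity_tb, mb_per_s):
    return capacity_tb * 1e6 / mb_per_s / 3600   # decimal TB, MB/s

for cap_tb, rate in [(0.25, 60), (1.0, 90), (2.0, 110)]:
    print(f"{cap_tb:.2f} TB at {rate} MB/s sustained: ~{hours_to_read(cap_tb, rate):.1f} h end to end")

Capacity roughly doubles every generation while sustained throughput only creeps up, so the time to touch every sector, and hence the rebuild window, keeps growing.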
Re:Don't panic! (Score:4, Informative)
The problem is that the capacity has been growing faster than the transfer bandwidth. Thus it takes a longer and longer time to read (or write) a complete disk. This gives a larger window for a double failure.
No, the point is that, statistically, you can't actually read all of the data without hitting another read error.
Whether you read it all at 100MB/sec or 10MB/sec (ie: how long it takes) is irrelevant (within reason). The problem is that published URE rates are such that you "will" have at least one during the rebuild (because of the amount of data).
The solution, as outlined by a few other posters, is more intelligent RAID 5 implementations that don't take an entire disk offline just because of a single sector read error (some already act like this, most don't).
Re:Don't panic! (Score:5, Insightful)
And there is parity; that's how RAID 5 works.
You are probably referring to "silent" errors, which, for performance reasons, aren't read or detected by most RAID 5 implementations. And in reality there is little reason to actively read parity unless the array is running/recovering in degraded mode: sure, you'll be informed that there is data corruption, but there is no way to tell whether the parity or the original data is at fault (though it's true that some implementations will scrub/update the parity to match the original data on an occasional basis).
I don't see a single set of RAID 5 disks as a backup solution by any measure, though (disk reliability is only one aspect of this; hardware/driver/filesystem bugs can also cause hard or impossible to detect corruption), but it is a great 'best effort' to prevent a bit of downtime on high availability disks.
Re:Don't panic! (Score:5, Informative)
How reliable RAID5 is depends, because actually the more disks you have, the greater the likelihood that one of them will fail in any set period of time. So obviously if you have a RAID 0 of lots of disks, then there is a much better chance that the RAID will fail than that any particular disk will fail.
So the purpose of RAID5 is not so much to make it orders of magnitude more reliable than just having a single disk, but rather to mitigate the increased risk that would come from having a RAID0. So you'd have to calculate, for the number of disks and the failure rate of any particular drive, what the chances are of having 2 drives fail at the same time (given a certain response time to drive failure). If you have enough drives and a slow enough response to disk failures, it's at least theoretically possible (I haven't done the math) that a single drive is safer.
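A toy version of that calculation in Python, under strong assumptions: independent failures, a made-up 3% annual failure rate, a 24-hour rebuild, and unrecoverable read errors ignored entirely (which is the article's whole complaint):

def raid5_annual_loss(n_disks, afr=0.03, rebuild_hours=24):
    p_first = n_disks * afr                                   # expected first failures per year
    p_second = 1 - (1 - afr * rebuild_hours / 8766) ** (n_disks - 1)
    return p_first * p_second                                 # approx. chance of losing the array per year

print(f"single disk:       {0.03:.3%} chance of loss per year")
for n in (4, 7, 14):
    print(f"RAID 5, {n:>2} disks:  {raid5_annual_loss(n):.3%} chance of loss per year")

Plug in your own numbers; with enough disks or a slow enough response the gap closes, but whole-drive failures alone rarely make the array worse than a single disk.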
Re:Don't panic! (Score:5, Insightful)
You seem to misunderstand the article. They are saying that if you need 12T of storage RAID 5 is not reliable. You would be better off with a single 12T disk if such a thing existed.
That's not what the article says at all.
The article says that if you build your RAID arrays from the biggest disks available (which no one with half a brain does) like 1-3TB drives, and you have them filled, then the numbers come out as presented.
But there's a reason why no one on the planet builds important raid arrays out of 1TB drives. Rebuild time is too long.
This is also one of the big reasons why you see so many 73GB and 140GB SAS/SATA drives in raid arrays, and why server storage drives don't grow anything like as fast as consumer garbage drives.
Re:Carefully protected? (Score:4, Informative)
"Safe" production data ...with nightly replays/screenshots ...
Exactly. You make backups, no matter what. Anyone that relies on RAID for backups will get what they deserve, sooner than later.
RAID and SANs are for uptime (reliability) and/or performance. SANs with snapshots and RAID with backups are for data recovery.
Re:Carefully protected? (Score:5, Insightful)
I love how you use the language "get what they deserve".
What about my situation, where I have to store ~ 1TB of unique data per office in 3 offices that are roughly 1000 km apart and I have to keep everything backed up with a budget of less than ~AU$ 4000 IN TOTAL?
I have to run 4 x 1TB RAID arrays on the file servers and use rsync to synchronise all the data between the offices nightly, "effectively" doing offsites, and have a 3 TB linux NAS (also using RAID 5) for incrementals at the main site.
That is all I can afford, and I feel that I'm doing my best for my employer given my budget and still maintaining my professional integrity as a sysad.
Why do I "get what they deserve" when I can't afford the necessary LTO4 drives, servers and tapes (I worked it out I'd need ~ AU$ 30,000) to do it any other way?
Re:Carefully protected? (Score:5, Insightful)
If you're replicating data between all three offices (and a fourth backup system?) then you are making backups. The vitriol is aimed at people who set up a RAID-5 array and then say "hooray my data is protected forevermore!".
Tape systems, especially high capacity tapes, are very expensive, and even those are prone to failures. Online backups to other hard drives are the only affordable means of backing up today's high capacity, low cost hard drives. To do it properly though, you need to make sure you do have separate physical locations for protection from natural disasters, fires, etc. Which you have.
The only concern your system may have is: how do you handle corrupted data, or user error? If you've got a TB of data at each site it's unlikely that mistakes will be noticed quickly, so after the nightly synchronisation all your backups will now have the corrupt data and when someone realises in a month's time that someone deleted a file they shouldn't have or saved crap data over a file, how do you restore it? Hopefully your incremental backups can be used to recover the most recent good copy of the data, but how long do you keep those for?
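One cheap way to answer the retention question is dated, hard-linked snapshots. A minimal Python sketch, assuming rsync is available; the paths and the 60-day retention are placeholders:

#!/usr/bin/env python3
# Dated incrementals with rsync --link-dest, so a file deleted weeks ago
# can still be pulled back from an older snapshot.
import datetime
import pathlib
import shutil
import subprocess

SRC = "/srv/data/"                        # trailing slash: copy contents, not the dir itself
DEST = pathlib.Path("/backup/incrementals")
KEEP = 60                                 # dated snapshots to keep

DEST.mkdir(parents=True, exist_ok=True)
snapshots = sorted(p for p in DEST.iterdir() if p.is_dir())

cmd = ["rsync", "-a", "--delete"]
if snapshots:                             # unchanged files become hard links to the newest snapshot
    cmd.append(f"--link-dest={snapshots[-1]}")
subprocess.run(cmd + [SRC, str(DEST / datetime.date.today().isoformat())], check=True)

for old in snapshots[:-KEEP]:             # prune anything beyond the retention window
    shutil.rmtree(old)

Unchanged files cost almost nothing because they are hard links to the previous snapshot, so keeping a month or two of nightly history is usually affordable.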
Re:Carefully protected? (Score:4, Informative)
External TB drives are around $150 bucks. Buy several. Make rotating copies. It's doable on your budget. (We're in the same boat, btw, and that was our solution for the dev machines)
However, the real issue is your employer has decided on the budget, and what you do with it is how well you're protected. Sometimes we don't get a Fibre NAS with remote backup, no matter how much we want it. Sometimes we have to get by with the old rsync, dd, or pure copy or even tar/zip with rotating media. (Anything less is suicide)
Re:Carefully protected? (Score:5, Interesting)
Judging by the budget you quoted, it's a combination of all of the above: you are a crappy sysadmin for a crappy company with limited growth potential.
Sigh. *ignores flamebait*
Anyway, here's the actual reality of the situation:
I'm a not brilliant (but certainly not crappy either) sysad who is working for a company that has rapidly expanded to the point where they need a full time sysad, and then felt the kaboom of the subprime mortgage debacle, since they consult to the property market. Hence why my original upgrade budget got shrunk big time.
The company BOTH cares about their data AND can't afford a proper backup system.
Re:Carefully protected? (Score:5, Funny)
The company BOTH cares about their data AND can't afford a proper backup system.
In this case, linux has one last resort for you:
sudo apt-get install bible
darkpixel@hoth:~$ bible
bible: Debian/BRS Release 4.18, $Date: 2005/01/23 11:29:22 $
Hit '?' for help.
-snip-
bible(KJV) [Gen1:1]> ec3:6
Ecclesiastes 3
6 A time to get, and a time to lose; a time to keep, and a time to cast away;
bible(KJV) [Ec3:6]>
Mainly pay attention to that whole '...and a time to lose' part.
Re:Carefully protected? (Score:5, Insightful)
It can be that the company cares, but doesn't care enough to budget for potential data recovery. All you can do is make sure the risks are explained, with budget options and a well-documented paper trail, to cover your nether regions. Been there, done that. The typical response is that backups are not important, until a failure and a few days of uncertainty is forced upon the company.
Having the same, potentially corrupted, data at multiple sites mitigates against the loss of a disk, or even the loss of a single site. User error or database corruption can wind up copied over your good data. Needing to go back for more than a day or two may not be practical in a disk-to-disk backup environment.
It's a part of the system manager's role to spell out potential problems in easy to understand PowerPoint sound bites and show what options are available. The better you can do this, the more toys you'll have to play with.
Re:Carefully protected? (Score:5, Informative)
Re:Carefully protected? (Score:5, Funny)
That's why serious IT people use Fedex.
Re:Carefully protected? (Score:5, Funny)
Re:Carefully protected? (Score:5, Interesting)
I always love it when Fed-Ex destroys something and then tries to hide it. One day I walked past the shipping office and I smelled the very strong odor of hydraulic oil coming from the room. I take a look inside since we shouldn't be receiving anything that has hydraulic oil in it. I found a bunch of boxes with the local Detroit Airport logo all over them and sealed with DET labeled tape. The cardboard was completely soaked through with the oil.
I carefully opened one of the boxes and found it contained servers! It appears that the original boxes got in some sort of accident at the airport and were completely soaked. At the airport Fed-Ex or the baggage handlers did us a "favor" and re-boxed everything. The servers were so coated (and filled) that even the new boxes were completely soaked through and the bottoms of the boxes were starting to pull apart. The Fed-Ex guy (so we wouldn't refuse them) dropped them off at lunch and then got some random person in the hall to sign off on it.
We had to pay for new servers to be built ASAP and shipped overnight (UPS this time) at huge cost for us. Since someone had signed off on the package we then had a very long fight to get Fed-Ex to pay for the equipment they destroyed. We never got the extra cost for the overnight shipping and the rush build reimbursed.
Re:Carefully protected? (Score:4, Informative)
"Unrecoverable" implies that it is not possible to read the data anymore.
Also, data on the disk is addressed by sectors, so if one fails, this means you typically have at least 512 bytes lost.
It's true that even that might not completely break some kind of large media file, but you have to remember that RAID5 is a layer below your file system data, so if an error occurs when it's trying to rebuild itself, it will not be able to give you your data back.
You might be able to recover a lot of your data from an error of this kind, but don't count on the RAID implementation to do it for you.
Re:Carefully protected? (Score:5, Insightful)
Yea, because we all backup 12TB of home data to an offsite location. Mine is my private evil island, and I've bioengineered flying death monkeys to carry the tapes for me. They make 11 trips a day. I'm hoping for 12 trips with the next generation of monkeys, but they're starting to want coffee breaks.
I'm sorry, but I'm getting seriously tired of people looking down from the pedestal of how it "ought" to be done, how you do it at work, how you would do it if you had 20k to blow on a backup solution, and trying to apply that to the home user. Even the tape comment in the summary is horseshit, because even exceptionally savvy home users are not going to pay for a tape drive and enough tapes to archive serious data, much less handle shipping the backups offsite professionally.
This is serious news. As it stands, the home user that actually sets up a RAID 5 array is in the top percentile for actually giving a crap about home data. Once that becomes a non-issue, then the point has come when a reasonable backup is out of reach of 99% of private individuals. This, at the same time as more and more people are actually needing a decent solution.
Re:Carefully protected? (Score:4, Informative)
you know the other solution is to not use RAID5 with these big drives, or to go to RAID1, or to actually back up the data you want to save to DVD and accept a disk failure will cost you the rest.
Now, while 1TB onto DVDs seems like quite a chore (and I'll admit it's not trivial), some level of data staging can help out immensely, as well as incrementally backing up files, not trying to actually get a full drive snapshot.
Say you backup like this:
my pictures as of 21oct2008
my documents (except pictures and videos) as of 22 oct2008
etc.
while you will still lose data in a disk failure, your loss can be mitigated, especially if you only try to backup what is important. With digital cameras I would argue that home movies and pictures are the two biggest data consumers that people couldn't backup to a single dvd and that they would be genuinely distressed to lose.
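If anyone wants to automate the staging, here's a rough Python sketch that just bins files into DVD-sized sets, first-fit by size; the 4.3 GB figure and the starting path are assumptions, not recommendations:

import os

DVD_BYTES = int(4.3 * 1024**3)
root = os.path.expanduser("~/Pictures")

sets, current, used = [], [], 0
for dirpath, _, names in os.walk(root):
    for name in names:
        path = os.path.join(dirpath, name)
        size = os.path.getsize(path)
        if current and used + size > DVD_BYTES:
            sets.append(current)          # this set is full; start the next disc
            current, used = [], 0
        current.append(path)
        used += size
if current:
    sets.append(current)

for i, files in enumerate(sets, 1):
    print(f"disc {i}: {len(files)} files")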
-nB
Re:Carefully protected? (Score:5, Insightful)
Yea, but DVD is transient crap. How long will those last? A few years? You cannot rely on home-burned optical media for long term storage, and while burning 12 terabytes of information on to one set of 1446 dvds (double layer) may not seem like a big deal, having to do it every three years for the rest of your life is bound to get old.
For any serious storage you need magnetic media, and though we all hate tape, 5 year old tape is about a million times more reliable than a hard drive that hasn't been plugged in in 5 years.
So either you need tape in the sort of quantity that the private user cannot justify, or you're going to have to spring for a hefty RAID and arrange for another one like it as a backup. Offsite if you're lucky, but it's probably just going to be out in your garage/basement/tool shed.
Now, what do you do if you can't rely on RAID? No other storage is as reliable and cheap as the hard drive. ZFS and RAID-Z may solve the problem, but they may not...You can still have failures, and as hard disk sizes increase, the amount of data jeopardized by a single failure increases as well.
Re: (Score:3, Insightful)
Re:Carefully protected? (Score:5, Insightful)
SSDs are going to become a very viable option for the home backup
Yeah, I love paying much more for my backup than for my primary storage.
Re:Carefully protected? (Score:5, Insightful)
Sure, right now. The first hard drive I ever bought was 8 megabytes and cost 600 dollars. 4 years ago I bought a 1gb usb flash drive for 300 dollars, now they're running 10-20 bucks.
In a few years solid state will be something I'm looking at VERY seriously. It has serious potential for long term storage. Yea, it's too expensive...right now...But in the long run it's the most promising thing out there.
Re: (Score:3, Insightful)
I agree that SSDs are inevitable... for primary storage. Once I've switched my laptop over to SSD I'll still use a hard disk for backup, though.
Re:Carefully protected? (Score:5, Funny)
That's okay. We'll just gang them together in a RAID 5 configuration.
Re: (Score:3, Insightful)
Yea, but DVD is transient crap. How long will those last?
But DVD is *cheap* transient crap, and perfectly adequate for home backups.
I've got something in the area of 200GB of data on the machine which I'm currently using to type this, but very little of that data has any intrinsic or sentimental value to me. Most of it is applications and games that could easily be reinstalled from the original media or re-downloaded. A DVD or two could easily hold all of the data I *need* and even cheap optical media will outlive this machine's usefulness.
Re:Carefully protected? (Score:5, Interesting)
I can't vouch for DVD-R but I have el-cheapo store brand CD-Rs that I backed up my MP3 collection to 11 years ago and they work just fine. My solution is this:
Back everything up that's not media (mp3/video) every 6 months to CD-R, and once a year, copy all my old data onto a new hard drive that's 20+% larger than the one I bought last year and unplug the old one. I have 11 old hard drives sitting in the closet should I ever need that data, and the likelihood of a hard drive failing in the first year (after the first 30 days) is phenomenally low. Any document that I CAN'T lose between now and the next CD-R backup goes on a thumb drive or its own CD-R, and/or I email it to myself.
Re:Carefully protected? (Score:4, Interesting)
I just wish all the density improvements that hard disks get would propagate to tape. Tape used to be a decent backup mechanism, matching hard disk capacities, but in recent time, tape drives that have the ability to back up a modern hard disk are priced well out of reach for most home users. Pretty much, you are looking at several thousand as your ticket of entry for the mechanism, not to mention the card and a dedicated computer because tape drives have to run at full speed, or they get "shoe-shining" errors, similar to buffer underruns in a CD burn, where the drive has to stop, back up, write the data again and continue on, shortening tape life.
I'd like to see some media company make a tape drive that has a decently sized RAM buffer (1-2GB), USB 2, USB 3, or perhaps eSATA for an interface port, and bundled with some decent backup software that offers AES encryption (Backup Exec, BRU, or Retrospect are good utilities that all have stood the test of time.)
Of course, disk engineering and tape engineering are solving different problems. Tape heads always touch the actual tape while the disk heads do not touch the platter unless bumped. Tape also has more real estate than disk, but tape needs a *lot* more error correction because cartridges are expected to last decades and still have data easily retrievable from them.
Re:Carefully protected? (Score:4, Informative)
Re: (Score:3, Informative)
Re:Carefully protected? (Score:5, Informative)
I have CD-Rs dating back to 1994 or 1995 that are just fine -- and they're off-brand media too. "Good" media was $12 to $20 per CD then, and "cheap" media was $7.00 per CD.
I have DVD-Rs dating back to 2002 or 2003 -- again, just fine.
While it's good to be cautious, some in here are crying wolf regarding optical media.
Re:Carefully protected? (Score:5, Informative)
I've got a mainframe circa 1984 that's been using the same type of drive since 1989. Last year we pulled all the year-end financial numbers off the yearly backups dating back to that point. Zero failed tapes.
Consumer-grade CDs and DVDs use a photosensitive dye to record information. It can degrade in anywhere from 2 to 5 years...Longer if you keep it in a cool dark place, but not 20 years.
Re:Carefully protected? (Score:5, Insightful)
Oh come on. Do you have 12TB of home data? Seriously? And if you do, it's not that hard to have another 12TB of external USB drives at some relatives' place.
I've got about 500GB of data that I care about at home & the whole lot's backed up onto a terabyte external HDD at my Dad's. It's not that hard.
If you think raid is protecting your data, you're crazy.
Re:Carefully protected? (Score:5, Funny)
Oh come on. Do you have 12TB of home data? Seriously? And if you do, it's not that hard to have another 12TB of external USB drives at some relatives' place.
Not all of us have relatives, you insensitive...[URE]
Re:Carefully protected? (Score:5, Funny)
Buying a computer system you cannot afford to properly use is crazy. Yes, some people are crazy, and those crazy people are going to lose data, but there's no sense in defending it.
Well, i guess i'm crazy, i have 3TB of space on my home PC, and no way to back it all up offsite. I do have some important folders from one drive automatically copy to another drive periodically, so if one drive dies the other will be okay, but if i lose them both or the place burns down or i get a nasty virus, it's all going to hell.
Most of my space is taken up by pirated... err... backed up... HD movies. And porn, lots of porn.
Either way, i'm not too worried if i lose that, it's just the things i back up i really care about.
The thing is, i was going to RAID 3 of the drives into a secure 1TB array, but now i hear all these issues with RAID and i worry that it may be WORSE than just copying over the files periodically. I want a DROBO but those are expensive as hell.
This article has inspired me to look into Tape Backup but i worry that it's not cost effective (i haven't looked yet).
I should fill up some tapes with a few hundred gigs of porn, write "confidential" on them, and stash them in a bag, under some bush, across the street from HP near my apartment. I'm sure some curious person would come looking, only to discover their contents and wonder why the hell someone went to all that trouble....
God i'm strange.
-Taylor
Re:Carefully protected? (Score:5, Funny)
Next they'll want to unionize. At that point you've lost everything.
All data is not of equal value. (Score:4, Insightful)
Prioritize your data. I cannot believe that a home user has 12TB of important stuff. Back up your critical records both on site and off [1]. Back up the important stuff on site with whatever is convenient. Let the rest go hang.
[1] Use DVDs in the unlikely event you have that much critical data. Few home users will have a critical need for that stuff beyond the life of the media. Any that do can copy it over every five years, and take the opportunity to delete the obsolete stuff.
Re:RAID doesn't protect against your worst enemy (Score:5, Insightful)
Wow, how incite-ful. Doesn't matter what the discussion is, some geek is bound to weigh in with all the shortcomings of any idea.
Newsflash: there is no perfect backup! No method is foolproof, especially when it's bound to be boring as hell, and you've got an inevitable human factor. You get lazy moving the tapes offsite, you put off fixing a dead drive because there are 4 others, you wipe your main partition upgrading your distro and forget that your CRON rsync script uses the handy --delete flag, and BOOM wipes out your backup.
Shit happens. Pointing out what we all already know doesn't do anything helpful.
Re:RAID doesn't protect against your worst enemy (Score:5, Funny)
My data backup scheme is to steganographically embed my entire filesystem into nude pictures of Sarah Palin, and then upload them to usenet.
Re:RAID doesn't protect against your worst enemy (Score:4, Funny)
Re: (Score:3, Insightful)
"Shit happens. Pointing out what we all already know doesn't do anything helpful."
Actually, it gives posters like you a chance to remind everyone else that shit happens.
I believe there would be many fewer frustrated/bitter IT workers if more people meditated on the fact that shit just happens. In today's marketplace it is usually IT left holding the bag when things go south anyhow... gotta get acclimated to that and roll on.
Anyhow, I doubt there are many IT veterans not familiar with really expensive, real
Re:RAID doesn't protect against your worst enemy (Score:4, Interesting)
No method is foolproof, especially when it's bound to be boring as hell, and you've got an inevitable human factor. You get lazy moving the tapes offsite, you put off fixing a dead drive because there are 4 others, you wipe your main partition upgrading your distro and forget that your CRON rsync script uses the handy --delete flag, and BOOM wipes out your backup.
Jesus Christ, you must be one unlucky soul. Do you live your entire life in a worst-case scenario?
The system that I use for data storage is as follows:
This system may not be foolproof (what is?), but it is pretty frickin' safe, and costs me roughly $3 or $4 per month. Not too shabby for what I would consider to be a fairly robust backup system for a home user.
I suppose the biggest challenge is deciding what goes into rsnapshot. If my RAID array suffered a massive failure, I would definitely lose data. But this is mostly video content, and really, if I lose my mythtv shows, it is not exactly as catastrophic as if I lost, say, my quickbooks data.
There are a lot of things that keep me awake at night, but loss of important data is not one of them.
Re: (Score:3, Insightful)
The vast majority of Egypt's writings were stored on perishable papyrus, not carved or painted on stone. Of all that they ever wrote or stored, we have but the tiniest fraction remaining.
If we lost technology today, there would be nothing left but paper in 20 years. In a thousand, there wouldn't even be much paper.
Comment removed (Score:5, Funny)
Re:RAID doesn't protect against your worst enemy (Score:5, Funny)
And in a thousand years some bearded guy will discover a couple of those stones, come down the mountain and will base a religion around it. These things are cyclical.
Re:RAID doesn't protect against your worst enemy (Score:4, Funny)
Let's hope he discovers some porn this time...
Re:RAID doesn't protect against your worst enemy (Score:4, Informative)
"This time?"
Ah, I see you've never read "Song of Songs"
Re:RAID doesn't protect against your worst enemy (Score:4, Funny)
You leave RMS out of this!
Re: (Score:3, Funny)
The Egyptians found a way to preserve their message over thousands of years, surely we can come up with something. :)
And they would have saved future generations from vast amounts of confusion and effort, if they'd only been a little more diligent backing up their pyramid construction HOWTO files.
Re:RAID doesn't protect against your worst enemy (Score:5, Insightful)
RAID doesn't protect against your worst enemy
rm -r *
nor is it supposed to. not being a moron seems to have protected me from "my worst enemy" just fine. RAID has protected me from random disk failures. seems to be working as designed
Re:RAID doesn't protect against your worst enemy (Score:5, Funny)
That doesn't work for me. Try
Re:RAID doesn't protect against your worst enemy (Score:4, Funny)
Re:RAID doesn't protect against your worst enemy (Score:5, Funny)
(though it's been running since '04 without any problems, and my HD health monitors show it in good shape)
Oh man.... you didn't just say that out loud did you???
Re:RAID doesn't protect against your worst enemy (Score:4, Informative)
Redundancy... You keep using that word. I do not think it means what you think it means.
RAID 0, pseudo-ironically, is not redundant at all. RAID 1, often called mirroring, is the level that actually is redundant.
Re: (Score:3, Insightful)
If you source the original term 'RAID', it goes to an ACM article describing Redundant Arrays of Inexpensive Disks. In RAID 0, which is actually a marketing term, there's striping, but no redundancy that can infer the contents of a missing member of the array. From the perspective of availability, it has none. As you cite, RAID 1 is a mirrored pair, usually the same type of drive, and it also is likely the fastest RAID-- and most expensive in terms of available net data after redundancy for availability. Th
Re:RAID doesn't protect against your worst enemy (Score:5, Informative)
Well, Windows does. Taking a snapshot of NTFS, even on a heavily used 1TB+ file server, takes only a few seconds, and under normal operation the file system is still fast.
NTFS is actually a pretty good file system. It's probably because it was originally designed by IBM.
Re:Carefully protected? (Score:5, Insightful)
Re:Carefully protected? (Score:4, Funny)
Patent that right NOW, I think we've got a winner to replace RAID-5.
Re: (Score:3, Interesting)
True.
Also FWIW I only run RAID 1 and JBOD.
Things that must be on-line, or that are destined for JBOD but not yet archived to backup media, are located on one of the RAID volumes. For everything else it's off to JBOD, where things are better than RAID5.
Why?
I have 6 TB of JBOD storage and 600 GB (2 x 300 GB volumes) of RAID 1. If I striped the JBOD into 6TB (7 drives) and one drive failed all the near-line data would be virtually off-line (and certainly read-only) while the array re-built. With JBOD, should a
Re: (Score:3, Insightful)
I do the same thing, but I want to warn you...
I've had TWO occasions where it has failed me. Once, a lightning strike that zotched both drives. The second time a rubber isolator failed in the case and the master drive fell onto the backup.
In both cases the bad spots in the two drives were different so I got back most of my data, but now I use Mozy as well as mirroring. I REALLLLLLLY don't want to lose all of my digital photos. :)
Confessions of a reformed RAID addict (Score:5, Funny)
You get your first RAID controller from a trusted friend. "Here" he says "try this" and hands you a Mylex board. It has a 64 bit bus and 3 SCSI LVD connectors. Oooh. That looks fast. So you start ebaying drives, cables, adapters, more controllers, the inevitable megawatt power supply and you mess around with raid 1, raid 0, raid 1+0 and raid 5. Suddenly every system falls prey to RAIDMANIA; eventually for yourself you build a system with 3 controllers, with 3 busses each and a drive on each one of 9 busses. With a controller for swap, one for data and one for the system, will Windows now be fast? Yeah, sorta. Those drives sure are quiet - from a click-click busy noise perspective, NOT from a "sounds like a jet airplane when running" perspective. Heat is an issue, too.
http://rs79.vrx.net/works/photoblog/2005/Sep/15/DSCF0007s.jpg [vrx.net]
But oh my are the failure modes spectacular.
I just use a laptop now and make several sets of backup DVDs or just copy to spare drives. I love RAID to death. But it's really only marginally worth the effort in the real world. But if you need fast, OMG.
RAID != Backup (Score:4, Insightful)
I mean, WTF? Many people regard RAID as something magical that will keep their data no matter what happens. Well ... it's not.
Furthermore, for many enterprise applications disk size is not the main concern, but rather I/O throughput and reliability. Few need 7 disks of 2 TB in RAID5.
Re:RAID != Backup (Score:4, Insightful)
Furthermore, for many enterprise applications disk size is not the main concern, but rather I/O throughput and reliability. Few need 7 disks of 2 TB in RAID5.
Some of us do need a large amount of reasonably priced storage with fast read speed & slower write speed. This pattern of data access is extremely common for all sorts of applications.
And this raid 5 "problem" is simply the fact that modern sata disks have a certain error rate. But as the amount of data becomes huge, it becomes very likely that errors will occur when rebuilding a failed disk. But errors can also occur during normal operation!
The problem is that sata disks have gotten a lot bigger without the error rate dropping.
So you have a few choices:
- use more reliable disks (like scsi/sas) which reduce the error rate even further
- use a raid geometry that is more tolerant of errors (like raid 6)
- use a file system that is more tolerant of errors
- replicate & backup your data
Re:RAID != Backup (Score:4, Insightful)
I've always understood it as RAID exists to keep you running either during the 'outage' (i.e. until a new disk is built) or at least long enough to shut things down safely and coherently (as opposed to computer just locking up or some such).
It's designed to give you redundancy until you fix the problem. It's designed to let you limp along. It's not designed to be a backup solution.
As others have mentioned: if you want a backup set of hard drives, you run RAID 10 or 15 or something where you have two(+) full copies of your data. And even that won't work in many situations (i.e. computer suddenly finds itself in a flood).
All that said, the guy has a possible point. How long would it take to build a new 1TB drive into an array? That could be problematic.
There is a reason SANs and other such things have 2+ hot spares in them.
Re:RAID != Backup (Score:5, Informative)
But that's growing from a previous capacity to a larger capacity.
Using mdadm to fake a failure by removing and adding a single drive, the recovery time generally was 4-5 hours.
What. (Score:4, Insightful)
Just double-up on everythign (Score:4, Informative)
If you have one RAID5 box, just build another one that replicates it. Use that for your "hot backup". Then back that up to tape, if you must.
Storage is so cheap these days (especially if you don't need super-fast speeds and can use regular SATA drives), that you might as well just go crazy with mirroring/replicating all your drives all over the place for fault-tolerance and disaster-recovery.
Testable assertion (Score:4, Interesting)
This is trivially testable. Any slashdotters have experience rebuilding 7TB RAID 5 arrays?
You'd think, if this were really an issue, we'd be hearing stories from the front lines of this happening with increasing frequency. Instead we have a blog post based entirely on theory, without a single real-world example for corroboration.
What's more, who even uses RAID 5 anymore? I thought it was all RAID 10 and whatnot these days.
Re:Testable assertion (Score:4, Informative)
Re: (Score:3, Interesting)
It really only deals with SATA drives (SAS probably has lower failure rates) and it only becomes a statistical issue with mammoth amounts of data (the amount quoted in the article is 1 data read error per 14TB)
Sounds.. well. Stupid (Score:5, Insightful)
I can see a lot of people getting into a tizzy over this. The RAID 5 this guy is talking about is controlled by one STUPID controller.
There are a lot of methods, and patented technology that prevent just the situation he is talking about. Here is just one example:
RAID is not perfect, not by any stretch, but if you use it properly it will serve its purpose quite nicely. If your data is that critical, having it on a single raid is ill advised anyways. If you are talking about databases, then RAID 10 is preferable and replicating the databases across multiple sites, even more so.
Smells Like FUD. (Score:5, Insightful)
What is this article about?
They say that since there is more data, you're more likely to encounter problems during a rebuild.
The issue isn't with RAID, it's with the file system. Use larger blocks/sectors.
Losing all of your data requires you to have a shitty RAID controller. A decent one will reconstruct what it can.
The odds of you encountering a physical issue increases as capacity increases, and decreases as reliability increases. In theory, the 1 TB and up drives are pretty reliable. Anything worth protecting should be on server-grade hard drives anyway.
The likelihood of a physical problem popping up during your rebuild is no higher with new drives than it was with old drives. I haven't noticed my larger drives failing at higher rates than my older, smaller drives. I haven't heard of them failing at higher rates.
Remember, folks, RAID is a redundant array of inexpensive disks. The purpose of RAID is to be fault-tolerant, in the sense that a few failures don't put you out of production. You also get the nice bonus of being able to lump a bunch of drives together to get a larger total capacity.
RAID is not a backup solution.
RAID 5 and RAID 6, specifically, are still viable solutions for most setups. If you want more reliability, go with RAID 1+0, RAID 5+0, whatever.
Choosing the right RAID level has always depended on your needs, setup, budget, and priorities.
Smells like FUD.
Taking published stats too seriously? (Score:3, Interesting)
The whole argument boils down to the published URE rate being both accurate and a foregone conclusion. Will disk makers _really_ make drives that have a sector failure for every 2 terabytes, or will they improve whatever technology is causing these UREs to be much more rare? (if the rate was real in the first place).
RAID Is not a Backup !!!!! (Score:5, Insightful)
How many times does this have to be said.
RAID is not a backup. RAID is designed to protect against hardware failures. It can also increase your I/O speed, which is more important in some cases. Backups are different.
Depending on what you are doing, you may or may not need a RAID, but you definitely need backups.
The problem is time, not reliability (Score:3, Interesting)
In practice, this means that while your array is rebuilding, your performance SLAs go out of the window. If this is for an interactive server, such as a TP database or web service you end up with lots of complaints and a large backlog of work.
The result is that as disks get bigger, the recovery takes longer. This is what makes RAID less desirable, not the possibility of a subsequent failure - that can always be worked around.
RAID6 = Win (Score:3, Insightful)
Scrub once a week, or once every two weeks.
RAID6 isn't about losing any two disks, it's about having two parity stripes. It's about being able to survive sector errors without any worry.
It's about losing ONE drive and still having enough parity to replace it without any errors.
RAID6 on 5 drives is retarded, tho, because it leaves you absurdly close to RAID1 in kept space. RAID6 is for when you have 8-10 drives. At that point you barely notice the (N - 2) effect and you have a fast (provided your processor can handle it all) chunk of throughput along with an incredibly reliable system. Well, N-3 with a hotswap.
Personally, I think I'd go RAID-Z2 via ZFS if only because it's a little bit sturdier a filesystem to begin with.
1 in 10^14 bit is not what I observe (Score:5, Informative)
My observed error rate with about 4TB of storage is much, much lower. I did run a full surface scan every 15 days for two years and did not have a single read error. (The hardware has since been decommissioned and replaced by 5 RAID6 arrays with 4TB each.)
So, I did read roughly 100 times 4TB. That is 400TB = 3.2 * 10^15 bits with 0 errors. That does not take into account normal reads from the disks, which should be substantially more.
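Spelling that arithmetic out, and comparing it with what the advertised spec would predict (a quick Python check):

passes, capacity_tb = 100, 4
bits_read = passes * capacity_tb * 1e12 * 8      # ~3.2e+15 bits
expected_ures = bits_read * 1e-14                # what the 1-in-1e14 spec predicts
print(f"bits read: {bits_read:.1e}, expected UREs at spec: {expected_ures:.0f}, observed: 0")

At the published rate you'd expect on the order of 30 unrecoverable read errors over that much reading, versus zero observed.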
Re: 1 in 10^14 bit is not what I observe (Score:4, Interesting)
Modern drives make extensive use of error-correcting codes. It's not that expensive, space-wise, to have a code which can recover from problems to almost any desired degree of confidence. I'd be shocked if any hard drive manufacturer wasn't using an ECC that gave their devices a very near zero chance of any user experiencing a corrupted read for the entire lifetime of the drive.
My solution (Score:3, Insightful)
I'm in the process of building a new 8x 1T array. I'm not using any fancy raid card. Just an LSI 1068E chipset with a SAS expander to handle LOTS of drives (16 slots in the case, using 8 right now).
I'm not putting the entire thing into one big array. I'm breaking up each 1T drive into ~100GB slices that I will use to build several overlapping arrays. Each MD device will be no more than 4-5 slices. This way, if an error occurs in one part of one disk, I will have a higher probability of recovery.
I may also use RAID 6 to give me more chance of rebuilding.
Disk errors tend to not be whole disk errors, just small broken physical parts of a single disk.
SMART will give me more chance to detect and replace dying drives.
1 Controller Error from Failure + Year Old Story (Score:3, Insightful)
First off, isn't this story a year+ old? Sheesh.
Second off, if you're worried about URE on X number of disks, what about a single capacitor cooking off on the raid controller? No serious data is stored on a single raid controller system, without good backups or another raid'd system on completely unique hardware. Yes, if you put a lot of disk on one controller and have a failure you have a higher risk of *another* failure. That's why important data doesn't depend on *only* RAID, and why lots of places use mirroring, replication, data shuttling, etc. This isn't new. Most folks that can't afford to rebuild from backups or from a mirror'd remote device also couldn't have used 12TB for anything *but* bulk offline file storage because it's slower than christmas VS a 'real' storage array. Using it for the uber HD DVR? Great. Oh no, you lose X-files's last episodes. This isn't banking data we're talking here.
Scrub your arrays (Score:5, Interesting)
This is why you scrub your RAID arrays once a week. If you're using software RAID on Linux, for example:
echo check > /sys/block/md0/md/sync_action
The above will scrub array md0 and initiate sector reallocation if needed. You do this while you have redundancy so the bad data can be recovered. Over time, weak sectors get reallocated from the spare bands, and when you do have a failure the probability of a secondary failure is very low over the interval needed for drive replacement.
Most non-crap hardware controllers also provide this function. Read the documentation.
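If you have more than one array, here's a tiny Python equivalent (run as root, e.g. from a weekly cron job) that kicks off the same check on every md device the kernel exposes:

import glob

for path in glob.glob("/sys/block/md*/md/sync_action"):
    with open(path, "w") as f:       # writing "check" starts a scrub, same as the echo above
        f.write("check\n")
    print(f"scrub started on {path}")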
Re: (Score:3, Informative)
Punch Cards (Score:3, Funny)
RAID6 is far better. (Score:3, Informative)
Not only are there two parity drives, but the operating system can perform automatic scanning of the drives to ensure that all data and parity disks are correct and silently correct any errors that occur on only one disk. It only takes a few days to scan 12 TB, and if this is done often enough the probability of two failed disks plus a previously undetected unrecoverable error on a third disk is quite a bit lower than the failure rate for RAID5. RAID5 volumes can be automatically scanned, but if corruption is detected there's no way to know which of the disks was actually incorrect, barring an actual message from the hard disk. Silent corruption is a much bigger enemy of RAID5 than RAID6.
I don't know why the article focuses on RAID5; RAID1 or RAID10 will have exactly the same issues at a slightly lower frequency than RAID5, but more frequently than RAID6.
Ultimately, the solution is simply more redundancy, or more reliable hardware. RAID with 3 parity disks is not much slower than RAID6, and dedicated hardware or increasing CPU speed will take care of that faster than drive speeds increase.
Raid 5 - Kills Drives Dead(tm) (Score:5, Funny)
RAID???!!! Aaaaaaah! (Drive dies.)
I'm convinced. (Score:5, Interesting)
I have to say, the ZFS folks have convinced me. There are simply too many places where bit rot can creep in these days even when the drive itself is perfect. The fact that the drive is not perfect just puts a big exclamation point on the issue. Add other problems into the fray, such as phantom writes (which have also been demonstrated to occur), and it gets very scary very quickly.
I don't agree with ZFS's race-to-root block updating scheme for filesystem integrity but I do agree with the necessity of not completely trusting the block storage subsystem and of building checks into the filesystem data structures themselves.
Even more specifically, if one is managing very large amounts of data one needs a way to validate that the filesystem contains what it is supposed to contain. It simply isn't possible to do that with storage-system logic. The filesystem itself must contain sufficient information to make validation possible. The filesystem itself must contain CRCs and hierarchical validation mechanisms to have a proper end-to-end check. I plan on making some adjustments to HAMMER to fix some holes in validation checking that I missed in the first round.
-Matt
The Black Swan (Score:5, Interesting)
A Black Swan is an event that is highly improbable, but statistically probable.
Yes, it is possible for a drive in a RAID 5 array to become absolutely inoperable, and for one of the other drives to have a read failure at the same time. This is highly unlikely though, and is not the Black Swan. The math used to calculate the likelihood of these two events occurring at the same time is faulty. The MTBF metric for hard drives is measured in 'soft failures'; this is very different from a 'hard failure'.
The difference between the two types of failures is that a soft failure, while a serious error, is something that the controlling operating system can work around if it detects it. It is extremely unlikely that a hard drive will exhibit a hard failure without having several soft failures first. It is even more unlikely that two drives in the same array will exhibit a hard failure within the length of time it takes to rebuild the array. In my experience, it is more likely that the software controlling the array will run into a bug rebuilding the array. I've seen this with several consumer-grade RAID controllers.
The true Black Swan is when a disk in the array catches fire, or does something equally as destructive to the entire array.
To echo other people's points, RAID increases availability, but only an off-site backup solves the data retention problem.
Re:Dont worry too much (Score:5, Informative)
The real issue is one that anyone who has ever had to recover a multi-drive array can tell you instantly: if one drive fails, and the other drive was bought at the same time, and has had a nearly identical usage pattern, the odds of the other drive failing are well above average.
I once had a single drive fail in a 24 disk array. The disks were arranged, RAID 5, in groups of 3, glued together by Veritas (from back before it got bought by crappy symantec). By the time the smoke cleared we had replaced 19 out of 24 drives. They had all been bought at the same time, and as they thrashed rebuilding their failed buddies, they started dying themselves. The remaining 5 drives we replaced anyway, just because.
That's a worst case, but multiple failures are far from uncommon, and very few people correctly cycle in new drives periodically to reduce the chance of a mass failure.
Re:Dont worry too much (Score:5, Insightful)
... very few people correctly cycle in new drives periodically to reduce the chance of a mass failure.
That is also because very few people buy a Raid setup piecemeal. Most end up buying a solution, fully populated. The idea of swapping out some drives as you go, or growing your RAID over time doesn't always look good, either to the PHBs who usually run the budget, or to the vendor. We had a vendor trying to sell us an iSCSI SAN device tell us that varying the drive lots and dates increased the chances of failure. Needless to say we went elsewhere.
When we bought the RAID array for our Exchange box, this is going back a few years, everybody looked at me like an idiot because I asked for drives with different lot numbers. It was the best I could do as buying over time was not an option. HP was actually pretty cool about this request and out of 8 disks, no 3 have the same lot number or manufacture date.
Of course we are also running RAID on that machine for non-backup purposes and do a nightly replication, so your mileage may vary.
Re:RAID is fine, stupid admins are not! (Score:4, Insightful)
RAID 5, as well as RAID 6, is nothing more than an attempt to add some amount of redundancy without sacrificing too much space. Go RAID 1 instead with the same number of disks.
As far as I'm concerned, RAID 5 really has no redeeming features (it's slow, not particularly safe, but lulls people into a false sense of security).
From a data integrity perspective, though, RAID6 is a better solution than RAID1.
Given arrays of equal sizes, with RAID6 your data can survive the loss of *any* two disks; with RAID1, if you lose two disks which happen to be a mirrored pair, then you're hosed.
But, as you point out, RAIDn doesn't really qualify as "carefully protected"
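A quick sanity check of the "any two disks" point: with mirrored pairs, a random second failure only kills the array if it happens to hit the first failed disk's partner, while RAID 6 survives any pair by construction. Assuming the two failures land on uniformly random disks:

from fractions import Fraction

for pairs in (2, 4, 6):
    disks = 2 * pairs
    p = Fraction(1, disks - 1)       # chance the second failure is the first disk's mirror
    print(f"{disks} disks ({pairs} mirrored pairs): P(fatal double failure) = {p} (~{float(p):.0%})")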
What's your beef with RAID 5? (Score:4, Insightful)
Seriously - what's the problem with RAID 5? It's not a FALSE sense of security: It actually DOES prevent data loss or down time on a single disk failure. If you're a moron, you're creating 14 disk arrays. If you're smart, you keep it to 7 disks at the very most.
RAID 5 is great. It's fast, unless you have a shit controller without enough cache. It's going to prevent down time on a single disk failure (which is overwhelmingly the most common type of failure) and it doesn't cost you too much capacity.
Usually I'm more concerned with a fire or flood than a double-disk failure.
RAID 6 is good, but you get the same (actually worse) performance hit over RAID 5. More parity calculations. You can lose any two disks, which is nice, and if you can spare the space, go for it!
I don't see RAID 6 as being all that much more of a big deal over RAID 5 and actually it shouldn't really have its own number since it's exactly the same technology and parity system as 5. It should be RAID 5.1 or something. Or maybe RAID5+1. The only reason it's become more available now is because controllers have gotten fast enough to deal with the additional parity.
RAID is about avoiding PRODUCTION downtime. (Score:3, Informative)
Spell it out for everyone.
RAID won't save your data if there is a fire.
Or if you delete a file.
Or if two drives fail.
Or a thousand other scenarios.
All RAID does is prevent the system from going down when a single drive fails (except RAID 0). Thus giving everyone in the office time to finish up their important work and log out for the day so you can swap the drive. Or, if you're brave, swap the drive during regular work hours.
For the home user (not working on huge graphic files) RAID 1 (mirroring) should be
Dumbass. (Score:3, Insightful)
I guess you should be considered a new age Luddite?
Are you the same guy that always waits for SP1 before using any software? I thought so.
RAID is a proven technology and it's used in nearly all business IT systems, from big to tiny.
RAID isn't meant as a replacement for backups. It's one PART of the entire system of preventing unnecessary data loss and, more importantly, down time. You can keep on running your server while the failed disk is replaced and rebuilt.
So, while I eat cheeto's and surf Slashdot w
Re:Can I tell you where to insert your plug? (Score:5, Informative)
Wow. I love your FUD. If you're going to lie, at least make it seem truthful.
Lacking in file system utilities (yes, fsck IS necessary even on healthy filesystems, especially on desktops and portables)
Why no fsck? [opensolaris.org] And if you really feel the need to do something:
License-incompatible with anything worth running it on, other than Solaris itself... which is NOT worth running (see #1 above)
What you mean to say is "Some Operating Systems whose merits can be debated are license incompatible with the license of ZFS." FreeBSD can implement ZFS. Why can't Linux? Because of its license, not that of ZFS.
Re: (Score:3, Informative)
You DID see my previous reply, right?
Yes, I did. It quotes an explanation that you can only fix errors in a redundant configuration. Considering that the whole basis for this discussion is RAID-5, I think that's a feasible thing. However, metadata is written in multiple places, so if you want a ZFS fsck to correct a corrupted superblock, it's kinda silly since that superblock is written in multiple places anyway. Also, you can tell ZFS to do a manual scrub (as I showed) which has the advantage of running while