

RAID's Days May Be Numbered
storagedude sends in an article claiming that RAID is nearing the end of the line because of soaring rebuild times and the growing risk of data loss. "The concept of parity-based RAID (levels 3, 5 and 6) is now pretty old in technological terms, and the technology's limitations will become pretty clear in the not-too-distant future — and are probably obvious to some users already. In my opinion, RAID-6 is a reliability Band Aid for RAID-5, and going from one parity drive to two is simply delaying the inevitable. The bottom line is this: Disk density has increased far more than performance and hard error rates haven't changed much, creating much greater RAID rebuild times and a much higher risk of data loss. In short, it's a scenario that will eventually require a solution, if not a whole new way of storing and protecting data."
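A quick back-of-the-envelope sketch of the rebuild-risk claim (in Python, with illustrative numbers: the 1-in-10^14-bits unrecoverable-read-error rate is a commonly quoted consumer-drive spec, and the 7-drive RAID-5 is hypothetical):

```python
# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding a RAID-5 array: every surviving drive is read in full.

URE_PER_BIT = 1e-14          # common consumer-drive spec-sheet figure
SURVIVING_DRIVES = 6         # hypothetical 7-drive RAID-5, one drive dead

for drive_tb in (0.25, 0.5, 1.0, 2.0):
    bits_read = drive_tb * 1e12 * 8 * SURVIVING_DRIVES   # decimal TB -> bits
    # P(at least one unreadable bit), assuming independent errors
    p_ure = 1 - (1 - URE_PER_BIT) ** bits_read
    print(f"{drive_tb:4.2f} TB drives: ~{p_ure:.0%} chance of a URE mid-rebuild")
```

The trend is the point: with the same error rate per bit, each jump in drive capacity pushes a rebuild closer to a coin flip.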
simple idea (Score:3, Interesting)
Don't consider an entire drive dead if you get a piddly one-sector error.
Just mark it read only and keep chugging.
reallocate on write (Score:3, Informative)
Or just regenerate and write the one sector from the parity data since all modern hard disks reallocate bad sectors on write.
Re:simple idea (Score:5, Informative)
Enterprise arrays copy all the good data off the drive to a spare drive, use RAID to recover the failed sector(s), then fail the broken disk.
Re:simple idea (Score:5, Insightful)
Enterprise arrays are also very, VERY different from what most people know as RAID. Smart controllers, smart drive cages, drives that are an order of magnitude better than the consumer-grade garbage.
The summary talks about how speed has not kept up with capacity. Yes, that is correct for low-grade consumer junk; enterprise server-class RAID drives are a different story. The 15,000 RPM drives I have in my RAID 50 array here on the database server are insanely fast. Plus, server-class drives don't come in silly, unstable capacities like 1TB or 1.5TB; they are "OMG small" 300GB drives, but stable as a rock.
So I guess the question is: is the summary talking about RAID on junk drives, or RAID on real drives?
Re: (Score:3, Interesting)
They aren't talking about drive speeds as much as failure rate:
The bottom line is this: Disk density has increased far more than performance and hard error rates haven't changed much, creating much greater RAID rebuild times and a much higher risk of data loss.
They're saying that the MTBF of drives has not gone up as fast as capacity, and that a missed write is actually quite likely with a modern high-capacity drive. Even saying drive speeds haven't gone up is very accurate: 15k RPM drives have been around for at least 10 years now, and there has been no improvement in rotational speed in that time. Where are my 30k RPM drives?
Also, I have a bit of a problem with your st
Re:simple idea (Score:5, Interesting)
You're not likely to see 30k RPM drives any time soon. At 15k RPM the outer edge of a 3.5" platter is spinning pretty fast, getting close to the speed of sound, and the lion's share of the power consumed by 15k drives goes into counteracting the air buffeting the heads. With 2.5" drives we could go faster, but while drives are open to the air it's not likely we'll see much change in the short term.
It's why CD-ROM speeds haven't gone up much since the old days of 52x.
As areal density improves, drives will be able to push out more raw MB/sec, just like DVD is better than CD, but in terms of IOPS they're not likely to dramatically improve.
Re: (Score:3, Funny)
Until some genius figures out how to build one with no air inside?
Re:simple idea (Score:5, Interesting)
Even partial evacuation would help, but you run into the problem that the read heads are designed to use the air to keep them from contacting the platters, so you'd need to replace that effect somehow.
The Space Shuttle and ISS even have special sensors to shut hard drives down if the air pressure drops too low; reading about that was how I found out that hard drives are designed to use air.
Not to mention that you're now trying to build an air tight container, but if you're looking at ultra-high performance drives that's less of an issue.
Still, you have to look at how much such a drive would cost, and whether the cost would ever be repaid. If I were looking at investing in such technology, I'd be concerned that flash would outpace my vacuum drives before I got them released. Even if I DID manage to find a niche, would the niche last long enough against flash memory that's getting faster and cheaper so quickly?
For certain data sets and access patterns, flash is already much cheaper than the old RAID options. The best example I saw was a dataset of a few hundred gigabytes that was mostly read-only, but accessed so much, and so randomly, that they had to mirror it across 10 hard drives to meet the read demand. One professional-level SSD performed BETTER, while costing less than half as much as that setup.
fill the drive with helium (Score:4, Interesting)
Filling the drive with helium should help; the speed of sound in helium is 3x higher than in air, and it offers less resistance.
(Hydrogen would be even better, but it has a tendency to interact with metals in unfortunate ways.)
Re:fill the drive with helium (Score:4, Informative)
Filling the drive with helium should help;
Yeah. For about half a week. Helium has the smallest "gas particles" there are (hydrogen atoms would be smaller, but those really like to bond, and an H_2 molecule is quite a bit larger than a helium atom). That's why He leaks out of everything, no exceptions. It diffuses through "leakproof" welds on vacuum tanks. It diffuses through the steel walls of tanks (albeit more slowly). That's also why He is used in leak detection: inject a little He, and if you see fewer than $not_so_few He atoms on the outside of the container within a couple of seconds, the container is considered airtight.
The only way to keep a He atmosphere in your drive would be to constantly refill it. I don't think there'll be any scenario where that would seem like even a remotely good idea.
Re: (Score:3, Interesting)
Filling the drive with helium should help; the speed of sound in helium is 3x higher than in air, and it offers less resistance.
(Hydrogen would be even better, but it has a tendency to interact with metals in unfortunate ways.)
Thinking about it, methane might be a more practical choice. Yes, it's denser than helium, so the effect won't be anything like as strong (the speed of sound in methane is only about 30% faster than in air), but it's also very cheap and readily available, and won't cause too many problems by interacting with the rest of the drive. Having to seal the drive is an issue, yes, but that's not far off what's needed now; it's imperative that dust is kept out of the platter enclosure anyway...
Re: (Score:3, Insightful)
there are these things called filters.
They work pretty well.
Re: (Score:3, Informative)
I don't get it: why not just seal them hermetically with helium inside, and not worry about outside air pressure?
1. Hasn't been necessary
2. Helium is expensive
3. Sealing something Helium-tight is expensive, about as bad as trying to seal in hydrogen*
4. Fairly sensitive to pressure - not a problem in a non-airtight HD, but a problem in a sealed HD that's heating up.
5. Cooling can be an issue
*Mostly because He tends to stay monoatomic, H pairs up into H2. End result is that the H2 molecule is around the same size as a He atom.
Re:simple idea (Score:4, Interesting)
Can you say "instantaneous heat death"? Vacuum is an excellent insulator.
Re:simple idea (Score:4, Funny)
the lion's share of the power consumed by 15k drives goes into counteracting the air buffeting the heads
Until some genius figures out how to build one with no air inside?
Lions need air.
Re:simple idea (Score:4, Informative)
Speed of sound at sea level: 340.29 m/s verify [google.com]
(3.5 inches) * (2.54 cm/inch) * pi * (15000 revolutions / 60 seconds) * (0.01 m/cm) = 69.8218967 m/s verify [google.com]
If my calculation is correct, the outer edge of a 3.5" platter spinning at 15,000 RPM is moving at 69.82 m/s, which is about 20% of the speed of sound. It's fast, but it's nowhere near the speed of sound.
Re: (Score:3, Informative)
You know google does the conversion for you: 2*pi*3.5 inches * 15,000 minute^-1 in m/s [google.com] = 140 m / s
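Redoing both calculations side by side (a quick Python check; the disagreement above is just diameter versus radius, and note that real 15k drives actually use platters smaller than 3.5 inches anyway):

```python
import math

INCH = 0.0254                 # metres
rev_per_sec = 15000 / 60

# Treat 3.5" as the platter DIAMETER (the grandparent's arithmetic):
v_dia = math.pi * (3.5 * INCH) * rev_per_sec
# Treat 3.5" as the RADIUS (the parent's formula, 2*pi*r):
v_rad = 2 * math.pi * (3.5 * INCH) * rev_per_sec

print(f"as diameter: {v_dia:6.1f} m/s")                  # ~69.8 m/s
print(f"as radius:   {v_rad:6.1f} m/s")                  # ~139.6 m/s
print(f"fraction of 340.29 m/s: {v_dia / 340.29:.0%}")   # ~21%
```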
Re: (Score:3, Funny)
340.29 m/s is the speed of sound in a vaccuum.
Moran.
Re: (Score:3, Informative)
Besides which, I have no idea what the speed of sound has to do with the theoretical upper limit on the speed of a spinning disk. It's not like an airplane wing with a trailing shock wave. I would think there are much more pressing problems keeping us from seeing 30K RPM hard drives anytime soon, like:
- Shear strength of the platter material
- Total mass of the platter, especially near the edge
- Heat generated in the bearings
- Energy necessary to spin the platter at that speed
- Torsional forces
Re: (Score:3, Informative)
I'm surprised that nobody has mentioned the issue of failure of the drive material itself at higher rotational velocities.
I believe CDs are limited to 52X because the polycarbonate they're made of explodes when you spin them much faster than that (with a safety factor, of course).
A metal hard drive probably can take more speed, but I'm sure that at some point you get deformation of the platter. You also have bearings/etc to deal with. 30k is a pretty fast rotation rate - and we're talking about a dev
Re:simple idea (Score:4, Insightful)
You'd need a whole new way of keeping the head off the platter. You'd have a problem with lubricants vaporizing. Heat would be a problem as well.
Re:simple idea (Score:5, Informative)
Air is necessary for the read/write head to operate. The piece that comes into close proximity of the platter is essentially a tiny hovercraft. It's about the size of a pepper flake, and has a microscopic pattern called an "air bearing" carved into the side facing the platter. Designing this air bearing is an exercise in fluid dynamics -- it is the shape of the bearing and how air flows over it that allows the read/write head to skim over the surface of the platter at a distance measured in microns without actually contacting the surface of the platter.
If the read/write head does contact the surface of the platter, that is called a head crash, and is bad.
Re: (Score:3, Interesting)
Why not add multiple heads to the same platter?
Keep the disk spinning at 15K, but add more heads, each with its own actuator and everything. One could be read-only, the other write-only; whatever makes sense.
Re: (Score:3, Informative)
No: to reconstruct one sector you have to read the corresponding sector from every other drive, then write one sector to the replacement drive. Effectively, to reconstruct a whole drive you have to read the whole RAID, so both read and write speeds count.
Re: (Score:3, Interesting)
Unfortunately, all that is quite a myth for the most part.
Having worked in storage for aeons, the reality is that the difference between enterprise and "consumer grade rubbish" has very little to do with anything but tolerances. If you picked up a 300GB 10k enterprise drive and compared it to the consumer-grade rubbish, you'd find nothing different. It used to be the case, way back when, that they were very different, but because consumer-grade drives have gotten so much better it's just not worth the expense of bui
Re:simple idea (Score:4, Informative)
They do, to varying degrees of success, but just because a disk can't read a particular sector doesn't mean the drive is faulty: it could be a simple error on the onboard controller that is causing the issue.
FC/SAS drives mostly leave error handling up to the array rather than doing it themselves, because the array can typically make better decisions about how to deal with the problem; this also helps with time-sensitive applications. The array can choose to issue additional retries, reboot the drive while continuing to use RAID to serve the data, etc.
Consumer SATA drives, on the other hand, try really hard to recover from the problem themselves, for example retrying again and again with different methods to get the sector back. While admirable, that leads to the behaviour we see in consumer land where the PC just "locks up". The assumption is that there is no RAID available, so reporting an error back to the host is "a bad thing". The enterprise SATA drives we're seeing on the market are starting to disable this automatic functionality so that they behave correctly when inserted into RAID arrays.
Usually ;-)
Solved a Long Time Ago (Score:5, Informative)
Re:Worked-around a Long Time Ago (Score:5, Interesting)
But really, none of that should be necessary for the general case. Storing data in different physical locations is a good but entirely unrelated issue; the main problem of disk reliability is still very much in need of a solution. That's pretty much the point of the article: you can come up with various solutions which move the problem around and give multiple fallbacks for when something goes wrong, but there's still the problem of things going wrong in the first place. I shouldn't need to use 12 separate disks spread across the globe just for basic reliability/redundancy.
Re:Worked-around a Long Time Ago (Score:5, Funny)
I shouldn't need to use 12 separate disks spread across the globe just for basic reliability / redundancy
You're trying to weasel out of paying IBM protection money!
Re:Worked-around a Long Time Ago (Score:4, Funny)
Don't discourage the boy. Weaseling out of things is important to learn. It's what separates us from the animals
Re:Worked-around a Long Time Ago (Score:5, Funny)
Not from weasels, though...
Re:Worked-around a Long Time Ago (Score:5, Interesting)
Actually, storing data in a multiple-data-center / high-availability environment is a completely related issue. The summary above talks of "entirely different paradigms." Cloud storage would be multiple-data-center based, which is entirely different from keeping the only copy on your local drives. In this concept, your machine would have enough OS to boot, and enough hard drive space to download the current version of whatever software you are leasing. Your personal info would always be maintained in the data centers, and only mirrored locally. Have a home failure? Drop in a new part or even a new PC (possibly with an entirely different operating system, such as Chrome OS), connect to the service, and you're 100% back.
It's no longer a novel concept for the home market. Consider Google Docs. It's not even being sold as "safer than RAID"; it's being touted as "get it from anywhere" or "share with your friends". Safer than RAID is just a bonus.
So are we ready to move all our personal information to clouds? I certainly am not, but Google Docs is wildly popular and a lot of people are. I long ago learned that I can't look to myself to judge what mainstream attitudes are in many things.
Re:Worked-around a Long Time Ago (Score:5, Insightful)
Consider Google Docs.
If you have so much data that you're likely to encounter an error when rebuilding your RAID array, I don't think Google Docs is going to cut it.
Re: (Score:3, Informative)
Well, the point of the article is that if it takes your array 6 hours to rebuild instead of 4, because capacities have gone up while the failure rate of the hardware is unchanged, you have a problem: you are more likely to experience another failure before the first one has been mitigated. If you have that additional failure on most RAIDs (unless you are doing 5-5 or 1-5 or some other RAID-over-RAID scheme), you get downtime. The volume is offline and must be restored from some
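A sketch of that shrinking safety margin (hypothetical numbers, and a deliberately naive model: real drive failures are correlated, which makes things worse):

```python
# Chance that one of the surviving drives dies before the rebuild
# finishes, modelling failures as independent with a constant hazard rate.
import math

MTBF_HOURS = 500_000     # a typical quoted spec, not a measurement
SURVIVORS = 7            # drives that must stay alive during the rebuild

for rebuild_hours in (4, 6, 24, 72):
    # P(any of N drives fails within t) = 1 - exp(-N * t / MTBF)
    p = 1 - math.exp(-SURVIVORS * rebuild_hours / MTBF_HOURS)
    print(f"{rebuild_hours:3d} h rebuild: {p:.4%} chance of a second failure")
```

The absolute numbers are tiny with spec-sheet MTBF, but the risk scales linearly with the rebuild window, and that window is what keeps growing.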
Bogus outdated thinking (Score:5, Interesting)
The author says it himself in the article:
"And running software RAID-5 or RAID-6 equivalent does not address the underlying issues with the drive. Yes, you could mirror to get out of the disk reliability penalty box, but that does not address the cost issue."
but he hasn't addressed the fact that today you get 100 times as much disk space for the same cost as you did 10 years ago, when cost was a factor. In real life, cost isn't a factor when it comes to data storage, simply because it's really low compared to the other costs in a project requiring storage. So if you want the reliability, you go get a mirror. Drive space is dirt cheap.
As for the rebuild times: fine, go buy FASTER drives. I don't see the problem. HP and many other vendors have long been selling combined RAID solutions (like the EVA) where you mix high-capacity and high-performance drives (like SSD vs. SATA).
The only real argument for the validity of this article is personal use of drives/storage. And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive.
Re:Bogus outdated thinking (Score:5, Insightful)
I admit I haven't RTFA, but I don't quite get your statement "And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive." I can't see how an SSD is a replacement for a RAID-5 array: everyone I know who uses RAID-5 uses it for large amounts of storage with a basic level of protection against data loss. I could justify replacing a RAID-0 setup with an SSD.
That said, I definitely couldn't afford an SSD that would be able to replace the RAID-5 in my PC (4x500GB, usable space of 1.34TB). The largest SSDs listed on ebuyer.com are 250GB @ £360 each; I would need 8 to match my RAID-5 setup, which comes to £2880, probably enough to build 2 reasonable machines, each with a 1.34TB RAID-5 using normal HDDs.
Re: (Score:3, Interesting)
RAID5 is not backup. It's resilience: it keeps a single drive failure from bringing the whole system down.
RAID was originally developed to make what we now consider small storage capacities (then massive) affordable and reasonably reliable.
You're using RAID5 for its "intended" use, but an SSD of the same capacity will be inherently MORE reliable (by a factor of however many of those magnetic disks you remove) than your system design right now.
From personal experience with a system customer base of literally thousands of enterprise c
Re:Bogus outdated thinking (Score:5, Insightful)
That's why I prefer software RAID.
Re: (Score:3, Informative)
Granted, our smallest c
Re:Bogus outdated thinking (Score:5, Insightful)
RAID, when used for protecting your computer, will not protect your data; it just makes your system able to tolerate hard drive failure.
... Which will protect my data when a drive fails.
RAID-5 means that I can have 3x500GB drives with 1TB of space, and not have the same worry (total loss of data) that I would if a single 1TB drive failed.
We know it doesn't replace backup. We know it doesn't protect against theft, fire, malicious data destruction etc etc. You do realise who you're talking to, don't you? This is an IT article on Slashdot. Telling people on this thread that RAID isn't a replacement for regular backups is like telling a mechanic that a stick of celery is not a suitable replacement for a piston.
Re: (Score:3, Informative)
To do a true raid-5, cost of the drives is fairly negligible.
While you are absolutely correct about cost, I think your definition of what a true raid-5 is needs a little work.
The purpose of RAID-n is to survive failures with near-zero downtime. As the disparity between capacity and performance grows with increasing array sizes, these RAIDs serve their purpose less and less. The chance of a drive failure while rebuilding a multi-TB array is quite significant, an occurrence that RAID-n was supposed to make near-impossible.
In the future, ther
Re: (Score:3, Informative)
Try a rebuild on a much larger aggregate running a dual-parity array under load. Trust me, they can easily run for days. Say you have a 16-disk aggregate using 1TB 7200RPM disks. Because you need every block in a stripe to reconstruct parity, you have to read from all the other disks to reconstruct: 14 reads and 1 write per block.
You're also misunderstanding how SSD caching works for ZFS. Blocks are only pulled in after repeated requests, which isn't going to be the case for a resilver. There will be at
Re:Bogus outdated thinking (Score:5, Insightful)
And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive.
Huh ? That's like saying show me 3 people who have a nice pair of running shoes and I'll show you 3 guys who can't afford a car.
Re:Bogus outdated thinking (Score:4, Informative)
And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive.
Yeah, every time an article on storage catches my eye, I have to check laptop SSD prices. So far, each time I do this, for the cost of a drive the size I need, I could buy a new snowboard, or a laptop, bike, half a holiday, room full of beer... etc. I really want one, but so far I haven't been able to look at that list and say "I'd rather have an SSD!"
Re:Bogus outdated thinking (Score:5, Informative)
The problem is IT guys and PHB's that think RAID=Backup.
It's not and it never has been a backup solution. RAID is high availability and nothing more.
RAID does its job perfectly for high availability and will continue to do so for decades. Sorry, but I have yet to see any other technology deliver the capacity I use for the small 30TB database we have at work. Our RAID 50 array works great. We also mirror it in real time to the backup SQL server (not to back up the data, but to back up the entire server, so that when SQL1 goes offline, SQL2 picks up the work).
SQL2 is backed up to a SDAT tape magazine nightly.
RAID does what it's supposed to do perfectly; its days are not numbered, because no technology other than RAID can provide high availability.
Re: (Score:3, Informative)
As for the rebuild times, fine, go buy FASTER drives.
Hard drives are getting bigger faster than they are getting faster.
Hard drives are getting bigger faster than they are getting more reliable.
In an enterprise setting, SATA based storage is a reality, for cost reasons, in tiers 2 and 3.
Your suggestion that this problem is solved simply by buying faster drives is a poor one.
And within a few generations of high-speed drives, the problem will manifest regardless.
Henry's article is not as clear as it could be, how
Enlighten me (Score:4, Insightful)
(Certain) RAID (levels) address the issue of potential data loss due to hardware malfunction. How does moving to an Object-Based Storage Device address this issue better? Actually, I don't see how RAID and OSD are mutually exclusive.
Harddisks, not RAID (Score:5, Insightful)
Now that's a stupid article.
It basically says: you can't read a hard disk more than X times before you get an error on some sector, so RAID is dead. That's a logical non sequitur. RAID is a generic technique that also applies to flash memory cards, USB sticks, basically anything you can store data on. The base technique says "given this reliability, you can up the reliability if you add some redundancy". There's no link to hard disks other than that they're what it's used on right now.
Re: (Score:3, Insightful)
RAID is here to stay for a while, no doubt, but it's a response to a series of problems that has problems of its own. You can take 5+1 drives and make an array where one bad chassis slot can indeed take the whole thing out, or you can make a bunch of mirrors at the expense of capacity, or you can stripe one scary, large, fragile volume. In production it's about performance & availability. Realize that the whole data-integrity thing is relative and merely an illusion. It's kinda like on Futurama when they had the t
RAID is here to stay (Score:5, Insightful)
Disclaimer: I work for a storage vendor.
> FTA: The real fix must be based on new technology such as OSD, where the disk knows what is stored on it and only has to read and write the objects being managed, not the whole device
OSD doesn't change anything. The disk has failed. How has OSD helped?
> FTA: or something like declustered RAID
Just skimming that document, it seems to claim: only reconstruct data, not free space, and use a parity scheme that limits damage. Enterprise arrays that have native filesystem virtualisation (WAFL, for example) already do this. RAID 6 arrays do this.
Let's recap. Physical devices, including SSDs, will fail. You need to be able to recover from failure. The failure could be as big as the entire physical device failing, or as small as a single sector being unreadable. In the former case a RAID reconstruct will recover the data, but you risk hitting RAID recovery errors due to the raw amount of data that needs to be recovered. Enterprise arrays mitigate the risk of recovery errors by using RAID 6. They could even recover the data from a DR-mirrored system as part of the recovery scheme.
And when RAID 6 has a high enough risk that it's worth expanding the scheme, everyone will start switching from double-parity schemes to triple-parity schemes, since they're much less expensive in terms of spindle count than RAID 6+1.
One assumption is that, at some point in the future, reconstruction will be a continually occurring background task, just like any other background task that enterprise arrays handle. As long as there is enough resiliency and performance isn't impacted, it doesn't matter if a disk is being rebuilt.
Re:RAID is here to stay (Score:5, Informative)
And when RAID 6 has a high enough risk that it's worth expanding the scheme, everyone will start switching from double-parity schemes to triple-parity schemes, since they're much less expensive in terms of spindle count than RAID 6+1.
I don't think you've quite understood the problem described. You can have an infinite number of parity disks, but it does you no good if recovering one data disk causes another data disk to fail.
Imagine a disk fails after every 100TB of reads (10^14 bytes). You have ten 1TB data disks. Imagine you keep them in perfect rotation, so they've spent 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100% of their lifetime. The last disk dies and you replace it with a new drive (0%). To rebuild the drive you read 1TB from each data disk and use whatever parity you need. They've now spent 11, 21, 31, 41, 51, 61, 71, 81, 91 and 1% (your new disk) of their lifetime, and you can read another 9TB before you need a new disk.
Now try the same with ten 10TB disks of the same reliability. The last disk dies and you replace it, only now you must read 10TB from each disk. Instead of adding 1% to the lifetime it adds 10%, so they've spent 20, 30, 40, 50, 60, 70, 80, 90, 100 and 10% (your new disk) of their lifetime. But now another disk has failed; you can recover that, but then another will fail, and another, and another, and another.
Basically, parity does not solve that issue. If you had a mirror, you would instead copy the mirrored disk with significantly less wear on the disks. RAID is very nice as a high-level check that the data isn't corrupted but it's a very inefficient way of rebuilding a whole disk.
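The parent's toy model is easy to run (a sketch assuming the same uniform 100 TB read-lifetime and perfectly staggered wear; real drives don't fail this deterministically):

```python
# Disks "die" after lifetime_tb of reads; every rebuild reads the full
# contents of every disk in the array, including the fresh replacement.

def simulate(disk_tb, lifetime_tb=100.0, n=10, max_rebuilds=30):
    # Perfectly staggered wear: 10%, 20%, ..., 100% of lifetime consumed.
    worn = [lifetime_tb * (i + 1) / n for i in range(n)]
    rebuilds = 0
    while max(worn) >= lifetime_tb and rebuilds < max_rebuilds:
        worn[worn.index(max(worn))] = 0.0    # swap in a fresh disk
        worn = [w + disk_tb for w in worn]   # rebuild wears every disk
        rebuilds += 1
    return rebuilds, ("stable" if max(worn) < lifetime_tb else "cascading")

print(" 1 TB disks:", simulate(disk_tb=1.0))   # (1, 'stable')
print("10 TB disks:", simulate(disk_tb=10.0))  # (30, 'cascading') - never settles
```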
Re: (Score:3, Interesting)
RAID 1 has much less reliability than RAID 6. Assume a typical case: one disk totally fails. You then start to reconstruct - in a RAID 1 scheme a single sector error will result in the rebuild failing. Not great.
In RAID 6 you start the rebuild and you get a single sector error from one of the drives you're rebuilding from. At that point you've got yet another parity scheme available (in the form of the RAID 6 bit) that figures out what that sector should have been and then continues the rebuild. Then you go
Re: (Score:3, Insightful)
Even this doesn't handle the other side of the scenario...
Buy your box of drives and put them in a RAID-6. Chances are you just bought all of the drives at the same time, from the same vendor, and they're probably all the same model of the same brand. Chances are also very good that they're from the same manufacturing lot. You've got N "identical" drives. Install them all into your drive enclosure, power the whole thing up, build your RAID-6, put it into service.
Now all of your "identical" drives are ru
Re: (Score:3, Interesting)
Well the logical thing IMHO is after the first year you put in a new drive and do an array rebuild after making a backup.
Drives are really cheap and I would do that for as long as the array is in use.
Reuse the old drives in desktops if they are SATA.
Not perfect but it keeps you from having an array of old drives in your server.
Re: (Score:3, Informative)
Actually, reliability quickly scales toward RAID 1+0 as the number of drives increases. In a 14-drive array, a single drive failure is fine in both. A second drive failure can destroy the RAID 1+0 array, but only if it happens to hit the first drive's mirror partner, and the chance of that is low. With 3 total drive failures, RAID 6 will fail, while RAID 1+0 has a low probability of failure.
Rebuild times are also much shorter on RAID 1+0 as only a single drive has to be read, which reduces heat produced and the chance of a sec
Re: (Score:3, Insightful)
Basically, you are suggesting someone would make and then sell a disk which could only be read, entirely, 10 times in its entire lifetime?
Well that's easily solved. We won't buy those disks.
Hardware RAID is dead (Score:3, Interesting)
Hardware RAID is dead - software for redundant storage is just getting started. I am looking forward to making use of btrfs so I can have some consistency and confidence to how I deal with any ultimately disposable storage component.
The ZFS folks have been doing it fine for some time now.
Hardware RAID controllers have no place in modern storage arrays - except those forced to run Windows
Re:Hardware RAID is dead (Score:5, Insightful)
First of all, "Hardware RAID" is still software, just executed by dedicated circuits. The distinction is kind of moot. For low-cost, low performance systems, software can run on your main box to perform this task, but for high-end applications you'll want dedicated hardware to take care of it, so your machine can do what it needs to do with more zeal.
So my guess is that you're not working for a storage vendor. I haven't seen many people switch to SW RAID recently. If anything, the Unix world is finally crawling out of its "lvm striping" hole. Most servers anywhere are running on stuff like HP's Proliants, and I don't see customers ship back the SmartArray controllers.
Re: (Score:3, Informative)
> First of all, "Hardware RAID" is still software, just executed by dedicated circuits. The distinction is kind of moot.
I'm not sure where in my post you saw anything about a comparison between Hardware RAID or Software RAID.
> So my guess is that you're not working for a storage vendor. I haven't seen many people switch to SW RAID recently.
I work for NetApp. I didn't think it mattered much in the post I made though. To your second point, as all of the NetApp Enterprise storage systems use software bas
Re:Hardware RAID is dead (Score:4, Informative)
When I think of software RAID, I think of parity data being handled by the operating system, being done on x86 chips as part of the kernel or offloaded via a driver (thinking Fake-RAID).
If you're abstracting your storage away from the operating system that uses it, say via iSCSI or NFS or SMB to a dedicated storage box, like a NetApp filer or a Celerra, then I would consider that hardware RAID, personally speaking. If you're saying that these dedicated storage boxes manage parity, mirroring and so on all done with the same chip that's also running their local operating systems, then I have to admit that yes, that sounds like software RAID to me, but the real distinction I've come to draw between software and hardware RAID is a matter of performance and feature set. If said boxes give the same or better performance (I/Ops and throughput) to a workload as a dedicated, internal storage system managed by something like my 9650SE, then hell..... who cares, right? Aside from being rather impressed that such is possible without dedicated XOR chips, that is.
Non-issue ... (Score:4, Interesting)
For enterprise level storage systems, this is also a non-issue because of thin provisioning.
I thought RAID was about spindle count (Score:5, Insightful)
I admit I'm not an expert, but I was under the impression that RAID was mainly about ensuring you have a large number of spindles and some redundancy, so you can serve data quickly even if a couple of drives fail while the servers are under pressure. Surely you would not rely on RAID to avoid data loss, since you should be keeping external backups anyway?
Re:I thought RAID was about spindle count (Score:5, Informative)
You don't rely on RAID to avoid data loss; you rely on it as a first line in providing continuity. We run backups of large systems here, but we tend to do other things too: synchronous live mirroring of the critical data between sites, and better system design. There are some systems where, whilst we _could_ go back to tape (or VTL) at a pinch, having to do so would be a disaster in itself.
We're designing systems that permit rapid service recovery (the most live critical data) and a second tier of online recovery to get the rest back. We just can't afford the downtime.
Double-spindle failures on RAID systems are just one of those things that you _will_ see. Deciding whether a system deserves some other measure of redundancy is mostly an actuarial, rather than a technical, decision.
Wrong assumptions (Score:5, Insightful)
The article assumes that when a drive within a RAID5 array encounters a single-sector failure (the most common failure scenario), the entire disk has to go offline, be replaced and rebuilt.
That is utter nonsense, of course. All that's needed is to rebuild a single affected stripe of the array to a spare disk. (You do have spares in your RAID setups, right?)
As soon as the single stripe is rebuilt, the whole array is in a fully redundant state again, although the redundancy is now spread across the drive with the bad sector and the spare.
Even better, modern drives have internal sector remapping tables and when a bad sector occurs, all the array has to do is to read the other disks, calculate the sector, and WRITE it back to the FAILED drive.
The drive will remap the sector, replace it with a good one, and tada, we have a well working array again. In fact, this is exactly what Linux's MD RAID5 driver does, so it's not just a theory.
Catastrophic whole-drive failures (head crash, etc.) do happen too, and there the article would have a point: you need to rebuild the whole array. But these are a couple of orders of magnitude less frequent than simple data errors. So, no reason to worry there either.
*sigh*
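A minimal sketch of the repair path described above (illustrative Python only; Linux's MD driver does this at the block layer, and this is not its actual code):

```python
# RAID-5 parity is the XOR of all data sectors in a stripe, so any one
# lost sector is the XOR of every other sector in that stripe.

def xor_sectors(sectors):
    out = bytearray(len(sectors[0]))
    for sector in sectors:
        for i, byte in enumerate(sector):
            out[i] ^= byte
    return bytes(out)

SECTOR = 512
data = [bytes([v]) * SECTOR for v in (0x11, 0x22, 0x33)]  # 3 data sectors
parity = xor_sectors(data)

# The drive holding data[1] reports an unrecoverable read on this sector:
recovered = xor_sectors([data[0], data[2], parity])
assert recovered == data[1]

# A real array would now WRITE `recovered` back to the complaining
# drive; the firmware remaps the bad sector to a spare on that write.
print("recovered sector starts with:", recovered[:4].hex())
```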
Re: (Score:2, Insightful)
Even if only a sector in a disk has failed, I'd mark the entire disk as failed and replace it as soon as I could. Maybe I'm paranoid, but I've seen many times that when something starts to fail, it continues failing at increasing speed.
If you want smaller drives... (Score:5, Interesting)
If you want smaller drives to speed up rebuild times then, erm, buy smaller drives? You can get ~70GB 10Krpm and 15Krpm drives fairly readily - much smaller than the 500-to-2000-GB monsters, and faster too. You can still buy ~80GB PATA drives as well; I've seen them when shopping for larger models, though you only save a couple of peanuts compared to the cost of 250+GB units.
If you can't afford those but still don't want 500+GB drives because they take too long to rebuild when the array is compromised, and management won't let you buy bog-standard 160GB (or smaller) drives as they only cost 20% less than 750GB units without the speed benefits of the expensive 15Krpm ones, how about using software RAID and only using the first part of each drive? Easily done with Linux's software RAID (partition the drives with a single 100GB (for example) partition, and RAID that instead of the full drive), and I'm sure it's just as easy with other OSes. You'll get speed bonuses too: you'll be using the fastest part of the drive in terms of bulk transfer speed (most drives map the earliest sectors to the outer tracks, which hold more data per revolution and so transfer faster), and you'll have lower average latency as the heads never need to sweep the full width of the platter. And you've got the rest of the drive space to expand onto if needed later. Or maybe you could hide your porn stash there.
ZFS, Anyone? (Score:2, Interesting)
I've managed to get this going, using the excellent FreeNAS - although proceed with caution, as only the beta build supports it, and I've already had serious (all data lost) crashes twice.
However, the principle is sound, and I'm sure this will become standard before long. The only trouble is that HP, Dell and the like can't simply offer upgrades for existing RAID cards; due to the nature of ZFS, it needs a 'proper' CPU and a gig or two of RAM. Even so, it does protect against many of the problems now be
ZFS (Score:5, Informative)
This is something the ZFS creators have been talking about for some time, and been actively trying to solve.
ZFS now has triple parity, as well as actively checksumming every disk block.
Re:ZFS (Score:5, Informative)
I thought I should add:
ZFS speeds up rebuilding a RAID (called resilvering) compared to traditional non-intelligent, non-filesystem-based RAIDs by only rebuilding the blocks that actually contain live data; there's no need to rebuild EVERYTHING if only half the filesystem is in use.
ZFS also starts the resilvering process by rebuilding the most IMPORTANT parts first (the filesystem metadata) and works its way down the tree to the leaf nodes holding data. This way, if more disks fail, you have already rebuilt the most critical data possible: if the filesystem metadata is hosed, everything is hosed.
ZFS also tells you which files are corrupt, if any are and insufficient replicas exist due to failed disks.
All this on top of double or triple parity. :)
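A toy illustration of that metadata-first ordering (not ZFS code, just a sketch of walking a block tree top-down so interior metadata blocks are repaired before leaf data):

```python
# Breadth-first walk of a (hypothetical) block tree: everything closer
# to the root is resilvered before anything below it.
from collections import deque

children = {
    "uberblock": ["meta-A", "meta-B"],
    "meta-A": ["file-1", "file-2"],
    "meta-B": ["file-3"],
    "file-1": [], "file-2": [], "file-3": [],
}

def resilver_order(root):
    order, queue = [], deque([root])
    while queue:
        block = queue.popleft()
        order.append(block)            # "repair" this block now
        queue.extend(children[block])  # its children come later
    return order

print(resilver_order("uberblock"))
# ['uberblock', 'meta-A', 'meta-B', 'file-1', 'file-2', 'file-3']
```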
Parity declustering (Score:5, Interesting)
Actually I like the parity declustering idea that was linked to in that article, seems to me if implemented correctly it could mitigate a large part of the issue. I have personally encountered the hard error on RAID5 rebuild issue, twice, so there definitely is a problem to be addressed...and yes, I do now only implement RAID6 as a result.
For those who haven't RTFATFALT (RTFA the f*** article links to): parity declustering, as I understand it, is where you have, say, an 8-drive array, but each block is written to only a subset of those drives, say 4. Now, obviously you lose 25% of your storage capacity (1/4), but consider a rebuild for a failed disk. In this instance only 50% of your blocks are likely to be on the failed drive, so immediately you cut your rebuild time in half, halving your data reads and therefore your chance of encountering a hard error. Larger numbers of disks in the array, or spanning your data over fewer drives, cuts this further.
Now, consider the flexibility you could build into an implementation of this scheme. Simply by allowing the number of drives a block spans to be configurable on a per-block basis, you could let any filesystem on that array say, on a per-file basis, how many disks to span. You could then allow apps and sysadmins to declare that a given file needs maximum write performance, so diskSpan=2, which gives you effectively RAID10 for that file (each block is written to 2 drives, but successive blocks in the file will likely be written to different pairs of drives; not quite RAID10, but close). Where you didn't want a file to consume 2x its size on the storage system, you could allow a higher diskSpan number. You could also allow configurable parity on a per-block basis, so particularly important files can survive multiple disk failures while temp files have no parity. There would need to be a rule, however, that parity+diskSpan is less than or equal to the number of devices in the array.
Obviously there is an issue here where the total capacity of the array is not knowable, files with diskSpan numbers lower than the default for the array will reduce the capacity, numbers higher will increase it. This alone might require new filesystems, but you could implement todays filesystems on this array as long as you disallowed the per-block diskSpan feature.
This even helps for expanding the array, as there is now no need to re-read all of the data in the array (with the resulting chance of encountering a hard error, adding huge load to the system causing a drive to fail, etc). The extra capacity is simply available. Over time you probably want a redistribution routine to move data from the existing array members to the new members to spread the load and capacity.
How about you implement a performance optimiser too, that looks for the most frequently accessed blocks and ensures they are evenly spread over the disks. If you take into account the performance of the individual disks themselves, you could allow for effectively a hierarchical filesystem, so that one array contains, say, SSD, SAS and SATA drives, and the optimiser ensures that data is allocated to individual drives based on the frequency of access of that data and the performance of the drive. Obviously the applications or sysadmin could indicate to the array which files were more performance sensitive, so influencing the eventual location of the data as it is written.
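A sketch of the core placement idea and its rebuild arithmetic (a hypothetical random layout; real declustered schemes place stripes deterministically for balance):

```python
# Each stripe lands on `span` of the `n` drives, so a failed drive only
# touches roughly span/n of all stripes - that's the rebuild saving.
import itertools
import random

def place_stripes(n_drives=8, span=4, n_stripes=10_000, seed=1):
    rng = random.Random(seed)
    layouts = list(itertools.combinations(range(n_drives), span))
    return [rng.choice(layouts) for _ in range(n_stripes)]

stripes = place_stripes()
failed_drive = 3
hit = sum(failed_drive in s for s in stripes)
print(f"stripes needing rebuild work: {hit / len(stripes):.1%}")  # ~50%
```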
RAID concept is fine, it's that HDs are too big (Score:5, Interesting)
As others have mentioned, this is something that is discussed on the ZFS mailing lists frequently.
For more info there, check out the digest for zfs-discuss@opensolaris.org
and, in particular, check out Richard Elling's blog [sun.com]
(Disclaimer: I work for Sun, but not in the ZFS group)
The fundamental problem here isn't the RAID concept; it's that the throughput and access times of spinning rust haven't changed much in 30 years. Fundamentally, today's hard drive is no more than 100 times as fast (in both throughput and latency) as a 1980s one, while it holds well over 1 million times more data.
ZFS (and other advanced filesystems) will now do partial reconstruction of a failed drive (that is, they don't have to bit-copy the entire drive, only the parts which are used), which helps. But there are still problems. ZFS's pathological case results in rebuild times of 2-3 WEEKS for a 1TB drive in a RAID-Z (similar to RAID-5). It's all due to the horribly small throughput, maximum IOPS, and latency of the hard drive.
SSDs, on the other hand, are nowhere near the problem. They've got considerably more throughput than a hard drive and, more importantly, THOUSANDS of times better IOPS. Frankly, more than for any other reason, I expect the superior IOPS of the SSD to sound the death knell of HDs in the next decade. By 2020, expect HDs to be gone from everything, even in places where HDs still have better GB/$. The rebuild rates and maintenance of HDs simply can't compete with flash.
Note: IOPS = I/O operations Per Second, the number of read/write operations (regardless of size) which a disk can service. HDs top out around 350, consumer SSDs do under 10,000, and high-end SSDs can do up to 100,000.
-Erik
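Putting rough numbers on the rebuild-time spread (illustrative figures, not benchmarks):

```python
# A 1 TB rebuild, best case (pure sequential streaming) versus a
# fragmentation-driven worst case where every block costs a random IOP.

DRIVE_TB = 1.0
SEQ_MB_S = 100            # plausible streaming rate for a 2009 drive
IOPS, IO_KB = 200, 4      # random-read budget and a small block size

seq_h = DRIVE_TB * 1e6 / SEQ_MB_S / 3600
rand_h = DRIVE_TB * 1e9 / IO_KB / IOPS / 3600

print(f"sequential rebuild: {seq_h:5.1f} hours")                   # ~2.8 h
print(f"random-IO rebuild:  {rand_h:5.1f} hours = {rand_h/24:.0f} days")
```

The random-IO case lands around two weeks, the same order as the 2-3 week pathological resilver mentioned above.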
Re: (Score:3, Insightful)
"The fundamental problem here isn't the RAID concept, is that the throughput and access times of spinning rust haven't changed much in 30 years."
Uh, there's another, bigger problem: the drive error rate (when reading data) hasn't changed much either, while the amount of data on a drive has dramatically increased.
When doing a rebuild after you've lost all redundancy, a single read error means the rebuild will fail. Increase the size of a drive (while keeping error rates constant) and you increase the likelihood of a re
1 error = 1TB rebuild (Score:2)
The real problem with "classic" RAID is that a single error means a total rebuild of the array.
Look the solution is obvious (Score:5, Funny)
The cloud. Just cloud it, baby. Nothing bad ever happens in the cloud; they're so white and fluffy after all.
RAID 4 has a dedicated parity drive, not 5 (Score:5, Interesting)
RAID 4 is where you have one dedicated parity drive. RAID 5 solves this by spreading the parity information across all the drives in the array. RAID 6 adds a second parity block for increased reliability, but as a result of the extra write for that parity block, it slows down write speeds.
The real key to making RAID 4, 5, or 6 work is that you really need 4-6 drives in the array to take advantage of the design. I wouldn't say that it will fall out of favor, though, because having solid protection against a single drive going bad really is critical for many businesses. Backups are all well and good for when your system crashes, but for most businesses uptime is even more critical. So: backups for data, so corruption problems can be rolled back, and RAID 5, 6, or 10 for stability, to avoid having the entire system die if one drive goes bad. Which takes more time: doing a data restore from a backup when an individual application has problems, or having to restore the entire system from a backup, with the potential that the backup itself was corrupted?
With that said, web farms and other applications that can get away with just using a cluster approach instead of a single well-designed machine (or set of machines) have become popular, but there are many situations which make a system with one or more RAID arrays a better choice. The focus on RAID 0 and 1 for SMALL systems and residential setups has simply kept many people from realizing how useful a 4-drive RAID 5 setup would be.
Then again, most people go to a backup when they screw up their system, not because of a hard drive failure. With techs upgrading hardware before they run into a hard drive failure, the need for RAID 1, 4, 5, and 6 has dropped.
I will say this: since a RAID 5 array can rebuild on the fly (it keeps working even if one drive fails), the rebuild itself does not significantly impact system availability. Gone are the days when a rebuild had to be done while the system was down.
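The parity-placement difference is easy to visualise (a sketch; 'P' marks the parity chunk in each stripe):

```python
# RAID 4 pins parity to one drive; RAID 5 rotates it per stripe.

def layout(n_drives=4, n_stripes=5, rotate=False):
    rows = []
    for s in range(n_stripes):
        p = (n_drives - 1 - s % n_drives) if rotate else n_drives - 1
        rows.append(["P" if d == p else "D" for d in range(n_drives)])
    return rows

for name, rotate in (("RAID 4", False), ("RAID 5", True)):
    print(name)
    for row in layout(rotate=rotate):
        print("  ", " ".join(row))
# RAID 4's dedicated parity drive takes a write for every stripe update
# (a bottleneck); RAID 5 spreads those parity writes across all drives.
```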
RAID6 with enterprise hardware is reliable (Score:3, Interesting)
I use RAID6 for several high-volume machines at work. Having double parity plus a hot spare means rebuild time is no worry.
But if you are not a fan, you can always throw something together with ZFS's RAIDZ or RAIDZ2, which is also distributed parity; the ZFS filesystem additionally checksums and keeps multiple (distributed) copies of every block, so it can detect and fix data corruption before it becomes a bigger problem.
People using ZFS have been able to detect silent data corruption caused by a faulty power supply that other solutions would never have found, just because of the checksumming process.
I'm not sure I get it (Score:3, Interesting)
Is he saying that you can never read a whole hard disk because it will fail before you get to the end?
That's what it seems like he's saying, but my hard disks usually last for years of continuous use, so I'm not sure it's true.
Dear Seagate, Western Digital, et. al: (Score:5, Interesting)
Here's what I want, folks:
A 5.25-inch device with 5 double-sided platters running at 5400 RPM. Basically the same size as a desktop CD/DVD drive, a la the Quantum Bigfoot.
I want 8 sides of the platters dedicated to data, and the other two sides dedicated to parity (or one parity and the other servo), essentially a self-contained RAID on a single disk.
I want all data heads to write and read simultaneously, in parallel. The idea is to have 64-byte sectors on each platter which are recombined into a 512-byte result: 8 heads writing and reading in parallel means HUGE throughput for sequential operations.
It's RAID 5 or 6 on a single disk, although without spindle redundancy.
And I also want a high-performance option: 2 sets of read/write heads 180 degrees apart, which effectively would cut seek times in half, making the drive perform more like a 10k RPM drive. With current densities, that's 12 TB in the volume of a DVD drive. It solves speed, sector error recovery and capacity issues. The only thing missing is a data bus that can handle the throughput.
Re: (Score:3, Insightful)
Without spindle redundancy...
or logic element redundancy...
or power supply redundancy...
or cable interconnect redundancy...
add to that the cost of adding dedicated RAID hardware to every single drive (that's an expensive PLD), and it's no wonder it's not on the market. High cost - no return.
Re: (Score:3, Interesting)
I want 8 sides of the platters dedicated to data
More platters == more mass, which translates to more power required for the motor, higher energy usage, and much more heat generated by the drive. Generating more heat == quicker hardware failures. Also, with bigger/larger/more platters it's much harder to spin the platters faster; usually more platters == slower RPM and much slower seek rates. If you can do fewer, smaller, and lighter platters, you can make the drive spin faster and perform better -- this is exactly what the Velocirapto
SATA vs FC/SAS: grapes and oranges (Score:4, Insightful)
The chart he's using goes from SCSI, to Fibre Channel, to SAS... to SATA. When you go from professional/server interfaces to hobby/desktop ones, of course the rebuild time skyrockets. If you'd written this article a few years ago and slid ATA in as the last data point instead of Fibre Channel, you'd have seen the knee show up then instead of now. How about looking at 2010 and doing the calculations with a 6 Gb/s SAS interconnect and 3TB drives, instead of 1.5 Gb/s SATA and 1TB drives?
Re: (Score:3, Insightful)
A search engine doesn't mind losing data; most of the storage is essentially just a cache or summary of the internet and can be regenerated. That said, Google already has so many mirrors for performance reasons that actual data loss is practically impossible.
Re: (Score:3, Informative)
Who says there are no errors with optical media?
I've seen a CD with light shining through it after 5 years.