Data Storage Hardware

RAID's Days May Be Numbered 444

storagedude sends in an article claiming that RAID is nearing the end of the line because of soaring rebuild times and the growing risk of data loss. "The concept of parity-based RAID (levels 3, 5 and 6) is now pretty old in technological terms, and the technology's limitations will become pretty clear in the not-too-distant future — and are probably obvious to some users already. In my opinion, RAID-6 is a reliability Band Aid for RAID-5, and going from one parity drive to two is simply delaying the inevitable. The bottom line is this: Disk density has increased far more than performance and hard error rates haven't changed much, creating much greater RAID rebuild times and a much higher risk of data loss. In short, it's a scenario that will eventually require a solution, if not a whole new way of storing and protecting data."
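
A rough back-of-the-envelope illustration of the trend being described (the capacity and throughput figures below are hypothetical round numbers for typical drives of each era, not data from the article): capacity has grown far faster than sequential throughput, so the best-case time just to stream an entire drive during a rebuild keeps climbing.

    # Hypothetical round numbers: (year, capacity in GB, sustained MB/s)
    generations = [(1999, 18, 25), (2004, 146, 60), (2009, 1000, 100)]

    for year, capacity_gb, mb_per_s in generations:
        # Best case: a rebuild has to read or write the whole drive once, sequentially.
        hours = capacity_gb * 1000.0 / mb_per_s / 3600.0
        print("%d: %4d GB at %3d MB/s -> at least %.1f hours to rebuild" % (year, capacity_gb, mb_per_s, hours))
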
This discussion has been archived. No new comments can be posted.

  • simple idea (Score:3, Interesting)

    by shentino ( 1139071 ) <shentino@gmail.com> on Friday September 18, 2009 @05:16AM (#29463913)

    Don't consider an entire drive dead if you get a piddly one-sector error.

    Just mark it read only and keep chugging.

    • reallocate on write (Score:3, Informative)

      by Spazmania ( 174582 )

      Or just regenerate and write the one sector from the parity data since all modern hard disks reallocate bad sectors on write.

    • Re:simple idea (Score:5, Informative)

      by paulhar ( 652995 ) on Friday September 18, 2009 @05:34AM (#29463985)

      Enterprise arrays copy all the good data off the drive to a spare drive, use RAID to recover the failed sector(s), then fail the broken disk.

      • Re:simple idea (Score:5, Insightful)

        by Anonymous Coward on Friday September 18, 2009 @08:06AM (#29464767)

        Enterprise arrays are also very VERY different from what most people know as RAID. Smart controllers, smart drive cages, drives that are an order of magnitude better than the consumer-grade garbage.

        The summary talks about how speed has not kept up with capacity. Yes, that is correct for the low-grade consumer junk. Enterprise server-class RAID drives are a different story. The 15,000 RPM drives I have in my RAID 50 array here on the database server are insanely fast. Plus, server-class drives don't come in silly, unstable capacities like 1TB or 1.5TB; they are an "OMG small" 300GB, but stable as a rock.

        So I guess the question is: is the summary talking about RAID on junk drives or RAID on real drives?

        • Re: (Score:3, Interesting)

          by Coren22 ( 1625475 )

          They aren't talking about drive speeds as much as failure rate:

          The bottom line is this: Disk density has increased far more than performance and hard error rates haven't changed much, creating much greater RAID rebuild times and a much higher risk of data loss.

          They are talking about the fact that the MTBF of drives has not gone up as fast as capacity, and that a missed write is actually quite likely with a modern high-capacity drive. Even the claim that drive speeds haven't gone up is accurate: 15k RPM drives have been around for quite a while now, at least 10 years, and there has been no improvement in rotational speed in that time. Where are my 30k RPM drives?

          Also, I have a bit of a problem with your st

          • Re:simple idea (Score:5, Interesting)

            by paulhar ( 652995 ) on Friday September 18, 2009 @09:02AM (#29465215)

            You're not likely to see 30k RPM drives any time soon. The speed of a 15k drive means that the outer edge of the 3 1/2" platter is spinning pretty fast, getting close to the speed of sound, and the lion's share of the power consumed by 15k drives goes into counteracting the air buffeting the heads. With 2 1/2" drives we could go faster, but while drives are open to the air it's not likely we'll see much in the short term.

            It's why CD-ROM speeds haven't gone up much since the old days of 52x.

            As areal density improves, drives will be able to push out more raw MB/sec, just as DVD is better than CD, but IOPS are not likely to improve dramatically.

            • Re: (Score:3, Funny)

              by JediTrainer ( 314273 )
              lion's share of the power consumed by 15k drives goes into counteracting the air buffeting the heads

              Until some genius figures out how to build one with no air inside?
              • Re:simple idea (Score:4, Informative)

                by operagost ( 62405 ) on Friday September 18, 2009 @10:05AM (#29465897) Homepage Journal
                I'll assume you aren't trolling, and point out that disks work BECAUSE OF the air inside. The heads gain lift.
              • Re:simple idea (Score:5, Interesting)

                by Firethorn ( 177587 ) on Friday September 18, 2009 @10:15AM (#29466017) Homepage Journal

                Even partial evacuation would help, but you run into the problem that the read heads are designed to use the air to keep them from contacting the platters, so you'd need to replace that effect somehow.

                The Space Shuttle and ISS even have special sensors to shut the hard drives down if the air pressure gets too low. Reading about that was how I found out that hard drives are designed to use air.

                Not to mention that you're now trying to build an airtight container, but if you're looking at ultra-high-performance drives that's less of an issue.

                Still, you have to look at how much such a drive would cost, and whether the cost would ever be repaid - if I was looking at investing in such technology I'd be concerned that Flash would outpace my vacuum drives before I got them released. Even if I DO manage to find a niche, would the niche last long enough against flash memory that's getting faster and cheaper so quickly?

                For certain data sets and access patterns, flash is already much cheaper than the old RAID options - the best example I saw was a dataset of a few hundred gigabytes that was mostly read-only, but accessed so heavily and so randomly that they had to mirror it across 10 hard drives to meet the read demand. One professional-level SSD performed BETTER, while costing less than half as much as that setup.

                • by speedtux ( 1307149 ) on Friday September 18, 2009 @10:29AM (#29466203)

                  Filling the drive with helium should help; the speed of sound in helium is 3x higher than in air, and it offers less resistance.

                  (Hydrogen would be even better, but it has a tendency to interact with metals in unfortunate ways.)

                  • by the_other_chewey ( 1119125 ) on Friday September 18, 2009 @06:35PM (#29472353)

                    Filling the drive with helium should help;

                    Yeah. For about half a week. Helium has the smallest "gas particles" there are - Hydrogen atoms would be smaller, but those really like to bond, and an H_2 molecule is quite a bit larger than a Helium atom.

                    That's why He leaks out of everything. No exception. It diffuses through "leakproof" welds for vacuum tanks. It diffuses through the steel walls of tanks (albeit more slowly). That's also why He is used in leak detection:
                    if you see fewer than $not_so_few He atoms on the outside of the container within a couple of seconds of injecting a little He inside, the container is considered airtight.

                    The only way to keep a He atmosphere in your drive would be to constantly refill it. I don't think there'll be any scenario where this would seem like an even remotely good idea.

                  • Re: (Score:3, Interesting)

                    by dkf ( 304284 )

                    Filling the drive with helium should help; the speed of sound in helium is 3x higher than in air, and it offers less resistance.

                    (Hydrogen would be even better, but it has a tendency to interact with metals in unfortunate ways.)

                    Thinking about it, methane might be a more practical choice. Yes, it's denser than helium so the effect won't be anything like as strong (the speed of sound in methane is only about 40% faster) but it's also very cheap and available, and won't cause too many problems from interacting with the rest of the drive. Having to seal the drive is an issue, yes, but that's not far off what's needed now; it's imperative that dust is kept out of the platter enclosure anyway...

              • Re:simple idea (Score:4, Interesting)

                by Gothmolly ( 148874 ) on Friday September 18, 2009 @10:37AM (#29466303)

                Can you say "instantaneous heat death"? Vacuum is an excellent insulator.

              • by Binary Boy ( 2407 ) on Friday September 18, 2009 @12:30PM (#29467761)

                lion's share of the power consumed by 15k drives goes into counteracting the air buffeting the heads

                Until some genius figures out how to build one with no air inside?

                Lions need air.

            • Re:simple idea (Score:4, Informative)

              by amoeba1911 ( 978485 ) on Friday September 18, 2009 @10:12AM (#29465979) Homepage

              Speed of sound at sea level: 340.29 m/s verify [google.com]

              ((3.5 inches) * (2.54 (cm / inches)) * pi) * (((15000 / minute) * (1 minute)) / (60 second)) * (0.01 (meter / centimeter)) = 69.8218967 m / s verify [google.com]

              If my calculation is correct, the outer edge of a 3.5" plate spinning at 15000 RPM is moving at 69.82m/s, which is about 20% of speed of sound. It's fast, but it's nowhere near the speed of sound.
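
              For anyone who wants to check it both ways, a quick sketch (the 95 mm platter figure is an assumption about typical 3.5" form-factor drives; the last case shows what happens if 3.5" is mistaken for the radius rather than the diameter):

                  import math

                  RPM = 15000
                  REVS_PER_SEC = RPM / 60.0        # 250 revolutions per second
                  SPEED_OF_SOUND = 343.0           # m/s in air at about 20 C

                  cases = [("3.5 in used as diameter", 3.5 * 0.0254),
                           ("95 mm platter (typical)", 0.095),
                           ("3.5 in used as radius", 7.0 * 0.0254)]

                  for label, diameter_m in cases:
                      edge = math.pi * diameter_m * REVS_PER_SEC
                      print("%-26s %6.1f m/s (%2.0f%% of the speed of sound)" % (label, edge, 100.0 * edge / SPEED_OF_SOUND))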

              • Re: (Score:3, Informative)

                by zippthorne ( 748122 )

                You know google does the conversion for you: 2*pi*3.5 inches * 15,000 minute^-1 in m/s [google.com] = 140 m / s

              • Re: (Score:3, Funny)

                by Anonymous Coward

                340.29 m/s is the speed of sound in a vacuum.

                Moran.

              • Re: (Score:3, Informative)

                by RedBear ( 207369 )

                Besides which I have no idea what the speed of sound has to do with the theoretical upper limit of the speed of a spinning disk. It's not like an airplane wing with a trailing shock wave. I would think there would be much more pressing problems that are keeping us from seeing 30K RPM hard drives anytime soon, like:

                - Shear strength of the platter material
                - Total mass of the platter, especially near the edge
                - Heat generated in the bearings
                - Energy necessary to spin the platter at that speed
                - Torsional forces

            • Re: (Score:3, Informative)

              by Rich0 ( 548339 )

              I'm surprised that nobody has mentioned the issue of failure of the drive material itself at higher rotational velocities.

              I believe CDs are limited to 52X because the polycarbonate they are constructed of explodes when you get too much higher than that (with a safety factor of course).

              A metal hard drive probably can take more speed, but I'm sure that at some point you get deformation of the platter. You also have bearings/etc to deal with. 30k is a pretty fast rotation rate - and we're talking about a dev

          • Re: (Score:3, Interesting)

            Why not add multiple heads to the same platter?

            Keep the disk spinning at 15K but add heads with their own actuator and everything. One could be read-only, the other write-only. Whatever makes sense.

        • Re:simple idea (Score:4, Interesting)

          by alva_edison ( 630431 ) <ThAlEdison@@@gmail...com> on Friday September 18, 2009 @08:55AM (#29465157)
          The problem becomes space in the data center. I don't know about you, but we're trying to cram petabytes into existing computer rooms and coming up short. Plus, you don't address Tier 2 or Tier 3 storage, which tends to be on SATA or near-line SAS, both of which have the ridiculous-size problem. Calling 15,000 RPM fast in the datacenter is also misleading because those are the speeds we've been at for a few years now; 10Gb iSCSI (or FCoE, which bypasses the collision problem) is about to render that untenable. The current solution tends toward storage virtualization (in this case virtualization means excessive amounts of high-speed cache in front of controllers and less control over where controllers allocate space). The future is most likely some kind of grid technology (like XIV from IBM), where any block is on two random drives in the array, and only the controller knows where. This means that drive rebuilds become subject to swarm speeds (since there is an equal chance that it is pulling data from every other drive in the tower).
        • Re: (Score:3, Interesting)

          by pjr.cc ( 760528 )

          Unfortunately all that is quite a myth for the most part.

          Having worked in storage for aeons, the reality is that the difference between enterprise and "consumer grade rubbish" has very little to do with anything but tolerance. If you picked up a 300GB 10k enterprise drive and compared it to the consumer grade rubbish you'd find nothing different. It used to be the case, way back when, that they were very different, but because consumer grade drives have gotten so much better it's just not worth the expense of bui

    • Re:simple idea (Score:5, Insightful)

      by Eric Smith ( 4379 ) on Friday September 18, 2009 @06:45AM (#29464321) Homepage Journal
      The drives already do that internally. By the time they're reporting errors, bad things are happening, and it really IS time to replace the drive. Anyhow, drives are inexpensive. It's more cost effective to replace them than to spend a lot of time screwing around with them.
      • Re:simple idea (Score:4, Informative)

        by paulhar ( 652995 ) on Friday September 18, 2009 @07:05AM (#29464405)

        They do to varying degrees of success but just because a disk can't read a particular sector doesn't mean that the drive is faulty - it could be a simple error on the onboard controller that is causing the issue.

        FC/SAS drives mostly leave error handling up to the array rather than doing it themselves, because the array can typically make better decisions about how to deal with the problem, and this helps cope with time-sensitive applications. The array can choose to issue additional retries, reboot the drive while continuing to use RAID to serve the data, etc.

        Consumer SATA drives, on the other hand, try really hard to recover from the problem, for example retrying again and again with different methods to get the sector, and while admirable, that leads to the behaviour we see in consumer land where the PC just "locks up". The assumption here is that there is no RAID available, so reporting an error back to the host is "a bad thing". The enterprise SATA drives we're seeing on the market are starting to disable this automatic functionality so they behave correctly when inserted into RAID arrays.

        Usually ;-)

        • Re: (Score:3, Informative)

          by operagost ( 62405 )
          The only real difference between WD's enterprise SATA and their consumer line (other than, perhaps, the warranty) is a firmware setting that determines how long it attempts to write to a sector before giving up and using a spare block. It has to be reduced for enterprise use so that the RAID controller doesn't fail the disk prematurely. My WD disks kept "failing" until I set this timeout shorter. It's been a year since I did that, and I've had no failures or data corruption. It's possible that this is n
  • by BBCWatcher ( 900486 ) on Friday September 18, 2009 @05:27AM (#29463963)
    Honestly, there really aren't that many unsolved problems in computing if you are sufficiently aware to include mainframes and mainframe operating disciplines in your consideration. The basic way the mainframe community solved this particular problem long ago was to, first, take a holistic view of mitigating data loss. Double concurrent spindle failures are just one possible risk element. What about, for example, an entire data center exploding in a spectacular fireball? (Or whatever.) IBM, for example, came up with several different flavors of GDPS [ibm.com] and continues to refine them, and they include multiple approaches to data storage tiering across geographies, depending on what you're trying to achieve. Data loss, whether physical or otherwise (such as security breaches), is not a particular problem with this class of technology and associated IT discipline, nor do there seem to be any signs of a growing problem in this particular technology class.
    • by Anonymous Coward on Friday September 18, 2009 @06:22AM (#29464213)

      But really, none of that should be necessary for the general case. Storing data in different physical locations is a good but entirely unrelated issue; the main problem of disk reliability is still very much in need of a solution. That's pretty much the point of the article: you can come up with various solutions which move the problem around and give multiple fallbacks for when something goes wrong, but there's still the problem of things going wrong in the first place. I shouldn't need to use 12 separate disks spread across the globe just for basic reliability / redundancy.

      • by Fred_A ( 10934 ) <fred@NOspam.fredshome.org> on Friday September 18, 2009 @06:43AM (#29464303) Homepage

        I shouldn't need to use 12 separate disks spread across the globe just for basic reliability / redundancy

        You're trying to weasel out of paying IBM protection money!

      • by plover ( 150551 ) * on Friday September 18, 2009 @07:42AM (#29464609) Homepage Journal

        Actually, storing data in a multiple data center / high availability environment is a completely related issue. The summary above talks of "entirely different paradigms." Cloud storage would be multiple data center based, which is entirely different from keeping the only copy on your local drives. In this concept, your machine would have enough OS to boot, and enough hard drive space to download the current version of whatever software you are leasing. Your personal info would always be maintained in the data centers, and only mirrored locally. Have a home failure? Drop in a new part or even a new PC, (possibly with an entirely different operating system, such as Chrome,) connect to the service, and you're 100% back.

        It's no longer a novel concept for the home market. Consider Google Docs. It's not even being sold as "safer than RAID", it's being touted as "get it from anywhere" or "share with your friends". Safer than RAID is just a bonus.

        So are we ready to move all our personal information to clouds? I certainly am not, but Google Docs are wildly popular and a lot of people are. I long ago learned that I can't look to myself to judge what the mainstream attitudes are in many things.

        • Re: (Score:3, Insightful)

          by 2obvious4u ( 871996 )
          And then like AOL, Google goes out of business (shocker I know) and all your data is lost forever. The cloud is good for a lot of stuff, but for data storage it should be part of the solution, not 100% of it.
        • by Hatta ( 162192 ) * on Friday September 18, 2009 @10:08AM (#29465925) Journal

          Consider Google Docs.

          If you have so much data that you're likely to encounter an error when rebuilding your RAID array, I don't think Google Docs is going to cut it.

        • Re: (Score:3, Insightful)

          by tomhudson ( 43916 )
          Faster to just copy it to a usb key. You have multiple copies of your data, and no longer have to worry about network latency, or even if there IS a network available.
  • by twisteddk ( 201366 ) on Friday September 18, 2009 @05:28AM (#29463967)

    The author says it himself in the article:

    "And running software RAID-5 or RAID-6 equivalent does not address the underlying issues with the drive. Yes, you could mirror to get out of the disk reliability penalty box, but that does not address the cost issue."

    but he hasn't addressed the fact that today you get 100 times as much disk space for the same cost as you did 10 years ago, when cost was a factor. In real life cost isn't a factor when it comes to data storage, simply because it's really low in real-life projects compared to the other costs of a project requiring storage. So if you want the reliability, you go get a mirror. Drive space is dirt cheap.

    As for the rebuild times, fine, go buy FASTER drives. I don't see the problem. HP and many other vendors have long been trying to sell combined RAID solutions (like the EVA) where you mix high-capacity with high-performance drives (like SSD vs. SATA).

    The only real argument for the validity of this article is the personal use of drives/storage. And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive.

    • by TechnoFrood ( 1292478 ) on Friday September 18, 2009 @06:56AM (#29464365)

      I admit I haven't RTFA, but I don't quite get your statement that "And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive." I can't see how an SSD is a replacement for a RAID-5 array. Everyone I know who uses RAID-5 uses it for large amounts of storage with a basic level of protection against data loss. I could justify replacing a RAID-0 setup with an SSD.

      That said, I definitely couldn't afford SSDs able to replace the RAID-5 in my PC (4x500GB, 1.34TB usable). The largest SSDs listed on ebuyer.com are 250GB @ £360 each; I would need 8 to match my RAID 5 setup, which is £2880 and probably enough to build 2 reasonable machines, both with a 1.34TB RAID-5 using normal HDDs.

      • Re: (Score:3, Interesting)

        by Svartalf ( 2997 )

        RAID5 is not backup. It's resilience, so that a single drive failure doesn't bring the whole system down.

        RAID was originally developed to make what we consider small storage capacities (then massive) affordable and reasonably reliable.

        You're using RAID5 for its "intended" use, but an SSD of the same capacity will be inherently MORE reliable (by a factor of how many of those magnetic disks you remove) than your system design right now.

        From personal experience with a system customer base of literally thousands of enterprise c

        • by metamatic ( 202216 ) on Friday September 18, 2009 @09:47AM (#29465645) Homepage Journal

          Blow a controller? Better hope you have an identical one in stock. You can't just swap out a differing controller of the same brand or pop a different brand in; they all do things ever so slightly differently on the disks.

          That's why I prefer software RAID.

        • Re: (Score:3, Informative)

          by Hi_2k ( 567317 )
          That's why the smart money is on node-based storage: multiple boxes that are interchangeable. It's a shameless product plug, but I work for Isilon Systems, and our approach is that the whole system is considered replaceable: we don't sell a configuration that doesn't allow you to yank an entire box transparently. A failed drive is rebuilt and ready for swapping as soon as it comes up: most of our admins don't know about disk failures until their data is already reprotected.

          Granted, our smallest c
    • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Friday September 18, 2009 @07:00AM (#29464383)

      And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive.

      Huh? That's like saying show me 3 people who have a nice pair of running shoes and I'll show you 3 guys who can't afford a car.

    • by daybot ( 911557 ) * on Friday September 18, 2009 @07:04AM (#29464399)

      And name 3 people you know who run raid-5 on their personal PCs, and I'll show you 3 guys who can't afford an SSD drive.

      Yeah, every time an article on storage catches my eye, I have to check laptop SSD prices. So far, each time I do this, for the cost of a drive the size I need, I could buy a new snowboard, or a laptop, bike, half a holiday, room full of beer... etc. I really want one, but so far I haven't been able to look at that list and say "I'd rather have an SSD!"

    • by Lumpy ( 12016 ) on Friday September 18, 2009 @08:32AM (#29464955) Homepage

      The problem is IT guys and PHB's that think RAID=Backup.

      It's not and it never has been a backup solution. RAID is high availability and nothing more.

      RAID does its job perfectly for high availability and will continue to do so for decades. Sorry, but I have yet to see any other technology deliver the capacity I need for the small 30TB database we have at work. Our RAID 50 array works great. We also mirror it in real time to the backup SQL server (not for backup of the data, but backup of the entire server, so that when SQL1 goes offline SQL2 picks up the work).

      SQL2 is backed up to a SDAT tape magazine nightly.

      RAID does what it's supposed to do perfectly; its days are not numbered, because no technology other than RAID can provide high availability.

    • by Coren22 ( 1625475 ) on Friday September 18, 2009 @08:57AM (#29465167) Journal
      I will never run RAID 5 on anything but data I don't care about. The risk is too great, and the rebuild times are nowhere near good enough. RAID 1 or 10 is the only way to go. The acronym is Redundant Array of Inexpensive Disks; if they are so inexpensive, why are you concerned about the difference between losing 1 drive to parity and losing half your drives to duplicates? I cannot think of a single place where RAID 5 is appropriate; the performance loss on writes just isn't worth the trouble.
    • Re: (Score:3, Informative)

      by Courageous ( 228506 )

      As for the rebuild times, fine, go buy FASTER drives.

      Hard drives are getting bigger faster than they are getting faster.

      Hard drives are getting bigger faster than they are getting more reliable.

      In an enterprise setting, SATA based storage is a reality, for cost reasons, in tiers 2 and 3.

      Your suggestion that this problem is solved simply by buying faster drives is a poor one.

      And in a few generations of high-speed drives, the problem will manifest regardless.

      Henry's article is not as clear as it could be, how

  • Enlighten me (Score:4, Insightful)

    by El_Muerte_TDS ( 592157 ) on Friday September 18, 2009 @05:31AM (#29463971) Homepage

    (Certain) RAID (levels) address the issue of potential data loss due to hardware malfunction. How does moving to an Object-Based Storage Device address this issue better? Actually, I don't see how RAID and OSD are mutually exclusive.

  • by Anonymous Coward on Friday September 18, 2009 @05:35AM (#29463995)

    Now that's a stupid article.

    It basically says that you can't read a hard disk more than X times before you get an error on some sector, so RAID is dead. That's a logical non sequitur. RAID is a generic technology that also applies to flash memory cards, USB sticks, basically anything you can store data on. The basic technique says "given this reliability, you can up the reliability if you add some redundancy". There's no link to hard disks other than that that's what they're used for right now.
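
    To make the "add redundancy to up the reliability" point concrete, a minimal sketch (the per-device failure probability is a made-up round number, and failures are assumed independent, which real arrays only approximate):

        # Made-up probability that any single device fails during some period.
        p_fail = 0.05

        for copies in (1, 2, 3):
            # Data is lost only if every copy fails (independent failures assumed).
            p_loss = p_fail ** copies
            print("%d cop%s: P(data loss) = %g" % (copies, "y" if copies == 1 else "ies", p_loss))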

    • Re: (Score:3, Insightful)

      by J4 ( 449 )

      RAID is here to stay for a while, no doubt, but it's a response to a series of problems that has problems of its own. You can take 5+1 drives and make an array where one bad chassis slot can indeed take the whole thing out, or you can make a bunch of mirrors at the expense of capacity, or you can stripe one scary, large, fragile volume. In production it's about performance & availability. Realize that the whole data integrity thing is relative and merely an illusion. It's kinda like on Futurama when they had the t

  • by paulhar ( 652995 ) on Friday September 18, 2009 @05:41AM (#29464029)

    Disclaimer: I work for a storage vendor.

    > FTA: The real fix must be based on new technology such as OSD, where the disk knows what is stored on it and only has to read and write the objects being managed, not the whole device
    OSD doesn't change anything. The disk has failed. How has OSD helped?

    > FTA: or something like declustered RAID
    Just skimming that document, it seems to claim: only reconstruct data, not free space, and use a parity scheme that limits damage. Enterprise arrays that have native filesystem virtualisation (WAFL, for example) already do this. RAID 6 arrays do this.

    Let's recap. Physical devices, including SSDs, will fail. You need to be able to recover from failure. The failure could be as bad as the entire physical device failing, or as small as a single sector being unreadable. In the former case a RAID reconstruct will recover the data, but you may hit RAID recovery errors due to the raw amount of data that needs to be recovered. Enterprise arrays mitigate the risk of recovery errors by using RAID 6. They could even recover the data from a DR mirrored system as part of the recovery scheme.

    And when RAID 6 has a high enough risk that it's worth expanding the scheme, everyone will start switching from double-parity schemes to triple-parity schemes, since they're much less expensive in terms of spindle count than RAID 6+1.
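
    A quick spindle-count sketch of that trade-off (the ten-data-disk group size is an arbitrary example):

        data_disks = 10

        raid6_mirrored = 2 * (data_disks + 2)   # mirror an entire RAID 6 group (RAID 6+1)
        triple_parity = data_disks + 3          # add a third parity drive instead

        print("RAID 6+1:      %d spindles for %d disks' worth of data" % (raid6_mirrored, data_disks))
        print("Triple parity: %d spindles for %d disks' worth of data" % (triple_parity, data_disks))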

    One assumption is that, at some point in the future, reconstruction will be a continually running background task, just like any other background task that enterprise arrays handle. As long as there is enough resiliency and performance isn't impacted, it doesn't matter if a disk is being rebuilt.

    • by Kjella ( 173770 ) on Friday September 18, 2009 @06:54AM (#29464359) Homepage

      And when RAID 6 has a high enough risk that it's worth expanding the scheme, everyone will start switching from double-parity schemes to triple-parity schemes, since they're much less expensive in terms of spindle count than RAID 6+1.

      I don't think you've quite understood the problem described. You can have an infinite number of parity disks, but it does you no good if recovering one data disk causes another data disk to fail.

      Imagine a disk fails on every 100TB of reads (10^14). You have ten 1TB data disks. Imagine you keep them in perfect rotation so they've spent 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100% of their lifetime. The last disk dies and you replace it with a new drive (0%). To rebuild the drive you read 1TB from each data disk and use whatever parity you need. They've now spent 11, 21, 31, 41, 51, 61, 71, 81, 91 and 1% (your new disk) of their lifetime and you can read another 9TB before you need a new disk.

      Now we try doing the same with ten 10TB disks and the same reliability. The last disk dies and you replace it, only now you must read 10TB from each disk. Instead of adding 1% to the lifetime it adds 10% so that they've spent 20, 30, 40, 50, 60, 70, 80, 90, 100 and 10% (your new disk) of their lifetime. But now another disk fails, you can recover that but then another will fail and another and another and another.

      Basically, parity does not solve that issue. If you had a mirror, you would instead copy the mirrored disk with significantly less wear on the disks. RAID is very nice as a high-level check that the data isn't corrupted but it's a very inefficient way of rebuilding a whole disk.
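
      The same arithmetic as a small simulation (the 100TB-of-reads-per-failure lifetime is the hypothetical from the example above, not a real drive spec):

          def failures_triggered(disk_tb, n_disks=10, life_tb=100.0, cap=50):
              # Disks staggered at 10%, 20%, ..., 100% of their read lifetime, as above.
              used = [life_tb * (i + 1) / n_disks for i in range(n_disks)]
              failures = 0
              while max(used) >= life_tb and failures < cap:
                  failures += 1
                  used[used.index(max(used))] = 0.0    # swap in a fresh disk
                  used = [u + disk_tb for u in used]   # each rebuild pushes a full disk's worth of I/O everywhere
              return failures

          for size_tb in (1, 10):
              n = failures_triggered(size_tb)
              suffix = "+ (runaway cascade)" if n >= 50 else ""
              print("%2d TB disks: %d%s failure(s) before the array settles down" % (size_tb, n, suffix))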

      • Re: (Score:3, Interesting)

        by paulhar ( 652995 )

        RAID 1 has much less reliability than RAID 6. Assume a typical case: one disk totally fails. You then start to reconstruct - in a RAID 1 scheme a single sector error will result in the rebuild failing. Not great.

        In RAID 6 you start the rebuild and you get a single sector error from one of the drives you're rebuilding from. At that point you've got yet another parity scheme available (in the form of the RAID 6 bit) that figures out what that sector should have been and then continues the rebuild. Then you go

        • Re: (Score:3, Insightful)

          by dpilot ( 134227 )

          Even this doesn't handle the other side of the scenario...

          Buy your box of drives and put them in a RAID-6. Chances are you just bought all of the drives at the same time, from the same vendor, and they're probably all the same model of the same brand. Chances are also very good that they're from the same manufacturing lot. You've got N "identical" drives. Install them all into your drive enclosure, power the whole thing up, build your RAID-6, put it into service.

          Now all of your "identical" drives are ru

          • Re: (Score:3, Interesting)

            by LWATCDR ( 28044 )

            Well the logical thing IMHO is after the first year you put in a new drive and do an array rebuild after making a backup.
            Drives are really cheap and I would do that for as long as the array is in use.
            Reuse the old drives in desktops if they are SATA.
            Not perfect but it keeps you from having an array of old drives in your server.

        • Re: (Score:3, Informative)

          by atamido ( 1020905 )

          Actually, the reliability advantage quickly shifts towards RAID 1+0 as the number of drives increases. In a 14-drive array, a single drive failure is fine for either layout. A second drive failure has the possibility of destroying the RAID 1+0 array, but the chance of exactly the wrong drive failing is low. With 3 total drive failures, RAID 6 will fail, while RAID 1+0 has a low probability of failure.

          Rebuild times are also much shorter on RAID 1+0 as only a single drive has to be read, which reduces heat produced and the chance of a sec
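
          A rough sketch of that comparison for a 14-drive set (failures assumed random and independent, rebuild windows ignored):

              from fractions import Fraction

              drives = 14   # RAID 1+0: 7 mirrored pairs; RAID 6: 12 data + 2 parity

              # RAID 6 survives any 2 failures and is lost on any 3rd.
              # RAID 1+0 is lost only when a failure hits the partner of an already-dead drive.
              p10_survive_2nd = Fraction(drives - 2, drives - 1)                      # 12/13
              p10_survive_3rd = p10_survive_2nd * Fraction(drives - 4, drives - 2)    # * 10/12

              print("RAID 1+0 survives a 2nd random failure with probability", p10_survive_2nd)
              print("RAID 1+0 survives a 3rd random failure with probability", p10_survive_3rd)
              print("RAID 6 never survives a 3rd failure")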

      • Re: (Score:3, Interesting)

        by maraist ( 68387 ) *
        I don't understand what your failure rate strategy is. First of all, there's no such thing as saying you are 90% or 10% of the way through a disk's life. It's a probability distribution, whose probability is dramatically affected by current events (and somewhat related to historical events). A drive might be at a 0.00005% probability of failure at any given moment, but then a large sustained read occurs which raises the heat and causes voltage fluctuations, so now you're operating at 0.001% probabi
      • Re: (Score:3, Insightful)

        Basically you are suggesting someone would make and then sell a disk which could only be read, entirely, 10 times in its entire lifetime?

        Well that's easily solved. We won't buy those disks.

  • by PiSkyHi ( 1049584 ) on Friday September 18, 2009 @05:46AM (#29464049)

    Hardware RAID is dead - software for redundant storage is just getting started. I am looking forward to making use of btrfs so I can have some consistency and confidence in how I deal with any ultimately disposable storage component.

    The ZFS folks have been doing it fine for some time now.

    Hardware RAID controllers have no place in modern storage arrays - except those forced to run Windows

    • by Chrisje ( 471362 ) on Friday September 18, 2009 @05:57AM (#29464109)

      First of all, "Hardware RAID" is still software, just executed by dedicated circuits. The distinction is kind of moot. For low-cost, low performance systems, software can run on your main box to perform this task, but for high-end applications you'll want dedicated hardware to take care of it, so your machine can do what it needs to do with more zeal.

      So my guess is that you're not working for a storage vendor. I haven't seen many people switch to SW RAID recently. If anything, the Unix world is finally crawling out of its "lvm striping" hole. Most servers anywhere are running on stuff like HP's Proliants, and I don't see customers ship back the SmartArray controllers.

      • Re: (Score:3, Informative)

        by paulhar ( 652995 )

        > First of all, "Hardware RAID" is still software, just executed by dedicated circuits. The distinction is kind of moot.

        I'm not sure where in my post you saw anything about a comparison between Hardware RAID or Software RAID.

        > So my guess is that you're not working for a storage vendor. I haven't seen many people switch to SW RAID recently.

        I work for NetApp. I didn't think it mattered much in the post I made though. To your second point, as all of the NetApp Enterprise storage systems use software bas

        • by RulerOf ( 975607 ) on Friday September 18, 2009 @06:59AM (#29464377)
          FWIW, I'm a happy 3ware customer... saddened by their sellout to LSI, but I digress.

          When I think of software RAID, I think of parity data being handled by the operating system, being done on x86 chips as part of the kernel or offloaded via a driver (thinking Fake-RAID).

          If you're abstracting your storage away from the operating system that uses it, say via iSCSI or NFS or SMB to a dedicated storage box, like a NetApp filer or a Celerra, then I would consider that hardware RAID, personally speaking. If you're saying that these dedicated storage boxes manage parity, mirroring and so on all done with the same chip that's also running their local operating systems, then I have to admit that yes, that sounds like software RAID to me, but the real distinction I've come to draw between software and hardware RAID is a matter of performance and feature set. If said boxes give the same or better performance (I/Ops and throughput) to a workload as a dedicated, internal storage system managed by something like my 9650SE, then hell..... who cares, right? Aside from being rather impressed that such is possible without dedicated XOR chips, that is.
  • Non-issue ... (Score:4, Interesting)

    by Lazy Jones ( 8403 ) on Friday September 18, 2009 @05:47AM (#29464053) Homepage Journal
    Modern RAID arrays show no dramatic performance degradation while rebuilding; with RAID-50/RAID-60 arrays, only a fraction of disk accesses is slower than usual when a single drive is replaced.

    For enterprise level storage systems, this is also a non-issue because of thin provisioning.

  • by BlueParrot ( 965239 ) on Friday September 18, 2009 @05:50AM (#29464073)

    I admit I'm not an expert, but I was under the impression that RAID was mainly about giving you a large number of spindles and some redundancy, so you can serve data quickly even if a couple of drives fail while the servers are under pressure. Surely you would not rely on RAID alone to avoid data loss, since you should be keeping external backups anyway?

    • by gedhrel ( 241953 ) on Friday September 18, 2009 @06:30AM (#29464245)

      You don't rely on RAID to avoid data loss; you rely on it as a first line in providing continuity. We run backups of large systems here, but we tend to do other things too: synchronous live mirroring between sites of the critical data, and better system design. There are some systems where, whilst we _could_ go back to tape (or VTL) at a pinch, having to do so would be a disaster in itself.

      We're designing systems that permit rapid service recovery (for the most critical live data) and a second tier of online recovery to get the rest back. We just can't afford the downtime.

      Double-spindle failures on RAID systems are just one of those things that you _will_ see. Deciding whether a system deserves some other measure of redundancy is mostly an actuarial, rather than a technical, decision.

  • Wrong assumptions (Score:5, Insightful)

    by vojtech ( 565680 ) <vojtech@suse.cz> on Friday September 18, 2009 @06:03AM (#29464131)

    The article assumes that when a drive within a RAID5 array encounters a single-sector failure (the most common failure scenario), the entire disk has to go offline, be replaced and be rebuilt.

    That is utter nonsense, of course. All that's needed is to rebuild a single affected stripe of the array to a spare disk. (You do have spares in your RAID setups, right?)

    As soon as the single stripe is rebuilt, the whole array is in a fully redundant state again - although the redundancy is spread across the drive with a bad sector and the spare.

    Even better, modern drives have internal sector remapping tables and when a bad sector occurs, all the array has to do is to read the other disks, calculate the sector, and WRITE it back to the FAILED drive.
    The drive will remap the sector, replace it with a good one, and tada, we have a well working array again. In fact, this is exactly what Linux's MD RAID5 driver does, so it's not just a theory.
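
    A toy sketch of that repair path (an illustration of the idea, not the actual md driver code): with single parity, the unreadable sector is just the XOR of the matching sectors on the surviving drives, and the result is written straight back so the drive can remap the bad sector.

        def xor_blocks(blocks):
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    out[i] ^= byte
            return bytes(out)

        # A pretend stripe: three data sectors and their parity (parity = XOR of the data).
        d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
        parity = xor_blocks([d0, d1, d2])

        # The drive holding d1 reports an unreadable sector: regenerate it from the others.
        recovered = xor_blocks([d0, d2, parity])
        assert recovered == d1   # this is what gets written back, triggering the drive's remap
        print("recovered sector:", recovered)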

    Catastrophic whole-drive failures (head crash, etc) do happen, too. And there the article would have a point - you need to rebuild the whole array. But then - these are by a couple orders of magnitude less frequent than simple data errors. So no reason to worry again.

    *sigh*

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Even if only a sector in a disk has failed, I'd mark the entire disk as failed and replace it as soon as I could. Maybe I'm paranoid, but I've seen many times that when something starts to fail, it continues failing at increasing speed.

  • by asdf7890 ( 1518587 ) on Friday September 18, 2009 @06:05AM (#29464133)

    If you want smaller drives to speed up rebuild times then, erm, buy smaller drives? You can get ~70GB 10Krpm and 15Krpm drives fairly readily - much smaller than the 500-to-2000-GB monsters and faster too. You can still buy ~80GB PATA drives too, I've seen them when shopping for larger models, though you only save a couple of peanuts compared to the cost of 250+GB units.

    If you can't afford those but still don't want 500+GB drives because they take too long to rebuild if the array is compromised, and management won't let you buy bog-standard 160GB (or smaller) drives as they only cost 20% less than 750GB units without the speed benefits of the high-cost 15Krpm ones, how about using software RAID and only using the first part of the drive? Easily done with Linux's software RAID (partition the drives with a single 100GB (for example) partition, and RAID that instead of the full drive) and I'm sure just as easy with other OSs. You'll get speed bonuses too: you'll be using the fastest part of the drive in terms of bulk transfer speed (most spinning drives are arranged such that the earlier tracks hold more data per revolution) and you'll have lower latency on average, as the heads will never need to move the full diameter of the platter. And you've got the rest of the drive space to expand onto if needed later. Or maybe you could hide your porn stash there.

  • ZFS, Anyone? (Score:2, Interesting)

    by Tomsk70 ( 984457 )

    I've managed to get this going, using the excellent FreeNAS - although proceed with caution, as only the beta build supports it, and I've already had serious (all data lost) crashes twice.

    However the principle is sound, and I'm sure this will become standard before long - the only trouble being that HP, Dell and the like can't simply offer upgrades for existing RAID cards; due to the nature of ZFS, it needs a 'proper' CPU and a gig or two of RAM. Even so, it does protect against many of the problems now be

  • Fountain codes? (Score:3, Interesting)

    by andrewagill ( 700624 ) on Friday September 18, 2009 @06:11AM (#29464155) Homepage
    What about fountain codes [wikipedia.org]? The coding there is capable of recovering from a greater variety of faults.
  • ZFS (Score:5, Informative)

    by DiSKiLLeR ( 17651 ) on Friday September 18, 2009 @06:16AM (#29464179) Homepage Journal

    This is something the ZFS creators have been talking about for some time, and been actively trying to solve.

    ZFS now has triple parity, as well as actively checksumming every disk block.

    • Re:ZFS (Score:5, Informative)

      by DiSKiLLeR ( 17651 ) on Friday September 18, 2009 @06:22AM (#29464209) Homepage Journal

      I thought I should add:

      ZFS speeds up rebuilding a RAID (called resilvering) over traditional non-intelligent or non-filesystem based RAIDS by only rebuilding the blocks that actually contain live data; there's no need to rebuild EVERYTHING if only half the filesystem is in use.

      ZFS also starts the resilvering process by rebuilding the most IMPORTANT parts first, the filesystem metadata, and works its way down the tree to the leaf nodes rebuilding data. This way, if more disks fail, you have rebuilt the most important data possible. If the filesystem metadata is hosed, everything is hosed.

      ZFS tells you which files are corrupt, if any are and insufficient replicas exist due to failed disks.

      All this on top of double or triple parity. :)

  • Old news (Score:2, Interesting)

    by EmTeedee ( 948267 )
    Read that before on slashdot. Why RAID 5 Stops Working In 2009 [slashdot.org]
  • Parity declustering (Score:5, Interesting)

    by Biolo ( 25082 ) on Friday September 18, 2009 @06:37AM (#29464273)

    Actually I like the parity declustering idea that was linked to in that article, seems to me if implemented correctly it could mitigate a large part of the issue. I have personally encountered the hard error on RAID5 rebuild issue, twice, so there definitely is a problem to be addressed...and yes, I do now only implement RAID6 as a result.

    For those who haven't RTFATFALT (RTFA the f*** article links to), parity declustering, as I understand it, is where you have, say, an 8-drive array, but where each block is written to only a subset of those drives, say 4. Now, obviously you lose 25% of your storage capacity (1/4), but consider a rebuild for a failed disk. In this instance only 50% of your blocks are likely to be on your failed drive, so immediately you cut your rebuild time in half, halving your data reads, and therefore your chance of encountering a hard error. Larger numbers of disks in the array, or spanning your data over fewer drives, cuts this further.
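
    Rough numbers for that effect, under the simplifying assumption that blocks are placed uniformly at random across the drives:

        def fraction_to_read(n_drives, span):
            # Chance that a given block touches the failed drive, i.e. the share of
            # the surviving data that has to be read back during the rebuild.
            return float(span) / n_drives

        for span in (8, 4, 2):
            pct = 100.0 * fraction_to_read(8, span)
            print("8 drives, blocks spanning %d: rebuild reads ~%.0f%% of what full-width stripes would need" % (span, pct))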

    Now, consider the flexibility you could build into an implementation of this scheme. Simply by allowing the number of drives a block spans to be configurable on a per-block basis, you could then allow any filesystem on that array to say, on a per-file basis, how many disks to span over. You could then allow apps and sysadmins to say that a given file needs maximum write performance, so diskSpan=2, which gives you effectively RAID10 for that file (each block is written to 2 drives, but each of the multiple blocks in the file is likely to be written to a different pair of drives; not quite RAID10, but close). Where you didn't want a file to consume 2x its size on the storage system, you could allow a higher diskSpan number. You could also allow configurable parity on a per-block basis, so particularly important files can survive multiple disk failures and temp files could have no parity. There would need to be a rule, however, that parity+diskSpan is less than or equal to the number of devices in the array.

    Obviously there is an issue here where the total capacity of the array is not knowable; files with diskSpan numbers lower than the default for the array will reduce the capacity, and numbers higher will increase it. This alone might require new filesystems, but you could implement today's filesystems on this array as long as you disallowed the per-block diskSpan feature.

    This even helps for expanding the array, as there is now no need to re-read all of the data in the array (with the resulting chance of encountering a hard error, adding huge load to the system causing a drive to fail, etc). The extra capacity is simply available. Over time you probably want a redistribution routine to move data from the existing array members to the new members to spread the load and capacity.

    How about you implement a performance optimiser too, that looks for the most frequently accessed blocks and ensures they are evenly spread over the disks. If you take into account the performance of the individual disks themselves, you could allow for effectively a hierarchical filesystem, so that one array contains, say, SSD, SAS and SATA drives, and the optimiser ensures that data is allocated to individual drives based on the frequency of access of that data and the performance of the drive. Obviously the applications or sysadmin could indicate to the array which files were more performance sensitive, so influencing the eventual location of the data as it is written.

  • by Chrisq ( 894406 ) on Friday September 18, 2009 @06:44AM (#29464315)
    Will scalable distributed storage systems like Hadoop [wikipedia.org] and Google File System take over from RAID?
  • by trims ( 10010 ) on Friday September 18, 2009 @06:45AM (#29464323) Homepage

    As others have mentioned, this is something that is discussed on the ZFS mailing lists frequently.

    For more info there, check out the digest for zfs-discuss@opensolaris.org

    and, in particular, check out Richard Elling's blog [sun.com]

    (Disclaimer: I work for Sun, but not in the ZFS group)

    The fundamental problem here isn't the RAID concept; it's that the throughput and access times of spinning rust haven't changed much in 30 years. Fundamentally, today's hard drive is no more than 100 times as fast (in both throughput and latency) as a 1980s one, while it holds well over 1 million times more.

    ZFS (and other advanced filesystems) will now do partial reconstruction of a failed drive (that is, they don't have to bit copy the entire drive, only the parts which are used), which helps. But there are still problems. ZFS's pathological case results in rebuild times of 2-3 WEEKS for a 1TB drive in a RAID-Z (similar to RAID-5). It's all due to the horribly small throughput, maximum IOPs, and latency of the hard drive.
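
    Ballpark arithmetic behind that pathological case (typical round figures for a 7200 RPM drive, not measurements of any particular model):

        capacity = 1e12        # 1 TB drive
        seq_rate = 100e6       # ~100 MB/s sustained sequential
        rand_iops = 200        # rough random IOPS for a 7200 RPM drive
        io_size = 4096         # a badly fragmented pool forces tiny random reads

        seq_hours = capacity / seq_rate / 3600
        frag_days = capacity / io_size / rand_iops / 86400

        print("best case, pure sequential copy: about %.1f hours" % seq_hours)
        print("pathological case, 4 KB random reads: about %.0f days" % frag_days)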

    SSDs, on the other hand, are nowhere near as much of a problem. They've got considerably more throughput than a hard drive and, more importantly, THOUSANDS of times better IOPS. Frankly, more than any other reason, I expect the significant IOPS advantage of the SSD to sound the death knell of HDs in the next decade. By 2020, expect HDs to be gone from everything, even in places where HDs still have better GB/$. The rebuild rates and maintenance of HDs simply can't compete with flash.

    Note: IOPS = I/Os Per Second, or the number of read/write operations (regardless of size) which a disk can service. HDs top out around 350, consumer SSDs do under 10,000, and high-end SSDs can do up to 100,000.

    -Erik

    • Re: (Score:3, Insightful)

      "The fundamental problem here isn't the RAID concept, is that the throughput and access times of spinning rust haven't changed much in 30 years."

      Uh, there's another bigger problem. The drive error rate (when reading data) hasn't changed that much either while data on a drive has dramatically increased.

      When doing a rebuild when you've lost all redundancy a single read error means the rebuild will fail. Increase the size of a drive (while keeping error rates constant) and you increase the likelihood of a re
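
      The arithmetic behind that (10^-14 errors per bit read is the commonly quoted consumer-drive spec; treat these as order-of-magnitude figures only):

          URE_PER_BIT = 1e-14   # unrecoverable read error rate, consumer SATA class

          def chance_of_clean_read(terabytes):
              bits = terabytes * 1e12 * 8
              # Probability of reading every bit without hitting a single unrecoverable error.
              return (1.0 - URE_PER_BIT) ** bits

          for tb in (0.5, 2.0, 6.0, 12.0):
              print("read %4.1f TB during a rebuild -> %2.0f%% chance of no read error" % (tb, 100 * chance_of_clean_read(tb)))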

  • The real problem with "classic" RAID is that a single error means a total rebuild of the array.

  • by jayhawk88 ( 160512 ) <jayhawk88@gmail.com> on Friday September 18, 2009 @07:08AM (#29464417)

    The cloud. Just cloud it, baby. Nothing bad ever happens in the cloud; they're so white and fluffy after all.

  • by davros-too ( 987732 ) on Friday September 18, 2009 @07:13AM (#29464441) Homepage
    Um, don't schemes like raid 1+0 solve the parity rebuild problem? Even in the worst case of full disk loss, only one disk needs to be rebuilt and even for a large disk that doesn't take very long. Am I missing something?
  • by Targon ( 17348 ) on Friday September 18, 2009 @07:43AM (#29464615)

    RAID 4 is where you have one dedicated parity drive. RAID 5 improves on this by spreading the parity information across all the drives in the array. RAID 6 adds a second parity block for increased reliability, but as a result of the extra write for that second parity block, it slows down write speeds.
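
    The slowdown comes from the read-modify-write cycle behind every small write; here is a rough count of the physical I/Os per logical write (assuming the classic read-modify-write path, with no full-stripe optimization):

        def small_write_ios(parity_blocks):
            # Read the old data block and the old parity block(s),
            # then write the new data block and the new parity block(s).
            return 2 * (1 + parity_blocks)

        print("RAID 4/5 (single parity): %d disk I/Os per small write" % small_write_ios(1))
        print("RAID 6   (double parity): %d disk I/Os per small write" % small_write_ios(2))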

    The real key to making RAID 4, 5, or 6 work is that you really need 4-6 drives in the array to take advantage of the design. I wouldn't say that it will fall out of favor though, because having solid protection against a single drive going bad really is critical for many businesses. Backups are all well and good for when your system crashes, but for most businesses, uptime is even more critical. So: backups for data, so corruption problems can be rolled back, and RAID 5, 6, or 10 for stability and to avoid having the entire system die if one drive goes bad. What takes more time, doing a data restore from a backup when an individual application has problems, or having to restore the entire system from a backup, with the potential that the backup itself was corrupted?

    With that said, web farms and other applications that can get away with just using a cluster approach instead of a single well-designed machine (or set of machines) have become popular, but there are many situations which make a system with one or more RAID arrays a better choice. The focus on RAID 0 and 1 for SMALL systems and residential setups has simply kept many people from realizing how useful a 4-drive RAID 5 setup would be.

    Then again, most people go to a backup when they screw up their system, not because of a hard drive failure. With techs upgrading hardware before they run into a hard drive failure, the need for RAID 1, 4, 5, and 6 has dropped.

    I will say this: since a RAID 5 array can rebuild on the fly (it keeps working even if one drive fails), the rebuild time itself does not significantly impact system availability. Gone are the days when a rebuild had to be done while the system was down.

  • by niola ( 74324 ) <jon@niola.net> on Friday September 18, 2009 @07:56AM (#29464713) Homepage

    I use RAID6 for several high-volume machines at work. Having double parity plus a hot spare means rebuild time is no worry.

    But if you are not a fan, you can always throw something together with ZFS's RAIDZ or RAIDZ2, which is also distributed parity, except that the ZFS filesystem also checksums every block and keeps multiple (distributed) copies to detect and fix data corruption before it becomes a bigger problem.

    People using ZFS have been able to detect silent data corruption from a faulty power supply that other solutions would never have found just because of the checksumming process.

  • by Joce640k ( 829181 ) on Friday September 18, 2009 @08:27AM (#29464913) Homepage

    Is he saying that you can never read a whole hard disk because it will fail before you get to the end?

    That's what it seems like he's saying, but my hard disks usually last for years of continuous use, so I'm not sure it's true.

  • by ThreeGigs ( 239452 ) on Friday September 18, 2009 @09:09AM (#29465255)

    Here's what I want, folks:
    A 5.25 inch device with 5 double-sided platters running at 5400 RPM. Basically the same size as a desktop CD/DVD drive, a la the Quantum Bigfoot.
    I want 8 sides of the platters dedicated to data, and the other two sides dedicated to parity (or one to parity and the other to servo), essentially a self-contained RAID on a single disk.
    I want all data heads to write and read simultaneously, in parallel. The idea is to have 64-byte sectors on each platter which are recombined into a 512-byte result. 8 heads writing and reading in parallel means HUGE throughput for sequential operations.

    It's RAID 5 or 6 on a single disk, although without spindle redundancy.

    And I also want a high-performance option: 2 sets of read/write heads 180 degrees apart, which effectively would cut seek times in half, making the drive perform more like a 10k RPM drive. With current densities, that's 12 TB in the volume of a DVD drive. It solves speed, sector error recovery and capacity issues. The only thing missing is a data bus that can handle the throughput.

    • Re: (Score:3, Insightful)

      by EmagGeek ( 574360 )

      Without spindle redundancy...

      or logic element redundancy...

      or power supply redundancy...

      or cable interconnect redundancy...

      add to that the cost of adding dedicated RAID hardware to every single drive (that's an expensive PLD), and it's no wonder it's not on the market. High cost - no return.

    • Re: (Score:3, Interesting)

      by adisakp ( 705706 )

      I want 8 sides of the platters dedicated to data

      More platters == more mass. Which translates to more power required for the motor, higher energy usage and much more heat generated by the drive. Generating more heat == quicker hardware failures. Also with bigger / larger / more platters, it's much harder to spin the platters faster. Usually more platters == slower RPM drive speed and much slower seek rates. If you can do fewer, smaller, and lighter platters, you can make the drive spin faster and perform better -- this is exactly what the Velocirapto

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Friday September 18, 2009 @09:26AM (#29465457)
    Comment removed based on user account deletion
  • The chart he's using goes from SCSI, to Fibre Channel, to SAS... to SATA. When you go from professional/server interfaces to hobby/desktop ones, of course the rebuild time skyrockets. If you had done this article a few years ago and slid ATA in as the last data point instead of Fibre Channel, you'd have seen the knee show up then instead of now. How about looking at 2010 and doing the calculations with 6 Gb SAS interconnect and 3 Gb drives, instead of 1.5 Gb SATA and 1 Gb drives?
