Disk Drive Failures 15 Times What Vendors Say

jcatcw writes "A Carnegie Mellon University study indicates that customers are replacing disk drives far more frequently than vendor estimates of mean time to failure (MTTF) would suggest. The study examined large production systems, including high-performance computing sites and Internet services sites running SCSI, FC and SATA drives. The data sheets for the drives indicated MTTF between 1 and 1.5 million hours, which should mean annual failure rates of at most 0.88%; observed annual replacement rates were instead between 2% and 4%. The study also shows no evidence that Fibre Channel drives are any more reliable than SATA drives."
  • Re:Repeat? (Score:4, Informative)

    by georgewilliamherbert ( 211790 ) on Friday March 02, 2007 @05:20PM (#18211790)
    We did both this study and the Google study in the first couple of days after FAST was over. Completely redundant....
  • In other news... (Score:5, Informative)

    by Mr. Underbridge ( 666784 ) on Friday March 02, 2007 @05:22PM (#18211808)
    ...Carnegie Mellon researchers can't tell a mean from a median. This is inherently a long-tailed distribution in which the mean will be much higher than the median. Imagine a simple situation in which failure rates are 50%/yr, but those that last beyond a year last a long time. Mean time to failure might be 1000 years. You simply can't compare the statistics the way they have without knowing a lot more about the distribution than I saw in the article. Perhaps I missed it while skimming.
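    To put the parent's mean-versus-median point in concrete terms, here is a minimal illustrative sketch (made-up numbers, not data from the study): a two-population lifetime model in which roughly half the drives die early while the survivors last a very long time yields a mean lifetime far above the median.

        import random
        import statistics

        random.seed(42)

        # Hypothetical mixture: ~50% of drives are short-lived (mean life 0.5 years),
        # the rest are long-lived (mean life 2000 years). The numbers are invented
        # purely to show how a heavy tail separates the mean from the median.
        lifetimes = [
            random.expovariate(1 / 0.5) if random.random() < 0.5
            else random.expovariate(1 / 2000.0)
            for _ in range(100_000)
        ]

        print(f"mean lifetime:   {statistics.mean(lifetimes):10.1f} years")
        print(f"median lifetime: {statistics.median(lifetimes):10.1f} years")
        # The mean comes out near 1000 years even though the median is only a few
        # years and nearly half the drives are dead within the first year or two.
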
  • I believe it... (Score:3, Informative)

    by madhatter256 ( 443326 ) on Friday March 02, 2007 @05:23PM (#18211832)
    Yeah. Don't rely on an HDD after it surpasses its manufacturer's warranty.
  • Re:Repeat? (Score:5, Informative)

    by ajs ( 35943 ) <ajs AT ajs DOT com> on Friday March 02, 2007 @05:34PM (#18211992) Homepage Journal

    The best part about the entire thing is the very last quote:

    "If they told me it was 100,000 hours, I'd still protect it the same way. If they told me if was 5 million hours I'd still protect it the same way. I have to assume every drive could fail."

    Just common sense.
    It's "common sense," but not as useful as one might hope. What MTTF tells you is, within some expected margin of error, how much failure you should plan on in a statistically significant farm. So, for example, I know of an installation that has thousands of disks used for everything from root disks on relatively drop-in-replaceable compute servers to storage arrays. On the budgetary side, that installation wants to know how much replacement cost to expect per annum. On the admin side, that installation wants to be prepared with an appropriate number of redundant systems, and wants to be able to assert a failure probability for key systems. That is, if you have a raid array with 5 disks and one spare, then you want to know the probability that three disks will fail on it in the, let's say, 6 hour worst-case window before you can replace any of them. That probability is non-zero, and must be accounted for in your computation of anticipated downtime, along with every other unlikely, but possible event that you can account for.

    When a vendor tells you to expect a failure rate below 1% but it's really 2-4%, that's a HUGE shift in the impact on your organization (a rough sketch of the math follows this comment).

    When you just have one or a handful of disks in your server at home, that's a very different situation from a datacenter full of systems with all kinds of disk needs.
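    To put rough numbers on the RAID scenario in the comment above, here is a back-of-the-envelope sketch only, assuming independent failures and a constant hazard rate (assumptions the CMU paper argues are not safe). It estimates the probability of losing three of six drives inside a six-hour replacement window at two different per-drive failure rates.

        from math import comb

        def p_multi_failure(annual_rate: float, drives: int, k: int, window_hours: float) -> float:
            """Probability that at least k of `drives` independent drives fail
            within `window_hours`, given a per-drive annual failure rate.
            Assumes independence and a constant hazard rate."""
            p_hour = annual_rate / 8760.0                    # crude per-hour failure probability
            p_window = 1 - (1 - p_hour) ** window_hours      # per-drive chance of dying in the window
            return sum(
                comb(drives, i) * p_window**i * (1 - p_window) ** (drives - i)
                for i in range(k, drives + 1)
            )

        # 5 data disks plus one spare, 6-hour worst-case replacement window
        for afr in (0.0088, 0.04):   # datasheet ~0.88% vs. observed ~4%
            print(f"AFR {afr:.2%}: P(>=3 of 6 fail in 6 h) = {p_multi_failure(afr, 6, 3, 6):.2e}")

    Even in this toy model, moving from the datasheet rate to the observed rate makes the triple-failure case roughly a hundred times more likely, which is the budgeting and downtime impact the comment is describing.
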
  • by reset_button ( 903303 ) on Friday March 02, 2007 @05:36PM (#18212036)
    Here are the main conclusions:
    • the observed time to disk replacement is consistently much lower than the datasheet MTTF would suggest
    • SATA drives are not necessarily less reliable than FC and SCSI disks
    • contrary to popular belief, hard drive replacement rates do not enter a steady state after the first year of operation, and in fact steadily increase over time
    • early onset of wear-out has a stronger impact on replacement rates than infant mortality
    • the common assumptions that time between failures follows an exponential distribution, and that failures are independent, do not hold
    It was an interesting paper (won the best paper award) at this year's FAST (File and Storage Technologies) conference. Here is a link [cmu.edu] to the paper, and the summary [usenix.org] from the conference.
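    For reference, the 0.88% figure quoted in the summary is just the datasheet MTTF converted to an annualized rate under the vendors' own exponential-lifetime assumption; a quick sketch of that conversion (my arithmetic, not code from the paper):

        import math

        HOURS_PER_YEAR = 24 * 365  # 8760

        def nominal_afr(mttf_hours: float) -> float:
            """Annualized failure rate implied by a datasheet MTTF, assuming an
            exponential lifetime distribution (the very assumption the paper disputes)."""
            return 1 - math.exp(-HOURS_PER_YEAR / mttf_hours)

        for mttf in (1_000_000, 1_500_000):
            print(f"MTTF {mttf:>9,} h  ->  nominal AFR {nominal_afr(mttf):.2%}")
        # Roughly 0.58%-0.87% per year, versus the 2%-4% replacement rates
        # (and up to 13% on some systems) reported in the field.
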
  • New meaning for RAID: Redundant Articles of Identical Discourse.
    Slashdot has a high rate of RAID, which is a bad thing. It has been a whole 9 days. Slashdot needs a story moderation system so dupe articles can get modded out of existence. Ditto for slashdot editors who do the duping! :) (I have long since disabled tagging since 99% of the tags were completely worthless: "yes", "no", "maybe", "fud", etc. If tagging is actually useful now, please let me know!)

    Can we get redundant posting on the story about google's paper [slashdot.org]?
  • by mollymoo ( 202721 ) on Friday March 02, 2007 @05:42PM (#18212110) Journal

    TFA seems surprised by SATA drives lasting as long as Fibre... why on earth would your data interface have any consequences for the drive internals?

    Fibre Channel drives, like SCSI drives, are assumed to be "enterprise" drives and therefore better built than "consumer" SATA and PATA drives. It's nothing inherent to the interface, but a consequence of the environment in which that interface is expected to be used. At least, that's the idea.

  • by Beardo the Bearded ( 321478 ) on Friday March 02, 2007 @05:43PM (#18212126)
    What, really?

    The same companies that lie about the capacity on EVERY SINGLE DRIVE they make? You don't think that they're a bunch of lying fucking weasels? (We're both using sarcasm here.)

    I don't care how you spin it. 1024 is the multiple. NOT 1000!

    Failure doesn't get fixed because making a drive more reliable means it costs more. If it costs more, it's not going to get purchased.

  • Re:In other news... (Score:4, Informative)

    by Falkkin ( 97268 ) on Friday March 02, 2007 @05:57PM (#18212300) Homepage
    In other news, Carnegie Mellon researchers know more about statistics than you give them credit for; blame ComputerWorld for crappy coverage of what the paper says. If you read the paper or the abstract, the researchers actually claim the opposite of what you are suggesting, namely, that the "infant mortality effect" (bathtub curve) often claimed for hard drives isn't actually the case. See Figure 4 in the paper and Section 5 ("Statistical properties of disk failures"). The paper is online here:

    http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html [usenix.org]
  • by Spazmania ( 174582 ) on Friday March 02, 2007 @05:59PM (#18212324) Homepage
    They certainly charge enough more. SATA drives run about $0.50 per gig. Comparable Fibre Channel drives run about $3 per gig. A sensible person would expect the Fibre Channel drive to be as much as 6 times as reliable, but per the article there is no difference.
  • Re:Even better ... (Score:5, Informative)

    by Falkkin ( 97268 ) on Friday March 02, 2007 @06:01PM (#18212348) Homepage
    This is handled in the paper. See this graph: http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/img14b.PNG [usenix.org]

    Unfortunately there is no big "spike"; the average replacement rate just grows and grows with time.
  • just assume 3 years (Score:5, Informative)

    by crabpeople ( 720852 ) on Friday March 02, 2007 @06:05PM (#18212416) Journal
    A good rule of thumb is 3 years. Most hard drives fail in 3 years. I don't know why, but I'm currently seeing a lot of bad 2004-branded drives and consider that right on schedule. Last year the '02-'03 drives were the ones failing left and right. I just pulled one this morning that's stamped March '04; it just started acting up a few days ago. Like clockwork.

  • by dangitman ( 862676 ) on Friday March 02, 2007 @06:09PM (#18212478)
    Pick any two.

    I've noticed this personally. Now, anecdotal evidence doesn't count for a lot, and it may be that we are pushing our drives more. But back in the day of 40MB hard drives that cost a fortune, they used to last forever. The only drives I ever had fail on me in the old days were the SyQuest removable HD cartridges, for obvious reasons. But even they didn't fail that often, considering the extra wear-and-tear of having a removable platter with separate heads in the drive.

    But these days, with our high-capacity ATA drives, I see hard drives failing every month. Sure, the drives are cheap and huge, but they don't seem to make them like they used to. I guess it's just a consequence of pushing the storage and speed to such high levels, and cheap mass-production. Although the drives are cheap, if somebody doesn't back up their data, the costs are incalculable if the data is valuable.

  • by Lord Ender ( 156273 ) on Friday March 02, 2007 @06:11PM (#18212506) Homepage
    Before computers were used in real engineering, we could get away with "k" sometimes meaning 1024 (like in memory addresses) and sometimes meaning 1000 (like in network speeds). Those days are past. Now that computers are part of real engineering work, even the slightest amount of ambiguity is not acceptable.

    Differentiating between "k" (=1000) and "ki" (=1024) is a sign that the computer industry is finally maturing. It's called progress.

  • Off-Topic: SI Units (Score:5, Informative)

    by ewhac ( 5844 ) on Friday March 02, 2007 @06:21PM (#18212634) Homepage Journal

    I just can't believe that the same vendors that would misrepresent the capacity of their disk by redefining a Gigabyte as 1,000,000,000 bytes instead of 1,073,741,824 bytes would misrepresent their MTBF too!

    Not that this is actually relevant or anything, but there's been a long-standing schism between the computing community and the scientific community concerning the meaning of the SI prefixes Kilo, Mega, and Giga. Until computers showed up, Kilo, Mega, and Giga referred exclusively to multipliers of exactly 1,000, 1,000,000, and 1,000,000,000, respectively. Then, when computers showed up and people had to start speaking of large storage sizes, the computing guys overloaded the prefixes to mean powers of two which were "close enough." Thus, when one speaks of computer storage, Kilo, Mega, and Giga refer to 2**10, 2**20, and 2**30 bytes, respectively. Kilo, Mega, and Giga, when used in this way, are properly slang, but they've gained traction in the mainstream, causing confusion among members of differing disciplines.

    As such, there has been a decree [nist.gov] to give the powers of two their own SI prefix names. The following have been established:

    • 2**10: Kibi (abbreviated Ki)
    • 2**20: Mebi (Mi)
    • 2**30: Gibi (Gi)

    These new prefixes are gaining traction in some circles. If you have a recent release of Linux handy, type /sbin/ifconfig and look at the RX and TX byte counts. It uses the new prefixes.

    Schwab
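    The practical gap between the two conventions is easy to quantify; here is a small illustrative sketch (function names are mine) comparing how the same capacity reads under decimal (SI) and binary (IEC) prefixes:

        def decimal_gb(capacity_bytes: int) -> float:
            """Capacity in decimal gigabytes (GB), the unit drive vendors use."""
            return capacity_bytes / 1_000_000_000

        def binary_gib(capacity_bytes: int) -> float:
            """Capacity in binary gibibytes (GiB), the unit many OS tools report."""
            return capacity_bytes / 2**30

        drive_bytes = 500_000_000_000  # a nominal "500 GB" drive
        print(f"{decimal_gb(drive_bytes):.1f} GB == {binary_gib(drive_bytes):.1f} GiB")
        # 500.0 GB == 465.7 GiB -- roughly a 7% difference at the gigabyte scale,
        # which is the entire disagreement in this subthread.
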

  • Re:Not So Fuzzy math (Score:4, Informative)

    by Annoying ( 245064 ) on Friday March 02, 2007 @06:22PM (#18212636)
    0.88% != 0.88
    0.0088 * 15 = 0.132 (13.2%)
    13% you say? The excerpt says 2%-4%, but RTA and you'll see they report rates up to 13% on some systems.
  • by Anonymous Coward on Friday March 02, 2007 @06:30PM (#18212736)
    Where I work we have some large compute clusters where the nodes report memory errors. It's actually very common for a memory module to start throwing errors that eventually exceed a threshold for replacement.

    We see everything eventually die - power supplies, fans, motherboards, RAM, CPUs, drives. Nothing is immune from "wearing out" except maybe the boxes themselves.
  • This is only news... (Score:2, Informative)

    by rickb928 ( 945187 ) on Friday March 02, 2007 @07:25PM (#18213312) Homepage Journal
    ...to those of you who haven't managed 24x7x365 servers very much. And it's little news to anyone who has a computer at all.

    I expect most desktop drives to last 5 years max. MAX. No manufacturer has an edge. It's just the way it is. MTBF is fiction.

    For an always-on server, I expect failures about every 3-4 years. For my clients who cared enough to pay for the very best, I replaced the drives in the 3rd year without waiting. No failures costs a bit more.

    My experience is that Seagate and Fujitsu are my best server drives. IBM was also on the list, but I'm watching Hitachi. No decision.

    The losers: Quantum (thankfully gone), Samsung (until recently), Maxtor. Not my opinion, my experience.

    Now, in fairness, these are some of my historical losers:

    Seagate: Early IDE drives and the 'stiction' problem. Remember banging drives to get them started?

    Quantum 'Bigfoot' drives: popular in Compaq machines, the 5.25", 0.7"-thin piece of junk. Died often. Even Compaq admitted these were bad.

    Seagate SCSI drives: Many different types had a bad habit of going off-line for no apparent reason. Your Novell server would log the 'device deactivated due to a non-media defect' error. Just restarting the bus controller would sometimes wake them up. Sometimes it took repowering the drives. Would happen every few months. Usually when I was elsewhere...

    And then there was Miniscribe.

    But MTBF numbers are universally fiction. Imagine trying to sell the idea of a wave bearing lasting 16 years to an engineer with real-world experience. I figure MTBF numbers come out of the marketing department.

    -rick
  • by Chonine ( 840828 ) on Friday March 02, 2007 @07:39PM (#18213458)
    Standard metric is indeed powers of 10, and a megabyte is indeed 10^6 bytes.

    To clear up the confusion, notation for binary multiples, as in 2^20 bytes, was developed. That would be a mebibyte.

    http://en.wikipedia.org/wiki/Mebibyte [wikipedia.org]

  • Re:Repeat? (Score:4, Informative)

    by ShakaUVM ( 157947 ) on Friday March 02, 2007 @09:30PM (#18214254) Homepage Journal
    Except MTBF is just pulled out of their asses. Look at the development cycle of a hard drive. Look at the MTBF. I used to work for an engineering company, and have worked doing test suites to determine MTBF. Sure, there's numbers involved, but it's probably 60% wishful thinking and 40% science.

    Believe me, they aren't determining an 11 year MTBF empirically.
  • Re:Check SMART Info (Score:3, Informative)

    by Chalex ( 71702 ) on Friday March 02, 2007 @09:32PM (#18214266) Homepage
    Slightly off-topic, but if you haven't read the Google paper on Self-Monitoring, Analysis and Reporting Technology (SMART) and checked the SMART info provided by your drive to see if it is having errors, you probably should. The paper is available here: http://hardware.slashdot.org/hardware/07/02/18/0420247.shtml [slashdot.org]

    The conclusions are roughly the following: a) if there are SMART errors, the disk will fail soon, b) if there are no SMART errors, the disk is still likely to fail. They saw no SMART errors on 36% of their failed disks.
  • When I was trying the Vista RC, it told me that my drive was close to failing. ... About the only feature that impressed me in Vista, sadly.
    Be sad no more. SmartMonTools [sourceforge.net] will run in UNIX or Windows and notify you if it detects SMART errors. For the Windows installer, look for the phrase "Install the Windows package" on the smartmontools home page.
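    For anyone who wants to script that check, here is a minimal sketch that wraps smartctl from Python; it assumes smartmontools is installed, assumes the device path is right for your system, and only looks at the overall health verdict (the exact output wording varies by drive type):

        import subprocess

        def smart_health_report(device: str = "/dev/sda") -> str:
            """Run smartctl's overall health check and return its raw output.
            Assumes smartmontools is installed and `device` exists."""
            result = subprocess.run(
                ["smartctl", "-H", device],
                capture_output=True, text=True,
            )
            return result.stdout

        if __name__ == "__main__":
            report = smart_health_report()
            # ATA drives typically print "PASSED"; SCSI drives print "OK".
            if "PASSED" in report or "OK" in report:
                print("Drive reports healthy -- but per the Google paper, a clean")
                print("SMART report still doesn't mean the drive won't fail.")
            else:
                print("SMART health check did not come back clean: back up and replace.")
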
  • by CorporalKlinger ( 871715 ) on Friday March 02, 2007 @10:08PM (#18214468)
    I think one of the key problems here isn't necessarily the statistical methods used; it's that the CMU team was comparing real-life drive performance to the "ideal" performance levels predicted by the drive manufacturers. Allow me to provide two examples of this "apples to oranges" comparison problem.

    I have had two computers with power supply units that were "acting up." They ended up killing my hard drives on multiple occasions - Seagates, WD's, Maxtors, etc. It didn't matter what type of drive you put in these systems; the drive would die after anywhere from a week to two years. I later discovered that the power supplies were the problem, replaced them with brand new ones, and replaced the drives one last time. That was quite some time ago (years), and those drives, although small, still work, and have been transferred into newer computer systems since that time. The PSU was killing the drives; they weren't inherently bad, nor did they have a manufacturing defect. A friend of mine who lives in an apartment building constructed circa 1930 experienced similar problems with his drives. After just a few months, it seemed like his drives would spontaneously fail. When I tested his grounding plug, I found that it was carrying a voltage of about 30V (a hot ground - how wonderful). Since he moved out of that building and replaced his computer's PSU, no drive failures.

    The same type of thing is true in automobile mileage testing. Car manufacturers must subject their cars to tests based on rules and procedures dictated by state and federal government agencies. These tests are almost never real world - driving on hilly terrain, through winds, with the headlights and window wipers on, plus the AC for defrost. They're based on a certain protocol developed in a laboratory to level the playing field and ensure that the ratings, for the most part, are similar. It simply means when you buy a new car, you can expect that under ideal conditions and at the beginning of the vehicle's life, it should BE ABLE to get the gas mileage listed on the window (based on an average sampling of the performance of many vehicles).

    My point is that there really isn't a decent way to go about ensuring that an estimated statistic is valid for individual situations. By modifying the environmental conditions, the "rules of the game" change. A data center with exceptional environmental control and voltage regulation systems, and top-quality server components (PSU's, voltage regulators, etc.) should expect to experience fewer drive failures per year than an old chicken-shack data center set up in some hillbilly's back yard out in the middle of nowhere where quality is the last thing on the IT team's mind. It's impractical to expect that EVERY data center will be ideal - and since it's very, very difficult to have better than the "ideal" testing conditions used in the MTTF tests - the real-life performance can only move towards more frequent and early failures. Using the car example above, since almost nobody is going to be using their vehicle in conditions BETTER than the ideal dictated by the protocols set forth by the government, and almost EVERYONE will be using their vehicles under worse conditions, the population average and median have nowhere to go but down. That doesn't mean the number is wrong, it just means that it's what the vehicle is capable of - but almost never demonstrates in terms of its performance - since ideal conditions in the real world are SO rare.
