
Disk Failure Rates More Myth Than Metric

Lucas123 writes "Mean time between failure (MTBF) ratings suggest that disks can last from 1 million to 1.5 million hours, or 114 to 170 years, but study after study shows that those metrics are inaccurate for determining hard drive life. One study found that some disk drive replacement rates were greater than one in 10, nearly 15 times what vendors claim, and all of these studies show failure rates growing steadily with the age of the hardware. One former EMC employee turned consultant said, 'I don't think [disk array manufacturers are] going to be forthright with giving people that data because it would reduce the opportunity for them to add value by 'interpreting' the numbers.'"
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Saturday April 05, 2008 @03:34PM (#22974524)
    Comment removed based on user account deletion
    • by Murphy Murph ( 833008 ) <sealab.murphy@gmail.com> on Saturday April 05, 2008 @03:40PM (#22974564) Journal

      I've gone through many over the years, replacing them as they became too small - still using some small ones many years old for minor tasks, etc. - and the only drive I've ever had partially fail is the one I accidentally launched across a room.

      My anecdotal converse is I have never had a hard drive not fail. I am a bit on the cheap side of the spectrum, I'll admit, but having lost my last 40GB drives this winter I now claim a pair of 120s as my smallest.
      I always seem to have a use for a drive, so I run them until failure.

      • My anecdotal converse is I have never had a hard drive not fail. I am a bit on the cheap side of the spectrum, I'll admit, but having lost my last 40GB drives this winter I now claim a pair of 120s as my smallest. I always seem to have a use for a drive, so I run them until failure.

        If this were the case, I would seriously consider looking for a problem that's not directly related to the hard drives themselves. Around 80% of HDD failures are controller board failures; I wonder if maybe your setup is experi
        • I have very clean power, and use UPSs to boot. I believe I simply use them much longer than average. How many drives have you had running 24x7x365 for seven years?
      • by Epistax ( 544591 )
        Judging by your nick, you aren't representative of everyone. Just anyone who happens to read this message.
    • by Anonymous Coward on Saturday April 05, 2008 @03:43PM (#22974578)
      Wait. You've got a huge Wang, and you're throwing it out? D00d, that's just uncool. Give it to someone else at least. It would be fun to ask people "wanna come see my huge Wang?" just to see their reaction! :)

      hah. captcha word: largest
    • Re: (Score:3, Insightful)

      by Anonymous Coward
      Drive failures are actually fairly common, but usually the failures are due to cooling issues. Given that most PCs aren't really set up to ensure decent hard drive cooling, it is probable that the failure ratings are inflated due to operation outside of the expected operational parameters (which are probably not conservative enough for real usage). In my opinion, if you have more than a single hard drive closely stacked in your case you should have some sort of hard drive fan.
      • by hedwards ( 940851 ) on Saturday April 05, 2008 @04:09PM (#22974742)
        I think cooling issues are somewhat less common than most people think, but they are definitely significant. And I wouldn't care to suggest that people neglect to handle heat dissipation on general principle.

        Dirty, spiky power is a much larger problem. A few years back I had 3 or 4 nearly identical WD 80gig drives die within a couple of months of each other. They were replaced with identical drives that are still chugging along fine all this time later. The only major difference is that I gave each system a cheapo UPS.

        Being somewhat cheap, I tend to use disks until they wear out completely. After a few years I shift the disks to storing things which are permanently archived elsewhere, or to swap. Seems to work out fine; the only problem is what happens if the swap goes bad while I'm using it.
        • by afidel ( 530433 ) on Saturday April 05, 2008 @05:01PM (#22975070)
          I would tend to agree with that. I run a datacenter that's cooled to 74 degrees and has good clean power from the online UPSes, and I've had 6 drive failures out of about 500 drives over the last 22 months. Three were from older servers that weren't always properly cooled (the company had a crappy AC unit in their old data closet). The other three all died in their first month or two after installation. So properly treated server-class drives are dying at a rate of about .5% per year for me; I'd say that jibes with the manufacturer MTBF.
        • Re: (Score:3, Interesting)

          by Reziac ( 43301 ) *
          I live where the power spikes and sags constantly. My machines are all on UPSs. And each PC has a decent quality PSU. And if a HD runs more than "pleasantly warm" to the touch, it gets its own dedicated fan. Consequently, I firmly believe all HDs are supposed to live A Long Time... the oldest of my 24/7 HDs right now is 10 years old, and has about 80,000 actual hours on it -- Like yourself, I think they're supposed to be worn out before being thrown out. :)

          Of course, yonder is a large stack of backups, whic
      • by GIL_Dude ( 850471 ) on Saturday April 05, 2008 @04:09PM (#22974748) Homepage
        I'd agree with you there; I have had probably 8 or 9 hard drives fail over the years (I currently have 10 running in the house right now and I have 8 running at my desk at work, so I do have a lot of drives). I am sure that I have caused some of the failures by just what you are talking about - I've maxed out the cases (for example my server has 4 drives in it, but was designed for 2 - I had to make my own bracket to jam the 4th in there, the 3rd went in place of a floppy). But I've never done anything about cooling and I probably caused this myself. Although to hear the noises coming from some of the platters when they failed I'm sure at least a couple weren't just heat. For example at work I have had 2 drives fail in just bog standard HP Compaq dc7700 desktops (without cramming in extra stuff). Sometimes they just up and die, other times I must have helped them along with heat.
        • Re: (Score:3, Informative)

          by Depili ( 749436 )

          Excess heat can cause the lubricant of a HD to go bad and cause weird noises; logic board failures and head positioning failures also cause quite a racket.

          In my experience most drives fail without any indication from SMART tests (i.e. logic board failures); bad sectors are quite rare nowadays.

      • When they fail within minutes, in an open box, with extra fans blowing across them (4 out of 4 from one batch, 2 out of 4 with a replacement batch - and yes, they were also individually checked in another machine afterwards, but let's face it, when they're making grinding or zip-zip-zip noises, they're defective) there's a problem with quality control. Specifically, China.

        Also, do NOT use those hard drive fans that mount under the HD - I tried that with a RAID 4 years ago. The fans become unbalanced aft

    • by serviscope_minor ( 664417 ) on Saturday April 05, 2008 @04:08PM (#22974726) Journal
      I'm about to lug a huge Wang hard drive out to the trash pickup on Monday - weighs over 100 pounds... still runs. Actually it uses removable platters but still...

      <Indiana Jones> IT BELONGS IN A MUSEUM!</Indiana Jones>
    • by kesuki ( 321456 ) on Saturday April 05, 2008 @04:13PM (#22974776) Journal
      And I had 5 fail this year; welcome to the law of averages. Note I own about 15 hard drives, including the 5 that failed.
    • Re: (Score:3, Informative)

      by Kjella ( 173770 )
      1.6GB drive: failed
      3.8GB drive: failed
      45GB drive: failed
      2x500GB drive: failed

      Still working:
      9GB
      27GB
      100GB
      120GB
      2x160GB
      2x250GB
      3x500GB
      2x750GB
      3x500GB external

      However, in all the cases they've been the worst possible. The 45GB drive was my primary drive at the time with all my recent stuff. The 2x500GB were in a RAID5, you know what happens in a RAID5 when two drives fail? Yep. Right now I'm running 3xRAID1 for the important stuff (+ backup), JBOD on everything else.
    • by STrinity ( 723872 ) on Saturday April 05, 2008 @04:26PM (#22974862) Homepage

      I'm about to lug a huge Wang
      There needs to be a -1 "Too Easy" moderation option.
    • For an opposing anecdote, my family had 3 fairly new drives fail within 3 months of each other - 1 Seagate (approx 1 year old), 1 Samsung (approx 6 months old) and 1 Western Digital (3 weeks old).

      During this period, I learned not to buy WD drives in Australia again - whereas Seagate and Samsung handle warranty returns locally, and each took about 3 days to get a new drive to me, WD wanted me to send the drive to Singapore, and estimated a 4-week turnaround. Fortunately, I was able to convince the retailer t
    • I already had 2 hard drives fail in 2 separate notebooks. They weren't old at the time either; one was maybe 16 months old and the other was 3 months old. I've only owned about 4 notebooks.

      Something about moving around and hard drives don't mix. (Can't wait for SSD.)
    • Re: (Score:2, Interesting)

      Am I the only one who wants to hear more about the drive that went ballistic?
    • by Xtravar ( 725372 )
      I agree with you almost 100%.
      The only time I had a hard drive die was at work... which is probably one of the worst places for it to happen.

      And our tech people couldn't recover data; I had to ask for the broken drive and recover it myself.

      And I was quite dicked because we get just one big partition and so the fragmentation rate was extremely high over my important documents.

      That's why:
      1. always partition everything
      2. never use Maxtor drives
      3. never buy Dell
  • by **loki969** ( 880141 ) on Saturday April 05, 2008 @03:35PM (#22974538)
    ...those that make backups and those that never had a hard drive fail.
  • by dpbsmith ( 263124 ) on Saturday April 05, 2008 @03:38PM (#22974554) Homepage
    If everyone knows how much a disk drive costs, and nobody can find out how long a disk drive really will last, there is no way the marketplace can reward the vendors of durable and reliable products.

    The inevitable result is a race to the bottom. Buyers will reason they might as well buy cheap, because they at least know they're saving money, rather than paying for quality and likely not getting it.
    • by piojo ( 995934 )

      The inevitable result is a race to the bottom. Buyers will reason they might as well buy cheap, because they at least know they're saving money, rather than paying for quality and likely not getting it.
      That's the description of a lemon market. However, I don't think it applies here, because brands gain reputations in this realm. If one brand of hard drives becomes known as flaky, people (and OEMs) will stop buying it.
    • by commodoresloat ( 172735 ) * on Saturday April 05, 2008 @04:56PM (#22975046)

      If everyone knows how much a disk drive costs, and nobody can find out how long a disk drive really will last, there is no way the marketplace can reward the vendors of durable and reliable products.
      And that may be the exact reason why the vendors are providing bad data. On the flip side, however, if people knew how often drives failed, perhaps we'd buy more of them in order to always have backups.
    • For the most part, buying a more expensive drive doesn't necessarily mean it's more reliable. The Google paper on the subject said that they saw no significant difference between the regular desktop drives and the pricey Fiber Channel drives.
  • Maybe they mean the MTBF for drives that are just on, but not being used. I've never put any stock into those numbers, because I've had too many drives fail to believe that they're supposed to be lasting 100 years. I've had 3 die in the last 3 years alone (all in my server, so probably getting more than average use, but still...)
    • by zappepcs ( 820751 ) on Saturday April 05, 2008 @03:56PM (#22974664) Journal
      The problem is that the MTBF is calculated on an accelerated lifecycle test schedule. Real-world use does not actually behave like the accelerated test expanded back out to 1 day = 1 day. It is an approximation, and prone to errors because of the aggregated averages created by the test.

      On average, a disk drive can last as long as the MTBF number. What are the chances that you have an average drive? They are slim. Each component in the drive, every resistor, every capacitor, every part has an MTBF. They also have tolerance values: that is to say they are manufactured to a value with a given tolerance of accuracy. Each tolerance has to be calculated as one component out of tolerance could cause failure of complete sections of the drive itself. When you start calculating that kind of thing it becomes similar to an exercise in calculating safety on the space shuttle... damned complex in nature.

      The tests remain valid because of a simple fact. In large data centers where you have large quantities of the same drive spinning in the same lifecycles, you will find that a percentage of them fail within days of each other. That means that there is a valid measurement of the parts in the drive, and how they will stand the test of life in a data center.

      Is your data center an 'average' life for a drive? The accelerated lifecycle tests cannot tell you. All the testing does is look for failures of any given part over a number of power cycles, hours of use etc. It is quite improbable that your use of the drive will match that of the expanded testing life cycle.

      The MTBF is a good estimation of when you can be certain of a failure of one part or another in your drive. There is ALWAYS room for it to fail prior to that number. ALWAYS.

      Like any electronic device for consumers, if it doesn't fail in the first year, it's likely to last as long as you are likely to be using it. Replacement rates of consumer societies mean that manufacturers don't have to worry too much about MTBF as long as it's longer than the replacement/upgrade cycle.

      If you are worried about data loss, implement a good data backup program and quit worrying about drive MTBFs.
      • Re: (Score:3, Insightful)

        Great post above. It also depends on how you count "failure." I've had external drives fail where the disk would still spin up, but the interface was the failure point. I took the disk out of the external enclosure and it worked just fine with a direct IDE (I know, who uses that anymore?) connection.

        If I were running a data-based business I'd count that as a "failure" since I had to go deal with the drive, but the HD company probably wouldn't since no data was permanently lost.
        • Re: (Score:3, Insightful)

          by BSAtHome ( 455370 )
          There is another failure rate that you have to take into account: the unrecoverable bit-read error rate. This is detected as an error in the upstream connection, which can cause the controller to fail the drive. An unrecoverable read fails the ECC mechanism and can under some circumstances be recovered by performing a re-read of the sector.

          The error rate is on the order of one error per 10^14 bits. Calculating this for a busy system reading 1 MByte/s gives you approx. 10^7 seconds for each unrecoverable read failure. Or, that mea
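As a quick back-of-the-envelope check of the arithmetic in the comment above, here is a minimal sketch (Python, illustrative only). The error rate of one per 10^14 bits and the 1 MB/s sustained read rate are the comment's own figures, not vendor specifications.

```python
# Back-of-the-envelope: how often an unrecoverable read error (URE) shows up,
# using the figures from the comment above (~1 error per 1e14 bits read, at a
# sustained 1 MB/s). Illustrative numbers, not vendor specs.

BITS_PER_URE = 1e14         # assumed unrecoverable bit-error rate
READ_RATE_BYTES_S = 1e6     # assumed sustained read rate, 1 MByte/s

seconds_per_ure = BITS_PER_URE / (READ_RATE_BYTES_S * 8)
days_per_ure = seconds_per_ure / 86400

print(f"~{seconds_per_ure:.1e} s between UREs (~{days_per_ure:.0f} days)")
# -> ~1.2e+07 s, roughly 145 days of continuous 1 MB/s reads
```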
      • by SuperQ ( 431 ) * on Saturday April 05, 2008 @06:14PM (#22975458) Homepage
        MTBF is NOT calculated for a single drive. MTBF is calculated based on an average for ANY pool size of drives.

        If you have 10,000 drives, and the failure is 1 in 1,000,000 hours, you will have a failure every 100 hours.

        Here's a good document on disk failure information:
        http://research.google.com/archive/disk_failures.pdf [google.com]
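For anyone who wants to play with the pool arithmetic described above, here is a minimal sketch (Python, illustrative only). The helper name is made up, and the 10,000-drive / 1,000,000-hour figures are just the example from the comment.

```python
# Pool arithmetic from the comment above: N drives rated at a given MTBF see,
# on average, one failure roughly every MTBF / N hours during their service
# life. The helper name and figures below are just the comment's example.

def pool_failure_interval_hours(mtbf_hours: float, num_drives: int) -> float:
    """Average hours between failures across the whole pool of drives."""
    return mtbf_hours / num_drives

print(pool_failure_interval_hours(1_000_000, 10_000))   # -> 100.0 hours

# Equivalently, expected failures per year for the same pool:
HOURS_PER_YEAR = 8760
print(10_000 * HOURS_PER_YEAR / 1_000_000)              # -> 87.6 failures/year
```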
    • by mollymoo ( 202721 ) * on Saturday April 05, 2008 @05:47PM (#22975328) Journal

      Maybe they mean the MTBF for drives that are just on, but not being used. I've never put any stock into those numbers, because I've had too many drives fail to believe that they're supposed to be lasting 100 years.

      If you think an MTBF of 100 years means the disk will last 100 years you're bound to be disappointed, because that's not what it means. MTBF is calculated in different ways by different companies, but generally there are at least two numbers you need to look at: MTBF and the design or expected lifetime. A disk with an MTBF of 200 000 hours and a lifetime of 20 000 hours means that 1 in 10 are expected to fail during their lifetime, or with 200 000 disks one will fail every hour. It does not mean the average drive will last 200 000 hours. After the lifetime is over, all bets are off.

      In short, the MTBF is a statistical measure of the expected failure rate during the expected lifetime of a device, it is not a measure of the expected lifetime of a device.
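A small sketch (Python, illustrative) of the distinction drawn above, using the comment's own example numbers (a 200,000-hour MTBF and a 20,000-hour design lifetime); the variable names are arbitrary.

```python
# MTBF vs. design lifetime, using the example numbers from the comment above.
# MTBF describes the failure *rate* during the design lifetime, not how long a
# drive lasts; with a roughly constant rate, the fraction expected to fail
# before the lifetime is up is about lifetime / MTBF.

MTBF_HOURS = 200_000       # rated MTBF (comment's example)
LIFETIME_HOURS = 20_000    # design/expected lifetime (comment's example)

fraction_failing = LIFETIME_HOURS / MTBF_HOURS
print(f"{fraction_failing:.0%} expected to fail within the design lifetime")
# -> 10%, i.e. the "1 in 10" from the comment

# With 200,000 such disks in service, failures per hour during the lifetime:
print(200_000 / MTBF_HOURS)   # -> 1.0
```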

  • warranties (Score:5, Insightful)

    by qw0ntum ( 831414 ) on Saturday April 05, 2008 @03:45PM (#22974602) Journal
    The best metric is probably going to be the length of warranty the manufacturer offers. They have financial incentive to find out the REAL mean time until failure in calculating the warranty.
    • by dh003i ( 203189 )

      The best metric is probably going to be the length of warranty the manufacturer offers. They have financial incentive to find out the REAL mean time until failure in calculating the warranty.
      They do provide "real" MTBF numbers. It's just MTBF isn't for what you think it's for. See my post explaining this.
      • by qw0ntum ( 831414 )
        Yes... we say the same thing (last paragraph). I know very well what MTBF means and how it's calculated. In your words, I put my stock in the warranty, because "that's what they're willing to put their money behind." The warranty is set so that most devices don't stop working until after the warranty period ends. This more accurately reflects the amount of time a drive lasts under normal use.

        I'm not saying that MTBF is a completely unreliable number. I'd imagine there is a correlation between higher M
      • by afidel ( 530433 )
        It's worse than your post implies because the manufacturers actually specify that drives be replaced every so often to get the MTBF rating. Basically the only thing an MTBF rating is good for is figuring out statistically what the chances are of a given RAID configuration losing data before a rebuild can be completed.
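As a rough illustration of the kind of RAID calculation alluded to above, here is a hedged sketch (Python). It assumes independent drive failures at a constant rate (an exponential model); the MTBF, array size and rebuild window are made-up example values, and in practice unrecoverable read errors usually dominate the risk rather than a second whole-drive failure.

```python
# Rough sketch of the kind of calculation the comment alludes to: given a
# per-drive MTBF, what is the chance a RAID 5 array loses a second drive
# while it is rebuilding after the first failure? Assumes independent
# failures with a constant rate (exponential model); the MTBF, array size
# and rebuild time below are illustrative assumptions, not vendor figures.
import math

def p_second_failure_during_rebuild(mtbf_hours: float,
                                    surviving_drives: int,
                                    rebuild_hours: float) -> float:
    rate = surviving_drives / mtbf_hours          # combined failure rate
    return 1 - math.exp(-rate * rebuild_hours)    # P(>=1 failure in window)

# Example: 8-drive RAID 5 (7 survivors during rebuild), 1,000,000-hour MTBF,
# 24-hour rebuild window.
print(f"{p_second_failure_during_rebuild(1_000_000, 7, 24):.4%}")
# -> about 0.017% per rebuild (ignoring unrecoverable read errors, which
#    in practice dominate the risk on large arrays)
```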
    • The best metric is probably going to be the length of warranty the manufacturer offers. They have financial incentive to find out the REAL mean time until failure in calculating the warranty.

      ASSuming anything approaching a significant fraction of the drives which fail during the warranty period are actually claimed. Otherwise a warranty is nothing more than advertising.
      I strongly suspect this is not the case and you are simply replacing one false metric with another.
    • Re: (Score:3, Insightful)

      by ooloogi ( 313154 )
      Warranties beyond about two years become largely meaningless for this purpose, because once a drive gets older, people often won't bother claiming warranty on what is by then such a small drive. The cost of shipping/transport is likely to be more than the marginal $/GB on a new drive.

      So in this way a manufacturer can get away with a long warranty, without necessarily incurring a cost for unreliability.
      • Except... (Score:3, Insightful)

        by absurdist ( 758409 )
        ...that by the time the drive fails beyond that warranty, the vendor more likely than not isn't going to have any drives that small in stock. So they'll replace it with whatever's on the shelf, which is usually an order of magnitude larger, at the very least.
  • put the 500GB drive into your bottom drawer ... the unused disk will break when thrown out by your great great grand kids - who will simultaneously wonder if you really did use storage of such tiny capacity.
  • What MTBF is for. (Score:5, Insightful)

    by sakusha ( 441986 ) on Saturday April 05, 2008 @03:51PM (#22974640)
    I remember back in the mid 1980s when I received a service management manual from DEC, it had some information that really opened my eyes about what MTBF was really intended for. It had a calculation (I have long since forgotten the details) that allowed you to estimate how many service spares you would need to keep in stock to service any installed base of hardware, based on MTBF. This was intended for internal use in calculating spares inventory level for DEC service agents. High MTBF products needed fewer replacement parts in inventory, low MTBF parts needed lots of parts in stock. Presumably internal MTBF ratings were more accurate than those released to end users.

    So anyway.. MTBF is not intended as an indicator of a specific unit's reliability. It is a statistical measurement to calculate how many spares are needed to keep a large population of machines working. It cannot be applied to a single unit in the way it can be applied to a large population of units.

    Perhaps the classic example is the old tube-based computers like ENIAC: if a single tube has an MTBF of 1 year but the computer has 10,000 tubes, you'd be changing tubes (on average) more than once an hour; you'd rarely even get an hour of uptime. (I hope I got that calculation vaguely correct.)
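The tube arithmetic above, written out as a small sketch (Python, illustrative); the no-failure probability at the end is an added assumption of independent, constant failure rates, not something stated in the comment.

```python
# The tube-computer arithmetic from the comment above: with a 1-year MTBF per
# tube and 10,000 tubes, the machine sees a tube failure roughly every 0.88
# hours. Assuming independent, constant failure rates (an exponential model),
# the chance of getting through a full hour with no failure is only ~1 in 3.
import math

TUBE_MTBF_HOURS = 365 * 24      # ~8760 hours per tube
NUM_TUBES = 10_000

machine_mtbf = TUBE_MTBF_HOURS / NUM_TUBES
p_full_hour = math.exp(-1 / machine_mtbf)       # P(no failure in 1 hour)

print(f"failure every {machine_mtbf:.2f} h; P(1 h uptime) = {p_full_hour:.0%}")
# -> failure every 0.88 h; P(1 h uptime) = 32%
```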
    • by dh003i ( 203189 )
      Good post, I think we were on the same wavelength, as I posted something very similar to that below.
      • Re:What MTBF is for. (Score:4, Informative)

        by sakusha ( 441986 ) on Saturday April 05, 2008 @04:09PM (#22974740)
        Thanks. I read your comment and got to thinking about it a bit more. I vaguely recall that in those olden days, MTBF was not an estimate, it was calculated from the service reports of failed parts. The calculations were released in monthly reports so we could increase our spares inventory to cover parts that were proving to be less reliable than estimated. But then, those were the days when every installed CPU was serviced by authorized agents, so data gathering was 100% accurate.
    • Re:What MTBF is for. (Score:4, Informative)

      by davelee ( 134151 ) on Saturday April 05, 2008 @04:22PM (#22974830)
      MTBFs are designed to specify a RATE of failure, not the expected lifetime. This is because disk manufacturers don't test MTBF by running 100 drives until they die, but rather by running, say, 10,000 drives and counting the number that fail over a period of some months. As drives age, the failure rate will clearly increase and thus the "MTBF" will shrink.

      long story short -- a 3 year old drive will not have the same MTBF as a brand new drive. And a MTBF of 1 million hours doesn't mean that the median drive will live to 1 million hours.
    • Re: (Score:3, Informative)

      by flyingfsck ( 986395 )
      That is an urban legend. Colossus and Eniac were far more reliable than that. The old tube based computers seldom failed, because the tubes were run at very low power levels and tubes degrade slowly, they don't pop like a light bulb (which is run at a very high power level to make a little visible light). Colossus for example was built largely from Plessey telephone exchange registers and telex machines. These registers were in use in phone exchanges for decades after the war. I saw some tube based exc
    • by jrumney ( 197329 )

      It cannot be applied to a single unit in the way it can be applied to a large population of units.

      This is the case with any statistic. They are very useful for predicting trends in a large enough population, but completely useless for predicting individuals' behaviour.

    • Re: (Score:3, Informative)

      by Bacon Bits ( 926911 )
      Exactly, it's a basic misunderstanding of what MTBF means.

      Let's say you buy quality SAS drives for your servers and SAN. They're Enterprise grade, so they have an MTBF of 1 million hours. Your servers and SAN have a total of 500 disks between them all. How many drives should you expect to fail each year?

      IIRC, this is the calculation:

      1 year = 365 days x 24 hours = 8760 hours per year
      500 disks * 8760 hours per year = 4,380,000 disk-hours per year
      4,380,000 disk-hours per year / 1,000,000 hours per disk = ~4.4 expected drive failures per year
  • by dh003i ( 203189 ) <dh003i@gmail. c o m> on Saturday April 05, 2008 @03:55PM (#22974654) Homepage Journal
    I think that a lot of people are misunderstanding MTBF. A HD might have an MTBF of 100 years. This doesn't mean that the company expects the vast majority of consumers to have that HD running for 100 years without problems.

    MTBF numbers are generated by running, say, thousands of hard drives of the same model and batch/lot, and seeing how long it takes before one fails. This may be a day or so. You then figure out how many total HD running hours it took before failure. If you have 1,000 HDs running, and it takes 40 hours before one fails, that's a 40,000 hr MTBF. But this number isn't generated by running, say, 10 hard drives, waiting for all of them to fail, and averaging that number.

    Thus, because of the way MTBF numbers are generated, they may or may not reflect hard-drive reliability beyond a few weeks. It depends on our assumptions about hard-drive stress and usage beyond the length of time before the 1st HD of the 1,000 or so they were testing failed. Most likely, it says less and less about hard-drive reliability beyond that initial point of failure (which is on the order of tens or hundreds of hours, not hundreds of thousands of hours or millions of hours!).

    To be sure, all-else equal, a higher MTBF is better than a lower one. But as far as I'm concerned, those numbers are more useful for predicting DOA, duds, or quick-failure; and are more useful to professionals who might be employing large arrays of HD's. They are not particularly useful for getting a good idea of how long your HD will actually last.

    HD manufacturers also publish an expected life-cycle for their HDs. But I usually put the most stock in the length of the warranty. That's what they're willing to put their money behind. Albeit, it's possible their strategy is just to warranty less than how long they expect 90% of HDs to last, so they can sell them cheaper. But if you've had a HD for longer than the manufacturer's published expected life, what they're saying is that you've basically got good value, and you'll probably want to have something else on hand and be backed up.
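A minimal sketch (Python, illustrative) of the estimation procedure described above; `estimated_mtbf_hours` is a made-up helper, and the 1,000-drive / 40-hour figures are just the comment's own example, not real test data.

```python
# A minimal sketch of how the comment above describes MTBF figures being
# produced: run a large batch of drives for a while, then divide accumulated
# drive-hours by the number of failures observed. Example numbers only.

def estimated_mtbf_hours(num_drives: int, hours_run: float, failures: int) -> float:
    """Accumulated drive-hours divided by observed failures."""
    return num_drives * hours_run / failures

# 1,000 drives, first failure after 40 hours -> 40,000-hour MTBF estimate
print(estimated_mtbf_hours(1_000, 40, 1))   # -> 40000.0
```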
    • Nope, MTBF is usually *calculated* and the number is just that - a number - it means fuck-all in real time. The numbers are used comparatively, to show the designers which potentially stressed components need to be looked at during the design phase. Eventually the numbers are mis-used by the marketing department to mislead the customers, but that is not the intent of the designers and is not the purpose of the MTBF calculations.
    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
  • by arivanov ( 12034 ) on Saturday April 05, 2008 @03:55PM (#22974660) Homepage
    Disk MTBF is quoted for 20C.

    Here is an example from my own servers. At 18C ambient, in a well cooled and well designed case with dedicated hard drive fans, the Maxtors I use for RAID1 run at 29C. My media server, which is in the loft with sub-16C ambient, runs them at 24-34C depending on the position in the case (once again, a proper high-end case with dedicated hard drive fans).

    Very few hard disk enclosures can bring the temperature down to 24-25C.

    SANs or high density servers usually end up running disks at 30C+ while at 18C ambient. In fact I have seen disks run at 40C or more in "enterprise hardware".

    From there on it is not amazing that they fail at a rate different from the quoted one. In fact I would have been very surprised if they did.
    • by 0123456 ( 636235 )
      From what I remember, the Google study showed that temperature made far less difference than had previously been believed (of course my memory may be past its MTBF).
    • by ABasketOfPups ( 1004562 ) on Saturday April 05, 2008 @04:10PM (#22974756)

      Google says that's just not what they've seen [google.com]. "The figure shows that failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at the very high temperatures is there a slight reversal of this trend."

      On the graph it's clear that 30-35C is best at three years. But up until then, 35-40C has lower failure rates, and both have much lower rates than the 15-30C range.

      • Re: (Score:3, Insightful)

        by drsmithy ( 35869 )

        However, Google's data doesn't appear to have a lot of points where temperatures get over 45 degrees or so (as is to be expected, since most of their drives are in a climate-controlled machine room).

        The average drive temperature in the typical home PC would be *at least* 40 degrees, if not higher. While it's been some time since I checked, I seem to recall the drive in my mum's G5 iMac was around 50 degrees when the machine was _idle_.

        Google's data is useful for server room environments, but I'd be hesitant

      • Re: (Score:2, Informative)

        by ooloogi ( 313154 )
        From the Google study, it would appear that there was a brand of hard drive that ran cool and was unreliable. If there's a correlation between brand/model/design and temperature (which there will be), then the temperature study may just be showing that up.

        To get a meaningful result, it would require taking a population of the same drive and comparing the effects of temperature on it.
    • by Jugalator ( 259273 ) on Saturday April 05, 2008 @04:13PM (#22974772) Journal
      I agree. I had a Maxtor disk that ran at something like 50-60 C and wondered when it was going to fail; I never really treated it as my safest drive. And lo and behold, after ~3-4 years the first warnings about bad sectors started cropping up, and a year later Windows panicked and told me to immediately back it up if I hadn't already, because I guess the number of SMART errors was building up.

      On the other hand, I had a Samsung disk that ran at 40 C tops, in a worse drive bay too! The Maxtor one had free air passage in the middle bay (no drives nearby), where the Samsung was side-by-side with the metal casing.

      So I'm thinking there can be some measurable differences between drive brands, and a study of this, along with its relationship to brand failure rates, would be most interesting!
      • Re: (Score:3, Informative)

        by drsmithy ( 35869 )

        On the other hand, I had a Samsung disk that ran at 40 C tops, in a worse drive bay too! The Maxtor one had free air passage in the middle bay (no drives nearby), where the Samsung was side-by-side with the metal casing.

        Air is a much better insulator than metal.

    • by afidel ( 530433 )
      Yeah, my datacenter is 23-24C and the hottest disk bays in my SAN average about 37C. I don't care, because my SAN is designed to lose an entire bay without losing data, and the manufacturer is responsible for warranty replacement parts. So far in 22 months of operation it's lost three drives out of 160, and two of those were basically DOA, with the other dying at about two months.
  • Didn't Google present data on their disk failure rates? How about other large purchasers? Who cares if the manufacturers don't report them. If you have some very large purchasers report them, it may be more useful information, anyway.
  • I would put the quotation marks around "add value" instead of adding them around "interpreting".

    They are obviously interpreting the numbers.

    How the hell they can be adding value is way beyond me.

    Adding price, maybe, but VALUE????
    • by drsmithy ( 35869 )

      How the hell can they be adding value is way beyond me.

      By having larger amounts of data and more skill in interpreting it.

  • by omnirealm ( 244599 ) on Saturday April 05, 2008 @04:14PM (#22974786) Homepage
    While we are on the topic of failing drives, I think it would be appropriate to include a warning about USB drives and warranties.

    I purchased a 500GB Western Digital My Book about a year and a half ago. I figured that a pre-fab USB enclosed drive would somehow be more reliable than building one myself with a regular 3.5" internal drive and my own separately purchased USB enclosure (you may dock me points for irrational thinking there). Of course, I started getting the click-of-death about a month ago, and I was unpleasantly surprised to discover that the warranty on the drive was only for 1 year, rather than the 3 year warranty that I would have gotten for a regular 3.5" 500GB Western Digital drive at the time. Meanwhile, my 750GB Seagate drive in an AMS VENUS enclosure has been chugging along just fine, and if it fails sometime in the next four years, I will still be able to exchange it under warranty.

    The moral of the story is that, when there is a difference in the warranty periods (i.e., 1 year vs. 5 years), it makes a lot more sense to build your own USB enclosed drive rather than order a pre-fab USB enclosed drive.
    • Did you check/confirm with Western Digital? I bought the My Book World edition. It was clearly written "3 year warranty" on the box, but when I registered it it only said 1 year. After raising a stink they changed my online registration warranty to 3 years.

      Needless to say, it's my last WD drive. Their service sucks.
  • Drive manufacturers take a new hard drive, run a hundred drives or so for some number of weeks, and measure the failure rate. Then they extrapolate that failure rate out to thousands of hours... So, let's say one in 100 drives fail in a 1000-hour test (just under six weeks). MTBF = 100,000 hours, or 11.4 years!

    To make this sort of test work, it must be run over a much longer period of time. But in the process of designing, building, testing and refining disk drive hardware and firmware (software), the

  • by gelfling ( 6534 ) on Saturday April 05, 2008 @04:22PM (#22974834) Homepage Journal
    But since 1981 I have had exactly zero catastrophic PC drive crashes. That's not to say I haven't seen some bad/relocated sectors, but hard failures? None. Granted that's only 20 drives. But in fact in my experience in PC's, midranges and mainframes in almost 30 years I have seen zero hard drive crashes.
  • by Kupfernigk ( 1190345 ) on Saturday April 05, 2008 @04:39PM (#22974942)
    MTBF is something many people confuse with MTTF (mean time to failure), which is what is relevant in predicting the life of equipment. It needs to be stated clearly that MTBF applies to populations; if I have 1000 hard drives with an MTBF of 1 million hours, I would on average expect one failure every thousand hours. These are failures rather than wearouts, which are a completely different phenomenon.

    Anecdotal reports of failures also need to consider the operating environment. If I have a server rack, and most servers in the rack have a drive failure in the first year, is it the drive design or the server design? Given the relative effort that usually goes into HDD design and box design, it's more likely to be due to poor thermal management in the drive enclosure. Back in the day when Apple made computers (yes, they did once, before they outsourced it) their thermal management was notoriously better than that of many of the vanilla PC boxes, and properly designed PC-format servers like the HP Kayaks were just as expensive as Macs. The same, of course, went for Sun, and that was one reason why elderly Mac and Sparc boxes would often keep chugging along as mail servers until there were just too many people sending big attachments.

    One possibly related oddity that does interest me is laptop prices. The very cheap laptops are often advertised with optional 3 year warranties that cost as much as the laptop. Upmarket ones may have three year warranties for very little. I find myself wondering if the difference in price really does reflect better standards of manufacture so that the chance of a claim is much less, whether cheap laptops get abused and are so much more likely to fail, or whether the warranty cost is just built into the price of the more expensive models because most failures in fact occur in the first year.

  • Hard drives have been becoming less and less reliable as densities increase. Seagate, WD, Hitachi, Maxtor, Toshiba, heck, they all die, often sooner than their warranties are up. They're mechanical devices, for crying out loud. So here's a bit of good advice: If you really care about your data, use a RAID array with redundancy (RAID 1 or 5). It will cost a bit more, but you'll sleep better at night. Thank you all for your kind attention. That is all.
  • by oren ( 78897 ) on Saturday April 05, 2008 @05:24PM (#22975198)
    Disk reliability metrics are much more science than myth. Like all science, this means you actually need to put some minimal effort into understanding them. Unlike myths :-)

    Disks have two separate reliability metrics. The first is their expected lifetime. In general, disk failure follows a "bathtub distribution": drives are much more likely to fail in the first few weeks of operation. If they make it past this phase, they become very reliable - for a while anyway. Once their expected lifetime is reached, their failure rate starts climbing steeply.

    The often quoted MTBF numbers express the disk reliability during the "safe" part of this probability distribution. Therefore, a disk with an expected lifetime of, say, 4 years can have an MTBF of 100 years. This sounds theoretical until you consider that if you have 100 such disks, you can expect that on average one of them will fail each year.

    People running large data warehouses are painfully aware of these two separate numbers. They need to replace all "expired" disks, and also have enough redundancy to survive disk failures in the duration.

    The article goes so far as to state this:

    "When the vendor specs a 300,000-hour MTBF -- which is common for consumer-level SATA drives -- they're saying that for a large population of drives, half will fail in the first 300,000 hours of operation," he says on his blog. "MTBF, therefore, says nothing about how long any particular drive will last."

    However, this obviously flew over the head of the author:

    The study also found that replacement rates grew constantly with age, which counters the usual common understanding that drive degradation sets in after a nominal lifetime of five years, Schroeder says.

    Common understanding is that 5 years is a bloody long life expectancy for a hard disk! It would take divine intervention to stop failures from rising after such a long time!
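To make the "bathtub distribution" mentioned above concrete, here is an illustrative sketch (Python) of a bathtub-shaped hazard rate built from a decreasing Weibull term (infant mortality), a constant term (useful life) and an increasing Weibull term (wear-out). Every parameter is invented purely for illustration and is not based on any real drive data.

```python
# Illustrative sketch (not vendor data) of the "bathtub" failure-rate shape:
# high infant mortality, a low flat rate during the useful life, then rising
# wear-out. Modeled as the sum of a decreasing Weibull hazard (shape < 1), a
# constant hazard, and an increasing Weibull hazard (shape > 1); all
# parameters below are made up for illustration.

def bathtub_hazard(age_years: float) -> float:
    infant = 0.10 * 0.5 * age_years ** (0.5 - 1)   # decreasing (shape 0.5)
    random_rate = 0.02                             # constant "useful life" rate
    wearout = 0.002 * 4 * age_years ** (4 - 1)     # increasing (shape 4)
    return infant + random_rate + wearout          # failures per drive-year

for age in (0.1, 0.5, 1, 2, 4, 6, 8):
    print(f"age {age:>4} y: hazard ~{bathtub_hazard(age):.3f} /drive-year")
# high at the start, dips around year 1, then climbs steeply with age
```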
  • by AySz88 ( 1151141 ) on Saturday April 05, 2008 @05:32PM (#22975254)
    MTBF is only valid during the "lifetime" of a drive. (For example, "lifetime" might mean the five years during which a drive is under warranty.) Thus, the MTBF is the mean time before failure if you replace the drive every five years with other drives with identical MTBF. Thus the 100-some year MTBF doesn't mean that an individual drive will last 100+ years, it means that your scheme of replacing every 5 years will work for an average time of 100+ years.
    Of course, I think this is another deceptive definition from the hard drive industry... To me, the drive's lifetime ends when it fails, not "5 years".
    Source: http://www.rpi.edu/~sofkam/fileserverdisks.html [rpi.edu]
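A tiny Monte Carlo sketch (Python, illustrative) of the replace-every-lifetime interpretation described in the comment above. The flat per-period failure probability and the uniform failure time within a period are simplifying assumptions, not how vendors actually model drives.

```python
# Simulation of the interpretation above: a drive with a 100-year MTBF and a
# 5-year service life is replaced (with an identical drive) every 5 years;
# the average time until a failure actually hits you comes out near the MTBF.
# Model: flat ~5% failure chance per 5-year period, failure time uniform
# within the period -- illustrative assumptions only.
import random

MTBF_YEARS = 100
LIFETIME_YEARS = 5
P_FAIL_PER_PERIOD = LIFETIME_YEARS / MTBF_YEARS   # ~5% per service period

def years_until_first_failure(rng: random.Random) -> float:
    t = 0.0
    while True:
        if rng.random() < P_FAIL_PER_PERIOD:
            return t + rng.uniform(0, LIFETIME_YEARS)   # failure mid-period
        t += LIFETIME_YEARS                             # replace and continue

rng = random.Random(0)
trials = [years_until_first_failure(rng) for _ in range(100_000)]
print(sum(trials) / len(trials))   # -> roughly 97-100 years, close to the MTBF
```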
  • Was this even a question? I mean, did anybody actually believe the claims from the hard drive manufacturers?
  • People say, 'Tape is kind of boring.' Well, I say go in and tell your customer that you have lost their back-up tapes and you'll see excitement pretty quickly.

  • by fluffy99 ( 870997 ) on Sunday April 06, 2008 @02:54AM (#22978074)
    To the guys who claim they've never lost a drive: you've had what, maybe 3 or 4? I deal with several large RAIDs, encompassing a few hundred drives and running 24/7. The power and cooling are very tightly controlled. Looking at our statistics, we have about a 5% failure rate for drives within the first year, and about 10% over four years. SCSI drives seem to last longer than SATA drives, but they are also much more expensive. The MTBF numbers from the manufacturers are total BS. The best number to go by is the warranty, because that's what matters to the manufacturer. Depending on the expected failure rate of a particular model and the profit margin, they set the warranty period to minimize the number of replacements and still be able to make a profit. Some models might have a 5% or even 10% warranty replacement rate.
